
Interview: Rolf Krolke, CIO, The Access Group

We talk to The Access Group’s CIO about integration and ongoing management of legacy systems in an extremely acquisitive company, and the worldwide storage refresh he’s overseeing as part of that process

With an average of around 10 acquisitions a year since 2017, The Access Group got good at integrating disparate systems into its IT stack.

But it also ended up with a very disparate storage infrastructure and multiple difficulties when it came to maintaining it.

In this interview, we talk to Rolf Krolke, APAC regional technology director for The Access Group. It is the largest UK-headquartered software company, with a valuation of nearly £10bn and 7,500 employees, providing business applications to 100,000 organisations worldwide.

We talked to Krolke as he embarks on a storage refresh, which will see numerous legacy storage suppliers’ hardware replaced with Pure Storage all-flash storage arrays, procured on an as-a-service basis, and with plans to use the company’s Enterprise Data Cloud data management platform, as well as its Portworx container management environment.

The project will see storage consolidated to 10 datacentres globally, with total capacity running to tens of petabytes.

We caught up with Krolke at Pure’s Accelerate event in Las Vegas last week and asked him about:

  • The key challenges he faces as a technology director;
  • How a company like The Access Group integrates multiple disparate systems during intense mergers and acquisitions (M&A) activity;
  • The company’s new Access Evo platform;
  • The drivers for the storage refresh project;
  • How moving to a modern, single-supplier storage environment will impact skills in the organisation.

As a technology director, what are the biggest challenges you face?

Because we’ve grown through M&A, we end up with lots of different infrastructure, lots of different-sized companies, at different maturity levels.

[That means] we have data everywhere, [so the challenge is] how do we bring that together? And how do we bring that across into our reference architecture while not impacting that company’s performance, revenue, stability and availability?


I think the other one is around the integration of the different platforms we have [gained], again, through M&A. Normally, what would happen is you build something and you organically grow into it, but we very much grow inorganically. Our M&A activity is crazy.

We’re a very, very acquisitive company, which is very cool, but there’s a lot of work around the integration side. So, it’s really around keeping up with what we need to do to make sure that’s a smooth transition and we don’t impact that whole process, but then also looking at how we get that into our standardised reference architecture.

We manage that integration very closely because what we don’t want to do is go, “Oh, congratulations, we’ve now purchased you. We’re going to move everything,” because we obviously can’t break it.

What is the template for how you incorporate the IT you gain in these mergers and acquisitions?

They’re all different sizes. I guess what we have is a template of the bare minimum we need to do.

That means simple things like migrating their public cloud accounts under our public cloud accounts, so we get the benefit of our committed spend with AWS [Amazon Web Services] and Microsoft. We roll out our security tooling, so we know that everything is secure. That’s probably the bare minimum we do.

But then, each of them is done on a case-by-case basis, and depends on how it is architected, what development cycles look like, the pipelines, the platforms used.

We also do an assessment to see if there is a platform we don’t want to use, like if we’re buying companies that use Alibaba or one of the embargoed ones.

But we have a simple, standard template for what we do as a bare minimum, and then we spend up to 18 months, depending on the size, working with them to bring them in.

Can you tell us more about Access Evo? How will new infrastructure facilitate what you’re able to do on the Evo level?

What it will allow us to do is, especially as [Pure] Fusion comes out, bring the AI [artificial intelligence] closer to the data. So, rather than relying on disparate systems, we can start to look at putting AI pods inside the datacentres where the data will be, and run those queries from there.

So that’s potentially one benefit of putting this on this data layer. I think that is the main one, as well as managing that data and its size and the scale, because as customers start using it, they’re going to be generating more data, they’re going to be putting more data in there.

We’ve got some really interesting use cases, especially in Australia, where we are launching it on one of our payroll platforms. It’ll be the first time we’ve ever interacted with a customer’s data, because they run it on-premise. So, it might be sitting on a traditional server or laptop, and then with Evo, they’re now going to hook that up into our environment and they’ll be able to do analytics on their data through Copilot within Evo.

What drove the storage refresh project? Were there some limitations in your existing scenario?

Yes, there were limitations.

One of the key drivers, apart from centralised management, was the lack of non-disruptive upgrades. Monthly patching, everyone’s scared to do it. No one wants to patch because you never know what you’re going to find.

And many times we’ve tried to patch infrastructure, and had issues and caused outages. But what we loved, and what kind of really sold us on Pure, is the fact that we can do non-disruptive upgrades. And we’ve put that through its paces. I did a non-disruptive upgrade in the middle of production on a Monday, and no one knew.


Previously, doing infrastructure upgrades was a long process. Even with just standard firmware and patching, it had to be a planned exercise using lots of people power over a long period of time.

It was due to the mixed infrastructure – and that many suppliers across storage and compute don’t always make it easy.

And we were always two or three versions behind because of the criticality of those patches we needed to do. And depending on when you did them, sometimes you needed to do the first one, then the next one, then the next one. And we can’t really turn our SANs [storage area networks] off.

We’re in the process of rolling out Pure. We’ve been on that journey for about 12 months, and over the next few years, we will put Pure in as we take out the existing storage that we either purchased years ago or acquired through M&A. We can’t just do a rip and replace because we need to run the assets down.

So far, we’ve deployed FlashArray X and the XL series. We’re now looking at Portworx as we finalise our container strategy and what container platforms we want to use.

We are also looking at Cloud Block Store, especially for our [Microsoft] Azure and AWS environments, to see how we can then bring the data we have there into our Pure1 environment to manage it. And then I’m really, really interested in Fusion, and how we can look at using that to move workloads around depending on the outcome we’re trying to achieve.

Have you been able to measure any benefits so far?

We have significantly reduced the number of outages and service-impacting events we’ve had, and we have reduced our total cost of ownership, though I don’t have the numbers to hand.

As the storage refresh progresses, what does reskilling look like, if there is any, between what your storage people do now and what they’ll be doing in future?

The one great thing about deploying Pure is [it means] the storage guys can go and do other things, because the total management overhead is lower.

We used to have these silos, like platform or infrastructure teams. This is breaking down those silos.

We will have multi-admins. So, they will administer storage, compute, the whole platform, everything inside the datacentre. They’re going to cover the whole lot. A lot of them do now. So I don’t think we need that storage person.

But I think our traditional infrastructure teams need to come up to speed with the skills that a CloudOps or a DevOps team would have now. They need to know networking, storage, compute and databases – all the things that anyone building in the public cloud, and those infrastructure teams, need.

I think that’s the upskilling component that’s going to be needed.
