Colin Rees, IT director at Domino's Pizza, and Josko Grljevic, IS director of Thetrainline.com, reveal how transforming their IT departments transformed their businesses.
Two weeks into the post, Rees found himself in the office before the rest of his IT team. As he walked past the Domino’s datacentre, he was alarmed to hear a beeping noise coming from the servers.
“It took me 20 minutes to work out how to get in. It was 7am and there was no one else in the office,” he told a meeting of IT leaders.
Once inside, it was clear that the servers that ran the business were shutting themselves down one by one. One of the power circuits had died, but Rees had no idea which one.
“I was crawling around the server room on my hands and knees trying to work out which power sockets were live, plugging things in and unplugging them,” he said.
Domino’s financial director walked in through the door, wanting to know why his e-mail wasn’t working.
“I had no idea what he thought when he saw his new IT director on his hands and knees crawling across the floor,” said Rees, “but I did not have any problem getting funding after that.”
Rees was speaking to IT leaders at a meeting of Computer Weekly’s 500 Club, along with Josko Grljevic, IS director of Thetrainline.com, and John Rakowski, advisor and analyst at Forrester Research.
Businesses are advised to carry out a 90-day health check on their datacentres, says Forrester analyst John Rakowski.
CIOs should ask themselves whether their datacentres are able to respond to the needs of the rest of the business.
It is, he says, the age of the customer, and if IT departments cannot respond quickly, the business will look elsewhere for solutions.
- Assess how you manage capacity. Do you use a spreadsheet or something more sophisticated?
- Evaluate datacentre infrastructure management (DCIM) solutions. The technology is quickly becoming an essential requirement for datacentres.
- If your infrastructure is ageing, model the costs of replacing it, and the costs of the services you are providing.
Source: Forrester Research
For Rees, it was clear that Domino’s needed to do something radical to transform its datacentre. He floated the idea of outsourcing with the datacentre staff. They were sceptical: no outsourcer would care about the technology as much as Domino’s own IT staff did. They had a point, said Rees.
Domino’s relies on its datacentre to run its web-based pizza ordering service for its 730 franchises in the UK, Ireland and Germany.
Ensuring that the website was always available, particularly between the peak ordering hours of 7pm to 9pm, was critical. But during the first two weeks, the website was down three times, Rees told the group.
“I remember thinking, 'what have I done, what have I let myself in for here?',” he said.
Domino’s IT systems simply had not kept pace with the company’s rapid growth rate. When Rees joined Domino’s in September 2010, the business was opening one new store a week and online orders were growing by 30% to 40% a year.
It became clear, said Rees, that maintaining an in-house datacentre was not one of the company's core competencies. “Our people care passionately about the service they deliver, but caring is not enough, they are not always the best people to deliver the service,” he said.
Rees and his team began the long process of selection and negotiation with potential outsourcing partners. He decided to outsource everything apart from the pizza company’s strategically important core software.
“I wanted to make sure my team kept the core IT around our solution. We decided to outsource everything but the application,” he said.
Rees opted to run the core software on physical servers, for performance reasons, but to put everything else on virtual servers.
And he opted for a hybrid cloud model, to allow Domino’s to pull in extra capacity and storage from the cloud when needed.
Rees and his team faced some difficult challenges. Domino’s had grown rapidly as a company, and, as is the case in many organisations, the IT systems had not been fully documented.
People who had designed key systems had left the company, taking their inside knowledge with them. “We had lost a lot of that enterprise memory,” said Rees.
Domino’s decided to transfer services over to its outsourced datacentre gradually, testing them and documenting them as it went.
“We moved the lower priority services first, so our team could build confidence with our partners, and build confidence with the technology,” he said.
Rees warned the business to expect difficulties and interruptions during the transition period. In the event, everything went smoothly – apart from a problem with Domino’s most important service.
The website suffered performance issues during peak ordering periods, but the team had been unable to reproduce the problems during testing.
“We realised the only way we could fix this was by observing it in a production environment,” he said.
Rees decided to find a way to "buy time" to keep the live website running long enough for the IT team to identify the source of the problem.
“We were able to double the capacity of our website overnight. We told our outsourcing partner on Thursday evening that we wanted to double the number of servers, and by Friday morning they were all there,” said Rees.
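The article does not say which provider or API Domino’s used, but the hybrid-cloud burst Rees describes can be sketched as a simple sizing rule: measure how far average server load has climbed above a target, and request enough extra cloud instances to bring it back down. The numbers and the 50% target below are invented for illustration.

```python
import math

def burst_capacity(on_prem_servers: int, current_load: float,
                   target_load: float = 0.5) -> int:
    """Return how many extra cloud instances to request so that average
    per-server load falls back to the target (a hypothetical sizing rule)."""
    if current_load <= target_load:
        return 0  # already within target, no burst needed
    # Total work is servers * load; find the fleet size that brings average
    # load down to the target, then burst the difference into the cloud.
    needed = math.ceil(on_prem_servers * current_load / target_load)
    return needed - on_prem_servers

# Fully loaded servers with a 50% target means doubling the fleet -
# the overnight change Rees describes:
print(burst_capacity(20, 1.0))  # -> 20 extra cloud instances
```

The appeal of the hybrid model is that the burst is temporary: once the peak passes, the extra instances are released and the cost stops.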
Transferring the datacentre to a hybrid cloud service has helped Domino’s improve the reliability of its e-commerce site dramatically, said Rees.
In 2010, the site had 25 hours of downtime. Since the migration in March last year, it has been running at 99.999% availability, with virtually no downtime.
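The jump from 25 hours of downtime to “five nines” is easy to quantify: 99.999% availability allows only a few minutes of downtime per year.

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def availability(downtime_hours: float) -> float:
    """Availability as a percentage, given annual downtime in hours."""
    return 100 * (1 - downtime_hours / HOURS_PER_YEAR)

# 2010: 25 hours of downtime is roughly 99.71% availability
print(round(availability(25), 2))

# Five nines permits only about 5.3 minutes of downtime per year
five_nines_downtime = HOURS_PER_YEAR * (1 - 0.99999)
print(round(five_nines_downtime * 60, 1))
```

In other words, the migration cut the downtime budget by a factor of nearly 300.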
Without moving to outsourcing, the number of internal datacentre staff would probably have had to double just to keep pace with Domino’s growing online business.
More importantly, the move has freed up Domino’s infrastructure specialists to work on added-value projects, rather than spending their time simply keeping the datacentre running.
“All the team members that were building servers and delivering switches are now able to focus on delivering value to the business,” said Rees.
Over the past year, the company has rolled out apps for the iPhone, Android and Windows Phone 7, to allow people to order pizzas from their mobile phones.
And the IT team is half way through the deployment of a major e-commerce system.
1. Use infrastructure-as-a-service (IaaS) to stay on top of your workloads
Most people assume that cloud computing generates a return on investment by turning capital expenditure into operational expenditure, said Forrester analyst John Rakowski.
But the real benefit of cloud computing is the ability it gives businesses to manage fluctuating demands.
He advises businesses to move to the IaaS cloud computing model, to speed up provisioning of new IT services.
Today, if a customer wants a new server, IT can deliver one in three days, or even a day if the request does not have to go through a change approval board.
“It's too slow,” said Rakowski. “IaaS is all about speeding it up in a controlled, orchestrated way.”
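Rakowski names no particular provider, but the contrast he draws is between a ticket that waits days for human approval and a provisioning call that completes in seconds. The toy in-memory client below is purely illustrative; it mimics the shape of an IaaS API rather than any real one.

```python
import uuid

class IaasClient:
    """Toy in-memory stand-in for an IaaS provider API (hypothetical - the
    talk names no provider): provisioning is a single call, not a ticket."""

    def __init__(self):
        self.instances = {}

    def provision(self, size: str = "small") -> str:
        """Create a 'server' and return its ID - seconds, not days."""
        instance_id = str(uuid.uuid4())
        self.instances[instance_id] = {"size": size, "state": "running"}
        return instance_id

    def terminate(self, instance_id: str) -> None:
        self.instances[instance_id]["state"] = "terminated"

iaas = IaasClient()
server = iaas.provision("large")
print(iaas.instances[server]["state"])  # -> running
```

The “controlled, orchestrated” part Rakowski mentions is what sits on top of a call like this: quotas, approval policies and automation, rather than a manual change board.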
2. Evaluate converged infrastructure
Today's IT has too many moving parts and takes too long to deploy, said Rakowski.
One way to reduce that is to deploy converged infrastructure, which most major suppliers are now developing. Cisco, for example, has its Unified Computing System, while IBM offers PureFlex.
All of them pool server, network and storage resources, and optimise them to make them easier to manage, said Rakowski.
The technology is effective, though organisations can run into difficulties if they want to transfer legacy applications or mission-critical applications.
“Converged infrastructure works well for virtual IT environments, private cloud environments and back office applications, but be wary when you are looking at legacy applications or mission-critical applications,” he said.
3. Explore containers and modular datacentres
“Eventually, if you are responsive to the needs of your customers, you will run out of space and power. You will have to evaluate new facilities,” Rakowski told Computer Weekly’s 500 Club.
He said businesses should consider modular datacentre systems, which allow companies to enlarge their datacentres as they go, while at the same time keeping up to speed with the latest datacentre technology.
Major cloud providers, such as Amazon, Google and Microsoft, buy in shipping containers that are fully configured with server, storage and networking components.
For smaller organisations, modular datacentres are a more cost-effective option. “Think of Lego building bricks. Effectively, they are configured modules, whether that is server, storage or network, that you can build together,” he said.
4. Use DCIM solutions to monitor your datacentre
Over the past couple of decades, organisations have monitored the performance of their datacentres inefficiently.
Management systems generally monitor individual functions separately. There are different systems, for example, to monitor servers, hardware and power consumption.
And datacentre managers frequently rely on simple tools such as spreadsheets to tell them what they need to know, said Rakowski.
Datacentre infrastructure management (DCIM) technology brings together physical monitoring of the datacentre with facilities monitoring and infrastructure monitoring. It is able to analyse and report on server utilisation, power, memory and process use in real time.
“It should be able to display that information on a single pane of glass so my CIO can walk up to that screen and understand what is going on in that datacentre,” he said.
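The “single pane of glass” idea is essentially an aggregation job: pull the separate server, facilities and power feeds into one summary view. The sketch below is a toy illustration of that, not a real DCIM product’s API; the metric names and figures are invented.

```python
def single_pane(server_metrics: list, facility_metrics: dict) -> dict:
    """Combine infrastructure and facilities feeds into one summary view -
    a toy illustration of what a DCIM dashboard aggregates (hypothetical)."""
    avg_util = sum(s["cpu_util"] for s in server_metrics) / len(server_metrics)
    it_power = sum(s["power_w"] for s in server_metrics)
    cooling = facility_metrics["cooling_w"]
    return {
        "avg_cpu_util": round(avg_util, 2),
        "it_power_w": it_power,
        "cooling_power_w": cooling,
        # PUE: total facility power divided by IT power
        "pue": round((it_power + cooling) / it_power, 2),
    }

servers = [{"cpu_util": 0.30, "power_w": 400},
           {"cpu_util": 0.70, "power_w": 450}]
view = single_pane(servers, {"cooling_w": 600})
print(view["pue"])  # -> 1.71
```

The value of combining the feeds is exactly what Rakowski describes: a ratio such as PUE only exists once the server and facilities numbers sit in the same view.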
As well as its own site, Thetrainline.com hosts websites for a number of the UK train operating companies, as well as travel booking sites for large businesses.
Thetrainline.com signed an outsourcing deal for its infrastructure and application support services with Capgemini 10 years ago.
When the contract came up for renewal three years ago, Thetrainline’s IT department came under pressure from the business to make substantial cuts.
Technology and people
Thetrainline began working with Capgemini on a new datacentre strategy.
The technology was the least difficult element, according to IS director Josko Grljevic, although, he says, it caused some sleepless nights. “We took our traditional physical server estate in our Rotherham datacentre. We virtualised pretty much everything in production,” he said.
The people strategy was more challenging. The company employed two teams in Mumbai and London, working on infrastructure and applications support. “We ended up moving the majority of Capgemini operations and support to Bangalore and Mumbai,” he said.
That meant building a new 20-strong application support team in Bangalore from scratch, taking people who had no idea what the application stack was about and training them up to be effective support staff, said Grljevic.
The project, which took almost nine months to complete and went live in May last year, produced “astronomical” savings, according to Grljevic.
The project also gave Thetrainline.com a better understanding of its IT costs, said Grljevic.
The IT department was able to take each element of IT – networking, storage, power, air-conditioning, and so on – and work out the cost of a standard processor and memory building block.
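The costing exercise Grljevic describes amounts to spreading each element of annual IT cost over the number of standard building blocks it supports. The figures below are invented purely to show the shape of the calculation.

```python
def cost_per_block(annual_costs: dict, blocks: int) -> float:
    """Spread each element of annual IT cost over standard
    processor-and-memory building blocks (figures are invented)."""
    return sum(annual_costs.values()) / blocks

costs = {
    "networking": 120_000,
    "storage": 200_000,
    "power": 80_000,
    "air_conditioning": 60_000,
}
print(cost_per_block(costs, 115))  # -> 4000.0 per block per year
```

Once a per-block figure like this exists, the business can price a new service in blocks rather than arguing over opaque infrastructure totals, which is what gives the “better understanding of IT costs” the article mentions.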
“Our operational cost cut was significant and our availability improved significantly because we no longer had single instances of software on physical machines,” he said.
“We had a big transformation programme and we delivered substantial cost savings. The business said, 'excellent, thank you very much, we want more',” he said.
Thetrainline.com turned to the cloud for answers. The company already had experience of running commodity services in the cloud, including its human resources service.
Grljevic began a project to move more IT services into the cloud. He started with Thetrainline’s software testing systems as a “quick win”.
“We had 10 to 12 test integration environments. For lack of a better word, they were flaky,” he told the group.
“Regression teams were tearing their hair out because testing was such a complex environment. And developers were fed up because whenever they tried to do a fix, the service was not available,” he said.
In six months, Thetrainline.com moved from 10 to 12 flaky test environments to 16 fully working environments running on 380 servers.
Productivity has increased dramatically, and support costs are much lower, said Grljevic.
“I now have a testing environment that is bigger than my production environment, but it is run by just four support people,” he said.
Driving down costs is Grljevic’s next priority. The team is beginning to question whether it can do things more cheaply by deploying alternative software.
“We are asking, 'why we are using VMware?'. It is expensive in the scheme of things. 'Why don’t we move to Hyper-V?' It has all the functionality we need. 'Why are we using that SAN?',” he asks.
More importantly, the project has given the IT department the ability to develop software much more quickly, which is helping Thetrainline.com move more rapidly into new areas of business.
“I love pressure,” said Grljevic. “It drives us to do things differently, it really does.”
Cloud computing is developing rapidly, but there is still some way to go before companies will feel comfortable putting all of their systems in the cloud.
There are still some nagging concerns about the security of large cloud providers, said Josko Grljevic, IS director of Thetrainline.com.
One concern is that cloud providers such as Amazon or Microsoft Azure could become a target for hackers seeking to attack multiple large clients at once.
“Have any of these guys been broken into? We can’t get an answer to that,” he said.
Making sure sensitive data is deleted once you leave a cloud service provider is another challenge, said Colin Rees, IT director of Domino's Pizza.
“You know internally if you delete data it is gone. If you have a cloud provider, how do you know your data is gone, that it is not sitting around on back-up servers when you leave?” he said.
However, the technology is moving so quickly that these problems are likely to be solved sooner rather than later.
“I can’t see any reason why our core web services could not be operating completely in the cloud,” said Rees.
Grljevic agreed: “I don’t think there is any issue with switching your production systems to the cloud.”
One obvious deterrent is cost. Cloud providers charge for capacity through peaks and troughs alike, and paying for the troughs is expensive, said Grljevic.
“If you are a small organisation with a couple of web servers, it works out okay, but when you add the full complexity into the cloud, it is probably a heck of a lot cheaper to build it internally,” he said.
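The talk gives no figures, but the trade-off Grljevic describes can be sketched with invented numbers: owning enough servers to cover the peak, versus paying a cloud rate for average use. For a small, bursty estate the cloud wins; for a large, steadily loaded one, building internally can be cheaper. All prices below are hypothetical.

```python
HOURS_PER_YEAR = 8760

def annual_costs(peak: int, avg: int, capex_per_server: float,
                 cloud_per_server_hour: float) -> tuple:
    """Compare owning for the peak against paying the cloud for average
    use. Returns (own_cost, cloud_cost); all figures are invented."""
    own = peak * capex_per_server                       # build for the peak in-house
    cloud = avg * cloud_per_server_hour * HOURS_PER_YEAR  # pay for average use
    return own, cloud

# A couple of bursty web servers: the cloud wins
print(annual_costs(peak=4, avg=2, capex_per_server=3000,
                   cloud_per_server_hour=0.50))   # (12000, 8760.0)

# A large, steadily loaded estate: building internally wins
print(annual_costs(peak=200, avg=180, capex_per_server=3000,
                   cloud_per_server_hour=0.50))   # (600000, 788400.0)
```

The crossover depends on how spiky the demand is: the wider the gap between peak and average, the more the pay-per-use model pays off, which is Grljevic’s point about small web estates versus full production complexity.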
But Grljevic is optimistic that competition will bring prices down. The globalisation of cloud services will make it possible for cloud providers to even out peak and trough demands, bringing prices down, he said.
For John Rakowski, analyst at Forrester, the barriers are not so much technical as cultural. “It's about people accepting cloud and being open to cultural change. That is the barrier,” he said. “It’s a completely different model. If you are not internet-based now, it is going be quite a painful transition.”