When buying a car, what you do depends on whether you are looking for speed or reliability, not to mention the size of your wallet. And you’ll also need to think about petrol consumption, insurance, replacement part costs, resale value…
Deciding which servers, storage and infrastructure you'll need to run your business-critical solutions requires the same kind of thought process. Is it speed or reliability the business most needs, and if people choose 'speed', exactly how speedy? And what are the implications for the IT department of giving them that speed?
Sadly, according to James Governor, principal analyst and co-founder of analyst firm Redmonk, there still aren’t enough of these kinds of discussions going on. ‘It’s the age-old problem. Systems people worry about systems uptime and business people worry about business uptime,’ he points out.
And that leads to a lack of joined-up decisions. 'If line of business is in charge, as is often the case, it may be looking for the best application, but not thinking about the best architecture. What's needed is joined-up purchasing and management,' suggests Governor.
Only by battering down the barriers between business and IT (and between different IT projects) will the best business solution be worked out. Technology should be bottom of any to-do list for running a business-critical solution. After all, you could deploy a state-of-the-art application, server, storage and networking and not only pay a hefty wad, but still miss out on the business value jackpot.
‘It’s a classic case of leaping to the answer rather than looking at the question,’ says Peter Critchley, strategy director at technology integrator Morse.
The question is: what exactly does the business want to achieve?
IT people are still too blinded by vendor claims of scalability and high availability to think of the true business implications, maintains Governor. They are not assessing what the business needs, the impact on infrastructure or the development costs.
'Organisations are not very good at working out the implications of scalability, availability and mission-criticality of the systems they run,' he says. 'They also have a tendency to buy something that's "five nines" without thinking about which particular service that's for. To get the most out of IT you need to analyse what you mean by availability. Four or five nines sounds good, but if you measure that over a year, it can still mean significant downtime. For some services that can be fine, but for others it's not.'
The business is neither interested in nor impressed by phrases such as 'five nines'. It may not matter if financial applications go down over a whole weekend, but a few minutes' outage on the last day of the financial year could be extremely damaging.
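It is worth spelling out what those marketing phrases actually buy you. The short sketch below (purely illustrative, not drawn from any vendor's SLA) converts an availability percentage into the downtime it permits over a year:

```python
# Convert an availability percentage into the downtime it allows per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def annual_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("three nines", 99.9), ("four nines", 99.99), ("five nines", 99.999)]:
    print(f"{label} ({pct}%): {annual_downtime_minutes(pct):.1f} minutes/year")
```

Three nines works out at nearly nine hours of downtime a year, four nines at just under an hour, and five nines at a little over five minutes. As Governor notes, whether any of those figures is acceptable depends entirely on which service, and which five minutes, you are talking about.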
But IT’s response to demands for uptime is often to buy big. ‘Hardware salesmen would prefer it if everyone over-prioritised their systems. If an application needs scaling, their answer is to buy more hardware,’ adds Governor. ‘The thinking is that as hardware is coming down in price, the answer must simply be to throw more hardware at the problem.’
We're now mopping up the mess from what Peter Hindle, enterprise solutions manager at HP UK & I, calls this 'just in case' scenario. Rather than buy the six processors needed for a big capital project, companies would buy 10, just in case, then add more for failover and yet more to keep development work separate, just in case. End result: 30-odd CPUs instead of the six the system actually required.
That means companies are sitting on vast pools of untapped capacity. 'I usually quote that we use about 20%, but for most people it's nearer to 8-10%, and I've had people publicly say 6% and privately lower,' says Hindle.
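Hindle's 'just in case' arithmetic is easy to reproduce. Using the hypothetical figures from his own example (six processors genuinely needed, 30-odd actually bought), the sketch below shows how the headline utilisation number falls out:

```python
# Rough utilisation arithmetic for the 'just in case' scenario.
cpus_required = 6    # what the capital project actually needed
cpus_purchased = 30  # after 'just in case' headroom, failover and separate dev kit

utilisation = cpus_required / cpus_purchased
print(f"Effective utilisation: {utilisation:.0%}")  # 20% - Hindle's usual quote
```

Drop the real requirement to two or three CPUs on the same purchased estate and you land in the 6-10% range Hindle hears privately.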
Paradoxically, one of the key gripes of IT heads is a lack of resources where they need them, closely followed by complaints that IT costs too much. In fact, three-quarters of the IT budget goes on 'just keeping the lights on' rather than innovation, according to Hindle.
Rather than buy more tin, successful companies are now looking at how to get rid of systems and to consolidate and centralise both their hardware and software. And that is proving a far from easy task.
'Most large enterprises have 100 or even 1,000-plus business-critical systems – too large a landscape to grapple with in one fell swoop,' says Michael Allen, global performance director at services company Compuware. 'You need to look at evaluating that, not only measuring end-user availability and performance, but strengthening stakeholder and business involvement.'
Consolidation makes total sense, but it is complex to achieve. When HP started its own internal consolidation programme – reducing roughly 10,000 applications (roughly, because the company decided to stop counting at 10,000) down to a more manageable 1,500 – it hit some problems. Every business process had a variant in some region or country, which kept adding complexity where the company was trying to strip it away. The answer was to move up a level and streamline the business processes themselves, not the applications. As Hindle points out: 'Sometimes the complexity is not always in the application or the hardware.'
Allen also believes centralisation of business critical systems is not always as straightforward as some believe. ‘Many legacy applications are set up to run on departmental servers and when they run on the WAN they are just too heavy. So when organisations try to centralise they are often not looking at the network aspect,’ he says.
Whether centralising or not, a key weapon for reducing costs and ensuring that business-critical systems are well supported is server and storage virtualisation. Server virtualisation partitions a large server into smaller virtual servers, which means one device can take the place of many smaller machines.
A November IDC report states that three-quarters of organisations with 500-plus employees are using virtual servers and more than 50% are using them for business-critical systems. By 2009, virtualisation will be a $9bn market.
Just as important as the virtualisation is service management – lifting eyes above the infrastructure layer and looking at IT as a service, an approach popularised by the IT Infrastructure Library (ITIL) best-practice framework.
Nicholas Carr’s hotly debated article in the Harvard Business Review a couple of years back contended that IT is becoming a commodity. While it’s true many technology elements are becoming commoditised, it’s what you do with them that counts.
Consolidation of IT hardware, software and processes may reduce costs, but it will only start delivering business value when it’s totally aligned with your business goals.