ComputerWeekly.com

High availability strategy key to saving costs, says Sungard AS

By Archana Venkatraman

The cost of IT downtime -- both in the economic sense and business reputation sense -- is extremely high for businesses. But must all systems and applications be highly available all the time?

"Enterprises are jumping to conclusions when it comes to high availability," said Keith Tilley,  enterprise vice-president, EMEA and APAC at Sungard Availability Services, the company that provides managed datacentre services, hosting and availability services to customers such as Serco, Barclays, and Sainsbury's in the UK.  

According to Tilley, enterprises can cut costs by as much as 30% by planning the availability requirements of their workloads and applications.

High availability (HA) systems are designed to remain continuously operational for extended periods.

Availability can be measured relative to "100% operational" or "never failing". A widely held but difficult-to-achieve standard of availability is known as "five 9s" (99.999%) availability.
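
To put that figure in context, an availability percentage maps directly onto the downtime it permits: five 9s allows only around five minutes of downtime a year, while 99% allows more than three days. The short Python sketch below is illustrative only -- standard arithmetic, not Sungard data -- and shows how those numbers are derived.

    # Illustrative arithmetic only: convert an availability percentage into the
    # downtime it permits over a year. Standard figures, not Sungard data.

    MINUTES_PER_YEAR = 365.25 * 24 * 60  # roughly 525,960 minutes

    def allowed_downtime_minutes(availability_pct: float) -> float:
        """Maximum downtime (minutes per year) implied by an availability percentage."""
        return (1 - availability_pct / 100) * MINUTES_PER_YEAR

    for pct in (99.0, 99.9, 99.99, 99.999):
        print(f"{pct}% availability allows {allowed_downtime_minutes(pct):,.1f} minutes of downtime a year")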

"If you want to have a truly highly available infrastructure, it is very, very costly," he said. "It is important to prioritise applications that need to be highly available."

This is because HA can only be achieved by mirroring the entire architecture across facilities and building redundancy (no single point of failure) into individual components. This requires heavy investment and full monitoring of transaction and replication streams.
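
As a rough illustration of what "no single point of failure" means in practice, the Python sketch below shows a client picking the first healthy endpoint across two mirrored facilities. The endpoint URLs and the requests-based health check are hypothetical assumptions, not a description of Sungard AS's own tooling.

    # Hypothetical sketch of cross-facility failover: the endpoint URLs and the
    # health-check path are assumptions, not Sungard AS infrastructure.
    import requests

    MIRRORED_ENDPOINTS = [
        "https://app.facility-a.example.com",  # primary facility
        "https://app.facility-b.example.com",  # mirrored facility
    ]

    def first_healthy_endpoint(timeout_seconds: float = 2.0) -> str:
        """Return the first endpoint whose health check responds, so no single
        facility is a single point of failure for callers."""
        for url in MIRRORED_ENDPOINTS:
            try:
                response = requests.get(url + "/health", timeout=timeout_seconds)
                if response.status_code == 200:
                    return url
            except requests.RequestException:
                continue  # this facility is unreachable; try the mirror
        raise RuntimeError("No healthy endpoint available in either facility")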

Built-in redundancy

Another strategy is to think about availability during the application design stage, he said. "Anything enterprises are building with a digital element to it, they have to consider availability right at the onset."

Highly available systems have redundancy built into both the application and architecture layers to mitigate downtime. But it is important to build redundancy into every layer of the IT system, because every part has to be highly available for the system as a whole to be.

But many enterprises have built websites, systems and applications without considering their availability needs. "These enterprises are now retrofitting HA into their systems. This can be very expensive," Tilley warned.

HA and high-performance computing (HPC) should be part of the inherent design of systems, said datacentre expert Clive Longbottom. New technical approaches -- such as big data and cloud computing -- require technology platforms that can not only meet the resource levels required by workloads, but also adapt those resources in real time as workload needs change.

Aligning people, processes and business

Enterprises have to determine which apps are critical to their business. For airlines, applications that handle customer bookings and track pilot hours are the most critical and need to be highly available, he said. For retailers, it may be the website; for financial services providers, it could be trading desk apps.

"Businesses need to understand what apps are critical for them. It is not always the most obvious ones," he said.

"They should also think about seasonal peaks. For instance, payroll apps run only once a month but, for that time-frame, availability of those apps are critical."

To save costs, IT should turn down the availability settings on such applications when they are not needed.
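
A minimal sketch of that idea follows, assuming availability is "turned down" by reducing the payroll app's replica count outside the monthly run. The payroll window and replica numbers are illustrative assumptions, not a recommendation from Tilley.

    # Illustrative only: scale a payroll app up for the few days a month it is
    # business critical, and back down afterwards. Window and counts are assumed.
    from datetime import date

    PAYROLL_WINDOW_DAYS = range(25, 32)   # assumed end-of-month payroll run
    HIGH_AVAILABILITY_REPLICAS = 3        # redundant copies during the critical window
    BASELINE_REPLICAS = 1                 # single instance the rest of the month

    def desired_replicas(today: date) -> int:
        """Return how many replicas the payroll app should run on a given day."""
        return HIGH_AVAILABILITY_REPLICAS if today.day in PAYROLL_WINDOW_DAYS else BASELINE_REPLICAS

    print(desired_replicas(date(2014, 6, 3)))   # outside the window -> 1
    print(desired_replicas(date(2014, 6, 27)))  # payroll window -> 3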

"CIOs understand how to plan latency and availability strategies on tight budgets, but their overall businesses and business stakeholders demand high availability for all their systems at all times, making IT costs soar," Tilley added.

Closely aligning people, processes and the business around availability must be seen as a strategic driver that can help enterprises succeed, he said.

But currently the collaboration between the three is too weak, Tilley concluded.   

03 Jun 2014
