Feature

Buyer's guide to IT infrastructure

Best practice in datacentre design dictates that as much IT infrastructure as possible should be virtualised. Doing so improves agility, allowing the IT department to flex resources up and down to meet the demands of the business.

Given that virtualisation is happening, IT departments have a choice: roll out a scale-out architecture using a large number of x86 servers to run virtual machines, or deploy a scale-up architecture comprising fewer, much larger and often more expensive Unix and Linux servers in place of commodity Windows hardware.

The choice of deployment architecture will be based on the cost of running a given workload on each. Modern applications that conform to a three-tier architecture can be organised so that the database runs on servers offering high availability and high throughput, while the presentation tier of the application, the web servers, runs on commodity x86 hardware. Where the business logic, or middle tier, runs will depend on the nature of the application.

Paul Kember, HP country director UK and Ireland for industry standard servers, says, "We are seeing consistent deployment of scale-out servers, as products are increasing in performance. For instance, HP launched a terabyte blade in June." However, he notes that there are still critical applications that demand HP's scale-up architecture, namely its Integrity and NonStop servers.

Beyond the hardware choices, it is possible to optimise workload from a purely software licensing perspective, says Kember. "In a classic three-tier architecture I would put Oracle on Integrity because of the way that software architecture works, but if I was running Microsoft Dynamics I would look at using industry standard servers, and perhaps put the database on Integrity." In other words, by assessing licensing of the database, middle tier and presentation tier, a datacentre manager can run the different components of an application on the most cost-effective platform.
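Kember's point lends itself to a simple cost model. The Python sketch below scores each application tier against two candidate platforms and picks the cheaper home for each; the platform names and annual cost figures are invented for illustration, not vendor pricing, but they capture how per-core database licensing can make the big scale-up box the cheaper option overall.

```python
# Illustrative tier-placement costing; platform names and all figures
# are invented assumptions, not vendor pricing.

# Estimated annual cost (hardware amortisation plus software licensing)
# of running each tier on each candidate platform.
costs = {
    "database":     {"scale_up_unix": 90_000,  "x86_commodity": 140_000},
    "middle_tier":  {"scale_up_unix": 80_000,  "x86_commodity": 40_000},
    "presentation": {"scale_up_unix": 60_000,  "x86_commodity": 15_000},
}

# Independently pick the cheapest platform for each tier.
for tier, options in costs.items():
    platform, cost = min(options.items(), key=lambda kv: kv[1])
    print(f"{tier}: run on {platform} at an estimated {cost:,} per year")
```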

Server side

Suppliers in the server technology market are driving a wave of innovation to give users more flexibility in their IT infrastructures. Businesses are looking at how to virtualise their servers, storage and networks on x86 hardware across Windows, Linux and Unix systems.

Virtualisation promises to improve datacentre manageability. Deploying new applications across a virtual infrastructure should be far easier than across physical servers, storage and network hardware. The industry is beginning to sell virtualisation management tools that let IT departments change and adapt an infrastructure built on a virtual foundation with just a few mouse clicks.
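As a flavour of what scripted provisioning can look like, here is a minimal sketch using the open-source libvirt Python bindings. The connection URI and the skeletal domain XML are placeholders; a real definition would need disk and network devices before the virtual machine could boot.

```python
# Minimal provisioning sketch with the libvirt Python bindings.
# The URI and domain XML below are placeholders, not a production definition.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>app-server-01</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the VM definition
dom.create()                           # boot the virtual machine
conn.close()
```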

While HP offers both scale-up and scale-out datacentre architectures, depending on how businesses want to deploy applications, IBM has recently taken a different approach to datacentre computing.

IBM's new zEnterprise 196 (z196) mainframe aims to tackle datacentre complexity by pulling together different applications, or workloads, in a single system. The system couples the mainframe with Power and x86 blades, enabling the datacentre to run mainframe, AIX and Linux applications in the same floor space.

"Many businesses run DB/2 on the mainframe as the data store for the Linux or Aix-based SAP system, says IBM distinguished engineer Jim Porell. This requires two groups of IT administrators. But with the z/196, the management of the Linux, Aix and mainframe systems can be combined.

He says, "By tying in both the traditional mainframe of System z, along with the new zEnterprise Bladecenter Extension, we can help improve workload deployments and in doing so, avoid some of the pitfalls inherent in a Windows-only environment. We are attempting to enable a fit-for-purpose deployment mode."

Porell concedes the mainframe does not run Windows, but he says, "Many of the applications that run on Windows are available on AIX and Linux on either System z [the mainframe] or System x [IBM's x86 servers]. Our goal is to adopt end-to-end deployment solutions rather than single-server focused processing."

Meanwhile, Cisco has entered the datacentre market with its Data Center 3.0 strategy. Cisco is tackling the other buzz phrase in IT, cloud computing, which effectively turns IT into a set of services delivered over an internal network or an external internet connection. By deploying a virtual IT infrastructure, IT departments can begin to choose where servers are deployed - internally or externally - through the concept of cloud-based computing.

"We are not trying to be another server provider," says Jim de Haven of Cisco. Instead, Cisco aims to deliver an IT architecture to enable end-user businesses and service providers to deliver IT as cloud services. Cisco is winning business by showing companies how its approach to datacentre IT architecture can halve their costs, de Haven.

Green drivers

Datacentres run 24/7, so electricity costs quickly mount up. Inefficient servers use more power and, as a result, require extra cooling, which adds to the power burden.

People are taking a wider perspective on flexible IT infrastructure in the light of power and cooling challenges. Replacing an inefficient legacy server with a modern green server can pay for itself in a matter of months. "For power and cooling cost alone you can get a payback in two months on a five-year-old legacy server," says David Chalmers, HP's UK and Ireland chief technology officer.

He says businesses are able to consolidate 40 to 50 servers down to one or two and get dramatic savings. These days server metrics are no longer purely about price/performance, such as the TPC benchmarks, which measure cost per transaction; the TPC-Energy specification adds a measure of the power consumed per unit of work.
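The arithmetic behind such claims is easy to sketch. Every figure below (server counts, power draw, cooling overhead, tariff and hardware cost) is an illustrative assumption rather than HP data, but the method is the one a buyer would apply:

```python
# Back-of-the-envelope consolidation and payback arithmetic.
# All figures are illustrative assumptions, not vendor data.

legacy_servers   = 45      # old boxes being consolidated
legacy_watts     = 400     # power draw per legacy server
new_servers      = 2       # modern replacements
new_watts        = 600     # power draw per new server
cooling_overhead = 1.8     # each watt of IT load needs ~0.8 W of cooling
price_per_kwh    = 0.10    # electricity tariff per kWh
hours_per_year   = 24 * 365

def annual_power_cost(servers: int, watts: float) -> float:
    """Yearly electricity cost, including the cooling overhead."""
    return servers * watts * cooling_overhead * hours_per_year / 1000 * price_per_kwh

saving = (annual_power_cost(legacy_servers, legacy_watts)
          - annual_power_cost(new_servers, new_watts))
print(f"Annual power and cooling saving: {saving:,.0f}")

hardware_cost = 20_000     # assumed cost of the replacement servers
print(f"Payback from power savings alone: {hardware_cost / (saving / 12):.1f} months")
```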

People and processes

Chalmers says the recession is driving CIOs to look at freeing up resources to push forward innovations. "They are looking to take a different view of the IT organisation. Is it still optimal to organise around a server team, storage team and networks team?" he asks.

Orchestration tools help businesses manage the workload of systems such as SAP. Such tools cut across the server, storage and network infrastructure, allowing an IT administrator to reconfigure applications on the fly. "If you want an extra 150 users, the tools provide the network, storage and server configuration requirements," says Chalmers.
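The kind of calculation Chalmers describes can be captured in a few lines. The sketch below is hypothetical: the users-per-VM, storage and bandwidth ratios are invented for illustration, where a real orchestration tool would draw them from measured application profiles.

```python
# Hypothetical capacity arithmetic for "an extra 150 users".
# The ratios below are invented for illustration only.

USERS_PER_VM      = 50    # concurrent users one application VM can serve
STORAGE_GB_PER_VM = 40    # disk allocated per VM
MBPS_PER_USER     = 0.5   # network bandwidth per user

def resources_for(extra_users: int) -> dict:
    """Translate a user-count request into server, storage and network needs."""
    vms = -(-extra_users // USERS_PER_VM)  # ceiling division
    return {
        "virtual_machines": vms,
        "storage_gb": vms * STORAGE_GB_PER_VM,
        "network_mbps": extra_users * MBPS_PER_USER,
    }

print(resources_for(150))
# -> {'virtual_machines': 3, 'storage_gb': 120, 'network_mbps': 75.0}
```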

In theory, this will mean IT could be organised around virtual teams that manage line of business applications. Each virtual team would be responsible for its virtual server, storage and network configurations.

E G Nadhan, co-chair of The Open Group's Service-Oriented Cloud Computing Infrastructure (Socci) project, says, "We are no longer constrained by the physical location of datacentres. They are logical hubs, geographically dispersed, operated as a virtual pool of resources. We project a significant reduction in the number of people required to run a datacentre." He says virtual teams could be deployed around the world to provide follow-the-sun datacentre management, driven by business needs.

The concept of virtual IT teams is already starting to take shape. "We are seeing a shift in focus for IT departments to functional teams such as the website team, a portal team and teams focused on the business impact," says Rami Rihani, senior manager, datacentre technology and operations at Accenture.

From an organisational perspective, such teams are better aligned to the business. "For instance, you have a talent pool dedicated to the marketing function." But who manages the team? "The team moves from reporting to the CIO to reporting to the CMO or CFO or department head. Taken to an extreme, the CIO function could disappear," he says.

However, while IT infrastructure management looks like it is heading in this direction, Daniel Singer, director of KPMG's performance and technology IT-enabled transformation division, believes that, in practice, organisations cannot get to this level of granularity.

He says, "As you proceduralise a business process, you can reduce the knowledge expertise to operate so it becomes easier to extract the human part and move it around, based on cost. But I don't think IT administration and orchestration in a datacentre is anywhere near mature enough to do this."

Singer warns that in many respects a virtual environment is far harder to manage than physical IT infrastructure, where administrators can at least see the physical servers.

Cloud burst

The crux of the problem facing businesses that buy into the idea of virtualisation is that companies such as Amazon operate public clouds extremely efficiently thanks to the scale of their operations. But can a company, even a large enterprise, truly run an optimised private cloud service?

Singer does not believe it is cost-effective. "A public cloud service provider has many routes to market, but internally a business only has a captive market. This breaks the economic model for running cloud. It becomes a costly investment," he says.

Singer sees a distinction between providing IT as a public service and having the environment internally. "Companies aspire to be as standardised as possible - where nobody sees the servers and the operators - which makes it easier to run a stable and economic model. But to build this kind of model internally, with aggressive levels of standardisation, is just not possible inside a business," he says.

He believes that many enterprises lack the necessary skills to drive forward the massive levels of standardisation needed to reap the benefits of cloud computing. Further, the enterprise applications that run within businesses are often unsuitable for cloud deployment, since they are not standardised.

"I have seen examples where clients have built very flexible IT environments, but at a cost. They now have IT resources they are not using," Singer warns.



This was first published in August 2010

