Cisco’s Morrison Talks UCS from Networkers

SearchNetworking couldn't make the trip to Networkers in Brisbane this week, but managed to secure some time with Cisco's Data Centre General Manager, ANZ, Dylan Morrison.

Since Cisco’s biggest news of the year, more or less, has been the launch of its Unified Computing System, on show at Networkers, what better way to begin than to quiz Morrison on how customers are responding to it?

Morrison: “I think what’s been surprising to a lot of the customers is that the UCS is designed to address the common issues they’re facing in the data centre: power, space, and being able to scale through virtualisation.

“We also had on show our C-Series compute platform at the AlphaWest stand, and that demonstrates Cisco’s commitment to the compute environment, right from 1RU systems through to the blade environment.”

One of the biggest challenges facing Cisco has been to explain an apparent turnaround in its attitude in launching the UCS. Cisco has, throughout its history, maintained a focus on the network, content to leave the server market to those companies whose networks it powers.

Morrison: “It comes back to how Cisco looks at market transitions. Four or five years ago, Cisco recognised that one of the big changes in the data centre [was] that there are existing constraints about power and cooling. It’s not so much about space, but how much power you can get into the data centre.”

That constraint is fast looming as something Morrison says could cap the headroom available for data centre development.

“It is a very costly exercise to … upgrade the substations for the power that is now being consumed in data centres.

“It’s not so much space, because we’re getting denser on how we’re putting our compute platforms in the racks. It’s how much of the heat you can get rid of in a data centre.

“It’s a Catch-22 … for every watt I put into the data centre there’s 2.5W that I consume with cooling that environment.”
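Morrison’s ratio amounts to a simple overhead multiplier on every watt of compute. A quick back-of-the-envelope sketch (the 10 kW rack load is illustrative, not a figure from the interview):

```python
# Back-of-the-envelope facility load, using Morrison's figure of
# 2.5 W of cooling consumed for every 1 W delivered to compute.
COOLING_W_PER_IT_W = 2.5

def total_facility_load(it_load_w: float) -> float:
    """Total power draw: IT load plus the cooling needed to remove its heat."""
    return it_load_w * (1 + COOLING_W_PER_IT_W)

# A hypothetical 10 kW rack of compute:
it_load_w = 10_000
print(total_facility_load(it_load_w))  # 35000.0 W drawn to run 10 kW of compute
```

On that ratio, the data centre pays three and a half times over for every watt its servers actually use, which is why density rather than floor space becomes the binding constraint.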

“I have seen the statistics from the NSW Energy Supply Association – they were looking at 2012, when, given the trajectory of supply in NSW, peak load will outstrip supply.”

This is also leading to new design metrics, he claimed: “What sort of metrics are people using to design and talk about their data centre? They’re getting down to how many virtual machines they can fit into the data centre, per kilowatt.”

Cisco is hopeful that this will move the data centre design debate onto ground where it feels most comfortable: “It’s not about 20 blades or 20 servers, it’s that you need X amount of virtual machines in the data centre – and then you acquire an architecture to support that.”

That leads naturally to two related discussions – cost and “greenness”.

“Power will get more expensive, which will start to regulate how people use the power. [Today] you’re looking at something like 15 to 16 cents per kilowatt-hour. I’ve spoken to customers who are now looking at 21 cents per kilowatt-hour at peak load – which is a major jump in the cost of power.”
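As a rough illustration of what that jump means at scale (the 200 kW facility load and round-the-clock duty cycle below are assumptions for the example, not figures from the interview):

```python
# Annual power bill at the per-kilowatt-hour rates Morrison cites.
# The 200 kW load and 24x7 duty cycle are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost_dollars(load_kw: float, cents_per_kwh: float) -> float:
    """Yearly energy cost for a constant load at a flat tariff."""
    return load_kw * HOURS_PER_YEAR * cents_per_kwh / 100

load_kw = 200
today = annual_cost_dollars(load_kw, 16)  # ~15-16 c/kWh today
peak = annual_cost_dollars(load_kw, 21)   # 21 c/kWh at peak

print(round(today), round(peak), round(peak - today))  # 280320 367920 87600
```

Even this modest tariff change adds tens of thousands of dollars a year to a mid-sized facility’s bill, which is the commercial pressure behind the VMs-per-kilowatt design metric Morrison describes.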

To understand how Cisco is positioning the “green” credentials of the UCS, let’s first take Morrison’s description of its design.

Ground-up Design

“Cisco also recognised that some of the limitations of the legacy systems that were available in the environment came back to some key fundamentals: the networking or I/O capacity that the compute platform could take on, and the second being the amount of memory that people could feed into the compute platform to roll out a virtualised environment.”

In other words, Cisco is pitching its entry into the server space – what it has decided to call “the compute platform” – as a response to what were emerging as intractable issues in the data centre.

“If I look at a lot of customers, the first thing they run out of in capacity when they virtualise is the memory.

“So the first thing we did was to architect the compute environment from the ground up – we didn’t have any legacy environment that we had to look after. We went out to the market and hired some key developers and innovators within the IT market, one of those being Ed Bugnion, a co-founder and former CTO of VMware.

“So we got a lot of industry resources to be able to architect it from the ground up.”

The result of that development, Morrison says, is a more efficient environment.

“We could strip out a lot of the legacy infrastructure … we’ve taken out a lot of the management modules, a lot of the switching and network interfaces, to bring it down to about a third of the infrastructure within the compute platform.”

And that, in part, makes room for more memory: “You can scale the environment to about four times the virtual machines that would have been in the legacy systems.”

Management and Admin

By concentrating the VMs, Morrison said, there are benefits in management and administration.

“You can cut down the usual provisioning tasks from hours or days, down to minutes using the network – the network can scale, it’s in a central location, and it’s easier to manage large environments from the network.”

Of course, you can’t just junk the management – the capabilities are needed, but Cisco’s claim in the UCS is that by moving the management away from individual blades or blade racks, it’s more efficient in every way.

“In the legacy environment, you had the management modules built into the blade environment – we have moved the management to the fabric, the network.

“The benefit of doing that is that we can, from a single IP address … manage up to 40 environments. That equates to up to 320 blades from a single interface.

“It effectively means you only have to make changes a single time, not on multiple devices, not on multiple management modules.”
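Morrison’s totals imply eight blades per managed chassis. A minimal sketch of that arithmetic (the per-chassis breakdown is an inference from his figures, not something he states directly):

```python
# Morrison: a single management point covers up to 40 "environments"
# (chassis), equating to up to 320 blades -- i.e. 8 blades per chassis.
MAX_CHASSIS = 40
MAX_BLADES = 320
BLADES_PER_CHASSIS = MAX_BLADES // MAX_CHASSIS  # 8

def blades_managed(chassis_count: int) -> int:
    """Blades reachable from one management interface for a given chassis count."""
    if not 0 <= chassis_count <= MAX_CHASSIS:
        raise ValueError(f"chassis count must be 0-{MAX_CHASSIS}")
    return chassis_count * BLADES_PER_CHASSIS

print(blades_managed(40))  # 320 blades from a single IP address
```

The point of moving management into the fabric is exactly this fan-out: one configuration change propagates to every blade behind the single management interface.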

Those with long memories will, of course, remember that server-in-network isn’t a brand-new idea. Morrison said the big difference that makes 2009 the right time for these technologies is virtualisation.

“Although virtualisation is an old technology when you talk about mainframes, in an open systems environment, having virtualisation means I can send servers or computers across a wider area, and I can move applications around within that environment as well.

“So you have a tighter connection between your network and your computing load.”

Can the UCS Unify the Data Centre Network?

These kinds of considerations all feed back into how Cisco hopes to position the UCS as creating more efficient (and incidentally, hopefully greener) data centres, and Morrison sees it as reaching all the way back to fundamental design.

“We’ve built up the data centres as a kind of ad-hoc or accidental architecture. We’ve got lots of silos that are very inefficient from a design point of view.

“A good example is that we’ve got multiple networks in the data centre … having a unified fabric so that I can drive my storage traffic, my IP traffic, and all my high performance computing traffic across a single network – you can start to drive about 20% efficiencies in the data centre.”

That hope for network unification is also reflected in the ability of the UCS to drive networking capabilities more towards individual VMs, “giving the virtual machines the same features, functions and view as if it was a physical piece of hardware.

“In the traditional model you associate a lot of the networking features and policies with the NIC. We’re taking that up into the VM.”

That means, he said, that virtual machines can more easily preserve network operations such as VLANs, access controls and security policies, even while that VM and its applications are being relocated between servers.

At the moment, according to Morrison, Australia and New Zealand are keen users of virtualisation, but only to a relatively low degree.

“Customers have maybe 20% of their systems on a virtualised environment – what we really need is to get up to 80-90% running on a virtualised platform to bring in the efficiencies and optimise the infrastructure.”

And partly, that’s been driven by concerns over security that Cisco hopes are addressed in the UCS.
