Intel expands Xeon processor family to help enterprises build software-defined clouds

Chip manufacturer Intel unveils products and customer use cases in support of its software-defined infrastructure push

Intel has outlined its commitment to helping enterprises build clouds using software-defined infrastructures (SDI), claiming the approach will make their IT environments easier to manage.

The company defines SDI as comprising virtualised compute, storage and networking resources that run on standard hardware, can be scaled dynamically as and when needed, and provide automated self-provisioning capabilities.

In support of the strategy, the company used its Intel Cloud Day event in San Francisco to announce a slew of products, including the 22-core Intel Xeon E5-2600 v4 processor family and a supporting cast of solid-state drives (SSDs).

These are aimed at enterprises and cloud service providers, and Intel claims the latter offerings are already being used by the likes of online auction site eBay to underpin its operations.

Speaking at the event, Lisa Spelman, general manager of the company’s Xeon Product Group, said SSD technology is fast becoming a staple of many datacentre environments.

“Datacentre SSD sales have doubled from 2013 to 2015, and over half of that volume comes from cloud workloads,” she said.

“Cloud infrastructure demands the high performance you get from using SSD, because access to high performing, low latency storage is critical.”

One of the key features of the E5-2600 v4 is the inclusion of Intel’s Resource Director Technology (RDT), which has already been adopted by the Nasdaq stock exchange. It ensures mission-critical workloads are prioritised when it comes to receiving their fair share of processor cache and memory bandwidth.
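On Linux hosts, RDT’s cache allocation features are typically exposed through the kernel’s resctrl filesystem. The following is a minimal sketch, assuming a machine with RDT-capable hardware and resctrl mounted at /sys/fs/resctrl; the group name, cache bitmask and process ID are illustrative only, not taken from Nasdaq’s deployment.

```python
import os

RESCTRL = "/sys/fs/resctrl"                # kernel resctrl mount point (needs root and RDT-capable hardware)
GROUP = os.path.join(RESCTRL, "trading")   # illustrative resource group for a high-priority workload

# Create a resource control group for the latency-sensitive workload
os.makedirs(GROUP, exist_ok=True)

# Reserve a block of L3 cache ways on socket 0 for this group; the bitmask
# is an example and depends on how many cache ways the CPU exposes
with open(os.path.join(GROUP, "schemata"), "w") as f:
    f.write("L3:0=0xff0\n")

# Assign an already-running process (placeholder PID) to the group so its
# cache occupancy is protected from lower-priority neighbours
with open(os.path.join(GROUP, "tasks"), "w") as f:
    f.write("12345\n")
```

The effect is that processes in the group keep their reserved slice of last-level cache even when other tenants on the same socket are cache-hungry.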

Sandeep Rao, chief of technologies at Nasdaq, said his organisation and others in the financial services sector have been slow to adopt server virtualisation. This is because hypervisors have no way of knowing which workloads should receive the bulk of the pooled, virtualised resources.

Therefore, there are concerns that high-priority workloads may not receive sufficient cache, resulting in performance issues that could, in turn, cause problems for the traders who rely on its platforms.

“Over the past 10 years, how quickly you can do work on transactions has changed dramatically. People were happy with two-second response times, now it’s tens of microseconds,” said Rao.

“Now, the high priority workloads get more cache, so the latency is less spiky and it gives people more trust in the platform. We’re hitting our SLAs, and everybody’s happy, and Nasdaq doesn’t get complaints saying your platform isn’t working as effectively,” he added.
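RDT also includes cache monitoring, which is how an operator can check that a high-priority group really is holding on to its share of the cache. Below is a short sketch, again assuming the resctrl interface and the illustrative “trading” group from the earlier example, that reads the per-domain L3 occupancy counters.

```python
from pathlib import Path

# Per-group monitoring data exposed by resctrl when Cache Monitoring
# Technology (CMT) is available; "trading" is the illustrative group above
MON = Path("/sys/fs/resctrl/trading/mon_data")

for domain in sorted(MON.glob("mon_L3_*")):
    # llc_occupancy reports the group's current last-level cache usage in bytes
    occupancy = int((domain / "llc_occupancy").read_text())
    print(f"{domain.name}: {occupancy / 1024:.0f} KiB of L3 in use")
```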
