Intel bets on custom silicon as specialised workloads rule datacentres

Intel is betting on workload-optimised silicon as datacentre workloads become sophisticated, dynamic and varied

Intel is betting on workload-optimised silicon as datacentre workloads become more sophisticated and varied, and facilities begin to feel the pressure of new trends such as mobility, the internet of things (IoT), cloud computing and big data.

“20 years ago, datacentres were built for monolithic workloads,” said Alan Priestly, Intel EMEA director of big data analytics at the company’s datacentre and IoT innovation event in London on Wednesday. “But today workloads are more fragmented.”

According to the company, datacentre architecture is under pressure and the enterprise datacentre needs to be re-architected as we enter the “era of analytics”.

“By 2020, we will have entered the era of analytics, where there will be 50 billion devices and 35 zettabytes of data,” said Shannon Poulin, vice-president of Intel’s datacentre group.

“We are sensing that requirement from the market as a silicon provider because the pressure on storage, server and networking components as well as on end point devices is huge.”

“Some 90% of data produced in the industry today is ‘dark data’. This data has got value and analytics will be key to harnessing its value.”

Dark data refers to operational information that goes unused by other applications or is stored for regulatory compliance purposes only.

Matching silicon to workload

Sensing this pressure as a chip provider, Intel is beginning to focus more on “workload-optimised silicon”. “It is not just a single workload in the datacentre anymore,” Poulin said. “We need to build accelerated technology into silicon and best match silicon to the workload.”

Poulin said different workloads need different amounts of storage, memory, I/O (input/output) performance and compute power.

“We are working on 35 different silicon technologies for different workloads, such as security-focused, graphics-focused or analytics-focused workloads,” he said.

Intel aims to allow users to configure and customise the chips through the orchestration layer. When an ERP workload hits the system, for example, the orchestrator should recognise that it needs more compute capacity and provision it, giving users a better ERP experience.
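To make the idea concrete, the following is a minimal illustrative sketch, in Python, of how an orchestration layer might map an incoming workload to a silicon resource profile. The profile table and the match_silicon() helper are invented for this example; they are not an Intel API.

    # Illustrative sketch only: a hypothetical orchestration layer that maps
    # an incoming workload to a silicon resource profile. The profile names
    # and match_silicon() are invented for this example, not an Intel API.

    WORKLOAD_PROFILES = {
        # workload type -> resources to prioritise when scheduling
        "erp":       {"compute": "high",   "memory": "high",   "io": "medium"},
        "analytics": {"compute": "high",   "memory": "high",   "io": "high"},
        "security":  {"compute": "medium", "memory": "low",    "io": "high"},
        "graphics":  {"compute": "high",   "memory": "medium", "io": "low"},
    }

    def match_silicon(workload_type: str) -> dict:
        """Return the resource profile the orchestrator should request
        for this workload, falling back to a balanced default."""
        default = {"compute": "medium", "memory": "medium", "io": "medium"}
        return WORKLOAD_PROFILES.get(workload_type, default)

    # When an ERP workload arrives, the orchestrator asks for extra compute.
    print(match_silicon("erp"))
    # {'compute': 'high', 'memory': 'high', 'io': 'medium'}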

To accelerate the silicon, Intel is building features such as analytics and encryption directly into the chips, using technologies such as field-programmable gate arrays (FPGAs). An FPGA is an integrated circuit (IC) that can be reprogrammed in the field after manufacture, giving it vastly wider application potential than fixed-function parts such as programmable read-only memory (PROM) chips.

The chip giant’s shift from making general-purpose processors to more custom-made silicon comes amid the rise of software-defined infrastructure and the demand from large cloud service providers for cheaper, more power-efficient and tailor-made CPUs to run web-scale applications.

“It is important to disrupt the datacentre and add custom-made silicon for different IT tasks,” Poulin said. “We have to add programmable capabilities into our silicon for users to customise it to their workload needs.”

This in turn will lead to a software-defined infrastructure, he said.

“While it is important to re-engineer current datacentre infrastructures, the configuration of hardware components, such as server, storage and networking, cannot be changed much,” he said. “So we need to allow software to define our infrastructures.”

The optimised, software-defined, analytics-enabled datacentre

Intel said cloud service providers, the big web companies and telecoms firms are leading the way in software-defined infrastructure. At its Developer Forum earlier this year, Diane Bryant, who heads Intel’s datacentre group, said the company had already sold custom-made silicon – Xeon processors paired with FPGAs – to web giants including Facebook and eBay.

But the mainstream enterprise datacentre has yet to adopt these kinds of newer technologies.

“If you look at cloud providers, they virtualise all elements of a datacentre – server, storage, network – and then they orchestrate it for more efficient performance,” Poulin explained. “But enterprises don’t do that.”

According to him, enterprise datacentres have done a good job of virtualising servers – but that’s as far as it goes. “They haven’t done much on storage tiers or network infrastructure. There’s tremendous room for disruption in the datacentre’s storage and network layers. How people deliver storage technologies needs to change.”

Speaking to customers and partners at the event, Intel datacentre vice-president and general manager Rose Schooler said datacentres are evolving well beyond server infrastructure to include telecommunications and cloud infrastructures. “The datacentre is expanding to include the view of storage and network infrastructures too, and today these are all bottlenecks that enterprises must solve to accelerate their IT performance,” Schooler said.

“Today, we believe networking is ripe for becoming software-defined.”

Elaborating on a datacentre network scenario, she said it takes two to three weeks to bring new network infrastructure fully into service in a typical enterprise datacentre, but in the software-defined network infrastructure at large telecoms firms such as AT&T, it takes just one hour.

“Networks must evolve to enable the datacentre to meet end-users' requirements,” Schooler said.

But the enterprises’ move to a software-defined, optimised and analytics-ready datacentre will be a slow one, the company forecast.

“Just like we did with server virtualisation, we will be having this conversation for the next seven years – about people getting on that journey,” Poulin said.

Planning for future workloads

He admitted Intel is yet to see huge volumes of custom-CPU sales to traditional enterprises. “We are continuing to invest heavily in mainstream Xeon processors and there will continue to be a big market for them,” Poulin said. “But we are also investing in custom silicon for specific customers.”

According to Gartner’s second-quarter server shipment figures for 2014, x86 server revenue increased 12.7% in Europe.

"Xeon is pretty good at everything because of the speed of development in Xeon technology," said Gartner analyst Errol Rasit.

But no-one can really predict what future computational workloads will look like. Rasit believes Xeon’s status quo could eventually be disrupted by new workloads with capabilities not yet considered.

Poulin is optimistic that FPGA-based chips will improve the performance of datacentre resources and cut power use, so enterprises can bring down the cost of running their datacentre infrastructure. When this happens, he believes, more enterprises will move to them.

“We want to deliver the best TCO [total cost of ownership] possible – the best performance per watt, per dollar, for whatever configuration of datacentre our users are in,” he said.
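As a back-of-the-envelope illustration of that metric, the short Python sketch below compares two hypothetical server configurations on performance per watt per dollar. All the figures are invented to show the calculation; they are not real Intel numbers.

    # Illustrative arithmetic only: comparing two hypothetical server
    # configurations on performance per watt per dollar. All figures are
    # made up to show the calculation, not real Intel numbers.

    def perf_per_watt_per_dollar(perf: float, watts: float, dollars: float) -> float:
        """Normalise raw throughput by power draw and purchase cost."""
        return perf / (watts * dollars)

    general_purpose  = perf_per_watt_per_dollar(perf=1000, watts=200, dollars=5000)
    fpga_accelerated = perf_per_watt_per_dollar(perf=1800, watts=220, dollars=6500)

    print(f"general purpose:  {general_purpose:.6f}")   # 0.001000
    print(f"fpga accelerated: {fpga_accelerated:.6f}")  # 0.001259

    # On these made-up numbers, the FPGA-accelerated configuration wins on
    # TCO despite a higher up-front price, because the extra throughput
    # outweighs the added power draw and cost.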
