
APAC datacentres gear up for the future

Datacentre operators in the Asia-Pacific region are dabbling in advanced power and cooling solutions, along with machine learning and edge computing, to keep pace with growing demand for their services


For most consumers and businesses, a datacentre sits in the background; as long as services work, they are oblivious to how it operates.

But growing complaints about unstable or unresponsive technology and communications services have cast the spotlight on challenges faced by datacentre operators and providers.

The obvious culprit is the Covid-19 pandemic, which has been straining digital services for the past 18 months. According to the findings of IDC’s Datacentre operational survey 2020, the top three challenges faced by datacentre providers across Asia-Pacific (APAC), whether enterprise-owned or colocation, are performance, downtime and capacity.

“Some 33% of datacentres are struggling with latency and network performance to support the increase of demand,” says Duncan Tan, senior research manager at IDC ASEAN.

“As more organisations demand high availability of workloads, 32% of datacentres still struggle with downtime due to system failures. And some 26% of datacentres still suffer from having insufficient power and space, which translates to delays in project deployments,” he notes.

But as convenient as it is to blame the Covid-19 pandemic, the APAC datacentre market had already been grappling with challenges due to strong growth in the use of artificial intelligence (AI), internet of things (IoT) and big data analytics, says Tony Gaunt, vice-president for hyperscale and colocation at Vertiv.

Citing research from Structure Research and Cushman & Wakefield, Gaunt notes that the APAC datacentre market is forecast to be worth $28bn by 2024, with a compound annual growth rate (CAGR) of 12.2%.

“With the increasing digitisation efforts, datacentre services have practically achieved utility status. This has led many datacentre operators to see where they can further improve and become more resilient to growing market demands,” he says.

Today’s scenario

As datacentre demand grows, there are several impediments facing the industry.

Sandy Gupta, vice-president for sales, marketing and operations at Microsoft APAC, says ensuring consistency in power grid reliability and availability, as well as last mile connectivity, should be a priority for datacentres.

Citing a report by power infrastructure player Aggreko, Gupta notes that the APAC region is particularly susceptible to electricity and connectivity constraints as power grid companies and power networks in the region are still trying to match the rapid pace of technology infrastructure investments in recent years.

“Specifically, more energy-efficient and sustainable datacentres are needed to bridge this gap, by enabling highly available, scalable and resilient cloud services,” says Gupta.


Meanwhile, other issues facing the industry are the need to handle high heat generation and provide high-connectivity and low-latency networks, says Takeshi Kimura, vice-president and regional head of datacentres for NTT.

“Regions where hyperscale datacentres have not been in high demand previously are gradually growing,” says Kimura. “They demand larger datacentres to meet their customers’ needs in a timely manner, with cooling systems that can handle high heat generation and low-latency networks.”

Vertiv’s Gaunt adds that, with the rise of compute-intensive applications and growing data storage, conventional air-based systems such as chillers, computer room air-conditioning (CRAC) units and computer room air handlers (CRAHs) will not be sufficient to serve the cooling requirements of high-density datacentres in the long term.

However, given the high upfront costs and the complexity and time involved in redesigning, and potentially overhauling, datacentre infrastructure, industry players have been reluctant to embrace more efficient cooling technologies, he says.

Solving power and cooling challenges

This raises the question: what does the future hold for datacentres?

According to IDC’s Tan, 33% of datacentres are investing in tools that improve infrastructure management and operational visibility in order to reduce downtime and improve latency and performance.

“Some 29.4% of datacentres will focus on advanced power and cooling equipment that makes more efficient use of energy,” says Tan. “Advanced analytics will provide insights to reduce power utilisation, while automation will take over tedious, manual tasks.”

Microsoft’s Gupta argues that the first step, even before addressing the cooling and power challenges, is to ensure datacentres have proper measurement of activities. He believes detailed metrics can explain how resources are being deployed and ensure that appropriate incentives are in place to drive greater efficiency across operations.

“Microsoft has achieved significant reductions in energy consumption at our datacentres by providing incentives to our managers not just for uptime, but also for improving energy efficiency as measured by the power usage effectiveness [PUE],” he claims.

PUE is the ratio of a facility’s total power consumption to the power used by its IT equipment; the closer it is to 1.0, the less power is spent on overheads such as cooling and power distribution, over and above what is needed to run the server farms.
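As a simple illustration of how the metric reads (the figures below are hypothetical, not Microsoft’s):

    # PUE = total facility power / IT equipment power. A value of 1.0 would mean
    # every watt entering the building reaches the IT equipment.
    it_load_kw = 1000.0        # assumed power drawn by servers, storage and networking
    facility_load_kw = 1250.0  # assumed total draw, including cooling and power distribution

    pue = facility_load_kw / it_load_kw
    overhead_pct = (pue - 1.0) * 100
    print(f"PUE = {pue:.2f}, so {overhead_pct:.0f}% extra power goes to overheads")
    # PUE = 1.25, so 25% extra power goes to overheads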


As for dealing with cooling and high-power requirements, Vertiv’s Gaunt notes that in parts of the region with tropical climates, liquid cooling systems will help to reduce energy consumption significantly compared with other cooling techniques.

“By using such technology, a 1°C reduction in water temperature offers 2% to 3% of savings in power consumption compared to a typical chiller,” he claims. “Liquid cooling systems are self-contained and protected from external environmental elements such as dust, heat and air pollution, and they have lower noise levels compared to air-based cooling techniques that use fans and equipment for airflow.”

Gupta concurs, adding that Microsoft has invested in liquid cooling and undersea datacentre concepts and has experimented with the use of hydrogen fuel cells for backup power at datacentres. “This has created more efficient, high-bandwidth networks that can enable large-scale applications as well as the ability to transfer massive amounts of data,” he says.

Rival Google has similar systems in place. At its Changhua datacentre in Taiwan, Google uses a thermal energy storage system that cools water at night, when temperatures are lower, storing it in large, insulated tanks.

“The liquid is then pumped throughout the facility during the day to cool our servers,” says Randy First, Google’s datacentre operations director for APAC. “In Singapore, our facility relies on recycled water, with each element custom designed to operate at optimal efficiency.”

But as advantageous as it seems, liquid cooling does have its drawbacks, especially in the hot and humid climates of Southeast Asia, notes NTT’s Kimura.

“Water scarcity, groundwater depletion and contamination are also significant concerns, and the [un]availability of water can limit the cooling solutions and strategies available in many cities,” he says. “The whole community, including datacentre players and vendors, needs to come together to deliver the best practical cooling systems.”

Kimura says NTT’s laboratories have dabbled with using “immersion cooling” in special cases to cool the rack, both in the form of direct on-chip cooling and integrated cooling coils. “These are currently most common in specialist systems and high-density workloads, but we expect to see them being applied more widely as they become more mature,” he adds.

Additionally, alternative green fuels such as ammonia, along with fuel cells and improved energy storage, are in development, says Kimura. “As the use of renewable energy sources increases, the need to integrate these sources with the power grid will be an opportunity and a driver of innovation.”

Beyond power and cooling

To squeeze more performance out of future datacentres, hyperscale players are looking at the use of artificial intelligence and machine learning to optimise performance and conserve power.

Google’s First says the search giant began applying machine learning to run its facilities, and the large industrial equipment inside them such as pumps, chillers and cooling towers, more efficiently.

He says a system of neural networks, trained on different operating scenarios and parameters within its datacentres, allowed it to create a more efficient and adaptive framework for understanding datacentre dynamics and optimising efficiency.

“By 2016, our DeepMind AI system was able to consistently achieve a 40% reduction in the amount of energy used for cooling,” claims First.
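The general pattern behind such systems, a model trained on historical telemetry to predict facility efficiency under different settings, can be sketched roughly as follows. This is a simplified illustration using scikit-learn with synthetic data, not Google’s DeepMind system, and the choice of inputs is an assumption:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for historical telemetry: each row is a snapshot of operating
    # parameters (e.g. outside air temperature, water setpoint, pumps running, IT load)
    # and y is the PUE measured under those conditions.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(5000, 4))
    y = 1.1 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.standard_normal(5000)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500))
    model.fit(X, y)

    # Given candidate cooling settings, the model estimates the resulting PUE,
    # letting operators compare setpoints before applying them on the plant.
    candidate = np.array([[0.6, 0.8, 0.5, 0.7]])
    print("predicted PUE:", round(float(model.predict(candidate)[0]), 3))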

Meanwhile, Microsoft applies machine learning to maintain its datacentres, particularly to cope with the enormous volume of telemetry, such as logs and alerts, which is too great for manual analysis.

“With machine learning tools in place, large amounts of telemetry can be readily processed, and deviations can be spotted rapidly to identify operational bottlenecks or even predict potential problems before they manifest,” says Gupta.
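In practice, that kind of deviation-spotting is often done with unsupervised anomaly detection over the telemetry stream. The sketch below is a generic illustration with synthetic data and an off-the-shelf detector, not Microsoft’s tooling:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic telemetry: one row per time window, with columns such as CPU temperature,
    # fan speed, inlet temperature and power draw; a handful of abnormal windows are mixed in.
    rng = np.random.default_rng(42)
    normal = rng.normal(loc=[65, 3000, 24, 450], scale=[2, 150, 1, 20], size=(2000, 4))
    faulty = rng.normal(loc=[85, 5500, 32, 600], scale=[2, 150, 1, 20], size=(10, 4))
    telemetry = np.vstack([normal, faulty])

    # Fit an unsupervised detector; windows scored as -1 are deviations worth
    # investigating before they turn into downtime.
    detector = IsolationForest(contamination=0.01, random_state=0)
    labels = detector.fit_predict(telemetry)
    print("flagged windows:", int((labels == -1).sum()))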

Jake Saunders, vice-president for advisory services at ABI Research, points out that larger datacentres and hyperscalers are deploying localised facilities at the edge of the network to improve latency and availability performance, especially for applications like cloud gaming, connected vehicles and unmanned aerial vehicles.

“A data request to an out-of-country, even out-of-region, datacentre can take 1,000ms [milliseconds], and some applications can’t tolerate such long latencies,” he explains. There is also a need to enhance “east-west” links, the traffic that moves data between applications, third-party apps and content within the datacentre before it is sent to the requestor.

“Exacerbating this are the soon-to-be available commercial 5G networks, which will introduce advanced features such as network slicing and require very low latency.”
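For a sense of why distance alone pushes latency up, a back-of-the-envelope sketch helps (the distances and categories are illustrative assumptions, not ABI Research figures):

    # Rough lower bound on round-trip time from fibre propagation alone:
    # light travels through optical fibre at roughly 200 km per millisecond.
    FIBRE_KM_PER_MS = 200.0

    def min_rtt_ms(distance_km: float) -> float:
        """Round-trip propagation delay over fibre, ignoring routing and queueing."""
        return 2 * distance_km / FIBRE_KM_PER_MS

    for label, km in [("metro edge site", 50),
                      ("in-country datacentre", 500),
                      ("out-of-region datacentre", 8000)]:
        print(f"{label}: at least {min_rtt_ms(km):.1f}ms before any processing overhead")

    # DNS lookups, TCP/TLS handshakes, queueing and server time come on top of this,
    # which is how an out-of-region round trip can stretch towards the figure Saunders cites.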

Saunders predicts that datacentre operators will soon collaborate with web-scale providers to cultivate a hybrid approach, one that combines edge computing at a central office, localised street cabinet or base station for low latency with access to highly cost-optimised regional datacentres.

Vertiv’s Gaunt agrees. Citing a survey of its customers, Gaunt says, on average, 32% of data is stored in local datacentres today, but facilities of the future will increasingly shift to the edge.

As governments in the region pursue smart city and other digitisation initiatives, Gaunt says, state and local use cases that deal with large datasets, along with industrial applications that have real-time components, will need edge computing resources and infrastructure.

“With the roll-out of more 5G technologies, there will be a widespread shift from local datacentres to cloud and edge. We will also begin to see self-configuring and self-healing facilities become a significant part of the datacentre landscape.

“With data processing done closer to the source, companies can speed up deployment times and adopt new concepts such as prefabricated datacentres to meet enterprise needs.”
