
Four key need-to-knows about CXL

Compute Express Link will pool multiple types of memory and so allow much higher memory capacities and the possibility of rapidly composable infrastructure to meet the needs of varied workloads

A change is coming to datacentre architectures.

That is the likely result of the Compute Express Link (CXL) interconnect, which will allow memory to be pooled for use by compute nodes. It promises to enable composable infrastructure in ways previously unseen and to ease the bottleneck that has, to some extent, constrained performance between processing and working memory.

Here are four key need-to-knows about CXL, its benefits, the workloads it will enable, what is under the hood and the likely changes it will bring to the datacentre.

What is CXL?

CXL is an open standard interconnect that connects memory to processing in servers and storage.

Its big advantage over existing ways of doing things is that it potentially allows pools of memory to be created that provide much greater capacity than is currently available.

It also allows pools of memory to be created that are made up of multiple suppliers’ products and that can be connected directly to CPUs, GPUs and DPUs, as well as to smart NICs and computational storage.

That is because CXL is an open standard, designed as a universal interconnect for memory, that will allow a pool to act as cache for working datasets.

In terms of product and technology evolution, it has arisen in the context of storage-class memory (SCM) failing to meet the needs of the growing number of processing nodes. The difficulty of making SCM work across those nodes opened a gap for an easier way to build pools of memory. CXL offers that capability.

Benefits of CXL, and workloads

CXL eliminates proprietary interconnects between memory, storage-class memory, compute and storage. That means numerous processors can share much larger memory pools than has previously been the case.

The key takeaway is that CXL allows potentially large stores of memory to be composed and tailored to workloads.

That makes CXL a likely candidate for any workload that would benefit from having large amounts of application data in-memory. That can range from transactional processing through to analytics/artificial intelligence/machine learning.

CXL is likely to be attractive to cloud providers and hyperscale operators where rapid provisioning and scaling is required.

CXL memory expansion allows capacity and bandwidth beyond what is possible from the DIMM slots in the hardware. It allows memory to be added to a host CPU through a CXL-attached device, so if that device carries persistent memory, for example, the CPU can use it in conjunction with DRAM.
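
As an illustration of how that can look to software, the sketch below assumes a Linux host on which a CXL-attached memory device is exposed as a CPU-less NUMA node (the node number 1 is hypothetical) and uses libnuma to place an allocation on it explicitly. It is a minimal sketch under those assumptions, not a CXL-specific API.

  /* Minimal sketch: allocate a working buffer on a CXL-attached memory
   * device, assuming the Linux kernel exposes it as a CPU-less NUMA node.
   * The node number (1) is hypothetical; check `numactl --hardware`.
   * Build with: gcc cxl_alloc.c -lnuma
   */
  #include <numa.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      if (numa_available() < 0) {
          fprintf(stderr, "NUMA is not available on this system\n");
          return 1;
      }

      int cxl_node = 1;                 /* hypothetical CXL memory node ID */
      size_t size = 1UL << 30;          /* 1 GiB working buffer */

      /* Bind this allocation to the CXL-backed node; DRAM on node 0
       * remains available to the rest of the application as usual. */
      void *buf = numa_alloc_onnode(size, cxl_node);
      if (buf == NULL) {
          fprintf(stderr, "numa_alloc_onnode failed\n");
          return 1;
      }

      memset(buf, 0, size);             /* fault the pages in on that node */
      printf("Allocated %zu bytes on NUMA node %d\n", size, cxl_node);

      numa_free(buf, size);
      return 0;
  }

Depending on kernel configuration, such a node can also be used transparently as ordinary system RAM or as a lower memory tier, so explicit placement is only one option.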

CXL technology under the hood

CXL has reached version 2.0 and builds on PCIe 5.0.

It comprises three component protocols:

  • CXL.io, which is similar to PCIe 5.0 and handles initiation, link-up and device discovery.
  • CXL.cache, an optional protocol that defines interactions between host and device so the device can coherently cache host memory.
  • CXL.mem, also an optional protocol, which gives host processors direct access to device-attached memory.

The three protocols can be combined in different ways to give different types of device access to pooled memory and, where required, to additional device memory.

These combinations allow, for example: specialised accelerators with no local memory (such as smart NICs) to access host CPU memory; general-purpose accelerators (GPU, ASIC, FPGA) to access host CPU memory as well as their own; and the host CPU to access memory expansion devices or storage-class memory.
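
Those three cases correspond to what the CXL specification calls Type 1, Type 2 and Type 3 devices. The sketch below is purely illustrative rather than any real driver or library interface; it simply records which protocols each device type combines.

  /* Illustrative only: which CXL protocols each device type combines.
   * This mirrors the CXL specification's Type 1/2/3 classification;
   * it is not a real driver or library interface. */
  #include <stdio.h>

  enum cxl_protocol {
      CXL_IO    = 1 << 0,   /* mandatory: discovery, link-up, I/O */
      CXL_CACHE = 1 << 1,   /* device coherently caches host memory */
      CXL_MEM   = 1 << 2,   /* host accesses device-attached memory */
  };

  struct cxl_device_type {
      const char *name;
      const char *example;
      unsigned int protocols;   /* bitmask of enum cxl_protocol */
  };

  static const struct cxl_device_type types[] = {
      { "Type 1", "smart NIC, no local memory",      CXL_IO | CXL_CACHE },
      { "Type 2", "GPU/FPGA/ASIC with local memory", CXL_IO | CXL_CACHE | CXL_MEM },
      { "Type 3", "memory expander / SCM module",    CXL_IO | CXL_MEM },
  };

  int main(void)
  {
      for (size_t i = 0; i < sizeof(types) / sizeof(types[0]); i++) {
          const struct cxl_device_type *t = &types[i];
          printf("%s (%s): io%s%s\n", t->name, t->example,
                 (t->protocols & CXL_CACHE) ? " + cache" : "",
                 (t->protocols & CXL_MEM)   ? " + mem"   : "");
      }
      return 0;
  }

A Type 3 memory expander, for instance, has no need of CXL.cache, since it does not cache host memory itself.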

CXL brings an architecture shift

Servers (and storage) have for decades been built with their own memory on board. But the datacentre hardware model is shifting away from each node having its own processing and memory.

What is emerging is a disaggregated architecture that matches resources to workloads, with pooled memory potentially serving multiple CPUs, GPUs and DPUs, as well as smart NICs working on data in motion and computational storage working on data close to bulk storage.

That means datacentre infrastructure can be composed with memory pools of multiple tens of terabytes, avoiding storage I/O to external capacity. Some have likened this shift to the one that occurred when Fibre Channel enabled SANs to provide external shared storage capacity for servers from the early 1990s.

