Cambridge-based infrastructure provider DC Intelligence (DCI) plans to use Nebulon smartEdge storage in a range of liquid-cooled rack-mounted servers it will offer for high-performance 5G-capable edge deployments, such as manufacturing and rural use cases.
The Nebulon storage will allow DCI to offer what is effectively hyper-converged infrastructure (HCI), but with liquid-cooled rack-mounted 19” blades on Supermicro hardware. Liquid cooling allows servers to be deployed in locations that lack the power and cooling facilities usually expected of a datacentre, making them better suited to edge requirements, particularly in remote areas and heat-intensive regions.
Nebulon provides the Nebulon ON storage control plane, which performs analytics in the Amazon Web Services or Google Cloud Platform public clouds. Meanwhile, media on-site is in the form of PCIe slot-resident Storage Processing Units (SPUs). These offload storage input/output (I/O) processing and management from the server to the Nebulon hardware.
That capability has allowed DCI to offer its DataQube, which is effectively a hyper-converged infrastructure modular datacentre compute and storage node. Its target market is customers that want to be able to deploy to edge locations. The company has grants in the pipeline for delivery of 5G-enabled modular datacentre hardware to agri-tech and manufacturing environments.
Nebulon makes a play for specialised applications that depend on speed and performance. It also targets customers that require ease of use, claiming its technology enables application owners to provision storage without the intervention of a storage administrator.
The SPU replaces RAID cards and storage host bus adapters found in servers. Each SPU has two 25 Gigabit Ethernet (GbE) ports that form the data plane for each application cluster, along with 1 GbE cloud connectivity.
The SPU connects the flash storage residing in each node and emulates to the host the functions of a local storage controller. This local PCIe device only manages host I/O – application, server and storage metrics are offloaded to the Nebulon ON cloud for prescriptive analytics. Enterprises can build a Nebulon cluster, known as an nPod, that scales to 32 servers. Data services are configured in the Nebulon ON cloud portal, including mirrors, snapshots and volumes.
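The nPod model described above – a cluster capped at 32 SPU-equipped servers, with data services such as volumes, mirrors and snapshots defined centrally rather than per-server – can be illustrated with a minimal sketch. This is purely hypothetical Python modelling the concepts in the article; the class and field names are assumptions, not Nebulon's actual API.

```python
from dataclasses import dataclass, field

# Per the article, an nPod cluster scales to 32 servers
NPOD_MAX_SERVERS = 32


@dataclass
class Volume:
    """Hypothetical data service: a volume, optionally mirrored."""
    name: str
    size_gib: int
    mirrored: bool = False


@dataclass
class NPod:
    """Hypothetical model of an nPod: a cluster of SPU-equipped servers
    whose data services are configured centrally (as in the cloud portal)."""
    name: str
    servers: list = field(default_factory=list)
    volumes: list = field(default_factory=list)

    def add_server(self, hostname: str) -> None:
        # Enforce the 32-server scaling limit described in the article
        if len(self.servers) >= NPOD_MAX_SERVERS:
            raise ValueError("an nPod scales to at most 32 servers")
        self.servers.append(hostname)

    def provision_volume(self, name: str, size_gib: int,
                         mirrored: bool = False) -> Volume:
        # Application owners provision storage directly, without a
        # storage administrator in the loop
        vol = Volume(name, size_gib, mirrored)
        self.volumes.append(vol)
        return vol


pod = NPod("edge-pod-1")
for i in range(4):
    pod.add_server(f"node-{i}")
vol = pod.provision_volume("app-data", size_gib=512, mirrored=True)
print(len(pod.servers), vol.mirrored)  # 4 True
```

The point of the sketch is the shape of the model, not the implementation: compute nodes and storage services live in one cluster object, but the storage definitions are managed independently of any one server.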
DCI had been using separate compute and storage nodes but had looked at hyper-converged infrastructure as a means of delivering DataQube, said Chris Ward-Jones, the firm’s chief technology officer.
“The problem with HCI is that you give up a chunk of CPU/RAM for the storage workload,” said Ward-Jones. “Or, do I build separate compute and storage nodes? Then we discovered Nebulon, and saw we could get the benefits of hyper-converged but offloading the storage workload.”
Nebulon provides the storage control plane from the public cloud, while media is resident in the servers. “It’s like a RAID card, I suppose,” said Ward-Jones. “You can build a SAN across nodes.
“The obvious benefit is that we can get more compute and memory in the confines of the physical footprint of the rack. It’s cheaper than software-defined storage and hits the sweet spot between cost and compute/memory density.”
For its broader storage needs DCI is a Pure Storage customer, and it had recently bought around 2PB of capacity from that supplier. Unfortunately, “it doesn’t work well in an immersed [ie, liquid-cooled] solution”, said Ward-Jones.
The DCI chief technology officer summed up the benefits of Nebulon as “lots of bang for your buck. It does storage in the servers, which allows it to be like hyper-converged but with compute independent of the storage stack”.