NGX: Turkish arrays to challenge US mainstream storage vendors

Ankara-based NGX offers block, file and unified storage that uses cache and RAM to speed access. Software-defined, cloud and NVMe-over-fabrics versions are also on the way

Turkish array maker NGX, which provides file and block access in its hardware and has cloud, software-defined storage and NVMe-over-fabrics offerings in the pipeline, is positioning itself as “an alternative to NetApp, Vast Data, Infinidat and other US vendors”.

NGX offers storage array products for a variety of performance levels and workloads. Bulk storage sits on spinning disk, with data passing through several faster tiers – SSD, Intel Optane, RAM – whose mix can vary according to application requirements.
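
NGX does not publish the details of its tiering policy, but conceptually a read can be pictured as a lookup that falls through the fastest tiers first before touching disk. The following Python sketch illustrates the idea; the tier names, promotion rule and data structures are invented for the example, not taken from NGX.

```python
# Hypothetical illustration of a read falling through cache tiers
# (RAM -> Optane -> SSD -> HDD). Tier names and the promotion rule
# are invented for the example; NGX's real policy is not public.

TIERS = ["ram", "optane", "ssd", "hdd"]  # fastest to slowest

def read_block(block_id, caches, hdd_store):
    """Return a block, probing each cache tier before bulk HDD storage."""
    for tier in TIERS[:-1]:
        if block_id in caches[tier]:
            return caches[tier][block_id]
    data = hdd_store[block_id]        # bulk storage on spinning disk
    caches["ssd"][block_id] = data    # promote on read; the real criteria
    return data                       # would depend on the workload profile
```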

“The banks use our products as SAN for databases and as NAS for their developers,” said Ali Kemal Yurtseven, director general of NGX. “Hosting providers use block mode for VM disk images and file for video storage. Research institutes use file mode for their VMs in OpenStack and block for volumes shared between compute nodes in Lustre.”

“We won’t pretend we’re cheaper than the US vendors,” said Yurtseven during a recent IT press tour event. “But our production is local, reactive, and without supply problems to European customers. We are open to requests for customisation and we think our arrays are among the easiest to use. They can be operational five minutes after unpacking.”

Intensive use of cache

NGX arrays are modularised. On the network, controller nodes share access and handle data transport to and from storage nodes via RoCE Ethernet or InfiniBand switching. From the controllers’ point of view, storage nodes look like their own internal drives, and comprise SSD and/or HDD capacity.
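
A toy model of that layout, with class and field names that are purely illustrative, shows the separation: controllers front the network, and the drives exported by storage nodes over the fabric are presented as if they were local.

```python
# Toy model of the modular layout. Names and structures are illustrative
# only; NGX has not published its internal object model.

from dataclasses import dataclass, field

@dataclass
class StorageNode:
    fabric: str                                   # "RoCE" or "InfiniBand"
    drives: list = field(default_factory=list)    # SSD and/or HDD capacity

@dataclass
class Controller:
    nodes: list = field(default_factory=list)

    def visible_drives(self):
        # From the controller's point of view, every remote drive
        # exported by a storage node looks like an internal one.
        return [d for node in self.nodes for d in node.drives]
```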

The architecture allows each component to be scaled independently according to need. You can add controllers to maximise parallel access, add shelves of SSD to boost speed, or add HDDs to maximise capacity. According to NGX, it is possible to grow to 20PB of capacity with only two controllers deployed. Each controller transmits data over the network at 9GBps with millisecond latency, via 100Gbps Ethernet or 32Gbps Fibre Channel.
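
As a back-of-the-envelope check on those figures, 9GBps per controller fits within the roughly 12.5GBps of raw payload on a single 100Gbps Ethernet port, and a two-controller cluster fronting 20PB would take around two weeks to read the whole estate at full speed. The quick calculation below is our extrapolation from the quoted numbers, not an NGX specification.

```python
# Back-of-the-envelope check on the quoted figures. The full-scan
# extrapolation is ours, not an NGX claim.

controllers = 2
per_controller_bw = 9e9              # 9 GBps per controller
capacity_bytes = 20e15               # 20 PB maximum quoted capacity

ethernet_100g_payload = 100e9 / 8    # ~12.5 GBps raw on one 100GbE port
assert per_controller_bw < ethernet_100g_payload

aggregate = controllers * per_controller_bw            # 18 GBps
full_scan_days = capacity_bytes / aggregate / 86_400   # ~13 days
print(f"{aggregate/1e9:.0f} GBps aggregate, ~{full_scan_days:.0f} days to read 20PB")
```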

The controllers make intensive use of their RAM and Optane modules to store metadata that allows for rapid access to data blocks. There isn’t one volume per disk or SSD in a shelf. Instead, data is striped across a number of shelves to cut down on latency. Modifying data in a block writes a new block in free space; the existing block is erased later, when sufficient compute is available.
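
That write pattern is essentially a redirect-on-write scheme: an update never overwrites data in place, it lands in free space and the superseded block is reclaimed later. A simplified sketch follows, assuming a flat block map; the real metadata sits in RAM and Optane, and the allocator and eraser here are stand-ins.

```python
# Simplified redirect-on-write: an update allocates a new block in free
# space, repoints the metadata, and defers erasing the old block until
# spare cycles are available. Structures are illustrative only.

class BlockStore:
    def __init__(self):
        self.block_map = {}        # logical address -> physical block
        self.pending_erase = []    # superseded blocks awaiting reclamation

    def write(self, logical_addr, data):
        new_block = self._allocate_free_block(data)   # never overwrite in place
        old_block = self.block_map.get(logical_addr)
        self.block_map[logical_addr] = new_block
        if old_block is not None:
            self.pending_erase.append(old_block)      # erased later

    def reclaim_when_idle(self):
        while self.pending_erase:
            self._erase(self.pending_erase.pop())

    def _allocate_free_block(self, data):
        return data                # stand-in for a real allocator

    def _erase(self, block):
        pass                       # stand-in for physical reclamation
```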

Other controller functionality includes compression, data deduplication and thin provisioning, all performed in real time, as well as snapshots.
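
Inline deduplication of that kind typically boils down to fingerprinting each incoming block and keeping only one physical, compressed copy per fingerprint. The minimal sketch below shows the principle only; NGX has not described its actual implementation.

```python
import hashlib
import zlib

# Minimal inline dedup + compression on the write path: one physical,
# compressed copy per unique block fingerprint. Purely illustrative.

store = {}       # fingerprint -> compressed block
refcounts = {}   # fingerprint -> number of logical references

def write_block(data: bytes) -> str:
    fp = hashlib.sha256(data).hexdigest()
    if fp not in store:
        store[fp] = zlib.compress(data)      # compress only new unique blocks
    refcounts[fp] = refcounts.get(fp, 0) + 1
    return fp                                # the logical map keeps the fingerprint

def read_block(fp: str) -> bytes:
    return zlib.decompress(store[fp])
```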

Storage nodes comprise HDDs with cache on SAS-connected SSDs, while SSD bulk storage uses NVMe for cache. Writes pass from one cache to the other while the system evaluates which blocks need to stay on which type of storage, according to the application and the likelihood that those blocks will need to be re-read soon.
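
The placement decision described there amounts to a heat heuristic: blocks judged likely to be re-read soon stay in the faster cache, the rest are destaged to bulk storage. The sketch below is a hedged guess at such a policy; the scoring and thresholds are invented, and NGX's real heuristic is not documented.

```python
import time

# Toy destaging policy: recently or frequently read blocks stay in the
# SSD/NVMe cache, cold blocks are flushed to bulk storage. Scoring and
# threshold are invented for illustration.

HOT_THRESHOLD = 2.0   # arbitrary score above which a block stays cached

def heat(stats, now=None):
    now = now or time.time()
    age = now - stats["last_read"]
    return stats["read_count"] / (1.0 + age)

def destage(cache, bulk, now=None):
    for block_id, entry in list(cache.items()):
        if heat(entry["stats"], now) < HOT_THRESHOLD:
            bulk[block_id] = entry["data"]   # likely cold: move to bulk storage
            del cache[block_id]
```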

Coming soon: Software-defined, NVMe-over-fabrics and object

Since being founded in 2015 in Ankara, NGX has sold 200 clusters and its annual turnover has come close to €20m.

NGX has a number of projects in the pipeline. It plans to offer its storage solution in software format, potentially usable in the cloud. “From the start of 2023, you will find us on AWS, Amazon or Google,” said Yurtseven.

Later this year, NGX will market a version of its product with NVMe-over-fabrics capability, most likely arriving first as NVMe-over-TCP for Ethernet deployments and then over Fibre Channel. The implementation, based on Intel-supplied libraries, will also work with CXL drivers that will allow NGX clusters to access cache memory elsewhere on the network.

Finally, integrated object storage functionality is also planned. For now, NGX array deployments that need S3 access use S3 gateways from MinIO.
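
In practice that means pointing a standard S3 client at the MinIO gateway’s endpoint rather than at the array itself. The example below uses boto3; the endpoint URL, bucket name and credentials are placeholders, not details of any real deployment.

```python
import boto3

# Accessing an NGX-backed deployment through a MinIO S3 gateway: any S3
# client works, it just needs the gateway's endpoint. The endpoint URL,
# credentials and bucket below are placeholders.

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio-gateway.example.local:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(Bucket="backups", Key="vm-image.qcow2", Body=b"...")
print(s3.list_objects_v2(Bucket="backups")["KeyCount"])
```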
