Cutting containerised analytics times from weeks to less than a day – that’s the claim from Pure Storage, which says one of its customers has done just that using the new PX-Fast functionality that came as part of the upgrade to Portworx 3.0 last month.
PX-Fast is aimed at bringing large increases in storage performance to containerised application deployments for data services such as Kafka, Elastic and MongoDB, as well as transactional and analytics workloads.
Portworx is Pure Storage’s container storage and data protection platform, which can be deployed in on-site, cloud and hybrid modes to deliver storage, data protection and data services (databases, messaging, etc). It incorporates Portworx Enterprise, which allows customers to provision and manage storage for containers.
Meanwhile, PX-Backup offers data protection, and Portworx Data Services brings databases, event handling and messaging platforms.
PX-Fast is essentially a reworking of the back-end code architecture to bring better storage I/O performance, said Venkat Ramakrishnan, vice-president of products at Pure Storage.
“It is taking the Portworx stack to the next level and will bring near bare metal performance. We’re talking millions of storage IOPS in a containerised cluster.”
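In practice, Portworx volumes are provisioned through standard Kubernetes storage classes, so a faster data path would be requested the same way. The sketch below is illustrative only: the provisioner name and parameters (`repl`, `io_profile`, `fastpath`) follow common Portworx conventions, but the exact keys for enabling PX-Fast are an assumption and should be checked against the Portworx 3.0 documentation.

```yaml
# Hypothetical StorageClass sketch for a PX-Fast-backed volume.
# Parameter names are assumptions based on Portworx conventions,
# not confirmed by the article -- verify against Portworx 3.0 docs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-fast-db
provisioner: pxd.portworx.com
parameters:
  repl: "2"            # number of volume replicas
  io_profile: "auto"   # let Portworx tune the I/O path
  fastpath: "true"     # request the optimised PX-Fast data path
```

A database workload would then simply reference this storage class in its PersistentVolumeClaim, with the new back end handling the direct application-to-media path Ramakrishnan describes.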
So, what have been the drivers to the development of PX-Fast?
Key among them are increased customer momentum towards containerised workloads and the need for scale and performance that goes along with that. That, in turn, has pushed Pure Storage to refactor Portworx to make it better suited to discrete hardware accelerators such as data processing units (DPUs).
“Portworx is a scale-out product and until now, if customers wanted to add performance, they added nodes,” said Ramakrishnan. “Also we made trade-offs to bring in thin provisioning, snapshots, and so on. But as customers have tried to pack more containers in, limits have been reached.
“So, we have rewritten the back end. It is essentially a hyper-converged product and there were some inefficiencies in between the application and the media. We had combined control and data paths, but we have improved it to get a more direct path and optimise I/O.
“Everyone who is developing new applications is aggressively moving towards containers. Customers are trying to do more with them, so we are aiming to move towards more use of accelerators like DPUs.”
Ramakrishnan cited a Pure customer – a US ISP – that runs analytics on a petabyte of data. That batch job had taken “a few weeks, but is now delivered in a day with PX-Fast, almost bare metal Kubernetes – it’s near real-time analytics”, he said.
Also, the 3.0 launch brought Portworx Enterprise as a service. This, too, addresses the increasing complexity faced by IT teams that need to deliver containerised workloads.
“It’s for situations where the platform team is managing thousands of container nodes in production, and can be on-prem, cloud or both,” said Ramakrishnan.
“It’s a managed service offer to help customers better deliver production workloads and to offer proactive support.”
Portworx 3.0 also added Near Sync DR, which allows for cross-region business continuity. Portworx already offers Sync DR – billed as zero RPO, though in practice around 10ms – and Async DR, which works across different continents.
Near Sync DR offers an RPO of 24ms between different cloud regions or geographically distant datacentres.
Finally, Portworx 3.0 added the ability to configure an object storage proxy from its control plane. File and block storage are already configurable from Portworx.