
Containers will oust VMs and storage arrays, says Red Hat

Red Hat launches storage delivered via containers and predicts a future in which costly and inflexible storage hardware and pricey hypervisors will be a thing of the past

The traditional storage array will become a thing of the past – and so will the virtual machine and hypervisor. Costly, cumbersome and lacking the flexibility to support the unpredictable and “bursty” requirements of applications in the cloud era, they will be replaced by containers.

That is the view of Red Hat, which earlier this month launched Container-Native Storage 3.6 for Red Hat OpenShift Container Platform 3.6, its distribution of the Kubernetes container orchestration software.

In OpenShift Container Platform, customers can deploy applications via containers, rapidly scaling instances up or down according to workload requirements.
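
As a rough illustration of that kind of scaling operation – not an example from Red Hat’s documentation – the sketch below uses the official Kubernetes Python client to change the replica count of a deployment. The deployment name, namespace and replica count are illustrative assumptions.

# Minimal sketch: scaling a containerised application with the official
# Kubernetes Python client ("pip install kubernetes"). The deployment name,
# namespace and replica count are hypothetical, for illustration only.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the Scale subresource of a Deployment to the requested replica count."""
    config.load_kube_config()  # reads the local kubeconfig (e.g. set up by `oc login`)
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # Hypothetical example: scale a web front end up to meet a burst in traffic
    scale_deployment("web-frontend", "demo-project", replicas=10)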

Container-Native Storage allows storage for container-based apps to be spun up, run and decommissioned as required to support those applications. It can run on-premise or in the cloud, with service levels set by policy.
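
To make the self-service model concrete, here is a minimal sketch of a developer requesting storage on demand by creating a PersistentVolumeClaim through the Kubernetes Python client. The namespace, claim name, size and storage class name ("glusterfs-storage") are assumptions for illustration; the actual class name and policy depend on how Container-Native Storage has been configured in the cluster.

# Minimal sketch: dynamic storage provisioning via a PersistentVolumeClaim,
# using the official Kubernetes Python client. All names and sizes below are
# hypothetical examples, not values taken from Red Hat's documentation.
from kubernetes import client, config

def request_storage(name: str, namespace: str, size: str, storage_class: str) -> None:
    """Create a PVC; a matching volume is provisioned on demand by the storage class."""
    config.load_kube_config()
    core = client.CoreV1Api()
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name=storage_class,
            resources=client.V1ResourceRequirements(requests={"storage": size}),
        ),
    )
    core.create_namespaced_persistent_volume_claim(namespace=namespace, body=pvc)

if __name__ == "__main__":
    # Hypothetical example: an application asks for 5Gi backed by a Gluster-based class
    request_storage("app-data", "demo-project", "5Gi", "glusterfs-storage")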

New features in Container-Native Storage include support for block storage (via iSCSI) and object storage.

Containers – such as Docker – are emerging as an alternative to server virtualisation. They are effectively software-defined server/storage, offering a lighter-weight form of virtualisation that runs directly on the operating system.

Containers dispense with the hypervisor layer, as well as the data that is often duplicated across virtual machine images.

Containers excel where rapid scaling – to serve a spike in web requests, for example – is met by the creation, use and retirement of many containers managed by an automated orchestration platform.

In Red Hat OpenShift Container Platform, this demand is met by storage containers underpinned by Red Hat’s Gluster scale-out file system.

Red Hat marketing manager Irshad Raihan contrasted the world of traditional storage with the efficiencies offered by container-native storage and DevOps modes of working.

He said: “Traditionally, there is a chunk of storage outside the application space and that is requested by the developer according to an educated guess and provisioned by a storage admin in a process that lacks visibility and can take a long time. Traditional storage is built for steadily-increasing application data.”

Raihan added: “With containers, it’s a different game. The application and the storage – or actually many instances of them – can be managed from the same platform. The Kubernetes layer orchestrates their growth, operation and decommissioning. We are moving from a world of hundreds of thousands of VMs to one of hundreds of thousands of containers.”

According to Raihan, applications will also be increasingly deployed in containers because it is cheaper. “There are lots of cost efficiencies to be gained,” he said. “You can remove the cost of the hypervisor and the guest operating system.

“People have gone from P to V [physical to virtual] and are now going V to C [virtual to container]. And you can move from proprietary storage hardware to x86 servers. People want to be able to use heterogeneous hardware and the latest components and not have to wait for the supplier to incorporate them into its products.”
