Kubernetes storage scaling needs to be automated, but there is a lack of community solutions that can meet that need. That’s the view of execs from Percona, which supplies open source software and consulting in the database market.
Percona is also a member of the Data on Kubernetes Community (DoKC), which aims to develop solutions that automate Kubernetes storage scaling.
The key aim of DoKC is to develop a comprehensive community solution to automate storage scaling on Kubernetes, said Sergey Pronin, group product manager at Percona. Some limited solutions already tackle parts of the process, and some of the big vendors have their own management platforms.
He said: “Based on certain thresholds and metrics, storage resources should be automatically scaled to meet the capacity demanded.”
Kubernetes is one of the key container orchestrators in the market. Containers are a form of virtualisation in which the application and all the components needed for its execution run on top of the host server operating system, with only the container runtime engine between the two.
Kubernetes handles functions such as the creation, management, automation and load balancing of containers, as well as their relationship to hardware – including storage. Containers are organised, in Kubernetes-speak, into pods.
Such functionality explains the centrality of containerised environments to cloud-native IT, with their ability to scale operations up and down to meet demand spikes and troughs.
So, a key characteristic of Kubernetes environments is their rapid growth, and shrinkage, as processing and storage demands fluctuate. It is the impact on storage of such potentially sudden and massive changes in resource requirements that the Data on Kubernetes community aims to address.
That’s because there are limits to the extent to which storage can be scaled in Kubernetes, said Pronin, adding: “Kubernetes itself does not automate scaling, but it is very good at providing APIs to execute the scaling operation when it is needed.”
Scaling down is also not possible in Kubernetes, said Pronin: “This limitation comes from the container storage interfaces [CSI] themselves, which in turn are driven by underlying storage capabilities.”
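To illustrate the scaling APIs Pronin mentions: in standard Kubernetes, a volume can only be grown if the CSI driver supports expansion and the StorageClass opts in. A minimal sketch (the name and provisioner below are illustrative placeholders, not any vendor’s actual driver):

```yaml
# Illustrative StorageClass: expansion must be enabled before any
# PVC using it can be grown
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd                 # illustrative name
provisioner: example.csi.vendor.com    # placeholder CSI driver
allowVolumeExpansion: true             # opt-in for volume growth
```

Growing a volume is then a matter of raising the PVC’s `spec.resources.requests.storage` to a larger value; attempting to lower it is rejected by the API, which is the scale-down limitation Pronin describes.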
The ultimate aim is to go further than alerts to IT and development teams when storage starts to run short, said Pronin: “We are working on a fully automated solution, where the tool is going to perform scaling based on certain metrics. This will involve making it possible to scale up to certain levels automatically using those metrics, but if you see other conditions then you would want a human to step in and make that decision.
“There are different models for scaling we should consider too. We are mostly focused on vertical scaling where we add more resources to an existing storage instance, when it might be more beneficial to scale horizontally sometimes, adding a new instance or shard to extend capacity overall.”
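The threshold-driven vertical scaling Pronin describes can be sketched in a few lines. Nothing below is DoKC or Percona code — the function name, thresholds and cap are illustrative assumptions, chosen to show the shape of the decision: grow when usage crosses a threshold, never shrink, and hand off to a human above a cap.

```python
# Illustrative sketch of threshold-based vertical storage scaling,
# as described in the article; all names and numbers are assumptions.

def next_storage_request(used_gib: float, requested_gib: float,
                         scale_up_threshold: float = 0.8,
                         growth_factor: float = 1.5,
                         max_gib: float = 100.0) -> float:
    """Return the new storage request in GiB, or the current request
    if no scaling is needed. Never shrinks, since Kubernetes/CSI does
    not support scaling down; above max_gib, a human is expected to
    decide, so the current value is returned unchanged."""
    usage_ratio = used_gib / requested_gib
    if usage_ratio < scale_up_threshold:
        return requested_gib      # below threshold: leave as-is
    proposed = requested_gib * growth_factor
    if proposed > max_gib:
        return requested_gib      # cap reached: human steps in
    return proposed

# Example: 17 GiB used of 20 GiB requested (85% > 80%) -> grow to 30 GiB
print(next_storage_request(17, 20))  # 30.0
```

A real controller would read the usage metric from monitoring and apply the result by patching the PVC, but the decision logic reduces to something like this.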
Drilling down, the view is – at least from Percona – that a unified solution would work with standard Kubernetes primitives such as persistent volume claims (PVCs) and StatefulSet APIs and with the custom resources of various Kubernetes operators.
At its most basic, storage in Kubernetes is ephemeral (non-persistent). But Kubernetes also supports persistent storage. Persistent volumes (PVs) and PVCs are used to define storage and application requirements. They decouple the storage implementation from its consumption and allow block, file or object storage to be used by a pod in a portable way.
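As a minimal sketch of that decoupling (the claim name and size are illustrative), an application states only its requirements in a PVC and never names a concrete storage backend:

```yaml
# Illustrative PVC: the app asks for 10Gi of read-write storage
# without knowing which backend will provide it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc           # illustrative name
spec:
  accessModes:
    - ReadWriteOnce        # volume mounted read-write by one node
  resources:
    requests:
      storage: 10Gi
```

A pod then references the claim by name in its volume spec, and Kubernetes binds the claim to a matching PV, which is what makes the consumption portable across storage providers.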
Read more about Kubernetes storage
- Kubernetes storage 101: Container storage basics. We look at the basics of creating storage and specifying it for applications in container storage using Kubernetes Persistent Volumes and Persistent Volume Claims.
- Container storage platforms: Big six approach starts to align. Container storage is a complex but vital task. We survey the big six storage makers and see how their methods are starting to align around management platforms.