There is no shortage of mysteries in the datacentre: unknown influences undermine the performance and consistency of these environments while remaining difficult to identify, quantify, and control.
One such mystery in the modern virtualised datacentre is known as the “working set.” This term has historical meaning in the computer science world, but its practical definition has evolved to include other components of the datacentre, particularly storage.
What is a working set?
The term refers to the amount of data a process or workflow uses in a given time period. Think of it as hot, commonly accessed data within the overall persistent storage capacity.
But that simple explanation leaves a handful of terms that are difficult to qualify and quantify.
For example, does “amount” mean reads, writes, or both? Does it include the same data written over and over again, or only new data?
There are a few traits of working sets that are worth reviewing. These are:
•Working sets are driven by the applications generating the workload, and by the virtual machines (VMs) they run on. Whether the persistent storage is local, shared, or distributed doesn’t matter from the perspective of how the VMs see it.
•They are always related to a time period, but activity is a continuum, so there will be cycles in the data activity over time.
•They comprise both reads and writes. The proportion of each is important to know because reads and writes have different characteristics, and demand different things from the storage system.
•They change as your workloads and datacentre evolve; working sets are not static.
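To make the read/write distinction above concrete, here is a minimal sketch. The trace format (a list of `(op, block)` tuples) is invented for illustration; real data would come from hypervisor or storage instrumentation:

```python
# Hypothetical I/O trace: each entry is (op, block_address),
# where "R" = read and "W" = write. This format is illustrative
# only, not the output of any real tool.
trace = [
    ("R", 100), ("R", 101), ("W", 200),
    ("R", 100), ("W", 200), ("W", 201),
]

# Unique blocks touched by each operation type.
read_set = {blk for op, blk in trace if op == "R"}
write_set = {blk for op, blk in trace if op == "W"}

print(len(read_set))              # 2 unique blocks read
print(len(write_set))             # 2 unique blocks written
print(len(read_set | write_set))  # 4 blocks in the combined working set
```

Note that repeated accesses to block 100 and block 200 do not inflate the sets: a working set counts unique data touched, not total I/O operations.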
If a working set is always related to a period of time, then how can we ever define it? Well, a workload often has a period of activity followed by a period of rest.
This is sometimes referred to as the “duty cycle.” A duty cycle might be the pattern that shows up after a day of activity on a mailbox server, an hour of batch processing on a SQL server, or 30 minutes of compiling code.
Working sets can be defined at whatever time increment is desired, but the goal in calculating a working set should be to capture, at minimum, one or more duty cycles of each individual workload.
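One simple way to approximate this is to count unique block addresses per fixed time window and take the peak across windows, choosing the window so it spans at least one full duty cycle. The sketch below assumes a hypothetical trace of `(timestamp, block)` accesses:

```python
from collections import defaultdict

def working_set_per_window(trace, window_s):
    """Count unique block addresses touched in each time window.

    trace: iterable of (timestamp_seconds, block_address) tuples.
    window_s: window length in seconds (sized to cover a duty cycle).
    """
    windows = defaultdict(set)
    for ts, block in trace:
        windows[int(ts // window_s)].add(block)
    return {w: len(blocks) for w, blocks in sorted(windows.items())}

# Toy trace: a busy first hour followed by a quiet second hour,
# i.e. one simple duty cycle. All numbers are made up.
trace = [(t, t % 500) for t in range(0, 3600, 10)]   # active period
trace += [(t, 42) for t in range(3600, 7200, 600)]   # mostly idle

sizes = working_set_per_window(trace, window_s=3600)
peak = max(sizes.values())   # the busy window dominates the estimate
```

The peak window, not the average, is what matters for sizing: a cache or tier sized to the quiet hour will be overrun during the busy one.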
Why it matters
Determining a working set size helps you understand the behaviours of your workloads, paving the way for a better designed, operated, and optimised environment.
For the same reason you pay attention to compute and memory demands, it is also important to understand storage characteristics, which include working sets.
Therefore, understanding and accurately calculating working sets can have a profound effect on a datacentre’s consistency. For example, have you ever heard about a real workload performing poorly, or inconsistently on a tiered storage array, hybrid array, or hyperconverged environment?
Not accurately accounting for working set sizes of production workloads is a common reason for such issues.
The hypervisor is the ideal control plane for measuring a lot of things, with storage I/O latency being a great example of that.
What matters is not the latency a storage array advertises, but the latency the VM actually sees. So why not extend the functionality of the hypervisor kernel so that it provides insight into working set data on a per-VM basis?
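A per-VM view could be sketched as follows. The trace format and VM names are invented for illustration; real per-VM data would have to come from hypervisor-level instrumentation:

```python
from collections import defaultdict

# Hypothetical per-VM I/O trace: (vm_name, op, block_address).
trace = [
    ("sql01", "R", 10), ("sql01", "W", 11), ("sql01", "R", 10),
    ("mail01", "R", 20), ("mail01", "R", 21), ("mail01", "W", 22),
]

# Accumulate the unique blocks each VM touches.
per_vm = defaultdict(set)
for vm, op, block in trace:
    per_vm[vm].add(block)

# Working set size per VM, in unique blocks.
sizes = {vm: len(blocks) for vm, blocks in per_vm.items()}
```

Aggregating at the VM boundary is what makes the later chargeback/showback and cache-sizing decisions possible, since it shows which guests actually drive the hot data.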
Then, once you’ve established the working set sizes of your workloads, you can start taking corrective action and optimising your environment.
For example, you can:
•Properly size your top-performing tier of persistent storage in a storage array
•Size the flash and/or RAM on a per-host basis correctly to maximise the offload of I/O from an array
•Look at the writes committed in the working set estimate to gauge how much bandwidth you might need between sites, which is useful if you are looking at replicating data to another datacentre
•Learn how much of a caching layer might be needed for your existing hyperconverged environment
•Identify the heavy consumers of your environment, which fits nicely into a chargeback/showback arrangement
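As a rough illustration of the replication-bandwidth point above, here is a back-of-envelope calculation. All figures are hypothetical examples, and a real sizing exercise would also need change rates, deduplication/compression ratios, and peak versus average behaviour:

```python
# Back-of-envelope replication bandwidth estimate.
# Both inputs below are invented example figures, not measured values.
write_working_set_gib = 40   # unique data written per duty cycle
duty_cycle_hours = 8         # e.g. one business day of activity

bytes_to_move = write_working_set_gib * 2**30
seconds = duty_cycle_hours * 3600

# Sustained link speed needed to replicate the writes within one cycle.
required_mbps = bytes_to_move * 8 / seconds / 1e6

print(round(required_mbps, 1))  # ≈ 11.9 Mbit/s sustained
```

Note this is a sustained average: bursty write patterns within the duty cycle would need either more headroom on the link or a replication scheme that buffers and smooths the transfer.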
Determining working set sizes is a critical part of the overall operation of your environment. A detailed understanding of working set sizes helps you make smart, data-driven decisions. Good design equals predictable, consistent performance, and paves the way for better datacentre investments.