
Application workload issues = flies in the ointment for storage performance

Buying a storage array and deploying it to support your apps might seem a relatively straightforward task, especially now that flash storage allows us to throw IOPS at workloads.

But there are several key pitfalls in which application issues can affect storage performance, according to Len Rosenthal, CMO of Virtual Instruments, which recently announced Workload Central, a cloud-based service that provides free workload analysis and a library of sample test workloads for users in the test and development stage of application/storage deployment.

1. Block/file size distribution. Some applications are perceived to operate at a single block size – 1KB, 6KB, 8KB and so on – but actually use a variety of sizes. Workloads can also start out at mostly one block size and then shift over time. This matters because storage, and flash storage in particular, is often optimised for a particular block size, and when the two go out of kilter, latency can suffer.
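Whether that is happening can be checked from an I/O trace. As a rough illustration only – not a Virtual Instruments tool – here is a minimal Python sketch that assumes a hypothetical CSV trace with timestamp, offset and size columns and reports the block-size mix:

# block_size_mix.py - summarise the I/O size distribution in a trace.
# Assumes a hypothetical CSV trace, one I/O per line: timestamp,offset,size_bytes
import csv
from collections import Counter

def block_size_mix(trace_path):
    sizes = Counter()
    with open(trace_path, newline="") as f:
        for ts, offset, size in csv.reader(f):
            sizes[int(size)] += 1
    total = sum(sizes.values())
    # Report each block size as a share of all I/Os, largest share first
    for size, count in sizes.most_common():
        print(f"{size // 1024:>6} KB : {100 * count / total:5.1f}% of I/Os")

if __name__ == "__main__":
    block_size_mix("io_trace.csv")  # path to the assumed trace file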

2. Compression and data deduplication issues. These technologies are also often optimised by particular vendors around a particular block size. It’s not that they cannot handle other block sizes, but they will need to apply more CPU to processing them, and this has a knock-on effect on latency.

“In some cases,” says Rosenthal, “you can fix block size issues in the app – for example in databases, by setting the block size. Also, application developers can write to a certain block size.”

“Also, during procurement, some customers are stuck on one vendor, while others are open to new technologies and test three or four systems to see which is best for the workload,” he adds.
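To see why block size matters for compression overhead, here is a purely illustrative Python sketch using zlib on synthetic data. Real array-side compression engines behave differently, but the general trade-off – smaller chunks cost more CPU per byte and compress less well – is similar:

# compress_cost.py - rough look at how chunk (block) size affects compression cost.
# Purely illustrative; the data and chunk sizes are made up for the example.
import time
import zlib

data = b"some fairly repetitive application data " * 4096  # ~160 KB of compressible bytes

for chunk_kb in (4, 8, 32, 128):
    chunk = chunk_kb * 1024
    start = time.perf_counter()
    compressed = 0
    for i in range(0, len(data), chunk):
        compressed += len(zlib.compress(data[i:i + chunk]))
    elapsed = time.perf_counter() - start
    print(f"{chunk_kb:>4} KB chunks: ratio {len(data) / compressed:4.2f}, "
          f"{elapsed * 1000:6.2f} ms CPU")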

3. Spatial layout of the workload. Where data is written can change over time. This is particularly the case with spinning-disk HDDs, but flash drives can also be subject to hot spots. These can lead to bottlenecks, and thus latency, as well as uneven drive wear.

“This can be a little more difficult to deal with,” says Rosenthal. “There’s nothing that can be done when writing the app, as it has to do with how users are accessing the systems. Usage patterns can vary over time and are hard to predict.”
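One way to spot hot spots is to bin the offsets in an I/O trace into regions of the address space and see which regions take a disproportionate share of I/Os. A minimal sketch, again assuming the hypothetical CSV trace format above:

# hot_spots.py - bin I/O offsets into regions of the address space to spot hot spots.
# Assumes the same hypothetical CSV trace: timestamp,offset,size_bytes
import csv
from collections import Counter

def hottest_regions(trace_path, capacity_bytes, bins=100, top=5):
    bin_size = capacity_bytes // bins
    hits = Counter()
    with open(trace_path, newline="") as f:
        for ts, offset, size in csv.reader(f):
            hits[int(offset) // bin_size] += 1
    total = sum(hits.values())
    for region, count in hits.most_common(top):
        start_gb = region * bin_size / 1024**3
        print(f"region starting at {start_gb:7.1f} GB: {100 * count / total:5.1f}% of I/Os")

if __name__ == "__main__":
    hottest_regions("io_trace.csv", capacity_bytes=2 * 1024**4)  # e.g. a 2TB volume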

4. The random/sequential mix. Storage systems can be deployed to suit, for example, sequential access, but over time workloads with a larger proportion of random access can arise. The most obvious example is server and desktop virtualisation, with the so-called I/O blender effect – effectively an I/O bottleneck caused by extreme randomisation.

“Again, this can be hard to predict and is a result of workloads and access patterns changing over time. The key thing is to be able to spot that it is happening by monitoring systems,” says Rosenthal.
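Monitoring for this shift can be as simple as classifying each I/O as sequential or random and watching the mix over time. A rough sketch, using the same assumed trace format, that treats an I/O as sequential if it starts where the previous one ended:

# rand_seq_mix.py - classify each I/O as sequential or random and report the mix.
# Assumes the same hypothetical CSV trace: timestamp,offset,size_bytes
import csv

def random_sequential_mix(trace_path):
    sequential = random = 0
    next_expected = None
    with open(trace_path, newline="") as f:
        for ts, offset, size in csv.reader(f):
            offset, size = int(offset), int(size)
            if next_expected is not None and offset == next_expected:
                sequential += 1
            else:
                random += 1
            next_expected = offset + size
    total = sequential + random
    if total == 0:
        return
    print(f"sequential: {100 * sequential / total:5.1f}%  random: {100 * random / total:5.1f}%")

if __name__ == "__main__":
    random_sequential_mix("io_trace.csv")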

5. Metadata and data issues. Over time the proportion of metadata to data can grow as, for example, different types of workload are run from servers. And the more metadata there is, the slower things go, with a lot of overhead passing back and forth.

“NFS in general is notorious for high metadata traffic, but hosting a web server will create a significantly higher amount of metadata traffic, especially if you use something like PHP in your web server,” says Rosenthal.
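A quick way to gauge the metadata-to-data balance is to split operation counts – for NFS, something like the per-operation counters that nfsstat reports – into metadata and data operations. A sketch with made-up numbers:

# nfs_metadata_ratio.py - split NFS operation counts into metadata vs data traffic.
# The counts below are hypothetical; in practice they might come from nfsstat output.
METADATA_OPS = {"getattr", "setattr", "lookup", "access", "readdir", "readdirplus", "fsstat"}
DATA_OPS = {"read", "write"}

op_counts = {  # hypothetical per-interval counters
    "getattr": 48210, "lookup": 22780, "access": 19940, "readdir": 3120,
    "read": 15400, "write": 6200,
}

metadata = sum(n for op, n in op_counts.items() if op in METADATA_OPS)
data = sum(n for op, n in op_counts.items() if op in DATA_OPS)
print(f"metadata ops: {metadata}  data ops: {data}  "
      f"ratio: {metadata / data:4.1f} metadata ops per data op")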
