
A guide to choosing and using all-flash array storage

We take a look at the considerations when devising an all-flash array storage strategy, and what enterprises can do to get the most out of the technology


As more enterprises harness emerging technologies such as the internet of things (IoT) and big data to keep pace with their business, they are often held back by a key component in their datacentres – enterprise storage.

Traditional disk-based media can no longer keep up with the needs of the modern enterprise, which increasingly relies on business-critical software to digest vast amounts of corporate and customer data and make business decisions on the fly.

The big e-commerce companies and investment banks already know this. For years, they turned to high-speed flash storage systems such as all-flash arrays to speed things up, but few others took the plunge because the technology was expensive.

That is no longer the case. According to Phil Hassey, principal advisor at technology research firm Ecosystm, the total cost of ownership (TCO) of all-flash arrays has fallen to the point where it is comparable with that of traditional storage technology, with benefits including a reduced storage footprint and increased automation.

Equally important, the speed of adoption of the technology has accelerated, improving the business case and driving tangible business outcomes. Put simply, decisions are made better, faster and more cost-effectively.

Not surprisingly, the market for all-flash arrays has been growing by leaps and bounds. Research from IDC suggests the market will grow at a compound annual growth rate of 26.2% through 2020 to hit $9.39bn.

All-flash benefits

Traditionally, flash was targeted at mission-critical, latency-sensitive primary workloads. However, as flash costs continue to decline, it is increasingly being used for less latency-sensitive workloads as well, bringing higher infrastructure density and better reliability, says Ted Aravinthan, director for modern datacentre at Dell EMC South Asia.

A key driver behind the adoption of all-flash arrays in Asia-Pacific, says Aravinthan, is greater agility compared with hard disk-based systems: far fewer devices are needed to deliver the required performance, with lower energy and floor space consumption, higher CPU utilisation and better device reliability.

All-flash arrays also allow for integrated copy data management and deduplication that reduces the number of data copies, thus lowering TCO and generating faster response times. Ultimately, all-flash arrays will be the underlying storage media that delivers the efficiencies and cost savings to drive digital business results, says Aravinthan.

Hewlett Packard Enterprise (HPE) – a Dell EMC rival – is, for example, working with a utility company that supplies electricity to a moderately sized population, and for which generating energy just in time is critical to profits. It is inefficient for a utility to store excess energy, so any surplus it generates ends up being “thrown away”.

Jean Paul Bovaird, HPE Asia-Pacific’s general manager of storage and incubation, says the utility company is now using analytics to understand meteorological factors, human factors and mechanical engineering factors to produce just enough energy to satisfy demand (with a small buffer) without a lot of wastage. This has lowered energy bills for its customers, while allowing the company to provide a better service – all of which is powered by all-flash arrays.

Laying the foundation

A typical transformation to all-flash array storage will not require large infrastructural changes, according to John Martin, director of strategy and technology in the CTO office at NetApp Asia-Pacific. That said, Martin points out a few things that enterprises will need to watch out for before making the transition:

  • Network: Enterprises should check their network capability before implementing all-flash array storage products to avoid data bottlenecks. With an increase in density and capacity, the network bandwidth requirements for storage systems will increase – just a handful of solid-state drives (SSDs) can saturate a typical 8Gbps or 16Gbps Fibre Channel link (see the sketch after this list). Enterprises should also check whether their existing Fibre Channel network supports the most efficient ways of accessing NVMe storage over the network; if not, they may need to invest in new network infrastructure to support NVMe over fabrics.
  • Latency: High-end flash storage systems can support latencies as low as 1ms (millisecond). However, applications that require extreme performance may need consistent, sustained response times measured in hundreds, or even tens, of microseconds rather than milliseconds. Such applications do not tolerate latencies that fluctuate because of back-end storage services, or that climb rapidly as the input/output operations per second (IOPS) load increases. To achieve the lowest possible latencies, enterprises should investigate host-side technologies that complement the performance of the flash array.
  • Throughput: Throughput is a measure of the amount of data that can be moved into or out of a storage system, typically reported in megabytes per second (MBps) or gigabytes per second (GBps). Enterprises running throughput-oriented applications should evaluate throughput performance before deciding on an all-flash supplier.
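
To make the bandwidth arithmetic concrete, the short Python sketch below estimates how many SSDs it takes to fill a single Fibre Channel link. The drive throughputs and the rule of thumb of roughly 100MBps of usable bandwidth per 1Gbps of nominal line rate are illustrative assumptions, not figures for any particular product.

```python
# Back-of-the-envelope check (an illustration, not a benchmark): how many
# SSDs does it take to fill one Fibre Channel link? Drive throughputs and
# the ~100MBps-per-Gbps rule of thumb are assumptions, not vendor figures.

FC_EFFICIENCY = 0.97  # rough allowance for protocol overhead


def link_capacity_mbps(speed_gbps: float) -> float:
    """Approximate usable one-way FC bandwidth in MBps."""
    # FC encodings deliver roughly 100 MBps per 1Gbps of nominal line rate
    return speed_gbps * 100 * FC_EFFICIENCY


def drives_to_saturate(link_speed_gbps: float, ssd_read_mbps: float) -> float:
    """Number of SSDs, reading flat out, needed to fill one link."""
    return link_capacity_mbps(link_speed_gbps) / ssd_read_mbps


for link in (8, 16):
    for label, mbps in (("SATA SSD ~500MBps", 500), ("NVMe SSD ~2,000MBps", 2000)):
        n = drives_to_saturate(link, mbps)
        print(f"{link}Gbps FC link: ~{n:.1f} x {label} to saturate")
```

On these assumptions, a single NVMe SSD can outrun an 8Gbps link on its own, which is why upgrading the fabric often goes hand in hand with an all-flash purchase.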

Although some experts have advised enterprises to optimise or even rewrite applications for non-volatile memory to remove code that accounts for the latencies of hard disk drives, HPE’s Bovaird says most enterprises are seeing a four-and-a-half times increase in application performance from all-flash arrays, even without any changes to their applications.

Meng Guangbin, president of Huawei’s IT storage products, agrees, noting that “there is no need to consider too much about applications” as all-flash arrays that replace disk-based storage can double performance without rewriting applications.

Mark Jobbins, vice-president of technical services at Pure Storage Asia-Pacific and Japan, notes that for flash platforms with fixed block sizes, applications may need to be exported and imported to align to the input/output (I/O) transfer size, a common recommendation with online transaction processing (OLTP) databases and virtual infrastructures.

“Do you have to load balance your application across volumes and/or controller nodes? Some flash solutions limit performance on a per-volume or per-node basis and require considerations for application data layout, both today and in the future as an application grows,” he says.

Key considerations

Not all all-flash storage products are made the same – they can vary widely in features and capabilities. For organisations evaluating all-flash storage, NetApp’s Martin suggests the following key considerations:

  • Cloud connectivity: All-flash array storage products should scale from the edge to the core to the cloud, enabling organisations to integrate data seamlessly across multiple clouds.
  • Performance: Seek offerings that can demonstrate consistent, scalable IOPS performance at latencies under 1ms based on third-party benchmarks that simulate real-world workloads, such as SPC-1 and TPC-E. Enterprises should also take claims about “maximum IOPS” with a grain of salt unless a supplier can provide more details.
  • Resiliency and availability: Most all-flash storage systems incorporate some form of redundant array of independent disks (Raid). If an SSD in a Raid group fails, rebuilds happen much more quickly than with hard disks, limiting exposure to a second failure. Always look for suppliers with a well-designed architecture, mature processes, a proven track record of reliability, and first-class support and professional services.
  • Backup and disaster recovery: Organisations should also ensure there is backup and disaster recovery (DR) in place to protect against user errors, bugs in application software, widespread power outages, and other natural and manmade disasters. A mature all-flash storage system should include data protection and disaster recovery features, including snapshots, asynchronous and synchronous replication, application-level integration, and support for an ecosystem of data protection partners.
  • Total cost of ownership: All-flash storage can significantly lower TCO relative to traditional storage systems. A quick TCO comparison of a proposed all-flash array against an organisation’s existing disk-based or hybrid flash systems – as in the sketch below – helps to justify the purchase, along with considerations of effective capacity, application integration, datacentre operational costs, IT management and software licences.
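
As a hedged illustration of such a comparison, the Python sketch below totals acquisition and running costs over five years and divides by effective capacity. Every figure is a placeholder assumption, to be replaced with real quotes, site power costs and measured data reduction ratios.

```python
# A minimal, illustrative five-year TCO comparison between a proposed
# all-flash array (AFA) and an existing disk-based system. All numbers
# are hypothetical placeholders, not real pricing.

def five_year_tco(acquisition: int, annual_power_cooling: int,
                  annual_support: int, annual_admin: int, years: int = 5) -> int:
    """Acquisition cost plus recurring operational costs over the period."""
    return acquisition + years * (annual_power_cooling + annual_support + annual_admin)


def cost_per_effective_tb(tco: int, raw_tb: float, reduction_ratio: float) -> float:
    """TCO divided by capacity after deduplication and compression."""
    return tco / (raw_tb * reduction_ratio)


afa = five_year_tco(acquisition=250_000, annual_power_cooling=4_000,
                    annual_support=20_000, annual_admin=10_000)
disk = five_year_tco(acquisition=150_000, annual_power_cooling=18_000,
                     annual_support=25_000, annual_admin=25_000)

# Assume inline data reduction gives the flash array 4:1 effective capacity
print(f"AFA : ${afa:,} total, ${cost_per_effective_tb(afa, 100, 4.0):,.0f} per effective TB")
print(f"Disk: ${disk:,} total, ${cost_per_effective_tb(disk, 100, 1.0):,.0f} per effective TB")
```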

Pure Storage’s Jobbins notes that cost comparisons can be as simple as comparing acquisition prices, but valuing storage on its price per raw gigabyte, without considering effective capacity and software licensing, will lead to an inevitable rise in maintenance and support costs.

“Customers need to know the total cost of flash over a reasonable time frame – typically five or six years – or they may be in for a big surprise,” he says.

Management tools

Ongoing management simplicity plays a key role in minimising operational costs and reducing the risk of unplanned downtime, says Jobbins, noting that some products using an outdated management model will hamper the strategic goal of bringing a cloud IT model to internal infrastructure.

“Next-generation management models are built using a SaaS [software-as-a-service] approach, eliminating the need to deploy and manage more internal infrastructure. SaaS approaches also ensure that you get access to the latest management and analytics features automatically.

“Therefore, as you consider all-flash storage, evaluate its management model and determine how easy it is to consume and keep current as vendors introduce new innovations,” says Jobbins.

Data compression, deduplication and analytics

High-density SSDs can replace multiple racks of hard disk drives, allowing datacentres to recover space and reduce power and cooling expenses, so it is important to check whether existing storage systems can accommodate them.
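
The following sketch illustrates the footprint argument with deliberately rough, assumed figures for drive capacity, power draw and enclosure density; it is not based on any specific product.

```python
# Rough footprint comparison: drives, power and rack space needed to hold
# 1PB usable on high-density SSDs versus nearline HDDs. All figures are
# illustrative assumptions.
import math


def footprint(usable_tb: float, drive_tb: float, watts_per_drive: float,
              drives_per_ru: int) -> tuple:
    """Drive count, power draw and rack units for a given usable capacity."""
    drives = math.ceil(usable_tb / drive_tb)
    return drives, drives * watts_per_drive, math.ceil(drives / drives_per_ru)


for name, cfg in {
    "30TB SSDs": dict(drive_tb=30.72, watts_per_drive=12, drives_per_ru=24),
    "8TB HDDs": dict(drive_tb=8.0, watts_per_drive=9, drives_per_ru=12),
}.items():
    drives, watts, ru = footprint(1000, **cfg)
    print(f"1PB on {name}: {drives} drives, ~{watts:.0f} W, ~{ru} rack units")
```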

“The effective capacity metric is also vital for measuring how much data a storage system can hold after deduplication and compression are applied. Many all-flash arrays on the market today already provide inline deduplication and compression,” says Martin.

KC Phua, director of technical experts for data management and protection at Hitachi Vantara Asia Pacific, says while data reduction is important to minimise storage capacity needs, selectable data reduction is more crucial.

“Many data types do not benefit from deduplication, which then does nothing but slow a system down. Customers should be very careful with data reduction guarantees. For example, is there a performance hit? Do not rely on figures from the lab, but make sure the requirements are fitted to the workloads and overall environment,” he says.
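
One cheap way to act on that advice is to test the compressibility of representative data samples before trusting a reduction guarantee. The standard-library Python sketch below is only an approximation – arrays implement inline reduction differently – but it demonstrates the principle that already-compressed or encrypted data yields little or no saving.

```python
# Sanity-check whether a data sample is compressible at all before relying
# on a supplier's data reduction guarantee. Run against representative
# samples of your own data; the samples below are synthetic stand-ins.
import json
import os
import zlib


def compression_ratio(data: bytes) -> float:
    """Original size divided by zlib-compressed size."""
    return len(data) / len(zlib.compress(data))


text_like = (json.dumps({"user": 1, "status": "ok"}) * 10_000).encode()
random_like = os.urandom(512 * 1024)  # stands in for encrypted/compressed data

print(f"text/JSON sample: {compression_ratio(text_like):.1f}:1")    # compresses well
print(f"random sample:    {compression_ratio(random_like):.2f}:1")  # ~1:1, no gain
```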

Predictive analytics is another important feature that enterprises need to look out for. However, many all-flash suppliers miss the fact that analytics must be “end-to-end” for real value, according to Phua.

“Analysing issues or needs in an array may not give you a real picture of what is happening. For example, a server with too many virtual machines may be the real problem, not storage,” he says.

Scale-up or scale-out architecture

Architecture is the key to unlocking the full benefits of all-flash arrays. Scale-out architecture allows enterprises to start small and scale both in capacity and performance as their requirements grow.

“Scale-out is definitely more flexible than a dual-controller approach, because scale-up architecture can only grow in capacity, not performance. The dual-controller (active-active), scale-up architecture is the minimum standard enterprises should expect,” says Dell EMC’s Aravinthan.

“An active-passive dual-controller, scale-up architecture will be too dated to maximise the performance jumps from an all-flash array,” he adds.

From a workload perspective, Aravinthan says it is important to ascertain the expected growth when deciding which architecture to go for. “That said, it may prove challenging for enterprises to foresee their growth trajectory. With that in mind, scale-out architecture is likely the safer choice for a start.”

Huawei’s Meng adds: “In the hard disk drive era, the performance of storage systems could be improved by simply stacking disks. However, in the all-flash array era, the system performance bottleneck lies in the controller enclosures instead of the SSDs.

“The scale-up architecture can only improve the system capacity of all-flash arrays, and cannot enhance system performance. The overall system performance can only be improved by scaling out controller enclosures.”
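
Meng’s point can be illustrated with a toy capacity-versus-performance model, shown below. The controller and SSD figures are invented for illustration; real arrays will differ.

```python
# Toy model of the scale-up versus scale-out argument: once the controllers
# are the bottleneck, adding SSDs (scale-up) adds capacity but not
# performance, while adding controller enclosures (scale-out) adds both.
# All figures are illustrative assumptions.

CTRL_PAIR_IOPS = 300_000  # what one dual-controller enclosure can drive
SSD_IOPS = 100_000        # what one SSD could deliver in isolation
SSD_TB = 15.36            # capacity per SSD


def array_profile(controller_pairs: int, ssds: int) -> tuple:
    """Capacity and delivered IOPS, capped by the scarcer resource."""
    capacity = ssds * SSD_TB
    iops = min(ssds * SSD_IOPS, controller_pairs * CTRL_PAIR_IOPS)
    return capacity, iops


for pairs, ssds, label in [(1, 12, "baseline"),
                           (1, 48, "scale-up (more SSDs)"),
                           (4, 48, "scale-out (more controllers)")]:
    cap, iops = array_profile(pairs, ssds)
    print(f"{label:30s} {cap:7.0f} TB  {iops:>9,} IOPS")
```

In this model, quadrupling the SSD count alone leaves delivered IOPS unchanged; only adding controller enclosures lifts the performance ceiling.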

Pitfalls to avoid

Technically speaking, the performance of all-flash arrays varies with application scenarios, application models and workloads. As a result, different suppliers will have different marketing strategies.

“IOPS values range from hundreds of thousands to tens of millions. Some vendors only promote their high IOPS values without even mentioning latency, while others never talk about performance degradation after deduplication and compression are enabled,” says Huawei’s Meng.

“It is recommended that enterprises not only consider the declared performance values when formulating policies, but also pay more attention to the performance required in their own business scenarios.”
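
In practice, that means summarising latency under a representative workload as percentiles, rather than accepting a headline IOPS number. A minimal sketch, assuming latency samples have already been collected from a benchmark run of your own (for example, per-I/O logs from a load generator):

```python
# Summarise measured I/O latencies as percentiles. A high tail latency
# (p99, max) can matter far more to an application than the peak IOPS
# figure quoted on a datasheet. The sample list below is a stand-in for
# real measurements.
import statistics


def latency_report(latencies_us: list) -> dict:
    """Median, 99th percentile and worst-case latency in microseconds."""
    qs = statistics.quantiles(latencies_us, n=100)
    return {"p50": qs[49], "p99": qs[98], "max": max(latencies_us)}


latencies_us = [250, 260, 240, 300, 280, 255, 2400, 265, 270, 245] * 100
print(latency_report(latencies_us))
```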

HPE’s Bovaird advises enterprises to first determine their type of workloads before implementing all-flash arrays. Although most workloads are well suited for flash, some work better with traditional hybrid architectures.

“Determining workloads is more critical than it ever has been: avoid all-flash arrays for workloads that don’t benefit from them, and focus on architectures that deliver hybrid flash with the performance of all-flash – but at a cost aligned to spinning disk.”
