Cloud storage is increasingly sophisticated and flexible. The initial appeal of storing data in the cloud was flexibility and cost, and to achieve that, the majority of cloud services based their products around object storage, most famously Amazon Web Services’ (AWS’s) S3.
AWS’s use of the term “bucket” for its cloud-based Simple Storage Service is an excellent metaphor for how it works. Users effectively throw their data in, and AWS’s object technology takes care of the rest.
But object storage – or at least, object storage in isolation – cannot fulfil all an organisation’s data storage needs, and that has prompted the big three cloud service providers – AWS, Microsoft Azure and Google Cloud Platform – to introduce an increasingly rich menu of storage options.
These include mainstream storage architectures with file, block and object storage, and services such as archiving and backup.
By fine-tuning their storage offerings, the big three aim to give their customers flexibility as well as a viable alternative to local storage.
Block storage – such as Google’s Persistent Disk – provides an alternative to the datacentre for virtual machine (VM) storage.
Each of the three suppliers offers its own flavour of object, block and file. The benefits include application support, a range of performance and price options, and the ability to scale up or down on demand.
Against this, CIOs need to weigh up potential costs. Cloud storage is by no means always cheap, and performance-based storage, in particular, can become expensive. There are also the potential performance hits for moving data into the cloud and the fact that data cannot (yet) be moved seamlessly between the three providers.
The trade-offs between cloud and local storage also depend as much on how an organisation needs to manage its data as on its choice of platform.
Microsoft Azure may have a head start among CIOs for its (perceived) support for Windows operating systems, but its file-based storage is multi-platform. Amazon, for its part, offers FSx for Windows File Server, another SMB-based system. Windows, or SMB, compatibility is a useful tool to span storage across local and cloud installations, although it is not the only way to do so.
Storage architectures, too, are less rigid than they were. The large cloud providers, along with a host of smaller, specialist storage companies, are increasingly using hardware or technologies such as software-defined storage to bridge the gap between applications and storage architectures.
Cloud storage in practice: Use cases
Nonetheless, there are still some clear use cases for different cloud storage implementations.
Object storage is the system of choice for archiving, backup and analytics, which are all applications that need to store large volumes of data and where the efficiencies and resilience of object come to the fore.
As performance improves, though, object storage is moving into areas such as the internet of things (IoT), running websites, and potentially enterprise applications. The nature of object storage also makes it easier for suppliers to offer storage tiers based on performance, cost and frequency of access.
Against this, block and file can seem less versatile, although this is not always the case.
Block storage is closely tied to the growth in cloud-based virtual machines, and this is its main use case.
Azure Disk, for example, works with Azure VMs, and comes with performance tiers. Google’s Persistent Disk operates in a similar way, with Google Cloud VMs, while Amazon’s Elastic Block Store (EBS) integrates with Amazon’s EC2 compute resources. Workloads can move to the cloud, but block storage is not typically shared between cloud and local instances.
Microsoft suggests that its Ultra Disk is suitable for SAP HANA, SQL, Oracle and other IOPS-intensive applications.
File-based storage can give more flexibility over where data is located. Amazon’s EFS, for example, is designed to combine cloud and on-premise volumes while bringing the flexibility of the cloud.
Google and Microsoft have their own take on harnessing file-based storage to bring the elastic nature of the cloud to local applications. An additional mention should go to NetApp, as a storage technology provider with tight – and multi-supplier – integration with the cloud.
File-based cloud storage is not usually application- (or even OS-) specific. Instead, buyers can choose performance levels to match their capacity and IOPS requirements.
And, although performance for cloud-based file shares is increasing, higher-performance stores will usually cost more.
Big three cloud storage options
Amazon Web Services
File: Amazon’s Elastic File System (EFS) is an NFS-based file system that operates across cloud and local storage. AWS provides it in a Standard storage class and EFS IA (infrequent access). EFS throughput can exceed 10GBps. FSx for Windows File Server is file storage dedicated to that platform.
Block: Elastic Block Store works with Amazon Elastic Compute Cloud. “General purpose” SSD volumes offer a base performance of 3 IOPS/GB. Provisioned IOPS SSD volumes support up to 64,000 IOPS and 1,000MBps throughput.
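The way the general-purpose tier scales can be sketched as a short function. The 100 IOPS floor and 16,000 IOPS ceiling are AWS’s published limits for gp2 volumes, not figures from this article, so treat this as an illustration rather than a sizing tool:

```python
def gp2_baseline_iops(size_gb: int) -> int:
    """Approximate baseline IOPS for a gp2 ("general purpose") SSD volume.

    Assumes AWS's published gp2 scaling: 3 IOPS per GB of volume size,
    with a 100 IOPS floor and a 16,000 IOPS ceiling.
    """
    return max(100, min(3 * size_gb, 16_000))

print(gp2_baseline_iops(20))      # 100 - small volumes hit the floor
print(gp2_baseline_iops(1_000))   # 3000 - the 3 IOPS/GB baseline
print(gp2_baseline_iops(10_000))  # 16000 - capped at the gp2 ceiling
```

Workloads that need more than the gp2 ceiling are the ones Amazon points at provisioned IOPS volumes, which lift the limit to 64,000 IOPS.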
Object: S3 is AWS’s object storage offering, with a claimed 11 nines (99.999999999%) of durability.
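The eleven nines figure refers to the annual durability of stored objects. As a back-of-envelope illustration – assuming, as AWS’s own framing does, that the figure applies independently to each object per year, and using a hypothetical store of ten million objects – the expected annual loss is vanishingly small:

```python
# Eleven nines of annual durability, as quoted for S3.
durability = 0.99999999999
annual_loss_probability = 1 - durability  # about 1e-11 per object per year

# Hypothetical store of ten million objects.
objects = 10_000_000
expected_losses_per_year = objects * annual_loss_probability

print(expected_losses_per_year)  # roughly 1e-4: about one lost object per 10,000 years
```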
Microsoft Azure
File: Azure Files uses SMB and allows concurrent file share mounting in the cloud or on premise, with support for Windows, Linux and macOS. Maximum storage capacity is 4PB, with ingress of 25Gbps and egress of 50Gbps.
Block: Azure Disk provides managed disks for Azure virtual machines, with five nines availability. Ultra Disk scales to a maximum disk size of 65,536GB with up to 160,000 IOPS, while standard disks reach up to 32,767GB with 2,000 IOPS.
Object: Azure Blob offers petabyte-scale object storage with a claimed 16 nines of durability.
Google Cloud Platform
File: Cloud Filestore provides NAS for Google Compute Engine and Google Kubernetes Engine, with storage offered in Standard and Premium tiers. Standard ranges from 1TB to 10+TB, with 1,000 IOPS and 180MBps read throughput for 10+TB systems. Premium starts at 3.5+TB, with read throughput of 1.2GBps and 60,000 IOPS.
Block: Persistent Disk block storage runs up to 64TB per disk and offers standard persistent disks, SSD persistent disks, and local SSDs with NVMe. Write IOPS range from 15,000 to 30,000 and read IOPS from 15,000 to 100,000.
Object: Google object or blob storage provides different locations based on performance and redundancy requirements. The main storage tiers are Standard, Nearline, Coldline and Archive. GCP’s Object Lifecycle Management tool automatically moves storage to a lower-cost tier, according to user-specified rules.
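Object Lifecycle Management is driven by a JSON configuration attached to a bucket. A minimal sketch is below; the field names follow GCS’s documented lifecycle format, but the 30- and 90-day thresholds are hypothetical, invented here for illustration:

```python
import json

# Hypothetical policy: demote objects to Nearline after 30 days
# and to Coldline after 90 days of age.
lifecycle_config = {
    "lifecycle": {
        "rule": [
            {
                "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
                "condition": {"age": 30},
            },
            {
                "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
                "condition": {"age": 90},
            },
        ]
    }
}

print(json.dumps(lifecycle_config, indent=2))
```

In practice a configuration like this would be applied with `gsutil lifecycle set` or a Cloud Storage client library; once set, GCS evaluates the rules without further user intervention.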
Cloud services need not work in isolation. The main suppliers are also adding hardware options, to allow data to span local and cloud locations and to help with data migration.
Amazon offers its Snow range, which spans the portable Snowball Edge device – offering up to 42TB of block or object storage, or 80TB for data transfer – and Snowmobile, which arrives in a 45ft shipping container. AWS Outposts is its hybrid offering.
Google also offers a transfer service and its Anthos hybrid platform, and works with a number of third parties, including Komprise for hybrid data management.
Microsoft provides Azure Arc and Azure Stack as building blocks for hybrid cloud and data migration. On-premise Kubernetes clusters and Azure data services can both be managed from Azure.
Read more on cloud storage
- Cloud storage 101: Specifying for cloud storage. We run through key questions to ask when specifying cloud storage, such as disk type, performance, availability and the cost of getting data out of the cloud.
- Do you use cloud storage for these use cases yet? We look at the use cases suited to a quick transition to the cloud: backup, archiving, disaster recovery, file storage and cloud bursting.