HP has upgraded its ProLiant SL4500 family of scale-out, storage-dense servers and renamed it the Apollo 4000 series.
The Apollo compute-plus-storage scale-out systems are aimed at customers that want low-cost, high-volume storage without super-fast access. Use cases include object storage, cloud storage, big data, Hadoop and more mainstream enterprise workloads, such as Microsoft Exchange.
The Apollo 4000 series comprises three configurations. These begin with the 4200, which packs 28 large form factor or 50 small form factor HDDs into 2U of rack space for a total of around 228TB. It is aimed at service providers and at traditional enterprise use cases that require high storage density.
Large storage volumes at low cost
The Apollo 4510 comprises a single compute node and packs 68 large form factor drives into 4U. It is aimed at object storage use cases and can come with object storage software from Scality or Cleversafe, both of which are HP partners. When scaled to full 42U capacity, the 4510 provides 5.5PB.
The Apollo 4530 comes with three compute nodes and 45 large form factor drives in 4U and is aimed at big data use cases such as Hadoop clusters or the likes of Microsoft Exchange environments. At 42U the 4530 can provide 3.6PB of capacity. HP’s Hadoop reference architecture employs the Apollo 4530 in conjunction with HP Moonshot servers.
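The quoted full-rack figures can be sanity-checked with back-of-the-envelope arithmetic. The drive size and chassis-per-rack count below are illustrative assumptions, not HP specifications:

```python
# Rough check of the quoted Apollo full-rack capacities.
# Assumptions (not HP specs): 8TB large form factor drives,
# and ten 4U chassis filling a 42U rack.
DRIVE_TB = 8
CHASSIS_PER_RACK = 42 // 4  # ten 4U chassis per 42U rack

def rack_capacity_pb(drives_per_chassis):
    """Approximate full-rack capacity in petabytes."""
    return drives_per_chassis * DRIVE_TB * CHASSIS_PER_RACK / 1000

print(f"Apollo 4510: {rack_capacity_pb(68):.2f}PB")  # ~5.4PB, close to the quoted 5.5PB
print(f"Apollo 4530: {rack_capacity_pb(45):.1f}PB")  # 3.6PB, matching the quoted figure
```

Under these assumptions the 4530 figure matches exactly and the 4510 comes out slightly under the quoted 5.5PB, which suggests the marketing numbers round up or assume marginally larger drives.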
HP technologist and sales director Clive Freeman said the market the Apollo series addresses is a growing one, despite the absence of the flash storage that has been so prominent in the industry over the past few years.
“Service providers have always been cost-conscious, but banks, retailers etc are also now looking for storage to do stuff with large quantities of the data they have. Systems like this deliver a big volume of storage at low cost, compared to a tier one array,” he said.
“HDFS is a growing use case, with increasing numbers of customers seeing it as a standard file system to dump big data to and access from various systems.
"It’s not all about one million IOPS, but more about petabyes with compute access.”