In the high-performance computing (HPC) arena today, many users are evaluating graphics processing unit (GPU)-based HPC systems, or have already integrated them into their HPC IT infrastructure. The move has been prompted by the fact that GPU-based HPC systems offer much higher processing capability than traditional HPC systems, owing to the large number of GPU cores that complement the CPU when processing huge workloads.
With GPU-based HPC systems, there is a perceptible reduction in the time taken to process queries compared with traditional HPC systems. The storage system is also a critical component of a GPU-based HPC deployment, since all data the HPC system stores and retrieves passes through it. To derive maximum benefit from a GPU-based HPC cluster, the storage system must therefore complement the cluster.
One suitable option is EMC VMAX, a scale-out SAN array that can complement GPU-based HPC clusters and prevent storage bottlenecks. Here are five tips to help you get the most out of EMC VMAX in terms of performance:
Zoning for mixed workloads

If the HPC clusters deal with mixed workloads, sequential as well as random, then the clusters should be zoned to front-end adapter (FA) ports in such a way that both workload types are distributed across the ports. System administrators often allocate dedicated EMC VMAX front-end ports to sequential and random workloads separately, but in practice this does not enhance performance. Rather, FA ports should be zoned to HPC clusters so that each front-end adapter serves sequential as well as random workloads.
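As an illustrative sketch of this zoning approach (the host and FA port names are hypothetical, and this is standalone code, not an EMC tool): round-robin each workload profile across the FA ports independently, so that every port ends up serving both sequential and random hosts rather than being dedicated to one profile.

```python
def zone_mixed(hosts, fa_ports):
    """Round-robin each workload profile over the FA ports separately,
    so every port serves a mix of sequential and random hosts."""
    zoning = {}
    for profile in ("sequential", "random"):
        group = [h for h, p in hosts if p == profile]
        for i, host in enumerate(group):
            zoning[host] = fa_ports[i % len(fa_ports)]
    return zoning

# Hypothetical HPC nodes and their dominant I/O profiles.
hosts = [("hpc01", "sequential"), ("hpc02", "random"),
         ("hpc03", "sequential"), ("hpc04", "random")]
zoning = zone_mixed(hosts, ["FA-7E:0", "FA-8E:0"])
# Each FA port now carries one sequential and one random host.
```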
Drive selection and device layout

EMC VMAX offers SSDs (what EMC calls enterprise Flash drives, or EFDs), SATA drives and Fibre Channel (FC) drives. Hence, while designing the storage layout, profile the data workload to ensure each workload is served by the appropriate drive type. Use SATA drives for archival data, and distribute performance-hungry workloads across EFD and FC drives. RAID 1/0 is recommended for random workloads, while RAID 5 is a better fit for sequential workloads. When creating devices on Symmetrix, use striped meta devices to take advantage of the Symmetrix architecture, and keep a meta device to no more than 32 members. For example, to create a 256 GB meta device, use a configuration of 32 members of 8 GB each. If you are using thin provisioning on Symmetrix, create concatenated meta devices instead.
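The member-sizing arithmetic can be sketched as a small helper (this is an illustrative calculation, not part of any EMC tooling): given a target meta device size, pick the largest member count at or below the 32-member limit that divides the size evenly.

```python
def meta_layout(total_gb, max_members=32):
    """Return (members, member_gb) for a striped meta device, using the
    largest member count within the limit that divides the size evenly."""
    for members in range(max_members, 0, -1):
        if total_gb % members == 0:
            return members, total_gb // members
    raise ValueError("total size must divide evenly into members")

# The article's example: a 256 GB meta device becomes 32 members of 8 GB each.
members, member_gb = meta_layout(256)   # -> (32, 8)
```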
Balance the front-end directors

The administrator should ensure that workload is distributed equally among the VMAX front-end directors, to avoid overutilization of some FA ports. Keep an eye on the fan-out ratio, and make sure it is roughly equal across all FA ports. One way to achieve this is to maintain a spreadsheet with the latest zoning details and follow a round-robin allocation when zoning HPC clusters to the VMAX FA ports.
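The spreadsheet bookkeeping can be sketched in code as a simple fan-out tracker (the port names and counts are hypothetical): always zone the next HPC node to the FA port with the fewest initiators already zoned to it.

```python
def next_port(fanout):
    """Return the FA port with the lowest fan-out; ties break by port name."""
    return min(sorted(fanout), key=lambda p: fanout[p])

# Hypothetical current fan-out per FA port (initiators zoned to each port).
fanout = {"FA-7E:0": 12, "FA-8E:0": 10, "FA-9E:0": 12}
port = next_port(fanout)   # "FA-8E:0" has the lowest fan-out
fanout[port] += 1          # record the new zoning against that port
```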
Symmetrix Performance Analyzer

Use Symmetrix Performance Analyzer to analyze the performance of FA ports and disks. It gives the administrator a real-time picture of the workload being directed at the VMAX, enabling the workload to be balanced. With this analysis, the administrator can also identify devices with high response times and migrate them to higher-performance disks as required.
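As a sketch of the follow-up analysis (the device names, response times and threshold are invented for illustration; this is not Performance Analyzer's actual export format), flag devices whose average response time exceeds a threshold as candidates for migration to faster disks:

```python
def migration_candidates(samples, threshold_ms=10.0):
    """Return device names whose response time exceeds the threshold,
    sorted for stable reporting."""
    return sorted(dev for dev, rt_ms in samples.items() if rt_ms > threshold_ms)

# Hypothetical per-device average response times in milliseconds.
samples = {"0A2B": 4.1, "0A2C": 18.7, "0A2D": 25.3}
slow = migration_candidates(samples)   # devices worth moving to EFD/FC
```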
Fully automated storage tiering
In scenarios where workloads are not predictable, fully automated storage tiering (FAST) is the best option. FAST analyzes the data residing on the EMC VMAX and automatically moves it among the EFD, FC and SATA storage tiers. Storage administrators can apply FAST policies to a group of devices and define what percentage of that group may reside on EFD, FC and SATA drives; VMAX then analyzes the workload and places each device on the appropriate tier. An administrator can also enable Symmetrix Optimizer, which monitors the Symmetrix back end and moves devices across the back-end directors to balance load.
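The per-tier percentages can be sketched as a small policy check (the field names and the validation rule shown are assumptions for illustration, not FAST's actual configuration schema): since each percentage is an upper limit on how much of the device group may live on that tier, the three limits together should cover at least 100% of the group.

```python
def validate_policy(policy):
    """A tiering policy is usable only if its per-tier caps, taken together,
    can hold the entire device group (i.e., they sum to at least 100%)."""
    return sum(policy.values()) >= 100

# Hypothetical policy: at most 10% on EFD, 40% on FC, 50% on SATA.
policy = {"EFD": 10, "FC": 40, "SATA": 50}
ok = validate_policy(policy)   # the caps cover the whole device group
```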
About the author: Anuj Sharma is an EMC Certified and NetApp accredited professional. Sharma has experience handling implementation projects related to SAN, NAS and BURA, and has several research papers on SAN and BURA technologies published globally to his credit.