Defining the scope of a high-performance computing (HPC) project at the onset and understanding users' pain points can be a formidable challenge. Failing to define requirements clearly only creates problems down the line. Project owners must maintain accurate work estimations and pay attention to details to properly streamline HPC project management.
Use the following work estimation checklist as a best practices guide for an effective HPC implementation.
- Outline your data center structure
It’s essential to provide vendors and integration partners with the minutiae of your data center architecture. This ensures your HPC setup will be implemented optimally. Details on your architecture are critical to determine factors such as the nature of your installation, the best location for high-density racks, and more.
- Communicate existing hardware details
It is important to be completely open about your existing setup and to share any ideas for potential optimizations with the systems integrator. As the HPC project owner, you know the hardware setup, configuration and installation better than a partner, so it's essential to communicate these parameters to the implementation team.
Organizations often have existing HPC infrastructure within clusters, and integrating a new setup with a legacy environment risks disturbing that ecosystem. Providing partners with in-depth details and documentation on the existing setup can minimize the likelihood of future complications. It can also help reduce costs, because components such as existing servers can be reused in the new setup instead of procuring completely new equipment.
- Understand application behavior
Systems integrators and implementation teams expect customers to be familiar with application behavior. Therefore, you need to educate your organization on application behavior and hardware needs, and share data from the respective independent software vendors (ISVs) or the open source community. Pay attention to the behavior of applications from a system perspective. Application behavior can also help to better define storage requirements.
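One low-effort way to build this system-level picture is to sample an application's resource use while it runs. Below is a minimal sketch (Linux-specific, reading from /proc; the function name, sample count and interval are illustrative, not from the article):

```python
import time

def sample_rss_kib(pid, samples=3, interval=0.1):
    """Sample a process's resident memory (VmRSS, in KiB) from /proc."""
    readings = []
    for _ in range(samples):
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    # Line looks like: "VmRSS:     12345 kB"
                    readings.append(int(line.split()[1]))
                    break
        time.sleep(interval)
    return readings
```

Sampling memory (and, similarly, I/O counters from /proc/&lt;pid&gt;/io) over a few representative runs gives the integrator concrete sizing data rather than guesses.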
- Clarify scheduling preferences
Be clear about scheduling algorithms or policies. For example, make sure all those involved know how you categorize jobs: by type of user, by type of project, by priority, and so on. Designing your scheduling policies, or providing partners with a policy overview, helps them allocate resources properly and better design the implementation.
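Such policies can often be expressed directly in the scheduler's configuration. As a sketch, assuming a Slurm-managed cluster (the partition names, node names and limits here are hypothetical), priority tiers might look like:

```shell
# slurm.conf fragment -- hypothetical partitions and nodes.
# Jobs in higher-Priority partitions are favored by the scheduler.
PartitionName=batch   Nodes=node[001-064] Priority=10  Default=YES MaxTime=7-00:00:00
PartitionName=urgent  Nodes=node[001-064] Priority=100 MaxTime=04:00:00 AllowGroups=oncall
```

Writing the categories down in this form, even as a draft, forces the policy discussion to happen before installation rather than after.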
- Configure file systems and storage
Defining storage and its use is one of the most important components of planning an HPC project. End users should be able to estimate per-user storage needs, for example, and there should be a provision for setting quotas. End users should also help identify backup types and important data so managers can define policies accordingly.
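Once per-user needs are estimated, quotas can be enforced at the file-system level. For example, on a Lustre file system (the username, mount point and limits below are illustrative), block and inode quotas might be set with:

```shell
# Per-user quota: 500 GiB soft / 550 GiB hard block limit (values in KiB),
# 1M soft / 1.1M hard inode limit.
lfs setquota -u alice -b 524288000 -B 576716800 -i 1000000 -I 1100000 /lustre
```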
Knowing what data an application generates or accesses can help to better define storage configurations. For example, large files call for a correspondingly large stripe size, while a large number of small files requires a different stripe configuration.
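On a parallel file system such as Lustre, this mapping from file profile to stripe layout is typically set per directory. A sketch, with hypothetical paths and values:

```shell
# Large sequential files: stripe widely with a large stripe size
lfs setstripe -c 8 -S 16M /lustre/project/checkpoints

# Many small files: a single stripe avoids unnecessary striping overhead
lfs setstripe -c 1 -S 1M /lustre/project/logs
```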
HPC project owners and implementation teams may also want to use a set of best-practice questionnaires as part of the work estimation process. These questionnaires are sent to user organizations before the project begins, and the scope defined from the answers is known as the scope of work (SOW). The SOW is revised at the time of installation based on discussions with project owners; these revisions are essential to ensure the project meets all stakeholders' expectations.
About the Author: Jigar Halani is the Technical Consultant for high-performance computing and open source at Wipro Infotech. He has more than eight years of experience in HPC and grid computing. Halani has practical hands-on experience in deploying high performance clusters, grid computing, virtualization and parallel storage.