While infrastructure migration to cloud platforms provides benefits, it can also hamper system performance if not addressed meticulously. In this context, a physical-to-virtual migration of infrastructure results in better resource utilization, and it reduces the time and managerial effort that go into infrastructure migration to cloud platforms. Approach this physical to virtual to cloud (P2V2C) migration in a step-by-step manner to avoid these pitfalls.
Step 1: Analyze your physical environment
When you add physical infrastructure to your environment, CAPEX increases. Analyzing your environment gives you insight into resources that are not being fully utilized. In such scenarios, a physical-to-virtual (P2V) migration helps increase utilization of your existing resources, eliminates the need for new physical infrastructure, and reduces management overhead.
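As a minimal sketch of this analysis, the snippet below flags servers whose average CPU utilization falls under a cutoff as P2V candidates. The server names, the utilization figures and the 40% threshold are illustrative assumptions, not values from the article; real assessments would also weigh memory, I/O and application constraints.

```python
# Sketch: flag under-utilized physical servers as P2V candidates.
# The 40% threshold and all server data below are hypothetical.

def p2v_candidates(avg_cpu_by_server, threshold=0.40):
    """Return servers whose average CPU utilization is below the threshold."""
    return sorted(
        name for name, util in avg_cpu_by_server.items() if util < threshold
    )

# Hypothetical monthly averages gathered from monitoring tools.
utilization = {"web01": 0.12, "db01": 0.78, "app02": 0.25, "file01": 0.08}
print(p2v_candidates(utilization))  # the low-utilization servers
```

Servers that surface here are the ones where virtualization pays off fastest, since consolidating them avoids new hardware purchases.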
Your first step should be to categorize applications according to whether they support virtualization. Applications can be classified using various criteria: by platform (for example, Java-based applications), by middleware requirements, or by database -- such as applications running on SQL Server or Oracle.
Testing and re-evaluating your environment may reveal applications that are not suitable for virtualization. For instance, it is recommended that a Linux-based platform run at least a 2.6 kernel; you can accordingly make changes to your environment and redeploy applications. Enterprise-level applications like SAP demand high CPU power and have huge databases, so they are not recommended for virtualized environments.
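A compatibility screen like the kernel check above can be automated. The sketch below compares version strings against the 2.6 minimum the article mentions; the host inventory is hypothetical, and a real screen would cover drivers and application dependencies too.

```python
# Sketch: screen Linux hosts against a minimum kernel version (2.6, per
# the article's example) before marking them virtualization-ready.
# The host inventory below is hypothetical.

def kernel_ok(version, minimum=(2, 6)):
    """True if a kernel version string like '2.6.32' meets the minimum."""
    major_minor = tuple(int(part) for part in version.split(".")[:2])
    return major_minor >= minimum

hosts = {"lnx01": "2.6.32", "lnx02": "2.4.21", "lnx03": "3.10.0"}
ready = {host: kernel_ok(ver) for host, ver in hosts.items()}
# Hosts failing the check need an OS upgrade and redeployment first.
```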
Step 2: Consolidate and virtualize your servers
Server requirements fluctuate throughout the week, leaving certain servers idle at times. Carry out a thorough analysis of your usage patterns and determine the compute capacity your applications need before you execute a physical to virtual (P2V) shift of your servers and migrate to the cloud. Analyze your computing requirements during peak hours and upgrades, as these impact performance, operations and administration responsibilities.
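The usage-pattern analysis can be reduced to comparing average and peak demand, since a consolidated host must be sized for the peak. A minimal sketch, with invented hourly CPU samples:

```python
# Sketch: derive average and peak demand from hourly readings, so a
# consolidated host can be sized for busy hours. Sample data is invented.

def demand_profile(hourly_samples):
    """Return (average, peak) demand from a list of hourly readings."""
    return sum(hourly_samples) / len(hourly_samples), max(hourly_samples)

samples = [10, 12, 15, 60, 85, 90, 40, 20]  # % CPU across one day
avg, peak = demand_profile(samples)
# Sizing for the average (~42%) would starve the 90% peak hour;
# size for the peak, with headroom, to protect performance.
```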
You could also segregate and consolidate servers. If an application runs on two database servers, you could use a middleware server or a single SQL Server instance that hosts multiple databases. After consolidating your architecture, run a test environment to eliminate any network and storage glitches, and execute virtualization and cloud migration only after this check.
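One simple way to plan such consolidation is bin packing: assign each workload to the first host with spare capacity, opening a new host only when needed. The sketch below uses first-fit with hypothetical capacities and demands; real planning would also weigh memory, I/O and affinity rules.

```python
# Sketch: first-fit packing of workload demands onto hosts of fixed
# capacity, as a rough consolidation plan. All figures are hypothetical.

def first_fit(demands, host_capacity):
    """Assign each workload to the first host with room; open hosts as needed.

    Returns (placement, host_count), where placement[i] is the host index
    for workload i.
    """
    free = []        # remaining capacity of each opened host
    placement = []
    for demand in demands:
        for i, room in enumerate(free):
            if demand <= room:
                free[i] -= demand
                placement.append(i)
                break
        else:
            free.append(host_capacity - demand)
            placement.append(len(free) - 1)
    return placement, len(free)

# Five workloads (in % of one host's CPU) fit on two hosts of capacity 100.
placement, n_hosts = first_fit([30, 50, 20, 70, 10], host_capacity=100)
```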
Step 3: Network and storage virtualization
Next up is the analysis of network and storage infrastructure to address possible performance issues. Segregate and isolate networks using virtual LAN (VLAN) setups, separating your production traffic from other traffic to ensure proper bandwidth utilization. Several tools can help on this front: for example, Cisco Nexus 1000V switches and the distributed virtual switch (DVS) integrated with VMware vSphere allow you to track issues that interrupt production traffic.
On the storage front, scalability is of key importance. Analysis of storage usage patterns is imperative for capacity planning and management. NetApp, IBM and HP have tools that help you meter and report data for better performance and capacity planning.
Storage should be tested to ensure it can manage hypervisor loads and sustain virtualization. You could look into automated storage management that allows allocation of storage resources on the fly or a multi-tenant infrastructure that lets you share storage among different applications.
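Capacity planning from measured usage patterns can be as simple as projecting current growth forward. The sketch below assumes linear growth; the capacity and growth figures are illustrative, not from the article.

```python
# Sketch: linear projection of storage growth for capacity planning.
# Capacity and growth-rate figures are illustrative assumptions.

def months_until_full(used_tb, capacity_tb, growth_tb_per_month):
    """Months of headroom left at the current linear growth rate."""
    if growth_tb_per_month <= 0:
        return float("inf")  # flat or shrinking usage: no exhaustion date
    return max(0.0, (capacity_tb - used_tb) / growth_tb_per_month)

# 60 TB used of 100 TB, growing 5 TB/month -> 8 months of headroom.
headroom = months_until_full(used_tb=60, capacity_tb=100, growth_tb_per_month=5)
```

Metering tools from vendors such as NetApp, IBM and HP supply the usage history that feeds a projection like this.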
Step 4: Migration to cloud
Infrastructure migration to cloud platforms should be addressed in a phased manner. You could start cloud migration with less critical applications and their accompanying infrastructure; mission-critical infrastructure can follow based on the success of these migrations.
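The phased ordering can be made explicit by scoring applications for criticality and migrating in ascending order. A minimal sketch; the application names and scores are hypothetical.

```python
# Sketch: order applications into migration waves, least critical first,
# per the phased approach. The apps and scores below are hypothetical.

def migration_order(criticality_by_app):
    """Sort apps ascending by criticality score (1 = migrate first)."""
    return [app for app, _ in sorted(criticality_by_app.items(),
                                     key=lambda item: item[1])]

apps = {"intranet-wiki": 1, "erp": 5, "reporting": 2, "billing": 4}
order = migration_order(apps)  # low-risk apps lead each wave
```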
Ensure that the physical production environment is taken down but not completely decommissioned. In case of any incidents, it can be brought back into service and the platform reused. The physical environment can later host those applications and servers that cannot be virtualized.
You should also ensure that your service provider is compliant with industry standards like SAS 70. Strict service level agreements and regular audit reports are a must. Strict access controls for every tier are also recommended. For instance, tiers could be organized as follows:
Tier 1 - Non-mission critical applications
Tier 2 - Database servers
Tier 3 - Third party applications
For end-to-end network access, you could ask your service provider for secure point-to-point (P2P) VPN connections. As a second layer of protection, customized perimeter and server-tier firewalls and intrusion detection add further safeguards. Together these provide secure, dedicated access to the infrastructure.
About the authors:
Vinod Mahajan is the principal consultant - storage and virtualization for infrastructure management services at Syntel.
Ajay Kumar has close to 10 years of experience in the IT industry, which includes architecting, designing and implementing heterogeneous systems (AIX, Solaris, HP-UX and Linux). He is a consultant for the enterprise computing practice – infrastructure management services at Syntel. Kumar is responsible for practice development in areas related to cloud and virtualization technologies.
(As told to Mitchelle R Jansen.)