The DevOps movement has given rise to a number of new buzzwords, and the phrase “infrastructure as code” has risen to prominence among them.
The phrase itself is actually nothing new. In a world of physical servers, infrastructure was defined in terms of servers, networks and storage systems, and software was provisioned onto the hardware.
When the IT world moved to a more virtualised and abstracted environment, it became much more attractive to be able to look at the whole IT platform in terms of capabilities.
As servers moved to become abstract concepts with virtual central processing units (CPUs), storage and networking resources, it became theoretically possible to use declarative constructs around software provisioning.
That means a system administrator could define what the ideal conceptual environment would be for a specific workload, and the elasticity of virtual environments could ensure those conditions were met.
As enterprises have moved to expand the variety of ways in which to consume IT resources, moving beyond private datacentres to include colocation sites and public cloud, the need for greater flexibility in dealing with workloads has risen further.
Workloads can be moved from one environment to another more effectively if all the hardware dependencies have been removed or can be managed through code. It also relieves system administrators of having to set every variable by hand: as long as the system applies the settings consistently, errors are avoided.
Brave new world
In the on-premise hardware world, provisioning code into the operational environment was often carried out by system administrators writing scripts that contained specific settings for the countless variables involved, such as IP addresses and logical unit number (LUN) targets.
These scripts were run manually, and checks had to be carried out to ensure all those variables were set correctly. The process was error-prone, with rollbacks often required when some of the settings were applied incorrectly.
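A minimal sketch of the kind of brittle, hard-coded provisioning script described above (the server names, IP addresses and LUN IDs are hypothetical, and the commands are only built as strings rather than executed):

```python
# Hypothetical one-off provisioning script: every value is hard-coded,
# so any change to the environment means editing and re-running by hand.
SERVERS = {
    "web01": {"ip": "10.0.1.10", "lun": 7},
    "web02": {"ip": "10.0.1.11", "lun": 8},
}

def provision(name, settings):
    # In a real script these would be SSH/SAN commands; here we just
    # build the command strings to show how fragile the approach is.
    return [
        f"ifconfig eth0 {settings['ip']} netmask 255.255.255.0",
        f"attach-lun --target {settings['lun']} --host {name}",
    ]

for name, settings in SERVERS.items():
    for cmd in provision(name, settings):
        print(cmd)
```

A single mistyped address or LUN ID anywhere in such a script would only surface at run time, which is why rollbacks were so often needed.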
Some of the early configuration management (CM) suppliers, such as SaltStack, CFEngine, Chef and Puppet, started to provide the means of managing the more automated provisioning of code onto shared cloud computing platforms. CM aims to make the creation, automation and reuse of these scripts easy and predictable.
Code forms the backbone of this approach, giving rise to the term infrastructure as code (IaC), which, in simple terms, means code that helps in provisioning systems out onto an IT platform.
Today, IaC has grown to be full-function and highly flexible, and there are several variants to consider, including declarative, imperative and intelligent IaC.
The declarative approach creates a required state and adapts the target infrastructure to meet those conditions, while the imperative version creates a target environment based on hard definitions set out within the script.
The intelligent approach, meanwhile, takes into account other pre-existing workloads within the target environment, and reports back to a system administrator about any problems it encounters.
Of these three approaches, imperative is the least useful and has been known to create performance problems in the target environment. The declarative approach is better, but should be extended into an “intelligent” one through a full understanding of the existing stresses on the target environment.
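The difference between the imperative and declarative approaches can be sketched as follows (a simplified illustration, not any particular CM tool’s model; the virtual machine names are hypothetical):

```python
# Imperative: a fixed sequence of steps, executed regardless of what
# already exists in the target environment.
def imperative_provision():
    return ["create vm web01", "create vm web02", "start web01", "start web02"]

# Declarative: describe the desired state, compare it with the current
# state, and compute only the actions needed to close the gap.
def declarative_reconcile(desired, current):
    actions = []
    for vm in sorted(desired - current):
        actions.append(f"create vm {vm}")
    for vm in sorted(current - desired):
        actions.append(f"destroy vm {vm}")
    return actions

# Running the declarative version repeatedly is safe (idempotent): once
# the environment matches the desired state, no further actions result.
print(declarative_reconcile({"web01", "web02"}, {"web01"}))           # ['create vm web02']
print(declarative_reconcile({"web01", "web02"}, {"web01", "web02"}))  # []
```

Blindly re-running the imperative version against an environment that is already partly provisioned is exactly how duplicate resources and performance problems arise.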
IaC through the ages
DevOps and continuous delivery have driven the evolution of IaC. Rather than just being a means of taking code and pushing it out into the operational environment, the whole process flow of the development, test and operational environments has to be taken into account.
Software configuration management tools such as Jenkins, and those from Serena Software, IBM Rational, Perforce and the like, sit alongside the CM tools to provide end-to-end lifecycle management.
Others, such as Automic Software, aim to provide an orchestration engine that sits over the top of a set of code lifecycle management tools to provide a coherent and consistent controllable environment.
Regardless of the route taken, it is important to create scripts that are reusable. Different CM suppliers will have different terms for these. For example, Chef uses “recipes” and “cookbooks”, while rival Puppet favours “manifests”.
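Whatever a supplier calls them, these reusable components share one idea: parameters replace hard-coded values, so the same definition can serve multiple environments. A minimal sketch of that principle (the resource names and fields are hypothetical, not any specific tool’s DSL):

```python
# Hypothetical reusable component: the same definition is parameterised
# rather than copied and edited for each environment.
def web_server_recipe(env, ip, port=80):
    return {
        "name": f"web-{env}",
        "ip": ip,
        "port": port,
        "packages": ["nginx"],
    }

# The one recipe targets dev and production with different parameters.
dev = web_server_recipe("dev", "10.0.0.10", port=8080)
prod = web_server_recipe("prod", "192.0.2.10")
```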
These reusable components can be created in different ways, and each CM tool has its own domain specific language (DSL). For most, this will depend on the programming language in which the CM tool is written.
This is also the case with containers. Docker has a basic CM capability built in – a Dockerfile declares the desired state of an image – enabling containers to be provisioned repeatably. Although Docker can operate independently, many organisations prefer to use it alongside other CM tools.
How the code is orchestrated is key, and without full monitoring, audit and control of how the scripts are run, chaos could follow.
This is where intelligent IaC comes to the fore – but things can still go wrong. Each system should be able to roll back to previous known states and defined preferred states should be capable of being maintained if something starts to move the workload environment outside of that state.
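The drift-and-rollback behaviour described above can be sketched as a simple control loop (a hypothetical illustration: the resource names, states and known-good map are assumptions, not a real tool’s API):

```python
# Hypothetical "intelligent" state maintenance: compare actual state
# with the last known-good state, and roll back anything that drifts.
KNOWN_GOOD = {"web01": "running", "db01": "running"}

def detect_drift(actual):
    # Report every resource whose state differs from the preferred state.
    return {name: state for name, state in actual.items()
            if KNOWN_GOOD.get(name) != state}

def rollback(actual):
    # Restore the preferred state for drifted resources we know about;
    # unknown resources would be reported to an administrator instead.
    for name in detect_drift(actual):
        if name in KNOWN_GOOD:
            actual[name] = KNOWN_GOOD[name]
    return actual
```

In practice this loop would run continuously as part of monitoring, so the environment is nudged back towards its defined preferred state rather than left to decay.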
Why is it needed?
IaC is becoming a much-needed way of dealing with the complexities of an organisation’s hybrid IT platform. Today’s businesses need a more responsive IT environment; they demand the capability to change and adapt IT functionality to better support the business’s changing needs.
Attempting this with hard-coded one-off scripts, or through a mish-mash of siloed tools, will not give businesses the support they need. In fact, it may well further hinder IT management in attempting to prove its worth to the business itself.
As such, IaC – whether or not the term itself rubs you up the wrong way – will provide that much-needed set of checks and balances to enable the provisioning and management of workloads across complex, hybrid IT environments.