
The role of infrastructure as code in edge datacentre computing

As datacentre computing is pushed to the edge of the organisation’s network, IT has to address the overheads associated with remote server management

Having servers at the edge is nothing new. IT leaders recognise that while they may want to centralise IT as much as possible to improve efficiency and reduce admin costs, there is a need to put systems, services and data closer to where they are required.

A content delivery network (CDN) uses a network of servers to cache data closer to where it is being consumed in order to speed up access. In 2020, James Staten, vice-president, principal analyst at Forrester, blogged that CDNs offer a way to pair endpoint-deployed applications with endpoint device content analysis. “Rather than simply enabling clients to push their apps and data closer to the customer, these edge technologies are collecting data from internet-of-things [IoT] devices and end-user mobile devices,” Staten wrote.

Conceptually, a CDN can be regarded as a form of edge computing in that it makes data available closer to where it is being consumed. In effect, the network of servers from which a CDN is built is used to distribute data out towards the edge.

Edge computing, and the idea of datacentres at the edge, extends this concept. Data is not only consumed at the edge; vast amounts of data processing can also take place there. This avoids overloading network bandwidth with huge volumes of data, such as video streamed from networked CCTV cameras for analytics.

The devices and supporting IT infrastructure at the edge of the network take IoT to the next level, offering the potential to run sophisticated enterprise systems in a decentralised way.

From a software architecture perspective, the industry uses the term “serverless computing” to describe a way of delivering the compute, storage and network resources required to run cloud-native workloads. The benefit of serverless is that the application developer does not have to worry about physical servers, which can be located in a public cloud, in a private cloud running in an on-premises datacentre, or in one running at the edge of the organisation’s network.

However, the IT operations landscape expands dramatically as workloads move to the network’s edge and more advanced data processing takes place outside the traditional datacentre. The ease of centralised IT administration is replaced by the need to manage a diverse and highly distributed estate of servers and devices at the edge.

Running enterprise IT at the edge

Managing datacentre computing located at the edge of the corporate network is becoming as important as the management of centralised IT systems. There has always been a need to manage branch office IT and remote server rooms efficiently, because they are generally less resourced in terms of IT administrators on site, compared with the central IT function.

There is now a groundswell of activity around managing the configuration of IT systems in the same way that source code is managed by software development teams. Scott McAllister, developer advocate at PagerDuty, says: “As IT infrastructure has progressively decoupled from physical machines we can touch, managing and provisioning that infrastructure has moved to software services in the cloud. Those services are built with robust user interfaces for manual configuration. However, handling those configurations at scale is tedious and can lead to system fragility.”

Once hailed as the future of infrastructure management and now de facto best practice, infrastructure as code (IaC) automates the provisioning and management of compute resources with machine-readable templates. According to Chris Astley, a partner and head of engineering at KPMG UK, IaC is the clear choice for automation in a cloud context and is also making inroads into private datacentres.

“Prior to IaC, system engineers had the laborious task of manually provisioning and configuring their compute infrastructure,” he says. “With cloud providers in particular updating features and capabilities daily, this had become an overwhelming task. With IaC, engineers now have the means to better manage version control, deploy and improve their enterprise’s cloud infrastructure more quickly, more cheaply and more efficiently than ever before.”
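
To make the idea concrete, the sketch below shows, in plain Python, the pattern IaC tools follow: the desired infrastructure is declared as data, and an idempotent apply step provisions only what differs from that declaration. The DESIRED_STATE template and provision_server() helper are hypothetical stand-ins for illustration, not any particular tool’s format or API.

```python
# Minimal illustration of the IaC idea: infrastructure is described as data,
# and an idempotent "apply" step reconciles what exists with what is declared.
# provision_server() is a stand-in for a real cloud SDK or IaC tool.

DESIRED_STATE = {
    "edge-web-01": {"cpu": 4, "memory_gb": 16, "region": "edge-site-london"},
    "edge-web-02": {"cpu": 4, "memory_gb": 16, "region": "edge-site-leeds"},
}

def provision_server(name: str, spec: dict) -> None:
    """Placeholder for an API call that creates or updates a server."""
    print(f"provisioning {name} with {spec}")

def apply(current_state: dict) -> None:
    """Create or update anything that differs from the declared template."""
    for name, spec in DESIRED_STATE.items():
        if current_state.get(name) != spec:
            provision_server(name, spec)

if __name__ == "__main__":
    apply(current_state={})  # an empty estate: everything gets provisioned
```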

Benefits of infrastructure as code

Much of the best thinking on the value of infrastructure as code (IaC) comes from security expert Sounil Yu and his distributed, immutable and ephemeral (DIE) model of security, says Daniel Riedel, senior vice-president for strategic services at Copado.

The DIE model is a counterpart to the confidentiality, integrity, availability (CIA) triad and centres on securing systems that are constantly changing and growing. Riedel offers three main reasons why IaC is a good fit for DIE:

Distributed: IaC is designed to build distributed systems. By its very nature, it is set up to be run on cloud-based, decentralised systems.

Immutable: The environment is fixed and immutable, making changes very difficult. This adds a further layer of security while also creating auditable records.

Ephemeral: IaC allows for rapid deployment of containers or short-lived functions (such as AWS Lambda or Copado Functions). Each instance serves a single request or event and is then disposed of and replaced, as in the sketch below.
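
As a rough illustration of the ephemeral pattern Riedel describes, the minimal handler below follows the shape of an AWS Lambda function in Python: the platform spins up an instance to serve one event and then disposes of it. The event fields are invented for the example.

```python
# Sketch of an ephemeral, single-purpose function in the style of AWS Lambda.
# The runtime creates an instance to answer one event and then disposes of it;
# nothing about the underlying server is configured or patched by hand.

import json

def lambda_handler(event, context):
    """Handle one event (for example, a reading from an edge sensor) and exit."""
    reading = event.get("temperature_c")
    status = "alert" if reading is not None and reading > 75 else "ok"
    return {
        "statusCode": 200,
        "body": json.dumps({"status": status, "temperature_c": reading}),
    }
```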

Although DIE has been around for several years, Riedel says many organisations are still building out their approaches for managing their infrastructure. “Stateful systems will always exist, but the ephemeral nature allows us to protect them and use them in a secure environment while giving flexibility to scale and compute costs,” he says.

As well as enabling faster, more consistent and automated provisioning of infrastructure for DevOps teams, IaC’s greatest impact, believes Piyush Sharma, vice-president of cloud security engineering at Tenable, lies in its ability to transform the processes used to develop, deploy and operate immutable infrastructure. “Whether or not development and DevOps teams realise it, the tools and approaches they adopt to solve engineering challenges have impacts across the business,” he says.

“IaC enforces immutability into runtime infrastructure, which means each component of the architecture is built using an exact configuration. This capability reduces the possibility of infrastructure drift, which could move it away from the desired configurations.”
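
A minimal sketch of what detecting such drift can look like: the configuration declared in code is compared with what is actually observed at runtime. The fetch_runtime_config() call is a hypothetical stand-in for querying a real provider’s API.

```python
# Illustrative drift check: compare the configuration declared in code with
# what is observed at runtime, and report any divergence.

DECLARED = {"instance_type": "t3.medium", "port": 443, "tls": True}

def fetch_runtime_config() -> dict:
    """Placeholder for an API call that inspects the live resource."""
    return {"instance_type": "t3.medium", "port": 8080, "tls": True}

def detect_drift(declared: dict, observed: dict) -> dict:
    """Return the keys whose runtime value no longer matches the code."""
    return {
        key: {"declared": declared[key], "observed": observed.get(key)}
        for key in declared
        if declared[key] != observed.get(key)
    }

if __name__ == "__main__":
    drift = detect_drift(DECLARED, fetch_runtime_config())
    print(drift or "no drift detected")
```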

Whereas IT provisioning processes have traditionally required long waits and manual effort, the advantage of IaC is that it enables teams to provision the infrastructure they need in a matter of minutes at the press of a key, says Sharma. “Even better, modifying, scaling or duplicating the environment is as simple as modifying the source code and reprovisioning,” he adds.

For Sharma, IaC holds the key to modernising manual processes in operations, breaking down organisational silos and delivering more value. He points out that applications need to scale automatically and ecosystems have developed around approaches such as Atlantis, Kubernetes and GitOps. “Operational tasks are reduced to code commits that trigger automated processes that reconcile the runtime configuration with the committed changes,” he says.
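
The sketch below illustrates that kind of commit-triggered reconciliation, assuming a Terraform-based pipeline: the standard plan and apply commands are wrapped in a small Python job of the sort a CI system might run after each commit. Any real GitOps setup would differ in the details.

```python
# Sketch of a GitOps-style step: a commit to the infrastructure repository
# triggers a pipeline job that reconciles the runtime environment with the
# committed code. Assumes the Terraform CLI is installed on the runner.

import subprocess
import sys

def reconcile(workdir: str) -> None:
    """Plan against the committed code and apply only if changes are detected."""
    plan = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false"],
        cwd=workdir,
    )
    if plan.returncode == 2:  # exit code 2 means the plan contains changes
        subprocess.run(
            ["terraform", "apply", "-auto-approve", "-input=false"],
            cwd=workdir,
            check=True,
        )
    elif plan.returncode != 0:
        sys.exit("plan failed")

if __name__ == "__main__":
    reconcile(workdir=".")
```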

Securing the edge

What is less well understood is IaC’s role in security. KPMG’s Astley urges organisations to integrate IaC into their cyber security strategy as quickly as possible, because it can help to prevent and remediate cyber attacks. A recent study from Harvey Nash reported that 43% of digital leaders say they have a cyber security talent shortage.

“IaC is something that can help automate some security tasks, therefore lightening their workload and allowing InfoSec teams to focus on more business-critical issues,” says Astley.

While engineers previously had to provision and configure their cloud manually, in Astley’s experience, using input scripts through IaC offers a single source of truth. He says: “The positive effect of this is the removal of possible human error when any changes to infrastructure are made, drastically reducing the potential for the opening of a new exploitable vulnerability for threat actors to take advantage of. It is also possible to view all code misconfigurations in one place and therefore faster to manage and remediate them.”

Astley also points out that the automation offered by IaC gives IT operations teams a way to deploy updates from cloud providers instantly. “When new and secure iterations of cloud tools are released, there is minimal delay to updates, reducing their risk exposure,” he says.

As Astley notes, one of the greatest benefits of IaC, when done right, is that it provides 100% accurate and up-to-date documentation of the live environment itself. “InfoSec teams will find this invaluable in performing threat assessments,” he says. In fact, these threat assessments can be run automatically against the code.
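
One way such automated assessments can work is policy as code: simple rules are evaluated against the IaC templates before anything is deployed. The sketch below uses a hypothetical parsed template and two illustrative rules; it is not modelled on any particular scanner.

```python
# Illustrative policy-as-code check: because the environment is documented as
# code, threat-assessment rules can be run against the templates automatically.
# TEMPLATE is a made-up parsed IaC document, not any tool's real format.

TEMPLATE = {
    "edge_storage": {"type": "object_store", "encrypted": False},
    "edge_firewall": {"type": "firewall_rule", "source_cidr": "0.0.0.0/0", "port": 22},
}

def assess(template: dict) -> list[str]:
    """Return findings for configurations that look exploitable."""
    findings = []
    for name, resource in template.items():
        if resource.get("type") == "object_store" and not resource.get("encrypted"):
            findings.append(f"{name}: storage is not encrypted at rest")
        if resource.get("source_cidr") == "0.0.0.0/0" and resource.get("port") == 22:
            findings.append(f"{name}: SSH is open to the internet")
    return findings

if __name__ == "__main__":
    for finding in assess(TEMPLATE):
        print(finding)
```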

Astley also believes IaC gives teams a way to build up an understanding of common vulnerabilities, and to maintain a documented response and improvement process for addressing weaknesses discovered during threat assessment audits.

He says IaC is also vital to an organisation’s recovery after a cyber incident – especially with regard to common exploits such as ransomware. “With IaC, requirements of resources are already codified, which makes it ideal for incident response and disaster recovery,” he says.

“If an attack should occur, with IaC it is now possible for IT teams to perform disaster recovery by rapidly generating a new, identical environment from the IaC scripts and the previous backups. Being able to restore to a known working state in minutes is critical to fast recovery from that scenario.”
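
A rough outline of that recovery flow is sketched below; rebuild_from_code() and restore_backup() are hypothetical placeholders for an organisation’s provisioning tool and backup system.

```python
# Sketch of the recovery flow Astley describes: rebuild a clean environment
# from the IaC definitions, then restore application data from the last known
# good backup, rather than trying to repair compromised infrastructure.

def rebuild_from_code(environment: str) -> None:
    """Re-provision an identical environment from the version-controlled IaC."""
    print(f"re-provisioning {environment} from IaC definitions")

def restore_backup(environment: str, snapshot_id: str) -> None:
    """Load application data into the freshly built environment."""
    print(f"restoring snapshot {snapshot_id} into {environment}")

def disaster_recovery(environment: str, snapshot_id: str) -> None:
    """Recover from an incident such as ransomware: build new, then restore data."""
    rebuild_from_code(environment)
    restore_backup(environment, snapshot_id)

if __name__ == "__main__":
    disaster_recovery("edge-site-london", snapshot_id="2024-01-31-nightly")
```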

Common language

John Davis, distinguished engineer at Kyndryl, believes DevSecOps has encouraged developers to become more familiar with infrastructure, and operations teams to become more application-aligned. He points out that IaC serves as a common language through which both can communicate, collaborate and co-create.

But to be successful with IaC, Davis urges IT leaders to consider the wider systems context of the build process. “Most organisations will have several systems that need updating because of new environments,” he says. “Anyone can provision a server in the cloud quickly, but becoming production-ready, secure and segregated with the correct network flows is where a well-defined IaC design is key.”

Davis recommends that to maximise the opportunity of using IaC, IT leaders need to assess how automated IT configuration updates can be integrated into the wider IT support ecosystem. “Once you have moved to IaC, you should be able to remove secondary controls that were previously in place to validate the completeness of manual work,” he says.

For Davis, an environment built using IaC via a DevOps pipeline offers a precision of execution and audit capability that makes some of these controls redundant.

Such capabilities are essential at the edge of the organisation’s network, whether the edge datacentre supports a remote server room, artificial intelligence-powered data acquisition from edge devices, or branch office enterprise systems.
