Unlike digital-first organisations, traditional businesses have a wealth of enterprise applications built up over decades, many of which continue to run core business processes.
In this series of articles we investigate how organisations are approaching the modernisation, replatforming and migration of legacy applications and related data services.
We look at the tools and technologies available encompassing aspects of change management and the use of APIs and containerisation (and more) to make legacy functionality and data available to cloud-native applications.
This post is written by Amith Nair, VP of product marketing at HashiCorp, a company known for its open source tools and commercial products that enable developers, operators and security professionals to provision, secure, run and connect cloud-computing infrastructure.
Nair writes as follows…
To thrive in an era of multi-cloud architecture, driven by digital transformation, enterprise IT must evolve from ITIL-based gatekeeping to enabling shared self-service processes for DevOps excellence.
For most enterprises, digital transformation efforts mean delivering new business and customer value more quickly… and at a very large scale. To unlock the fastest path to value in the cloud, enterprises must consider how to industrialise the application delivery process across each layer of the cloud: infrastructure, security, networking and application runtime.
The adoption of cloud means organisations shift away from provisioning and managing static infrastructure towards using dynamic infrastructure. Dynamic infrastructure means IT operations teams must now provision and manage a far greater volume and distribution of services, embrace ephemerality and immutability… and deploy onto multiple target environments.
Organisations that have implemented infrastructure automation workflows are then challenged by the processes that security, compliance and central IT impose to govern operational best practice and enforce policy, processes which affect both productivity and risk.
Infrastructure provisioning often lacks automation that is scalable, consistent and reusable. Infrastructure automation for cloud is best addressed with Infrastructure as Code for the provisioning, compliance and management of any public cloud, private datacentre and third-party service. There are three milestones organisations can follow as they build out mature workflows for infrastructure automation while adopting cloud.
- Adopt collaborative Infrastructure as Code to write, share, manage and automate any infrastructure.
- Automate policy enforcement for security, compliance and operational best practices across the organisation.
- Provide self-service infrastructure workflows that let developers provision the infrastructure they need from within their own workflows.
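The core idea behind Infrastructure as Code is declarative: you describe the desired end state and the tooling computes and applies only the difference from the current state. A minimal sketch of that plan step, with hypothetical resource names and fields (not any particular vendor's implementation):

```python
# Declarative IaC in miniature: diff desired state against current state
# and emit a plan of creates, updates and deletes. All resource names
# and attributes here are invented for illustration.

desired = {
    "web-server": {"type": "vm", "size": "small"},
    "db-server": {"type": "vm", "size": "large"},
}
current = {
    "web-server": {"type": "vm", "size": "small"},
    "old-cache": {"type": "vm", "size": "small"},
}

def plan(desired, current):
    """Compute the changes needed to move current state to desired state."""
    to_create = {k: v for k, v in desired.items() if k not in current}
    to_update = {k: v for k, v in desired.items()
                 if k in current and current[k] != v}
    to_delete = {k: v for k, v in current.items() if k not in desired}
    return to_create, to_update, to_delete

create, update, delete = plan(desired, current)
print("create:", sorted(create))  # ['db-server']
print("update:", sorted(update))  # []
print("delete:", sorted(delete))  # ['old-cache']
```

Because the plan is computed rather than hand-written, the same description can be applied repeatedly and shared across teams, which is what makes the workflow collaborative and reusable.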
This shift in operating models requires a fundamentally different approach to security: instead of focusing on a secure network perimeter with the assumption of trust, the focus is to acknowledge that the network in the cloud is inherently “low trust” and move to the idea of securing infrastructure and application services themselves through a trusted source of identity and secrets management.
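The practical consequence of "low trust" networking is that authorisation decisions key off a verified service identity rather than a network location. A toy sketch of identity-based, default-deny access (the identities and policy table are hypothetical):

```python
# Identity-based access in miniature: a request is allowed only if policy
# grants this verified identity access to the target service. Being on the
# "right" network segment counts for nothing. Policy entries are invented.

POLICY = {
    ("web", "billing"): True,   # the web service may call billing
    ("web", "payroll"): False,  # the web service may not call payroll
}

def authorise(source_identity, target_service):
    """Allow a request only if policy explicitly grants access (default deny)."""
    return POLICY.get((source_identity, target_service), False)

assert authorise("web", "billing") is True
assert authorise("web", "payroll") is False
assert authorise("unknown", "billing") is False  # unlisted pairs are denied
```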
People: Developers and operations staff struggle with the sprawl of secrets and access credentials sprinkled throughout their applications and infrastructure given the highly dynamic nature of a cloud environment.
Process: Teams lack a consistent workflow for interacting with secret data as applications start to span public and private clouds. Teams are challenged to ensure sensitive data is secure and has proper access controls.
Tools: Organisations have no central trusted system to manage secrets and protect sensitive data. This can lead to multiple cumbersome solutions that lack consistency across environments.
Security automation for the cloud is best addressed with a way to secure, store and tightly control access to tokens, passwords, certificates and encryption keys, protecting secrets and other sensitive data across public and/or private clouds.
- Provide secrets management for the cloud with a way to centrally store, access and distribute dynamic secrets such as tokens, passwords, certificates and encryption keys.
- Enable advanced data protection and keep application data secure with centralised key management and simple APIs to encrypt/decrypt data.
- Provide identity-based access for developers to authenticate and access different clouds, systems and endpoints using trusted identities.
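The distinguishing feature of dynamic secrets is the lease: credentials are minted on demand, expire automatically, and can be revoked early. A minimal in-memory sketch of that lifecycle (a real system would add authentication, audit logging and encryption at rest; all names here are illustrative):

```python
import secrets
import time

class SecretsStore:
    """Toy central secrets store issuing short-lived, revocable credentials."""

    def __init__(self):
        self._leases = {}  # lease_id -> (secret value, expiry timestamp)

    def issue(self, ttl_seconds):
        """Mint a dynamic secret under a lease that expires after ttl_seconds."""
        lease_id = secrets.token_hex(8)
        value = secrets.token_urlsafe(16)
        self._leases[lease_id] = (value, time.monotonic() + ttl_seconds)
        return lease_id, value

    def lookup(self, lease_id):
        """Return the secret if the lease is still valid, else None."""
        entry = self._leases.get(lease_id)
        if entry is None or time.monotonic() > entry[1]:
            self._leases.pop(lease_id, None)  # lazily revoke expired leases
            return None
        return entry[0]

    def revoke(self, lease_id):
        """Revoke a lease early, e.g. when a workload is torn down."""
        self._leases.pop(lease_id, None)

store = SecretsStore()
lease, value = store.issue(ttl_seconds=60)
assert store.lookup(lease) == value  # valid while the lease lives
store.revoke(lease)
assert store.lookup(lease) is None   # gone once revoked
```

Because every credential has an expiry, a leaked secret has a bounded useful lifetime, which is what makes sprawl across a dynamic environment survivable.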
Cloud adoption is driven by organisations prioritising application delivery. The essential implication is that the operating model must accommodate the shift from “static” IP infrastructure in private datacentres to “dynamic” IP infrastructure across clouds. Organisations should build a foundation for cloud network automation via a central service registry that can then be used to monitor service health, automate network middleware and enable identity-based zero trust networking.
People: Central IT struggles to deliver a consistent NetOps workflow in a dynamic IP environment. Developers want to improve application delivery cycles, but are stymied by many manual networking tasks.
Process: Developers still request networking via ticketing, which is slow. Policy enforcement is manual or non-existent, which affects both productivity and risk.
Tools: Teams have to use multiple networking tools which decreases productivity. Services networking lacks automation that is scalable, consistent and reusable.
Network automation for cloud is best addressed with a shared registry for connecting and securing services across any runtime platform and any public and/or private cloud.
- Provide a Service Registry and Discovery workflow to track and improve the resilience and visibility of running services.
- Enable networking middleware automation to improve productivity by automating networking operations.
- Assume zero trust networking and provide a Service Mesh to enable secure service-to-service communication.
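The registry-and-discovery workflow above can be sketched very simply: instances register themselves under a service name, report health, and consumers discover only the healthy instances, so failing endpoints drop out of rotation automatically. Service names and addresses below are invented:

```python
class ServiceRegistry:
    """Toy central service registry with health-aware discovery."""

    def __init__(self):
        self._services = {}  # service name -> {address: healthy?}

    def register(self, name, address):
        """An instance announces itself; it starts out healthy."""
        self._services.setdefault(name, {})[address] = True

    def set_health(self, name, address, healthy):
        """Record the result of a health check for one instance."""
        if address in self._services.get(name, {}):
            self._services[name][address] = healthy

    def discover(self, name):
        """Return the addresses of healthy instances only."""
        return sorted(a for a, ok in self._services.get(name, {}).items() if ok)

registry = ServiceRegistry()
registry.register("billing", "10.0.0.5:8080")
registry.register("billing", "10.0.0.6:8080")
registry.set_health("billing", "10.0.0.6:8080", False)  # failed health check
print(registry.discover("billing"))  # ['10.0.0.5:8080']
```

With discovery in place, load balancers and middleware can be driven from the registry instead of from tickets, which is the basis of the middleware automation mentioned above.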
At the application layer, new apps are increasingly distributed while legacy apps also need to be managed more flexibly. Organisations should look for a flexible orchestrator to deploy and manage legacy and modern applications, for all types of workloads: from long-running services to short-lived batch jobs to system agents.
To achieve shared services for application delivery, IT teams use a workload orchestrator in concert with networking, infrastructure and security tools to enable consistent delivery and orchestration of applications on cloud infrastructure incorporating necessary compliance, security and networking requirements.
People: Developers are creating and packaging applications across a variety of application platforms, including container packaging and existing or legacy applications. Developers are under increasing pressure to create and deploy applications faster, and are slowed by cloud adoption and by having to learn the new tools that manage the infrastructure and the deployment of those applications.
Process: Rather than relying on filing tickets to operations teams to handle the deployment and updates and going through multiple stages of review and testing, developers now prefer a self-service experience to deploy applications on-demand to improve their velocity.
Meanwhile, operators are overwhelmed by increasing demands from developers and look for a standardised, automated deployment workflow that supports mixed application types in a consistent and efficient way.
Tools: The heterogeneity of clouds and runtime platforms has developers learning many new tools. Individual developers or development teams with different backgrounds may be familiar with a particular platform-specific tool, while some are only comfortable with legacy VM-based deployment workflows and platforms. Central IT teams are therefore left maintaining multiple inconsistent deployment approaches and dedicating resources to debugging runtime errors. This adds significant operational complexity, as moving or migrating workloads across platforms causes a lot of friction.
For modern applications, typically built in containers, a runtime automation tool should provide the same consistent workflow at scale in any environment. Most organisations take a multi-year approach to modernising their existing applications through refactoring or rearchitecting.
Tools at this layer should focus on simplicity and effectiveness in orchestration and scheduling, enabling incremental migration and avoiding a big-bang refactoring. Adopting complex orchestration platforms with many moving components across multiple infrastructure layers at the outset will slow down the process and increase the risk of failure. Successful companies generally start by evaluating their top priorities and goals for adopting a workload orchestrator, selecting the most critical use cases they want to address and implementing their runtime application automation strategy and tools on that basis.
- Simple container orchestration: easily deploy, manage and scale containers in production, avoiding steep learning curves.
- Non-containerised application orchestration: bring orchestration benefits to existing applications without containerising all of them, allowing teams to containerise at their own pace.
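At its core, a workload orchestrator answers one question: given a set of jobs and a pool of nodes, where should each job run? A minimal greedy best-fit sketch of that placement decision, with invented jobs (a long-running service, a batch job and a system agent) and node capacities measured only in memory for simplicity:

```python
def schedule(jobs, nodes):
    """Greedy best-fit placement: put each job on the node with the least
    remaining capacity that can still hold it (simple bin packing)."""
    placements = {}
    free = dict(nodes)  # node -> free memory (MB)
    # Place the largest jobs first so they are hardest to strand.
    for name, need in sorted(jobs.items(), key=lambda j: -j[1]):
        candidates = [n for n, cap in free.items() if cap >= need]
        if not candidates:
            placements[name] = None  # unschedulable on current capacity
            continue
        best = min(candidates, key=lambda n: free[n])
        free[best] -= need
        placements[name] = best
    return placements

# Hypothetical mixed workloads and a two-node pool.
jobs = {"web": 2048, "batch-report": 1024, "log-agent": 256}
nodes = {"node-a": 2560, "node-b": 2048}
print(schedule(jobs, nodes))
# {'web': 'node-b', 'batch-report': 'node-a', 'log-agent': 'node-a'}
```

Real orchestrators layer constraints, priorities and multi-dimensional resources on top of this, but the same workflow applies uniformly to containerised and non-containerised workloads, which is what enables incremental migration.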
A common cloud operating model is ideal for enterprises aiming to maximise their digital transformation efforts. Enterprise IT needs to evolve away from ITIL-based control points, with their focus on cost optimisation, towards becoming self-service enablers focused on speed optimisation. It can do this by delivering shared services across each layer of the cloud.