As businesses continue to modernise their server estate and move towards cloud-native architectures, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption.
These days, it is often more efficient to deploy an application in a container than in a virtual machine. Computer Weekly examines the trends, dynamics and challenges faced by organisations migrating to the micro-engineered world of software containerisation.
As all good software architects know, a container is a ‘logical’ computing environment in which a guest application runs abstracted away from the underlying host system’s hardware and software infrastructure resources.
So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?
This post is written by Sean Roth, in his capacity as director of product marketing for Karbon & Cloud Native Solutions at enterprise cloud company Nutanix. Roth’s full title for this discussion is: How to transform your datacentre into a Kubernetes powerhouse for developers.
Roth writes as follows…
Containers and microservices architecture have emerged as top choices for building modern applications. Over half of organisations surveyed for the 2020 State of Enterprise Open Source report noted they ‘expected their use of containers to increase in the next 12 months’, and another survey by the Cloud Native Computing Foundation suggested container use in production reached 84% in 2020, up from 23% in its first survey, conducted in 2016.
It’s no surprise cloud-native technology is growing in popularity — it simplifies complex applications while simultaneously enabling businesses to build, deploy and scale them faster. Other benefits of going cloud-native include decreased time to market and lower costs. But figuring out the building blocks of cloud-native architecture and creating a Kubernetes powerhouse is easier said than done.
Here’s what to consider.
Identify key organisational challenges
Kubernetes is deep and complex, and its growing ecosystem of technologies evolves fast; many organisations lack the in-house expertise to keep pace with developers’ needs. Legacy infrastructure isn’t built with Kubernetes in mind: Kubernetes is a distributed system, requiring compute, storage and networking that adapt to the way it operates and consumes IT resources. Finally, it can be difficult to optimise multi-cloud usage; without the right tools, costs and management burden can skyrocket. A myriad of open source and commercial offerings can help confront these challenges, addressing every aspect of building, deploying and managing modern applications. But the sheer number of choices can easily paralyse decision-makers.
When you take a step back, you can see many decisions are governed by the desired tradeoff between simplicity and control.
For example, public cloud providers offer many cloud-native technologies as services decoupled from the underlying infrastructure and its associated complexities. These services are production-ready and easy to consume, but the drawback, in many cases, is the limited control they offer experienced users. Building a dedicated cloud-native environment in one’s own datacentre brings substantially more freedom in how it is architected and operated, but at the price of high complexity and heavy lifting on the part of IT operations to get it up and running.
Thinking through these challenges and important decisions before implementing a new technology can help set your organisation up for long-term success.
Invest in Kubernetes management
The majority of enterprises today operate a hybrid mix of on-prem and public-cloud Kubernetes environments, but they don’t all get there the same way in terms of what is operationalised first. If hybrid cloud Kubernetes is a key objective, organisations must also consider which cloud-native tools and technologies can help them without locking them in through too many proprietary elements. Here, organisations should seek out solutions that deliver a native Kubernetes user experience.
Leaders can look to Kubernetes management services, which enable teams to do more with fewer resources and without a large specialist team. Customers require a single-console, hybrid cloud management system; they need visibility into, and control of, all of their Kubernetes deployments through a single pane of glass.
A Kubernetes management system solves this problem: it allows administrators to provide the infrastructure developers need to build the next generation of applications, all within their private datacentre.
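As an illustration, such management layers typically expose cluster provisioning declaratively. The fragment below is a hypothetical manifest, loosely modelled on the open source Kubernetes Cluster API (cluster.x-k8s.io); the names and values are illustrative only, not any specific product’s schema.

```yaml
# Hypothetical declarative cluster request, loosely modelled on the
# Kubernetes Cluster API. All names and values are illustrative;
# provider-specific control-plane and infrastructure references are omitted.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: dev-team-a          # one cluster per development team
  namespace: clusters
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # pod network range
```

An administrator applies a manifest like this once; the management layer then handles provisioning, upgrades and scaling behind the scenes.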
The transition from silos to DevOps
IT leaders want to make Kubernetes easy and eliminate the heavy lifting to enable IT Ops to keep up with developers. This means focusing energy and investments on transitioning from silos to DevOps. The DevOps market is expected to grow to US$10.31 billion by 2023 and it’s clear that an increasing number of organisations are turning to DevOps methodologies.
Creating a DevOps environment means realigning people, processes and technologies in a broader cultural shift, so that they work together seamlessly and enable agile software development.
Talk is cheap — goals must be set (ideally with metrics) so you can form an action plan and measure how your DevOps organisation performs. These action plans will call for cross-team disciplines, coordination and priorities to unblock conflicts and break through silos. Change is hard and an ad-hoc approach won’t work.
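DevOps goals are commonly measured with metrics such as deployment frequency, lead time for changes and change failure rate (the so-called DORA metrics). As a minimal sketch of turning goals into numbers, assuming a simple `Deployment` record type of my own invention:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deployment:
    commit_time: datetime   # when the change was first committed
    finished: datetime      # when it reached production
    failed: bool            # did it cause a production failure?

def devops_metrics(deploys, window_days=30):
    """Compute simple DORA-style metrics over a list of deployments."""
    frequency = len(deploys) / window_days                       # deploys per day
    lead_times = [d.finished - d.commit_time for d in deploys]
    avg_lead = sum(lead_times, timedelta()) / len(deploys)       # mean commit-to-deploy
    failure_rate = sum(d.failed for d in deploys) / len(deploys) # share that failed
    return frequency, avg_lead, failure_rate
```

Tracking numbers like these over time is what lets a team verify that its DevOps action plan is actually working.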
Design for automation
As this Google article covers, while the upfront investment in automation might be high, it pays off in the medium term in the effort, resilience and performance of your system. Automated infrastructure is core to going cloud-native, enabling your team to spend less time on repetitive tasks and more time on building systems that scale seamlessly.
From an infrastructure perspective, automation is essential to keeping up with developers’ needs. Kubernetes lifecycle management involves a lot of manual work, so the ability to automatically execute upgrades, scale clusters and provision new ones is essential for speeding applications’ time to production. Automation also ensures a much higher level of consistency across the software development pipeline in terms of test, staging and deployment.
I’d recommend automating everything from notifications, provisioning, testing and deployment to fully load-balanced, KPI-based auto-scaling. Give developers autonomy and access to your infrastructure, and work with them across operations and budget to create the systems and tools that drive that automation; you’ll see good things happen.
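As a concrete example of KPI-based auto-scaling, the Kubernetes Horizontal Pod Autoscaler documents its scaling rule as desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch of that formula, with clamping to assumed min/max bounds:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Replica count per the Kubernetes HPA formula:
    ceil(current * currentMetric / targetMetric), clamped to bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# With 4 replicas at 90% average CPU against a 60% target,
# the autoscaler would scale out to 6 replicas:
desired_replicas(4, 90, 60)   # → 6
# At 30% against the same target, it would scale in to 2:
desired_replicas(4, 30, 60)   # → 2
```

The same ratio-based rule works for any KPI (CPU, queue depth, requests per second), which is what makes metric-driven autoscaling a good first automation target.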
Combined with DevOps methodologies, the right technology to manage Kubernetes can enable your organisation to stay ahead of the competition. The right tools can increase throughput, reduce lead times and reduce production failure rates — and this can happen without specialised expertise in the infrastructure required to deliver cloud-native applications. Success just requires the right mindset and an openness to change.