The rise of DevOps in the enterprise has seen many organisations streamline the processes they use to push their code out of test and development environments and into production, gaining greater capabilities in areas such as continuous delivery along the way.
This approach to software delivery isn’t without its problems, though: some organisations allow developers to push through code without checking whether the business actually needs it. The result can be chaos, as root cause analysis of any problems becomes difficult and users have to cope with rapid changes to the way they work.
When done right, DevOps can create a dynamic and flexible IT environment that is far more responsive to the business’s needs, as many of the process management jobs IT teams were previously required to do during traditional cascade projects are wound down.
But some organisations are looking to take things even further. After all, if development teams can be empowered to move things from test and development and into operations, then why not make them responsible for maintaining that code in the production environment?
This is the thinking behind NoOps – an environment so automated that a dedicated operations team is no longer needed. But although NoOps has some undoubted benefits, it is an approach to software delivery that is only suitable for use in specific situations.
NoOps: Where it does and doesn’t work
In an owned IT platform environment – for example, a private datacentre, or where equipment is installed in a colocation facility – NoOps isn’t really viable. This is because the equipment still needs provisioning. Servers need connecting to networks and operating systems need to be installed, while storage volumes and network addresses also need setting up.
This isn’t the work of developers, so dedicated operations staff will be required to carry it out, as well as to monitor and maintain the entire IT estate.
Developers are there to develop specific code and they need to ensure it’s in the right place for it to run effectively.
For NoOps to work, it needs an IT platform that developers don’t need to worry about in terms of resource constraints – and that’s where the cloud comes in.
Once the hardware is out of the hands of the organisation, the operations side of the equation becomes someone else’s problem. The cloud provider has the job of provisioning, monitoring and maintaining the hardware and – provided a suitable service level agreement (SLA) has been settled – the physical aspects of the platform become relatively immaterial.
Then it’s up to the developer to ensure the code they’ve developed and tested in a contained environment is fit for purpose. All too often, even in cascade projects, developers fall into the trap of believing their operational environment will perform the same as their development one, forgetting that much of what they do is self-contained in their own workstation or hived away from the vagaries of the main network.
Virtualised test environments with synthetic workloads can provide some guidance on how code will operate in production, but running code as a parallel stream to existing code in the operational environment gives a real-world experience.
A team of tame guinea pigs can roll out code in the operational environment against live data across the same networks as those who are using the existing code. Therefore, the user experience will be very similar to when the new code goes live.
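The parallel-run approach described above amounts to a weighted traffic split between the existing and candidate code paths. The following is a minimal, illustrative Python sketch of that idea – the function names, the 5% default weight and the placeholder handlers are assumptions, not any particular vendor's tool:

```python
import random

def handle_with_current_code(request):
    # Existing, proven code path (placeholder logic).
    return {"version": "current", "result": request.upper()}

def handle_with_new_code(request):
    # Candidate code path being trialled against live traffic (placeholder logic).
    return {"version": "new", "result": request.upper()}

def handle_request(request, canary_weight=0.05):
    """Send roughly canary_weight of live requests to the new code.

    Both paths see the same live data over the same network, so the
    trial users' experience closely mirrors a real go-live.
    """
    if random.random() < canary_weight:
        return handle_with_new_code(request)
    return handle_with_current_code(request)
```

Dialling `canary_weight` up gradually widens the guinea-pig group until the new code carries all traffic.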
However, this does not negate the need to ensure the code is packaged and provisioned correctly. Developers may not be the best people to do this, but automation tools are coming through that will make this easier.
Tech answers to the NoOps conundrum
Suppliers in this space include Automic, which provides an application release automation capability that can ensure code is packaged correctly with all dependencies covered. Electric Cloud offers something similar and integrates with existing tools to enable developers to work in their usual way.
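The core job such release automation tools perform – verifying that every dependency a package needs is present, at the right version, before promotion – can be sketched in a few lines of Python. This is an illustration of the concept only, not the actual API of Automic or Electric Cloud; the data shapes are assumptions:

```python
def check_release(manifest, environment):
    """Return a list of dependencies the target environment is missing.

    manifest: dict mapping dependency name -> required version
    environment: dict mapping installed dependency name -> version
    """
    missing = []
    for name, required in manifest.items():
        installed = environment.get(name)
        if installed is None:
            missing.append(f"{name} (not installed, need {required})")
        elif installed != required:
            missing.append(f"{name} (have {installed}, need {required})")
    return missing

def can_promote(manifest, environment):
    # A release is only pushed to production when nothing is missing.
    return not check_release(manifest, environment)
```

Real tools layer rollback, approvals and audit trails on top, but the gate itself is this simple check.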
StackIQ Boss provides a system that can provision everything from bare metal environments to full-stack platforms – an approach it calls palletising – to provide warehouse-grade IT.
A newcomer to the space is Verilume, which has launched a system aimed at identifying where spare, idle capacity is available and harvesting it for use. It could readily be adapted into a DevOps/NoOps tool – although, as noted above, NoOps in an owned environment isn’t really feasible.
Another newcomer, Platform9, is positioned to become a major player in the private cloud space, with its aim to minimise the need for operational involvement. It provides automation tools to provision private cloud and can deal with containers, such as Docker.
Containers currently hold the most promise for the future of NoOps. A container, built up from a collection of discrete microservices, can be created dynamically and provisioned to a platform with relative ease.
The key is for there to be sufficient intelligence in the software around the provisioning engine to understand the needs and workload characteristics of the container. This will ensure it is provisioned to the right place at the right time.
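That provisioning intelligence boils down to filtering out nodes that cannot host a container and scoring the rest against the workload's characteristics. The sketch below is a hypothetical Python illustration of such a scoring step – the node and container fields, the "io-heavy" profile and the weightings are all invented for the example:

```python
def score_node(node, container):
    """Score how well a node suits a container; higher is better.

    Returns None if the node cannot host the container at all.
    """
    if node["free_cpu"] < container["cpu"] or node["free_mem"] < container["mem"]:
        return None  # insufficient headroom: filter this node out
    score = 0
    # Prefer nodes whose hardware matches the workload's character.
    if container.get("profile") == "io-heavy" and node.get("ssd"):
        score += 10
    # Prefer nodes left with the most spare capacity after placement.
    score += (node["free_cpu"] - container["cpu"]) + (node["free_mem"] - container["mem"])
    return score

def place(container, nodes):
    """Pick the best-scoring node, or None if nothing fits."""
    scored = [(score_node(n, container), n) for n in nodes]
    scored = [(s, n) for s, n in scored if s is not None]
    return max(scored, key=lambda pair: pair[0])[1] if scored else None
```

Production schedulers such as the one in Kubernetes follow the same filter-then-score pattern, with far richer criteria.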
With big data analysis becoming more mainstream, along with the growing abilities of pattern recognition and predictive capabilities, it is likely we will see the automation of workload provisioning become more effective in the future.
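Even a crude predictive capability changes provisioning from reactive to proactive. As a hedged sketch of the idea – the moving-average window and per-instance capacity figures are assumptions, and real systems use far more sophisticated models – forecast the next interval's load and pre-provision capacity to match:

```python
import math

def forecast_load(history, window=3):
    """Predict the next interval's load as a moving average of recent samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def instances_needed(history, capacity_per_instance=100.0):
    """Pre-provision enough instances for the predicted load (at least one)."""
    predicted = forecast_load(history)
    return max(1, math.ceil(predicted / capacity_per_instance))
```

Swapping the moving average for a trained model is what the pattern recognition and predictive capabilities mentioned above would add.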
As the use of cloud platforms continues to grow and the need for hardware provisioning, monitoring and maintenance becomes a task dealt with by cloud service providers, the possibilities for NoOps will grow – along with the benefits for organisations that adopt the approach.
Read more about NoOps
- Enterprises stand to lose business to startups by refusing to give their development teams the freedom to create, test and deploy new service offerings, according to one IT professional.
- The NoOps term reflects a trend in the diminishing number of people in operations relative to developers.