This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.
This contribution is written by David Archer, sales engineer at Contrast Security — the company’s technology works to automatically detect and fix application vulnerabilities as it identifies attacks.
As we know, the purpose of a CI/CD pipeline is to automate the checks and processes involved in delivering software into production, so that they are performed consistently and are not affected by human error.
What is sometimes overlooked is that the ‘CD’ part of CI/CD can refer to either Continuous Delivery or Continuous Deployment and there is a subtle difference.
Continuous Delivery means delivering code into production-like environments so that a release can be confidently pushed into production as and when required. With Continuous Deployment, the process is completely automated and code is deployed straight into production.
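The distinction can be sketched in a few lines of Python. This is purely illustrative — the `release` function and environment names are hypothetical, not any CI tool’s API:

```python
def release(build, auto_deploy, approved):
    """Return the environments a build reaches.

    Continuous Delivery: production only on explicit human approval.
    Continuous Deployment: production automatically once checks pass.
    """
    envs = ["staging"]          # both practices deliver to a production-like environment
    if auto_deploy or approved:  # the only difference is who pulls the trigger
        envs.append("production")
    return envs

print(release("v1.2", auto_deploy=True, approved=False))   # Continuous Deployment
print(release("v1.2", auto_deploy=False, approved=False))  # Delivery, awaiting sign-off
```

The first call reaches production with no human involved; the second stops at staging until someone approves it.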
The process starts with a developer committing code to a repository, which triggers the CI/CD pipeline. During the run, a number of tests are executed, including integration tests that verify the developer’s changes do not break any other components of the software.
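That staged, stop-on-first-failure behaviour can be sketched as follows. The stage names and checks here are stand-ins for real build and test steps, not a specific tool’s configuration:

```python
def run_pipeline(stages):
    """Run each (name, check) stage in order; stop at the first failure,
    as a CI server would, so the root cause is immediately visible."""
    for name, check in stages:
        if not check():
            return f"FAILED at {name}"
    return "SUCCESS"

# Hypothetical checks standing in for real pipeline stages.
stages = [
    ("build", lambda: True),             # compile the changed code
    ("unit tests", lambda: True),        # fast, isolated tests
    ("integration tests", lambda: True), # verify other components still work
]

print(run_pipeline(stages))  # SUCCESS when every stage passes
```

If the build stage’s check returned `False`, the run would report `FAILED at build` and the later tests would never execute — which is why a green pipeline result means every stage passed.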
There is no limit to the number (or size) of the changes. Having said that, developers are encouraged to keep changes small and commit code regularly so that should any tests fail, the root cause can be quickly isolated.
The use of a CI/CD pipeline arguably becomes more important as organisations move from deploying a single monolithic application to a loosely coupled architecture built on microservices. With microservices there are far more components to release, and teams must be able to deploy changes into production independently of other teams.
By eliminating human error in the delivery process and running a large number of automated tests in the CI/CD pipeline, it is possible to significantly reduce the number of configuration problems or bugs seen in production environments. However, the benefits do not stop there.
The CI/CD pipeline also results in a shorter feedback loop for developers. This allows bugs to be remediated not only earlier, but faster, as the developer still has good context around the code that they wrote. When you couple these benefits with a fast, automated and reliable delivery process, then it is obvious why companies have embraced this approach to delivery.
Having said that, the learning curve for CI/CD can be steep. There are a number of CI/CD tools available, and figuring out the right one for your organisation will depend on a number of factors, including your programming languages, deployment environments and the complexity of your build process. When using microservices, each team will normally choose the technology stack that best serves its requirements, so a single build process will not be enough. The CI/CD tool needs to accommodate each team’s preferred language, build process and test frameworks, so it is better to look for a flexible solution.
The bottom line is that a poorly configured or maintained CI/CD tool can result in intermittent failures that will quickly frustrate developers, so the reliability of your pipeline is key. Getting the most from your CI/CD tool requires up-front effort to create a robust pipeline, but this will yield numerous long-term benefits. If you hit issues at the start, it is important to deal with failures early so that you maintain trust in the delivery process.
Once a CI/CD pipeline is in place there can be a temptation to jam additional tools into the process, but you should approach this with caution. There are a number of tools which can be disruptive to the pipeline including security scanners which may extend your pipeline duration past the magical 10-minute feedback loop for developers.
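The trade-off is easy to see with some illustrative numbers. The stage timings below are hypothetical, chosen only to show how one extra tool can blow the feedback budget:

```python
FEEDBACK_BUDGET_MIN = 10  # the "magical" 10-minute developer feedback loop

# Hypothetical per-stage durations in minutes.
stages = {"build": 2, "unit tests": 3, "integration tests": 4}

def within_budget(stages, extra=None):
    """Return True if total pipeline time stays inside the feedback budget."""
    total = sum(stages.values()) + sum((extra or {}).values())
    return total <= FEEDBACK_BUDGET_MIN

print(within_budget(stages))                        # True: 9 minutes, inside the loop
print(within_budget(stages, {"security scan": 6}))  # False: 15 minutes, budget blown
```

A 9-minute pipeline keeps developers in the loop; bolting on a 6-minute scan pushes it to 15 minutes, which is why disruptive tools are often run in a separate, asynchronous pipeline instead.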