Docker is a great technology that has been keenly adopted by the developer community, and increasingly by enterprises. A big part of its appeal is that it makes creating, deploying and running applications much easier: everything an application needs to run can be wrapped in a container and deployed in an on-premise or cloud environment as required.
It also provides a way to create and scale workloads quickly and easily, in an efficient and controlled manner, but some problems loom large.
Managing Docker at scale in a controlled manner, while ensuring it runs in a secure, compliant and patched way, can be problematic, especially as the technology moves beyond smaller-scale, developer-centric use and enterprise adoption ramps up.
One of the biggest problems is cost management. Docker’s growth, up to this point, has been mostly grassroots: driven by developers and shaped by their requirements and needs, rather than those of a traditional enterprise.

So, when it comes to considering application requirements, what developers perceive as minimal specs often bears no relation to real-world constraints.
Bringing in some business oversight is a key first step. Bill shock is common when developers receive their first invoice for cloud services they have used and paid for on the company credit card.

Development can suddenly be stripped back to the bare bones almost overnight, and requests to shut down workloads in production can follow, much to the dismay of the organisation’s support staff.
It is not as easy to fudge the issue of who pays when every virtual CPU and block of disk space has to be paid for.
Meeting somewhere in the middle is best practice here. That means carrying out proper cost management of the infrastructure, with the business laying out its requirements for development and production systems.
Having a proper business-driven, project-centric plan and legitimate costings for the development, delivery and continuing support of the application once it enters mainstream production is essential. It also means that costs can be apportioned correctly.
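One lightweight way to make that apportionment workable in practice is to label containers with the project that should bear their cost. As a sketch (the label key and project name here are illustrative, not a standard):

```shell
# Tag each container with the project that should bear its cost.
# "com.example.project" is an illustrative label key, not a convention
# mandated by Docker.
docker run -d --label com.example.project=billing-app nginx:latest

# When reviewing usage, list only the containers belonging to that project.
docker ps --filter "label=com.example.project=billing-app"
```

The same labels can then be matched against invoices or monitoring data when charging costs back to the right budget.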
Locking down security
The next area that needs addressing is the security surrounding the organisation’s Docker estate, one that the network management technology community is keenly working on at the moment.
Such technologies can come with rather large bills. For some environments, however, it is very much worth the price to step up security at the Docker instance level. It all depends on the level of risk the business is willing to entertain.
On a lower level (read: easier to implement, and still important) is integrating vulnerability scanning into the continuous integration build process.

It helps catch some of the lower-hanging fruit in terms of vulnerabilities, and there are plenty of free options worth trying out.
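As a sketch, a free scanner such as Trivy can be dropped straight into a build script; the image name and severity threshold below are illustrative:

```shell
# Build the candidate image as part of the CI pipeline.
docker build -t myapp:candidate .

# Fail the build (non-zero exit code) if the scanner finds any
# high or critical severity vulnerabilities in the image.
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:candidate
```

Wiring the scanner's exit code into the pipeline means a vulnerable image never makes it as far as the registry, let alone production.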
Keeping tabs on instance types
Other small but equally beneficial steps include being watchful of the images that are downloaded and used.

Quite a number of Docker images have proven to be malicious in different ways. Avoid this by first ensuring the image you use is the official offering from a verified publisher, rather than putting your trust in a random image found on a registry.

Also enforce the use of signed Docker images. Doing so provides an additional level of security. It won’t solve every issue around provably safe and correct images, but it costs nothing and can really help.
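Enforcing signatures can be as simple as switching on Docker Content Trust in the environment where images are pulled; a minimal sketch:

```shell
# With content trust enabled, the Docker client refuses to pull or run
# image tags that have not been signed by their publisher.
export DOCKER_CONTENT_TRUST=1

# This pull now succeeds only if the tag carries a valid signature.
docker pull nginx:latest
```

Setting the variable in CI runners and on production hosts, rather than relying on individual developers, makes the policy hard to bypass accidentally.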
Just as important is preserving access control to the infrastructure. While developers may be trustworthy, they are not usually experts in cloud security.
The administrators should be given control over the infrastructure, to allocate resources, rights and network infrastructure as needed.
Fully centralised management, centred around projects, makes life not only easier, but also more secure and simpler to audit. Otherwise, if the developer whose credit card is on file leaves, it could end in outages and virtual tears.
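Part of that administrative control is capping what any one project can consume on shared infrastructure. As an illustrative sketch (container name and limits are examples, not recommendations):

```shell
# Cap the memory and CPU a project's container can consume, so one
# team's workload cannot starve the rest of the shared host.
docker run -d --memory 512m --cpus 1.5 --name billing-app nginx:latest
```

Keeping such limits in the hands of administrators, rather than individual developers, makes resource allocation consistent and auditable across projects.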
Tackling infrastructure management
Infrastructure-level management is another area that can prove taxing, because a lot of Docker installations are started in “stealth mode” by developers, without C-suite oversight. Also, unlike virtual machines, Docker containers are designed to have a finite lifespan.
Look at management at the application level. Put controls around the application (in terms of application availability, rather than virtual machine availability) and the virtual machine concept essentially falls away.
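One way to express availability at the application level is a restart policy combined with a health check, so the runtime replaces a failed container automatically. A minimal sketch, with an illustrative image and probe:

```shell
# Restart the container automatically unless it was stopped deliberately,
# and probe the application every 30 seconds so failures are detected.
docker run -d --restart unless-stopped \
  --health-cmd "curl -f http://localhost/ || exit 1" \
  --health-interval 30s \
  nginx:latest
```

The control here is "keep the application answering", not "keep a particular machine alive", which is exactly the shift the container model encourages.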
In summary, Docker provides a wonderful opportunity to get the best of both worlds: the ability to scale up and contract on demand, and to keep costs in line with the resources actually consumed. But these capabilities have to be managed in a professional and secure manner to ensure the business continues to run in the expected fashion.