Herrmann notes that, as many of us know, Kubernetes was originally designed and developed by Google engineers, at a time when Google was one of the earliest supporters of Linux container technology.
Kubernetes is an open source project forming the basis for container management in many deployments. It provides an environment for the automated provisioning, scaling and management of application containers.
Herrmann writes as follows…
Back in May 2014, Google publicly announced that Google cloud services run on containers. Each week, Google generated over two billion containers on Borg, its internal platform, which served as the predecessor to Kubernetes.
The wealth of experience that Google gained from developing Borg over the years significantly influenced Kubernetes technology.
Google handed over the Kubernetes project to the newly founded Cloud Native Computing Foundation in 2015.
Kubernetes provides a platform on which containers can be deployed and executed using clusters of physical or virtual machines. As key technologies, Kubernetes uses Linux and a container runtime.
Linux runs the containers and manages resources and security. The container runtime (for example Docker or CRI-O) handles host-level instantiation and resource assignment. IT departments can use Kubernetes to:
- Orchestrate containers across several hosts
- Make more efficient use of hardware resources required to run company applications
- Manage and automate application deployment and updates
- Mount storage and add storage capacities in order to run stateful applications
- Scale application containers and their resources
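As a hedged illustration of these tasks, a few representative kubectl commands (the deployment name `my-app` and the image tags are placeholders; running them requires access to a live cluster):

```shell
# Deploy an application across the cluster's hosts
kubectl create deployment my-app --image=nginx:1.25

# Scale the application's containers
kubectl scale deployment/my-app --replicas=5

# Roll out an update and watch its progress
kubectl set image deployment/my-app nginx=nginx:1.26
kubectl rollout status deployment/my-app
```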
Moreover, Kubernetes integrates and uses the services and components of additional open source projects, such as Atomic Registry, Open vSwitch, SELinux and Ansible.
Current focus of Kubernetes
Version 1.8 of Kubernetes has been available since September 2017. The members of the community are currently focusing on five areas:
#1 Service automation
One of the new features of Kubernetes 1.8 in the area of service automation is Horizontal Pod Autoscaling (HPA). HPA enables Kubernetes to automatically scale the number of pods based on usage. Integrating custom metrics allows users to benefit from greater flexibility in scaling workloads.
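As a minimal sketch, an HPA object using the stable autoscaling/v1 API, which scales on CPU utilisation (names and thresholds are placeholders; custom metrics require the newer autoscaling/v2beta1 API, and 1.8-era manifests referenced beta workload API groups rather than apps/v1):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:          # the workload whose pod count is scaled
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add pods when average CPU exceeds 80%
```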
#2 Workload Diversity
Workload diversity takes two main considerations into account. The first is batch- or task-based computing. Many users are interested in moving some batch workloads to their OpenShift clusters, which is why several new alpha-stage features have been added. These cover batch retries, the waiting time between failed attempts, and other controls needed to manage large parallel or serial runs. The second consideration is ScheduledJob, which has been renamed CronJob and is now in beta.
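A minimal CronJob sketch against the batch/v1beta1 API that Kubernetes 1.8 promotes to beta (the name, schedule, image, and backoffLimit value are placeholders; backoffLimit is the batch-retry control referenced above):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report           # placeholder name
spec:
  schedule: "0 2 * * *"          # run every night at 02:00
  jobTemplate:
    spec:
      backoffLimit: 4            # retry failed pods up to 4 times before giving up
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: example/report:latest   # placeholder image
```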
#3 Security: role-based access control
The Red Hat OpenShift Container Platform was one of the first Kubernetes solutions to support multi-tenancy. Multi-tenancy, in turn, makes role-based access control (RBAC) for the cluster a necessity. RBAC version 1 reaches general availability with Kubernetes 1.8. Kubernetes RBAC authorisation is a direct port of the OpenShift authorisation system that has shipped since OpenShift 3.0, and it enables granular access control to the Kubernetes API.
Another new feature is a set of ready-to-use default roles and bindings, ranging from discovery roles and user-facing roles through to framework-control and controller roles. Also new are integration with privilege-escalation prevention and node bootstrapping, as well as the ability to adapt and extend RoleBindings and ClusterRoleBindings.
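To illustrate the granularity, a namespace-scoped Role and its RoleBinding against the now-GA rbac.authorization.k8s.io/v1 API (the namespace, user, and role names are placeholders):

```yaml
# Grant read-only access to pods in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]                # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind that role to a single user within the namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
- kind: User
  name: jane                     # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```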
#4 CLI extensibility
Thanks to the work of the CLI Special Interest Group, kubectl – the command line tool for running commands against Kubernetes clusters – will be able to work with plug-ins. The feature is still at an early stage, and will make it possible to extend kubectl without having to clone and modify its code repository. Developers write the desired functionality in the language of their choice and make it available as a new subcommand by placing an executable file at a defined location on disk.
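The 1.8-era plug-in mechanism was still alpha and relied on a descriptor file; as the feature later stabilised, a plug-in became simply an executable on PATH whose name starts with `kubectl-`. A minimal sketch of that convention (the plug-in name and message are invented for illustration):

```shell
# Create a plug-in: an executable named kubectl-<subcommand> somewhere on PATH
mkdir -p "$HOME/bin"
cat > "$HOME/bin/kubectl-hello" <<'EOF'
#!/bin/sh
echo "hello from a kubectl plug-in"
EOF
chmod +x "$HOME/bin/kubectl-hello"
export PATH="$HOME/bin:$PATH"

# With kubectl installed, the plug-in now answers to `kubectl hello`.
# Invoking the executable directly shows what kubectl would dispatch to:
"$HOME/bin/kubectl-hello"
```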
#5 Cluster stability
Kubernetes 1.8 features a client-side event filter to increase cluster stability. The filter stops excess traffic to the API server caused by internal cluster components. There is also a new option to limit the number of events the API server processes; thresholds can be set globally per server, or per namespace, user, or source+object combination. Moreover, Red Hat has worked on enabling API clients to retrieve results in pages, which minimises the memory-allocation impact of large queries.
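Conceptually, paged retrieval works through `limit` and `continue` parameters on list requests (a sketch of the API chunking mechanism as it was being introduced; the exact page size is a placeholder):

```
GET /api/v1/pods?limit=500
  → returns up to 500 items plus a metadata.continue token when more remain
GET /api/v1/pods?limit=500&continue=<token>
  → returns the next page; repeat until no token is returned
```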
Finally, another new feature in Kubernetes 1.8 is a stable version of the lightweight container runtime CRI-O. This makes it possible to use OCI-compatible (OCI = Open Container Initiative) containers in Kubernetes, without requiring additional code or other tools.
At present, CRI-O focuses on starting and stopping containers. Although CRI-O has a command line interface, it was designed only for testing CRI-O itself and is not suitable for managing containers on a live system. The Red Hat OpenShift Container Platform, for instance, can create and run containers via CRI-O.
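To illustrate how a cluster picks up CRI-O, the kubelet is pointed at a remote CRI runtime rather than the built-in Docker integration (a sketch; the socket path assumes a default CRI-O installation):

```shell
# Tell the kubelet to use CRI-O via the Container Runtime Interface
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```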