Enterprise Linux company SUSE loves Linux, obviously.
As Linux lives so prevalently and prolifically in the server rooms of so many cloud datacentres, the firm has worked to develop technologies designed to help those datacentres become software-defined.
A software-defined datacentre is one that relies upon programmable elements of code to control, shape and manage many of the network actions for which we might (perhaps 10 years ago, certainly 20 years ago) have relied upon dedicated, highly specialised hardware.
In a software-defined network, a network engineer or administrator can shape traffic from a centralised control console without having to touch individual switches in the network.
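The centralised-control idea above can be sketched in a few lines of Python. This is purely illustrative: `SdnController`, `FlowRule` and `Switch` are hypothetical names, not a real vendor API, and a real controller would push rules over a protocol such as OpenFlow or NETCONF rather than a method call.

```python
# Illustrative sketch: traffic policy is expressed once at a central
# controller and pushed to every switch programmatically, rather than
# being configured box-by-box on dedicated hardware.
# (SdnController, FlowRule and Switch are hypothetical names.)

from dataclasses import dataclass, field


@dataclass
class FlowRule:
    match: str        # e.g. "dst_port=80"
    action: str       # e.g. "rate_limit:10Mbps"


@dataclass
class Switch:
    name: str
    rules: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        # In a real network this would be an OpenFlow/NETCONF message.
        self.rules.append(rule)


class SdnController:
    """Central console: one call shapes traffic across the whole fabric."""

    def __init__(self, switches):
        self.switches = switches

    def shape_traffic(self, rule: FlowRule) -> None:
        for switch in self.switches:
            switch.install(rule)


switches = [Switch("tor-1"), Switch("tor-2"), Switch("spine-1")]
controller = SdnController(switches)
controller.shape_traffic(FlowRule("dst_port=80", "rate_limit:10Mbps"))

for s in switches:
    print(s.name, [r.action for r in s.rules])
```

The point of the sketch is the shape of the interaction: one `shape_traffic` call at the controller reaches every switch, with no per-device configuration.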
Why all the software-defined contextualisation?
Because SUSE says that enterprises are now accepting that software-defined is the way to go… and while it is perhaps not quite a full de facto standard yet, the company says that customers are now looking to move beyond the software-defined datacentre to also embrace edge and cloud computing in a wider context…
… it is, if you will, an edge to core to cloud play that we’re seeing here.
“Because our customers have a growing need for computing solutions that span the edge to the core data center to the cloud, SUSE must be able to deploy and manage seamlessly across these computing models, unencumbered by technology boundaries,” said Thomas Di Giacomo, SUSE president of engineering, product and innovation.
Di Giacomo looks fondly back upon the fact that SUSE has been delivering enterprise-grade Linux for more than 25 years now.
Given this timeframe, he says that it’s only natural that the company now expands to cover the entire range of customer needs for both software-defined infrastructure and application delivery.
SUSE aims to enable customers to create, deploy and manage applications and workloads on premises as well as in hybrid and multi-cloud environments – and it does so with an open source-first and container-first technology approach.
In terms of roadmap developments, the company notes that SUSE Cloud Application Platform 1.4 will be available this month. This will be the first software distribution to introduce a Cloud Foundry Application Runtime in an entirely Kubernetes-native architecture via Project Eirini.
NOTE: Project Eirini is an incubating effort within the Cloud Foundry Foundation enabling pluggable scheduling for the Cloud Foundry Application Runtime. Operators can choose between Diego/Garden or Kubernetes to orchestrate application container instances. The goal is to provide the option of reusing an existing Kubernetes cluster infrastructure to host applications deployed by CFAR.
Project Eirini allows users to take greater advantage of the widely adopted Kubernetes container scheduler and deepens integration of Kubernetes and Cloud Foundry. It also allows developers to use either Kubernetes or Cloud Foundry Diego as their container scheduler. Whichever is used, the developer experience is the same.
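The pluggable-scheduling idea behind Eirini can be sketched as follows. This is not Eirini's actual code; the class names are invented for illustration. What it shows is the contract the article describes: the developer-facing "push" is identical, while the operator swaps the backend (Diego or Kubernetes) that actually places application container instances.

```python
# Illustrative sketch of pluggable container scheduling (not Eirini's
# real implementation): one developer-facing entry point, two
# interchangeable scheduler backends behind it.

from abc import ABC, abstractmethod


class ContainerScheduler(ABC):
    @abstractmethod
    def schedule(self, app: str, instances: int) -> str: ...


class DiegoScheduler(ContainerScheduler):
    def schedule(self, app, instances):
        # Diego places long-running processes on its own cells.
        return f"diego: placed {instances} instance(s) of {app}"


class KubernetesScheduler(ContainerScheduler):
    def schedule(self, app, instances):
        # Eirini-style: reuse an existing Kubernetes cluster instead.
        return f"kubernetes: scheduled {instances} pod(s) for {app}"


def push(app: str, instances: int, scheduler: ContainerScheduler) -> str:
    # The developer experience is the same whichever backend is chosen;
    # only the operator's configuration differs.
    return scheduler.schedule(app, instances)


print(push("my-app", 3, DiegoScheduler()))
print(push("my-app", 3, KubernetesScheduler()))
```

Either way, the call a developer makes is the same `push` – which is exactly the "whichever is used, the developer experience is the same" point above.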
“There is immense value in commercial distributions using new architecture and projects early because it enables us to get feedback from the upstream community,” says Chip Childers, CTO, Cloud Foundry Foundation. “We’re thrilled that end users will be able to try out SUSE Cloud Application Platform 1.4, which offers the Cloud Foundry Application Runtime in a Kubernetes-native architecture via Project Eirini, and comment back on the upstream version.”
So SUSE now highlights increased multi-cloud flexibility with new support for Google Kubernetes Engine (GKE), Google’s managed Kubernetes service. This expanded support for multi-cloud environments extends the options to use the platform in public clouds (Amazon EKS, Azure AKS or GKE), on-premises with SUSE CaaS Platform, or as a multi-cloud combination.
SUSE’s latest enterprise-ready OpenStack Cloud platform will also be available in April as SUSE OpenStack Cloud 9. It is the first release to integrate a selection of SUSE OpenStack Cloud and HPE OpenStack technology into one single-branded release.
Based on OpenStack Rocky, SUSE OpenStack Cloud 9 helps simplify post-deployment cloud operations using the new Cloud Lifecycle Manager day-two user interface – it also helps transition to SUSE OpenStack Cloud from HPE Helion OpenStack.
One final open source project to mention: SUSE OpenStack Cloud 9 simplifies the transition of traditional workloads through enhanced support for OpenStack Ironic.
Isn’t it (project) Ironic?
NOTE: For the record, OpenStack bare metal provisioning, a.k.a. Ironic, is an integrated OpenStack program that aims to provision bare metal machines instead of virtual machines. Forked from the Nova baremetal driver, it is best thought of as a bare metal hypervisor API plus a set of plugins that interact with the bare metal hypervisors.
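Ironic's core idea is a provisioning state machine: a bare metal node moves through documented states (enroll, manageable, available, active) as it is registered, inspected and deployed. The sketch below is a heavily simplified model of that idea – the state names mirror Ironic's documented provisioning states, but the transition table and `BareMetalNode` class are illustrative, not Ironic's API.

```python
# Simplified model of Ironic-style bare metal provisioning states.
# State names (enroll, manageable, available, active) mirror Ironic's
# documented states; the class and transition table are illustrative.

TRANSITIONS = {
    ("enroll", "manage"): "manageable",
    ("manageable", "provide"): "available",
    ("available", "deploy"): "active",
    ("active", "undeploy"): "available",
}


class BareMetalNode:
    def __init__(self, name: str, driver: str = "ipmi"):
        self.name = name
        self.driver = driver   # plugin that talks to the machine's BMC
        self.state = "enroll"

    def do(self, verb: str) -> str:
        key = (self.state, verb)
        if key not in TRANSITIONS:
            raise ValueError(f"cannot '{verb}' from state '{self.state}'")
        self.state = TRANSITIONS[key]
        return self.state


node = BareMetalNode("rack1-node7")
for verb in ("manage", "provide", "deploy"):
    print(node.name, "->", node.do(verb))
```

The "deploy" step is where Ironic differs from Nova's virtual machine path: the result is a whole physical machine handed over as if it were an instance, which is what makes it attractive for the traditional workloads SUSE OpenStack Cloud 9 targets.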