

At KubeCon Europe in Paris this week, SUSE updated both its Rancher container management and edge computing platforms with more security and automation.
Rancher, SUSE's popular complete software stack for running and managing multiple Kubernetes clusters across any infrastructure, comes in two main commercial versions: SUSE Rancher Prime and SUSE Edge. So, as you might expect, SUSE announced major updates to both, Rancher Prime 3.0 and SUSE Edge 3.0, at KubeCon+CloudNativeCon EU, being held in Paris this week.
Rancher Prime 3.0, SUSE’s commercial iteration of the open source enterprise container management platform Rancher, is at the heart of these enhancements. The update introduces features designed to empower platform engineering teams to offer developers self-service Platform as a Service (PaaS) capabilities alongside bolstered support for AI workloads.
As always, Rancher Prime supports any certified Kubernetes distribution. It streamlines cluster operations, offering provisioning, version management, monitoring, and centralized audit capabilities. You can also use it to automate processes and enforce consistent security policies across all clusters, regardless of their location.
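To give a concrete sense of that centralized model, here is a minimal sketch that uses Python's requests library against Rancher's v3 REST API to enumerate every managed cluster. The server URL and API token are placeholders, and the response fields beyond name and state are assumptions; adapt them to your own Rancher installation.

```python
# A minimal sketch: list every cluster a Rancher server manages via its v3 REST API.
# RANCHER_URL and API_TOKEN are placeholders you would supply; it assumes the
# Rancher server presents a trusted TLS certificate.
import requests

RANCHER_URL = "https://rancher.example.com"   # hypothetical server address
API_TOKEN = "token-xxxxx:secret"              # hypothetical bearer token

resp = requests.get(
    f"{RANCHER_URL}/v3/clusters",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for cluster in resp.json().get("data", []):
    # "version.gitVersion" is an assumed field name; fall back gracefully if absent.
    k8s_version = cluster.get("version", {}).get("gitVersion", "unknown")
    print(cluster["name"], cluster.get("state"), k8s_version)
```

The same token-and-endpoint pattern is how you would script provisioning or audit checks across every cluster from one place, rather than touching each cluster's kubeconfig individually.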
In this latest version, Rancher Prime enhances security through Supply-chain Levels for Software Artifacts (SLSA, pronounced "salsa") certification and the provision of software bills of materials (SBOMs), ensuring trusted delivery for enterprises. Putting this into practice, Rancher Prime introduces the general availability of the Rancher Prime Application Collection, a curated library of minimal, hardened developer and infrastructure images that ship with signatures and SBOMs.
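As an illustration of how a team might consume those signatures, the sketch below shells out to the cosign CLI to verify an image before deploying it. The image reference and the wide-open identity and issuer patterns are placeholders, not SUSE's published signing details; tighten them to whatever the Application Collection documentation specifies.

```python
# A minimal sketch of verifying a signed container image before deployment,
# by calling the cosign CLI from Python. All values below are placeholders.
import subprocess

IMAGE = "registry.example.com/collection/app:1.0"  # hypothetical image reference

subprocess.run(
    [
        "cosign", "verify",
        "--certificate-identity-regexp", ".*",     # placeholder: pin to the real signing identity
        "--certificate-oidc-issuer-regexp", ".*",  # placeholder: pin to the real OIDC issuer
        IMAGE,
    ],
    check=True,  # raises if the signature check fails, stopping the pipeline
)
```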
SUSE also uses NeuVector 5.3.0 to bring open source zero trust to the platform. In addition to securing the containers, Rancher Prime uses NeuVector's layer-seven network application inspection capabilities as a firewall. For example, it inspects how DNS resolves fully qualified domain names (FQDNs) into IP addresses to make sure external connections are legitimate.
Another key enhancement is bundling in Harvester 1.3.0. This enables you to create and use virtual GPUs (vGPUs). In Kubernetes, a vGPU is a mediated device that allows multiple VMs to share the compute capability of a physical GPU. You can assign a vGPU to one or more VMs created by Harvester. Harvester also enables hyper-converged infrastructure (HCI) deployments. These are highly optimized platforms that tightly integrate compute and storage.
In addition, the Certified Kubernetes distributions RKE2 and K3s have been enhanced to automatically detect and configure the use of NVIDIA’s container runtimes. Put this all together, and if you’re considering moving AI and ML to Kubernetes, Rancher demands your attention.
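For a rough sense of what that looks like from the workload side, here is a minimal sketch using the official Kubernetes Python client to launch a pod that claims one GPU and runs nvidia-smi. It assumes the NVIDIA device plugin is installed and that a RuntimeClass named "nvidia" has been created, which is the common convention when RKE2 or K3s pick up the NVIDIA container runtime; the image tag is illustrative.

```python
# A minimal sketch: schedule a pod that requests one NVIDIA GPU and prints nvidia-smi output.
# Assumes the NVIDIA device plugin is running and a RuntimeClass called "nvidia" exists.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        runtime_class_name="nvidia",  # assumption: the runtime class registered for NVIDIA's runtime
        containers=[
            client.V1Container(
                name="cuda-check",
                image="nvcr.io/nvidia/cuda:12.3.2-base-ubuntu22.04",  # illustrative tag
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("Created pod gpu-smoke-test; check its logs for the nvidia-smi report.")
```

If the pod's logs show the GPU, the runtime detection and device plugin wiring described above are working end to end.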
The Rancher Prime software family has an 18-month lifecycle. This comes complete with support, security patches, and maintenance updates.
The SUSE Edge 3.0 stack release is based on Prime, but it's designed to run in resource-constrained, remote locations with intermittent internet connectivity. Edge 3.0 is underpinned by SUSE Linux Enterprise (SLE) Micro and the Cloud Native Computing Foundation (CNCF)-certified Kubernetes distributions K3s and RKE2. This gives you all you need to support containers, virtual machines, and microservices.
All of these programs will become generally available in April 2024. SUSE has its eye on more than just its latest commercial offerings, though.
As Peter Smails, SUSE Enterprise Container Management business unit general manager, said in a statement, "At SUSE, our commercial and open source users are equally important. We need to deliver the capabilities our enterprise customers require in order to deploy and manage their business-critical production workloads, while also continuing to invest in innovation to support and grow our huge community of open source users."