
Kubernetes has a well-deserved reputation for being complicated. In one sense, this is inevitable: container orchestration on multiple infrastructures and feature configurations is complex stuff, involving many potential dependencies.
At its core, Kubernetes is simple: from inside a cluster, the world looks like a stack of “only as complex as they need to be” abstractions that Kubernetes operates on in a reliable, predictable way.
And of course, that’s the point of Kubernetes: to replace prior notions of data centers and cloud platforms with a new, unified open source platform and automation standard: something people can use to “pave the world of infrastructures,” enabling true workload portability, agility and operational economies of scale everywhere.
That vision led to the introduction of the open source k0s project two years ago by teams originally at Docker and Kontena, brought together by Mirantis and focused on serving two related goals.
Simplicity: Create a Cloud Native Computing Foundation–validated, agile, easily customized distribution that consumes essentials from upstream quickly (security patches in less than three days, minor releases in days, full version updates in weeks, all tested). Package it in a way that lets it run in virtually any environment (any CPU, any common Linux, as processes or in containers, on IoT nodes, laptops, servers, scaling to thousands of nodes in data centers), and make it as operationally feature-complete and easy to use as possible.
K0s deploys with one command anywhere, and brings along its own CLI and kubectl, with additional open source deployment and operations tools available for download from the same source, such as k0sctl or Lens Desktop.
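As a sketch of that one-command experience, the quick-start flow for a single-node k0s looks roughly like the following (commands follow the k0s quick-start docs; verify against the current documentation before relying on them):

```shell
# Download the k0s binary, install it as a single-node controller
# (controller + worker in one), and start the service
curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install controller --single
sudo k0s start

# k0s bundles kubectl, so you can check the node immediately
sudo k0s kubectl get nodes
```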
Ability to pave the world of infrastructures: Kubernetes is both a platform and a readily extensible automation system — one that is, among other things, engineered to operate on itself.
K0s approaches Kubernetes operations challenges in a Kubernetes-native way by configuring things declaratively using YAML or JSON, creating custom abstractions when necessary and applying and managing them with kubectl or other automation.
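For instance, cluster-wide settings live in a single declarative config. A minimal sketch, with fields trimmed for illustration (see the k0s configuration docs for the full schema):

```yaml
# k0s.yaml: declarative cluster configuration consumed at startup
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    provider: kuberouter   # default CNI; calico or a custom CNI are also options
  telemetry:
    enabled: false
```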
Since the original version, k0s has evolved rapidly, releasing:
- An Autopilot k0s operator that manages safe updates according to user-set plans;
- A Kubernetes Cluster API (CAPI) operator, with providers and extensions, that lets you use ClusterAPI as a (Kubernetes-native) vehicle for cluster operations;
- Support for Konnectivity — a protocol enabling secure, bidirectional communication between Kubernetes workers and control planes, even when separated by passive firewalls, thus enabling more complete (and “it just works” simple) control plane/worker separation with less network fiddling.
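An Autopilot update is itself driven declaratively, via a Plan resource. A hedged sketch, with the structure drawn from the k0s Autopilot docs (the version, download URL and node names here are placeholders):

```yaml
# Autopilot Plan: tells the operator what to update, where, and in what order
apiVersion: autopilot.k0sproject.io/v1beta2
kind: Plan
metadata:
  name: autopilot
spec:
  id: id1234
  timestamp: now
  commands:
    - k0supdate:
        version: v1.27.1+k0s.0    # placeholder target version
        platforms:
          linux-amd64:
            url: https://github.com/k0sproject/k0s/releases/download/v1.27.1+k0s.0/k0s-v1.27.1+k0s.0-amd64
        targets:
          controllers:            # controllers are updated before workers
            discovery:
              static:
                nodes:
                  - controller0   # placeholder node name
          workers:
            discovery:
              selector: {}        # empty selector: all workers
```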
As a result, k0s (and its growing flock of operators) is becoming what we think most Kubernetes users want: a plain-vanilla, full-featured, configurable Kubernetes that runs anywhere, minds its own updates and can (increasingly automatically) manage infrastructure — all in Kubernetes-native ways compliant with Kubernetes-style infra-as-code best practices.
Also, k0s is a Kubernetes that’s hugely flexible, letting you configure and put control planes and workers wherever that makes sense for your use cases.
In short, k0s is “Kubernetes, complete and simple, working as intended.” It’s open source software and you can use it to pave your infrastructure world.
K0smotron
K0smotron, just introduced as open source software, is the next step: an operator (running on any CNCF-validated Kubernetes) that lets you host, scale and lifecycle-manage containerized k0s control planes on a Kubernetes cluster, using Kubernetes-native methods — and then configure and attach workers to these virtual control planes from anywhere.
K0smotron is being built to solve big challenges now being faced by organizations that want to leverage Kubernetes cost-effectively and with agility — with minimal need for platform engineering or special skills and with low operational overheads.
These days, some of these use cases are ubiquitous — basically, everybody has this problem:
Questions: How do I deliver and maintain a lot of Kubernetes environments at different scales for developers, teams and projects? How do I enable Kubernetes self-service by individuals (“I need a test cluster!”) and teams (“I want to easily perform a blue/green app deployment!”)?
Conventional approaches to meeting this challenge begin with a series of big, important, technically demanding and potentially costly choices that become more fraught as organizational scale increases. Should you host Kubernetes clusters on virtual machines or on bare metal? Public or private cloud (or perhaps both)? DIY open source or proprietary IaaS cloud solution?
And then, based largely on these choices, you’ll need platform engineering and deep automation skills — for managing underlying virtual infrastructure(s) (if you’re using a cloud), setting up clusters, scaling them, keeping them secure/policy-managed/compliant over time and updating them (possibly around running applications).
All this means a ton of knowledge required, complicated automation code to maintain and lots of new procedures to map out and keep up with.
Life is better with k0s/k0smotron. By comparison, k0s lets you lay down a robust Kubernetes host cluster on any infrastructure (with k0sctl, a multi-node cluster can be built with a single command in minutes) and install the k0smotron operator (one command), then:
- Define a control plane in a single, simple, human-readable YAML file;
- Apply it with one command;
- Derive a worker join key for the hosted control plane with one command;
- Install workers anywhere — each requiring just three commands: install k0s, securely provide the join token, and start the service, joining the worker to the control plane.
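The control plane definition in the first step might look like the following. This is a hedged sketch: the `k0smotron.io/v1beta1` `Cluster` kind follows the k0smotron docs, while the name and service type here are illustrative:

```yaml
# Hosted k0s control plane, declared as an ordinary Kubernetes resource
apiVersion: k0smotron.io/v1beta1
kind: Cluster
metadata:
  name: my-cp
spec:
  replicas: 1          # number of control plane pods
  service:
    type: NodePort     # how external workers reach the hosted API server
```

You would apply this with `kubectl apply -f my-cp.yaml`; the worker join token is then requested from the operator (the k0smotron docs describe a `JoinTokenRequest` resource for this) and read from the resulting secret.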
The process is incredibly fast, very robust, and because it requires so few interactions, super-easy to automate using whatever tools you’re using now.
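The host-cluster step itself is driven by a single k0sctl config file. A minimal sketch, with placeholder addresses and paths (see the k0sctl docs for the full schema); applied with `k0sctl apply --config k0sctl.yaml`:

```yaml
# k0sctl config: one file describes the whole host cluster
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: host-cluster
spec:
  hosts:
    - role: controller
      ssh:
        address: 10.0.0.1        # placeholder controller address
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: worker
      ssh:
        address: 10.0.0.2        # placeholder worker address
        user: root
        keyPath: ~/.ssh/id_rsa
```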
K0smotron child cluster control planes are Kubernetes applications, so Kubernetes does all its standard stuff to keep them up and healthy, with no single points of failure.
- The k0s child cluster is self-updating and self-healing via the Autopilot operator, which is very smart about things like updating control plane nodes before worker nodes and rolling back in the event of issues. So you have virtually no operations overhead for child clusters.
- The k0s host cluster can also self-update using Autopilot. Once you have configurations set the way you want them, the whole system is largely self-maintaining without human intervention.
- It’s also self-scaling. The k0s host cluster can be scaled out and back dynamically using the CAPI operator to drive underlying cloud infrastructure or provision/de-provision bare metal (using a k0sctl derivative codebase).
- k0s child clusters can also self-scale across the host, as required, and even petition the host to scale itself if more physical or virtual capacity is needed. And k0smotron can talk via the host’s CAPI operator to provision nodes for use as workers, too. So every aspect of solution operations is performed and controlled by Kubernetes-native means.
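Because a hosted control plane is just a Kubernetes resource, scaling it can be as simple as bumping its replica count and letting the operator reconcile. A hypothetical example, assuming a k0smotron `Cluster` named `my-cp` with a `spec.replicas` field as sketched earlier (the fully qualified resource name avoids clashing with Cluster API’s own `Cluster` kind):

```shell
# Scale the hosted control plane from 1 to 3 replicas
kubectl patch clusters.k0smotron.io my-cp --type merge -p '{"spec":{"replicas":3}}'
```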
Other use cases for k0smotron are more nuanced. For example:
Questions: How do I use multicluster Kubernetes to efficiently and centrally manage a large distributed network of IoT devices running workloads?
Again, the conventional solution is pretty agonizing. First, you solve the multi-cluster management challenge (see above), but with some additional gotchas. For starters, remote workers aren’t accommodated by public cloud Kubernetes, and most enterprise platforms struggle to deliver this functionality out-of-box, so you might be forced to DIY (complex, time-consuming, many unknowns).
Then you solve the challenge of provisioning, updating and maintaining potentially thousands or tens of thousands of workers — perhaps at the other end of wireless or other iffy network links. If “setting up a worker remotely” is at all complicated, your automation will be challenged to scale and work reliably.
And even if you can stand up workers out in the field — without robust control plane/worker separation, most Kubernetes cluster models won’t function well in these circumstances.
With k0s/k0smotron, these problems basically vanish. K0s runs on almost anything (we’ve tested down to a single ARM7 CPU with 512MB RAM) and maintains control plane/worker separation (including recovery/reconnection, etc.).
Konnectivity handles worker-to-control-plane links past dumb firewalls and NATs. The Autopilot operators keep the host cluster, child cluster control planes and their distributed workers updated. The CAPI operator manages infrastructure for the host cluster and potentially provisions the IoT nodes.
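On the device side, joining a remote worker is the same three-command routine described earlier. A sketch per the k0s docs (the token file path is illustrative; the token itself comes from the hosted control plane):

```shell
# On the remote/IoT node: install k0s, register it as a worker using the
# securely delivered join token, and start the service
curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install worker --token-file /etc/k0s/join-token
sudo k0s start
```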
Your investment to build an MVP of the “infrastructure” layer of this application involves writing some fairly simple automation to make simple, robust procedures run at scale and be user-friendly.
Fully Supported by Mirantis
K0s and k0smotron were built to make Kubernetes simple and to popularize the powerful idea that Kubernetes can and should function as a largely autonomous abstraction layer over infrastructure.
Both are CNCF-validated and promptly updated to keep up with kubernetes.io and CNCF ecosystem innovation, and are secured, validated and completely tested by Mirantis. k0s/k0smotron offers a clean, safe route whereby organizations can consume upstream innovation to flexibly leverage multicluster Kubernetes anywhere, with minimum cost and risk.
They’re also fully supported by Team k0s and Mirantis OpsCare, up to the “ZeroOps” state provided by Mirantis Professional Services and OpsCare, in which Mirantis’ global bench of cloud native platform experts custom-crafts and manages your solution, letting you focus entirely on innovation that will drive your business.
We encourage you to try open source k0s and k0smotron. Our new blog, “Getting Started with k0smotron,” makes it simple.
The post Reimagining Multicluster Kubernetes with k0s/k0smotron appeared first on The New Stack.