Channel: Kubernetes Overview, News and Trends | The New Stack

To Solve Kubernetes Sprawl, Try Kubernetes ‘All the Way Down’

[Featured image: a turtle.]

At some point, certain technologies effectively “win” and become ubiquitous standards by default. We have seen this over and over, but two that stand out are TCP/IP+Ethernet and Linux. More recently, it has become clear that Kubernetes is the de facto standard for delivering application payloads.

Kubernetes is used to deliver cloud native apps, legacy monolithic applications, virtualization and just about anything else. This happened because, like TCP/IP+Ethernet and Linux before it, Kubernetes is the 80%-90% solution that “just works” for delivering and managing application payloads. Now, as Kubernetes becomes dominant both inside and outside the enterprise, we see increasing “sprawl” of Kubernetes clusters. Like router/switch, server and virtual machine (VM) sprawl before it, something must be done for organizations to be successful at managing Kubernetes at scale. The solution? Enter … Kubernetes.

First, Some Background

It may not be obvious at first glance, but TCP/IP+Ethernet was not always the ubiquitous standard it is today. In the mid-1980s, Cisco Systems was the first company to ship a multiprotocol router. At that time, the internet backbone was still run by the U.S. government, namely the National Science Foundation (NSF). Many business campuses ran on other networking protocols and technologies, including AppleTalk, SNA, DECnet, Banyan Vines, IPX/SPX (Novell) and Token Ring. Still more L1/L2 protocols debuted in the 1990s, such as Fiber Distributed Data Interface (FDDI) and Asynchronous Transfer Mode (ATM).

Yet, here we are today, and the combination of TCP/IP and Ethernet has won time and time again because they are open standards, readily available, generally avoid lock-in and solve 80%-90% of any networking problem. This allows anyone with deep knowledge of the combination to pivot from one solution (e.g., campus networking) to another solution (e.g., internet backbone networking).

The history of Linux is similar. It was not the first UNIX variant to run on the x86 platform; that honor goes to Microsoft/SCO Xenix. For many years, it was an inferior operating system to established commercial UNIX flavors such as Solaris (SVR4-based) and AIX, which had better support, symmetric multiprocessing and much more going for them. However, like TCP/IP+Ethernet, Linux won over and over again: it is an open standard, readily accessible, lets you avoid vendor lock-in and effectively solves 80%-90% of the operating system problem for any solution, from a Raspberry Pi to massive high-performance computing (HPC) clusters. Now most of the world’s data centers run on Linux.

This Brings Us to Kubernetes

It has been apparent for some time that Kubernetes is effectively winning the application deployment wars. It started as a cloud native solution for scale-out applications, but over time has become generic enough that it is now frequently used for legacy applications, virtualization and just about any payload. It can run on a Raspberry Pi, Microsoft Windows, massive HPC clusters or really anywhere.

Importantly, it has a clean and extensible architecture and an API that is friendly to both operators and developers. It provides an abstraction you can run on your laptop as you develop that is the same when run in production.

Kubernetes is not the first attempt at an application platform. WebLogic and WebSphere came before it; they only supported Java applications but had a similar intent: to make packaging and deploying applications easier. Virtualization might also be considered to have a similar intent. However, over the last 10 years, Kubernetes has won again and again, to the point where you now run VMs, WebLogic or WebSphere on top of Kubernetes!

Kubernetes has matured into the de facto standard for application payload deployments. Like TCP/IP+Ethernet and Linux before it, it solves 80%-90% of the application packaging, deployment and operations problems. It is an open standard, readily available and allows you to avoid vendor lock-in. Perhaps most importantly, it has a baked-in architecture that assists with operations, allowing rolling upgrades, a clear way to tier your payloads (pods), an extensible API and more. This is something we know well. We were the first to take OpenStack and put its control plane on Kubernetes with Mirantis OpenStack on Kubernetes (MOSK). We knew early that Kubernetes would eventually win — while complex at times, the benefits of using it were far too great.

Kubernetes All the Way Down

Which brings us to the crux of the matter: Now that Kubernetes has won, how do we ever manage the unending torrent of Kubernetes clusters? Most large businesses are using it everywhere: AWS EKS, Google GKE, Microsoft AKS, Mirantis MKE, generic Kubernetes and on and on and on. Whether it is on premises or on a public cloud, K8s is everywhere, and it continues to grow like a weed. What is the solution?

Why, it’s Kubernetes of course! It really is “turtles — or Kubernetes — all the way down!”

It has become evident that the best way to manage the Kubernetes sprawl is to use Kubernetes as a control plane to manage other Kubernetes clusters. Kubernetes can become a natural “control point” for managing your “thin layer” of Kubernetes, supporting a broad range of container and virtualized workloads and abstracting the infrastructure.

For example, k0smotron separates the control plane (managing cluster, or control nodes) from the data planes (managed clusters, or worker nodes). This provides a single control point for your Kubernetes clusters to increase scalability, separate concerns (e.g., upgrading your control plane separately from your data planes) and manage clusters across different providers (i.e., agnostic to infrastructure options). This allows using Kubernetes clusters to achieve that Holy Grail of “hybrid multicloud.” In other words, you no longer care where your clusters reside because you are managing them all from the same control point.
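As a rough illustration, a hosted control plane in k0smotron is declared as just another resource on the management cluster. The sketch below assumes the k0smotron operator is installed; the resource name and fields are illustrative and may differ between k0smotron versions:

```yaml
# Hedged sketch: a k0smotron-hosted control plane, declared on the
# management cluster. The control plane runs as pods here; worker nodes
# join it from anywhere (on premises or any cloud).
apiVersion: k0smotron.io/v1beta1
kind: Cluster
metadata:
  name: tenant-cluster-1
spec:
  replicas: 3              # control plane replicas, separate from any data plane
  service:
    type: LoadBalancer     # how workers and admins reach the hosted API server
```

The point is the shape, not the specific fields: the managed cluster’s control plane becomes an ordinary Kubernetes workload, so it can be upgraded, scaled and observed with the same tooling as everything else.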

Kubernetes Is the Solution for Kubernetes

By “control point,” I mean that in any IT system, there is usually a natural point through which you interact with and manage that system. For example, with Amazon Web Services, the AWS API/UI is the natural control point for managing that system. With Java applications, the natural control point is the JVM. Most systems have a control point, even when their control planes and data planes are collapsed.

What is unique about modern control points is that they almost always have an API. Legacy control points were far less likely to be API-enabled. In this modern era of ubiquitous automation, all control points must have an API.

This is why Kubernetes is uniquely positioned to manage itself. With Cluster API (CAPI) and its providers, we can use Kubernetes to manage Kubernetes, because as a control point it can manage other control points. Cluster API includes “providers” for Amazon (CAPA), Azure (CAPZ), VMware (CAPV), bare metal and many more.
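To make that concrete, here is a hedged sketch of what declaring a workload cluster through Cluster API looks like: the management cluster stores a generic `Cluster` object whose `infrastructureRef` points at a provider-specific resource (AWS in this example, via CAPA). Exact API versions and kinds vary by provider release:

```yaml
# Sketch only: a Cluster API workload cluster, stored on the management
# cluster. The generic Cluster object delegates infrastructure details
# to the provider-specific AWSCluster object (installed by CAPA).
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSCluster
    name: demo-cluster
```

Swapping the `infrastructureRef` to an Azure or vSphere kind is what lets the same control point span providers.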

This means that all the tools necessary are at hand to use Kubernetes to “solve itself.” The answer to Kubernetes sprawl is Kubernetes itself. As an open standard, Kubernetes is the obvious solution for managing clusters across on-premises and cloud environments. Early control plane efforts such as k0smotron have proven the general direction, but now it is time to take it to the next level and deliver on the promise of “Kubernetes all the way down.”

To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon North America, in Salt Lake City, Utah, on November 12-15, 2024.

The post To Solve Kubernetes Sprawl, Try Kubernetes ‘All the Way Down’ appeared first on The New Stack.


