
When we think about container orchestration, for much of the last decade Kubernetes has been the default choice. But it is not developer-friendly, and it demands significant time and expertise to deploy, operate and troubleshoot.
It’s not uncommon to see dedicated platform teams within organizations spending the bulk of their time just managing Kubernetes, and there are now signs of pushback against this.
Kelsey Hightower, who is somewhat synonymous with its rise, recently tweeted, “If you don’t need Kubernetes, don’t use it.”
But what other options do you have? In an article for The New Stack last year, I looked at the broader landscape which includes using a managed Kubernetes service from one of the major cloud vendors; a Kubernetes distribution such as Red Hat’s OpenShift; an alternative like HashiCorp’s Nomad; and taking what Adrian Cockcroft refers to as a “Serverless first approach”, by leapfrogging straight to a FaaS offering — such as Azure Functions, Amazon Web Services Lambda or Google Cloud Functions — and bypassing Kubernetes altogether.
I also touched briefly on cycle.io, which sits somewhere between a PaaS and an orchestrator. The company continues to invest in and improve its product, and I was interested in taking a deeper dive to see what it tells us about the state of the orchestrator market more broadly.
Cycle describes their platform as being “LowOps,” which they define as abstracting away the implementation details of how applications are managed, “so that a platform engineer — even someone with limited DevOps experience — is able to describe what they want, and the platform is responsible for making it happen.”
While Kubernetes aims at full customization, with both the flexibility and complexity that entails, Cycle hopes to find a sweet spot between full customization and something that, to borrow a phrase, “just works.”
Ultimately, the goal here is to have a way of managing containers and infrastructure that provides a Heroku-like experience with Apple’s always up-to-date approach.
How Cycle Works under the Hood
Core to understanding Cycle is that it combines two things — platform orchestration and infrastructure management — with the goal of simplifying both.
From an infrastructure standpoint, Cycle puts the focus on containers, with the servers underneath appearing as a pool of distributed resources.
Supported sources for container images are all OCI-compatible or Docker-based (Docker Hub, a Docker Registry, or a Dockerfile), but servers can exist on multiple cloud providers. Out of the box, Cycle supports AWS, Equinix Metal, GCP and Vultr, with Microsoft Azure planned but not yet available.
In addition, what Cycle refers to as its Infrastructure Abstraction Layer (IAL) allows organizations to add support for anything from another cloud provider to on-premises infrastructure by implementing a thin REST-based middleware. It’s worth noting that machines need to be x86-based (ARM is not yet supported), with a minimum of 4GB RAM. 30GB+ of disk space is also recommended.
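Cycle’s actual IAL contract isn’t published in this article, but conceptually such a middleware only has to translate a handful of generic lifecycle calls into provider-specific actions. Here is a minimal sketch of that shape; the endpoint paths, payloads and server IDs are invented for illustration, not Cycle’s real API:

```python
# Hypothetical sketch of an IAL-style middleware dispatcher. The
# routes and payloads below are assumptions for illustration only;
# a real implementation would call the provider's own API.

def handle_ial_request(method, path, body=None):
    """Translate a generic orchestrator call into a provider action."""
    if method == "GET" and path == "/v1/servers":
        # Report the machine types this provider can offer.
        return {"servers": [{"id": "bare-metal-4gb", "ram_gb": 4}]}
    if method == "POST" and path == "/v1/servers":
        # Provision a machine; provider-specific work would happen here.
        return {"id": "srv-001", "ip": "203.0.113.10", "state": "provisioning"}
    if method == "DELETE" and path.startswith("/v1/servers/"):
        server_id = path.rsplit("/", 1)[-1]
        return {"id": server_id, "state": "terminated"}
    return {"error": "unsupported", "status": 404}
```

The point is the thinness: because CycleOS is pulled down at boot and runs in RAM, the middleware never has to manage images or configuration on the machine, only provision and terminate it.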
On each compute node, Cycle automatically installs its own minimal Linux-derived operating system, CycleOS, which provides basic networking, storage protocols and plugins for the container layer that runs on top of it. “It is deliberately designed to be as dumb as possible,” Warner explained.
Conceptually, CycleOS is reminiscent of CoreOS, but it takes a significantly different approach to deploying infrastructure. Every time a server boots, it connects to Cycle and pulls down a copy of the OS, which then runs in RAM — it is never installed to disk. This is part of what allows Cycle to manage the infrastructure. “What it gives us is infrastructure standardization, where we can guarantee that every server is running the exact same version of CycleOS, and the exact same hardened Cycle kernel regardless of the provider’s base images,” said Warner. “This allows us to push out an update every two weeks without ever introducing incompatibility or downtime. And that allows us to build a fully managed platform, a Heroku-like experience, where organizations are empowered to own their infrastructure, networks, and data.”
Within Cycle, infrastructure is grouped into clusters, and applications are isolated into environments. Clusters offer a path to isolation, resource management and high availability for infrastructure. Environments, similarly, provide globally encrypted private networks for container-to-container communication and can span all of the infrastructure within a cluster, regardless of the underlying provider. Inside environments, Cycle offers a number of built-in, fully managed services, including load balancing, service discovery, VPNs and more.
Diving deeper into environments, Cycle configures a global Layer 2 network per environment with all the traffic within that network encrypted and a corresponding global IP subnet for all the containers within the environment. The platform automatically takes care of the details of setting this up. For example, as it builds the network, it runs a number of tests to see if the nodes require an out-of-band connection or if a Direct Connect is available; if the latter is an option, Cycle will default to using it.
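To make the addressing model concrete, here is a minimal sketch, using Python’s standard `ipaddress` module, of how a per-environment subnet might be carved up among containers. The subnet size and allocation order are assumptions for illustration; Cycle’s actual scheme isn’t described in this article:

```python
import ipaddress

def allocate_container_ips(environment_subnet, container_names):
    """Hand each container in an environment an address from that
    environment's private subnet, mimicking one global IP space per
    environment. The /24 subnet here is an assumption."""
    hosts = ipaddress.ip_network(environment_subnet).hosts()
    return {name: str(next(hosts)) for name in container_names}

# Every container lands in the same environment-wide subnet,
# regardless of which provider its node runs on.
ips = allocate_container_ips("10.20.0.0/24", ["api", "worker", "db"])
```

Because the subnet belongs to the environment rather than to any one provider’s network, a container on AWS and a container on Vultr can address each other as if they were on the same LAN.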
Servers are divided into named clusters — for example, you might have a production cluster and a development cluster — with compute resources that can span different providers (say, AWS, GCP and Vultr) without the user needing to do anything.
Interestingly, although the platform needs reasonable network connectivity between compute nodes, it doesn’t impose latency limitations on them. Due to the way their respective control planes work, both Kubernetes and Docker Swarm effectively do impose such limits, which is why we typically see everything running in a single region. With the Cycle model, customers aren’t responsible for managing the control plane, meaning Cycle, as a company, can manage latency at the control plane layer. Having said this, your application may have latency limitations of its own that you need to consider — and via customizable node constraints and tagging, Cycle is still friendly to those requirements.
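The tagging mechanics aren’t spelled out here, but the idea is the familiar constraint-matching pattern: a workload declares the tags it needs, and only nodes carrying all of them are scheduling candidates. A hypothetical sketch, with node names and tag strings invented for illustration:

```python
def eligible_nodes(nodes, required_tags):
    """Return the names of nodes whose tags satisfy every required
    tag. Node shape and tag vocabulary are invented for illustration."""
    required = set(required_tags)
    return [n["name"] for n in nodes if required <= set(n["tags"])]

nodes = [
    {"name": "aws-us-east-1a", "tags": {"region:us-east", "ssd"}},
    {"name": "vultr-ams-1", "tags": {"region:eu-west"}},
]

# A latency-sensitive workload can pin itself to one region even
# though the cluster as a whole spans providers and continents.
eligible_nodes(nodes, {"region:us-east"})
```

This is how a geographically dispersed cluster stays practical: the control plane tolerates latency, and workloads that can’t opt into locality via constraints.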
Where Does Cycle Fit?
One of the things that becomes apparent as you spend time with Cycle is that the people who designed and built it have spent a lot of time on infrastructure, so many of the smaller details and common problems have been thought about.
We’ve seen one example of this already with how the networking is managed. The platform also has built-in mechanisms for migrating instances from one cloud provider to another; when it does this, it automatically takes care of reconfiguring the network for you. In addition, if data needs to be moved it is handled via a streaming copy — dividing the data into chunks and then sending it across. If, like me, you have ever found yourself trying to move data from one machine to another when a disk is nearly full, you’ll appreciate how helpful this is.
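A streaming copy of this kind never needs to stage a full second copy of the data on the source disk. A minimal sketch of the chunking idea, with the buffer size an assumption:

```python
import io

def stream_copy(src, dst, chunk_size=1 << 20):
    """Copy src to dst in fixed-size chunks (1 MB by default) so only
    one chunk is ever buffered, rather than staging the whole dataset.
    Returns the total number of bytes copied."""
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:  # empty read signals end of stream
            break
        dst.write(chunk)
        total += len(chunk)
    return total

# In practice src would be a volume on the old provider and dst a
# network connection to the new one; file-like objects stand in here.
src = io.BytesIO(b"x" * 3_000_000)
dst = io.BytesIO()
stream_copy(src, dst)
```

The memory footprint stays constant no matter how large the volume is, which is exactly what you want when the source disk is nearly full.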
Moreover, automatic updates mean that any applications running on the platform are always on the latest stable version of Cycle with all the security patches applied, something which can be a challenge for firms running Kubernetes.
At a high level, this combination of multicloud support, well-thought-out features and ease of use is compelling. Warner told The New Stack they are seeing more and more customers migrating from Kubernetes to Cycle. Indeed, he told us, “a majority of the companies moving to Cycle today are moving away from Kubernetes. They spent time adopting it and using it over the years but realized the costs to maintain it weren’t worth the value it delivered.” In terms of size, Warner told us, the average company moving to Cycle has between 15 and 25 developers.
My own view is that Cycle makes most sense for teams who see the value in containers as the way to package and deploy their applications to servers, but aren’t necessarily committed to the Kubernetes way of doing things and perhaps haven’t built a DevOps or platform team yet. I don’t think it’s a coincidence that its pricing, which typically runs between US$500 and $6,000 a month, offers discounts for early-stage start-ups.
Kubernetes was designed by Google to operate huge Google properties, and many of us don’t need that sort of scale. We don’t generally question it, though, because Kubernetes has become so ubiquitous. We may yet see Cycle occupy a larger segment of the market than we might have ever imagined.
Disclosure: The author of this post has done some consulting work for cycle.io.
The post Cycle.io: Meet the Team on a Mission to Replace Kubernetes appeared first on The New Stack.