Lack of Data Mobility Is a Root Cause of Cloud Native Ills 

PARIS — It is hard to find someone these days who is not struggling with higher prices for cloud services. Similarly, many are struggling with how to try out other cloud providers, running pilots of different options to see how they work before making a major vendor switch. Others are just getting started, deploying or shifting to cloud native on one of the big three cloud providers, and many don’t even know where or how to begin.

A big question mark is how to protect our data, and especially how to move and access it as we want, rather than being stuck with one cloud provider that, again, is too often looking to raise prices. How can we extend services across different cloud providers and on-premises environments on demand, as we want and need to? In the cloud native sphere, there is also the question of what happens when, not if, a ransomware attack occurs. Are we really ready? And especially today, many organizations are seeing higher cloud bills, necessitating stricter cost management.

Simple virtualization tools are not going to provide the necessary control to manage data and applications across various cloud environments and to find cost savings. Simple storage snapshots and other shortcuts are not sufficient to protect from data loss and attacks.

“It’s very easy to go into the cloud with your cloud-based workloads, without having to worry too much about cost and security risks — you can just lift them up and put them in virtual machines,” Michael Cade, a global technologist for Veeam Software, told The New Stack at KubeCon + CloudNativeCon Europe. “But it’s not going to work out great, because you’ve got to have a plan for what you’re going to do first.”

Moving data between different cloud environments requires careful consideration to ensure successful implementation, Cody Hosterman, senior director of product management and cloud for Portworx, told The New Stack. “While the process of transferring data — ‘lift and shift’ migration — may seem straightforward, achieving success in the new cloud environment poses challenges,” Hosterman said.

Successfully navigating the move involves striking a delicate balance between managing costs and maintaining essential features, Hosterman said. “Avoiding potential pitfalls requires a thorough evaluation of existing infrastructure, applications and dependencies for compatibility with the target cloud environment,” Hosterman said. “Provisioning resources in the new environment, for example, is a critical step to replicate on-premises infrastructure.”

Organizations that run their operations on “public cloud vendor XYZ today want that freedom to move somewhere else, potentially in the future, or running where it makes the most sense economically for them,” Matt Bator, principal for Kubernetes native solutions at Kasten by Veeam, said during KubeCon + CloudNativeCon North America.

Let’s just say backups are critical for maintaining safe, secure and accessible stateful workloads on Kubernetes. Meanwhile, backups are only as good as your ability to recover them, Bator said. “That’s the name of the game here, whether we’re recovering in place, for the purposes of disaster recovery to some other cluster, or I’m using this as part of, say, like a cloning routine for dev-test or user acceptance testing in Kubernetes,” Bator said.
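To make the recovery side concrete, here is a minimal sketch, assuming a CSI driver with snapshot support and the official Kubernetes Python client: it creates a new PersistentVolumeClaim whose contents are populated from an existing VolumeSnapshot, which is typically how an in-place restore or a dev-test clone starts. The namespace, snapshot and storage class names are illustrative assumptions, not details from the article.

```python
# Minimal sketch: restore a volume by creating a PVC from an existing
# CSI VolumeSnapshot. Assumes the "kubernetes" Python client is installed
# and a snapshot named "postgres-data-snap" already exists (hypothetical).
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

restored_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "postgres-data-restored"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "csi-storage",  # assumed storage class
        "dataSource": {
            "apiGroup": "snapshot.storage.k8s.io",
            "kind": "VolumeSnapshot",
            "name": "postgres-data-snap",   # assumed existing snapshot
        },
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim("demo-app", restored_pvc)
```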

Encryption, role-based access control, auditing and immutability are “table stakes,” Bator said. “These capabilities ensure that backed-up data remains reliable. I want that ability now to move those workloads between different clouds,” Bator said. “This is a bit more trivial for stateless workloads, right? Containerization has done a lot for us in terms of the mobility of workloads, but once I’ve got to solve this data gravity problem as part of these workloads, and I want to be doing this on a regular basis, maybe I want to enable hybrid cloud disaster recovery.”
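Immutability in particular usually means writing backup copies to storage that refuses modification until a retention date passes. A minimal sketch with boto3 and S3 Object Lock follows; the bucket and key names are hypothetical, and the bucket would need Object Lock enabled at creation time.

```python
# Minimal sketch: store a backup copy under S3 Object Lock so it cannot be
# altered or deleted before the retention date, a common way to get the
# "immutability" table stake. Bucket and key names are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

with open("app-backup.tar.gz", "rb") as backup:
    s3.put_object(
        Bucket="backup-archive",               # hypothetical bucket
        Key="clusters/prod/app-backup.tar.gz",
        Body=backup,
        ObjectLockMode="COMPLIANCE",           # retention cannot be shortened
        ObjectLockRetainUntilDate=datetime.now(timezone.utc)
        + timedelta(days=30),
    )
```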

Ultimately, getting the data there is easy; making it successful is not. The challenge lies in making the migration cost-effective without compromising management and feature capabilities. Without careful planning and execution, organizations risk poor performance and resource loss, emphasizing the need for strategic decision-making throughout the entire process.

“A proper data mobility solution should be able to interface with Kubernetes distributions directly,” Bator said. “I can’t just snapshot a worker node and think that applications spread across multiple worker nodes I can ‘magically’ restore. So I need to start with all that application metadata,” he said. “I need to be able to integrate with my underlying storage infrastructure to be able to orchestrate volume snapshots of my data. And I need to be able to package all of that up in a way where I can get it off of the cluster” on an as-needed basis.
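As a rough illustration of that orchestration, the sketch below uses the official Kubernetes Python client to gather a namespace’s application metadata and request a CSI VolumeSnapshot for each PersistentVolumeClaim. It is not Kasten’s implementation; the namespace and snapshot class names are assumptions.

```python
# Minimal sketch: capture application metadata for a namespace, then request
# a CSI VolumeSnapshot for each PersistentVolumeClaim it owns. Not Kasten's
# implementation; "demo-app" and "csi-snapclass" are assumed names.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()
custom = client.CustomObjectsApi()

namespace = "demo-app"  # hypothetical application namespace

# Application metadata a restore would need: workloads, config, claims.
deployments = apps.list_namespaced_deployment(namespace).items
configmaps = core.list_namespaced_config_map(namespace).items
pvcs = core.list_namespaced_persistent_volume_claim(namespace).items
print(f"{len(deployments)} deployments, {len(configmaps)} configmaps, "
      f"{len(pvcs)} volumes to protect")

# Orchestrate a volume snapshot per PVC through the CSI snapshot CRDs.
for pvc in pvcs:
    snapshot = {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": f"{pvc.metadata.name}-snap"},
        "spec": {
            "volumeSnapshotClassName": "csi-snapclass",  # assumed class
            "source": {"persistentVolumeClaimName": pvc.metadata.name},
        },
    }
    custom.create_namespaced_custom_object(
        group="snapshot.storage.k8s.io",
        version="v1",
        namespace=namespace,
        plural="volumesnapshots",
        body=snapshot,
    )
```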

A mechanism to ensure consistency for applications is also critical. Not only databases but all workloads across different cloud and on-premises environments must be manageable through a single Kubernetes API that can extract all of this information. In the case of Kasten, the backup tooling runs in a namespace alongside the rest of the applications on the cluster and is driven through a custom resource API. With capabilities such as blueprints, it can integrate with other components of the stack, including policy as code and automation tools such as a GitOps pipeline.
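As a hedged sketch of the policy-as-code idea, the snippet below builds a backup policy as a custom resource and writes it to a YAML file that a GitOps pipeline could apply. The API group, version and field names are loosely modeled on Kasten K10’s Policy custom resource and should be treated as assumptions rather than its exact schema.

```python
# Minimal sketch of "policy as code": a backup policy expressed as a
# Kubernetes custom resource, committed to Git and applied by a GitOps tool.
# Group, version and fields are assumptions loosely modeled on Kasten K10.
import yaml  # PyYAML

backup_policy = {
    "apiVersion": "config.kio.kasten.io/v1alpha1",  # assumed group/version
    "kind": "Policy",
    "metadata": {"name": "demo-app-daily", "namespace": "kasten-io"},
    "spec": {
        "frequency": "@daily",
        "actions": [{"action": "backup"}],
        "selector": {
            "matchLabels": {"k10.kasten.io/appNamespace": "demo-app"}
        },
    },
}

# Write the manifest into the repo a GitOps controller watches (hypothetical
# path); the controller, not this script, applies it to the cluster.
with open("policies/demo-app-daily.yaml", "w") as f:
    yaml.safe_dump(backup_policy, f)
```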

“I want to completely remove friction from my automating data protection and ensuring that I’m doing snapshots of my app, backups of my app, every time before I have my continuous delivery or continuous deployment pipeline push new code into production,” Bator said. “So these are, again, some of the big advantages of being Kubernetes native, as opposed to I’ve got this cool bolt-on thing to my 20-year-old backup that also happens to talk to Kubernetes.”
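One way to picture that friction-free pipeline step is a pre-deploy gate: back up first, deploy only if the backup succeeded. The sketch below is a generic stand-in, assuming kubectl is on the path; run_backup() is a placeholder for whatever on-demand backup call the chosen tool exposes, and all names are illustrative.

```python
# Minimal sketch of a pre-deploy gate in a CD pipeline: take a backup, and
# only push new manifests if it succeeded. run_backup() is a placeholder
# for the backup tool's real on-demand API; all names are illustrative.
import subprocess
import sys


def run_backup(namespace: str) -> bool:
    """Placeholder: trigger an on-demand backup of the namespace.

    In practice this would call the backup tool's API (for example,
    creating an on-demand action resource) and wait for completion.
    """
    print(f"triggering backup for {namespace} (stand-in)")
    return True  # assume success in this sketch


if not run_backup("demo-app"):
    sys.exit("backup failed; refusing to deploy new code")

# Deployment step: apply the new manifests (hypothetical path).
subprocess.run(["kubectl", "apply", "-f", "manifests/"], check=True)
```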

Ransomware Scary

What kind of depraved and evil person intentionally orchestrates ransomware attacks that lead to death and harm at hospitals, schools and daycare centers (public service utilities aside)? These types of attacks continue to occur with surprising frequency at all types of organizations. Kubernetes is certainly no exception, and being prepared to truly recover when, not if, an attack happens is critical, and doable.

“The ransomware boogeyman still exists in the Kubernetes world. We are not immune from any of that, and I want to be able to take advantage of a hybrid multicloud environment no matter what,” Bator said. “I need to be able to depend on that data because it’s my last line of defense against that ransomware boogeyman.”

Get Started

How do you start, and whose job is it to get started? It should be a team effort among developers, operations folks and CTOs. Everybody needs to have, or be part of, this data mobility insurance, as I would describe it. It’s more than just data storage and applications, especially for stateful applications and the weight of their data. Once deployed, whether on a single cloud provider’s network or across different cloud native environments in a hybrid structure, it is paramount to ensure that data mobility, storage solutions and disaster recovery, whether against ransomware attacks or inadvertently deleted data, all work seamlessly. It’s not about separate solutions; it should be one single solution.

Many are still “kind of confused about who owns backup for Kubernetes. Is it developers? Is it platform engineering teams or DevOps teams? Is it legacy backup administrators?” Bator said. “I will say that it’s probably more of an ‘and’ proposition than an ‘or’ proposition. This is an ‘it takes a village’ approach.”
