
How Oracle Is Meeting the Infrastructure Needs of AI

Sudha Raghavan, SVP for Developer Platform at Oracle Cloud Infrastructure, discussed how AI’s rapid adoption has reshaped infrastructure needs, and how Oracle is addressing them by building GPU superclusters and enhancing Kubernetes functionality.

Generative AI is largely a data story, but it’s also an infrastructure and operations tale. GPUs are ascendant because they are better suited than CPUs to the computing demands of AI.

Customers of cloud providers like Oracle are demanding new things to help them handle the infrastructure needs of their AI workloads, said Sudha Raghavan, senior vice president for developer platform at Oracle Cloud Infrastructure, in this episode of The New Stack Makers.

“Right after ChatGPT was released, Gen AI has been literally taking over the world,” Raghavan said. “It is the fastest adopted technology today. And along with that came this rush of GPU requirements. Everybody wants a GPU, whether they know how to use it, or even if it is the right choice for the problem they’re trying to solve.”

In this On the Road episode of Makers, recorded at KubeCon + CloudNativeCon North America in Salt Lake City, Raghavan spoke with host Alex Williams, TNS editor and publisher, about how infrastructure needs are changing and what Oracle is doing to help accommodate customers.

AI Workloads and Kubernetes

The need for GPUs isn’t monolithic, Raghavan pointed out. It can stretch from a small need to test a workload to “hundreds of thousands of nodes in one cluster, running one job — like a big, large batch job, a training job. And that space has really not been explored.”

For hyperscalers in the cloud services arena, it’s a fresh challenge: running massive numbers of GPU nodes for “days and weeks and months,” as she put it, to train large language models demands enormous amounts of power.

“For web workloads, you have a peak and a trough, right?” she said. “But these GPU batch workloads, there is no trough. Everything is running at peak all the time, and so the demand of power, the failure rate of the hardware [is] extremely accentuated.”

For one customer, Oracle Cloud Infrastructure is building “the largest supercluster for GPUs,” a 131,000-plus GPU node cluster meant to run one job. “Now, can you imagine if one of those nodes goes down and the whole job just stops?” Raghavan asked. “What does that mean?”

The workloads are stateful, which brings up another challenge. “That one node going off, maybe for a few minutes, you lose what the node was doing because it didn’t persist … And then when the node comes back up, it needs to pick up where the job is, not where it left it. The state management of the overall job is very critical, very important, and that’s something we haven’t seen with traditional cloud native CPU workloads.”
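
To make that concrete, here is a minimal checkpoint-and-resume sketch in Python of the pattern Raghavan describes. The file path, interval and `train_step` function are hypothetical illustrations, not Oracle’s implementation; the point is that persisted state lets a restarted node pick up where the job is rather than replay weeks of work.

```python
import os
import pickle

CHECKPOINT_PATH = "checkpoint.pkl"  # hypothetical path; real jobs write to durable storage
CHECKPOINT_EVERY = 100              # steps between persists; an illustrative choice

def train_step(state):
    """Hypothetical stand-in for one step of a long-running training job."""
    state["step"] += 1
    state["loss"] = 1.0 / state["step"]
    return state

def load_or_init_state():
    # If the node died and came back, resume from the last persisted state,
    # not from the beginning of the job.
    if os.path.exists(CHECKPOINT_PATH):
        with open(CHECKPOINT_PATH, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "loss": None}

def save_state(state):
    # Write-then-rename so a crash mid-write never corrupts the last good checkpoint.
    tmp = CHECKPOINT_PATH + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT_PATH)

state = load_or_init_state()
while state["step"] < 1000:
    state = train_step(state)
    if state["step"] % CHECKPOINT_EVERY == 0:
        save_state(state)
```

Without that persisted state, every node failure would restart the whole multi-week job from step zero, which is exactly the risk Raghavan flags at supercluster scale.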

The need to run AI workloads on GPUs is also causing organizations to require new things from Kubernetes, Raghavan said — including a more tailored sort of observability.

Oracle‘s Node Manager, she said, “helps all of these chip makers give plugins to that Node Manager as the single interface that Kubernetes will talk to in order to collect data … Node Manager provides the single-layer API, and Kubernetes infrastructure is now abstracted from the actual chip.”
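
To picture that plugin model, here is a rough Python sketch: each chip maker ships a plugin conforming to one interface, and the Kubernetes-facing layer only ever calls the node manager. All class and method names here are hypothetical stand-ins, not Oracle Node Manager’s actual API.

```python
from abc import ABC, abstractmethod

class GPUVendorPlugin(ABC):
    """Hypothetical contract a chip maker's plugin would implement."""

    @abstractmethod
    def collect_metrics(self) -> dict:
        """Return health and telemetry data for this vendor's hardware."""

class ExampleVendorPlugin(GPUVendorPlugin):
    # Stand-in for a real vendor's implementation.
    def collect_metrics(self) -> dict:
        return {"gpu_utilization": 0.97, "ecc_errors": 0, "temperature_c": 74}

class NodeManager:
    """The single layer Kubernetes talks to; chip specifics stay behind plugins."""

    def __init__(self):
        self._plugins: dict[str, GPUVendorPlugin] = {}

    def register(self, vendor: str, plugin: GPUVendorPlugin) -> None:
        self._plugins[vendor] = plugin

    def collect_all(self) -> dict:
        # Kubernetes sees one API regardless of which vendor's chips are present.
        return {vendor: p.collect_metrics() for vendor, p in self._plugins.items()}

manager = NodeManager()
manager.register("example-vendor", ExampleVendorPlugin())
print(manager.collect_all())
```

The design choice being illustrated: Kubernetes is coded against one stable surface, so mixing or swapping GPU vendors changes which plugins are registered, not the cluster-facing code.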

She added, “We cloud providers are trying very hard to not increase the complexity of something like Kubernetes but provide the functionality that Kubernetes users are accustomed to.”

Check out the full episode, in which Raghavan takes us on a deep, technical dive into the challenges of infrastructure with AI workloads.


