
SALT LAKE CITY — KubeCon North America 2024’s theme, “Scaling New Heights,” resonated with me for multiple reasons. The high-altitude Salt Lake City venue is the obvious one, but the theme also aligns strongly with my own career journey and with the container industry’s overall trajectory.
Utah is where I started working in container platforms, forging some of my deepest professional relationships. My recent shift to AI solutions marketing, a new peak for my career, created a refreshing sense of excitement and clarity. Diving into the intricacies of AI technologies, I’ve discovered a new paradigm for innovation and insights that are truly invigorating, allowing me to rethink how we talk about and address AI app delivery challenges.
More importantly, the view from the summit at KubeCon crystallized the opportunities AI holds for the container industry. AI is not a fleeting trend but an essential shift in application architecture and capabilities. Being clear-eyed, however, also makes the gaps in AI app governance hard to miss.
These topics are not getting enough oxygen at the summit, partly because the space is still nascent. This realization sparked numerous conversations at KubeCon about the considerations around scale we must address as organizations deliver more and more AI-powered applications. The following are some of the most impactful things I heard about AI at KubeCon.
The Ostrich Effect and AI
Nikhita Raghunath, principal engineer at Broadcom, offered a memorable thought experiment during her KubeCon keynote when she compared ignoring AI to an ostrich with its head in the sand. While this is a whimsical illustration of avoiding the unexpected challenges we face in tech, I had another idea: “What if we not only don’t stick our heads in the sand, but do the opposite and run headlong toward AI at top speed?”
Yes, I talk about AI safety, but I also think “letting the perfect be the enemy of the good” means you risk going up the mountain alone, trying to perfect your work before you let anyone see it. Summiting any mountain alone isn’t a good idea. This is where the Cloud Native Computing Foundation (CNCF) community can really help exponentially speed up iteration and identify more opportunities to solve bigger problems together.
One such bullish open source adopter is Capital One, which stood out as a practical, real-world adoption case. Aparna Sinha, SVP and head of generative AI product at Capital One, delivered a keynote about how her organization approaches banking from a data and technology perspective. The company is embracing GenAI to automate customer service, low-level engineering tasks (freeing developers for higher-value work) and mundane back-office tasks (for greater overall efficiency). However, Sinha said, the team mostly relies on managed services to stitch the platform together from open source solutions because capabilities are still fragmented.
Emergence of AI Middleware
At KubeCon, members of the CNCF community showcased their approach to AI application delivery by presenting several AI “gateways” designed to simplify complex workflows. These gateways underscore the importance of standardization and interoperability in AI app delivery and machine learning.
While enterprise-ready platforms are key to sustainable technology adoption, the projects unveiled at KubeCon are not yet capable of providing complete abstractions. The community continues to grapple with challenges such as feature and platform fragmentation. Integrating elements like authorization, cost management, resource allocation, observability and compliance from various CNCF projects creates complexity.
For application platforms to support GenAI app delivery, tools must abstract complexity while staying flexible enough to support model swapping and the other rapid-experimentation tactics crucial to GenAI development. The announcements at KubeCon highlight the ongoing challenges in scaling GenAI app delivery and reveal a gap that suggests the need for a new solution category: AI middleware.
Early indications suggest that AI middleware could simplify complexities for platform engineers, data scientists and developers, enhancing scalability and safety. AI middleware will break down silos between teams and application types, enabling organizations to develop, operate and optimize GenAI applications as seamlessly as any other app.
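To make the middleware idea concrete, here is a minimal sketch of the kind of abstraction such a layer could provide. All names here (`ModelBackend`, `AIGateway`, the example backends) are hypothetical and not tied to any specific CNCF project: the point is that application code talks to one interface, so the backing model can be swapped for experimentation without touching the app.

```python
from abc import ABC, abstractmethod


class ModelBackend(ABC):
    """Provider-agnostic interface a middleware layer might expose."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class HostedModelBackend(ModelBackend):
    """Placeholder: a real backend would call a hosted provider's API here."""

    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"


class LocalModelBackend(ModelBackend):
    """Placeholder: a real backend would invoke a locally served model."""

    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


class AIGateway:
    """Routes requests to whichever backend is configured, so teams can
    swap models (for cost, latency or compliance reasons) without
    changing application code."""

    def __init__(self, backend: ModelBackend):
        self._backend = backend

    def swap(self, backend: ModelBackend) -> None:
        self._backend = backend

    def ask(self, prompt: str) -> str:
        return self._backend.complete(prompt)
```

An application built against `AIGateway` can move from a hosted model to a local one with a single `swap` call, which is exactly the kind of rapid experimentation the gateway projects at KubeCon are trying to enable.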
Opportunity Costs and Safe AI
With AI gateways just coming online, it may seem premature to think about abstractions. However, the fast-paced evolution of the AI landscape necessitates even quicker advancements in tools. Sessions on AI safety emphasized the importance of recursive learning cycles in AI models and the increasing demand for governance capabilities.
In a particularly thought-provoking session, Shane Lawrence, senior staff engineer at Shopify, discussed security threats to AI models. He highlighted ongoing threat scenarios like product name squatting, typo-squatting and malicious actors injecting deceptive dependencies or libraries with names similar to legitimate ones, jeopardizing model integrity. (It also evoked a brief feeling that Skynet could become a reality.) Another concern he raised was using large language models (LLMs) to generate code that exploits known vulnerabilities and to obfuscate code that hides back doors, making risky elements easier to conceal.
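As a rough illustration of the typo-squatting risk Lawrence described (my sketch, not his tooling), a registry or CI step can flag dependency names that are suspiciously close to, but not exactly, well-known package names:

```python
import difflib

# Hypothetical allow-list; a real check would use a curated registry feed.
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "langchain"}


def flag_typosquats(dependencies):
    """Return (dependency, lookalike) pairs where a name is a near-miss
    of a known package but not an exact match."""
    suspicious = []
    for dep in dependencies:
        if dep in KNOWN_PACKAGES:
            continue  # exact matches are legitimate
        close = difflib.get_close_matches(dep, KNOWN_PACKAGES, n=1, cutoff=0.85)
        if close:
            suspicious.append((dep, close[0]))
    return suspicious
```

For example, `flag_typosquats(["requestz", "numpy"])` flags `requestz` as a lookalike of `requests` while leaving the genuine `numpy` alone. Real-world scanners are far more sophisticated, but even a simple fuzzy-match gate like this raises the bar against the deceptive-dependency attacks discussed in the session.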
Clearly, bad actors are finding more sophisticated ways to exploit vulnerabilities — that’s just another day in the Tanzu world. However, it was encouraging to hear presenters at KubeCon begin to discuss potential solutions. This situation underscores the need for cohesive AI middleware to enhance visibility into model accuracy, behaviors and user activities to ultimately mitigate risks.
Until We Meet Again!
KubeCon North America 2024 is over, yet the energy from the event still resonates with me. It was invigorating to witness such a dynamic community of creators come together, freely exchanging knowledge and ideas.
I was fortunate to engage with an impressive array of explorers, observers and platform creators at the event. As they advance in their AI journey, emerging projects are beginning to establish governance for their AI applications. If you’re interested in helping shape the future of AI solutions, apply to become a VMware Tanzu AI solutions design partner.
The post KubeCon Keynotes Wrestle With AI Governance Complexities appeared first on The New Stack.
AI is not going away, and the container industry needs to adapt its governance model to scale as fast as GenAI is evolving.