Saturday, July 12, 2025

The great AI agent acceleration: Why enterprise adoption is happening faster than anyone predicted





The chatter around artificial general intelligence (AGI) may dominate headlines from Silicon Valley companies like OpenAI, Meta and xAI, but for enterprise leaders on the ground, the focus is squarely on practical applications and measurable results. At VentureBeat’s recent Transform 2025 event in San Francisco, a clear picture emerged: the era of real, deployed agentic AI is here, it is accelerating, and it is already reshaping how businesses operate.

Companies like Intuit, Capital One, LinkedIn, Stanford University and Highmark Health are quietly putting AI agents into production, tackling concrete problems, and seeing tangible returns. Here are the four biggest takeaways from the event for technical decision-makers.

1. AI agents are moving into production faster than anyone realized

Enterprises are now deploying AI agents in customer-facing applications, and the trend is accelerating at a breakneck pace. A recent VentureBeat survey of 2,000 industry professionals conducted just before VB Transform revealed that 68% of enterprise companies (with 1,000+ employees) had already adopted agentic AI – a figure that seemed high at the time. (In fact, I worried it was too high to be credible, so when I announced the survey results on the event stage, I cautioned that the high adoption may be a reflection of VentureBeat’s specific readership.)

However, new data validates this rapid shift. A KPMG survey released on June 26, a day after our event, shows that 33% of organizations are now deploying AI agents, a surprising threefold increase from just 11% in the previous two quarters. This market shift validates the trend VentureBeat first identified just weeks ago in its pre-Transform survey.

This acceleration is being fueled by tangible results. Ashan Willy, CEO of New Relic, noted a staggering 30% quarter-over-quarter growth in monitoring of AI applications by its customers, driven mainly by those customers’ move to adopt agents. Companies are deploying AI agents to automate workflows on their customers’ behalf. Intuit, for instance, has deployed invoice generation and reminder agents in its QuickBooks software. The result? Businesses using the feature are getting paid five days faster and are 10% more likely to be paid in full.

Even non-developers are feeling the shift. Scott White, the product lead of Anthropic’s Claude AI product, described how he, despite not being a professional programmer, is now building production-ready software features himself. “This wasn’t possible six months ago,” he explained, highlighting the power of tools like Claude Code. Similarly, OpenAI’s head of product for its API platform, Olivier Godement, detailed how customers like Stripe and Box are using its Agents SDK to build out multi-agent systems.

2. The hyperscaler race has no clear winner as multi-cloud, multi-model reigns

The days of betting on a single large language model (LLM) provider are over. A consistent theme throughout Transform 2025 was the move towards a multi-model and multi-cloud strategy. Enterprises want the flexibility to choose the best tool for the job, whether it’s a powerful proprietary model or a fine-tuned open-source alternative.

As Armand Ruiz, VP of AI Platform at IBM, explained, the company’s development of a model gateway, which routes applications to whichever LLM is most efficient and performant for the specific case, was a direct response to customer demand. IBM started by offering enterprise customers its own models, then added open-source support, and finally realized it needed to support all models. This desire for flexibility was echoed by XD Huang, the CTO of Zoom, who described his company’s three-tiered model approach: supporting proprietary models, offering its own fine-tuned model and allowing customers to create their own fine-tuned versions.
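The core idea behind a model gateway like the one Ruiz describes can be sketched as a cost-aware router that picks the cheapest model capable of handling a given task. The sketch below is purely illustrative — the model names, capability tiers and per-token costs are invented for the example, and this is not IBM’s actual gateway design.

```python
# Illustrative sketch of a model gateway: route each request to the
# cheapest registered model whose capability tier satisfies the task.
# All model names, tiers, and costs here are hypothetical.
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    tier: int            # capability tier: higher handles harder tasks
    cost_per_1k: float   # illustrative cost per 1K tokens

REGISTRY = [
    Model("small-open-source", tier=1, cost_per_1k=0.01),
    Model("fine-tuned-domain", tier=2, cost_per_1k=0.05),
    Model("large-proprietary", tier=3, cost_per_1k=0.50),
]


def route(required_tier: int) -> Model:
    """Pick the cheapest registered model that meets the required tier."""
    candidates = [m for m in REGISTRY if m.tier >= required_tier]
    if not candidates:
        raise ValueError(f"no registered model meets tier {required_tier}")
    return min(candidates, key=lambda m: m.cost_per_1k)

# A simple classification task goes to the cheapest model, while a
# complex reasoning task escalates to the proprietary one.
print(route(1).name)  # small-open-source
print(route(3).name)  # large-proprietary
```

In practice a gateway also has to consider latency, data-residency constraints and per-customer model allowlists, but the routing decision reduces to the same pattern: filter by capability, then optimize on cost.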

This trend is creating a powerful but constrained ecosystem in which GPUs, and the power needed to generate tokens, are in limited supply. As Dylan Patel of SemiAnalysis and fellow panelists Jonathan Ross of Groq and Sean Lie of Cerebras pointed out, this squeezes the profitability of companies that simply buy more tokens whenever they become available instead of locking in margin as token costs continue to fall. Enterprises are getting smarter about using different models for different tasks to optimize for both cost and performance, and that often means not just relying on Nvidia chips but adopting much more customized hardware, a point also echoed in a VB Transform session led by Solidigm on the emergence of customized memory and storage solutions for AI.

3. Enterprises are focused on solving real problems, not chasing AGI

While tech leaders like Elon Musk, Mark Zuckerberg and Sam Altman are talking about the dawn of superintelligence, enterprise practitioners are rolling up their sleeves and solving immediate business challenges. The conversations at Transform were refreshingly grounded in reality.

Take Highmark Health, the nation’s third-largest integrated health insurance and provider company. Its chief data officer, Richard Clarke, said it is using LLMs for practical applications like multilingual communication, to better serve its diverse customer base, and for streamlining medical claims. In other words, it is leveraging the technology to deliver better services today. Similarly, Capital One is building teams of agents that mirror the functions of the company, with specific agents for tasks like risk evaluation and auditing, including helping its car dealership clients connect customers with the right loans.

The travel industry is also seeing a pragmatic shift. CTOs from Expedia and Kayak discussed how they are adapting to new search paradigms enabled by LLMs. Users can now search for a hotel with an “infinity pool” on ChatGPT, and travel platforms need to incorporate that level of natural language discovery to stay competitive. The focus is on the customer, not the technology for its own sake.

4. The future of AI teams is small, nimble, and empowered

The age of AI agents is also transforming how teams are structured. The consensus is that small, agile “squads” of three to four engineers are most effective. Varun Mohan, CEO of Windsurf, a fast-growing agentic IDE, kicked off the event by arguing that this small team structure allows for rapid testing of product hypotheses and avoids the slowdown that plagues larger groups.

This shift means that “everyone is a builder,” and increasingly, “everyone is a manager” of AI agents. As GitHub and Atlassian noted, engineers are now learning to manage fleets of agents. The skills required are evolving, with a greater emphasis on clear communication and strategic thinking to guide these autonomous systems.

This nimbleness is supported by a growing acceptance of sandboxed development. Andrew Ng, a leading voice in AI, advised attendees to leave safety, governance, and observability to the end of the development cycle. While this might seem counterintuitive for large enterprises, the idea is to foster rapid innovation within a controlled environment to prove value quickly. This sentiment was reflected in our survey, which found that 10% of organizations adopting AI have no dedicated AI safety team, suggesting a willingness to prioritize speed in these early stages.

Together, these takeaways paint a clear picture of an enterprise AI landscape that is maturing rapidly, moving from broad experimentation to focused, value-driven execution. The conversations at Transform 2025 showed that companies are deploying AI agents today, even if they’ve had to learn tough lessons on the way. Many have already gone through one or two big pivots since first trying out generative AI one or two years ago — so it’s good to get started early.

For a more conversational dive into these themes and further analysis from the event, you can listen to the full discussion I had with independent AI developer Sam Witteveen on our recent podcast below. We’ve also just uploaded the main-stage talks from VB Transform here. And our full article coverage of the event is here.

Listen to the VB Transform takeaways podcast with Matt Marshall and Sam Witteveen here:

Editor’s note: As a thank-you to our readers, we’ve opened up early bird registration for VB Transform 2026 — just $200. This is where AI ambition meets operational reality, and you’re going to want to be in the room. Reserve your spot now.


