Google Cloud Kills Vertex AI: The Gemini Enterprise Agent Platform Bet
On April 22 in Las Vegas, Google Cloud did something most hyperscalers avoid: it killed a brand that was working. Vertex AI — the platform name every enterprise buyer already knew — was folded into the Gemini Enterprise Agent Platform, a unified hub that absorbs Vertex AI, Agentspace, and a handful of scattered agent-building tools under a single roof. The same keynote committed $750 million to Google Cloud’s 120,000 partners for agentic AI work, pushed the A2A protocol to v1.0 in production at 150 organizations, and opened Model Garden to 200+ models including first-class Anthropic Claude.
Google’s pitch: agents are the new operating system. The real question is whether the software underneath actually changed, or whether this is the most expensive rebrand in cloud history.
Quick Summary: What Google Announced
| Detail | Info |
|---|---|
| Date | April 22, 2026 (Google Cloud Next ’26, Las Vegas) |
| Headline launch | Gemini Enterprise Agent Platform — replaces Vertex AI and absorbs Agentspace |
| Key surfaces | Agent Designer (no-code), Inbox (agent command center), Skills (reusable actions), long-running agents |
| Partner money | $750M fund across Accenture, Capgemini, Cognizant, Deloitte, HCLTech, TCS, and 120K channel partners |
| Protocol | A2A v1.0 — 150 production orgs, embedded in Azure AI Foundry and Amazon Bedrock AgentCore |
| Model Garden | 200+ models, including Anthropic Claude Opus/Sonnet/Haiku as first-class options |

Bottom line: The rebrand is cosmetic. The underlying consolidation, the A2A traction, and the fact that Google now ships Anthropic Claude as a feature of its own agent platform — those are real. The risk for Google is that the new OS runs competitors’ apps better than its own.
The naming alone suggests scope. Vertex AI, as a brand, is gone. All of its services and roadmap now flow through the Gemini Enterprise Agent Platform, which Google is describing as the place where “developers build, scale, govern, and optimize fleets of agents.” Agentspace — the user-facing agent experience Google launched in late 2024 — has been pulled into the same umbrella.
Four surfaces matter from the keynote:
Agent Designer. A no-code agent builder that lives inside the Gemini Enterprise app. You describe what you want in natural language, or you drag through a visual flowchart, and Google generates a schedule- or trigger-based agent that connects to your enterprise data. It’s in preview on an allowlist. This is the surface aimed squarely at knowledge workers who are never going to open a terminal.
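For a sense of what Agent Designer's output might boil down to, here is a hypothetical Python sketch of a schedule-triggered agent definition. Every field name below is an assumption for illustration; Google has not published the underlying schema.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a schedule- or trigger-based agent definition.
# None of these field names come from Google's actual API.
@dataclass
class AgentDefinition:
    name: str
    instructions: str                 # the natural-language description
    trigger: str                      # e.g. "cron:0 9 * * MON" or "event:new_ticket"
    data_sources: list[str] = field(default_factory=list)

weekly_digest = AgentDefinition(
    name="weekly-sales-digest",
    instructions="Summarize last week's pipeline changes for the EMEA team.",
    trigger="cron:0 9 * * MON",
    data_sources=["salesforce", "bigquery:sales_mart"],
)
print(weekly_digest.name)
```

The point of the sketch: the natural-language prompt and the trigger are the whole authoring surface; everything else is generated.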
Inbox. The command center where every agent action lands. Notifications get grouped into “Needs your input,” “Errors,” and “Completed.” If your agents are running for hours or days at a time, you need somewhere to watch them. Google built that somewhere, and built it as a first-class product, not a dashboard bolted onto Workspace.
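The triage logic itself is simple enough to sketch. This is a toy Python version of the three-bucket grouping; the event shape here is an assumption, not Google's actual notification schema.

```python
# Toy sketch of Inbox-style triage: sort agent events into the three
# buckets from the keynote. Event fields are assumed, not Google's API.
def triage(events):
    buckets = {"Needs your input": [], "Errors": [], "Completed": []}
    for e in events:
        if e["status"] == "error":
            buckets["Errors"].append(e)
        elif e.get("awaiting_human"):
            buckets["Needs your input"].append(e)
        else:
            buckets["Completed"].append(e)
    return buckets

events = [
    {"agent": "invoice-bot", "status": "done"},
    {"agent": "brief-bot", "status": "running", "awaiting_human": True},
    {"agent": "etl-bot", "status": "error"},
]
inbox = triage(events)
print({k: len(v) for k, v in inbox.items()})
# prints {'Needs your input': 1, 'Errors': 1, 'Completed': 1}
```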
Skills. Reusable actions agents can invoke without being re-prompted from scratch. Apply brand guidelines. Format a report to spec. Once you teach an agent the workflow, it becomes a Skill, and the rest of the team can call it by name. This is the piece most people will underestimate. Skills are how agent outputs stop being one-off and start being institutional.
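A minimal sketch of the Skills idea, assuming nothing more than a name-to-function registry (all names here are illustrative, not Google's API): once a workflow is registered under a name, any teammate's agent can invoke it without re-prompting.

```python
# Minimal sketch of "Skills": named, reusable actions that any agent
# can call by name. Registry and skill names are invented for illustration.
SKILLS = {}

def skill(name):
    """Register a function as a callable Skill under a stable name."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("apply-brand-guidelines")
def apply_brand(text):
    # Stand-in for a real brand-guidelines pass
    return text.replace("Google cloud", "Google Cloud")

def invoke(name, *args):
    return SKILLS[name](*args)

print(invoke("apply-brand-guidelines", "Google cloud pricing"))
# prints "Google Cloud pricing"
```

The institutional part is the stable name: the team shares "apply-brand-guidelines," not a prompt someone has to paste around.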
Long-running agents. Agents that run independently “up to days at a time” on complex multi-step problems, managed through Inbox. This is the same architectural bet Anthropic made with Claude Code Routines — the assumption that humans stop being the rate-limiter on how many tasks an agent can grind through.
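The pattern is sketchable in a few lines: work through a plan autonomously, and park anything that needs a human so it surfaces in the Inbox. Function names and steps below are invented for illustration.

```python
# Sketch of the long-running pattern: the agent grinds through a plan on
# its own and parks steps it can't decide. Names are illustrative only.
def run_agent(plan, needs_human):
    completed, parked = [], []
    for step in plan:
        if needs_human(step):
            parked.append(step)       # would surface as "Needs your input"
        else:
            completed.append(step)    # in reality: model calls plus tool use
    return completed, parked

plan = ["gather invoices", "reconcile totals", "approve $50k writeoff"]
done, parked = run_agent(plan, needs_human=lambda s: "approve" in s)
print(done, parked)
```

The architectural bet is in the loop, not the model: the agent keeps going until it hits a step that genuinely requires a person.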
Underneath all four, the platform piece — Agent Studio (low-code), Agent Registry, Agent Identity, Agent Gateway, Agent Observability, a simulation environment for stress-testing before deployment, and a marketplace for third-party agents. This is the scaffolding that used to be scattered across five different Google products. Now it’s one.
Google committed $750 million specifically to its partner ecosystem for agentic AI work. Not to internal R&D. Not to infrastructure. To partners.
What that money actually buys, per the press release:

- Forward-deployed Google engineers embedded at the major SIs (Accenture, Capgemini, Cognizant, Deloitte, HCLTech, TCS)
- Gemini Enterprise practice-building programs for the global consultancies
- Wiz security assessments and Gemini proofs-of-concept
- Usage incentives for channel partners who ship production agent deployments
The framing Google wants: this is the largest single partner investment any hyperscaler has ever made. The framing that matters: Google’s enterprise direct sales motion has historically been its weak spot against AWS and Azure. If agent deployment is going to be a services-heavy sale for the next three years — and every reported enterprise deployment suggests it will be — then the consulting shops are the distribution channel. $750M is Google paying those shops to lead with Gemini.
The Agent-to-Agent protocol hit v1.0 alongside the keynote, and the supporter count went from roughly 50 at launch in April 2025 to more than 150 today — including AWS, Cisco, IBM, Microsoft, Salesforce, SAP, and ServiceNow.
The numbers that matter more than the logos:
A2A now has genuinely cross-cloud mindshare. That’s a rare thing in this industry — the last protocol that hit this kind of multi-hyperscaler support at launch-plus-one-year was Kubernetes, and the comparison isn’t accidental. The Model Context Protocol reached 97 million installs on a similar trajectory, and the two protocols increasingly sit side-by-side in production agent deployments. MCP handles tools and data; A2A handles agent-to-agent delegation. The enterprise pattern is converging on “use both.”
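The "use both" split shows up in the message shapes. In the sketch below, the MCP `tools/call` method is the protocol's real JSON-RPC method; the A2A payload is a simplified illustration, not the exact v1.0 wire format.

```python
# Rough contrast of the two protocols' roles. "tools/call" is MCP's real
# JSON-RPC method; the A2A payload is simplified, not the exact v1.0 format.
mcp_tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",            # MCP: an agent invokes a tool/data source
    "params": {
        "name": "query_warehouse",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

a2a_delegation = {
    "method": "message/send",          # A2A: an agent hands work to another agent
    "params": {
        "target": "finance-agent",     # field names here are assumptions
        "task": "reconcile Q2 invoices",
    },
}

print(mcp_tool_call["method"], "vs", a2a_delegation["method"])
```

One protocol reaches down into tools and data; the other reaches sideways to a peer. That is why production deployments end up running both.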
100+ partner agents shipped on day one. Salesforce, ServiceNow, Workday, Atlassian, Adobe, and Deloitte all had agents in the marketplace at launch, ready to be invoked from inside Gemini Enterprise.
Read that carefully. The companies that own the enterprise system of record for sales (Salesforce), IT service (ServiceNow), HR (Workday), and dev collaboration (Atlassian) all shipped day-one agents that slot into Google’s agent platform. Not Google’s competitor. Google’s.
This is the opposite of the AWS strategy. AWS wants every workload to run on AWS. Google is betting that enterprises already have their systems of record and don’t want to move them — but they’ll let a Google-branded agent orchestrate across them. That’s a cheaper bet to win, because it doesn’t require enterprises to migrate anything. It just requires them to pick a central agent layer.
The risk: this works only if Google stays neutral. The moment Gemini Enterprise starts favoring Google Workspace workflows over Salesforce or Microsoft 365, the partner-agent strategy collapses. And Google’s incentives are not neutral.
Here is the genuinely strange part of the announcement. Model Garden — the model catalog inside the new Agent Platform — carries 200+ models including Anthropic Claude Opus, Sonnet, and Haiku as first-class options. Not hidden in a “partners” tab. Not available only via API. First-class.
Google is now the most model-neutral hyperscaler — the first to position a competitor’s flagship as a first-class option alongside its own default model in a unified enterprise platform branding play. (AWS has featured Claude as the primary highlighted model in Amazon Bedrock since well before this week, but that’s a model catalog play, not a platform-identity bet.)
The incentives are unusual. Anthropic recently inked a deal with Google to run Claude training on Google’s next-gen TPUs, so Google profits from Claude’s compute either way. But the same thing happens in reverse — every Claude call made through Model Garden is a call that didn’t hit Gemini. An enterprise that standardizes on the Agent Platform and picks Claude as its default model has paid Google a platform fee and Anthropic a model fee. Google gets rent. Anthropic gets the workload.
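The rent-versus-workload split is easy to make concrete with invented numbers. Every figure below is hypothetical, not published pricing.

```python
# All numbers invented for illustration; Google has not published this split.
seats, platform_fee = 500, 30.00        # seats x $/seat/month to Google (assumed)
tokens_m, model_rate = 2_000, 3.00      # millions of tokens x $/M to Anthropic (assumed)

google_rent = seats * platform_fee          # Google collects platform rent either way
anthropic_workload = tokens_m * model_rate  # the model spend follows Claude
print(f"Google: ${google_rent:,.0f}/mo, Anthropic: ${anthropic_workload:,.0f}/mo")
```

Whatever the real numbers, the structure holds: the platform fee is insensitive to which model the customer picks, and the model fee is not.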
For a certain slice of buyers, this makes Gemini Enterprise a better deal than Azure, where OpenAI has historically been the favored model and others are second-class. The OpenAI-vs-Anthropic comparison has mattered a lot for individual developer choice. For enterprise procurement, the bigger question is which hyperscaler lets you switch models without switching platforms — and after this week, Google is arguably the most model-neutral of the three.
I’m going to be direct about what this announcement is and isn’t.
What it is: A real consolidation. Vertex AI, Agentspace, and five other tools that used to require a whiteboard to explain are now one product. That’s operational value for anyone trying to stand up agents today. The A2A v1.0 milestone is real — production usage at 150 orgs is a number no other agent protocol can cite. The partner fund, the forward-deployed engineers, and the AI-native services tier are a distribution play that Google needs if it wants to compete with Microsoft’s installed enterprise base.
What it isn’t: A new AI capability. None of this makes Gemini smarter. Agent Designer is a no-code builder on top of models that were already available. Inbox is a UI layer. Skills are named macros. Long-running agents are the same pattern Anthropic shipped, the same pattern OpenAI is pursuing, and the same pattern NVIDIA’s enterprise agent toolkit wraps in a different shape. Google has caught up architecturally. Nothing here is ahead.
The rebrand risk: Renaming Vertex AI was not free. Every piece of internal documentation at every enterprise customer now has a stale product name. Every job description on LinkedIn that says “Vertex AI experience” is now describing a product that doesn’t exist. The learning curve tax on 120,000 partners is real, and Google ate it voluntarily because “Gemini Enterprise Agent Platform” matches the agent narrative and “Vertex AI” didn’t. That’s a marketing decision dressed as a product decision, and buyers should see it for what it is.
The genuine shift: Google is betting that the unit of enterprise value is moving from “model” to “agent” to “fleet of agents.” The Agent Registry, Observability, and Gateway products don’t make sense as standalone features — they only make sense if you assume enterprises will be running hundreds of agents in production in 18 months. That’s the bet. If it’s right, this platform is positioned correctly. If it’s wrong — if the enterprise deployment ceiling is more like 10 agents than 200 — most of this scaffolding is overbuilt.
| | Google Cloud (Gemini Enterprise) | AWS (Bedrock AgentCore) | Microsoft (Azure AI Foundry + Copilot Studio) |
|---|---|---|---|
| Primary/flagship model | Gemini 3.1 Pro (flagship; Flash for cost-sensitive workloads) | Claude (primary), Nova (secondary) | GPT-5, Claude via Azure AI |
| Third-party model depth | 200+ via Model Garden, Claude first-class | Claude first-class, others via marketplace | OpenAI first, others reluctantly |
| No-code agent builder | Agent Designer (allowlist) | Bedrock Studio | Copilot Studio |
| Protocol stance | A2A native, MCP supported | A2A added, MCP native | A2A added, MCP supported |
| Partner ecosystem play | $750M fund, 120K partners, day-one agents from SFDC/ServiceNow/Workday | Largest SI base, no dedicated agentic fund | Deep channel, Copilot-aligned |
| Deployment path | Managed cloud, forward-deployed engineers | Managed cloud, self-serve | Managed cloud, self-serve |
The cleanest split: Microsoft is OpenAI-first and will stay that way until the economics change. AWS is Claude-first at the model layer and agnostic at the agent layer. Google is the most model-neutral but culturally the most Google-dependent — you’ll use Gemini for most tasks and the partner marketplace for the rest.
For teams already evaluating agent platforms, the best AI agents roundup and the agent platforms and workflow automation comparison cover the field beyond the three hyperscalers. For enterprise-specific deployment tradeoffs, the enterprise AI deployment guide walks through the procurement and governance layer this platform is targeting.
If you’re already on Google Workspace or Google Cloud: This is the most meaningful change to your buying options since Vertex AI launched. Agent Designer in preview is worth applying for the allowlist now. Inbox and Skills are worth testing as soon as they’re generally available. The bundle math tilts in your favor.
If you’re on Microsoft: Watch the A2A integration in Azure AI Foundry. The cross-platform agent routing is where the real competitive pressure will come from. Copilot Studio still has the most built-in integration depth with Microsoft 365, but the agent fabric is going to be cross-cloud, not single-cloud.
If you’re on AWS: Bedrock AgentCore is the comparable surface. The underlying model choice (Claude) is the same. The partner ecosystem for agentic services is currently richer at Google. That’s the gap to close.
If you’re a consultancy or SI: The $750M fund is real money with real strings attached. Gemini Enterprise practice-building is the fastest path to capturing the next 18 months of agent deployment budget. Partner enrollment now beats partner enrollment later.
The strongest thing Google did this week was also the quietest. They put Claude in Model Garden as a first-class option and built the agent platform around the assumption that enterprises want model choice, not model lock-in.
That’s the opposite of the Microsoft-OpenAI relationship. It’s the opposite of what you’d expect a company with its own frontier model family to do. And it might be the move that actually pulls enterprise procurement Google’s way — because it’s the first hyperscaler platform where “which model should we use?” is a genuinely neutral question.
The rest is execution. The agent surfaces are good. The partner fund is necessary. The A2A v1.0 milestone is a legitimate cross-industry win. None of it is hype-proof, and none of it guarantees Google wins the agent era. But this is the first time Google’s enterprise AI story feels organized rather than assembled.
Call it an expensive rebrand if you want. Under the new name, the product is more coherent than it has ever been.
Is Vertex AI being discontinued?
No. Vertex AI’s services, models, and roadmap are continuing — they’re just delivered through the Gemini Enterprise Agent Platform now. Existing Vertex AI customers don’t lose access. The brand is gone, the underlying products are not.
What’s the difference between Gemini Enterprise and the Gemini Enterprise Agent Platform?
Gemini Enterprise is the user-facing app (no-code agent creation, Inbox, Skills). Gemini Enterprise Agent Platform is the developer platform underneath (Agent Studio, Agent Registry, Agent Gateway, Model Garden). One is for knowledge workers. The other is for builders. They share the same agent fabric.
How is A2A different from MCP?
MCP (Model Context Protocol) standardizes how agents access tools and data sources. A2A (Agent-to-Agent) standardizes how agents delegate work to other agents. They solve different problems and are increasingly deployed together. The MCP installs milestone covers MCP adoption in detail.
Can I use Claude instead of Gemini on the platform?
Yes. Anthropic Claude Opus, Sonnet, and Haiku are first-class options in Model Garden and can be selected as the underlying model for agents built on the platform. Billing flows through Google Cloud.
What does the $750M partner fund actually pay for?
Forward-deployed Google engineers embedded at major SIs (Accenture, Capgemini, Cognizant, Deloitte, HCLTech, TCS), Gemini Enterprise practice-building for global consultancies, Wiz security assessments, Gemini proofs-of-concept, and usage incentives for channel partners who ship production agent deployments.
When will Agent Designer be generally available?
It’s in preview on an allowlist as of the April 22 announcement. Google has not confirmed a general availability date. Requesting access now is the earliest path in.
Does this compete with Microsoft Copilot?
Partially. Gemini Enterprise competes with Copilot as the user-facing app. The Agent Platform competes with Azure AI Foundry and Copilot Studio as the developer layer. The A2A-vs-MCP-vs-both question is where the real competitive layer sits, and all three hyperscalers now support both protocols in some form.
Last updated: April 23, 2026. Sources: Gemini Enterprise Agent Platform launch announcement, Sundar Pichai Cloud Next 2026 recap, $750M partner investment press release, A2A v1.0 production milestone, A2A upgrade technical detail, SiliconANGLE coverage of the platform launch, The Next Web coverage of partner agents.
Related reading: Claude Code Routines enterprise guide · MCP at 97M installs · Best AI agents 2026 · Enterprise AI deployment guide · AI agent platforms and workflow automation · NVIDIA Agent Toolkit