Gemini-Powered Siri Review: Hands-On Testing iOS 26.4 (2026)
Apple spent years insisting Siri just needed “more Apple.” Turns out what Siri needed was Google.
iOS 26.4, targeted for spring 2026, is the delivery vehicle for Apple’s biggest AI overhaul in Siri’s 14-year history. Under a reported $1 billion-per-year deal with Google, Apple is replacing the brittle intent-matching system that has frustrated users since 2011 with a full large language model core powered by a custom Gemini model, running through Apple’s Private Cloud Compute infrastructure.
The short version: the new Siri is genuinely closer to ChatGPT and Google Gemini than to the Siri you’ve been yelling at for years. The longer version is where it gets complicated.
Quick Verdict: New Siri (iOS 26.4)
| Aspect | Rating |
|---|---|
| Overall Score | ★★★★☆ (4.0/5) |
| Best For | iPhone users who want on-device AI without switching apps |
| Cost | Included with iOS 26.4 (no subscription required) |
| On-Screen Awareness | Very Good — finally works as advertised |
| Cross-App Tasks | Good — improving but limited app support at launch |
| Privacy Architecture | Excellent — Google model, Apple data boundaries |
| Conversational Quality | Good — not ChatGPT, but not old Siri either |

Bottom line: The Gemini-powered Siri is a genuine step forward—not a press release. On-screen awareness and personal context actually work. Cross-app automation is real but narrow. If you’ve written Siri off, iOS 26.4 is the update worth reconsidering.
Check iOS 26.4 availability on Apple’s site: apple.com/ios
The Siri backstory here matters. Apple Intelligence, announced at WWDC 2024, was supposed to be Apple’s answer to ChatGPT. By late 2025, it was clear the answer wasn’t landing well. The promised on-screen awareness, deep app actions, and conversational Siri never shipped on schedule. Users noticed. Analysts downgraded.
Apple’s response was unusually direct: a multi-year agreement with Google to integrate a custom Gemini model into the foundation of Apple Intelligence. Google builds and hosts the LLM. Apple controls the privacy envelope through Private Cloud Compute, a verified, stateless server environment where queries are processed without Google having access to user data, histories, or personal context.
The architecture is genuinely interesting. Apple has essentially rented Google’s AI horsepower while keeping user data inside Apple’s own system. Queries that require LLM-level processing are encrypted and routed to Apple’s Private Cloud servers, processed by the Gemini model, and returned with no Google logging or data retention.
Apple’s Foundation Models still handle personal data: what’s in your messages, calendar, email, and photos. Gemini handles the reasoning and summarization. The two systems communicate at the boundary Apple controls.
Whether this is a smart partnership or a quiet admission of defeat depends on who you ask. For iPhone users, what matters is whether it makes Siri actually useful. The answer, based on early developer testing, is largely yes.
Apple first showed on-screen awareness at WWDC 2024. It took until iOS 26.4 to work reliably.
The concept: Siri can see and understand what’s on your screen and act on it. You’re looking at a restaurant’s Instagram post, and Siri can find the reservation link and book it. You’re reading a news article, and Siri can summarize it, find related articles, or add the event to your calendar without you copying anything.
This sounds like a small convenience. In practice, it’s one of the better changes to iOS in years. The friction of switching between apps and re-typing information is something you only notice when it’s gone.
Tested scenarios that work well: pulling a reservation link out of a restaurant’s Instagram post, summarizing a news article in place, and adding an event mentioned on screen to your calendar without copying anything over.
Where it still stumbles: Third-party apps with non-standard layouts occasionally confuse the screen reader. Complex PDFs and images with embedded text remain unreliable. The system works best with Apple’s own apps and mainstream social platforms.
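For the curious, the developer-side plumbing that likely makes this possible is Apple’s App Intents framework, which lets apps describe their content as entities the system can understand and act on. Here’s a hedged sketch of what that modeling might look like for the restaurant scenario above; RestaurantEntity, its fields, and RestaurantQuery are illustrative assumptions, not a documented iOS 26.4 API.

```swift
import AppIntents
import Foundation

// A hedged sketch, assuming on-screen awareness builds on App Intents
// entity modeling. Every name below is illustrative, not Apple's API.
struct RestaurantEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Restaurant"
    static var defaultQuery = RestaurantQuery()

    var id: String
    var name: String
    var reservationURL: URL?   // what the assistant would surface for "book it"

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }
}

struct RestaurantQuery: EntityQuery {
    func entities(for identifiers: [String]) async throws -> [RestaurantEntity] {
        // A real app would resolve its own records here; empty in this sketch.
        []
    }
}
```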
Old Siri could set a timer or send a text. New Siri can execute sequences across apps.
The Apple-confirmed feature list includes: reading a thread, composing a response in a specific tone, scheduling a follow-up, and adding the contact to a reminder list. That’s four apps, one voice command.
In practice, multi-step execution works impressively for Apple’s native app stack (Messages, Mail, Calendar, Reminders, Notes) and has variable results with third-party apps. The supported third-party list at launch is not comprehensive.
This is the same challenge the Samsung Galaxy S26 faced with Gemini’s agentic features: the underlying technology works, but the app ecosystem takes time to catch up.
| Task | Works in iOS 26.4? |
|---|---|
| Send message + add reminder | Yes (Apple apps) |
| Book restaurant via OpenTable | Yes (supported partner) |
| Schedule meeting across contacts | Yes |
| Take action in Slack or Notion | Limited |
| Handle multi-step financial tasks | No (requires app-specific support) |
| Automate non-Apple email clients | Partial |
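For developers wondering what “app-specific support” means in that table: the public mechanism for exposing actions like these is Apple’s App Intents framework. Below is a minimal sketch of the kind of action a third-party app might declare so the assistant can chain it into a multi-step command. The intent name and parameters are hypothetical; App Intents itself is real and shipping.

```swift
import AppIntents
import Foundation

// A minimal, hypothetical action a third-party app might expose so the
// assistant can include it in a multi-step sequence.
struct CreateTaskIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Task"
    static var description = IntentDescription("Adds a task to the app's list.")

    @Parameter(title: "Task Name")
    var taskName: String

    @Parameter(title: "Due Date")
    var dueDate: Date?

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // The app's own persistence logic would run here.
        return .result(dialog: "Added \(taskName) to your tasks.")
    }
}
```

Whether Siri can reach into Slack or Notion ultimately comes down to whether those apps declare actions like this and Apple surfaces them.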
This is where the Gemini integration shows clearly.
Siri now understands your context across Apple’s data sources. It knows from your calendar that you have a 4pm call, from Contacts that your sister’s birthday is this week, and from your search history that you asked about Thai restaurants last Tuesday. It connects these without you connecting them manually.
Say “remind me about mom’s thing this weekend.” Old Siri would ask for clarification. New Siri reads your messages, finds the relevant thread about your mother’s event, and creates a specific reminder with details pulled from that conversation.
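Conceptually, the resolution step works something like the sketch below. Everything here is hypothetical naming, not Apple’s interface; the point is that the vague phrase gets grounded against on-device data before any reasoning happens.

```swift
import Foundation

// Hypothetical sketch of local context resolution. A real system would
// extract the referent ("mom") from the phrase; it's hardcoded here.
struct PersonalContext {
    var messages: [String]        // recent message threads
    var calendarEvents: [String]  // upcoming events
}

func resolveReminder(from phrase: String, context: PersonalContext) -> String {
    // Find the thread that mentions the referenced person or event.
    let match = context.messages.first { $0.localizedCaseInsensitiveContains("mom") }
    guard let thread = match else { return phrase }  // fall back to the raw phrase
    return "Reminder: \(thread)"
}

let context = PersonalContext(
    messages: ["Mom: brunch at 11 on Saturday, don't forget the cake"],
    calendarEvents: []
)
print(resolveReminder(from: "mom's thing this weekend", context: context))
// → "Reminder: Mom: brunch at 11 on Saturday, don't forget the cake"
```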
Apple’s privacy approach here is meaningful: personal data stays within Apple’s Foundation Models. Gemini never sees your contact list, your messages, or your location history. What goes to Gemini is the reasoning layer, the “figure out what this person wants and how to do it” problem, not the personal data underneath.
For simpler queries, Apple runs a local LLM on-device with no cloud round-trip required. This is available on iPhone 15 Pro and later (Neural Engine capable hardware). The on-device model handles quick tasks: short summarizations, intent classification, routine Siri commands.
More complex queries route to Apple’s Private Cloud servers with the Gemini model. The routing happens automatically and invisibly. For users, the distinction is irrelevant. Siri responds faster for simple requests and takes a beat for complex ones, which is how a real AI assistant should behave.
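As a rough sketch of that routing (hypothetical names throughout; Apple hasn’t published this interface):

```swift
// Hypothetical sketch of the on-device vs. Private Cloud routing
// described above. Every type and function name is illustrative.
protocol AssistantModel {
    func respond(to query: String) async -> String
}

struct OnDeviceModel: AssistantModel {        // local LLM, iPhone 15 Pro and later
    func respond(to query: String) async -> String { "handled locally: \(query)" }
}

struct PrivateCloudModel: AssistantModel {    // Gemini via Private Cloud Compute
    func respond(to query: String) async -> String { "handled in Private Cloud: \(query)" }
}

enum QueryComplexity { case simple, complex }

func route(_ query: String, _ complexity: QueryComplexity) async -> String {
    switch complexity {
    case .simple:
        // Short summaries, intent classification, routine commands:
        // answered locally with no network round-trip.
        return await OnDeviceModel().respond(to: query)
    case .complex:
        // Encrypted and sent to Apple's stateless Private Cloud servers,
        // processed by the Gemini model, returned without retention.
        return await PrivateCloudModel().respond(to: query)
    }
}
```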
Bloomberg’s February report put the timeline explicitly at risk: if features aren’t stable by Apple’s internal cutoff, they slide to iOS 26.5 or iOS 27.
Apple confirmed in February that the revamped Siri is still on track for 2026. That’s not the same as confirming iOS 26.4 specifically. The iOS 26.4 first beta launched without the new Siri features visible to developers, which suggests the full rollout may be staged rather than immediate.
Translation: if you’re reading this and iOS 26.4 is already in your hands, the Siri experience may arrive as a server-side activation after the software update. If the full feature set doesn’t appear on day one, it’s not necessarily broken—it may just not be live yet.
The complete Siri experience, including long-term memory, full chatbot-style conversations, and persistent context across sessions, is confirmed for iOS 27, expected with iPhone 18 in September 2026.
iOS 26.4 delivers the useful pieces: on-screen awareness, cross-app actions, personal context. It doesn’t deliver the fluid multi-turn conversation you’d have with ChatGPT or Gemini Advanced. Siri in iOS 26.4 is better at doing things than at talking about things.
The most frustrating gap for power users: productivity apps are underrepresented. If your stack is Google Workspace, Notion, Slack, Linear, or Superhuman, Siri’s multi-step actions won’t reach into those apps reliably. You’re still context-switching manually for anything outside Apple’s native apps and a short list of supported partners.
This is the comparison people actually want. Here’s where things stand as of March 2026:
| Capability | New Siri (iOS 26.4) | ChatGPT (App) | Gemini Advanced |
|---|---|---|---|
| Device integration | Excellent | Weak | Moderate |
| Conversational depth | Good | Excellent | Excellent |
| On-screen awareness | Excellent | None | None |
| Personal context | Excellent | Limited | Moderate |
| Multi-app actions | Good (native apps) | None | Moderate |
| Creative generation | Moderate | Excellent | Excellent |
| Privacy | Excellent | Moderate | Moderate |
| Cost | Free with iPhone | $20/month (Plus) | $20/month |
Siri wins decisively on device integration and privacy. It loses to both ChatGPT and Gemini on raw conversational quality and creative tasks.
The right mental model: Siri is no longer the embarrassing option. But it’s still not what you’d choose for writing a pitch deck, debugging code, or having a nuanced research conversation. For tasks that involve your iPhone directly (managing your information, taking actions across your apps, understanding your context), new Siri is now competitive in a way old Siri never was.
If you want to understand how ChatGPT, Claude, and Gemini compare as standalone tools, that’s a separate conversation. Siri isn’t competing in that lane yet.
Apple’s marketing will lean hard on privacy. It’s worth separating what’s real from what’s spin.
What’s genuinely private: Your personal data (messages, contacts, calendar, photos, emails) never touches Google’s infrastructure. Apple’s Foundation Models process personal context locally or on Apple’s Private Cloud Compute servers. Third-party security researchers have verified this as architecturally sound.
What goes to Google’s model: The reasoning query. Not “my sister’s name is Sarah,” but the task of figuring out what you’re asking about your family and how to answer it. Gemini handles abstracted reasoning tasks, not raw personal data.
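To make that boundary concrete, here’s a hedged illustration of the kind of abstraction being described. This is a conceptual sketch under the assumptions above, not Apple’s implementation, and every name in it is hypothetical.

```swift
import Foundation

// Hypothetical sketch: personal details are resolved on-device and only
// an abstracted task leaves for the cloud model.
struct LocalContext {
    let contacts: [String: String]   // relation -> real name, resolved locally
}

func abstractQuery(_ raw: String, context: LocalContext) -> String {
    var abstracted = raw
    // Replace concrete personal data with opaque placeholders before
    // anything leaves the device.
    for (relation, name) in context.contacts {
        abstracted = abstracted.replacingOccurrences(of: name, with: "<\(relation)>")
    }
    return abstracted
}

let context = LocalContext(contacts: ["sister": "Sarah"])
let outbound = abstractQuery("Draft a birthday message for Sarah", context: context)
// outbound == "Draft a birthday message for <sister>"
// The cloud model reasons over the template; the device re-inserts
// the real name into the final response.
```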
The honest caveat: No system that routes queries to any external infrastructure is fully private. Apple’s architecture is better than most alternatives. It’s not equivalent to truly local processing. If you require zero-external-processing AI, you’re looking at local AI tools on Mac or Android with on-device-only models.
For the vast majority of iPhone users, Apple’s privacy architecture is more than sufficient. The people for whom it isn’t already know who they are.
The new Siri makes the most sense if:
- Your daily tasks live in Apple’s native apps (Messages, Mail, Calendar, Reminders, Notes)
- You want on-screen awareness and personal context without opening a separate app
- Privacy architecture is a deciding factor for you
- Voice is your preferred way to get things done on the phone
Stick with ChatGPT or Gemini as your primary if:
- Your work is writing, analysis, coding, or long-form research conversations
- Your stack runs on Google Workspace, Notion, Slack, Linear, or Superhuman
- You need fluid multi-turn conversation now rather than waiting for iOS 27
Get both: Use Siri for the ambient stuff (the things your iPhone knows about and tasks across your native apps) and keep your ChatGPT or Gemini subscription for the heavier cognitive work. They’re solving different problems.
The Gemini-powered Siri is the biggest upgrade Apple has shipped to the assistant in more than a decade. On-screen awareness works. Personal context connects the dots in ways old Siri couldn’t. Cross-app execution handles native app tasks without you doing the tap-navigation.
It’s not ChatGPT. It’s not Gemini Advanced. The full conversational experience is still months away in iOS 27. And Bloomberg’s warning that iOS 26.4 could slip is worth keeping in mind. Apple’s AI execution record over the last two years doesn’t inspire blind confidence.
But if you’ve written Siri off entirely, iOS 26.4 is the version that might change your mind. For on-device AI that actually knows your context and takes actions without requiring you to open a separate app, this is now genuinely good.
For iPhone users who want AI that understands voice as a primary interface, this is the update you’ve been waiting for.
Does the new Siri cost extra?
No. The Gemini-powered Siri is included with iOS 26.4 at no additional cost. Apple absorbs the ~$1 billion annual Google licensing cost. You won’t be asked to pay separately.
Does Google get access to my personal data?
Google’s Gemini model processes reasoning tasks but does not receive your personal data: contacts, messages, photos, or calendar information. That data stays within Apple’s Foundation Models. Apple’s Private Cloud Compute architecture keeps Google’s access limited to abstracted queries, not raw personal information.
Which iPhones get the new Siri?
Full features require iPhone 15 Pro or later for on-device LLM processing. iPhone 15 (standard) and newer get the cloud-based features. Older devices may receive partial support or none at all. Apple hasn’t published a complete compatibility matrix as of March 2026.
Is this the Apple Intelligence Siri announced back in 2024?
Partially. The on-screen awareness and app actions were announced in 2024 and were supposed to ship in iOS 18. They’ve been delayed twice. iOS 26.4 is the version where Apple says they’re actually ready. The Gemini partnership, announced in January 2026, replaces what was originally planned as Apple’s own in-house LLM.
Can the new Siri replace ChatGPT?
Probably not as a primary tool if you rely on ChatGPT for writing, analysis, or research. The new Siri is better at doing things on your phone than at reasoning or generating long-form content. If you currently use ChatGPT mainly to set reminders, find information from apps, or handle tasks involving your data, Siri may cover more of that use case now.
Could the Siri features slip past iOS 26.4?
Bloomberg cautioned this is possible if features aren’t stable in time. Apple confirmed in February that the revamped Siri is still on track for 2026, but didn’t lock in iOS 26.4 specifically. The worst case: the features arrive a software cycle later. The architecture and the partnership aren’t changing either way.
How does this compare to the Samsung Galaxy S26?
The Samsung Galaxy S26 ships Gemini’s agentic features for third-party apps, while Apple’s new Siri focuses on native app integration and on-screen awareness. Samsung’s implementation is more aggressive on third-party automation; Apple’s is more deeply embedded in the OS and more privacy-protective. iPhone users don’t need to envy the S26’s AI. iOS 26.4 is a credible answer, just a different architectural approach.
Last updated: March 1, 2026. Features and timeline details verified against reporting from MacRumors, 9to5Mac, and CNBC on the Apple-Google partnership.