Cursor Built Composer 2 on a Chinese AI Model. They Didn't Tell You.
I recommended Cursor to every developer I know. Multiple times. In writing. On this site. I called it the best AI-powered code editor on the market and meant it.
So when developers discovered last week that Cursor had quietly built Composer 2 on top of Moonshot AI’s Kimi K2.5 — a Chinese-developed model — without telling anyone, I felt something between embarrassment and anger. Not because a Chinese model is inherently dangerous. But because I’d been telling people they knew what was running their code. They didn’t.
The Short Version
What happened: Cursor used Kimi K2.5 as the base model for Composer 2 without disclosing it.
When it broke: March 22-25, 2026, after developers found Kimi model IDs in API responses.
Cursor's response: Co-founder Aman Sanger: "It was a miss to not mention the Kimi base in our blog from the start."
The licensing issue: Kimi K2.5's license requires products over $20M/month in revenue to display "Kimi K2.5" in the UI. Cursor exceeds $2B ARR.
Data concern: Unknown what training-data flows exist between Cursor's implementation and Moonshot AI.
Who should worry: Enterprise teams with data sovereignty requirements, regulated industries, government contractors.
Bottom line: This isn't about China. It's about a $2B+ company hiding what's under the hood of its flagship feature from users who had every reason to care.
Here’s the timeline, pieced together from reporting by TechCrunch, VentureBeat, and Security Boulevard between March 22-25.
A “miss.” That’s the word choice. Not a deliberate omission. Not a transparency failure. A miss. Like forgetting to CC someone on an email.
Cursor’s defense is that Kimi K2.5 is only the base model and that most of Composer 2’s actual capability comes from their own proprietary RL training layered on top. Only about 25% of the compute, they say, comes from the Kimi foundation.
Here’s the thing: that 25% is the foundation. Every inference runs through it. Every line of your code that Composer 2 processes touches that base layer. It’s not an optional module you can disable. It’s the floor the whole building sits on.
The analogy would be a construction company saying “only 25% of this building is the foundation — the rest is our proprietary design.” Sure. But if I care about what the foundation is made of, the superstructure doesn’t change that.
For developers using Cursor for personal projects? Probably fine. For enterprise teams with compliance requirements, data residency obligations, or government contracts? That 25% is 100% of the problem.
This is where it gets genuinely uncomfortable for Cursor.
Kimi K2.5 ships under an open-weight license with a specific commercial clause: any product generating more than $20 million per month in revenue that uses K2.5 must prominently display “Kimi K2.5” in its user interface.
Cursor reportedly exceeds $2 billion in annual recurring revenue. That’s roughly $167 million per month.
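The arithmetic is worth checking explicitly. A quick sketch, using the reported estimates from this article (not audited figures):

```python
# Reported figures -- estimates from this article, not audited numbers.
LICENSE_THRESHOLD_MONTHLY = 20_000_000   # $20M/month attribution threshold in K2.5's license
CURSOR_ARR = 2_000_000_000               # Cursor's reported ~$2B annual recurring revenue

monthly_revenue = CURSOR_ARR / 12
print(f"Estimated monthly revenue: ${monthly_revenue:,.0f}")                          # ~$166.7M
print(f"Multiple of the threshold: {monthly_revenue / LICENSE_THRESHOLD_MONTHLY:.1f}x")
```

By that estimate, Cursor clears the attribution threshold by more than eightfold.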
They displayed nothing. No attribution. No mention. The only reason anyone knows about the Kimi base is because developers found model IDs leaking through API responses.
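For the curious, the kind of check that surfaced this is simple to reproduce against any vendor's captured traffic. A minimal sketch, assuming a generic OpenAI-style completion payload (the response body below is a hypothetical example, not Cursor's actual API):

```python
import json
import re

# A captured API response body -- hypothetical, but shaped like the leaks
# developers reported: the underlying base-model ID echoed back verbatim
# instead of a vendor-branded model name.
captured = json.dumps({
    "id": "cmpl-123",
    "model": "kimi-k2.5-base",
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
})

# Open-weight base-model families worth flagging in any vendor's traffic.
KNOWN_BASES = re.compile(r"kimi|qwen|deepseek|llama|mistral", re.IGNORECASE)

hits = sorted({m.group(0).lower() for m in KNOWN_BASES.finditer(captured)})
print(hits)  # → ['kimi']
```

Point an intercepting proxy at your editor's traffic, run a scan like this over the bodies, and you know more about your supply chain than most procurement questionnaires will tell you.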
I’m not a licensing attorney. But the gap between “must prominently display” and “didn’t mention it at all” is wide enough to drive a lawsuit through. Whether Moonshot AI pursues enforcement is a separate question — but the obligation appears clear, and Cursor appears to have ignored it.
If you’re evaluating AI coding tools for a team of any size, this incident should change how you ask questions during procurement. Not because Cursor is uniquely bad, but because Cursor is the first major example of what I’ve been worried about for months: AI supply chains are opaque, and the companies selling to you don’t always know (or disclose) what’s inside their own products.
These are the questions I'd put to any AI vendor right now:
1. What base or foundation models, including third-party and open-weight models, does your product depend on?
2. What are the licensing terms of those models, and are you in compliance with them?
3. Where does inference run, and does any data flow back to the model's original developer?
4. How will you notify us if the underlying model changes?
If a vendor balks at any of these, that tells you something. If they can’t answer #1, that tells you more.
For a broader look at navigating these questions in regulated environments, see our AI safety for business guide and our breakdown of enterprise AI deployment considerations.
So Is Cursor Safe to Use?
Depends on your definition of "safe."
For individual developers and small teams: Almost certainly yes. Cursor remains an excellent code editor. The Kimi K2.5 base doesn’t mean your code is being sent to China. Cursor processes inference on their own infrastructure. The model architecture running locally or on Cursor’s servers doesn’t inherently create a data exfiltration risk.
For enterprise teams with compliance requirements: You now have an unknown in your supply chain that wasn’t there yesterday (or rather, was always there — you just didn’t know about it). Whether that unknown is acceptable depends on your specific regulatory environment. SOC 2 and ISO 27001 compliance programs typically require you to understand and document your data processing chain. A surprise base model from a Chinese AI lab creates a documentation gap at minimum.
For government contractors and defense-adjacent work: I’d pause. Not because of proven risk, but because the disclosure failure itself is a red flag for the kind of transparency these environments demand. If Cursor didn’t disclose this voluntarily, what else might you not know?
If you’re shopping alternatives, our Cursor vs. Claude Code vs. Copilot comparison and AI code assistants roundup are both current.
This isn’t just a Cursor story. It’s the first high-profile example of a pattern the industry will keep repeating.
The AI industry right now operates like the food industry before ingredient labels. You pick a product off the shelf (Cursor, Copilot, whatever) and trust that what’s inside matches what’s on the box. But there’s no regulation requiring AI companies to disclose their model supply chain. No equivalent of a nutrition label. No mandatory ingredient list.
Here’s what’s actually happening across the industry:
I wrote about the broader Chinese AI model surge a month ago. The quality is real. The competition is healthy. But the supply chain opacity is a problem that the industry hasn’t begun to solve.
I still use Cursor. I’m typing this in a different editor, but my day-to-day coding happens in Cursor, and I’m not switching tomorrow. The product is too good.
But here’s what I’d want to see:
Immediate: Publish a complete model bill of materials for every Cursor product. Not vague descriptions. Specific model names, versions, origins, and licensing terms.
Short-term: Implement a model change notification policy. If the base model changes, users should know before it happens, not after someone finds model IDs in debug output.
Long-term: Push for industry standards around AI supply chain disclosure. Cursor is big enough to lead here. At $2B+ ARR, they have the credibility and the obligation.
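A model bill of materials doesn't need to be complicated. A hypothetical sketch of what such a disclosure could look like, based on the facts in this story (the field names are illustrative, not an existing standard):

```yaml
# Hypothetical model-BOM entry -- illustrative only, not an actual Cursor disclosure
product: composer-2
base_model:
  name: Kimi K2.5
  developer: Moonshot AI
  origin: CN
  license: open-weight, revenue-based attribution clause ($20M/month threshold)
fine_tuning:
  method: proprietary reinforcement learning
  compute_share: ~75% of total training compute
inference:
  location: vendor-operated infrastructure
  data_flows_to_base_model_developer: none claimed
last_updated: 2026-03-26
```

One file like this per product, published and versioned, would have made this entire story a changelog entry instead of a scandal.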
Aman Sanger’s “miss” framing isn’t going to age well. The window for getting ahead of this story is closing. The window for setting an industry standard is still open.
Is my code being sent to China?
No evidence suggests that. Cursor runs inference on its own infrastructure. The Kimi K2.5 model weights are used locally on Cursor's servers — your code isn't routed to Moonshot AI or Chinese servers during normal operation. The concern is about transparency and supply chain provenance, not active data exfiltration.
What is Kimi K2.5?
Kimi K2.5 is an open-weight large language model developed by Moonshot AI (月之暗面, "Dark Side of the Moon"), a Beijing-based AI company. It's designed for coding and reasoning tasks. "Open-weight" means the model parameters are publicly available for download, but it ships with a commercial license that includes revenue-based attribution requirements.
Did Cursor break any laws or regulations?
That depends on your jurisdiction and industry. For most commercial software companies, no specific regulation was violated. For companies operating under ITAR, FedRAMP, or certain EU data sovereignty frameworks, an undisclosed Chinese model component in their development toolchain could create compliance complications. Consult your compliance team.
Should I stop using Cursor?
For most users, no. Cursor is still a strong product, and this incident is about disclosure practices, not product quality. For teams with strict supply chain requirements, it's worth having a conversation with Cursor's sales team about their model architecture before continuing. See our AI safety and privacy guide for a framework to evaluate these decisions.
How much of Composer 2 is actually Kimi K2.5?
Cursor claims that Kimi K2.5 provides the base model weights that Composer 2 runs on, but that approximately 75% of the model's capability comes from Cursor's own reinforcement learning training applied on top. In practical terms, every inference still passes through the Kimi base layer. The 25% figure describes the proportion of training compute, not the proportion of runtime involvement.
Last updated: March 26, 2026. Reporting based on coverage from TechCrunch, VentureBeat, and Security Boulevard, March 22-25, 2026, and Aman Sanger’s public statements.