Anthropic Surpasses OpenAI in Annual Recurring Revenue

Anthropic has reportedly crossed $25B ARR, overtaking OpenAI for the first time. Ramp data shows 73% of new enterprise customers now choose Claude.

Reviewed and published by Vlad Makarov
7 min read

Ten weeks. That is how long it took for the enterprise AI market to flip from a near-even split between Anthropic and OpenAI to a lopsided 73/27 advantage for Anthropic, according to spending data from corporate card provider Ramp. In early December, OpenAI still held a 60/40 edge among first-time enterprise customers. By late March, nearly three out of four new sign-ups were choosing Claude.

The numbers behind this shift are staggering. Multiple reports now place Anthropic at or above $25 billion in annualized recurring revenue, a figure that appears to match or narrowly exceed OpenAI's own $25 billion mark. For a company that was generating roughly $1.4 billion a year ago, the trajectory defies normal startup math. The Information pegs recent annualized revenue at $19 billion-plus, about 14 times higher than twelve months prior, and that number may already be stale given the pace of growth. The Register, citing infrastructure disclosures, reports that Anthropic has disclosed a $30 billion run rate and plans to use 3.5 gigawatts of Google AI chips to keep up with demand.

How Claude Code Became a Revenue Machine

The most surprising element in Anthropic's ascent is the outsized contribution of a single product. Claude Code, the company's agentic coding tool, is now generating roughly $2.5 billion in ARR on its own. For context, that would make Claude Code alone larger than most publicly traded SaaS companies.

Developers who adopted Claude Code after its source code briefly became public in early April appear to have stayed. The tool's ability to operate autonomously across multi-file codebases, handle complex refactors, and integrate with existing CI/CD pipelines has made it sticky in ways that general-purpose chatbots are not. Enterprise engineering teams, in particular, have been willing to pay premium rates for a tool that demonstrably cuts development cycles.

But Claude Code is only part of the story. The broader Claude model family — led by Claude Opus 4.6 and its predecessor Claude Opus 4.5 — has carved out a reputation for reliability in enterprise workflows. Anthropic's architectural breakthrough announced in late March gave the company a technical credibility boost at exactly the right moment, landing in the same week that many procurement decisions were being finalized for Q2 budgets.

The Growth Math That Worries OpenAI

An Epoch AI analysis frames the competitive dynamic in stark terms. Since each company first crossed $1 billion in ARR, Anthropic has been growing at roughly 10x per year, compared to OpenAI's 3.4x. That gap compounds fast. Even if Anthropic's growth rate decelerates — as it inevitably will — the current momentum suggests it could build a meaningful revenue lead before OpenAI can respond.
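To make the compounding concrete, here is a minimal sketch of the projection implied by those figures. It assumes the 10x and 3.4x annual multiples cited above hold constant and that both companies start from the same $1 billion ARR baseline, ignoring the timing offset between their respective crossings; real growth curves decelerate, so treat this as an illustration of why the gap widens, not a forecast.

```python
def project_arr(start_billions: float, annual_multiple: float, years: float) -> float:
    """ARR (in $B) after `years` of growth at a constant annual multiple."""
    return start_billions * annual_multiple ** years

# Both companies measured from their first $1B ARR crossing.
# Multiples are the Epoch AI figures quoted in the text.
anthropic = project_arr(1.0, 10.0, 2)  # 10x per year for two years
openai = project_arr(1.0, 3.4, 2)      # 3.4x per year for two years

print(f"After two years: Anthropic ${anthropic:.1f}B vs OpenAI ${openai:.1f}B")
print(f"Ratio: {anthropic / openai:.1f}x")
```

Even over just two hypothetical years at these rates, the faster grower ends up nearly 9x larger, which is the "gap compounds fast" dynamic the analysis describes.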

The fundraising reflects this confidence. Anthropic's $30 billion Series G valued the company at $380 billion post-money, a figure that would have seemed absurd eighteen months ago but looks almost conservative if the revenue trajectory holds. Investors are effectively betting that Anthropic can sustain hyper-growth long enough to build durable enterprise relationships that survive the inevitable commoditization of base model capabilities.

OpenAI, for its part, is not standing still. The Wall Street Journal reports the company is considering a strategic pivot away from broad consumer bets — the ChatGPT ad experiments, the hardware partnerships, the media licensing deals — toward a tighter enterprise focus. The shift would acknowledge what Ramp's data already shows: enterprise customers are voting with their wallets, and right now they are voting for Claude.

Why Enterprises Are Switching

The reasons behind the migration are both technical and cultural. On the technical side, Anthropic has benefited from a series of model improvements that landed in quick succession. The release of Opus 4.6 brought significant gains in instruction following and structured output reliability — two areas where enterprise customers have historically found frontier models frustrating. Recent research into Claude's internal emotion vectors has also generated buzz among technical evaluators, suggesting Anthropic's approach to model interpretability is yielding practical dividends.

On the cultural side, Anthropic has leaned into a positioning that resonates with enterprise risk committees. The company's emphasis on safety research, its Constitutional AI framework, and its willingness to engage with regulators have all made it an easier sell in industries like finance, healthcare, and government contracting, where procurement teams need to justify AI spending to compliance departments.

There is also a simpler explanation: pricing. Anthropic has been aggressive about enterprise volume discounts, and several large deals in Q1 reportedly came with per-token rates that undercut OpenAI's equivalent offerings by 15 to 20 percent. When models are perceived as roughly equivalent in capability — a debatable but increasingly common view among enterprise buyers — price becomes a powerful differentiator.

What Happens Next

The question now is whether Anthropic can convert a revenue milestone into a lasting structural advantage. Revenue alone does not guarantee dominance, and OpenAI retains significant assets: a larger consumer user base, deeper brand recognition outside the developer community, and Microsoft's distribution muscle through Azure and Copilot integrations.

Anthropic's infrastructure commitments suggest the company is planning for continued hypergrowth. The 3.5 gigawatts of Google AI chips it has secured represent an enormous computing footprint — enough to train next-generation models while simultaneously serving a rapidly expanding customer base. The partnership with Google Cloud, while sometimes awkward given Google's own competing Gemini models, provides Anthropic with infrastructure scale that few startups could otherwise access.

For the broader AI industry, the Anthropic-OpenAI revenue crossover marks the end of a period when OpenAI's first-mover advantage seemed insurmountable. The market is now genuinely competitive at the top, and enterprise customers are the beneficiaries. Pricing pressure will intensify, feature parity will tighten, and the companies that win long-term will be those that solve the hardest integration and reliability problems — not just those with the most impressive demo.

The next few quarters will reveal whether this is a permanent shift or a temporary swing driven by product timing and aggressive pricing. But for now, the numbers tell a clear story: in the enterprise AI race, Anthropic is no longer the underdog.
