

Scale And Strategy

together with Turing

This is Scale And Strategy, the newsletter that helps you stay rational when the headlines… aren’t.

Here’s what we’ve got for you today:

  • $30B ARR Proves Anthropic Picked the Right Game
  • OpenAI Is Quietly Telling Governments to Rewrite the Rules

$30B ARR Proves Anthropic Picked the Right Game

While everyone else chased consumer scale, Anthropic went hard on enterprise.

That bet is now compounding.

They just crossed a $30B revenue run-rate. That’s up from ~$9B at the end of 2025 and ~$14B just a few months ago. Not gradual growth. A step function.


The demand side tells the real story.

In February, they had ~500 customers spending $1M+ annually.

Now it’s over 1,000.

That’s not just more users. That’s deeper usage from companies that actually matter to revenue.


A big driver here is Claude Code.

It’s quietly become a default tool for a lot of developers, and by early 2026 it was already at a ~$2.5B run-rate on its own.

At the same time:

  • Weekly active users more than doubled since January
  • Business subscriptions to Claude Code quadrupled

That’s product-market fit, not just hype.


Zoom out and you can see the strategy clearly.

Anthropic didn’t try to win the “everyone uses this daily” game.

They focused on high-value workflows inside companies. Coding, automation, internal tooling. Places where AI directly ties to output and budget.

That’s why revenue is scaling faster than user count.


Meanwhile, OpenAI is starting to move the same way.

They’re still larger overall, but the shift is obvious: cutting side bets, leaning into enterprise, talking more openly about profitability and sustainable revenue.

Even their own leadership is saying the quiet part out loud now.

Enterprise is where the money is.


Anthropic is also locking in the supply side.

They expanded partnerships with Google Cloud and Broadcom to secure massive TPU capacity coming online over the next few years.

Which makes sense. If demand is this strong, compute becomes the constraint.


All of this is happening while they’re taking hits elsewhere.

Public friction with the Pentagon. A couple of messy data leaks. Not exactly the clean narrative they’ve tried to maintain.

And yet, the numbers keep moving in one direction.


That’s the part that matters.

Strategy isn’t what you say. It’s what shows up in the revenue line.

Anthropic picked enterprise early, stayed focused, and now the gap is closing faster than most people expected.

IPO chatter is already in the air.

If they keep this pace, it’s not really a question of if. Just when.


The research accelerator for frontier AI labs

While data factories churn out quantity, leading AI labs need partners who co-own research goals and engineer the complex human-AI loops that push models from promising to state-of-the-art. Turing specializes in closing capability gaps through custom research acceleration.

Turing’s research-focused approach includes:

  • Co-owned experimental outcomes and vendor neutrality, not just data delivery
  • Quality-by-design workflows with transparent data lineage and auditable results
  • Custom RL environments and SFT/RLHF/DPO pipelines designed for your benchmarks

Partner with the research accelerator that understands what frontier AI labs actually need.


OpenAI Is Quietly Telling Governments to Rewrite the Rules

OpenAI just dropped a 13-page policy doc that reads less like a whitepaper and more like a warning shot.

The premise is simple. We’re entering a transition toward superintelligence. The current system isn’t built for what comes next.

So they’re proposing a new “social contract.”


Some of the ideas are… not subtle.

  • A sovereign-style wealth fund, financed by AI companies, that pays dividends to citizens
  • Taxes on “robot labor”
  • A 4-day workweek
  • Universal access to AI (“Right to AI”)
  • Containment frameworks for autonomous systems that go off-script

It’s a full-stack rethink of how wealth, work, and access function in an AI-driven economy.


The wealth fund is the headline.

Think Alaska’s oil dividend, but for AI. Companies building the tech contribute, and the upside gets redistributed across society.

That only makes sense if you believe the upside is going to be massive.

And uneven.


Zoom out and the signal is pretty clear.

This isn’t a company lobbying for lighter regulation. It’s a company telling governments the current economic model might not hold.

Which is a very different posture.


Even the framing matters.

Calling it a “social contract” isn’t accidental. That’s language usually reserved for foundational resets, not incremental policy tweaks.

They’re basically saying: if AI scales the way we think it will, you don’t just regulate it. You redesign around it.


There’s also a timing issue baked in.

Policy moves slowly. Technology doesn’t.

So you get this gap where capability outruns governance, and by the time systems are in place, the world has already shifted.

That’s the risk they’re pointing at.


You don’t put something like this out unless you believe your own roadmap.

And if they’re even partially right, the question isn’t whether change is coming.

It’s whether institutions can keep up when it does.


That’s it for today. As always, it would mean the world to us if you helped us grow by sharing this newsletter with other operators.

Our mission is to help as many business operators as possible, and we would love for you to help us with that mission!

