OpenAI is opening up “offensive-capable” cyber models while Anthropic keeps theirs locked down


Scale & Strategy

together with

Notion

This is Scale & Strategy, the newsletter that connects the dots, so you can see the bigger picture.

Here’s what we’ve got for you today:

  • OpenAI is opening up “offensive-capable” cyber models while Anthropic keeps theirs locked down
  • Allbirds just pivoted from sneakers to selling GPU time

OpenAI is opening up “offensive-capable” cyber models while Anthropic keeps theirs locked down

AI in cybersecurity is about to get a lot less theoretical.

OpenAI is expanding its Trusted Access for Cyber program and giving a much wider group of vetted defenders access to a new GPT-5.4 variant built specifically for cyber work. The key difference isn’t just capability; it’s posture. This model is intentionally more permissive.

They’re not being subtle about it. GPT-5.4-Cyber is tuned with fewer restrictions and better support for workflows that were previously awkward or blocked entirely, including things like binary reverse engineering. In other words, it’s closer to how actual security teams operate instead of a sanitized demo environment.

Access is still gated, but not tightly. Thousands of individual defenders and hundreds of teams are being onboarded, assuming they go through verification. The bar is basically “prove you’re legit and willing to work with OpenAI,” not “be one of 40 handpicked orgs.”

That’s the real contrast with Anthropic and its Mythos rollout. Anthropic kept things extremely tight, limiting access to a small group of partners and a few dozen additional orgs. Cleaner from a safety narrative standpoint, but it slows down real-world iteration.

OpenAI is taking the opposite bet:

  • Broader access
  • Looser constraints
  • Faster feedback loops

They’re still talking about safeguards, staged rollouts, and ecosystem investment, but the underlying strategy is obvious. Get these tools into the hands of people doing actual work and let the edge cases surface in production.

There’s real risk here. A more permissive model is, by definition, easier to misuse. You can’t fully separate “defensive capability” from “offensive knowledge” in cybersecurity. Expanding access means accepting that tradeoff.

But there’s also a strategic reality Anthropic is implicitly acknowledging by holding back. If your competitor is willing to ship broadly, restraint isn’t neutral. It’s giving up ground.

Anthropic’s approach signals control and caution. OpenAI’s signals speed and iteration.

One of these wins on narrative. The other usually wins on adoption.


You’re building a billion-dollar company. Don’t plan it in Google Docs.

Notion gives you the workspace your startup deserves — docs, tasks, product roadmaps, team wikis, and investor updates in one clean space.
For a limited time, early-stage startups get 6 months of Notion Premium free. No credit card. No catch.


Allbirds just pivoted from sneakers to selling GPU time

Allbirds sold off what was left of the brand and is now rebranding into an AI compute business. Same company, completely different game. The market loved it. Stock ripped 6x on the announcement.

They already unloaded the core asset. The brand went for $39M — all that’s left of a ~$4B peak valuation back in 2021. What remains is basically a public shell with a ticker and a new story.

Now the story is GPUs.

They’ve lined up a $50M financing to buy hardware and launch a GPU-as-a-service offering, renting compute under longer-term contracts. At the same time, they’re asking shareholders to drop the “public benefit” status, which is a polite way of saying the sustainability mission is officially dead.

The numbers tell you everything:

  • Stock goes from ~$3 to $20+ on the pivot
  • Market cap was sitting around $22M before this
  • They raise more capital for GPUs than the company was worth

This is less “reinvention” and more financial engineering with a tailwind.

We’ve seen this movie before. In the crypto cycle, struggling companies stapled “blockchain” onto their name and got a second life. Now it’s AI, and compute scarcity gives it just enough credibility to not look completely ridiculous.

The uncomfortable part is… it might actually work.

GPU demand is real. Supply is tight. If they can secure hardware and lock in contracts, the business can generate cash regardless of how absurd the origin story is.

So this isn’t about Allbirds becoming an “AI company.” It’s about a public vehicle attaching itself to the most capital-constrained layer in AI and hoping the market funds the transition.

In a normal market, this gets laughed out of the room. In this one, it gets a term sheet and a 600% rally.


That’s it for today. As always, it would mean the world to us if you helped us grow by sharing this newsletter with other operators.

Our mission is to help as many business operators as possible, and we would love for you to help us with that mission!

