Claude Code Leak: Not a Hack, Still a Problem


Scale & Strategy

together with Delve

This is Scale & Strategy, the newsletter delivering your daily business nutrients.

Here’s what we’ve got for you today:

  • Slack Is Trying to Become the Operating System for Work
  • Claude Code Leak: Not a Hack, Still a Problem

Slack Is Trying to Become the Operating System for Work

Salesforce is done pretending Slack is just a chat app.

They just rolled out ~30 new AI features, almost all centered on turning Slackbot from a helper into something closer to an operator.


This isn’t a fresh start, it’s a continuation.

Back in January, Slackbot started acting more like an agent. Drafting emails, scheduling meetings, pulling info from your inbox.

Now they’re pushing it further into actual workflow execution.

The big unlock is reusable AI “skills.”

Instead of one-off prompts, you can define tasks Slackbot can run repeatedly across different contexts. Think of it like lightweight automation without needing to wire up a full system.

Ask it to plan an event budget, and it can:

  • Pull relevant info from Slack and connected tools
  • Build out a plan
  • Loop in the right people
  • Schedule the meeting

Not revolutionary individually. But stitched together, it starts replacing real coordination work.


They’re also expanding its reach outside Slack.

Slackbot now connects into external tools and Salesforce’s broader agent ecosystem. It can hand off tasks, trigger workflows, and basically act as a middle layer between systems.

Add in meeting transcription, summaries, action items, plus context from calendars and conversations, and you start to see the direction.

Less “chat,” more “command center.”


The strategy is pretty obvious.

Slack already owns where work conversations happen. Now they want to own what happens after the conversation.

Planning, coordination, execution. All inside one surface, powered by AI.


This is Salesforce playing the long game.

If Slack becomes the place where work gets initiated and completed, it stops being a tool and starts being infrastructure.

And once you’re infrastructure, switching costs go way up.


Microsoft will have something to say about it, obviously.

But Slack’s bet is cleaner: stay close to the user, layer in intelligence, and slowly absorb the workflows.

Not flashy. Just quietly trying to sit in the middle of everything your team does.


Migrate to Delve and get a $2,000 VISA card in your inbox

Delve is the AI-native compliance platform that actually does the work for you, auto-collecting evidence from AWS, GitHub, and the rest of your stack so you don’t have to chase screenshots or babysit integrations. Use AI security questionnaire tooling, an AI copilot, and AI everywhere else to make compliance feel less dreadful. Welcome to the new age.

The proof is in the pudding:

Bland → Switched, got compliant, and unlocked $500k ARR in 7 days

11x → Streamlined audits and moved faster on enterprise deals

micro1 → Scaled compliance without adding headcount

Bonus: Delve will handle your migration for free. Zero-touch. No disruption. No starting over.

If you’re dreading opening your current SOC 2 tool, that’s your sign.

Book a demo here to trigger a migration, and get $2,000 sent straight to your inbox as soon as you’re onboarded.


Claude Code Leak: Not a Hack, Still a Problem

Anthropic just had its flagship dev product, Claude Code, spill a massive chunk of its internals into the open.

Not a breach. No attackers. Just a mistake.

Which somehow makes it worse.


Here’s what actually happened.

A source map file bundled into the npm package exposed ~512,000 lines of internal source code. Public repo, public access, instant distribution.
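
For the curious, the mechanics are simple. Source maps are JSON files whose sourcesContent field embeds the original source verbatim, so shipping the .map file ships the code. Here’s a minimal sketch of pulling it back out, assuming the standard source map v3 format (the file name below is hypothetical, not the actual artifact):

    // Source maps (v3) are JSON with a `sources` list of original file
    // paths and, often, a `sourcesContent` array holding those files'
    // full text so debuggers can display the real code.
    import { readFileSync } from "node:fs";

    // "cli.js.map" is a hypothetical name for the bundled map file.
    const map = JSON.parse(readFileSync("cli.js.map", "utf8"));

    // Dump every embedded original file, path first.
    map.sources.forEach((path: string, i: number) => {
      console.log(`--- ${path} ---`);
      console.log(map.sourcesContent?.[i] ?? "(not embedded)");
    });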

Anthropic confirmed no customer data was involved. This wasn’t someone breaking in, it was something slipping out.

That distinction matters. But only to a point.


Because this isn’t really about external security.

It’s about process.

If half a million lines of internal code can get shipped accidentally, the issue isn’t your firewall. It’s how things move through your pipeline. Reviews, packaging, release checks.

The kind of stuff no one wants to think about until it fails publicly.

As Amy Chang put it, this is the classic problem of human error inside complex systems. And her takeaway is blunt: treat AI tools like collaborators, not infrastructure.

In other words, don’t blindly trust the machine. Or the company shipping it.


The leak itself is… not trivial.

It exposes how Claude Code actually works under the hood. Memory architecture, internal structure, and a bunch of unreleased features.

A few highlights that slipped out:

  • Kairos: background agents that can run tasks and ping you asynchronously, very much in line with the agent wave kicked off by things like OpenClaw
  • Buddies: visual agent avatars designed to make the experience more engaging (and, let’s be honest, more shareable)
  • Capybara: a more advanced model variant, already deep in iteration

This isn’t just code. It’s roadmap.


Which means competitors are now reading it too.

You can call it “inspiration” if you want. Reality is simpler. If you know what someone is building and how they’re building it, you can move faster or deliberately cut them off.

There are already reports of competitors adjusting product plans off the back of this.

That’s the real cost.


There’s also a less discussed angle.

Bad actors now have a clearer view into how the system behaves. That doesn’t instantly break anything, but it lowers the barrier for finding weaknesses later.

Security through obscurity isn’t a strategy, but losing obscurity overnight isn’t ideal either.


Anthropic moved quickly to clean it up.

They issued takedowns across GitHub, initially targeting thousands of copies before narrowing the scope. Standard damage control.

But once something like this spreads, it doesn’t really disappear. It just becomes harder to find.


Zoom out and this hits where it hurts most: reputation.

This is the second slip in a short window, after the accidental Claude Mythos blog post leak. For a company positioning itself as the “safe” AI lab, that narrative starts to crack if these keep happening.

Not because the tech is unsafe. Because the operations look loose.


The irony is pretty clean.

The company warning the world about AI risk just got reminded that the bigger risk is often human.

And that’s the part you don’t get to patch with a model update.



That’s it for today. As always, it would mean the world to us if you helped us grow by sharing this newsletter with other operators.

Our mission is to help as many business operators as possible, and we would love for you to help us with that mission!

