Scale & Strategy
This is Scale & Strategy, the newsletter that lets you keep up to date with business without staying glued to your Twitter feed.
Here’s what we’ve got for you today:
- AGI might have an efficiency problem
- NYT’s crusade against AI has a new target
AGI might have an efficiency problem
Nat Rubio-Licht: What do you think people get wrong about AGI?
Puri: Calling it “artificial intelligence” was already a branding disaster, and then we made it worse by adding “general.” Those words trigger all the sci-fi baggage: Will it replace us? Will it help us? Will it doom us? The framing makes the conversation bigger and scarier than it needs to be.
I’m not obsessed with “general” intelligence. I’m obsessed with useful intelligence. I want AI that solves real, narrow problems inside real companies. Not a machine that replaces humans, but one that augments experts and helps people get work done. That’s the gap between AGI and what I call AUI: artificial useful intelligence.
Rubio-Licht: And what about the grand promises AI labs keep making about AGI?
Puri: The upside is that chasing AGI pushes the underlying tech forward. The downside is thinking we need AGI to write an email or generate a quarterly report. We don’t.
Look at the physics. Human intelligence fits in a 1200-cm³ skull, runs on 20 watts, and gets refueled with sandwiches. A single Nvidia Blackwell GPU board pulls 1200 watts, and serious workloads need dozens or hundreds of them. That’s a three-order-of-magnitude efficiency gap.
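For readers who want the arithmetic behind that claim, here is a rough back-of-the-envelope sketch. The 20-watt brain and 1,200-watt Blackwell board figures come from Puri’s quote above; the 100-board cluster size is our own placeholder for “dozens or hundreds,” not a number from the interview.

```python
import math

# Figures cited in the interview
BRAIN_WATTS = 20               # approximate power draw of a human brain
BLACKWELL_BOARD_WATTS = 1200   # per-board draw cited for Nvidia Blackwell

# Assumption: a "serious workload" cluster of 100 boards (illustrative only)
ASSUMED_BOARDS = 100

cluster_watts = BLACKWELL_BOARD_WATTS * ASSUMED_BOARDS
ratio = cluster_watts / BRAIN_WATTS

print(f"Cluster draw: {cluster_watts:,} W")
print(f"Brain draw:   {BRAIN_WATTS} W")
print(f"Gap: {ratio:,.0f}x (~{math.log10(ratio):.1f} orders of magnitude)")
```

With those assumptions the gap works out to roughly 6,000x, i.e. between three and four orders of magnitude, which is where the “three-order-of-magnitude” figure comes from.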
So the point is simple: don’t throw artificial general intelligence at narrow enterprise tasks. Usefulness means solving a problem at the right cost, with the right efficiency, in the right place. AGI doesn’t check those boxes. AUI does.
NYT’s crusade against AI has a new target
The New York Times is back in court swinging at another AI company. After spending nearly two years locked in a legal knife fight with OpenAI, the paper has now sued Perplexity, accusing the AI search startup of copying and distributing its content without permission.
The suit, filed in federal court in New York, adds to a growing pile of lawsuits from media groups like The Chicago Tribune and Dow Jones. The core allegation: Perplexity’s systems scrape and reproduce Times journalism, sometimes verbatim, sometimes through bots that are hard to detect or block.
Perplexity isn’t some tiny upstart. The company, led by former OpenAI researcher Aravind Srinivas, has raised about $1.2 billion and hit a $20 billion valuation in September. Its pitch is simple: real-time, AI-powered search through its app and its Comet browser. Naturally, a huge share of the answers it generates depend on news outlets.
For the Times, that’s the problem. Its entire business model hinges on paid access to its reporting. If an AI tool can ingest that work and spit it back out for free, the subscription engine breaks.
The lawsuit includes examples of Perplexity lifting Times passages word-for-word. It also shows Perplexity hallucinating information and attributing it to the Times, which the paper argues undermines its core asset: credibility.
This filing comes right after a judge blocked OpenAI’s attempt to avoid turning over 20 million de-identified chat logs in the Times’ ongoing case against the company.
Plenty of media companies are choosing partnership over war with AI firms. The New York Times is not one of them. The complaint warns of a “downward spiral” where scraped content erodes monetization, which leads to weaker reporting, which further erodes revenue. In the Times’ view, Perplexity doesn’t just threaten its business model. It threatens the future supply of the journalism Perplexity depends on to function at all.
That’s it for today. As always, it would mean the world to us if you helped us grow by sharing this newsletter with other operators.
Our mission is to help as many business operators as possible, and we would love for you to help us with that mission!