This is Scale & Strategy, your friendly neighborhood ‘business information sponge’ (squeeze us out daily → get smarter about BizOps).
Here’s what we’ve got for you today:
K2 Think V2 ranks among AI’s most open models
How to Move From Keyword Planning to Prompt Planning
K2 Think V2 ranks among AI’s most open models
Open models have surged in popularity over the past year, but the Institute of Foundation Models at MBZUAI is pushing “openness” further than most.
Its latest release, K2 Think V2, is one of the rare models that’s open end-to-end, offering transparency across the entire development stack. Users can access not just the weights, but also the training data, code, intermediate checkpoints, and evaluation tools, IFM’s Liu told The Deep View.
That’s a major distinction. Many well-known “open” models, including DeepSeek, Alibaba’s Qwen, and OpenAI’s GPT-OSS, are technically open-weight, not fully open-source. In other words, only their trained parameters are public, while the underlying training process remains mostly hidden.
K2 Think V2’s full openness gives developers the ability to inspect, reproduce, and build with far fewer blind spots. It offers what Liu calls “a level of transparency that other leading open models do not.”
This also enables genuinely independent evaluation, since performance doesn’t depend on proprietary datasets or opaque pipelines that can artificially inflate results.
For organizations deploying or auditing AI systems, the benefit is simple: you know exactly what you’re getting. The model’s reasoning and behavior can be traced back to concrete training decisions rather than black-box effects.
K2 Think V2 arrives at a pivotal moment for open models. As deployment costs climb, enterprises are increasingly looking beyond major providers for more affordable, flexible alternatives. But with most models only partially open, users often have limited visibility into how they were built.
K2 Think V2 may offer something rare in this space. As Liu put it, “The main benefit is independence and credibility.”
Business operators scaling from $2M to $20M don’t stall because of bad strategy. They stall because their time is trapped in “quick tasks” no one else owns.
Each of those quick tasks quietly taxes growth.
That’s why high-performing operators approaching their next inflection point ask a sharper question: What should I stop doing entirely?
Our guide, 26 Things to Stop Doing in 2026, identifies the operational and administrative work executives must delegate to move from founder-led execution to scalable leadership.
And when you’re ready to remove those tasks for good?
BELAY matches you with U.S.-based Executive Assistants who operate at the executive level — protecting your time, managing priorities, and creating leverage that supports the jump from $2M → $20M.
Growth doesn’t come from doing more. It comes from doing less better — and delegating the rest.
How to Move From Keyword Planning to Prompt Planning
If Google went dark tomorrow, would your content strategy survive?
Alex Birkett argues that’s the question every marketer needs to ask as discovery shifts from search bars to chatbots.
Seer Interactive found that 95% of the queries answer engines execute have zero tracked search volume. Zero.
That’s the problem: when you rely only on keyword tools, you’re ignoring what ChatGPT, Perplexity, and Gemini are actually doing behind the scenes.
These models don’t just answer a single query. They use query fanout.
A user asks one question, and the system generates multiple sub-queries to build the response. Keyword tools never capture those hidden layers, but that’s where the real opportunity lives.
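To make fanout concrete, here is a minimal, hypothetical sketch of the idea. Real answer engines generate sub-queries with a language model behind the scenes; the static mapping and query text below are purely illustrative, not any vendor's actual behavior or API.

```python
def fan_out(user_query: str) -> list[str]:
    """Expand one user question into the hidden sub-queries an answer
    engine might run before composing its response. A real engine
    generates these dynamically; this lookup just illustrates why
    keyword tools never see them."""
    expansions = {
        "best CRM for small agencies": [
            "CRM pricing comparison for teams under 10",
            "CRM integrations with email marketing tools",
            "effort to migrate a CRM from spreadsheets",
        ],
    }
    # Fall back to the original query if no expansion is defined.
    return expansions.get(user_query, [user_query])

for sub in fan_out("best CRM for small agencies"):
    print(sub)  # none of these show up in a keyword tool's volume report
```

One user-facing question fans out into several narrower sub-queries, and it is those sub-queries, not the original phrasing, that your content competes to answer.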
To win in this new world, you need to move from keyword planning to prompt planning.
And your best SEO tool isn’t another third-party platform. It’s your sales team.
Listen for the specific, multi-part problems prospects are already voicing. A question that comes up in five sales calls from your ICP is worth infinitely more than a generic keyword with 1,000 monthly searches.
Alex suggests scoring every potential topic from 1–10 across three dimensions:
Buyer data: Do customers actually ask this?
Product alignment: Does your product genuinely solve it?
Channel validation: Is there search volume, citation data, or forum activity?
If a topic scores high on buyer relevance and product fit but low on volume, publish it anyway. That’s often a high-intent fanout query waiting to be answered.
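The scoring framework above can be sketched in a few lines. The three dimensions come from the article; the 7-and-up publish threshold and the example topics are my own assumptions for illustration, not Alex's numbers.

```python
from dataclasses import dataclass

@dataclass
class Topic:
    name: str
    buyer_data: int          # do customers actually ask this? (1-10)
    product_alignment: int   # does your product genuinely solve it? (1-10)
    channel_validation: int  # search volume, citations, forum activity (1-10)

    def total(self) -> int:
        return self.buyer_data + self.product_alignment + self.channel_validation

    def publish(self) -> bool:
        # Per the article: strong buyer relevance plus product fit wins
        # even when channel volume is low. The >= 7 cutoff is assumed.
        return self.buyer_data >= 7 and self.product_alignment >= 7

topics = [
    Topic("migrating CRM data out of spreadsheets", 9, 8, 2),
    Topic("what is a CRM", 3, 4, 9),
]
for t in topics:
    print(f"{t.name}: total={t.total()}, publish={t.publish()}")
```

Note the first topic publishes despite a channel-validation score of 2: that is exactly the low-volume, high-intent fanout query the article says to write anyway.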
Answer engines also favor specific formats. They consistently cite:
Listicles
Comparison pages
Knowledge-driven explainers
Structured content makes it easier for models to extract and reuse your information.
The goal is to reverse-engineer where these systems pull from. Study citation data and identify the pages already getting surfaced.
Often, pain-point content triggers product recommendations far faster than broad category keywords ever will.
This isn’t a strategy where you crank out 50 AI-optimized posts overnight. It takes time: roughly three months to build the research engine, and closer to a year to see meaningful traffic shifts.
But while everyone else chases vanity metrics on Google, you might be building the foundation to get cited by the engines of the future.