AI Is at the Mercy of Human Choices
Hello, fellow humans! Let’s be clear about how much of the future of AI is in our hands. The more we can make thoughtful choices about how to use AI, the more we can ensure that AI serves us and we’re not just surrendering our economic power to other groups and forces at work in the AI space.
Today's Agenda
A Better Way to Deploy Voice AI at Scale
Most Voice AI deployments fail for the same reasons: unclear logic, limited testing tools, unpredictable latency, and no systematic way to improve after launch.
The BELL Framework solves this with a repeatable lifecycle — Build, Evaluate, Launch, Learn — built for enterprise-grade call environments.
See how leading teams are using BELL to deploy faster and operate with confidence.
Headlines
Executives Are Wrong About How Their Teams See AI
In fact, they’re wrong by a factor of two.
Harvard Business Review published a piece by Deborah Lovich, Stephan Meier and Chenault Taylor that found “76% of executives reported that their employees feel enthusiastic about AI adoption in their organization. But the view from the bottom up is less sunny: Just 31% of individual contributors expressed enthusiasm about adopting AI.”
The study also “recognize[s] that a workforce, like a consumer base, is not monolithic. Different segments have different motivations and fears.”
“…at BCG, we found that teams that co-created their AI rollout were twice as likely to use the tools in practice. The same pattern appears in our broader survey data: Employee-centric organizations that prioritize enhancing the employee experience—improving workflows, well-being, and engagement—are also more advanced in employee motivation, retention, and overall performance.”
Managers’ Use of AI for Busywork Resets Expectations
Fortune finds that the way managers are using AI is resetting expectations and flattening org charts. But if managers are delegating all that work to AI, how much of that work was actually valuable in the first place?
SMBs Leveraging AI
According to a new study from LinkedIn, SMBs have some unique advantages that allow them to benefit from AI in ways that larger organizations cannot. Central to that advantage: SMBs are often built around specific, market-driven domain expertise that a human can use to steer AI toward concrete results. According to the Australian outlet SMBTech, “Small businesses are punching above their weight and AI is a key factor enabling that success.”
According to the LinkedIn report, “39% of Australian professionals say they want to work for themselves in the near future, and they're taking action. Part of what's fueling this confidence: More than 55% of small business leaders in Australia say starting and running a business is easier today because of AI.”
Word of the Year
In a world where no one seems to agree on anything, Merriam-Webster and The Economist agree on the word of the year.
Feature Article
The AI Three-Body Problem: Why the Future of AI is Fundamentally Unstable
If you've been following AI development with a growing sense that prediction is impossible, you're not wrong. The trajectory of AI innovation resembles physics' famous three-body problem: three entities exerting gravitational force on each other, creating a system that is mathematically chaotic and fundamentally unpredictable.
But unlike the celestial version, our three bodies are human: model makers deciding what to build and how to monetize it, builders choosing which models to build on and what products to create, and users voting with their adoption and dollars on what actually matters. The feedback loops between these three groups create instability that no amount of analysis can fully predict—but understanding the dynamics can help you build strategies that survive multiple possible futures.
The System's Structure
Model makers—the Anthropics, OpenAIs, Metas, and Mistrals of the world—control capability development and access. Builders—developers, enterprises, and product companies—decide which models to depend on and what products to create. Users—all of us consuming AI-enabled products—create demand signals through usage and payment behavior that ripple back through the entire system.
Each group's incentives depend on predicting what the other two will do, but those predictions themselves change behavior. A model maker's pricing strategy depends on whether builders will pay premium rates, but builders' willingness to pay depends on whether users value the capability difference, and users' valuations depend on whether model makers maintain their advantage. There is no stable equilibrium.
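The claim that there is no stable equilibrium can be made concrete with a deliberately crude sketch. The following is a hypothetical toy model, not anything from the article or its sources: three coupled logistic-map variables stand in for pricing power, builder willingness to pay, and user valuation, each reacting to another. Even with fully deterministic rules, a one-in-a-million difference in starting conditions produces trajectories that soon bear no resemblance to each other, the signature of a chaotic system.

```python
# Hypothetical toy model (illustrative only): three coupled logistic maps
# stand in for the three "bodies" --
#   p: model makers' pricing power
#   w: builders' willingness to pay
#   v: users' valuation of the capability difference
# Each variable's next value depends on another group's current value,
# mirroring the circular dependence described above.

def step(state, k=3.9):
    """One round of mutual reaction; k=3.9 puts the logistic map in its chaotic regime."""
    p, w, v = state
    return (
        k * w * (1 - w),  # pricing power reacts to builder demand
        k * v * (1 - v),  # builder willingness reacts to user valuation
        k * p * (1 - p),  # user valuation reacts to pricing behavior
    )

def simulate(state, rounds):
    for _ in range(rounds):
        state = step(state)
    return state

# Two almost identical starting conditions (difference of one part in a million)...
a = simulate((0.300000, 0.5, 0.5), 60)
b = simulate((0.300001, 0.5, 0.5), 60)

# ...end up far apart: deterministic rules, unpredictable outcome.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(f"divergence after 60 rounds: {divergence:.3f}")
```

The point is not the model's realism. It is that mutual dependence plus nonlinearity is enough to defeat long-range prediction, which is exactly the structure the three-body analogy is pointing at.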
This creates four plausible but incompatible scenarios, each triggered by different feedback loops reaching critical thresholds.
Scenario One: The Commodification Collapse
Imagine Meta releases Llama 4 at GPT-5 performance levels within months of OpenAI's launch. Builders immediately begin cost-benefit calculations: slightly better capability versus dramatically lower costs and no vendor lock-in. Many migrate to open models.
This migration starves frontier labs of the revenue needed to maintain their lead, which narrows the capability gap further, which accelerates migration. Self-fulfilling prophecy.
The critical human choice for model makers becomes whether to compete on price (accepting lower margins and potential death spirals) or double down on differentiation through vertical integration—building end-user products directly rather than selling API access. OpenAI's ChatGPT strategy reflects this thinking: capture value at the product layer when the model layer commoditizes.
For builders, the choice is whether to abstract away from model dependencies now while switching costs are manageable, or bet on frontier models maintaining advantages worth the lock-in risk. Those who build model-agnostic architectures gain arbitrage opportunities between providers, which ironically accelerates commodification by making models even more interchangeable.
Users create the feedback pressure through their willingness to pay. If product quality built on different models converges, users simply choose cheaper options. This trains builders to optimize for cost rather than cutting-edge capability, completing the loop.
Scenario Two: Regulatory Bifurcation
High-profile AI failures create political pressure for regulation. But implementation details matter enormously: regulations requiring licensing, auditing, or liability frameworks could favor either closed or open approaches depending on how they're structured.
Model makers with capital and legal expertise can shape these regulations through lobbying, and they have strong incentives to push for rules requiring centralized control, which favor closed models. This is regulatory capture playing out in real time, as recent executive orders targeting state AI laws demonstrate.
But there's a countervailing force: if regulations make closed models too expensive or restrictive, builders migrate to open alternatives, even if those alternatives must operate in gray areas or offshore. This makes open models less safe (no oversight, no alignment research), which creates ammunition for more restrictive regulation. Vicious cycle.
For builders in regulated industries like healthcare and finance, clear liability chains may actually make closed models preferable despite higher costs. This could create market segmentation: regulated use cases on closed models, everything else on open. The two tracks diverge, creating different capability trajectories.
Users face safety-versus-capability tradeoffs imposed by regulation. Sophisticated users who want unrestricted access will find workarounds—VPNs, offshore services, locally-run models. This creates capability inequality: power users get unrestricted access while regular users get restricted products, reinforcing existing advantages.
Scenario Three: The Capability Discontinuity
A frontier lab achieves a genuine breakthrough—true reasoning, long-horizon planning, or reliable agency. The capability gap suddenly widens dramatically instead of gradually narrowing.
Builders who bet early on breakthrough capabilities establish market positions before others catch up, creating winner-take-most dynamics. But builders with successful products built on current-generation models face the innovator's dilemma: the breakthrough might cannibalize their existing business. They're incentivized to dismiss or downplay the advance, slowing adoption.
This connects to organizational dynamics around managing-up cultures: enterprises often default to "wait and see" even when early adoption would be strategically superior. Risk aversion, budget cycles, and internal politics slow enterprise response regardless of technical merit.
For model makers with breakthroughs, the choice is whether to maximize monopoly rents while the advantage lasts or expand the ecosystem through selective sharing, betting on sustaining the innovation lead through network effects. But there's a security dilemma: if the breakthrough represents true agency, keeping it proprietary risks regulatory intervention over concentrated power, while opening it risks forfeiting the competitive advantage.
Users who experience breakthrough-enabled products develop dependency. They won't accept downgrades, giving model makers power over the entire value chain. This is classic platform dynamics: control the capability users can't live without, and you can squeeze everyone in between.
Scenario Four: The Economic Inversion
The infrastructure spending required for frontier models becomes unsustainable—recent analysis suggests an $800 billion revenue gap between what's being spent and what can be monetized—while open models achieve "good enough" at dramatically lower cost.
This is Clayton Christensen's low-end disruption pattern. Builders who switch to lower-cost options don't experience quality degradation because "good enough" is actually good enough for their use cases. This creates demonstration effects: other builders follow, accelerating the revenue spiral for frontier labs.
Model makers face the impossible choice of reducing prices to compete (triggering death spirals of lower revenue and reduced R&D capability) or maintaining premium pricing while market share erodes. The rational response is vertical integration: if you can't monetize the model layer, build products where models are implementation details rather than value propositions.
For builders, this scenario makes abstraction layers and optionality critical. Products architected around specific model capabilities become risky investments if the economic model supporting those capabilities proves unsustainable.
The Meta-Pattern: Talent as Signal
Across all four scenarios, researcher movement between frontier labs and open communities creates an information cascade. Each migration transfers architectural knowledge, accelerating open development. But talent flows also signal confidence or concern about frontier trajectories.
Builders and users can infer model makers' internal assessments by watching who leaves and where they go. Brain drain becomes a self-fulfilling signal, accelerating the very dynamics frontier labs want to avoid. This is the chaos multiplier: small perturbations in talent flow create outsized downstream effects.
Strategic Implications
The core insight is that you cannot predict which scenario will unfold, but you can build for survival across multiple futures. For enterprises and builders, this means investing in abstraction layers that maintain optionality despite higher upfront costs. Model-agnostic architectures hedge against all four scenarios.
For model makers, the temptation is to extract maximum value quickly given sustainability uncertainty, but this extraction accelerates threatening dynamics. The strategic discipline question becomes whether organizations can resist short-term pressure to build long-term moats.
For anyone trying to navigate this environment, the key is understanding you're operating in an inherently chaotic system where small choices have outsized impacts. The three human bodies will continue exerting unpredictable force on each other, and stability is not an option. The question is whether you're building strategies that bend rather than break when the system lurches toward scenarios you didn't prepare for.
The trajectory is not just uncertain—it's fundamentally unpredictable. Plan accordingly.
Radical Candor
Employee-centric organizations are seven times more likely to succeed with AI.