The Simplest Way to Create and Launch AI Agents and Apps
You know that AI can help you automate your work, but you just don't know how to get started.
With Lindy, you can build AI agents and apps in minutes simply by describing what you want in plain English.
→ "Create a booking platform for my business."
→ "Automate my sales outreach."
→ "Create a weekly summary about each employee's performance and send it as an email."
From inbound lead qualification to AI-powered customer support and full-blown apps, Lindy has hundreds of agents that are ready to work for you 24/7/365.
Stop doing repetitive tasks manually. Let Lindy automate workflows, save time, and grow your business.
Today's Agenda
Hello, fellow humans! About twice a year, influential, long-time Silicon Valley strategist Benedict Evans gives a presentation on whatever he sees as the important topic of the moment. Not surprisingly, he’s been talking about AI for a few years now, and also not surprisingly, he’s bullish. So I’ve reviewed his Fall 2025 talk from Singapore, and below I’ve highlighted some of his key insights, each paired with a practical takeaway and the possible second- and third-order effects that may flow from it.
AI Eats the World - Fall 2025
This is Benedict Evans' comprehensive analysis of the AI landscape.
You can see the whole slide deck here: Autumn 2025 Slide Deck | Benedict Evans
1. The AI Capex Bubble is Functionally Rational for Each Player
Insight: Big Tech is spending roughly $400bn on AI infrastructure in 2025, more than global telecom spending. This looks like a bubble, but each player has rational motivations: Nvidia converts excess cash into platform lock-in, OpenAI trades paper (equity) for hard assets, and Oracle leverages up in a bid for relevance.
Practical Takeaway: Don't dismiss apparent bubbles as irrational. Instead, understand each player's individual incentives and ask what YOU would do with their balance sheet position.
Second-Order Effects:
The massive infrastructure buildout creates winner-take-most dynamics through economies of scale
Smaller players get locked out of competing at the foundation model level
Value migrates to whoever can efficiently monetize the installed capacity
Third-Order Effects:
The commoditization of models accelerates faster than the infrastructure pays back, creating potential stranded assets
Geopolitical competition intensifies as AI infrastructure becomes a national strategic asset
Financial markets may punish the laggards who didn't invest, even if the leaders overspent
2. Models are Converging Toward Commodity Status Without Clear Product Differentiation
Insight: Multiple models now score within 5-10% of the leaders on benchmarks, and Chinese and open-source models are closing the gap. Yet there's still no clarity on sustainable moats, the right product form, or value-capture mechanisms.
Practical Takeaway: If you're building on AI, don't bet on model superiority as a moat. Build around proprietary data, distribution, product experience, or workflow integration instead.
Second-Order Effects:
Competition shifts from "best model" to "best at reaching customers" or "best integrated solution"
Vertical-specific applications with proprietary data sets become more valuable
API costs continue declining, making integration economics more attractive
Third-Order Effects:
Companies that built solely on model access find themselves disrupted by better distribution
The "AI wrapper" critique becomes real—only companies adding genuine product value survive
Network effects and switching costs become the primary defensible advantages
3. The Engagement Gap Between Experimentation and Daily Use Remains Wide
Insight: ChatGPT has 800m weekly users but only ~5% paying. Most surveys show 20-40% monthly use but only 5-15% daily use. People try AI but don't integrate it into daily workflows.
Practical Takeaway: Don't design for power users or assume viral adoption. The challenge is converting experimental users into habitual ones. This requires solving specific, recurring problems within existing workflows rather than asking users to change behavior.
Second-Order Effects:
Pure chatbot interfaces may not be the winning form factor
Embedded AI features in existing tools (GitHub Copilot model) outperform standalone apps
The "switch cost" of learning new AI interactions becomes a barrier to adoption
Third-Order Effects:
Winners will be companies that make AI invisible within familiar interfaces
Job roles that require "consciously looking for optimization opportunities" become more valuable
The productivity gap widens between those who successfully integrate AI and those who don't
4. "Absorb" Use Cases Work; "Innovate" and "Disrupt" Remain Unclear
Insight: AI succeeds at absorbing obvious automation (coding, marketing, customer support). But the "innovate" layer (new products, new bundling) and "disrupt" layer (redefining questions) remain mostly theoretical.
Practical Takeaway: Start with unglamorous automation of expensive, repetitive work. The ROI is clear and immediate. Don't wait for innovative use cases—capture value from boring efficiency gains first.
Second-Order Effects:
Companies that automate quickly build cost advantages competitors must match
The bar for "minimum viable product" rises as AI-assisted development becomes standard
Labor reallocation happens faster than new job creation, creating transition friction
Third-Order Effects:
Industries with the most repetitive knowledge work see earliest disruption (legal doc review, basic coding, tier-1 support)
The "innovate" winners emerge from companies that deployed "absorb" successfully and learned from real usage
Competitive advantages shift from "who has more people" to "who orchestrates AI-human workflows best"
5. AI Recommendations Could Unbundle the Surveillance Capitalism Model
Insight: Current recommendation systems require capturing user behavior to generate recommendations (surveillance capitalism). LLMs could potentially recommend based on understanding products themselves, not just behavioral correlation—without needing massive user bases.
Practical Takeaway: If you've built a business on network effects from user data, prepare for AI that can recommend without that data. Conversely, if you lack scale, AI might let you compete on recommendation quality without building a user base first (see the sketch after the lists below).
Second-Order Effects:
Google and Meta's ad targeting moats potentially weaken if AI can match intent without behavioral tracking
Privacy-preserving recommendation becomes technically feasible, changing regulatory dynamics
New entrants can compete in personalization without cold-start problems
Third-Order Effects:
Power shifts from platforms that aggregate user data to those that aggregate product understanding
Advertising shifts from targeting audiences to targeting intents/problems/moments
The trillion-dollar digital advertising market faces structural reorganization
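To make that mechanism concrete, here's a minimal, hypothetical Python sketch of a recommender that works purely from product understanding. The call_llm helper, the three-item catalog, and the prompt wording are all invented placeholders, not anything from Evans' deck; the point is simply that the only inputs are product descriptions and the user's stated need, with no behavioral history anywhere in the loop.

```python
# Hypothetical sketch: recommending from product understanding alone.
# call_llm() is a placeholder for whatever model API you actually use.

CATALOG = {
    "TrailRunner 3": "Lightweight trail-running shoe, aggressive grip, narrow fit.",
    "RoadGlide Max": "Heavily cushioned road shoe for long distances, wide fit.",
    "PeakHiker GTX": "Waterproof hiking boot with firm ankle support for rough terrain.",
}


def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real call to your model provider here."""
    raise NotImplementedError("wire this up to the LLM API you actually use")


def build_prompt(user_request: str) -> str:
    # Note what's absent: no purchase history, click logs, or lookalike
    # audiences; just product descriptions plus the user's stated intent.
    products = "\n".join(f"- {name}: {desc}" for name, desc in CATALOG.items())
    return (
        "You are a product advisor. Using only the catalog below, recommend "
        "the single best product for the request and give a one-sentence reason.\n\n"
        f"Catalog:\n{products}\n\n"
        f"Request: {user_request}"
    )


def recommend(user_request: str) -> str:
    return call_llm(build_prompt(user_request))


if __name__ == "__main__":
    # Runs as-is: prints the prompt the model would see. Calling recommend()
    # additionally requires wiring call_llm() to a real provider.
    print(build_prompt("I need shoes for muddy mountain trails and weak ankles."))
```

Whether this approach beats behaviorally trained recommenders at scale is exactly the open question Evans raises; the sketch just shows why no accumulated user-data moat is required to attempt it.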
6. The Jevons Paradox: Automation Doesn't Reduce Work, It Expands What's Possible
Insight: Steam engines didn't reduce Britain's labor needs—they gave the equivalent of 5x the population in work capacity by 1900. "Infinite interns" won't reduce headcount; they'll expand what's economically viable to attempt.
Practical Takeaway: Don't plan for AI to let you do the same work with fewer people. Plan for competitors to attempt 10x more things at the same cost. Your strategic question: what becomes possible when tasks that required 100 people now require 10?
Second-Order Effects:
Businesses that used "we have thousands of people" as a moat lose that advantage
The bottleneck shifts from execution capacity to strategic decision-making and verification
Winners are those who figure out what new problems are now economically solvable
Third-Order Effects:
Industries reorganize around new possible scale (e.g., personalized services that were economically impossible)
Employment doesn't disappear but violently shifts to roles AI can't do (judgment, verification, strategy)
Companies that saw automation as cost-cutting get disrupted by those who saw it as capability expansion
7. This Deployment Cycle Will Take Decades, Not Years
Insight: Cloud computing, long since "old and boring," still sits at only ~30% of enterprise workloads. E-commerce took 25 years to reach 30% of retail. And 40% of CIOs don't plan to have LLMs in production until 2026 at the earliest.
Practical Takeaway: There's a massive gap between technological possibility and organizational reality. Build for the long deployment cycle. Focus on change management, integration with legacy systems, and proving ROI in small wins before big bets.
Second-Order Effects:
First-movers who execute well have years to build advantages before fast-followers arrive
System integration and change management become more valuable than pure AI capability
The "trough of disillusionment" hits before mass adoption, creating entry opportunities for patient capital
Third-Order Effects:
A two-tier economy emerges: AI-native companies vs. those still deploying
The productivity spread between leaders and laggards widens dramatically over decades
The "boring" work of enterprise deployment becomes where the real money gets made
The Meta Pattern: The most dangerous assumption is that AI follows mobile's timeline. The infrastructure investment suggests a shorter cycle, but enterprise deployment patterns suggest longer. The winners will be those who operate on both timeframes simultaneously—building for rapid model evolution while respecting organizational change constraints.
Radical Candor
[I’ve] given ChatGPT an image of my fridge and said, what should I cook? That's not a Google query. That's not something that would produce links. That's changing the nature of the question and… how we think about what this might be. And of course that's also redirecting a trillion dollars of annual ad spend, about half of which currently goes to… Google, Meta, and Amazon. If we change how the internet works, if we change what you mean when you ask a question or you search for something, then all the money moves.


