
Today's Agenda

Hello fellow humans! We have big questions to think about over the weekend.

News

Who’s the Boss? AI Makes It Hard to Know

AI is starting to shape how we think, and we need to be conscious of this pattern.

Ansh Lalwani writes for Critical Debates HSGJ that “contemporary AI models learn from patterns and modify their outputs based on contextual cues, user behavior, and historical data. This dynamic interaction fosters a recursive loop between system suggestion and user expectation, creating an environment where the user’s cognitive architecture subtly adapts to the tool itself.”

We’re always at the mercy of our own biases, and unfortunately, we already have an automation bias. A study by Parasuraman & Riley (1997) found that people defer to automated systems, even in cases where the automation can easily be shown to be wrong. The study found that automation can invert the relationship between humans and machines: “Automation abuse, or the automation of functions by designers and implementation by managers without due regard for the consequences for human performance, tends to define the operator's roles as by-products of the automation. Automation abuse can also promote misuse and disuse of automation by human operators.”

Lalwani cites this study and others to describe how our human tendencies are both influenced by technology and targeted by it. We describe advanced tech in anthropomorphic terms, inferring narratives and ascribing motives where none may exist. Look no further than the way many people build therapeutic or even romantic relationships with chatbots. And software designers know this, so interfaces are intentionally designed to fuel these psychological attachments.

And I think our ability to steer this relationship comes from our meta-cognition. We need to think consciously about how we think, not just once or periodically, but on an ongoing basis. We need to be able to step back and ask ourselves whether the AI really has an agenda or whether we’re fabricating one in our minds. We have to be able to ask whether the automation is really serving our business needs or whether we’re just supporting an automation we don’t understand.

We have to provide that context. We have to keep our hands on the wheel with a clear objective that we understand.

Looking for unbiased, fact-based news? Join 1440 today.

Join over 4 million Americans who start their day with 1440 – your daily digest for unbiased, fact-centric news. From politics to sports, we cover it all by analyzing over 100 sources. Our concise, 5-minute read lands in your inbox each morning at no cost. Experience news without the noise; let 1440 help you make up your own mind. Sign up now and invite your friends and family to be part of the informed.

What is Design in AI?

Writing for UXmatters, Kevin Silver argues that software design and user experience are always in conversation with our behaviors when we use these systems. Software has always had behavior as a UX dimension, but AI is not a bounded system like prior generations of software; it can dynamically adapt its behavior and personalize content. This is the first essay in a series in which he intends to lay out a framework for thinking about AI behavior in UX design. He breaks it down into four key dynamics we need to consider in our UX design: adaptation, attention, alignment, and repair.

The challenge for product teams then becomes how to direct this AI behavior around these four core dynamics so that they are properly aligned with human values and intentions. As AI moves beyond simple task automation to cognitive augmentation that enhances human decision-making, we need frameworks for AI behavior design, delegation strategies, and systems that can adapt to complex, context-dependent scenarios.

How Do We Measure AI Progress? And What Can End That Progress?

As the AI world debates whether or not AI is a bubble, Julian Schrittwieser asserted that we are misreading the scale of AI’s exponential growth. And Nate B. Jones takes it one step further, saying the AI bubble is FAKE. Schrittwieser is a top AI researcher who contributed to the development of AlphaGo Zero and MuZero, and Jones is a former Amazonian who now experiments with AI and writes extensively on use cases, prompting, and the business case for AI.

To summarize their points, the belief that AI progress will remain exponential is rooted in consistent internal observations and the performance of models on key, verifiable metrics:

  1. Consistent Internal Progress: Researchers in frontier labs are not seeing any slowdown of progress. They observe a very consistent improvement over many years. Externally, the progress may seem slow, leading to an information gap, but internally, metrics show explosive growth.

  2. Autonomous Task Length as the Core Metric: The primary measure of meaningful progress is how long AI agents can work autonomously without supervision. This ability is strongly correlated with economically useful work.

    1. Tracking shows that every 3 or 4 months, models become able to complete a task twice as long as before, entirely on their own.

    2. More recent tracking shows a shift from 15-minute tasks to 2-hour tasks in just seven months, following a doubling curve expected to continue.

  3. Verifiable Projections and Benchmarks: Extrapolating the linear fit of these consistent past trends leads to high-confidence forecasts (a back-of-the-envelope version in code follows this list):

    1. By mid-2026, agents should be able to work all day autonomously (a full 8-hour day’s worth of tasks).

    2. By late 2026, at least one model is expected to match industry experts across many occupations.

    3. By 2027, models are expected to frequently outperform experts on many tasks.
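To make the doubling math concrete, here is a minimal back-of-the-envelope sketch in Python. The starting horizon and doubling period are assumptions chosen for illustration (the sources above quote doubling periods between roughly 3 and 7 months), not figures from either author:

```python
# Back-of-the-envelope extrapolation of autonomous task length.
# Assumptions (illustrative only): a current horizon of 2 hours and a
# doubling period of 4 months.
import math

current_hours = 2.0    # assumed current autonomous task length
doubling_months = 4.0  # assumed doubling period
target_hours = 8.0     # a full workday

doublings_needed = math.log2(target_hours / current_hours)
months_needed = doublings_needed * doubling_months

print(f"Doublings to reach {target_hours:.0f}h: {doublings_needed:.1f}")
print(f"Months at the assumed rate: {months_needed:.1f}")
# With these assumptions, an 8-hour workday is 2 doublings (about 8
# months) away, which is roughly consistent with the mid-2026 forecast.
```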

There’s a strong bull case to be made here. But at some point, every exponential trajectory must break; nothing can move at that rate indefinitely. So I asked ChatGPT what the limiting factors might be and what externalities could interrupt this trajectory.

Here, I’ll share what ChatGPT had to say.

🧩 1. The Limiting Factors on AI Improvement and Growth

1.1. Energy and Compute Economics

Exponential improvement depends on compute scaling and energy efficiency. As training costs rise superlinearly with model size and duration, marginal utility per watt and per dollar will become the primary constraint. The global semiconductor supply chain, rare-earth materials, and datacenter energy draw (projected to reach ~10% of global electricity by 2030) set hard physical limits.

Limit threshold: When energy and compute costs outpace productivity gains per model generation.

Analogy: Like Moore’s Law flattening — progress continues, but efficiency per dollar stops compounding.
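As a toy illustration of that threshold, here is a sketch in which training cost compounds faster than the value each generation unlocks. Every number is invented for illustration; the point is the crossover, not the specific figures:

```python
# Toy model of the "costs outpace gains" threshold described above.
# Assumed (made-up) numbers: training cost grows ~3x per generation,
# while the economic value a generation unlocks grows ~2x.
cost, value = 100e6, 400e6            # assumed gen-0 cost and value ($)
cost_growth, value_growth = 3.0, 2.0

for gen in range(1, 7):
    cost *= cost_growth
    value *= value_growth
    print(f"gen {gen}: cost ${cost / 1e9:.1f}B, "
          f"value ${value / 1e9:.1f}B, value/cost {value / cost:.2f}")
    if value < cost:
        print("  -> the marginal generation no longer pays for itself")
        break
```

Under these assumptions the crossover arrives at generation 4. Change the growth rates and the crossover moves, but as long as cost compounds faster than value, it always arrives.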

1.2. Diminishing Returns in Cognitive Scaling

Beyond a certain point, models require exponentially more data and compute for marginal improvements in coherence or reasoning. Current scaling laws already show diminishing returns; eventually, models will hit the “information ceiling” — the point where internet-scale data saturation forces synthetic data loops, which in turn degrade novelty and truthfulness.

Limit threshold: When models begin to self-train on their own outputs more than on human-generated data, leading to “model collapse.”
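For a feel of those diminishing returns, here is a short sketch using a Chinchilla-style scaling law. The constants are the published Hoffmann et al. (2022) fits, but treat this as an illustration of the curve’s shape, not a forecast for any particular model:

```python
# Diminishing returns under a Chinchilla-style scaling law:
#   L(N, D) = E + A / N**alpha + B / D**beta
# Constants are the Hoffmann et al. (2022) fits.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

for n in [1e9, 1e10, 1e11, 1e12]:
    # Compute-optimal training uses roughly 20 tokens per parameter.
    print(f"N = {n:.0e}: loss ~ {loss(n, 20 * n):.3f}")
# Each 10x in parameters (with ~10x data, so ~100x compute) shaves a
# smaller slice off the loss, and the loss never crosses E = 1.69.
```

Each order of magnitude of scale buys less improvement than the last, which is exactly the diminishing-returns picture described above.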

1.3. Market Saturation and Human Elasticity

Even if models can work autonomously for days, human willingness to pay sets an adoption limit. Businesses will plateau at the “good enough” level of automation that balances quality and liability risk. Once 80–90% of white-collar processes are AI-augmented, further growth will depend on displacing human trust and accountability systems, which change much slower than technology.

Limit threshold: When cost elasticity and perceived marginal value of AI autonomy flatten — i.e., when AI labor becomes abundant but not yet trustworthy enough to replace high-stakes human roles.

📉 2. Signals That the Limit Is Being Reached

  1. Stagnating Training Efficiency:

    • Slower improvement in FLOPs-to-performance ratios across model generations.

    • AI company reports showing “parameter inflation” without clear performance leaps.

  2. Capital Rotation or Retrenchment:

    • Declining venture and sovereign fund investment in foundation models; shift toward smaller, vertical, or edge-specialized AIs.

    • Consolidation among AI infrastructure vendors (cloud GPU providers, data brokers).

  3. Labor Price Inversion:

    • If AI tools commoditize so quickly that productivity gains no longer command pricing power, the resulting deflationary squeeze pushes capital to retreat.

  4. Policy and Compliance Saturation:

    • When regulatory drag (compliance frameworks, liability laws, content provenance rules) outpaces the ability to innovate quickly.

🌍 3. Externalities That Might Interrupt Exponential Improvement

  1. Compute or Energy Shocks:

    • Geopolitical disruption in semiconductor supply (Taiwan, rare metals, energy rationing).

    • Carbon regulation on AI data centers introduces hard scaling caps.

  2. Data Quality Collapse:

    • Recursive self-training (“model cannibalism”) reduces factual integrity and creativity.

    • Privacy laws and data licensing battles restrict access to high-quality corpora.

  3. Public Trust and Social Friction:

    • Backlash from deepfake misuse, misinformation, or massive white-collar displacement slows adoption.

    • Cultural exhaustion — a demand-side slowdown as humans resist over-automation.

  4. Cognitive Overshoot:

    • Models that are too autonomous start diverging from human intention. Organizations may intentionally cap autonomy duration to manage risk and control interpretability.

🧠 4. Meta Insight

If we measure progress by how long AI can work without human supervision, we eventually reach an economic asymptote of trust. That limit won’t be purely technical — it will be psychological, institutional, and infrastructural. AI might be capable of working autonomously for weeks, but the world may only allow it to do so for minutes.

Exponential improvement will hit a wall not of math, but of meaning — the boundary between autonomous action and accountable responsibility.

Radical Candor

Whatever AI you are using right now is going to be the worst AI you will ever use.

Ethan Mollick

Thank You!
