
Today's Agenda

Hello, fellow humans! Today, I offer a roadmap for how our intelligence needs to adapt to both make the most of AI and ensure that AI does not eclipse our human capabilities. This is the first in a series that introduces the concepts, and in the following parts, I will dive into each skill set to offer practical advice and resources.

The Ghost in the Loop Part 1: The Human Shift

From Doing Work to Directing Intelligence

“Just a whisper. I hear it in my ghost.” - Major Motoko Kusanagi, Ghost in the Shell

This is Part 1 of a series “A Ghost in the Loop: Human Skills for the Age of Autonomous Intelligence.” In this introduction, I provide a roadmap for the topics and skills that are discussed in more detail and with practical takeaways in the following installments.

I. What Does it Mean to Do Work?

As a project manager in a blue-collar industry, I did a lot of planning and coordinating for people who used their hands to build, adjust, and operate machinery. Even though I had a good relationship with most of the people I worked with, there was always some tension between the idea of “doing the work” and doing what I did. I tried to be open about my work and shared as much as I could with my team: I’d let them know that I’d have to consult with the engineer, and he would have questions, so I’d need to collect information first; whatever the engineer decided, we’d have to inform QC, and then we’d have to run a test and review the results, and so on. I wanted everyone to understand that they were part of a team, and all these parts needed to come together to make the process produce good work. Even so, there was a difference between “doing work” and what I was doing. Almost everyone understood that I was doing an important job, but it wasn’t work, exactly.

For most of human history, we’ve built, typed, assembled, calculated, and processed. We used plows, lathes, wrenches, or laptops, and we have been conditioned to measure our value by how much output we can produce in a given unit of time. Some of us have that idea firmly ingrained in our identity.

Now that we have large-scale, agentic AI systems that can do work faster, across more domains, and sometimes even better than we can, we have to rethink our relationship to work. If a collection of AI agents can design marketing campaigns, write code, simulate markets, and even compose music while we’re brushing our teeth, what does that mean for us? Now, and for the foreseeable future, none of that is possible without human skill, expertise, and judgment.

The new bottleneck is no longer execution — it’s direction. We are crossing a cognitive boundary where the central human skill is shifting from doing the work to directing the intelligence that does the work.

When we get into the nuts and bolts of using AI tools, it becomes clear that AI doesn’t make humans obsolete. We can think of ourselves as meta-workers — people whose job is to orchestrate, interpret, and give shape to automated cognition. In the same way that the industrial revolution required us to move from muscle management to machine management, the AI revolution requires us to move from task execution to meaningful coordination.

II. From Tools to Teammates

AI breaks the pattern of technology as a tool of direct influence on work — i.e., you use it, and it produces an effect for you. You swing the hammer; you write the command. AI, by contrast, inserts an intermediary layer between you and the work. That layer can be disorienting for people who have spent years honing their skills and intuitions until, in a fraction of a second, they can spot something that “doesn’t look right.” It’s a marvel of human cognition, and Malcolm Gladwell wrote the bestselling book Blink about it.

Modern AI systems can plan, reason, and act. They can initiate actions based on goals and objectives, not just instructions. The shift is subtle but profound: when the system becomes agentic, management becomes relational. The challenge is that we now have to learn to be extremely clear and precise about things that most of us are not accustomed to defining clearly. Instead of having a direct conversation with the work we are performing, we have to anticipate, plan, and direct work based on intent. If the road to Hell is paved with good intentions, where does that leave us? It leaves us negotiating the details that turn intent into outcome.

In this new environment, the most valuable human skill is not technical precision but strategic clarity — the ability to define what good looks like and ensure that AI systems pursue it responsibly. And that still requires a lot of cognitive work from us: directing automated labor in efficient and productive ways to achieve those outcomes.

III. Automation’s Hidden Values and Costs

Every automation hides a decision about values. When we ask an AI to optimize a supply chain, we implicitly decide what counts as “optimal.” Is it speed, cost, carbon, durability, resilience, or fairness? Each metric is a proxy for a worldview of what is meaningful, what is worth growing, what is worth minimizing, and why we take on the work that we do.

Those assumptions are usually buried inside human judgment, sometimes labeled “common sense,” a loosely defined category that is easy to manipulate precisely because these ideas are messy, intuitive, and flexible. Now they can be codified into systems that scale instantly. The danger isn’t that AI will think for us; it’s that it will amplify what we haven’t thought through.
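
To make that concrete, here is a minimal, hypothetical sketch; the plans, numbers, and weights below are invented for illustration. The same three supply-chain options rank differently depending on whether the objective function prioritizes speed or carbon.

```python
# Hypothetical illustration: which plan is "optimal" depends entirely on the
# values we encode in the objective. All plans, numbers, and weights are invented.

plans = {
    "air_freight":  {"days": 2,  "cost": 9.0, "co2_kg": 50.0},
    "truck":        {"days": 5,  "cost": 4.0, "co2_kg": 20.0},
    "rail_and_sea": {"days": 12, "cost": 2.5, "co2_kg": 6.0},
}

def score(plan, weights):
    """Lower is better: a weighted sum of delay, cost, and carbon."""
    return (weights["speed"] * plan["days"]
            + weights["cost"] * plan["cost"]
            + weights["carbon"] * plan["co2_kg"])

def best_plan(weights):
    return min(plans, key=lambda name: score(plans[name], weights))

# Two different worldviews, codified as weights:
speed_first  = {"speed": 1.0, "cost": 0.2, "carbon": 0.01}
carbon_first = {"speed": 0.1, "cost": 0.2, "carbon": 1.0}

print(best_plan(speed_first))   # air_freight
print(best_plan(carbon_first))  # rail_and_sea
```

Nothing about the data changes between the two runs; only the weights do, and with them the answer. That weighting is a human judgment, whether or not anyone writes it down.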

That’s why the shift from doing to directing requires metacognition. This type of mental labor will become an important topic in AI as directing this digital labor force requires us to step back and think about how we’re thinking and which metrics we are prioritizing. Leaders must move beyond “Can the model do this?” to “Should it do this? For whom? Under what trade-offs?”

Product management professionals will recognize this evaluation and prioritization work, but this may be a new way of thinking about work for many people seeing their professional roles overwhelmed by AI capabilities.

IV. From Task Execution to Outcome Orchestration

Sharing work between human and machine actors forces us to rethink productivity. It pulls back the curtain to reveal that time on task was never a good measure of it. When work can be accomplished in minutes instead of days or weeks, the more relevant question becomes: Does the system achieve the intended outcome with integrity?

I have a certain bias because of my background in project and product management, but product thinking has a lot to offer as an effective model for human-AI collaboration.

A good product manager doesn’t micromanage developers; they articulate purpose, define value, and manage team alignment, i.e., intent. Effective product managers translate vague notions of value and success into measurable outcomes and empower intelligent engineers to leverage all of their skills without dictating every move. That’s precisely how humans must now learn to work with AI.

To direct intelligence well, we must learn to:

  1. Define objectives clearly — articulate the “why” before specifying the “what.”

  2. Map assumptions — surface the invisible conditions that drive AI recommendations.

  3. Design feedback loops — measure incremental milestones and feed the results back to the system to ensure it is learning the right lessons (see the sketch after this list).

  4. Balance autonomy with accountability — allow the AI to explore while managing the risks.
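
For readers who want to see the shape of this, here is a minimal, hypothetical sketch of such a loop; generate() is a stand-in for any AI system, and the objective and its check are invented for illustration. The human defines what good looks like, the system produces the work, and misses are fed back as explicit feedback until the objectives are met or the autonomy budget runs out.

```python
# Hypothetical sketch of a human-directed feedback loop.
# generate() stands in for a real AI system; the objective and check are invented.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Objective:
    name: str
    check: Callable[[str], bool]   # does the output meet this objective?
    rationale: str                 # the "why" behind the "what"

def generate(prompt: str, feedback: list[str]) -> str:
    # Placeholder for a model call; here it just echoes its inputs.
    return f"Draft for: {prompt} (incorporating {len(feedback)} notes)"

def direct(prompt: str, objectives: list[Objective], max_rounds: int = 3) -> str:
    feedback: list[str] = []
    draft = ""
    for _ in range(max_rounds):
        draft = generate(prompt, feedback)                    # autonomy: the system does the work
        misses = [o for o in objectives if not o.check(draft)]
        if not misses:                                        # accountability: we defined "done"
            return draft
        feedback = [f"Missed '{o.name}': {o.rationale}" for o in misses]  # the feedback loop
    return draft  # bounded autonomy: the loop stops even if objectives remain unmet

objectives = [
    Objective(
        name="stays on topic",
        check=lambda draft: "launch plan" in draft.lower(),
        rationale="The output must address the launch plan, not adjacent topics.",
    ),
]

print(direct("Q3 launch plan", objectives))
```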

These are all traits of successful small-a agile teams, and those teams learned them from Toyota, whose Toyota Production System also managed flow by setting clear objectives, testing assumptions, and iterating, with respect for individuals (through jidoka) and for the system as a whole.

V. The Human Shift: From Output to Insight

This shift changes what it means to be valuable. When information is abundant and automation is cheap, the scarcest resource becomes judgment. 85% of businesses that try to implement the Toyota Production System or a similar lean system revert to their old models within two years. It requires a different mode of thinking about the business, and I expect that AI adoption will also see a high reversion rate. It’s not enough to roll out some new technology; AI requires new thinking before it can deliver benefits. Otherwise, we’re just amplifying undisciplined thinking and scaling poor judgment.

Judgment is the synthesis of knowledge, context, and values into a coherent decision. It’s what enables us to interpret patterns rather than just see them. It is the act of making meaning out of collections of data and information.

AI can generate a thousand possible answers and we have to decide which one matters. That decision is not purely analytical; it’s interpretive. It connects data to meaning, metrics to mission, keeping in mind that ultimately a product or service must be meaningful and valuable to human customers.

In this sense, our human contribution moves up the cognitive stack:

  • From producing information → to curating for human relevance.

  • From analyzing data → to designing for human understanding.

  • From optimizing performance → to defining priorities for a human purpose.

This is the essence of directing intelligence — curating context and purpose within systems that can already compute outcomes.

VI. The Myth of Total Automation

It’s a very human anxiety to imagine that as AI improves, humans will simply fade from the loop. But systems without human direction collapse into circular logic — efficient but meaningless. Left to itself, AI can only generate output into the digital void. There must be a human consumer to evaluate the value and meaning of any AI output, and in a healthy system, that evaluation becomes human learning that is fed back to improve the next generation.

In this way, even the most autonomous models still depend on human input such as outcome definition, ethical framing, and trust calibration. Without those, the system becomes a mirror on a mirror, reflecting the noise of its environment.

This is why the future of work isn’t human versus AI, but human with AI. The point of intelligence is not to replace agency but to extend it. Humans provide the narrative thread that holds the system together — the ability to assign significance beyond performance metrics.

When an AI paints, we still decide whether it’s art.

When an AI predicts, we still decide whether it’s just.

When an AI optimizes, we still decide what “better” means.

Automation accelerates execution. Meaning is for humans to decide.

VII. Managing Complexity: Lessons from Systems Thinking

Directing intelligence isn’t about control; it’s about stewardship of complexity. This may be the most cognitively challenging work in the entire human skill stack, but it’s one we have to master if we want to thrive in the age of autonomous intelligence.

Complex systems like factories, markets, and neural networks all behave in nonlinear, unpredictable ways; they require leaders who think in loops, not lines. This is where systems thinking becomes the core skill of the AI era.

In a systems mindset, you don’t force outcomes; you design conditions that make good outcomes more likely. You manage feedback, incentives, and flows — the invisible architecture that shapes behavior.

Toyota learned this lesson decades ago: efficiency emerged not from micromanagement but from designing feedback-rich environments where problems surfaced naturally. The same applies to AI: design systems that make errors visible, insights traceable, and learning continuous.

Directing intelligence, then, is less about mastery and more about meta-design — creating environments where both humans and machines can adapt together.

VIII. New Measures of Value

If work is no longer measured by output, what replaces productivity as the dominant metric?

Three dimensions are emerging:

  1. Clarity of Purpose — Can you define success in a way that aligns people, data, and machines?

  2. Quality of Learning — How quickly and accurately does your system improve from feedback?

  3. Integrity of Outcome — Does the result serve human goals ethically and sustainably?

In the old economy, efficiency was king. In the new one, alignment is. The organizations that thrive will be those that measure understanding as rigorously as they measure throughput.

IX. The Emotional Shift

For workers and leaders alike, this transformation is not just cognitive — it’s emotional.

Many people derive meaning from doing. To step back and direct can feel like loss: of identity, control, or relevance. The challenge is to reframe direction as creation — to see orchestration as a higher form of craftsmanship.

It takes emotional intelligence to navigate that change:

  • Empathy for teams adapting to new roles.

  • Self-regulation when the pace of automation feels overwhelming.

  • Social awareness to manage collaboration between humans and systems.

Directing intelligence is not just an intellectual act; it’s a relational one.

X. The Adaptive Human

“…only try to realize the truth; there is no spoon. Then you’ll see that it is not the spoon that bends; it is only yourself.”

Spoon Boy, The Matrix

In the industrial era, we optimized machines. In the digital era, we optimized processes. In the age of AI, we must optimize ourselves — not for efficiency, but for adaptability.

That means cultivating the skills that technology can’t replace:

  • Complex problem-solving, rooted in clear problem definition.

  • Critical thinking, to evaluate and challenge algorithmic reasoning.

  • Curiosity, to explore alternative perspectives.

  • Meaning-making, to connect work to human purpose.

These are not “soft skills.” They are meta-skills — the ones that make every other capability adaptive in a world of accelerating change.

XI. Conclusion: From Mastery to Harmony

The shift from doing work to directing intelligence marks the third great migration of labor: from physical power, to cognitive processing, to co-creative orchestration.

We are not leaving work behind; we are redefining it.

  • Work becomes less about output and more about insight.

  • Leadership becomes less about control and more about coordination.

  • Success becomes less about precision and more about purpose alignment.

To thrive in this world, we must learn to think like systems architects, act like educators, and feel like artists — designing the conditions for intelligence to serve humanity rather than the reverse.

The future belongs to those who can hold two truths at once:

that machines will do more of the doing, and that humans will do more of the deciding about what matters.

This is the new frontier of human work — not working harder, but working wiser.

Not commanding machines, but directing intelligence.

Radical Candor

We are at the dawn of this radical transformation of humans that by its very nature is a truly complex and emergent innovation. Nobody on earth can predict what’s gonna happen. We’re on the event horizon of something… This is an uncontrolled experiment in which all of humanity is downstream.

Bret Weinstein, via Diary of a CEO Podcast

Thank You!
