Today's Agenda
Hello, fellow humans! On Tuesday, we started talking about where humans fit in the loop, and I outlined a roadmap for developing ourselves to be smarter and wiser with AI, automations, and agents. Today, I'm going deeper with Part 1 of the series, along with some actionable intelligence and insights.
Humans in the Loop, Part 1
Every automation hides a decision about values. When we ask an AI to “optimize” a supply chain, we’re not just asking it to run faster or cheaper—we’re choosing which definition of optimal we care about. Is it speed? Cost efficiency? Carbon footprint? Durability? Resilience during disruption? Fairness across vendors? Every one of those metrics encodes a worldview about what is meaningful, what we believe is worth growing, and what trade-offs we’re willing to accept along the way.
Most organizations treat these choices as technical settings—dropdowns in a dashboard. But they’re not technical at all. They’re moral, economic, and strategic decisions about what good looks like in a business. And here’s the part we’re not talking about loudly enough:
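To make that concrete, here is a minimal sketch in Python. The metric names, weights, and numbers are hypothetical, not drawn from any real system; the point is that choosing the weights is itself the value judgment.

```python
# Hypothetical example: the metrics, weights, and numbers below are
# illustrative only. Picking the weights IS the value decision, not a
# neutral technical setting.

def score_plan(plan: dict, weights: dict) -> float:
    """Higher is 'better' -- but better according to whose definition of optimal?"""
    return (
        -weights["cost"] * plan["cost_usd"]
        - weights["carbon"] * plan["carbon_tons"]
        + weights["resilience"] * plan["backup_coverage"]
        + weights["fairness"] * plan["vendor_diversity"]
    )

plans = {
    "plan_a": {"cost_usd": 100, "carbon_tons": 9.0, "backup_coverage": 0.2, "vendor_diversity": 0.3},
    "plan_b": {"cost_usd": 130, "carbon_tons": 4.0, "backup_coverage": 0.8, "vendor_diversity": 0.7},
}

worldviews = {
    "cost-first":       {"cost": 1.0, "carbon": 0.0, "resilience": 10.0, "fairness": 10.0},
    "resilience-first": {"cost": 0.1, "carbon": 2.0, "resilience": 30.0, "fairness": 20.0},
}

for name, weights in worldviews.items():
    best = max(plans, key=lambda p: score_plan(plans[p], weights))
    print(f"{name} worldview picks {best}")
# Same plans, same data -- the "optimal" answer flips when the weights change.
```

Same plans, same data; the answer flips the moment the weights change. That is the choice hiding behind the dropdown.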
AI automates our assumptions.
Before AI agents, many of these value judgments were buried inside human habits, intuition, and tacit knowledge—what we casually called “common sense.” And because they lived mostly in people’s heads, they were inconsistently applied, argued over in meetings, or softened by context and empathy.
But now?
AI gives us the power to codify those assumptions into systems that scale instantly, repeat consistently, and operate relentlessly.
And that’s both the breakthrough and the danger.
The danger isn’t that AI will think for us—it’s that AI will amplify what we haven’t thought through. If our model prioritizes cost above all else, it will happily squeeze quality and resilience until something breaks. If we reward speed, it will race past nuance, ethics, and long-term risk. If we overvalue efficiency, we’ll unintentionally starve exploration and innovation.
This is the hidden cost of automation: not the price of the software, but the price of the implicit worldview we accidentally encode inside it.
How can AI power your income?
Ready to transform artificial intelligence from a buzzword into your personal revenue generator?
HubSpot’s groundbreaking guide "200+ AI-Powered Income Ideas" is your gateway to financial innovation in the digital age.
Inside you'll discover:
A curated collection of 200+ profitable opportunities spanning content creation, e-commerce, gaming, and emerging digital markets—each vetted for real-world potential
Step-by-step implementation guides designed for beginners, making AI accessible regardless of your technical background
Cutting-edge strategies aligned with current market trends, ensuring your ventures stay ahead of the curve
Download your guide today and unlock a future where artificial intelligence powers your success. Your next income stream is waiting.
From Doing to Directing: Why Metacognition Is Now a Leadership Skill
AI is changing the nature of work faster than organizations can redesign their job descriptions. For decades, the economy rewarded people for their ability to produce: generate reports, analyze data, write proposals, draft documents, create plans, and manage day-to-day execution.
AI is swallowing those tasks, not all at once but piece by piece. What’s left for human talent is the layer above production:
directing intelligence, not doing the work itself.
And directing requires metacognition—the ability to think about how you think.
This is going to be one of the most important leadership skills of the next decade. Why? Because directing AI systems and AI-assisted workflows forces leaders to ask questions that traditional management never required:
What problem am I actually trying to solve?
Which metrics matter—and why?
What trade-offs am I accepting, consciously or unconsciously?
What worldview is embedded in this optimization?
Does this output align with our values, not just our KPIs?
In other words, instead of asking “Can the model do this?” we now have to ask:
“Should it do this? For whom? Under what constraints? And at what cost?”
This shift is especially jarring for professionals whose roles have historically revolved around production—people who have defined their value through output. For them, AI feels less like a tool and more like an existential threat.
But for product managers (and systems thinkers more broadly), this is familiar territory. Product professionals have spent years defining success metrics, evaluating trade-offs, modeling customer value, and coordinating cross-functional decisions. They know that every prioritization framework is really a value framework in disguise.
Now the rest of the workforce has to learn that same muscle.
AI Raises the Stakes on What We Already Do Poorly
Let’s be honest: as organizations, we’ve never been great at articulating our assumptions. We confuse inertia with strategy. We operate on tacit rules that nobody wrote down. We rely on mental shortcuts built from anecdotes. We make decisions because “that’s how we’ve always done it.”
In a human-driven workflow, these flaws create friction, misalignment, and inefficiency—but people can recover. A human can notice when something “feels off,” rethink an assumption on the fly, or adjust based on empathy and context.
AI cannot.
AI does not know when a cost reduction harms product quality.
It cannot sense when a customer’s frustration is about underlying trust, not the ticket they submitted.
It cannot intuit that a metric is outdated or narrow or missing something important.
It does exactly what it is told—and it does it at scale.
That’s why metacognition is no longer optional. Our ability to step back, think about how we’re thinking, and choose our metrics intentionally becomes the linchpin skill that separates responsible automation from reckless acceleration.
Actionable Intelligence for Leaders in the AI Era
Here are the practical steps leaders and operators can take to embed metacognition into their teams and workflows:
✅ 1. Make your hidden metrics explicit.
List out the implicit values behind your KPIs. Ask:
What worldview does this metric assume? What does it ignore?
What metrics are we not including? Why did we not prioritize those?
When we look at our KPIs as a coherent collection, what story do they tell about our organization? How is that story aligned or misaligned with our mission statement?
How do your agents' KPIs align with department- or organization-level KPIs? Are they working in coordination or against each other?
✅ 2. Reframe every automation request as a prioritization request.
Instead of “Automate this task,” ask:
What trade-offs will this automation impose on the business?
What do I gain by making this faster, and what do I risk by reducing human attention to this?
How do I measure the cost that comes with lower human attention?
✅ 3. Rewrite job roles around decision quality, not production output.
Shift expectations from “create” to “curate,” from “produce” to “prioritize.”
If you can 10x throughput, does doing that work actually add value for the customer or the organization?
If you can theoretically do everything, how do you decide what to do and what to leave undone so that you're only doing what matters?
If humans have access to all that information, how can they filter for high-quality, high-relevance information to improve decision-making?
✅ 4. Build “metacognition checkpoints” into workflows.
Every time an AI agent proposes an output, ask (a sketch of one such checkpoint follows this list):
What assumptions drove this?
What if those assumptions aren't true? What would that look like? Would it change my decision?
What metrics did it optimize for? Which metrics did it not optimize for?
What was left out?
If these metrics all look great, what does the customer actually feel? If they all fail, what is the customer impact?
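One lightweight way to operationalize this is a sketch only, with hypothetical question text and field names, not a prescribed framework: make the checkpoint a required, structured artifact rather than a hallway conversation.

```python
# A minimal "metacognition checkpoint" sketch. Before an agent's output is
# accepted, someone must record answers to the questions above; missing
# answers block the hand-off. All names here are hypothetical.

from dataclasses import dataclass, field

CHECKPOINT_QUESTIONS = [
    "What assumptions drove this output?",
    "What happens to the decision if those assumptions are wrong?",
    "Which metrics did it optimize for, and which did it not optimize for?",
    "What was left out?",
    "What does the customer feel if these metrics all succeed, or all fail?",
]

@dataclass
class CheckpointReview:
    reviewer: str
    answers: dict = field(default_factory=dict)  # question -> written answer

    def unanswered(self) -> list:
        return [q for q in CHECKPOINT_QUESTIONS if not self.answers.get(q, "").strip()]

def accept_agent_output(output: str, review: CheckpointReview) -> str:
    """Only release the agent's output once every checkpoint question has an answer."""
    missing = review.unanswered()
    if missing:
        raise ValueError(f"Checkpoint incomplete; unanswered: {missing}")
    return output  # only now does the output move downstream
```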
✅ 5. Train teams to validate AI, not just use it.
Verification skills—spotting mismatches, poor trade-offs, or missing context—are becoming the new baseline competency (a minimal validation sketch follows this list).
How do we prove the AI correct or incorrect?
What data sets do we have to validate or invalidate an AI result?
Where is data missing that we might need to validate AI outputs?
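As a sketch of what "proving the AI correct or incorrect" can look like in practice (the function names, threshold, and data shapes are assumptions for illustration): hold out a small human-labeled dataset and measure agreement before trusting the system at scale.

```python
# Hypothetical validation harness: compare an AI system's outputs against a
# small human-labeled holdout set before trusting it at scale. classify()
# stands in for whatever model or agent you are evaluating.

def validate(classify, labeled_examples, min_accuracy=0.9):
    """labeled_examples: list of (input_text, expected_label) pairs."""
    mismatches = []
    for text, expected in labeled_examples:
        predicted = classify(text)
        if predicted != expected:
            mismatches.append((text, expected, predicted))
    accuracy = 1 - len(mismatches) / len(labeled_examples)
    return accuracy >= min_accuracy, accuracy, mismatches

# Usage: passed, accuracy, errors = validate(agent.classify, holdout_set)
# If you can't assemble holdout_set, that missing data is itself a finding.
```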
✅ 6. Assign ownership of value decisions before automation begins.
Someone must decide which values matter most: speed, cost, quality, risk, fairness, or sustainability.
If it’s not worth a human’s time, is it worth the AI’s time?
Who is the customer for this automation? How will the output affect their experience?
Why is this work being done at all?
✅ 7. Document the worldview behind any AI system you deploy.
A one-page “value card” (sketched below) can prevent months of misalignment later.
Mission statements aren't just pretty words in a brochure; they are north stars that should capture your organization's mental model of the world and align everyone around that objective.
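For illustration, here is a minimal sketch of what such a value card could contain, captured as data so it can live in version control next to the system it describes. Every field name and value below is hypothetical.

```python
# A hypothetical one-page "value card" for an AI system. The exercise is
# filling these fields in deliberately, before the system ships.

VALUE_CARD = {
    "system": "invoice-triage-agent",           # hypothetical system name
    "owner": "ops-automation-team",             # who owns the value decisions
    "purpose": "Route supplier invoices for approval within one business day.",
    "optimizes_for": ["cycle time", "approval accuracy"],
    "deliberately_not_optimized": ["lowest headcount", "vendor churn"],
    "accepted_trade_offs": "Edge cases are handled more slowly because they go to humans.",
    "who_is_affected": ["suppliers", "AP clerks", "auditors"],
    "escalation": "Any invoice flagged as unusual goes to a human reviewer.",
    "review_cadence": "quarterly",
}
```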
✅ 8. Use AI to question your own thinking, not just accelerate it.
Prompt it to critique your assumptions, propose alternative metrics, and reveal blind spots.
Ask a chatbot questions like:
What if I’m wrong about X?
How can I test my idea?
If you were a customer, how could this be helpful or hurtful?
What are three options I should consider in this situation?
If X is true, give me three ways that I could validate that.
AI’s real power isn’t in doing work for us—it’s in revealing the assumptions we’ve been making all along.
And the real risk isn’t that it will replace our thinking—it’s that it will scale our unexamined thinking faster than ever before.
The shift from doing to directing isn’t just a technical change.
It’s a cognitive one.
And the leaders who master metacognition will be the ones who thrive in the age of intelligent systems.
Radical Candor
We're approaching the end of 2025, and it wasn't the year when agents replaced people. It was, however, the year when we learned what agents actually are: not cheap labor, but a new category of software that competes for labor budgets instead of IT budgets.


