Today's Agenda

Hello fellow humans! How are we thinking? How are we thinking about thinking? To properly wrangle AI processes, we have to think deeply about our objectives, our cognitive processes, and how to train both professionals and students to acquire and sustain those skills for the future.

Dig in, and dig deep. Today, I also have a feature piece with practical tips on leveraging AI as a force multiplier at small firms.

Enjoy!

News

Agentic AI and Systems Thinking

The emergence of agentic AI systems that can plan, reason, and act autonomously represents a major shift in AI capabilities. Recent coverage consistently discusses how these systems require new organizational structures, governance frameworks, and leadership approaches. The focus is on moving from task-based automation to outcome-oriented AI systems that can coordinate across departments and functions.

Writing for Harvard Business Review, the authors share an example agentic workflow:

The intent classifier agent sends a simple policy question like “What are allowed expenses for traveling overseas?” or “Does this holiday count in paid time off?” to a file search and respond agent, which provides immediate answers by examining the right knowledge base given the employee’s position and organization. A document generation agent can create employee verification letters (which verify individuals’ employment status) in seconds, with an option for human approval. When an employee files a request for vacation, the leave management agent uses the appropriate HR management system based on its understanding of the user’s identity, completes the necessary forms, waits for the approval of the employee’s manager, and reports back to the employee.

We have to break down our own cognitive processes to understand them at a granular level so that we can instruct agents on what they need to do, and even create a separate agent for each cognitive step in the process. That requires an unusually high level of meta-cognition.
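To make that concrete, here is a minimal sketch of the kind of intent-routing layer the HBR workflow describes, with one function standing in for each agent. The agent functions and the classify_intent helper are hypothetical stand-ins for LLM calls and HR-system integrations, not HBR's implementation.

# Hypothetical sketch of the HR intent-routing workflow described above.
# Each "agent" is a plain function; a real system would wrap an LLM call,
# a knowledge-base search, or an HR-system integration behind each one.

def file_search_agent(question, employee):
    # Answer policy questions from the knowledge base scoped to the employee's org.
    return f"Policy answer for {employee['org']}: (retrieved text for '{question}')"

def document_generation_agent(employee):
    # Draft an employment verification letter and queue it for human approval.
    return f"Draft verification letter for {employee['name']} (pending approval)"

def leave_management_agent(request, employee):
    # File the vacation request and wait on the manager's approval.
    return f"Vacation request for {employee['name']} submitted; awaiting manager approval"

def classify_intent(message):
    # Stand-in for the intent classifier agent (an LLM call in a real system).
    text = message.lower()
    if "letter" in text or "verification" in text:
        return "verification_letter"
    if "vacation" in text:
        return "leave_request"
    return "policy_question"

def route_request(message, employee):
    intent = classify_intent(message)
    if intent == "policy_question":
        return file_search_agent(message, employee)
    if intent == "verification_letter":
        return document_generation_agent(employee)
    return leave_management_agent({"dates": "TBD"}, employee)

print(route_request("What are allowed expenses for traveling overseas?",
                    {"name": "Sam", "org": "Sales"}))

Even in this toy version, notice how much of the work is deciding how to decompose the process into steps an agent can own, which is exactly the meta-cognitive exercise described above.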

The article also highlights the fragility of current rule-based automation, where “every exception has to be hard-coded and the automation breaks when systems or processes change. Agentic AI, in contrast, is designed to execute decisions autonomously to achieve overarching objectives or outcomes.”

Agentic AI marks a turning point in the evolution of artificial intelligence — shifting from passive tools that execute predefined tasks to autonomous systems that plan, reason, and act toward goals. Harvard Business Review emphasizes that a successful deployment has to be understood as a complete system of many tools, one that also addresses human intent, transparency, and accountability. Similarly, the European Institute for Management Technology (EIMT) analysis of 2025 AI trends highlights that generative systems are becoming embedded across business functions, demanding cross-functional coordination and ethical governance to ensure they amplify human creativity rather than replace it.

But AI’s adaptability is a double-edged sword: some adaptations may be exactly what we need, while other adaptations might take us off-course. Human-in-the-loop ecosystems can protect against unwelcome actions, and designing those human-computer interaction systems will be the next challenge for organizations using AI to scale up their work.
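One lightweight pattern for that kind of guardrail is an explicit approval gate: the agent can propose an action, but nothing executes until a human signs off. The sketch below is illustrative only; the ProposedAction type and the propose/approve/execute functions are hypothetical, not tied to any particular agent framework.

# Illustrative human-in-the-loop gate: proposed actions are inert until a named
# reviewer approves them. All names here are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedAction:
    description: str
    payload: dict
    approved: bool = False
    reviewer: Optional[str] = None

def propose(description, payload):
    # The agent records what it wants to do instead of doing it immediately.
    return ProposedAction(description, payload)

def approve(action, reviewer):
    # A human reviewer explicitly signs off on the proposed action.
    action.approved = True
    action.reviewer = reviewer
    return action

def execute(action):
    # Execution is refused unless a reviewer has approved the action.
    if not action.approved:
        raise PermissionError("Blocked: action requires human approval")
    return f"Executed '{action.description}' (approved by {action.reviewer})"

letter = propose("Send employment verification letter", {"employee_id": 1234})
print(execute(approve(letter, reviewer="hr.manager@example.com")))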

Organizations that adapt structurally and cognitively to AI will be best positioned for opportunities over the next 5-10 years that we can’t even imagine yet. There are technical challenges to be sure, but the bigger challenge is human. Leaders need to develop meta-skills like systems thinking, discernment, and cognitive flexibility to steer these autonomous systems toward shared outcomes, not fragmented automations. Adaptation, not adoption, defines the next leap.

Become the go-to AI expert in 30 days

AI keeps coming up at work, but you still don't get it?

That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.

Here's what you get:

  • Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.

  • Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.

  • New AI tools tested and reviewed - We try everything to deliver tools that drive real results.

  • All in just 3 minutes a day

Will AI Change College Campuses and Career Readiness?

Educational institutions built around memorization and content delivery are struggling to find a strategy for handling AI tools. Educators are caught between the Scylla of failing to prepare students for their future work environments and the Charybdis of students delivering AI slop. Considering how to navigate this future, Jamillah Moore, Ed.D., writing for Psychology Today, argues that higher education must rethink learning, advising, and career preparation. Students are increasingly asking how AI tools will shape their future work, and Moore highlights that while employers still value technical skills, the differentiators in an AI-rich future are human capabilities: emotional intelligence, collaboration, resilience, and ethical reasoning. On campuses, she sees cultural and structural shifts: career services must evolve into coaching for lifelong learning; AI literacy must be integrated across disciplines; and higher education must build flexible credentials, employer partnerships, and psychological readiness for disruption. The article also calls attention to the mental-health dimension: students sense instability, must adapt to rapid change, and need supportive frameworks to reclaim agency in their careers.

Everyone in knowledge work, students and professionals alike, needs to adapt by prioritizing soft skills and systems thinking over raw technical skill. As AI erodes the differentiator of “knowing the answer”, the value moves to asking better questions, collaborating across domains, adapting in real time, and choosing ethically how we use tools. This newsletter’s focus on critical thinking, emotional intelligence, and systems literacy finds a clear ally in this article. For educators, learners, and professionals, it signals that building resilience and generative human capability is more urgent than ever. In short: the responsibility is shifting from “learn this content” to “develop this adaptability and discernment”.

Punching Above Your Weight: Five AI Force-Multipliers for Small Firms

AI isn’t just for the giants. Complex workflows and intricate regulatory guidelines present a great opportunity for AI implementations, but they carry up-front costs that conservative enterprises may not want to take on. Amazon, a famously manual-process organization that has not adopted AI in its processes, is a high-profile case study.

But a small firm with a narrow customer base and straightforward processes might be able to implement automated AI processes over a long weekend.

The opportunity isn’t in replacing people — it’s in extending them. The real leverage comes when AI amplifies what a small, focused team already does well.

Radical Candor

AI is a force multiplier: it scales what's already there, good or bad. Before you scale AI, ask: Are our people, data, and incentives aligned? Will AI augment excellence, or automate failure?

Armand Ruiz, VP of Platforms at IBM

Thank You!
