👏🏻 Welcome to Adaptive Agenda!

We’re excited to have you here as we begin this journey together. Our mission is to build a functional and meaningful understanding of AI tools and, more importantly, to cultivate the human cognitive skills that make us more effective at leveraging those tools and sharper critical thinkers as fact becomes harder to distinguish from fiction.

We will be sharing news about AI, insights on what it means to be an effective user of AI tools, prompting tips and tutorials, and training materials for the human cognitive skills that AI cannot replicate.

We’re going to share training on how to set clear objectives and goals, how to challenge and test a GPT, how to ask great questions, and so much more.

Whether your discipline is design, engineering, product, finance, education, operations, or any field that requires rigorous critical thinking, collaboration, and problem-solving, you will gain the skills to leverage AI as a powerful work assistant. With these skills, you’ll be able to do better, stronger, faster, and more meaningful work.

📋 Today's Agenda

🚨 News

Microsoft on Emerging Frontier Firms

Microsoft has identified an emerging paradigm shift as organizations begin adopting AI agents into their operations. Its 2025 Work Trend Index Report offers the clearest picture yet of how leading organizations are transforming with the advent of AI agents, categorizing them under a new organizational model: the Frontier Firm.

Based on data from 31,000 workers, including 844 at Frontier Firms, the report identifies five defining traits of Frontier Firms: organization-wide AI deployment, advanced AI maturity, agents already in use, plans to expand agent use, and a belief that agents will drive AI ROI.

Frontier Firms Stand Out, reporting higher capacity, greater optimism, and lower concern about AI replacing jobs. To meet rising demands, businesses are shifting from traditional org charts to Work Charts—structures built around tasks, powered by AI agents, and overseen by “Agent Bosses.” The report suggests that approaching work this way produces genuine agility throughout the organization, not just the buzzword.

The Model demands flexibility. The report suggests that successful transformations require rethinking the organization chart, with its reporting hierarchy, as a work chart structured around jobs to be done. It compares this to a movie production model, where specialized teams are brought in for specific projects and disbanded or reassigned as demand shifts. The approach closely resembles cross-functional product teams in the Silicon Valley mold, as well as the Haier microenterprise model, where every job function can operate as a business unit that rapidly scales up or down based on internal and external demands.

The Roadmap is being written right now by leaders already investing in AI skills, digital labor, and morale to enable team agility and resilience during market shifts. The transformation follows three phases:

Phase 1: Humans with AI assistants

Phase 2: Human-agent teams

Phase 3: Human-led, agent-operated workflows

The Mindset described in the report signals a paradigm shift: AI agents are redefining work, demanding new strategies and roles. Organizations that embrace this evolution—restructuring, reskilling, and rethinking workflows—are poised to thrive in the AI-powered future.

Anthropic’s Social Orientation

Anthropic published a short video with a surprisingly frank discussion of the societal impacts of AI. Where OpenAI centers the hype of AI, Anthropic centers the human experience. AI researchers have wrangled with these ethical and societal questions in depth for some time, but ordinary users may be confronting them for the first time. People are asking Claude and ChatGPT for relationship advice and for forecasts about weather, economics, politics, and life plans without necessarily understanding that these models are not really empathetic or thinking about the response.

It’s important to understand that the responses from AI chats are nothing more than prediction math and probability. They do not provide verified information, and it is refreshing to see a model provider be so thoughtfully honest about what that means for users, work, privacy, and society. I strongly encourage everyone to take the eight minutes to watch the video.

AI’s Job Market Impact

Young College Graduates are facing a historically weak job market, and AI is one of three likely factors, according to Derek Thompson (Hit Makers, Abundance). First, the job market for college grads never fully recovered from the Great Recession. Second, the college wage premium may have maxed out. And third, work that once needed 20 college grads can now be done by five who leverage AI.

Each Factor’s Contribution to the downward trend is hard to quantify. On average, a college degree still delivers a good return on the investment; the shrinking premium mostly means that non-college-educated workers are doing a little better, which is certainly a positive overall.

The Upshot is that each successive generation of graduates needs a deeper skill stack. What once required only a business degree later demanded MS Office literacy, then Internet skills, then social media skills, then specific SaaS platforms, and now AI and agent supervision. Work that previously had dedicated roles, such as regulatory compliance, now gets folded in as additional workload.

A Full-Stack Engineer in software can work in every part of the product — backend, databases, front end, middleware, and so on. The full-stack engineer used to be a unicorn who commanded top-tier salaries but has become a required role on every software team. Now every role is becoming a full-stack role, not just in software engineering. A writer can’t just write; they have to research, fact-check, proofread, edit, curate, design, publish, and promote.

👷🏻 Skill Stack

You can get some interesting responses with simple, short prompts, but you can start to unlock greater potential when you have a conversation with the AI as if it were a colleague.

Have a Conversation With Your New Colleague

This is especially easy with ChatGPT’s voice mode. We all hit that wall where we need to phone a friend. Thankfully, OpenAI has rolled back the recent fawning, overly sycophantic release and offered some explanation in a postmortem. That’s not to say you shouldn’t regard your GPT conversations with healthy skepticism, but asking questions and even interrogating the GPT’s responses can be a powerful strategy to help you think through your situation or idea.

Use Gemini, ChatGPT, or Claude as your sounding board; don’t ask the GPT to think for you, but let it help you think through your planning.

  • What are the risks if I do […]?

  • What are some other ways that I might achieve my objective?

  • How can I protect myself against those risks?

  • If I want to achieve […], what resources would I need? What would a reasonable timeline look like?

If you have a lot of conversations with the GPT, you can even ask:

  • Is […] a good idea for me? What are the risks for someone with my professional background?

Never pass up an opportunity to improve your prompting. That’s why I always finish my prompts with:

Please ask up to three clarifying questions to refine your response to this prompt.

After I answer those questions, please suggest a refined prompt.
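
If you’d rather script this pattern than retype it in the chat window, here is a minimal sketch assuming the official openai Python SDK. The model name, question, and sample answers are placeholders for illustration, not recommendations:

```python
# A minimal sketch of the conversational pattern above, assuming the
# official `openai` Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. The model name and the
# example content are placeholders.
from openai import OpenAI

client = OpenAI()

CLARIFY_SUFFIX = (
    "\n\nPlease ask up to three clarifying questions to refine your "
    "response to this prompt. After I answer those questions, please "
    "suggest a refined prompt."
)

# The conversation is just a growing list of messages; every request
# sends the full history, which is what makes the back-and-forth with
# your "new colleague" possible.
messages = [{
    "role": "user",
    "content": "What are the risks if I pivot my career into data "
               "engineering?" + CLARIFY_SUFFIX,
}]

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = reply.choices[0].message.content
print(answer)  # the model's clarifying questions

# Answer those questions in the next turn, keeping the history intact.
messages.append({"role": "assistant", "content": answer})
messages.append({
    "role": "user",
    "content": "1) Ten years in finance. 2) Comfortable with SQL. "
               "3) My timeline is about a year.",
})

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)  # refined answer and suggested prompt
```

The point of the loop is the same as in the chat window: every follow-up question you ask, and every clarifying question you answer, sharpens the context the model is working with.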

Let us know how it goes!

🦄 The Human Value Proposition Part 1: Where We Stand

It’s easy to assume we know who we are and the scope of our capabilities. I’d like to challenge that complacency. This newsletter will keep you up-to-date on major AI news, but more importantly, it will spotlight the human skills we must adapt and strengthen to thrive in the AI era.

Ironically, the more I learn about generative AI’s capabilities, the more convinced I am that we undersell our human abilities. When I compare AI’s capabilities to what we do as humans, I see how many extraordinary skills we perform so intuitively and unconsciously that we take them for granted.

How often have you driven home on muscle memory alone, thinking about everything except driving? And yet truly self-driving cars still can’t match the consistency or skill of most human drivers. We can intuitively know what someone else needs without asking, collaborate with other people, and identify a reference to The Simpsons or Minecraft and place it in a social and historical context. We can be curious about who’s using all the glitter, how eels spawn, or whether atoms are conscious. We can decide what is most important to us at this moment. AI cannot do any of that.

Humans have been the top species on this planet for a while, and what sets us apart is our ability to invent and use tools. Now, ironically, we feel threatened by one of our own tools. This isn’t a new fear. Prometheus’s gift of fire to humans challenged the gods, and Mary Shelley’s Frankenstein drew on that myth to show how our creations can challenge our primacy and cross the uncanny valley into a feared “Other.” But it was fear that led people to see a threat in The Creature that wasn’t there. AI is even less capable than Shelley’s Creature, yet it is already reshaping how we write, design, code, and learn. We have an opportunity to leverage AI to improve our cognitive skills rather than let fear drive our response.

From healthcare to marketing, organizations are adopting AI to automate tasks, improve processes, and boost productivity. We can use ChatGPT to generate content in seconds, predict consumer demand, or identify diseases with near-expert accuracy. This is both awe-inspiring and unsettling.

But these tools are just visible artifacts of the incredible human cognition that this newsletter aims to highlight and strengthen. These deeply human skills can make us not only better AI users but also better humans.

What People Bring to the Party

Generative AI has entered spaces we once considered uniquely human: language, creativity, judgment. But these capabilities are grounded in human work. We ran the experiments. We built the systems. We created the tools.

When AI starts performing skills that took us years of learning and thousands of hours to master, it can feel like a threat. But knowledge alone isn't our superpower—we can’t match AI on speed, scale, or memory. Instead, we must emphasize what AI can’t replicate. What makes the human value proposition irreplaceable in an AI-driven world?

  • Contextual Understanding: We interpret information through culture, history, and lived experience. We create meaning, not just outputs.

  • Emotional Intelligence: We read nuance, build trust, and lead with empathy—skills that drive human relationships and effective leadership.

  • Creative Synthesis: We combine ideas across disciplines—economics with art, philosophy with policy. AI imitates; we imagine.

  • Meta-Cognition: We reflect on what we know and recognize when we don’t. While AI can hallucinate, we can catch ourselves and course-correct.

  • Curiosity: Our desire to explore, question, and discover drives innovation—something AI lacks.

  • Goal Setting: AI can’t define meaningful objectives. Humans identify problems and set the direction.

  • Discernment: With practice, we can learn to evaluate information critically and accurately.

  • Critical Thinking: We analyze, question, research, and synthesize data to form judgments and hypotheses.

  • Complex Problem-Solving: In layered systems, only humans can break the whole down into meaningful elements, identify the factors and forces of greatest impact, and navigate a series of interconnected problems toward an objective that serves human needs.

  • Collaboration and Consensus-Building: We can take in a range of perspectives, synthesize solutions, test those ideas with our colleagues, accept feedback, and (hopefully!) work toward a consensus that addresses the problem strategically.

We hear people dismiss these as “soft skills” secondary to “hard skills” like engineering, finance, and medicine, but AI and search are starting to reverse those roles. Nearly frictionless access to information means that knowledge for its own sake isn’t the source of value it once was. What is becoming increasingly valuable is the ability to leverage information: to assess it critically, filter out the noise, filter in the right information for your audience, and use it to align teams of humans who are using AI to scale their own efforts. If we can successfully leverage these skills, we become more valuable and capable at work, and more resilient and confident navigating the information and misinformation online.

Teamwork, insight, and creative problem-solving are strategic assets that will help us be more competitive and effective as work becomes more automated and humans risk becoming disconnected. These skills can be part of the connective tissue that helps us grow into connected communities and resist being compartmentalized.

Where AI Falls Short

AI stands on the shoulders of giants—but it doesn’t know anything. It doesn’t understand or think. It doesn't learn—it processes and predicts. Google doesn’t know; it surfaces work done by others. ChatGPT doesn’t know; it reassembles language from data created by people.

Language models like ChatGPT are nothing more than probability machines. They break language into tokens, map the tokens to numbers, and do arithmetic on those numbers to pick the most likely next word or phrase. When you ask a question, the model predicts the most likely next token in the sequence, without understanding what it means.
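
To make the mechanics concrete, here is a toy sketch of that final prediction step, with an invented four-token vocabulary and made-up scores. It illustrates the principle only; it is nothing like a real model in scale:

```python
# Toy illustration of next-token prediction (invented numbers, not a
# real model). A language model ends each step with a raw score
# (a "logit") for every token in its vocabulary, converts those scores
# to probabilities, and picks or samples the next token.
import math

# Hypothetical vocabulary and scores after the prompt "The sky is".
vocab = ["blue", "falling", "green", "the"]
logits = [4.1, 1.2, 0.3, -2.0]

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
for token, p in zip(vocab, probs):
    print(f"{token!r}: {p:.3f}")  # 'blue' gets most of the probability mass

# Greedy decoding: take the single most likely token. Real chatbots
# usually sample instead (temperature, top-p), but either way the step
# is prediction, not understanding.
best = max(range(len(vocab)), key=lambda i: probs[i])
print("next token:", vocab[best])
```

A real model does this over a vocabulary of tens of thousands of tokens, once for every token it generates. The scale is staggering, but the operation is still just arithmetic.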

This framework reveals AI’s core limitations:

  • Dependent on Existing Information: AI can’t create new knowledge; it can only repeat and recombine existing content.

  • No Self-Awareness: It can’t recognize what it doesn’t know. If it lacks data, it still attempts an answer—this is hallucination.

  • Bias Replication: AI reflects the biases in its training data unless rigorously corrected.

  • No Lived Experience: AI doesn’t feel fatigue, hunger, grief, or joy. It can repeat and describe what people have already written about those experiences, but it does not know them.

These weaknesses are compounded by deeper issues:

  • No Accountability: AI outputs can cause harm, but AI bears no responsibility—humans do.

  • No Moral Compass: AI can simulate fairness but lacks ethics or awareness of impact.

  • Extractive, Not Additive: Generative AI derives its content from work extracted from human authors; it cannot add anything original. AI data centers draw water and power from their communities, and the current business model does not always return the value promised.

That doesn’t mean we shouldn’t use AI; there is a lot to be gained by using AI thoughtfully and judiciously. But we need to be clear-eyed about the human and economic costs associated with AI, and manage our own capabilities to meet this moment. The more we rely on AI, the more we need people who can manage, question, and guide it with wisdom and care.

What does it look like to use AI in a learning and growth feedback loop that helps us become smarter and more thoughtful? In the next issue, Part 2 will discuss the human jobs to be done that serve other people, the services we can perform more effectively when we leverage AI capabilities, and how to develop a strategy for defining your relationship to AI.

🦡 Radical Candor

AI Isn’t So Clever

“People who imagine that an LLM is something bigger, that an LLM can imagine something new, can be creative, they cannot imagine just how large the training dataset that was used to train it is. People often test LLMs by… they know something. ‘Oh, OK, I will ask it something and we will see if it knows it, too.’ And yes! It knows it, too! But it’s just because your knowledge is so unoriginal, that of course it knows it, because it’s in some documents written by others.”

Andriy Burkov via One Knight in Product Podcast

📖 Book Recommendation: Co-Intelligence

What will work look like when we start seeing AI as not just a tool but a colleague? That’s where we’re heading, and we all need to decide how to navigate that workspace.

If you are still finding your way with AI (most of us are), Co-Intelligence by Ethan Mollick is a great starting point. It was published in early 2024, so it may not be the most current, but many of its ideas and directions still hold up surprisingly well. It is an accessible read for non-technical readers and for people just starting to experiment with AI models.

Thank You!

We’re glad you joined us for this first issue of Adaptive Agenda. New content will arrive in your email inbox every Tuesday and Thursday.
