Today's Agenda
Hello, fellow humans! Today we're looking at three studies that offer valuable insight into what makes AI implementations work, and what that means in practice. The bottom line is that we need to be thinking more about how the organization works and less about the technical challenges of MCP and RAG. Don't get me wrong: those are both still hurdles, but we're discovering that they're actually the easy part.
The Three Coordination Layers for Human-AI Collaboration
The blistering pace of AI development has created a bind for organizations trying to leverage it. On the one hand, AI platforms are remarkably capable compared to previous computer systems and, in the hands of skilled humans, can generate moderately successful work incredibly quickly; they promise enormous value (depending on how you define that word). On the other hand, AI is still unreliable: it takes skill, knowledgeable direction, and supervision to produce valuable work. In other words, getting value from AI is less a technological hurdle than a human and organizational one.
I've written about the risks of implementing AI at enterprise scale and about how AI is best thought of as a force multiplier rather than a cost reducer. Three major pieces of research dropped in the past week that, taken together, reveal something crucial about how work is actually transforming.
All three of these studies agree that success comes down to organizational design, not technical capability. Let’s take a look at what that means.
The Scale of What's Coming (And What Isn't)
McKinsey Global Institute released its comprehensive Agents, Robots, and Us analysis with a startling headline: 57% of US work hours could theoretically be automated. Before you panic or celebrate, here's the critical word: theoretically. The actual finding is more nuanced and more interesting: human skills won't disappear; McKinsey predicts they'll evolve into what it calls "skill partnerships" with AI.
They've developed something called the Skill Change Index to measure this. Digital and information-processing skills face the most exposure, but here's where it gets interesting: they project $2.9 trillion in economic value by 2030, and that value comes from collaboration and workflow redesign, not from elimination.
That means the money isn't in replacing people; it's in fundamentally rethinking how work gets done when humans and AI systems work together.
McKinsey identifies seven work archetypes ranging from people-centric (least AI-dependent) to agent-centric (most AI-dependent). Most roles sit somewhere in the middle of that spectrum. This isn't a binary shift—it's a gradient, and where your work falls on that gradient determines what you need to focus on.
The first throughline emerges here: We're measuring the wrong thing if we're just counting what AI can do. We need to measure how human skills change when AI enters the picture.
What Actually Works When You Implement AI
Product Talk's Teresa Torres published Lessons from Nine Real AI Product Teams. And in the world of AI, even if you're not on a product team, you're on a product team.
This is where Product Talk's research becomes invaluable. They analyzed nine real AI product teams and found patterns that contradict a lot of conventional wisdom.
First, small teams win. We're talking 2-3 people, cross-functional, domain-expert-led. Not the large, specialized AI teams you might expect. Why? Because domain expertise drives product decisions more than AI technical knowledge. The people who deeply understand the problem space make better choices about AI implementation than AI specialists who don't.
Second, they all started narrow. Every successful team began with a specific use case, built evaluation capabilities around it, and then expanded. The teams that tried to solve everything at once struggled or failed.
Third, their evaluation systems evolved dramatically—from simple spreadsheet assessments to sophisticated frameworks. But they built those frameworks through use, not upfront.
Practical Takeaways for Implementation
If you're starting an AI initiative tomorrow, here's your playbook:
Form a tiny team (2-3 people maximum) with deep domain expertise in the problem you're solving. Add AI technical knowledge as consultation, not leadership.
Pick the narrowest viable use case you can find. Ask: "What's one specific workflow where AI assistance would create immediate, measurable value?" Start there.
Build your evaluation framework iteratively. Start with a spreadsheet tracking three metrics that matter for your use case, and make it more sophisticated as you learn what actually matters (a minimal sketch of that starting point follows this list).
Resist the expansion urge until you've validated your first use case completely. The teams that succeeded went deep before going broad.
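To make that starting point concrete, here's a minimal sketch of a spreadsheet-first evaluation loop in Python. The three metrics here (a keyword-based correctness check, latency, and a reviewer rating) and the `run_assistant` hook are illustrative assumptions, not anything prescribed by the Product Talk research; swap in whatever your narrow use case actually needs.

```python
import csv
import os
import time
from datetime import date

def run_assistant(prompt: str) -> str:
    """Placeholder for whatever you're evaluating: an API call, an internal service, a prompt chain."""
    return "stub response for: " + prompt

# A handful of real cases pulled from the one workflow you're targeting.
test_cases = [
    {"id": "case-01", "prompt": "Summarize this support ticket...", "expected_keyword": "refund"},
    {"id": "case-02", "prompt": "Draft a reply to this late-payment notice...", "expected_keyword": "payment plan"},
]

rows = []
for case in test_cases:
    start = time.monotonic()
    output = run_assistant(case["prompt"])
    latency_s = round(time.monotonic() - start, 3)

    rows.append({
        "date": date.today().isoformat(),
        "case_id": case["id"],
        # Metric 1: a crude automatic correctness check, just to get started.
        "contains_expected": case["expected_keyword"].lower() in output.lower(),
        # Metric 2: how long the assistant took.
        "latency_s": latency_s,
        # Metric 3: filled in later by a domain expert reviewing the output.
        "reviewer_rating_1_to_5": "",
        "output": output,
    })

# Append to a CSV that doubles as the team's evaluation spreadsheet.
log_path = "eval_log.csv"
write_header = not os.path.exists(log_path) or os.path.getsize(log_path) == 0
with open(log_path, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    if write_header:
        writer.writeheader()
    writer.writerows(rows)
```

The point isn't the tooling; it's that the log exists from day one and can grow into a real evaluation framework as the team learns which metrics actually matter.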
The second throughline: Small, focused, domain-expert-led beats large, comprehensive, and technically sophisticated every single time.
The Missing Half: Why Google Solved the Easy Problem
The third study, from Organizational Physics, makes the case that Google's multi-agent coordination breakthrough solved only half the problem.
The breakthrough is technically impressive: Google has worked out how to make AI agents work together efficiently. But the analysis argues that coordination is only half the problem, and it might even be the easier half.
The harder problem is organizational alignment.
It’s a provocative argument: you need to treat AI agents like human team members with defined roles, clear responsibilities, reporting structures, and accountability frameworks. It means decision rights and escalation paths. It means performance evaluation.
All of that is important because organizational intelligence—not just machine intelligence—determines whether AI implementation succeeds or fails. You can have the most sophisticated multi-agent system in the world, but if it doesn't align with how your organization actually makes decisions, communicates, and executes, it won't deliver value.
The kicker: the principles for aligning humans with humans are identical to the principles for aligning humans with AI. If your organization struggles with clarity around roles, responsibilities, and decision-making among people, adding AI agents will amplify those problems, not solve them.
The Three-Layer Framework
When you synthesize these three research pieces, a framework emerges. Think of it as three layers you need to get right simultaneously:
The Strategic Layer comes from McKinsey's work. You need to map your organization's work exposure using something like their Skill Change Index. Identify which archetypes apply to different roles in your organization. Calculate potential value through workflow redesign, not elimination. Set realistic timeframes—McKinsey is projecting to 2030 for a reason.
The Tactical Layer comes from Product Talk's findings. Form small, domain-expert-led teams. Start narrow with high-value use cases. Build evaluation capabilities early and iteratively. Expand only after you've validated your approach.
The Organizational Layer comes from Organizational Physics' insights. Design org structures that can accommodate AI agents as team members. Establish clear roles, responsibilities, and reporting lines that include both humans and AI. Apply your existing organizational alignment principles—if they work for humans, they'll work for AI integration. Invest in organizational intelligence, because that's what determines success.
Practical Takeaways for Leaders
If you're leading an organization through AI transformation:
Audit your organizational alignment first. Before you invest in AI capabilities, look at how well humans align in your organization. Fix those problems first—they'll only get worse when you add AI.
Create org chart positions for AI agents. Seriously. Give them reporting lines, defined responsibilities, and clear decision rights. This forces you to think through how they'll actually integrate into workflows; there's a sketch of one way to write that down after this list.
Measure organizational readiness, not just technical readiness. Can your organization absorb new ways of working? Do you have the change management capability to support workflow redesign?
Invest in the middle layer. Strategic vision is easy. Technical capability is increasingly commoditized. The tactical implementation layer—small teams, narrow use cases, iterative learning—is where most organizations stumble.
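For what an "org chart position for an AI agent" could look like once it's written down, here's a hedged sketch in Python. The role fields and the triage-agent example are invented for illustration; none of this comes from the Organizational Physics analysis itself.

```python
from dataclasses import dataclass

@dataclass
class AgentRole:
    """One 'org chart position' for an AI agent, written down like any other role."""
    name: str
    responsibilities: list[str]      # what the agent owns end to end
    reports_to: str                  # the human accountable for its output
    decision_rights: list[str]       # decisions it may make without sign-off
    escalation_triggers: list[str]   # conditions that must go to a human
    review_cadence: str              # how often its performance gets evaluated

# A hypothetical example: a support-triage agent embedded in a human team.
triage_agent = AgentRole(
    name="support-triage-agent",
    responsibilities=["classify inbound tickets", "draft first-response replies"],
    reports_to="Support Team Lead",
    decision_rights=["assign P3/P4 priority", "route tickets to the right queue"],
    escalation_triggers=["refund requests", "legal or security keywords", "low model confidence"],
    review_cadence="weekly sample review by the team lead",
)

print(f"{triage_agent.name} reports to {triage_agent.reports_to}")
```

Whether you capture this in code, a wiki page, or the actual org chart matters less than forcing the questions it encodes: who does this agent report to, what can it decide on its own, and when must it escalate to a human?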
What This Means for You
The uncomfortable truth emerging from all three of these analyses: most organizations are over-indexed on technical capability and under-indexed on organizational design.
We're buying AI tools, hiring AI specialists, and building AI strategies. But we're not redesigning our organizations to work effectively with AI. We're not starting small and learning fast. We're not treating AI agents as team members that need proper integration into our existing structures.
The $2.9 trillion opportunity McKinsey identifies won't come from the AI itself. It'll come from organizations that figure out the other two layers—the tactical implementation and the organizational integration.
The final throughline: The technical problem is increasingly solved. The organizational problem remains wide open. And that's actually good news, because organizational design is something you can influence directly, starting tomorrow.
The question isn't whether AI will transform work. It's whether your organization will transform how it works to capture that value. The teams getting it right aren't the ones with the most sophisticated AI. They're the ones with the clearest organizational design, the smallest and most focused teams, and the deepest domain expertise driving decisions.
That's the partnership framework that actually works.
Radical Candor
But now the scale is so big. Is the belief really, “Oh, it’s so big, but if you had 100x more, everything would be so different?” It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true. So it’s back to the age of research again, just with big computers.


