Today's Agenda
Frameworks Are Meant to Organize Humans
Yesterday’s segment on prioritization highlighted some important frameworks for thinking about strategy. In my mind, it connected to the latest Lenny’s Podcast episode, in which Lenny interviewed Nicole Forsgren about how to measure productivity and, by extension, how to measure success.
As much as we talk about productivity and value, they are notoriously difficult to measure, because they mean different things to different parts of the organization. The CFO thinks of value very differently than the Marketing Director, or the CPO, or the CEO. That’s the nature of complex organizations.
The OKR framework goes some way toward unifying all those different views of value by proposing one Objective to rule them all, with each part of the organization delivering Key Results that serve that Objective.
OKRs, like the Toyota Production System (TPS) before them, Deming’s 14 Points and PDCA, Six Sigma, and all the rest, require human discipline and alignment to work. In the late 1980s, TPS was the hot framework that everyone tried to model, and 85% of firms that tried it failed. There are as many reasons for those failures as there are firms that tried TPS, but a lot of it comes down to leadership and alignment discipline: everyone having the same understanding of core terminology, unifying the entire organization around strategic goals, everyone having a clear understanding of how their work contributes to those goals, leadership understanding how each business group delivers value for the organization, and so on.
At two separate organizations, I saw people propose OKRs as a tool, and their errors were mirror images of each other. One began as a directive from the CEO that Product Managers should “do OKRs,” without any further tie-in with other leadership or business groups. The thinking was that because PMs talk about OKRs, PMs should be solely responsible for them. At the other organization, a PM proposed OKRs without any discussion with leadership about how they would fit with how the rest of the organization measures or talks about strategy, goals, or work.
When we consider the potential for increasing the volume of work produced (not to be confused with productivity) that comes with AI and agents, misaligned organizations that lack a shared vocabulary are at high risk of going completely off the rails.
AI Scales Everything, Including Problems
When building cars, quality control looks at part fit-up and measures the “stack height” of a group of parts where they come together. You can have four parts come together, each within its tolerance (±0.3 mm), but if they are all off in the same direction, those four good parts produce an assembly that is 1.2 mm off nominal and out of spec. Stack other parts on top of that, and suddenly you’ve got a door that doesn’t fit.
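To make the arithmetic concrete, here is a minimal sketch of that worst-case stack-up. The per-part tolerance and part count are the numbers from the example above; the spec limit for the assembled stack is a hypothetical value added purely for illustration.

    # Worst-case tolerance stack-up: every part drifts to the same edge of its band.
    part_tolerance_mm = 0.3   # each part is allowed +/- 0.3 mm (from the example)
    parts_in_stack = 4        # four parts meet at the joint (from the example)
    stack_spec_mm = 1.0       # hypothetical limit for the assembled stack

    worst_case_mm = parts_in_stack * part_tolerance_mm   # 4 * 0.3 = 1.2 mm
    print(f"Worst-case stack deviation: {worst_case_mm:.1f} mm")
    print(f"Within stack spec? {worst_case_mm <= stack_spec_mm}")
    # -> 1.2 mm and False: four individually "good" parts still yield an out-of-spec stack.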
AI amplifies this stack-height problem for software. AI-generated code is notoriously verbose, so if five engineers use AI to produce 2x or 3x their normal volume of code while prioritizing different functions, performance targets, objectives, or architectures, the technical debt grows 2x or 3x, too. Maybe some engineers decide that because they’re using AI, they no longer need to comment their code, but AI needs those comments to contextualize new code generation, so now they’ve undermined the AI’s effectiveness. Nicole Forsgren talks about this in her interview on Lenny’s Podcast and will cover these aspects of Developer Experience more thoroughly in her upcoming book, Frictionless: How to Outpace your Competition.
We’re all discovering that AI cannot replace humans right now after all. Automations and coding agents still need human review and supervision. So how do we think about productivity, objectives, and delivering customer value when we can push code at this breakneck pace?
Frameworks Can Align Humans
Some learning coaches advise that we can accelerate our learning by building mental models of the topic we’re trying to master, developing a kind of theory of mind for the subject. I think frameworks can do the same thing for organizations. They can keep all the different parts of the organization working toward the same goal, aligned to the same mission, solving complex problems in coordination, and clear about how each role serves the overall strategy.
Choosing the framework that best aligns with the organization's culture and mission, and enforcing it, is a senior leadership function because the framework must permeate every layer of the organization. There is a lot of change management literature that should be on tap for senior leaders pushing their AI initiatives, and those who draw on it will thrive. But every change management effort has to start with understanding the risks and challenges of the road ahead.
We Need to Understand AI as an Organizational Challenge
AI personalizes the content it generates for you, tailoring language to your word choices, phrases, and preferences. While that’s great for you as an individual, it can create real headaches for organizations that don’t have standardized language and standardized definitions for that language. For example, in the product space we’ve talked about the Minimum Viable Product (MVP) for years, and even though the term has a specific, well-defined meaning, people take it to mean a wide range of things. The way AI personalizes language amplifies and accelerates this semantic drift.
Nearly every AI model is pre-trained on the massive dataset that is the internet, but only a small percentage of the internet consists of reliable, trustworthy sources. For every Wall Street Journal or New York Times, there are dozens of sources that are clickbait, ragebait, or just unreliable conjecture. This is a major factor in why AI itself is still unreliable.
Agentic AI promises to automate work, maybe inside your own workflow, across your team, or across multiple teams and business groups. When groups that had been siloed suddenly have a robot sharing information between them, differences in terminology, differences in measurement, misaligned objectives, and confusing information dependencies make everyone’s work incoherent. Humans have to get on the same page about definitions, information flows, measurements, and objectives to get any benefit out of AI agents.
Humans Need Alignment to Make AI Useful
What this means is that every team is now a product team. If Finance wants to automate work with an agent, the finance team now needs to grapple with product-thinking problems: customer discovery, data modeling, user experience, measuring success, prioritization, and how to improve and maintain its agents.
All of those aspects of product management require a lot of human-to-human communication and consensus-building around what really matters for the organization. Anyone who has been in the enterprise business environment can tell you about the report that targets only one person and alienates the rest of the audience, or worse, the report that only makes sense to the person who wrote it. Or the communication that misunderstands the function of a particular business group. Or people from different departments talking about the same thing without realizing it because they use different vocabulary. If your organization has trouble with that now, AI will turn up the volume and pace of that kind of trouble.
So before you think about your internal AI strategy, understand how your organization works as a system, and then lean into tried-and-true change management practices.
