In partnership with

The Tech newsletter for Engineers who want to stay ahead

Tech moves fast. Are you still playing catch-up?

That's exactly why 100K+ engineers working at Google, Meta, and Apple read The Code twice a week.

Here's what you get:

  • Curated tech news that shapes your career - Filtered from thousands of sources so you know what's coming 6 months early.

  • Practical resources you can use immediately - Real tutorials and tools that solve actual engineering problems.

  • Research papers and insights decoded - We break down complex tech so you understand what matters.

All delivered twice a week in just 2 short emails.

Today's Agenda

Hello, fellow humans! Today, we have a full roadmap for complex problem-solving with AI. It’s a big piece, but I hope that highlighting what differentiates human work from AI work will help you grasp what really makes human work and human thinking valuable. It’s easy to get caught up in the AI hype, but it’s important to stay grounded about how we actually deliver valuable work for other humans.

Hope this helps!

Human-in-the-Loop Series: Complex Problem-Solving in Human-AI Collaboration

When AI can execute faster than you can think, your bottleneck isn't speed—it's clarity.

Claude or ChatGPT will never be able to tell us which part of a complex problem matters most to humans. They will never be able to tell us which solution will provide meaningful value to humans. They will happily make a very confident guess, though. And to someone who does not have a deep understanding of the problem space, it can sound pretty convincing.

AI can generate a thousand solutions while you’re still brushing your teeth, so capacity is no longer the challenge in problem-solving. The constraint now is human judgment for knowing what part of the problem will have the greatest impact, who will value it, and why that matters.

Ask ChatGPT to frame a solution to a complex problem, and it will generate so much content of such numbing blandness that we stop reading; we become slop-blind. Even among humans, being able to zero in on what matters to people and convey that in a compelling way is a rare skill.

It might seem like a paradox that the value of structured human thinking grows along with our digital capacity for speed and automation. But time and time again, when something becomes commodified, the value falls. With generic intelligence everywhere, it’s the bespoke that differentiates itself in the marketplace.

We need to have the right clarity and rigor to ensure that our bespoke brains arrive at insights that are valuable to our peers and colleagues. So it helps to have proven problem-solving frameworks like Toyota's legendary production system and modern product thinking as reusable components and tests so that we can validate our thinking.

Shared DNA: The Factory is the Product

The Toyota Production System (TPS), a tenet of manufacturing, might seem out of place in a discussion of software and AI, but time-tested problem-solving frameworks like it are the foundations of modern methodologies. Agile software development—with its sprints, retrospectives, and continuous improvement—directly descends from the Lean manufacturing principles pioneered by Toyota. Both favor iterative cycles with rapid feedback loops over lengthy planning phases, and empowered teams over command-and-control hierarchies. Think of the factory as the model for the software itself; the objective is reliable quality, repeatability, and efficiency, with fast delivery cycles. That sounds like what good software product teams aim for, too.

Scrum's "inspect and adapt" is Toyota's Plan-Do-Check-Act (PDCA) cycle rebranded for software. Product thinking's focus on validated learning traces back to Toyota's principle of building quality in through experimentation rather than inspecting defects out after the fact. This shared lineage matters because whether you're manufacturing cars, shipping code, or collaborating with AI, these structures help us challenge our assumptions, identify key priorities, measure what matters, and stay focused on high-impact problems and solutions. The tools change, the medium changes, but the discipline of structured inquiry, hypothesis testing, and systematic learning is how we get to clear thinking.

Three Traditions, One Framework

To effectively collaborate with AI, we need to understand what rigorous problem-solving actually looks like. Three traditions offer complementary insights:

Toyota's Problem-Solving Method emphasizes going to the source (genchi genbutsu), asking "why" five times to find root causes, and standardizing successful solutions. It's grounded in the reality that most problems are systems problems, not isolated incidents.

Systems Thinking teaches us to map feedback loops, identify leverage points, and recognize that intervening in complex systems often produces counterintuitive results. It warns against "solving" symptoms while strengthening underlying pathologies.

Modern Product Thinking contributes frameworks for defining value, prioritizing under uncertainty, and maintaining user-centricity while iterating rapidly. It emphasizes measurable outcomes over outputs and validates assumptions before scaling solutions.

AI-powered computer systems now add unprecedented analytical capacity to this mix—but they lack the contextual judgment, values alignment, and causal reasoning that humans bring.

The question becomes: how do we architect collaboration that amplifies both?

Seven Steps for Human-AI Collaborative Problem-Solving

1. What is the Problem?

All problem-solving begins with problem definition, and even though this isn’t controversial, it is at this very beginning where we usually fail. It’s a very human impulse to rush to solutions before fully understanding what we're solving for, and the availability of AI only accelerates this failure mode. ChatGPT will happily solve the wrong problem with blistering speed.

The Human Job

Before we start reaching for solutions, we have to articulate the gap between the current state and the ideal state in a meaningful and precise way. This requires what Toyota calls "grasping the situation": not just describing symptoms but understanding context, constraints, and what success actually looks like. Who finds this important? Why is it important to them? How will they know it's better? What does better look like? People have to do this work.

The AI Job

If you have data about the current situation, AI can help you synthesize that information by gathering data, identifying patterns, and surfacing anomalies humans might miss, but you'll still need to choose what matters.

In Collaboration

Human observations and stakeholder understanding can help inform the AI to quantify and map the problem space. A product manager noticing customer churn doesn't just ask AI to "analyze churn"—they need to define what retention success looks like for different customer segments, then use AI to illuminate patterns inside that framed problem space.

Bottom Line

AI can connect dots, but humans have to decide which dots and relationships matter.

2. Decompose Problems and Map Assumptions

Complex problems feel overwhelming because they're actually bundles of interconnected sub-problems, wrapped in layers of uncertainty. Systems thinking teaches us that where we draw boundaries determines what solutions we can see. If we decide that user location is not part of the problem space, it might help us by keeping us focused on other factors, or it might hurt us by hiding how mobile device use patterns influence how users access the product. It is a judgment call.

The Human Job

Map assumptions explicitly. What do we believe is true about this problem? Whenever we catch ourselves saying “that’s not the problem…” or “this is the problem…” we should dig in to ask ourselves “what else needs to be true for this to not be part of the problem?” What evidence supports those beliefs? What would need to change for different solutions to work? This assumption mapping isolates variables and surfaces where our mental models might be wrong.
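One lightweight way to make an assumption map concrete is a small record that forces each belief to carry its supporting evidence and a falsifier. This is a minimal sketch, not a prescribed tool; all names and the example assumptions are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    """One explicit belief about the problem space."""
    claim: str                                    # what we believe is true
    evidence: list = field(default_factory=list)  # what supports the belief
    falsifier: str = ""                           # what would prove it wrong

    @property
    def is_projection(self) -> bool:
        # A belief with no supporting evidence is a projection, not a fact.
        return len(self.evidence) == 0

assumptions = [
    Assumption("Churn is driven by pricing",
               evidence=["exit-survey mentions of cost"],
               falsifier="churn unchanged after discount experiment"),
    Assumption("Mobile users behave like desktop users"),  # no evidence yet
]

# Surface the beliefs that most need testing.
projections = [a.claim for a in assumptions if a.is_projection]
```

Writing the falsifier down at mapping time is what makes the later AI stress-test possible: you know in advance what data would change your mind.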

The AI Job

AI is a great tool for testing these assumptions against data at scale. It can simulate scenarios and identify dependencies humans didn't anticipate. AI excels at multidimensional analysis, exploring how changing multiple variables simultaneously affects outcomes.

In Collaboration

Humans create the assumption map; AI stress-tests it. When addressing declining product engagement, humans hypothesize factors (pricing, features, onboarding, competition), then AI analyzes behavioral data across those dimensions to highlight which assumptions hold and which are projections.

The "Five Whys" technique pairs beautifully with AI here. Humans ask why, and AI can provide data-driven answers to inform the next why, creating a loop that moves from symptoms to systemic causes.
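The shape of that loop can be sketched in a few lines. Here `answer_why` is a hypothetical stand-in for the human-AI exchange (in practice it might wrap a model call plus a data query); the symptom chain is invented for illustration.

```python
def five_whys(symptom: str, answer_why) -> list[str]:
    """Walk from a symptom toward a root cause.

    `answer_why` stands in for the human-AI loop: given a statement,
    it returns a data-informed answer to "why is this happening?",
    or None when no deeper cause can be supported by evidence.
    """
    chain = [symptom]
    for _ in range(5):              # Toyota's heuristic depth of five
        cause = answer_why(chain[-1])
        if cause is None:           # evidence ran out; stop early
            break
        chain.append(cause)
    return chain

# Toy stand-in for the AI/data side of the loop:
causes = {
    "signups are down": "onboarding drop-off rose",
    "onboarding drop-off rose": "email verification fails on mobile",
}
chain = five_whys("signups are down", causes.get)
# chain walks symptom -> intermediate cause -> deepest supported cause
```

The key design point is the `None` exit: the loop stops when the data can no longer support a deeper "why," which keeps the technique honest instead of forcing five answers.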

3. What is the Value? What Are We Optimizing For?

I worked with a team that labeled some of their product requests as “optimization,” and this presented a challenge. I had to ask them, “Optimized for what?” This was a surprisingly difficult question for them to answer. They had to disentangle all kinds of assumptions about how the product should work, which strategic values they wanted to prioritize, and a host of other questions that had never been made explicit.

Product thinking offers a range of tools for defining what "better" means in optimizing for a given outcome. This is fundamentally a values question, not an analytical one. Many organizations stumble because they want to prioritize eight values. You cannot have eight priorities; you can only have one. It takes real discipline, and it’s hard.

The Human Job

Define measurable outcomes that align with human intent. What does success look like? For whom? Over what timeframe? With what constraints? This requires stakeholder understanding, customer and business insight, and ethical reasoning: domains where AI can only play a supporting role.

The AI Job

You can use AI to model the possible tradeoffs between competing objectives, forecast likely outcomes of different value definitions, and highlight second-order effects that humans might not anticipate.

In Collaboration

Frameworks like OKRs (Objectives and Key Results) can help humans set the objective ("increase customer lifetime value") and initial key results, then collaborate with AI to refine metrics, set realistic targets, and identify leading indicators.
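An OKR only disciplines the work if each key result carries a baseline, a target, and a way to compute progress. The sketch below shows that minimal structure; the objective, metrics, and numbers are all illustrative, not from any real team.

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    metric: str
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        """Fraction of the way from baseline to target, clamped to [0, 1]."""
        span = self.target - self.baseline
        if span == 0:
            return 1.0
        return max(0.0, min(1.0, (self.current - self.baseline) / span))

objective = "Increase customer lifetime value"
key_results = [
    KeyResult("12-month retention rate", baseline=0.60, target=0.70, current=0.65),
    KeyResult("avg. orders per customer", baseline=3.0, target=4.0, current=3.2),
]
overall = sum(kr.progress() for kr in key_results) / len(key_results)
```

Forcing every key result to declare its baseline up front is a small structural guard against Goodhart-style drift: you can always see how far a metric has moved, not just where it sits today.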

Caution!

AI's measurement capabilities can lure you into optimizing for what's easy to measure rather than what matters. Goodhart's Law warns us that when a measure becomes a target, it ceases to be a good measure. Humans must hold the line on authentic value definition. Measure What Matters and How to Measure Anything are great reads for understanding how to define metrics that reflect real impact.

4. Analyze Root Causes: Data Synthesis Meets Contextual Reasoning

People love to jump to root cause analysis almost as much as they love to jump to solutions. But effective root cause analysis requires both breadth and depth to see patterns across datasets while understanding the human context that makes those patterns meaningful. There’s a reason why this is the fourth step and not the first; if you don’t have a clear problem definition, a breakdown of the problem space, and a value objective, you’re working entirely on assumptions. Maybe you’ll get lucky and get there entirely on intuition! But probably not.

The Human Job

Provide the contextual reasoning that explains why patterns are emerging, and establish clear, data-driven cause-and-effect relationships. If you have multiple root causes, you probably have multiple problems, and humans need to disentangle them to be able to set values and priorities. Many factors that don't appear in datasets can be in play here, like organizational dynamics, market forces, and human behavior. We have to ask the questions AI wouldn't know to ask.

The AI Job

Synthesize data across dimensions and timescales humans can't process manually. Identify correlations, anomalies, and patterns that suggest causal relationships worth investigating.

In Collaboration

Iterative inquiry. Humans pose hypotheses based on context; AI tests them against data. AI surfaces unexpected patterns; humans investigate whether they're meaningful or spurious. This back-and-forth combines computational power with contextual wisdom.

When Toyota engineers investigate manufacturing defects, they go to the factory floor (genchi genbutsu). In digital contexts, AI becomes our capacity to "go to the data floor"—but humans must still interpret what they find there through the lens of organizational reality, customer psychology, and market dynamics.

5. Develop Countermeasures: Generation Meets Prioritization

Once you understand root causes, you need solutions—countermeasures in Toyota terminology, because the best solutions counter the specific mechanism causing the problem.

The Human Job

Prioritize ruthlessly. Evaluate feasibility considering organizational capacity, cultural fit, and strategic alignment. Make judgment calls when data is ambiguous or conflicting.

The AI Job

Generate solution options at scale. Model likely outcomes of different approaches. Identify implementation risks and dependencies humans might overlook.

In Collaboration

AI generates, humans curate. Use AI to explore the solution space broadly—including options humans wouldn't think of—then apply human judgment to select approaches that are not just effective but implementable, sustainable, and aligned with values.

This is where product thinking's "impact vs. effort" matrix becomes valuable. AI can estimate impact through simulation; humans assess effort by understanding organizational reality. The conversation between these perspectives produces better decisions than either alone.
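The impact-vs-effort conversation can be reduced to a ratio for a first-pass sort. In this sketch the scores and option names are invented; the point is only the division of labor noted in the comments.

```python
def prioritize(options):
    """Rank countermeasures by impact-to-effort ratio.

    `impact` would come from AI-side simulation or estimation;
    `effort` from human judgment about organizational reality.
    """
    return sorted(options, key=lambda o: o["impact"] / o["effort"], reverse=True)

options = [
    {"name": "rewrite onboarding flow", "impact": 8, "effort": 5},
    {"name": "fix mobile email bug",    "impact": 6, "effort": 1},
    {"name": "loyalty program",         "impact": 7, "effort": 9},
]
ranked = prioritize(options)
# Quick wins (high impact, low effort) float to the top.
```

A ratio sort is only the opening move; humans still override it for strategic fit, sequencing, and risk, which no single number captures.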

6. Implement and Evaluate: Simulation, Monitoring, and Iteration

A common mistake at this stage is rushing to scale up the new countermeasure. This is the MVP stage, where the team should run a small-scale experiment to validate the solution in context (or in situ, as some like to say). Implementation is where theory meets reality and where most solutions fail, but if you do it correctly, this is a great place to fail: a failed experiment should give you valuable information with minimal downside. AI dramatically changes what's possible in both testing and monitoring.

The Human Job

Humans need to make go/no-go decisions, and sometimes that means interpreting ambiguous signals. We need to collect qualitative feedback and document emerging context so that we can make adjustments. Ultimately, humans are accountable for the outcomes.

The AI Job

AI can simulate solutions before deployment and monitor implementation in real-time. Flag deviations and anomalies instantly. Enable rapid iteration by compressing feedback loops.

In Collaboration

Use AI to run digital twins or simulations before implementing in the real world. Deploy with AI-powered monitoring that alerts humans to meaningful changes. Create rapid learning cycles where AI provides data and humans provide interpretation.
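The "AI flags, humans decide" split shows up even in the simplest monitoring primitive: automated deviation detection that escalates to a person. This is a minimal z-score sketch with invented latency numbers, standing in for whatever monitoring stack a team actually runs.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, live, threshold=3.0):
    """Flag live readings more than `threshold` standard deviations
    from the baseline mean. The automated side surfaces these;
    humans decide whether each deviation is acceptable."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in live if sigma and abs(x - mu) / sigma > threshold]

baseline = [101, 99, 100, 102, 98, 100, 101, 99]   # e.g. p50 latency, ms
live = [100, 103, 131, 99]
alerts = flag_anomalies(baseline, live)
```

Note what the function does not do: it never decides whether the deviation is a regression or an improvement. That interpretation stays with the human on call.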

Systems thinking reminds us that interventions often produce unexpected effects. AI helps us see those effects faster, but humans must decide whether they're acceptable and how to respond.

7. Standardize and Share Learning: Building Institutional Memory

The final step transforms problem-solving from an event into a capability. Toyota's kata approach emphasizes turning successful solutions into standard work—not rigid procedures, but frameworks that encode learning. In software, technical debt is sometimes the result of incomplete standardization; engineering leveraged an improved library for this logic, but did not update other logic to take advantage of the improved processing.

The Human Job

Extract principles from specific solutions; maybe this means knowing how an abstraction layer could make the improvement available to other parts of the system. Determine what's context-dependent versus generalizable. Build narratives that help others understand not just what worked but why.

The AI Job

Document patterns across multiple problem-solving instances. Suggest when past frameworks apply to new situations. Make organizational knowledge searchable and accessible.

In Collaboration

Humans create the framework; AI helps propagate and adapt it. When a solution works, document the problem structure, solution approach, and key decision points. Let AI help identify future problems with similar structures where the framework might apply.

This creates a flywheel: better frameworks improve problem-solving, which generates better frameworks, which improves future problem-solving.
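The retrieval half of that flywheel can be prototyped with nothing fancier than tag overlap between documented problem structures. The playbook entries, tags, and threshold below are all illustrative assumptions, not a real system.

```python
def tag_overlap(a: set, b: set) -> float:
    """Jaccard similarity between two sets of problem-structure tags."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Past solutions documented with the structure of the problem they solved.
playbook = [
    {"name": "churn deep-dive",    "tags": {"retention", "segmentation", "funnel"}},
    {"name": "latency postmortem", "tags": {"performance", "monitoring", "rollback"}},
]

def suggest(new_problem_tags: set, threshold=0.3):
    """Retrieval step: surface past frameworks whose problem structure
    resembles the new one. Humans judge whether they actually apply."""
    return [p["name"] for p in playbook
            if tag_overlap(p["tags"], new_problem_tags) >= threshold]

matches = suggest({"retention", "funnel", "pricing"})
```

A real system would use embeddings or search rather than hand-written tags, but the division is the same: the machine proposes analogies, and people decide which ones transfer.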

A Map of Relative Strengths

The foundation of human-AI collaboration isn't about the specific frameworks—it's about clear role differentiation.

AI Strength

Accelerating analysis, generating options, processing information, and identifying patterns.

Human Strength

Defining what matters, making value judgments, providing context, and maintaining accountability for outcomes.

The danger isn't that AI will replace human problem-solving—it's that humans will abdicate the thinking AI can't do, defaulting to whatever the algorithm suggests because it's faster than wrestling with ambiguity.

The opportunity is extraordinary: by combining AI's computational power with human judgment, contextual understanding, and values alignment, we can tackle complexity that was previously intractable. But only if we stay rigorously clear about what each brings to the collaboration.

Bottom Line

Human-in-the-loop isn't about keeping humans involved for the sake of involvement—it's about architecting systems where humans do the thinking that only humans can do while AI accelerates the flow and illuminates things we may overlook so that the collaboration produces outcomes neither could achieve alone.

AI accelerates analysis. Humans define what matters and why it's valuable.

That division of labor is the foundation of everything that comes next.

Radical Candor

If a measurement matters at all, it is because it must have some conceivable effect on decisions and behaviour. If we can't identify a decision that could be affected by a proposed measurement and how it could change those decisions, then the measurement simply has no value.

Douglas W. Hubbard, How to Measure Anything: Finding The Value of Intangibles in Business

Thank You!
