In partnership with

The Simplest Way to Create and Launch AI Agents and Apps

You know that AI can help you automate your work, but you just don't know how to get started.

With Lindy, you can build AI agents and apps in minutes simply by describing what you want in plain English.

→ "Create a booking platform for my business."
→ "Automate my sales outreach."
→ "Create a weekly summary about each employee's performance and send it as an email."

From inbound lead qualification to AI-powered customer support and full-blown apps, Lindy has hundreds of agents that are ready to work for you 24/7/365.

Stop doing repetitive tasks manually. Let Lindy automate workflows, save time, and grow your business.

Today's Agenda

Hello, fellow humans! Today, we ask whether product management thinking is an appropriate model for approaching AI and AI agents as they become more prevalent in the workplace.

Not for nothing, AI agents are software products. It may be strange to think of them that way, but they are tools that we build to help humans accomplish tasks and work. Maybe no one explicitly buys the product, but the organization pays for it indirectly, and we, along with our colleagues, are consumers of the work this tool produces. Sounds like a product to me. So if we’re all working on our AI agents…

Are We All AI Product Managers Now?

There’s a question hanging in the air these days: As AI capabilities expand beyond simple automation into complex reasoning and autonomous decision-making, how do we define our relationship to these semi-autonomous robots? And what about our relationships to other humans who may also be leading a team of agents?

Across entire organizations (HR, finance, operations, customer service, logistics), we're all getting pressure from leadership to use AI to scale up our outputs: more analysis, faster responses, a broader scope of responsibilities. The expectation is that AI will help us do all of it. But the reality is that doing a job and building something to do that job autonomously are two very different things, and the real sticking point is the messy middle.

The messy middle is where the original plan breaks down and some new complexity is discovered. Maybe the customer changes the requirements, maybe the requirements weren't as clear-cut as you thought, maybe there was an unexpected roadblock; whatever it is, it makes the work more complex and more difficult than you expected.

People can do all kinds of mental gymnastics (negotiating, intuiting, questioning, researching) to get through the messy middle. Usually, people who are really good at their jobs are good at them because they can do all of these things in the messy middle of a process and come out the other side with a good outcome. But an AI agent can't do any of that.

My hypothesis is that product thinking can help us solve our way out of this bind.

The Real Shift at Work

The changing workplace is real; as more professionals need to build AI agents to supplement their roles, and vibe-coding makes agent-building accessible to non-technical workers, the bottleneck shifts dramatically. Have you ever had to explain your job to someone who doesn’t know your industry? Now do it with enough detail that if they could remember everything, they’d be able to do your job. That’s what we’re asking our team members to do when we ask them to use AI to automate or streamline their jobs. It sounds easy until you actually try to do it.

The technical implementation is no longer the bottleneck, and may become commoditized in the near future. The new bottleneck is defining the problem, the context, and the objective, and designing a solution appropriate for the data ecosystem. To be sure, there are technical elements there, but a lot of that sounds like product thinking, with engineering and data considerations.

This is the combination of skills at which product management has always excelled: solving for the right problem. That's why product management is becoming an essential leadership and problem-solving framework for the AI era.

Why Product Management? Think Cross-Functional Teams

Product management emerged as a discipline precisely because complex products require someone to bridge the gap between customer needs, technical possibilities, and business constraints. Let’s consider what that might look like for an HR professional.

An HR leader building an agent to streamline performance reviews faces a classic product challenge: multiple stakeholders, competing priorities, and high-stakes outcomes.

Start with discovery inside HR. Interview HR business partners, compensation specialists, and talent development teams to understand their actual jobs-to-be-done. What takes the most time? Where do errors occur? What decisions require judgment versus consistency? Map the full workflow and identify which components an agent could genuinely improve.

Then expand cross-functionally. Performance reviews touch legal (compliance requirements), IT (data security and system integration), finance (compensation planning), and every manager organization-wide. Each brings constraints your agent must respect and insights that shape better solutions.

Schedule working sessions—not just status updates—where legal articulates non-negotiable guardrails, IT explains authentication protocols, and managers describe what makes feedback actually useful. Use these conversations to decompose the problem: perhaps the agent drafts initial reviews but escalates sensitive situations, or generates competency assessments while humans handle development planning.

Define success metrics collaboratively. HR wants time savings, legal wants audit trails, managers want quality feedback. Craft objectives that serve all three: "reduce review completion time by 40% while maintaining compliance standards and improving manager satisfaction scores."

This cross-functional alignment—classic product management—transforms a departmental tool into an organizational capability.

The Core Skills That Matter Now

Human Job Discovery: Understanding What Actually Matters

Before you can direct AI to solve a problem, you have to understand the problem deeply.

The jobs-to-be-done framework teaches us that people don't want products or features—they hire solutions to make progress in their situations. When someone says they need an AI dashboard, they're really hiring a solution to feel confident about business performance, communicate insights to stakeholders, or identify problems before they escalate. Understanding the true job tells us what success actually looks like.

Customer discovery extends this principle into structured inquiry. Product managers conduct interviews, observe workflows, and map stakeholder ecosystems to grasp the situation: What context surrounds this problem? What constraints limit possible solutions? Who gets impacted by decisions? Where does value actually accrue?

This situational understanding turns out to be critical when we're directing AI, because agents can only work inside the boundaries we set. If you misunderstand the context (for example, assuming a pricing decision only affects revenue when it also impacts brand perception and customer lifetime value), your AI will optimize for the wrong outcomes.

Breaking Down Problems and Breaking Out Your Assumptions

Everything is so interrelated today that any kind of problem-solving needs to start with unwinding a Gordian knot of inputs, outputs, processes, and assumptions. AI systems excel at solving well-defined problems but struggle with ambiguous, interconnected challenges.

Product managers have long practiced breaking complex problems into smaller, more manageable components. When facing "improve customer retention," they decompose it into discrete elements: onboarding effectiveness, feature adoption patterns, support response times, value realization milestones, and competitive positioning.

Each component becomes specific enough for targeted intervention—and specific enough to brief an AI agent.

But decomposition alone isn't sufficient. The real world consists of systems where components influence each other in nonlinear ways. Mapping these influences creates a model of the problem space. This systems thinking enables product managers to anticipate where AI interventions might create unintended consequences and where leverage points offer disproportionate impact.

Equally critical is assumption mapping: questioning the conditions necessary for our beliefs to hold true. When we assume that faster response times improve customer satisfaction, what must be true? That customers value speed over quality? That the current response time actually causes dissatisfaction? That satisfaction translates to retention? We have to validate each assumption, one by one.

When directing AI agents, explicitly mapping assumptions allows you to build in checkpoints where the AI validates conditions before proceeding—preventing elegant solutions to the wrong problem.
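As a minimal sketch of what such a checkpoint could look like (every metric name, threshold, and claim below is illustrative, not from any real system), each assumption can be paired with a validation check the agent must pass before proceeding:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Assumption:
    claim: str
    validate: Callable[[dict], bool]  # returns True if the condition holds

# Hypothetical assumptions behind "faster responses improve retention"
assumptions = [
    Assumption("Customers value speed over quality",
               lambda m: m["speed_preference_score"] > 0.5),
    Assumption("The current response time actually causes dissatisfaction",
               lambda m: m["slow_response_complaints"] > 100),
    Assumption("Satisfaction translates to retention",
               lambda m: m["satisfaction_retention_corr"] > 0.3),
]

def checkpoint(metrics: dict) -> list[str]:
    """Return the assumptions that fail, so the agent escalates instead of proceeding."""
    return [a.claim for a in assumptions if not a.validate(metrics)]

metrics = {"speed_preference_score": 0.7,
           "slow_response_complaints": 42,
           "satisfaction_retention_corr": 0.45}
failed = checkpoint(metrics)
# One assumption fails on these numbers, so the agent should pause
# rather than optimize for speed.
```

The point of the structure is that the agent's "go" decision is gated on the beliefs being true, not just on the objective being reachable.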

Objective Setting and Value Definition: Translating Strategy into Action

AI agents need clear objectives, but not all objectives work equally well.

Product managers understand that effective objectives balance specificity with flexibility, ambition with achievability, and autonomy with accountability. Translating organizational strategy into clear objectives for AI requires thinking through the logic chain:

If our strategy emphasizes market expansion, and market expansion requires customer acquisition, and acquisition requires conversion optimization, then our AI objective becomes "increase trial-to-paid conversion rate by 25% while maintaining customer quality" rather than vague directives to "improve marketing."

Understanding how objectives work in tandem prevents suboptimization.

An AI agent optimizing for conversion rate might achieve its target by lowering prices or relaxing quality standards—technically succeeding while strategically failing. Product managers craft complementary objectives that constrain each other: optimize conversion while maintaining average revenue per user above baseline; reduce churn while keeping support costs within budget.
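A small sketch of objectives that constrain each other, using the examples above (the 25% target comes from the text; the baseline ARPU, churn, and budget figures are placeholders, not recommendations):

```python
def objectives_met(conversion_lift, arpu, churn, support_cost,
                   baseline_arpu=50.0, prior_churn=0.06, support_budget=10_000):
    """The primary objective counts as met only when its constraints also hold."""
    checks = {
        "conversion_lift_25pct": conversion_lift >= 0.25,   # primary target
        "arpu_above_baseline": arpu >= baseline_arpu,       # blocks price-slashing
        "churn_not_worse": churn <= prior_churn,            # blocks quality-relaxing
        "support_within_budget": support_cost <= support_budget,
    }
    return all(checks.values()), [k for k, ok in checks.items() if not ok]

# An agent that hits the conversion target by slashing prices
# fails the ARPU constraint, so the objective is NOT met overall:
met, failures = objectives_met(conversion_lift=0.30, arpu=38.0,
                               churn=0.05, support_cost=9_000)
```

Encoding the constraints alongside the target makes the "technically succeeding while strategically failing" case detectable instead of invisible.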

The balance between autonomy and accountability becomes especially delicate with AI. Too much autonomy and agents pursue objectives through methods that violate implicit constraints or ethical boundaries. Too little and you lose AI's primary advantage: the ability to explore solution spaces humans couldn't efficiently navigate.

Product managers excel at setting this balance—defining clear success criteria and non-negotiable constraints while leaving the solution path open.

Communication Skills: Prompting as Strategic Briefing

If product management is the new leadership model, then prompting is the new briefing.

Writing precise, contextual instructions for AI mirrors the communication skills product managers use when briefing designers, engineers, or marketers. Both require clarity about intent (what we're trying to achieve), constraints (what we can't compromise), and success criteria (how we'll know it worked).

Structured dialogue maintains alignment across human and AI collaborators. Product managers don't just send briefs and disappear—they establish feedback loops, ask probing questions, and adjust direction as new information emerges.

With AI, this means prompting for reasoning traces to understand how the agent arrived at conclusions, requesting alternative approaches to stress-test initial solutions, and providing corrective feedback that shapes future performance.

The goal isn't a single perfect prompt but an ongoing conversation that keeps human and AI aligned as work progresses and circumstances change.

Framing intent effectively requires understanding not just what you want but why. Rather than prompting "generate a pricing page," effective communication provides: "generate a pricing page that emphasizes value over features because our research shows enterprise buyers need justification for stakeholders, and our main competitor leads with feature counts which makes direct comparison difficult."

This contextual framing helps AI make the hundreds of micro-decisions involved in execution.
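One way to make that framing repeatable is a brief template. The field names here (intent, context, constraints, success criteria) are just one possible structure, mirroring the elements named above:

```python
def build_brief(intent: str, context: str, constraints: list[str],
                success_criteria: list[str]) -> str:
    """Assemble a prompt as a structured brief: the goal, the 'why',
    the hard limits, and how we'll know it worked."""
    lines = [f"Intent: {intent}", f"Context: {context}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines.append("Success criteria:")
    lines += [f"- {s}" for s in success_criteria]
    return "\n".join(lines)

brief = build_brief(
    intent="Generate a pricing page that emphasizes value over features",
    context="Enterprise buyers need justification for stakeholders; our main "
            "competitor leads with feature counts",
    constraints=["Avoid direct feature-count comparisons"],
    success_criteria=["A stakeholder can justify the purchase from the page alone"],
)
```

The template does nothing clever; its value is that it forces the briefer to fill in the "why" before the agent starts making micro-decisions.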

Collaboration Skills for Hybrid Teams

Modern product development requires integrating insights from design, engineering, data science, domain expertise, legal, and marketing. Product managers serve as integration points, translating between disciplines and ensuring everyone works toward aligned goals.

AI doesn't eliminate this complexity—it adds another discipline to coordinate.

The "handoff" between human intuition and machine output requires careful management. Humans excel at recognizing patterns from sparse data, understanding social dynamics, and making judgment calls under ambiguity. AI excels at processing large datasets, testing multiple scenarios, and maintaining consistency at scale.

Product managers orchestrate these complementary capabilities, deciding which problems warrant human intuition versus machine processing.

Facilitating trust and shared understanding in hybrid teams becomes critical as AI takes on more autonomous work. When an AI agent recommends a course of action, human collaborators need to understand not just what to do but why that recommendation makes sense, what data informed it, and where uncertainty remains.

Humans Must Be the Decision-Makers

Not every decision should be delegated to AI, regardless of capability.

Product managers recognize when human oversight is non-negotiable—typically where decisions involve significant ethical dimensions, irreversible consequences, or complex stakeholder trade-offs that reflect values rather than optimization criteria.

Designing escalation protocols for autonomous systems means thinking through:

  • What types of decisions require human approval?

  • What thresholds trigger review?

  • What information must accompany escalated decisions?

This requires thinking probabilistically, but it also requires all of the context and information that we collect in the discovery and cross-team collaboration work.

An AI agent pricing products might handle routine adjustments autonomously but escalate when proposed changes exceed certain thresholds, affect strategic accounts, or deviate significantly from historical patterns. These guardrails don't eliminate AI value—they ensure that autonomy operates within acceptable bounds while capturing human judgment where it matters most.

Example Applications in Practice

This framework applies across domains:

AI-driven design: Brief agents on user needs and brand guidelines, review outputs for alignment with strategic positioning, iterate based on user testing results.

Logistics optimization: Define objectives around cost, speed, and reliability trade-offs, map constraints around capacity and regulatory requirements, establish escalation protocols for unusual scenarios.

Scenario planning: Decompose strategic questions into testable assumptions, direct AI to explore possibility spaces, integrate machine-generated scenarios with human judgment about likelihood and desirability.

The Competitive Advantage

As AI capabilities expand and vibe-coding democratizes agent creation, the competitive advantage shifts from technical implementation to thoughtful direction.

Organizations that treat AI as another tool to be deployed will underperform those that recognize AI as a new form of collaborator requiring skilled management.

Product management provides that management framework—not because it was designed for AI, but because it was designed for the same challenge: directing capable but non-autonomous resources toward meaningful outcomes amid complexity and uncertainty.

The leaders who thrive in the AI era will be those who master not the technology itself, but the human skills of discovery, decomposition, objective-setting, communication, collaboration, and judgment that transform capability into value.

Radical Candor

Companies, like people, have limbic systems... Your goal right now isn't predictions. It's preparation for what comes next. We must shift our mindset from making predictions to being prepared.

Amy Webb, Futurist

Thank You!
