In partnership with

Is Your PPC Strategy Leaving Money on the Table?

When’s the last time you updated your digital marketing strategy?

If you’re relying on old-school PPC tactics, you might be missing out on a major revenue opportunity.

Levanta’s Affiliate Ad Shift Calculator shows how shifting budget from PPC to creator-led partnerships can significantly improve conversion rates, ROI, and efficiency.

Discover how optimizing your affiliate strategy can unlock new profit potential:

  • Commission structure: Find the ideal balance between cost and performance

  • Traffic mix: See how creator-driven traffic impacts conversions

  • Creator engagement: Measure how authentic partnerships scale ROI

Built for brands ready to modernize how they grow.

Today's Agenda

Hello, fellow humans! Today, we’re following up on yesterday’s discussion on the dangers of AI adoption in a managing-up organizational culture. I’m sharing three strategies for refocusing the conversation towards solving real customer problems and away from haphazardly collecting shiny new technologies.

Solving the AI Managing-Up Problem: Three Strategies That Work

Yesterday, I wrote about the dangers of AI adoption mandates in managing-up cultures. I identified some warning signals—the demo traps, the misaligned metrics, the decision-making dysfunction. We established that mid-tier leaders face a unique squeeze: pressure from above, responsibility below, and career consequences that make managing up feel like rational self-preservation.

Today, let’s talk about what to actually do about it.

This isn’t empty advice like "Just be brave and push back" that ignores your real context and constraints. You can't simply refuse to show AI progress when your CEO is getting board pressure. You can't ignore stakeholder expectations when your performance review depends on executive perception.

But you can use a little judo to redirect that energy. You can reframe conversations, build infrastructure that makes customer needs impossible to ignore, and model behavior that gives others permission to solve problems honestly. The goal isn't to resist AI adoption—it's to channel that pressure into actually solving problems instead of performing solutions.

Let me show you three practical strategies that work even when you're operating in a strong managing-up culture.

Strategy #1: Reframe the Conversation Upward (Without Career Suicide)

The first move is counterintuitive: when leadership mandates AI, don't immediately start planning AI implementation. Start by understanding what problem leadership is actually trying to solve.

Your CEO comes back from that conference saying "we need AI in customer service." That's not the real ask. The real ask is hiding underneath, and it's probably one of these:

  • They're worried competitors are moving faster

  • They see an opportunity for efficiency gains that would help margins

  • The board is asking uncomfortable questions about your AI strategy

  • They have specific customer feedback suggesting you're falling behind technologically

Your job is to solve that actual problem, which may or may not call for the solution they’re proposing.

Here's the technique: before you build anything, have a conversation. Not a status update—a genuine problem discovery conversation with your leadership. Ask questions like:

  • "What prompted this focus on AI in customer service specifically?"

  • "What would a successful outcome look like?"

  • "Is there customer feedback we need to review?”

  • “Is there competitive pressure driving this?"

  • "If we could improve customer service satisfaction by 20% without AI, would that address your concern?"

What you're doing is the same jobs-to-be-done discovery you'd do with customers, but applied to stakeholder management. Think of AI as another team member: it has a degree of autonomy, but it also demands attention and maintenance; it isn't free labor. So when you treat AI as a new hire, you need to understand the job leadership is trying to hire AI to do.

Then you reframe your response. Instead of "here's our AI roadmap," you present "here's our problem-solving roadmap, with decision criteria for when AI makes sense."

Create a simple communication framework that you use consistently in upward communication:

  1. Customer problem we're investigating: [Specific, measurable]

  2. Why it matters: [Business impact, customer feedback]

  3. Hypotheses we're testing: [Including AI and non-AI approaches]

  4. What we're learning: [Real data from customers]

  5. Next steps: [Based on evidence, not predetermined]

Notice what this does: it shows responsiveness to leadership's concern while maintaining flexibility about solutions. You're not saying "no" to AI—you're saying "maybe, let's find out." And you're building in customer evidence as the decision criterion, which is much harder for leadership to argue against than your opinion.

One more tactical move: build executive literacy about AI limitations. Share the research. Teresa Torres at Product Talk reports that domain expertise matters more than AI technical knowledge. McKinsey's projection is that the $2.9 trillion in value comes from workflow redesign, not just AI deployment. Organizational Physics' insight is that organizational alignment determines AI success more than technical sophistication.

You're not being negative about AI—you're being realistic about what good AI implementation requires. And you're using external sources for credibility, which matters in managing-up cultures where internal skepticism can be career-limiting.

The shift you're making: from "here's our AI plan" to "here's how we'll solve problems, including when AI is and isn't appropriate."

Strategy #2: Build Accountability Infrastructure That Can't Be Bypassed

Reframing conversations helps, but it's not enough if your teams are still rewarded for impressive demos over customer impact. You need to build systems that make customer focus structural, not optional.

Establish Clear Decision Rights for AI Adoption

First, get explicit about who decides what. Adapt a framework like DACI (Driver, Approver, Contributors, Informed) specifically for AI decisions:

  • Driver: The team closest to the customer problem owns the decision

  • Approver: Leadership sets constraints (budget, timeline, strategic fit) but doesn't specify solutions

  • Contributors: Must include customer perspective, not just technical capability

  • Informed: Stakeholders who need updates

The critical decision to clarify: "Who can say 'AI isn't the right solution here'?"

If only leadership can make that call, you're in managing-up mode. If teams have that authority within clear constraints, you're in problem-solving mode.

Document this. Make it visible. Reference it when AI decisions arise. The goal is to normalize teams choosing non-AI solutions when appropriate, rather than making that feel like failure.

Install Customer Feedback Loops That Actually Matter

Here's the infrastructure change that matters most: make customer voice louder than internal pressure.

Required practice: direct customer conversations before any AI development. Not optional. Not "when we have time." Before.

Frequency: weekly customer interactions minimum for anyone leading AI initiatives. If someone goes more than one week without customer interaction, pause AI work.

Format: jobs-to-be-done interviews, not solution validation. You're not asking customers "would you use this AI feature?" You're understanding their actual problems and current workarounds.

Make the artifacts visible. Customer interview notes shared with leadership. Customer quotes in every update. Customer evidence required for go/no-go decisions.

What this does: it shifts the question from "What will impress leadership?" to "What did customers tell us?" And when customer voice is consistently present and visible, it becomes much harder to ignore.

Build Evaluation Infrastructure Early (Not After You Ship)

This one is critical and commonly skipped: build your evaluation system before you scale AI deployment, not after.

Product Talk's research found that successful teams evolved from spreadsheets to sophisticated evaluation systems. The key word is "evolved"—they built evaluation capabilities through iterative use, not upfront. But they started early.

Here's your practical framework:

  • Before development: Define success metrics focused on customer outcomes, not just AI performance

  • During development: Track leading indicators (accuracy, failure modes, edge cases)

  • After deployment: Monitor both AI performance and customer satisfaction

  • Always: Make results visible to leadership, with transparency over polish

Create a simple template:

  • North star metric: The customer outcome you're trying to improve

  • AI performance metrics: Technical quality (accuracy, precision, recall)

  • Customer experience metrics: Satisfaction, trust, adoption rate

  • Business metrics: Efficiency gains, cost reduction, revenue impact

  • Leading indicators: Early signals of success or failure

Start with a spreadsheet. Share it weekly. Make it boring and honest. This becomes your evidence base for what's working and what isn't—and it's much harder to argue with than opinions or demos.

What these three infrastructure pieces do together: They make customer needs impossible to ignore without explicitly fighting against the managing-up culture. You're not telling people to stop managing up—you're building systems where managing up requires showing customer evidence.

Strategy #3: Model the Behavior You Want to See

Infrastructure helps, but culture change ultimately comes from what leaders model. And you have more influence here than you probably realize.

Run Transparent Experiments (And Share Failures)

This is the anti-managing-up move: make your learning visible, including what doesn't work.

Share customer feedback that contradicts your AI approach: "We thought AI would help here, but customers told us they'd rather we fix the routing logic first."

Share technical failures early: "Our AI accuracy is at 65%. We need 90% for deployment. Here's our learning plan and timeline."

Share non-AI wins: "We solved this problem with better process design instead of AI. Here's the impact."

Why does this matter? Because it normalizes experimentation over performance. It gives others permission to be honest about their AI struggles. And it builds long-term credibility—you're someone who tells the truth rather than managing perception.

Create a "learning log" that you share monthly with peers and leadership. Document what you tried, what worked, what didn't. Include explicit "we chose not to use AI because..." decisions.

Yes, this feels risky in managing-up cultures. That's the point. You're demonstrating that honest problem-solving is more valuable than impressive storytelling. Some people will judge you. Others will be relieved that someone is modeling reality.

Reward Problem-Solving, Not AI Adoption

For your direct reports and teams, be explicit about what you value:

  • Recognition criteria: Did they solve a customer problem? (Not: did they use AI?)

  • Performance reviews: Evaluate customer impact and learning velocity

  • Promotions: Based on judgment quality (choosing the right approach), not technology adoption rate

The signal you're sending: we care about outcomes, not tools. You have permission to choose the best solution, even if it's not AI. Your judgment matters more than following leadership mandates blindly.

This might be the most important thing you do, because it cascades. Your direct reports will model this for their teams. Over time, it shifts culture from the middle out.

Practice "Disagree and Commit" With Transparency

Here's the scenario you'll definitely face: leadership mandates an AI initiative you think is the wrong approach.

Managing up response: implement it, make it look good, hope for the best.

Trust-building response:

  1. Share your concerns clearly and respectfully, with evidence

  2. Propose an alternative problem-solving approach

  3. If overruled, commit fully BUT with clear success criteria

  4. Track and share results honestly

This works because you've demonstrated independent thinking while respecting decision authority. You've given leadership information to make an informed decision. And you've created accountability for outcomes—results will prove the approach right or wrong.

Short-term risk: you might look like you're resisting leadership vision. Long-term benefit: when you're right, you build credibility; when you're wrong, you learn publicly. Either way, you're building a reputation for honesty over polish.

What modeling does: It shows others that problem-solving is valued, even when it conflicts with managing-up norms. Culture changes when people see that honest approaches can succeed—or at least won't end careers.

Where This Leads: The Trust Arbitrage

Let’s zoom out and look at what you’re really building with these three strategies.

You're creating an alternative path through the managing-up culture. Not fighting it directly—that's exhausting and career-limiting. Instead, you're building parallel infrastructure that makes customer-focused problem-solving the path of least resistance.

Reframing conversations upward lets you show responsiveness while maintaining solution flexibility. Accountability infrastructure makes customer evidence required, not optional. Modeling behavior gives others permission to solve problems honestly.

Together, these create what I call the trust arbitrage: most people over-index on managing up, creating opportunity for those who focus on impact.

Managing up builds performance capital—impressive to current leadership, based on perception of responsiveness, but fragile and vulnerable to leadership changes.

Problem-solving builds impact capital—portable across organizations, based on demonstrated results, strengthened by reference checks with peers and teams, measured in customer outcomes.

The AI adoption pressure in your organization is revealing which type of capital you're accumulating. In six months, the difference between AI theater and AI value will be obvious. In three years, it will be career-defining.

Coming Next

These three strategies—reframe upward, build infrastructure, model behavior—create the foundation for alignment.

Tomorrow, I’ll share how to measure whether this is actually working, how to track long-term trust when quarterly metrics demand short-term wins, and how to build roadmaps for 90-day and 12-month views.

You have an opportunity to create an organizational culture shift by putting customer problems ahead of internal performance.

Key Takeaways: Making It Practical This Week

If you're implementing these strategies starting now:

  1. Schedule the problem discovery conversation: Before your next AI initiative planning meeting, have a 30-minute conversation with your sponsor/leadership to understand what problem they're actually trying to solve. Use the five-question framework.

  2. Document decision rights this week: Write down who currently decides if/when AI is used in your area. Share it with your team and leadership. Ask explicitly: "Does this match everyone's understanding?"

  3. Install one customer feedback loop: Pick one AI initiative and require weekly customer conversations before development continues. Make interview notes visible to stakeholders.

  4. Start your learning log: Create a simple document. This week, write down one thing you tried, what you learned, and what you're doing next. Share it with one peer.

  5. Identify what you'll share transparently: Find one place where you can share honest learning instead of polished updates. Test the culture's response. You're gathering data on how much honesty the system can absorb.

The AI adoption pressure isn't going away. The question is whether you'll use it to perform innovation or to build the problem-solving capability that creates actual value. These three strategies let you do the latter even when surrounded by people doing the former.

Start with one. This week. The cultural arbitrage is real, and the opportunity is now.

See you next issue, where we'll tackle measurement, timelines, and the long game of organizational change.

Radical Candor

When bosses are too invested in everyone getting along, they also fail to encourage the people on their team to criticize one another for fear of sowing discord. They create the kind of work environment where being "nice" is prioritized at the expense of critiquing and therefore improving actual performance.

Kim Malone Scott, Radical Candor

Thank You!
