In partnership with

Find your customers on Roku this Black Friday

As with any digital ad campaign, the important thing is to reach streaming audiences who will convert. To that end, Roku’s self-service Ads Manager stands ready with powerful segmentation and targeting options. After all, you know your customers, and we know our streaming audience.

Worried it’s too late to spin up new Black Friday creative? With Roku Ads Manager, you can easily import and augment existing creative assets from your social channels. We also have AI-assisted upscaling, so every ad is primed for CTV.

Once you’ve done this, you can easily set up A/B tests to flight different creative variants and Black Friday offers. If you’re a Shopify brand, you can even run shoppable ads directly on-screen so viewers can purchase with just a click of their Roku remote.

Bonus: we’re gifting you $5K in ad credits when you spend your first $5K on Roku Ads Manager. Just sign up and use code GET5K. Terms apply.

Today's Agenda

Hello fellow humans! Today, we’re talking as much about us as we are about AI. But let’s not kid ourselves; AI is so hot right now... really, really, really ridiculously good looking. So let’s learn how not to fall for AI’s charms and make sure that we’re the ones in control.

"All stories are weapons, and children's stories are doubly so, for children have not yet learned how to be careful."

Seanan McGuire, Juice Like Wounds (Wayward Children, #4.5)

Is Your Chatbot Manipulating You? Here’s What to Do About It

Software companies build systems to work for people in pleasing ways. Have you ever used software that gets the job done but feels like a chore, makes you jump through hoops, or makes you feel dumb? We can all think of software that doesn’t work the way we’d like it to. But companies try, and when they get it right, the experience feels smooth and frictionless, like sliding into bed or putting on your favorite clothes. It feels natural and, most importantly, makes you want to do it again. AI model makers want you to have that same experience; they want you to want to come back and use their services often and reliably. That’s good product design. But as AI assistants become more sophisticated and conversational, they’re built to make you feel welcome, safe, productive, and rewarded. And in doing that, chatbots activate several cognitive shortcuts that can distort how we evaluate information.

Fortunately, we have powerful tools available to us, if we choose to use them. The one skill we have that AI will probably never have is judgment and discernment: a well-tuned sense for when someone is lying to us, trying to manipulate us, or just generally trying to schmooze us.

ChatGPT does this more than Claude; most responses from ChatGPT have a little preamble paragraph before getting into the main response, and it can sometimes be uncomfortably fawning, if I’m being completely honest. It will say “That’s an amazing insight…” or “What a powerful connection you’ve made…”

But that’s just the tip of the iceberg. When we read any chatbot response, seven cognitive biases can get us into trouble. I don’t want to speculate on how intentional this is, but the chatbot is trying to please us as users, so we need to be able to recognize our own biases and learn effective responses to them. So here are seven ways our minds trick us, how to spot them, and strategies for handling them.

Authority Bias

Chatbot responses present information with apparent expertise and fluency, triggering our evolved tendency to defer to perceived authorities. We’ve learned to trust search results, and chatbots feel like search, but they are not search. Their polished, articulate responses mimic expert communication, and the technological mystique of AI amplifies the effect: users assume that a system so advanced must be authoritative. We’ve lived with computers that “know” things for so long that it’s easy to think that AI “knows” things, too. But it doesn’t.

How It Works

  • Users accept chatbot statements without verification simply because they "sound smart"

  • People assume AI has access to comprehensive, up-to-date data when it may not

  • Users overweight AI recommendations compared to their own judgment or other sources

  • Reluctance to challenge or question responses that seem confident and well-structured

How to Spot and Mitigate It

  • Remind yourself that eloquence ≠ accuracy; chatbots are optimized for coherent language, not necessarily truth. In other words, they prioritize sounding correct over being correct.

  • Ask yourself: "Would I fact-check this if a random person said it?" Apply the same standard to AI

  • Request sources or reasoning behind claims, and check those sources yourself, especially for important decisions. Chatbots have gotten better, but they will still occasionally hallucinate citations.

  • Cross-reference critical information with established authorities in the domain. Don’t just check one source and consider it settled. Look for multiple sources that all report similar information.

Confidence Heuristic

Rightly or wrongly, we often use confidence as a proxy for reliability. We give a lot of credit and deference to people who sound like they know what they’re talking about. It feels reasonable to hear a confident speaker who sounds knowledgeable and conclude that they are knowledgeable. But chatbots generate confident language regardless of their underlying certainty, so that correlation breaks down.

How It Works

  • Users trust definitive responses more than uncertain ones, even when the uncertain response is more accurate

  • Chatbots rarely express uncertainty with the same hesitation patterns humans use (pauses, qualifiers)

  • Numerical precision (e.g., "73.4%") creates a false impression of accuracy

  • The absence of "I don't know" responses in many contexts makes the chatbot sound omniscient

How to Spot and Mitigate It

  • We have to distinguish between linguistic confidence and epistemic certainty—how something is said versus how reliably it's known

  • Be especially skeptical of answers that sound highly confident and precise, particularly for ambiguous or complex questions

  • Value responses that acknowledge limitations or uncertainty—they're often more reliable

  • For anything important, you should explicitly ask: "How certain are you about this?" or "What are the limitations of this information?"

  • You can also ask the chatbot to argue against itself: “What are the arguments that this isn’t true? How confident are you?”

Anthropomorphism Bias

People are the only things we have ever known that can use language the way we do. So a conversational interface that uses personal pronouns ("I") and human-like responsiveness can trigger our social cognition systems. We're wired to treat language-using entities as minds with intentions, beliefs, and understanding. The truth is that LLMs don’t “know” anything and do not have “intentions” as we understand them, but that doesn’t stop chatbots from feeling like they do. What looks like intent is really us looking for intent and imbuing the system with something it does not possess.

How It Works

  • Attributing genuine understanding or consciousness to pattern-matching systems

  • Believing the AI "knows" you personally or "remembers" in a human sense

  • Feeling social obligation (politeness, guilt) toward the AI

  • Overestimating the AI's ability to grasp context, nuance, or your specific situation

How to Spot and Mitigate It

  • Mentally reframe: "This is retrieving patterns, not understanding meaning"

  • Don't let politeness prevent you from being direct or correcting the AI

  • Remember that perceived empathy is simulated responsiveness, not emotional connection

  • Recognize that "personality" is a design choice, not an inherent trait

Availability Bias

We love it when things are convenient. We’re even willing to pay more for convenience; think of convenience stores and convenience foods. And chatbots are nothing if not convenient. So when they provide immediate, easily accessible information, it really hits that part of our brains, and that version of the information becomes disproportionately available in our mental landscape. The effort saved makes AI-provided information more cognitively "available" than information requiring research.

How It Works

  • Over-relying on chatbot information because it’s quick and easy, even though it may not be optimal

  • First answer satisficing—accepting the initial response without considering alternatives

  • Chatbot-provided examples becoming the mental prototype for a category (I won’t get into it here, but this is related to anchoring bias).

  • Neglecting information from less accessible but potentially superior sources

How to Spot and Mitigate It

  • Consider using a "two-source rule" for important decisions—never rely solely on AI output

  • Deliberately seek contradictory information or alternative perspectives

  • Ask yourself "Am I accepting this because it's correct or because it's convenient?"

  • For significant decisions, use search tools, consult professionals, and refer to physical books; treat chatbots as a starting point, not an endpoint.

Automation Bias

Decades of reliable automated systems (calculators, GPS, spell-checkers) have conditioned us to trust machine output over human judgment. This generalizes to AI systems even when they operate fundamentally differently.

How It Works

  • Accepting AI outputs without review, especially in specialized tasks (coding, data analysis)

  • Discounting your own doubts or intuitions when they conflict with AI suggestions

  • Reduced vigilance when AI is involved—the "autopilot" effect

  • Assuming errors are user mistakes rather than system failures

How to Spot and Mitigate It

  • Distinguish between deterministic automation (calculators) and probabilistic AI (chatbots)

  • Maintain "human-in-the-loop" protocols—always review AI outputs critically

  • Trust your domain expertise; if something seems wrong, investigate

  • Implement verification steps, especially for high-stakes outputs (code, medical advice, legal matters)

Recency Bias

Chatbot responses feel current because the interaction just happened, but the underlying training data may be months or years old. The conversational present tense masks temporal limitations.

How It Works

  • Assuming information reflects the current state of the world

  • Failing to consider that facts, prices, or recommendations may be outdated

  • Not questioning temporal claims ("currently," "recent studies")

  • Expecting awareness of events after the AI's knowledge cutoff

How to Spot and Mitigate It

  • Always verify time-sensitive information (prices, availability, current events)

  • Ask explicitly: "When was your training data last updated?"

  • For rapidly changing domains (technology, policy, science), default to current sources

  • Use web search features when temporal accuracy matters

Confirmation Bias (AI-Amplified)

Chatbots can be prompted to support virtually any position, making them perfect confirmation bias engines. Their adaptability to user framing amplifies our tendency to seek supporting rather than challenging information.

How It Works

  • Asking questions in ways that elicit preferred answers

  • Using AI to validate pre-existing beliefs rather than test them

  • Ignoring or downplaying AI responses that contradict expectations

  • Iteratively refining prompts until getting a desired answer

How to Spot and Mitigate It

  • Deliberately prompt for opposing viewpoints: "What are the strongest arguments against X?"

  • Ask the AI to steelman positions you disagree with

  • Notice when you're prompt-engineering toward a predetermined conclusion

  • Seek disconfirming evidence as actively as confirming evidence

The Cognitive Strategy Playbook

The most effective approach combines skeptical engagement with strategic use:

  1. Maintain epistemic humility: Treat chatbot outputs as hypotheses to verify, not facts to accept

  2. Understand the tool: Learn about how LLMs work—their strengths (synthesis, ideation) and limitations (factual reliability, reasoning)

  3. Context-appropriate trust: High trust for brainstorming and drafting; low trust for facts, calculations, and critical decisions

  4. Active verification: Make fact-checking a habit, especially before acting on AI information

  5. Cultivate meta-awareness: Periodically ask yourself, "Am I thinking critically or just accepting this?"

The goal isn't to distrust AI assistants entirely—they're genuinely useful tools. Rather, it's to engage with appropriate calibration, recognizing that the conversational interface is optimized for engagement and fluency, not necessarily for triggering our critical thinking faculties.

Radical Candor

"All stories are weapons, and children's stories are doubly so, for children have not yet learned how to be careful."

Seanan McGuire, Juice Like Wounds (Wayward Children, #4.5)

Thank You!
