In partnership with

Effortless Tutorial Video Creation with Guidde

Transform your team’s static training materials into dynamic, engaging video guides with Guidde.

Here’s what you’ll love about Guidde:

1️⃣ Easy to Create: Turn PDFs or manuals into stunning video tutorials with a single click.
2️⃣ Easy to Update: Update video content in seconds to keep your training materials relevant.
3️⃣ Easy to Localize: Generate multilingual guides to ensure accessibility for global teams.

Empower your teammates with interactive learning.

And the best part? The browser extension is 100% free.

Hello, fellow humans!

The AI Cognition Trap: Why Your Brain Needs More Friction, Not Less

We all suspect that relying on AI weakens our cognitive muscles: the better AI gets at giving you great responses, the worse you might be getting at actually learning anything.

I've been thinking about this a lot lately, especially as I watch professionals—myself included—lean harder into AI for everything from research to writing to analysis. The tool is phenomenal. The results are often excellent. But something feels off when I realize I can't quite reconstruct the logic of something I asked Claude to explain just yesterday.

Turns out, there's a robust body of cognitive science that explains exactly what's happening. And more importantly, it points toward six specific strategies that can help us learn better with AI, not just produce better work through AI. Let me walk you through what I found.

The Fluency Illusion

The research community has a name for what AI creates: extreme fluency. When information flows easily, when answers arrive complete and polished, when every question gets immediately resolved, our brains make a predictable mistake. We confuse the ease of processing with actual learning. We feel like we understand because the explanation was so clear. We feel like we'll remember because it made perfect sense in the moment.

The problem is that this feeling is almost entirely uncorrelated with long-term retention. In fact, decades of work on something called "desirable difficulties" shows that the relationship often runs in the opposite direction. The conditions that make learning feel easy and produce the best immediate performance—high fluency, immediate answers, polished explanations—often produce the worst long-term retention and transfer.

The conditions that actually build durable knowledge? They feel harder. They're slower. They're less satisfying in the moment. Which means AI, in its current form, is almost perfectly designed to maximize the feeling of learning while potentially minimizing the actual retention of it.

So what do we do about it? The answer isn't to abandon AI—that ship has sailed. The answer is to strategically reintroduce what researchers call "germane cognitive load": the right kind of mental effort that strengthens memory without overwhelming working memory. Here's how.

Generate First, Consult Second

The single most powerful shift you can make is this: before you ask AI anything, force yourself to generate your own attempt first. The research on elaboration and self-explanation is remarkably consistent here. When learners generate content themselves—even imperfect, incomplete content—the encoding process runs deeper than when they passively receive perfect answers.

What this looks like in practice: you're working on a strategy document. Your instinct might be to prompt AI with your requirements and let it draft the framework. Instead, spend twenty minutes sketching your own framework first. Rough is fine. Incomplete is fine. Then bring AI in to refine, challenge, or extend what you've created. You're not trying to compete with the AI's output quality; you're trying to activate the cognitive processes that make learning stick.

The same principle applies to problem-solving. Before asking AI for the solution, work through your own logic. Get stuck. Make mistakes. Then use AI to unstick yourself or validate your approach. The struggle isn't waste; it's the point.

Space It Out

Here's where it gets interesting. The research on spacing effects shows that distributing learning over time produces dramatically better retention than massing it all together. We're talking about optimal spacing gaps of roughly ten to twenty percent of however long you want to retain the information. If you need to remember something for a month, space your interactions over several days. If you need it for a year, space it over weeks.
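The ten-to-twenty-percent rule is easier to apply once you do the arithmetic. Here's a minimal sketch of that heuristic; the function name and exact percentages are illustrative choices of mine, not taken from any specific study:

```python
def spacing_gap_days(retention_days: float) -> tuple[float, float]:
    """Return a (low, high) range of days to wait between sessions,
    assuming the rough 10-20% rule of thumb described above."""
    return (0.10 * retention_days, 0.20 * retention_days)

# Want to remember something for a month (~30 days)?
low, high = spacing_gap_days(30)
print(f"Space sessions roughly {low:.0f}-{high:.0f} days apart")
# Want it to last a year?
low, high = spacing_gap_days(365)
print(f"Space sessions roughly {low:.0f}-{high:.0f} days apart")
```

So a one-month goal suggests sessions a few days apart, while a one-year goal suggests gaps of several weeks, which matches the schedule in the paragraph above.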

This cuts directly against how most of us use AI right now. We have one comprehensive conversation, get our answers, and move on. It feels efficient. It is efficient for immediate output. But it's terrible for learning.

Try this instead: break your AI interactions into multiple sessions separated by meaningful time gaps. Monday, get the framework overview. Tuesday and Wednesday, work with the material without AI. Thursday, return to AI for refinement and deeper questions. Yes, this feels slower. Yes, it feels less efficient. That's the point. The very inefficiency creates the retrieval practice that strengthens memory.

Interleave Your Practice

One of the more counterintuitive findings in the learning research involves interleaving—mixing different types of practice together rather than blocking them. Study after study shows that interleaved practice produces worse performance during the practice itself but substantially better performance on delayed tests and novel problems. Some research shows three times better performance after a week's delay.

For AI collaboration, this means alternating between AI-assisted and manual work, even when it feels inefficient. Solve a problem with AI assistance. Then tackle a similar problem without it. Then back to AI for a different type of problem. The constant switching feels choppy and less smooth. That's precisely why it works—your brain has to keep reactivating and reconstructing knowledge rather than riding a groove.

The risk with AI is that we develop a single, smooth workflow that always includes AI assistance. This creates dependency rather than capability. Interleaving prevents that trap by forcing your brain to operate in multiple modes.

Elaborate Before Moving On

When AI explains something to you, the worst thing you can do is immediately move to the next question. The best thing you can do is close the chat, put the explanation in your own words, and connect it to something you already know. This is elaboration, and the research shows it's one of the most reliable ways to convert information from temporary activation into durable schema.

The mechanism here is that you're forcing your brain to do the organizational and integrative work that builds understanding. When AI explains something clearly, it's doing that work for you. When you elaborate in your own words, you're doing the work yourself, which means the neural pathways are forming in your brain rather than just running through Claude's architecture.

Practically, this might look like: after getting an explanation from AI, close the conversation and write a paragraph explaining the concept to a colleague, or connecting it to a project you're working on, or identifying where it contradicts or extends something you already believed. Then—and only then—reopen AI to verify or extend your understanding.

Retrieve Rather Than Reference

Here's where we need to talk about the testing effect. The meta-analyses on this are striking: practice testing produces effect sizes of 0.6 to 0.7 compared to restudying the same material. That's a medium-to-large effect, and it's one of the most robust findings in all of learning science. Effortful retrieval strengthens memory traces in ways that repeated exposure simply doesn't.

The problem is that AI makes retrieval unnecessary. Why recall anything when you can just ask again? The information is always there, always perfect, always accessible. Which means we're systematically removing one of the most powerful learning mechanisms available to us.

The fix: treat AI conversations as study sessions, not reference manuals. After learning something from AI, close the conversation and attempt to recall and apply it. Then return to AI to check your retrieval. The goal isn't to avoid using AI; it's to force retrieval practice before you do. This single shift can dramatically improve what actually sticks in your long-term memory.

Introduce Productive Variation

The final strategy is about preventing overfitting to a single pattern of AI interaction. The research on varied practice shows that variation in practice conditions improves transfer to novel situations. While the specific research on perceptual disfluency (harder-to-read fonts) shows mixed results, the broader principle is solid: variation prevents your brain from encoding superficial patterns instead of underlying principles.

With AI, this means varying your approach to similar problems. Use different prompts. Try different models. Vary how much assistance you request. Sometimes ask for comprehensive answers; other times ask only for hints. The variation forces your brain to engage with the underlying concepts rather than just learning a successful collaboration pattern.

The Boundary Condition That Matters

Before you take all this advice and make learning maximally difficult, there's a critical caveat. All of these strategies assume you have sufficient prior knowledge and that task complexity isn't already overwhelming. If you're a complete novice or working with highly complex material, you should reduce friction and use AI's scaffolding heavily. These desirable difficulties work when you're building on a foundation, not when you're constructing the foundation itself.

The strategic insight here is that AI's greatest strength—removing friction—becomes its greatest liability for learning when overused. The solution isn't to resist AI but to use it more intelligently, deliberately reintroducing the specific kinds of effortful processing that the research shows strengthen retention and transfer. The goal isn't to make learning harder for its own sake. It's to make sure that when we're building capability alongside AI, we're actually building it in ourselves, not just borrowing it temporarily from our tools.

Radical Candor

The obstacle in the path becomes the path. Never forget, within every obstacle is an opportunity to improve our condition.

Thank You!
