In partnership with

Simplify Training with AI-Generated Video Guides

Are you tired of repeating the same instructions to your team? Guidde revolutionizes how you document and share processes with AI-powered how-to videos.

Here’s how:

1️⃣ Instant Creation: Turn complex tasks into stunning step-by-step video guides in seconds.
2️⃣ Fully Automated: Capture workflows with a browser extension that generates visuals, voiceovers, and calls to action.
3️⃣ Seamless Sharing: Share or embed guides anywhere effortlessly.

The best part? The browser extension is 100% free.

Today's Agenda

Hello fellow humans! Today, we really put humans first as we explore some of the psychological hazards of AI.

News

A Question of (AI) Ethics

There may have been an impression among people implementing agentic AI systems that by configuring programmatic workflows, they could simplify and automate complex human systems for handling information. The dream for agentic AI is to eliminate messy back-office work such as human oversight, review processes, complex approvals, and biases, and to automate processes like collating and analyzing data, regulatory compliance, and so on. But the human sins that require all of these ethics and safety measures live on inside the AI systems we’ve built. The result is that if these agentic AI systems are to perform genuinely useful work autonomously, the agents ultimately become as complex as, if not more complex than, the human systems they are intended to replace. The curse of the LLM is that it mimics human behavior — all of it — including all of our cognitive biases, thinking errors, and blind spots.

For back-office information processing, what we want isn’t a faster, cheaper human; what we want is a tool that, above all, performs work more consistently and reliably than we can. We don’t want to have to double-check it; we want fire-and-forget solutions.

I’m not sure that this future is possible, but whatever system we build will require humans who deeply understand the ethical issues and integrate them into the design. The complexity is necessary to establish safety and ethics because humans are complex and these systems have real effects on humans.

The New Stack, however, offers a crash course in agentic AI ethics: a blueprint for addressing these challenges with six essential safety features for autonomous AI systems, including:

  • Algorithmic fairness and bias mitigation

  • Transparency and explainable AI

  • Accountability frameworks

  • Human-centric design

Authors Vrushali Sawant, a data scientist with the SAS Data Ethics Practice, and Manisha Khanna, a senior product manager for AI and generative AI at SAS, go on to offer five best practices for implementing these automations, much of which is work that can only be performed by humans. These systems need design inputs from cross-functional teams, guidelines for defining reliability, and continuous monitoring.

The emphasis is on creating systems that are not only technically sound but also ethically aligned and transparent in their decision-making processes.

Automation has a lot of potential for handling repetitive tasks that demand more consistency than humans may be able to offer, but we must not lose sight of the fact that these automations are software. Software needs planning, design, engineering, testing, and ongoing maintenance. All of this work requires human skills that cannot be replaced by AI. For now.

AI at Work Is Costing You Your Self-Compassion

Michelle McQuaid, Ph.D., writes for Psychology Today that as users adopt AI tools, they lose their self-compassion. This isn’t surprising; social media has been shown to have a negative effect on compassion and empathy generally.

As difficult as it may be to have compassion for others, it can be even more difficult to have compassion for ourselves. As AI in the workplace challenges our notions of what makes us valuable as humans, we have even more triggers for self-doubt.

McQuaid performed a study of 1,000 participants and found that while “performance with AI follows a predictable U-shaped curve—initially declining as we learn new tools, then eventually rebounding to surpass previous levels—our self-compassion doesn't follow the same trajectory.” She also found that non-AI users generally have a healthier level of self-compassion.

The root cause appears to be that “[the AI’s] simulated compassion fails to provide genuine connection. The constant comparisons between human and machine output, accelerated workplace change, and the unpredictable nature of AI tools all contribute to heightened self-criticism.”

Fortunately, she reports that there are things we can do to remedy this effect, and it sounds a lot like developing emotional intelligence. She recommends:

  1. Name what you’re feeling

  2. Activate your body’s calming system

  3. Set learning goals, not performance goals

It may feel like our emotional health is just one more item that requires our attention and management, but it may also be the foundation for our ability to manage everything else in our lives, which would make it the most important management task of all.

In AI We (Do Not?) Trust

There’s a growing temptation in education to treat AI systems as substitutes for judgment — to imagine that if we can automate content creation, feedback, or personalization, we can streamline the messy human processes of teaching and learning. But the more sophisticated AI becomes, the more we are reminded that LLMs’ reliability is limited, which makes them difficult to trust. A new study in the Journal of Data Science and Management reveals what many educators are starting to feel intuitively: as generative systems grow more capable, we have to learn when to trust them and how to maintain human oversight.

Not surprisingly, users liked the conversational, human-like quality of AI-generated content — the illusion of understanding, the ease of use — but their trust collapsed the moment they began to question factual accuracy or ethical intent. The same qualities that made the AI feel approachable made it feel deceptive once errors emerged. People didn’t just want reliable information; they wanted systems that aligned with their sense of fairness, accountability, and purpose.

This finding reframes the challenge for ed-tech. Building trust in AI isn’t just a matter of improving accuracy; it’s about designing for ethical resonance — ensuring that learners, teachers, and parents can see the values behind the outputs.

Disclose AI provenance and authorship

Students and teachers deserve to know when content is machine-generated, by what model, and under whose supervision. Transparency about provenance creates a shared understanding of where responsibility lies. When learners know what the system is and who stands behind it, trust can be grounded in accountability rather than illusion.

Keep humans in the feedback loop

AI can draft lessons, generate quizzes, or personalize recommendations — but the educator’s role in reviewing, refining, and validating remains essential. Human-in-the-loop review is not inefficiency; it’s integrity. It signals that AI outputs are tools in service of pedagogy, not substitutes for it.

Design interfaces that clarify boundaries

Users need help understanding what AI can and cannot do. Interfaces should reveal the limits of the model — whether that means uncertainty markers, confidence scores, or simple disclaimers that invite human verification. The goal isn’t to diminish trust, but to ground it in realism.
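As a concrete illustration, here is a minimal sketch in TypeScript of how an interface might translate a model-reported confidence score into a plain-language boundary marker that invites human verification. The type names, thresholds, and wording are illustrative assumptions of ours, not recommendations drawn from the study:

```typescript
// Hypothetical shape of an AI-generated answer as an ed-tech interface might receive it.
interface GeneratedAnswer {
  text: string;
  confidence: number; // assumed to be a model-reported value in [0, 1]
}

// Map a confidence score to a plain-language uncertainty marker.
// The thresholds here are illustrative, not prescribed by the study.
function uncertaintyLabel(confidence: number): string {
  if (confidence >= 0.9) {
    return "High confidence. Still AI-generated; spot-check key facts.";
  }
  if (confidence >= 0.6) {
    return "Moderate confidence. Please verify before using in class.";
  }
  return "Low confidence. Treat as a draft and confirm with a trusted source.";
}

// Attach the disclaimer so the limits of the model stay visible alongside the content.
function renderWithBoundary(answer: GeneratedAnswer): string {
  return `${answer.text}\n\n[AI-generated | ${uncertaintyLabel(answer.confidence)}]`;
}

// Example: a low-confidence answer is clearly flagged for human review.
console.log(
  renderWithBoundary({
    text: "The Treaty of Westphalia was signed in 1648.",
    confidence: 0.55,
  })
);
```

The point of the sketch is simply that the boundary is expressed in the learner's language rather than as a raw score, so trust is grounded in realism rather than diminished.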

Build trust through co-design

Finally, trust in AI is social. Teachers, students, and parents all interpret AI outputs through their own experience and expectations. Involving these groups in pilot programs, feedback sessions, and policy shaping helps surface where the system earns or loses credibility. Co-design isn’t just good practice; it’s how institutions learn to calibrate collective trust.

The study’s deeper insight is that trustworthiness in AI is not a static attribute. It’s a relationship — one that must be continuously negotiated between technical reliability and ethical alignment. In education, that relationship defines the difference between a tool that empowers learning and one that erodes it.

Radical Candor

AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust because it does not possess emotive states or can be held responsible for their actions.

Mark Ryan, via Montreal AI Ethics Institute

If your users can’t trust the technology, you’re not going to bring it into your product. … We pour a lot of resources into closing potential risk factors, like toxicity or bias.

Aidan Gomez, Cohere

Thank You!
