Find your customers on Roku this Black Friday
As with any digital ad campaign, the important thing is to reach streaming audiences who will convert. To that end, Roku’s self-service Ads Manager stands ready with powerful segmentation and targeting options. After all, you know your customers, and we know our streaming audience.
Worried it’s too late to spin up new Black Friday creative? With Roku Ads Manager, you can easily import and augment existing creative assets from your social channels. We also have AI-assisted upscaling, so every ad is primed for CTV.
Once you’ve done this, you can easily set up A/B tests to flight different creative variants and Black Friday offers. If you’re a Shopify brand, you can even run shoppable ads directly on-screen so viewers can purchase with just a click of their Roku remote.
Bonus: we’re gifting you $5K in ad credits when you spend your first $5K on Roku Ads Manager. Just sign up and use code GET5K. Terms apply.
Today's Agenda
Hello, fellow humans! Today, we have outstanding reports from Harvard Business Review, the Council on Foreign Relations, and VentureBeat on how developers view AI.
The bottom line is that we need more trust: trust in our organizations, trust in the AI technology itself, and trust in the priorities and objectives that top leadership is setting for this global AI experiment. I’ll cover the problem space and solution opportunities, but I strongly encourage you to click through and read the original reporting. Trust is a challenging question, so let’s dig in.
Human-in-the-Loop Series: The Trust Gap—Why AI Adoption Is Failing and How to Fix It
Sources
Trust in company-provided AI fell 31% between May and July 2025. Trust in agentic AI systems dropped 89% in the same period. Only 9% of developers believe AI code can be used without human oversight. These figures come from Deloitte’s TrustID index, which regularly surveys employee sentiment and found that employees are deeply skeptical of AI’s value and capabilities, according to the summary of AI trust from Harvard Business Review.
Is this a crisis? You might think so if you believe AI adoption is a business imperative. There are serious shortcomings in the technology that require human monitoring and management.
Even though AI capabilities seem to be growing exponentially, organizational trust is collapsing. Employees are bypassing company-sanctioned AI tools for unauthorized alternatives they trust more. Shadow IT is back, and this time it's powered by ChatGPT subscriptions and personal API keys.
This isn't a problem that more impressive models or internal tool rollouts will solve. The trust gap represents a fundamental misalignment between how organizations are deploying AI and how humans actually build confidence in new systems. Nearly half of frontline employees are using unapproved AI tools, not the AI tools that their organizations are working to roll out.
Why Trust Matters More Than Technology
Trust isn't about giving our colleagues the warm fuzzies—it's a mental and emotional tool that we rely on to know what is reliable. In other words, trust is human infrastructure. Without it, even the most capable AI becomes backlog-ware or, worse, a liability that teams actively work around rather than integrate.
The economics are brutal. Companies invest millions in AI deployments that employees won't use. The promised productivity gains get consumed by constant verification overhead. Shadow AI creates security nightmares, compliance gaps, and integration chaos. Teams avoid using AI for anything critical, which means it never delivers the transformational value that AI promises.
Meanwhile, organizations that build genuine trust in their AI systems will outpace competitors that have superior technology but inferior adoption. Trust enables faster iteration, broader deployment, and deeper integration into actual work. The trust gap isn't just a problem; it's a competitive opportunity for those who solve it first. I’ve written about how to define a relationship of trust with AI, and about the challenges of AI and trust at scale, but this study helps us understand the situation more deeply.
And that is just the internal trust challenge. The Council on Foreign Relations discusses how these trust issues expand into the consumer space and into national security concerns. As consumers, we need confidence that these AI systems are securing our personal information, and as citizens, we need trust that all of our digital infrastructure is secure, from our Apple IDs to our electricity and internet providers to the security infrastructure that supports our national security readiness.
Our Trust Expectations Betray Us
We expected that AI would prove itself through its impressive capabilities, that improvements would naturally build confidence, and that productivity gains would drive organic adoption. With all of that, surely workers would embrace AI as a collaborative partner.
Instead of meeting that expectation, we're getting something different:
Capability without reliability. AI is impressive but inconsistent. It generates confident nonsense with the same tone it uses for accurate information. We see performance drop unpredictably on edge cases, and the system cannot tell you when it doesn’t know something or when its own confidence is low.
Speed without judgment. Fast answers that require slow verification. The time saved in generation gets consumed in review, and then some. Your net productivity gain approaches zero, or might even go negative.
Autonomy without accountability. When AI makes mistakes, who's responsible? The developer who deployed it? The user who accepted its recommendation? The vendor who trained it? "The AI decided" isn't an acceptable answer in high-stakes contexts, but we don’t have any reliable frameworks for AI accountability.
Opacity without explanation. Black-box reasoning that asks for trust without providing justification. Even when AI gets it right, not understanding why makes it impossible to calibrate appropriate trust.
Three specific failures are driving the trust crisis:
The Reliability Gap: AI hallucinations aren't edge cases—they're a fundamental characteristic of how these systems work. They generate plausible-sounding falsehoods with full confidence. Inconsistent outputs for similar inputs make them unpredictable. Most critically, the systems can't reliably tell you when they're uncertain.
The Security and Control Gap: Vulnerabilities to prompt injection and data leakage are well-documented but poorly understood by users. Companies struggle with unclear data governance—what happens to the information we feed into AI systems? Where does it go? Who can access it? The lack of audit trails means we can't reconstruct how decisions were made, which makes AI unsuitable for regulated contexts.
Research from Anthropic, the UK AI Security Institute, and the Alan Turing Institute found that as few as 250 malicious documents can create a backdoor in an LLM, compromising the integrity of the entire model. The security risk is real.
The Alignment Gap: Currently, AI optimizes for metrics that don't necessarily reflect human values. It produces solutions that may be technically correct but contextually wrong. It automates decisions that actually require human judgment. The result is a persistent disconnect between AI recommendations and organizational reality, one that erodes confidence with every misalignment.
How Human-in-the-Loop Builds Trust
The human-in-the-loop model addresses the trust gap through four mechanisms:
Trust through transparency. Clear delineation of AI versus human responsibilities. Explicit acknowledgment of what AI can and cannot do reliably, with those limitations made visible. We need to see reasoning processes, not just conclusions.
Trust through control. We need to build systems that maintain human decision authority on what matters. AI can supplement decisions, but we cannot let it replace judgment. Systems should have override mechanisms at every critical juncture, with graceful degradation when AI confidence is low.
Trust through accountability. We need to ensure that throughout our systems, humans are ultimately accountable for outcomes. AI outputs are useful as recommendations, not as decisions. Audit trails capture both AI reasoning and human choices. Responsibility cannot be offloaded to "the algorithm."
Trust through competence matching. AI handles tasks where its capabilities are reliable. Humans handle tasks requiring judgment, context, and values. Clear handoffs between AI and human work. Continuous calibration of appropriate trust levels based on performance.
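To make the control and accountability mechanisms concrete, here is a minimal sketch of a human-in-the-loop gate in Python. The confidence threshold, the AIResult fields, and the audit-log format are illustrative assumptions, not a prescribed implementation; the point is that low-confidence output degrades gracefully to a human reviewer, and both the AI's reasoning and the human's choice land in an audit trail.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json

# Illustrative threshold: below this, the AI output escalates to a human.
CONFIDENCE_THRESHOLD = 0.80

@dataclass
class AIResult:
    task: str          # what the AI was asked to do
    output: str        # the AI's recommendation
    reasoning: str     # the explanation shown to the reviewer (transparency)
    confidence: float  # even a crude score is better than none

def audit(event: dict, path: str = "ai_audit_log.jsonl") -> None:
    """Append one record capturing both AI reasoning and the human choice."""
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def human_review(result: AIResult) -> str:
    """The escalation path: a person accepts, edits, or rejects the output."""
    print(f"REVIEW NEEDED: {result.task}")
    print(f"AI suggests: {result.output}")
    print(f"Reasoning: {result.reasoning}")
    return input("accept / edit / reject? ").strip().lower() or "reject"

def gate(result: AIResult, accountable_human: str) -> str:
    """Route the output: auto-accept only above the threshold, else escalate."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        decision = "auto-accepted"
    else:
        decision = human_review(result)  # graceful degradation to a person
    audit({"task": result.task, "ai_output": result.output,
           "ai_reasoning": result.reasoning, "confidence": result.confidence,
           "decision": decision, "accountable_human": accountable_human})
    return decision
```

Even a gate this crude exercises all four mechanisms: reasoning is surfaced, a human can override, every decision names an accountable person, and the threshold can be tuned per task as trust is calibrated.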
This isn't about slowing AI adoption—it's about making it sustainable. Trust enables velocity. Distrust will create friction that will eventually grind AI initiatives to a halt.
Building Organizational AI Trust
Establish Clear AI Governance
Define acceptable use cases with explicit risk thresholds. Not all applications of AI carry the same stakes—drafting content is different from approving financial transactions. Create transparent policies about data usage and privacy that employees can actually understand. Build accountability frameworks that answer: who approves AI recommendations in different contexts? Implement security protocols that address real vulnerabilities, not hypothetical ones.
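One way to keep such a policy legible is to write it down as data rather than bury it in a slide deck. The tiers, use cases, and approver roles below are illustrative assumptions, just a sketch of the idea that drafting content and approving financial transactions should not share a risk threshold:

```python
# Illustrative governance table: each use case gets an explicit risk tier,
# an approver, and a data rule. The entries are assumptions, not a real policy.
AI_USE_POLICY = {
    "draft_marketing_copy": {
        "risk": "low", "approver": "author", "data_rule": "no customer PII in prompts"},
    "summarize_support_tickets": {
        "risk": "medium", "approver": "team lead", "data_rule": "internal use only"},
    "generate_production_code": {
        "risk": "medium", "approver": "code reviewer", "data_rule": "no secrets in prompts"},
    "approve_financial_transaction": {
        "risk": "high", "approver": "human decision only; AI is advisory", "data_rule": "regulated data"},
}

def required_approver(use_case: str) -> str:
    """Unknown use cases get the most cautious treatment by default."""
    policy = AI_USE_POLICY.get(use_case)
    return policy["approver"] if policy else "escalate to the governance board"
```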
Design for Appropriate Reliance
Don't deploy AI in contexts where trust hasn't been earned. Start with low-stakes applications where mistakes are recoverable and learning is safe. Earn trust incrementally before scaling to critical systems. Provide confidence indicators with AI outputs—even crude ones help users calibrate. Enable easy escalation to human oversight when AI reaches its limits.
Invest in AI Literacy
Help teams understand what AI actually does and doesn't do. These aren't magic boxes—they're statistical pattern-matching systems with specific capabilities and limitations. Train people to calibrate trust appropriately for different tasks. Teach recognition of signals that AI is likely wrong. Build internal expertise rather than treating AI as an externally-provided mystery.
Create Feedback Loops
Systematically collect AI failure cases and share them. Run regular trust audits: how confident are teams in AI outputs across different applications? Build mechanisms for reporting concerns without penalty—psychological safety is essential for honest feedback. Make improvements visible so teams see that their concerns drive changes.
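A feedback loop doesn't need heavy tooling to get started. As a sketch (the field names and file location are assumptions), a shared failure log plus a periodic roll-up is enough to turn individual bad experiences into organizational memory:

```python
import csv
from collections import Counter
from pathlib import Path

LOG = Path("ai_failure_log.csv")  # assumed shared location

def report_failure(tool: str, task: str, failure_mode: str, impact: str) -> None:
    """Anyone can file a failure case: no approval step, no penalty."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["tool", "task", "failure_mode", "impact"])
        writer.writerow([tool, task, failure_mode, impact])

def trust_audit() -> Counter:
    """Roll up failure modes so recurring problems are visible to everyone."""
    if not LOG.exists():
        return Counter()
    with LOG.open(newline="", encoding="utf-8") as f:
        return Counter(row["failure_mode"] for row in csv.DictReader(f))
```

Sharing that roll-up in a regular trust audit makes improvements visible, which is exactly what closes the loop.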
Building Team AI Trust
Implement Trust Checkpoints
At each stage of the problem-solving framework from Part 2, ask: What is AI doing here? What am I doing? How confident should I be in this output? What would change my trust level? Where do I need human verification? These questions transform abstract concern into concrete practice.
Adopt "Trust but Verify" Protocols
Only 9% of developers think AI code needs no oversight—be in the 91% who understand verification is essential. Establish clear review standards for AI-generated work. Document cases where AI got it wrong and share learning. Build organizational memory about AI failure modes.
Use AI Where Trust Aligns with Stakes
High-trust contexts appropriate for more autonomy: drafting and ideation, data summarization, generating options for human selection, routine low-consequence tasks.
Low-trust contexts requiring human oversight: security-critical code, customer-facing communications, strategic decisions, anything with legal, ethical, or safety implications.
This isn't permanent—as AI improves and trust grows, the boundaries shift. But starting with appropriate caution beats recovering from catastrophic failures.
Build Team Norms
When do we use company AI versus external tools? What gets reviewed, by whom, at what depth? How do we handle disagreement with AI recommendations? What's our process when AI fails? Explicit norms prevent individual guesswork and collective drift toward either excessive trust or paralytic skepticism.
With Everyone on Offense, Defense is the Differentiator
The Council on Foreign Relations identifies a serious gap in the AI defensive game. While all the major AI players are pushing full-throttle to build the superior model, the public sector isn't committing enough effort to hardening attack surfaces or to using AI to monitor for AI-driven attack vectors:
Half of critical infrastructure organizations reported facing AI-powered attacks in the past year, according to Deep Instinct’s Voice of SecOps survey. Anthropic documented North Korean operatives using frontier-AI services to secure remote employment at U.S. Fortune 500 technology companies fraudulently. According to Check Point Research, criminals are expected to use the Hexstrike-AI framework to reduce the time needed to exploit critical zero-day vulnerabilities from days to minutes.
And while the private sector is doing better (80% of private companies are using AI to strengthen cybersecurity, and they’re able to contain incidents 98 days faster), the fraud numbers are still climbing:
Deepfakes—synthetic images, videos, or voices used to impersonate individuals—accounted for 7 percent of all detected fraud by the end of 2024. Deepfake fraud losses reached $410 million in the first half of 2025, already surpassing the total for 2024.
AI is going to be as much a vector for attack as anything else, so the successful organization will be able to use it for defense as well as offense.
The Path Forward: Trust as Design Principle
Trust isn't binary; it's contextual. We shouldn't "trust AI" or "distrust AI" categorically. Trust should be calibrated to specific tasks, contexts, and stakes. The goal isn't maximum trust; it's appropriate trust. That makes it a design problem. We need a comprehensive picture of where and how trust is most critical for our organizations and operations, and we need to be ready to deploy well-designed AI solutions in those spaces.
Governments and public enterprises “should treat all AI-generated code as untrusted input subject to mandatory review,” and “should adopt zero-trust architectures designed specifically for AI systems—sandboxed agents, least-privilege access, and real-time anomaly monitoring,” according to the Council on Foreign Relations. But that might be overkill for a smaller, less connected organization. Where you live in the network matters.
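What treating AI-generated code as untrusted input looks like depends on your stack. As a minimal, Linux-only sketch (the limits and the snippet are assumptions, not the CFR's prescription), you can at least run such code in a separate, isolated process with least-privilege resource ceilings and a hard timeout instead of importing it into your own process:

```python
import resource
import subprocess
import sys

def run_untrusted(code: str, cpu_seconds: int = 2,
                  memory_bytes: int = 256 * 1024 * 1024) -> subprocess.CompletedProcess:
    """Run AI-generated code in a child process with CPU and memory ceilings (Linux only)."""
    def apply_limits():
        # Least privilege, crudely: cap CPU time and address space in the child.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))

    return subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env vars and user site-packages
        preexec_fn=apply_limits,             # applied in the child before exec
        capture_output=True, text=True, timeout=10,
    )

# Anomalies (timeouts, crashes, unexpected output) are exactly what should escalate to a human.
result = run_untrusted("print(sum(range(10)))")
print(result.returncode, result.stdout.strip())
```

This is a starting point, not a real sandbox: genuine zero-trust deployments add network isolation, filesystem restrictions, and the real-time anomaly monitoring the CFR calls for.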
Overall, the human-in-the-loop framework from Part 2 isn't just about problem-solving—it's about creating a trust architecture. Each stage has explicit checkpoints where humans verify, validate, and decide. It maintains human agency while leveraging AI capability. It creates accountability without creating bottlenecks.
Organizations that learn how to deliver trust both internally and externally will have the real AI advantage. Those that don't will face continued shadow IT, underutilization, and eventually abandonment of AI initiatives. Imagine a workday where everyone is putting out fires all day every day. Now imagine those fires at AI automation scale. That is the risk. The competitive advantage won't go to whoever deploys AI first—it will go to whoever deploys AI trustably.
Bottom Line
The trust gap won't close by making AI more autonomous. It will close by making human oversight more effective, efficient, and embedded in how we work with AI.
Trust in AI isn't about believing the technology is perfect. It's about having appropriate confidence in a system where humans and AI each do what they do best—with clear accountability, transparent limitations, and effective oversight.
When trust collapses, AI sits unused. When trust is calibrated correctly, AI becomes genuinely transformative.
The next step isn't more AI. It's more thoughtful integration.
That's what human-in-the-loop delivers: not just better problem-solving, but trustworthy problem-solving at scale.
Radical Candor
Engendering trust is crucial to scaling the creation and adoption of AI across any organization. When trust is high, the results are striking: Our research found that employees are nearly 10 times more likely to see agentic AI as critical to their team’s success, almost three times more likely to use generative AI daily, and save an average of two hours each week compared with peers using the same tools without trust. Workers who believe these systems were built for them and will deliver real value to their work use them more and use them better. Because ultimately, AI’s biggest hurdle isn’t technical; it’s human.


