Diving Deep on Trust and Competency
Hello, fellow humans! Today, we're looking at challenging questions about how AI is affecting our ability to trust, and at a future where AI is more deeply integrated into our work and our phones.
But we have an actionable roadmap for establishing your human credibility. Consider Lovable, widely described as the fastest-growing software startup in history: it reached $200 million in annual recurring revenue (ARR) in its first year, is valued at $6.6 billion, and emphasizes building in public. Today's main feature turns that idea into a roadmap, showing how you, too, can build in public and earn a reputation for the human skills AI cannot replicate.
Today's Agenda
Headlines: What do you trust in an AI world? Plus, the future of AI at work and on phones.
Feature: Building in public with AI to demonstrate competence in an AI-assisted world.
Radical Candor: The problem with AI "workslop."
This newsletter you couldn’t wait to open? It runs on beehiiv — the absolute best platform for email newsletters.
Our editor makes your content look like Picasso in the inbox. Your website? Beautiful and ready to capture subscribers on day one.
And when it’s time to monetize, you don’t need to duct-tape a dozen tools together. Paid subscriptions, referrals, and a (super easy-to-use) global ad network — it’s all built in.
beehiiv isn’t just the best choice. It’s the only choice that makes sense.
Headlines
What Do You Trust in an AI World?
Building trustworthy AI systems has emerged as one of the most critical challenges as AI becomes more autonomous and influential in decision-making processes. The challenge spans multiple dimensions: technical reliability, transparency in decision-making, accountability for outcomes, and ensuring AI systems remain aligned with human values and intentions.
In healthcare, compliance teams are working to align AI models with regulations like the EU AI Act and FDA device-software guidance. The challenge is particularly acute in scientific research, where 79% of interviewed scientists cited trust and reliability concerns as primary barriers to AI adoption. Scientists want AI partnership but cannot yet trust it for core research tasks due to concerns about hallucinations and inconsistent outputs.
The concept of "epistemic trust" has become central to discussions about AI-mediated information systems. As AI systems increasingly mediate our access to information, there's growing concern about maintaining the ability to distinguish reliable from unreliable sources. This has led to calls for greater transparency, explainability, and "epistemic friction": deliberate barriers that encourage critical evaluation of AI-generated content.
The Future of AI at Work and on Phones
The field of human-computer interaction is undergoing rapid transformation as AI systems become more sophisticated and ubiquitous. Traditional interfaces are giving way to more natural, multimodal interactions that can understand context, emotion, and intent. This evolution is moving us toward "invisible interfaces" where AI systems anticipate needs and respond without explicit commands.
Microsoft's New Future of Work Report 2025 highlights how ethnographic and HCI research demonstrates that workers adapt technology in creative ways, and that participatory design, in which workers are co-designers, results in more effective AI integration. This represents a shift from technology-centered to human-centered AI design.
Emerging multimodal AI agents are gaining capabilities to interact with smartphones and other devices in ways that mirror human behavior. This development reframes mobile HCI automation from a software-bridge problem into an embodied perception-and-action problem. This aligns with how people naturally operate devices and could make AI assistance more intuitive and effective. I’m looking at you, Apple Siri.
Feature
Building in Public with AI: How to Demonstrate Competence in an AI-Assisted World
Right now, we're all struggling to know what is real and what isn't. Social media already made that hard, but now that generative AI can produce work comparable to high-quality human work in some domains, it can be nearly impossible to distinguish human signal from AI noise. A job candidate can submit a perfect portfolio, entirely AI-generated. A consultant can deliver an impressive analysis that collapses under basic questioning. A professional with polished deliverables can't explain their own reasoning.
We've reached a critical inflection point: when AI outputs are indistinguishable from expert human work, we have to ask different questions and present our expertise differently. Anyone can produce professional-quality deliverables by prompting ChatGPT or Claude. Traditional competence signals such as portfolios, credentials, and polished artifacts have become unreliable. Hiring managers, clients, and colleagues face a new challenge: distinguishing who can actually think from who can merely prompt.
A better approach is to make your competence visible through how you work, not just what you produce. Your ability to critically evaluate AI, strategically direct its contributions, and navigate ethical complexities becomes the differentiator. The solution isn't hiding AI use or pretending you did everything yourself. It's building in public—making your AI collaboration process visible to demonstrate genuine capability.
Recent research on Collaborative AI Literacy provides a validated framework for exactly this challenge. This article shows you how to apply it.
The Three Dimensions That Are Redefining AI Competence
Researchers at CSIRO recently validated a model identifying three competencies that distinguish effective human-AI collaboration from mere prompt execution. These dimensions provide structure for building in public.
AI Evaluation. This means assessing AI capabilities and limitations during ongoing use, identifying when AI adds value versus when it doesn't, and continuously recalibrating your collaboration. Demonstrating human competence here means being able to articulate why you accepted or rejected an AI suggestion.
AI Usage. Demonstrate that you're not just operating tools, but engaging in interactive, goal-directed collaboration. This includes tailoring communication to optimize AI output, building upon its contributions strategically, and directing it toward desired outcomes. It's important to show how you shaped the AI's contribution and aligned its outputs with your goals.
AI Ethics. This means understanding the implications of AI-assisted decisions, navigating responsible use in your specific context, and recognizing when AI shouldn't be used at all. You need to be able to explain the ethical considerations in your work.
When we build in public, these three dimensions transform from abstract competencies into demonstrable capabilities. Instead of producing opaque outputs, we show sophisticated collaboration. Let's examine how to make each dimension visible.
In general, these can be difficult to navigate in professional settings because it's usually not wise to give people reasons to question decisions once they've been made. But the new AI reality means we have to reveal our decision-making process to demonstrate the human judgments, insights, and wisdom that AI cannot fake.
Framework One: Expose Your Evaluation Process
Start by creating decision logs for significant work. Document when you accepted versus rejected AI suggestions with explicit reasoning. For example: "Claude suggested approach X, but given our regulatory constraint Y, I chose Z instead." Or: "I used AI for initial research but verified claims against primary sources because the stakes were high." The template is simple: What AI suggested → What you questioned → What you changed → Why.
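If you want this log to stay lightweight and consistent, a small script can enforce the template. Here's a minimal sketch in Python; the field names and file path are illustrative assumptions, not a prescribed format.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class DecisionLogEntry:
    """One accept/reject decision from an AI-assisted task."""
    suggested: str   # what the AI suggested
    questioned: str  # what you questioned about it
    changed: str     # what you changed ("nothing" if accepted as-is)
    why: str         # your reasoning
    logged_on: str = field(default_factory=lambda: date.today().isoformat())

def append_entry(entry: DecisionLogEntry, path: str = "decision_log.jsonl") -> None:
    """Append one decision as a JSON line, keeping the log append-only and diff-friendly."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Hypothetical example entry, echoing the template above.
append_entry(DecisionLogEntry(
    suggested="Claude suggested approach X for the rollout plan.",
    questioned="Whether X satisfies our regulatory constraint Y.",
    changed="Replaced X with Z.",
    why="Constraint Y is non-negotiable in our jurisdiction.",
))
```

The format matters less than the habit: one structured entry per significant decision, kept somewhere you can cite later.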
Next, make capability boundaries explicit. State clearly where AI helped and where it didn't: "AI generated the first draft, but I restructured the argument because it missed stakeholder consideration A." Or: "I used AI for data analysis but not interpretation because domain expertise matters here." This transparency demonstrates your strategic thinking about task allocation.
When presenting AI-assisted work in meetings, be transparent and explain what you validated and how. Point out where AI's output required your judgment. Show alternative approaches you considered. This real-time assessment makes your evaluation capability visible to colleagues and clients.
Here’s an example implementation checklist:
Add "AI collaboration notes" to key deliverables this week
Document one accept/reject decision weekly
When sharing AI-assisted work, include 2-3 sentences on your evaluation process
Create one "why I overrode AI" case study monthly
The competence signal here is clear: People who can articulate evaluation criteria demonstrate genuine understanding. People who can't are just executing prompts.
Framework Two: Demonstrate Strategic Usage
Here, the point is to show how you direct and shape your AI collaboration, revealing expertise that AI cannot replicate on its own.
In this framework, you can build in public by documenting your collaborative approach. Show your process: "I started by giving AI context X, then refined with constraint Y, then used the output to develop framework Z." For significant work, create a brief "how I approached this" write-up. This demonstrates strategic thinking beyond simple prompting.
When you clearly differentiate your contributions from the AI's, you make the division of labor transparent. For example: "AI: Initial research synthesis | Me: Strategic framework and recommendations." Or: "AI: Draft content generation | Me: Argument structure and stakeholder considerations." This shows you understand which tasks benefit from AI assistance and which require human judgment.
You should also document examples of iterative refinement, showing the back-and-forth: Draft 1 from the AI was a generic solution. Draft 2 improved after you added context X. Draft 3 incorporated that context but missed consideration Y. The final version integrated Y and was refined for a specific target audience. This record reveals both your ability to guide AI and your domain expertise.
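To make that division of labor and refinement history concrete, you could attach a small provenance note to each deliverable. A minimal sketch in Python follows; the structure and labels are illustrative assumptions.

```python
# A lightweight provenance note for one deliverable: who did what,
# plus the refinement history. Fields are illustrative, not a standard.
deliverable = {
    "title": "Q3 market analysis",
    "division_of_labor": {
        "AI": ["initial research synthesis", "first-draft content"],
        "Me": ["argument structure", "stakeholder considerations",
               "strategic recommendations"],
    },
    "iterations": [
        "Draft 1: AI produced a generic solution.",
        "Draft 2: regenerated after I added context X.",
        "Draft 3: AI incorporated X but missed consideration Y.",
        "Final: I integrated Y and refined for the target audience.",
    ],
}

# Render the note as a footer to paste into the shared document.
for who, tasks in deliverable["division_of_labor"].items():
    print(f"{who}: " + " | ".join(tasks))
for step in deliverable["iterations"]:
    print(step)
```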
You may also work on novel problems where no established approach exists; in that case, document how you adapted your AI collaboration. "No precedent existed for this analysis, so I combined AI research capabilities with first-principles thinking to develop approach X." Or: "AI provided patterns from domain A; I applied them to our domain B with these specific adjustments." This demonstrates transferable problem-solving ability.
Here’s a sample implementation checklist:
Create a "collaboration method" template for project documentation
Share one "iterative refinement" example monthly on your blog or LinkedIn
Annotate work samples with human versus AI contributions
Document one novel problem approach quarterly
Framework Three: Make Ethics and Judgment Visible
The core principle: Ethical reasoning and situational judgment are distinctly human capabilities worth demonstrating.
When AI use involves judgment calls, document your ethical decisions publicly. "I didn't use AI for X because privacy concerns outweighed efficiency gains." Or: "I used AI but verified against authoritative sources because recommendation stakes were high." Or: "I disclosed AI use to stakeholders because transparency matters in this context." These statements demonstrate values-driven decision-making.
Build in public by explicitly acknowledging limitations and uncertainty. "AI provided this analysis, but I'm less confident about trend X given limited historical data." Or: "This recommendation reflects AI synthesis plus my judgment; alternative perspective Y deserves consideration." Or: "AI couldn't account for political dynamics, which may significantly affect implementation feasibility." This calibrated confidence is a competence signal.
Develop and share your decision framework for AI use. Create guidelines for when you use AI versus when you don't, what you always verify versus what you trust, and how you handle edge cases. Make this framework public—blog it, share it with your team, refine it based on experience. This demonstrates systematic thinking about responsible AI use.
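One way to keep such guidelines versionable and shareable is to express them as data and render them on demand. Here's a minimal sketch in Python; the categories and rules are illustrative assumptions, not recommendations for your context.

```python
# Personal AI-use guidelines expressed as data so they can be versioned,
# shared with a team, and refined over time. Entries are examples only.
AI_USE_GUIDELINES = {
    "Use AI for": [
        "initial research synthesis",
        "first drafts of routine documents",
    ],
    "Never use AI for": [
        "anything involving client-identifiable data",
        "final legal or compliance language",
    ],
    "Always verify": [
        "quantitative claims, against primary sources",
        "citations and quotations",
    ],
    "Disclosure": [
        "state AI involvement whenever stakeholders rely on the output",
    ],
}

def render(guidelines: dict[str, list[str]]) -> str:
    """Render the guidelines as plain text for a blog post or team wiki."""
    lines = ["My AI Use Guidelines", ""]
    for section, rules in guidelines.items():
        lines.append(section)
        lines.extend(f"  - {rule}" for rule in rules)
        lines.append("")
    return "\n".join(lines)

print(render(AI_USE_GUIDELINES))
```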
Example implementation checklist:
Create a personal "AI use guidelines" document this month
Document one ethical decision monthly with reasoning
Include uncertainty and limitation statements in significant deliverables
Share your decision framework with your team or professional community
Building Your Public Competence Portfolio
Now integrate these three frameworks into a systematic practice of building in public.
Create quarterly "collaboration retrospectives." Document 3-5 significant AI collaborations each quarter. Include your evaluation decisions, usage strategies, and ethical considerations. Show how your approach evolved. This creates a visible track record of sophisticated AI collaboration.
Develop public artifacts that demonstrate your process. Write blog posts explaining your AI collaboration methods. Create case studies with annotated decision processes. Publish "how I approached this problem" write-ups. Share templates others can adapt. The portfolio structure shifts from "outputs I created" to "how I think and work with AI."
Include decision logs with reasoning, process documentation with commentary, and evolution of your approach over time. Don't omit failures—document what didn't work and why. This demonstrates genuine engagement versus cherry-picked AI outputs.
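If you want a consistent shape for those retrospectives, a simple record type can tie the three dimensions together. A minimal sketch in Python, with illustrative field names and a hypothetical example entry:

```python
from dataclasses import dataclass

@dataclass
class CollaborationRetrospective:
    """One retrospective entry; collect three to five per quarter."""
    project: str
    evaluation_decisions: list[str]    # accept/reject calls, with reasoning
    usage_strategy: str                # how you directed and shaped the AI
    ethical_considerations: list[str]  # judgment calls and disclosures
    what_evolved: str                  # how your approach changed
    what_failed: str                   # what didn't work, and why

entry = CollaborationRetrospective(
    project="Competitive landscape brief",
    evaluation_decisions=[
        "Rejected the AI's market-size estimate; a primary source disagreed.",
        "Accepted its competitor grouping after spot-checking two firms.",
    ],
    usage_strategy="AI for synthesis; framework and recommendations were mine.",
    ethical_considerations=["Disclosed AI use to the client up front."],
    what_evolved="I now state constraints before asking for drafts.",
    what_failed="Early prompts without context produced generic analysis.",
)
```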
Your 30-Day Implementation Sprint
Week One focuses on evaluation. Add evaluation notes to your next three deliverables. Document three accept/reject decisions with reasoning. Write one "why I overrode AI" example and share it.
Week Two focuses on usage. Create your collaboration process template. Document the division of labor for one project explicitly. Share one iterative refinement example publicly.
Week Three focuses on ethics. Draft your personal AI use guidelines. Add limitation statements to one significant deliverable. Document one ethical decision with reasoning.
Week Four focuses on portfolio building. Create your first collaboration retrospective. Publish one "how I work with AI" article. Set up your quarterly documentation rhythm.
The Competitive Advantage of Transparency
The reality is that most professionals will either hide their AI use or produce opaque outputs and hope no one notices the difference. That creates a real opportunity for anyone willing to differentiate themselves.
By building in public—documenting your evaluation process, showing your strategic collaboration, and explaining your ethical judgment—you differentiate yourself fundamentally. Better prompting won’t replicate these capabilities. They require genuine understanding, domain expertise, and strategic thinking.
The professionals who can demonstrate these three dimensions of collaborative AI literacy will capture disproportionate opportunities. In 6 months, you can have a substantial body of work demonstrating sophisticated AI collaboration, while your competitors will have portfolios of indistinguishable outputs and no process documentation to back them up.
As your demonstrated capability becomes your career capital, the visibility you create through building in public becomes your competitive moat.
Start today. Pick one framework from this article. Implement it for 30 days. Make your competence visible. The future belongs to those who can prove they're more than prompt executors.
Radical Candor
AI 'workslop' refers to AI-generated work content that appears useful but lacks substance, is incomplete, or contains inaccuracies. Such content undermines productivity by forcing recipients to interpret, correct, or redo the work... Workslop may be a key reason why individual productivity gains are not seen at the group or organizational level.