Create how-to video guides quickly and easily with AI
Tired of explaining the same thing over and over again to your colleagues?
It’s time to delegate that work to AI. Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.
1️⃣Share or embed your guide anywhere
2️⃣Turn boring documentation into stunning visual guides
3️⃣Save valuable time by creating video documentation 11x faster
Simply click capture on the browser extension and the app will automatically generate step-by-step video guides complete with visuals, voiceover and call to action.
The best part? The extension is 100% free
Today's Agenda
Hello, fellow humans! It’s tempting to look at algorithms and think, “it’s just math!” and “how could math discriminate?” But fundamentally, the math inside the algorithm is about human choices; humans have chosen which metrics are important and which are not. Humans choose how much weight to give each of those metrics. Those are inherently discriminatory choices. The question then becomes what those choices represent and whether those measures harm people unfairly.
What Trump’s AI Executive Order is Really About
This week's White House executive order targeting state AI laws bills itself as prioritizing innovation and AI strategy. But strip away the rhetoric about "winning the race with China" and something more fundamental remains: a 50-year-old civil rights debate with trillion-dollar stakes.
California and Colorado recently passed laws requiring AI transparency and, more importantly, holding companies accountable for their algorithms' discriminatory results. The White House alleges that this forces AI systems to produce "false results" to avoid discrimination. States counter that they're simply applying existing civil rights law to algorithmic decisions. Both sides are talking past each other because they're fighting about something nobody wants to name directly: whether discrimination should be judged by outcomes or intentions.
This matters far beyond AI policy. The answer determines whether civil rights law survives the age of automation.
The Concept at the Heart of the Controversy
Here's what "disparate impact" actually means: your system discriminates even if you didn't mean to. If your hiring algorithm rejects Black applicants at twice the rate of white applicants, you've got a problem—regardless of whether anyone intended that outcome.
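To make that arithmetic concrete, here is a minimal sketch of how this kind of disparity is commonly measured: compare each group's selection rate against the highest group's rate and flag ratios that fall below the EEOC's "four-fifths" rule of thumb. The group names and numbers are purely illustrative, not drawn from any real system.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

# Hypothetical outcomes from a hiring model (illustrative numbers only)
rates = {
    "group_a": selection_rate(selected=90, applicants=300),  # 0.30
    "group_b": selection_rate(selected=45, applicants=300),  # 0.15
}

reference = max(rates.values())
for group, rate in rates.items():
    ratio = rate / reference
    # The EEOC's "four-fifths" rule of thumb flags ratios below 0.8
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio vs. highest {ratio:.2f} -> {flag}")
```

Group B is selected at half the rate of group A, so the ratio of 0.5 falls well below the 0.8 threshold. Nothing in this calculation asks what anyone intended.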
This doctrine emerged from Griggs v. Duke Power in 1971, when the Supreme Court ruled that requiring a high school diploma for manual labor jobs was discriminatory because it screened out Black workers at disproportionate rates and wasn't necessary for the job. The company didn't intend to discriminate. That didn't matter. The outcome did.
For 50 years, this framework has governed employment, lending, and housing decisions made by humans. Colorado and California are now saying it also governs those same decisions made by algorithms. And AI companies are panicking.
Why? Because disparate impact liability makes three things impossible: hiding behind "the algorithm decided," claiming ignorance of discriminatory outcomes, and optimizing purely for profit without constraint. You must audit for group-level impacts. You must justify disparate outcomes through business necessity. You must document your testing and mitigation efforts.
The ideological battle lines form quickly here. The progressive view holds that discrimination is about power and measurable harm—intent is irrelevant when you can statistically demonstrate unequal treatment. The conservative and libertarian view insists discrimination requires intent, and disparate outcomes often reflect legitimate differences rather than bias. The AI accelerationist position goes further: any constraint on algorithmic "optimization" is artificial interference with objective truth-seeking.
These aren't technical disagreements. They're fundamental conflicts about what discrimination means.
But anyone who has worked in product management, business analysis, or statistics can tell you how one metric gets used as a proxy for another. A product manager cannot measure customer “joy,” but they can measure how many times a day a user logs in and how long they stay. An analyst cannot measure job satisfaction, but they can measure aggregate performance scores, attendance, deadlines met, and so on. None of these proxies is perfect, and, as the adage (Goodhart's law) goes, “once a measure becomes a target, it ceases to be a useful measure.” You can argue that this is why quotas are not a useful tool against discrimination, but the argument cuts both ways: it is equally effective at making the case that algorithms are not an effective tool against bias.
The Perverse Logic of Intent Standards
Whichever position you take on disparate impact, consider this: if proving discrimination requires proving intent, you have just created a massive incentive to hide intentions, whatever those intentions happen to be.
If you game it out, under an intent-based standard, companies avoid liability by demonstrating they didn't intend discriminatory outcomes. The best way to demonstrate this? Don't document anything that might suggest awareness of the problem. Delete the Slack channels where engineers discussed fairness trade-offs. Avoid testing for disparate impacts because discovering them creates evidence of knowledge. Claim the algorithm is a black box that nobody understands.
"We didn't know it discriminated" becomes your legal defense. Ignorance is rewarded. Documentation is punished.
We've seen this movie before in lending. Pre-disparate impact, banks could systematically deny mortgages to Black neighborhoods while claiming "no discriminatory intent"—just "sound business practices." Post-disparate impact, they had to explain why Black applicants were denied at twice the rate of white applicants with similar financial profiles. Suddenly, the mechanisms of discrimination became visible because outcomes had to be justified, not just intentions claimed.
The same dynamic applies to legislative processes. When courts require proving legislative intent to demonstrate discrimination, lawmakers simply avoid stating their true purposes in the record. Neutral language conceals discriminatory goals. The transparency problem is the same whether we're talking about algorithms or statutes: intent standards encourage obfuscation.
Disparate impact standards flip this incentive structure. When outcomes matter, documentation becomes protection rather than liability. Testing for bias isn't evidence against you—it's evidence of good faith. Understanding your system's behavior becomes mandatory, not optional. The opacity that shields companies under intent standards becomes a liability under outcome standards.
This is the transparency paradox: the standard that sounds more demanding actually creates more visibility into how systems work and who they harm.
The 'Truthful Outputs' Sleight of Hand
The executive order claims state laws require AI models to "alter their truthful outputs" to avoid disparate impact. This framing positions fairness as falsification, as if engineers must choose between accuracy and non-discrimination.
But this commits a fundamental category error. AI outputs aren't "truths"—they're predictions, classifications, and recommendations generated by models trained on historical data. When a hiring algorithm says "this applicant has a 73% likelihood of success," that's not an objective fact about the universe. It's a model's estimate based on patterns it learned from past hiring decisions.
If those past decisions were biased—say, systematically undervaluing female candidates—the model will learn that bias. It will confidently predict that women are "less likely to succeed" because in the training data, they were promoted less often. The model's output is statistically accurate to the training data and completely perpetuates historical discrimination.
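To see how that happens, here is a hedged sketch using purely synthetic data: both groups have identical underlying skill, but the historical "promoted" labels were generated with a penalty against one group. A standard classifier (scikit-learn's LogisticRegression, used here only for illustration) then assigns a lower predicted success probability to an otherwise identical candidate from that group.

```python
# Minimal sketch of "the model learns the bias". All data is synthetic;
# no real hiring system or dataset is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

skill = rng.normal(size=n)       # the thing we actually care about
group = rng.integers(0, 2, n)    # 0 = historically favored, 1 = not

# Historical "promoted" labels: driven by skill, but group 1 was
# systematically promoted less often regardless of skill.
logits = 1.5 * skill - 1.0 * group
promoted = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train on features that include group membership (or any proxy for it).
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, promoted)

# Two identical candidates, differing only in group.
candidates = np.array([[0.5, 0], [0.5, 1]])
probs = model.predict_proba(candidates)[:, 1]
print(f"predicted success, group 0: {probs[0]:.2f}")
print(f"predicted success, group 1: {probs[1]:.2f}")
# The second number is lower: faithful to the biased history it was trained on.
```

Both predictions are "accurate" relative to the training data. That is exactly the problem.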
So what does "truthful output" mean here? A model optimized purely for profit maximization produces different predictions than one optimized for profit-within-non-discrimination-constraints. Both are "accurate" to their objectives. Neither is "true" in any metaphysical sense.
The real question hiding behind "truthful outputs": Is disparate impact proof of system bias or proof of underlying group differences? If your credit model denies loans to Black applicants at twice the rate, is that because the model discovered something true about credit risk, or because the model learned from a world where Black families were systematically denied wealth-building opportunities?
Your answer to that question determines whether requiring justification for disparate outcomes means demanding "false results" or simply demanding justification. This is the actual ideological fight. Everything else—innovation, China competition, state versus federal authority—is proxy warfare.
What's Really Being Regulated
It gets lost in the headlines, but the executive order doesn't block safety requirements or transparency mandates. It specifically blocks civil rights enforcement in algorithmic systems.
These are not the same category of regulation. When California requires frontier AI developers to publish safety frameworks, that's safety regulation. When California says you can't use hiring algorithms that discriminate against protected groups, that's civil rights enforcement. The EO targets the second, not the first.
What's at stake? Whether companies can automate decisions that have historically been subject to anti-discrimination law without maintaining accountability for discriminatory outcomes. Whether automation exempts you from justifying disparate impacts. Whether "the algorithm decided" becomes a universal liability shield.
If AI systems become exempt from disparate impact liability, watch what happens: massive incentive to automate every consequential decision. Employment, lending, housing, healthcare—anything currently subject to civil rights enforcement gets delegated to black-box algorithms. Civil rights law becomes unenforceable because you can't prove intent in systems specifically designed to obscure decision logic.
This isn't speculation. We're already seeing it. Hiring platforms explicitly market themselves as reducing discrimination liability by removing human decision-makers from the process. The pitch is literally "the algorithm can't be sued for bias."
The Innovation Theater Nobody Discusses
The standard argument runs like this: disparate impact liability creates compliance costs, compliance costs burden startups, therefore disparate impact liability kills innovation. It's repeated so often it sounds like economic law.
But consider the alternative. Under intent-based standards, companies face multi-year litigation trying to prove (or disprove) what engineers were thinking when they built the model. Regulatory uncertainty stretches for years while courts parse Slack messages for evidence of discriminatory intent. Discovery becomes a fishing expedition through every internal communication.
Which is worse for innovation: clear rules about testing for disparate impact, or unclear liability that only resolves through prolonged litigation?
The transparency inversion appears again. Companies that avoid disparate impact testing aren't more innovative—they're more legally exposed. They don't know about discriminatory patterns until someone sues and forces discovery. Meanwhile, companies that built robust fairness testing into their engineering culture know their risk profile, can explain their systems to customers and regulators, and have documentation that demonstrates good faith.
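What that testing culture can look like in practice is not exotic. Below is a hedged sketch of a disparate impact check wired into an ordinary test suite: it computes per-group selection rates on an audit batch (synthetic here), logs the result to an append-only audit file, and fails the build if the impact ratio drops below the four-fifths threshold. All names and the audit data are illustrative, not a real API.

```python
# Sketch of a fairness check as a routine test. The audit batch is synthetic;
# in practice it would come from a held-out, labeled audit dataset.
import json
from datetime import datetime, timezone

DISPARATE_IMPACT_FLOOR = 0.8  # the "four-fifths" rule of thumb


def selection_rates(decisions, groups):
    """Selection rate per group from parallel lists of booleans and labels."""
    rates = {}
    for g in set(groups):
        group_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(group_decisions) / len(group_decisions)
    return rates


def load_audit_batch():
    """Stand-in for pulling the model's decisions on an audit set."""
    decisions = [True] * 30 + [False] * 70 + [True] * 25 + [False] * 75
    groups = ["a"] * 100 + ["b"] * 100
    return decisions, groups


def test_model_meets_disparate_impact_floor():
    decisions, groups = load_audit_batch()
    rates = selection_rates(decisions, groups)
    ratio = min(rates.values()) / max(rates.values())

    # Persist the result either way: documentation as protection, not liability.
    with open("fairness_audit_log.jsonl", "a") as f:
        f.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "selection_rates": rates,
            "impact_ratio": ratio,
        }) + "\n")

    assert ratio >= DISPARATE_IMPACT_FLOOR, f"impact ratio {ratio:.2f} below floor"
```

Run under a test runner like pytest, a failure here blocks a release the same way any other regression would, and the log file becomes the documentation trail.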
Outcome-based standards reward proactive testing. Intent standards reward strategic ignorance. There's a reason Anthropic and Google DeepMind don't fear disparate impact laws—they've already built the testing infrastructure. It's the companies that ignored fairness concerns who now face expensive retrofitting.
The real innovation moat isn't freedom from testing requirements. It's building systems robust enough to withstand scrutiny.
Where This Leaves Us
The likely scenario is years of federal-state litigation over Commerce Clause questions and preemption doctrine while the actual issue—whether disparate impact applies to algorithms—gets obscured by procedural complexity. Industry operates in sustained uncertainty, which is worse for innovation than either clear rules or clear exemptions.
What would actually help? Clear federal standards with safe harbors. Robust testing requirements paired with liability protection for good-faith compliance. Transparency mandates that create public trust rather than legal exposure.
What we're getting instead: regulatory arbitrage, jurisdiction shopping, and a race to whichever jurisdiction offers the most legal ambiguity.
But here's the question nobody wants to ask directly: Do we want AI systems held to the same anti-discrimination standards as humans? If historical data reflects discrimination, should algorithms perpetuate or correct those patterns? Is requiring justification for disparate outcomes "embedding ideology" or "enforcing existing law"?
Your answer determines not just AI policy, but whether civil rights law survives automation. The framing of disparate impact as "anti-innovation" is itself ideological. The real debate is whether outcomes or intentions measure discrimination. Everything else flows downstream from that choice.
For those deploying AI in consequential decisions: disparate impact testing is good risk management regardless of legal requirements. Documentation creates legal protection under current law, not liability. The strategic position isn't to wait for regulatory clarity—it's to build robust governance now and let the regulatory environment catch up to you.
Because eventually, the question of whether algorithms can discriminate by accident will get answered. Better to be on the right side of that answer before the courts decide.
Radical Candor
Human beings are behind the screen: our values, our ideologies, our biases and assumptions... [Technology] is reflecting back at us a pattern that we often take for granted and fail to look at.


