Software sprawl? That’s SaaD.
Software was supposed to make work easier. Instead, most teams are buried under it.
That’s SaaD – Software as a Disservice. Dozens of disconnected tools waste time, duplicate work, and inflate costs.
Rippling changes the story. By unifying HR, IT, and Finance on one platform, Rippling eliminates silos and manual busywork.
HR? One update applies to payroll, benefits, app access, and device provisioning instantly.
Finance? Close the books 7x faster with synced data.
IT? Manage hundreds of devices with a single click.
Companies like Cursor, Clay, and Sierra have already left outdated ways of working behind – gaining clarity, speed, and control.
Don’t get SaaD. Get Rippling.
Today's Agenda
Hello, fellow humans! As we think about how our own brains work, we can dig deeper into what we want and need AI to do, give AI better instructions, and better understand and act on what AI gives us in return.
What does that mean for us as individuals? What does that mean for organizations?
AI Fluency > AI Skills
Nate B. Jones produces high-quality AI news, insights, analysis, and advice at a blistering pace. For anyone doing advanced AI work, I recommend giving him a follow on YouTube and subscribing to his Substack.
He recently did a piece on the difference between being AI-trained and being AI-fluent. The difference really comes down to human judgment skills. He goes into some depth explaining what each of these means, but here’s a quick rundown:
Decomposing complex problems into AI-sized pieces
Knowing when to iterate in the chat versus when to start over
Recognizing when AI is confident but incorrect
Choosing the kind of context that will help the AI: quantitative data? Qualitative experiences?
Checking whether your team gets similar results when using different AI tools
Check out the full video on YouTube, because he gives a lot of important context on the problem space in corporate AI environments; there are significant organizational factors and considerations that he lays out thoughtfully.
The product person in me wants to test and validate his claim that 10 AI-fluent people can outperform 500 AI-trained employees. So how would we measure that? We know throughput is not a good metric; AI is verbose and can produce thousands of lines of non-performant slop code in minutes. So how do you measure the value of high-quality business analysis? How do you measure the value of addressing the right problem space?
These are going to become meaningful questions going forward as the cost of this kind of intelligence work becomes more salient. We’re going to have to think about the trade-offs of the costs of AI and of hiring the right person to use AI, and how to know what that combination truly buys us.
Where is the AI in Your Organization Tree?
As if work weren’t difficult enough already, news and updates about AI technology and implementations arrive at a relentless pace, while we are also bombarded with messages trying to shape our thinking about AI and agents, sometimes informed by real-world intelligence, sometimes not. It is taxing our discernment and critical-thinking skills.
But nearly everyone agrees that AI is forcing organizations to rethink the way work gets done. The hype cycle tells us that “AI agents will replace workers.” Meanwhile, many of the people tasked with operationalizing AI and agents report that “AI struggles with messiness, breaks under pressure, and needs humans more than ever.”
Here’s the uncomfortable truth:
Agentic AI is still an immature and developing technology, but it is also failing because real business operations are chaotic, inconsistent, and filled with exceptions that require human judgment and experience.
At the same time, junior employees whose work is mostly “production”—churning out documents, research, analyses—are feeling real pressure, because modern agents are good at eliminating repetitive, predictable tasks.
So how do we reconcile this?
How can small and mid-size companies use AI without falling for unrealistic promises?
AI agents are software products, so when we think about how they’re structured and what problems they’re designed to address, it is instructive to look at them through the lens of Conway’s Law: the observation that a system’s structure tends to mirror the structure of the organization that builds it. And let’s face it: our organizational trees are messy, political, and constantly evolving. So it is with the AI agents we’re building. Below, I outline three practical interaction models (ways teams actually work with AI today) and layer in what we now know about economic constraints, deployment failures, and talent trends.
Matthew Skelton and Manuel Pais identify four team types in their book Team Topologies…
Stream-Aligned Teams — production teams that take raw inputs and process them directly into valuable products. These are usually the final stage in the value chain.
Enabling Teams — research teams that identify new technologies and techniques and prepare them for use by other teams, usually a stream-aligned team.
Complicated-Subsystem Teams — teams that handle technically demanding components, such as algorithms, robotics, or audio and video processing. They exist to reduce complexity for the stream-aligned teams.
Platform Teams — teams like DevOps that provide the infrastructure, services, and tools used by all the other teams, reducing the operational burden of supporting the stream-aligned teams.
This piece draws from amazing content from TechButMakeItReal and Nate B. Jones.
Let’s break it down.
Model 1: The Copilot Era — Good for Individuals, but Hardens Silos for Big Orgs
This is where almost everyone starts: humans lead, AI copilots assist. A PM runs competitive research with ChatGPT. A designer generates directions for a wireframe. A founder drafts an investor update.
It feels great. Productivity jumps. But there’s a hidden flaw:
Your organization doesn’t get faster—only your people do.
Copilots don’t coordinate across departments, and they don’t unify context across teams. So each of the four team types might see small individual gains, but the copilot approach won’t help the overall organization; you’ll still see the same bottlenecks (e.g., incompatible data, differing alignments) when moving intelligence between teams. Stream-aligned teams may appear to get the most benefit because they tend to be more customer-facing, but this masks the messiness in the backend.
Copilots simply accelerate whatever silo they’re plugged into, as we’re seeing across companies deploying AI:
Copilots work because the tasks are predictable.
They don’t require judgment or context retention.
They give the illusion of transformation without structural change.
The takeaway for smaller organizations:
Copilots are cheap, safe, and useful—but they won’t meaningfully compress your discovery → design → MVP → PMF cycle. They’re helpers, not operational assets.
Model 2: Functional AI Agents — “Digital Team Members” With Real Leverage
This is where things get interesting. Instead of copilots assisting individuals, AI agents take on recurring, domain-specific responsibilities:
Research Agent
QA Agent
Customer Insights Agent
Data Validation Agent
Compliance Agent
These agents don’t replace humans—but they do act like digital employees inside departments.
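To make the shape of this concrete, here’s a minimal sketch in Python. Everything in it is illustrative: the stubbed call_llm function stands in for whatever model API you actually use, and the agent name, prompt, and checklist are hypothetical.

```python
from dataclasses import dataclass, field


def call_llm(system_prompt: str, user_input: str) -> str:
    # Stand-in for your real model call; replace with your provider's API.
    return f"[model output for: {user_input[:40]}]"


@dataclass
class FunctionalAgent:
    """One recurring, domain-scoped responsibility with expert-written rules."""
    name: str
    system_prompt: str                                  # written by a domain expert
    checklist: list[str] = field(default_factory=list)  # expert acceptance criteria

    def run(self, task: str) -> dict:
        draft = call_llm(self.system_prompt, task)
        # The agent self-checks against the checklist, but never self-approves.
        review = call_llm(f"Check this output against: {self.checklist}", draft)
        return {"draft": draft, "self_review": review, "approved": False}


def human_verify(result: dict) -> dict:
    """Approval stays with a person: the orchestrator signs off or rejects."""
    print(result["draft"], result["self_review"], sep="\n")
    result["approved"] = input("Approve? [y/N] ").strip().lower() == "y"
    return result


# Usage: a QA agent configured by someone who knows the QA process cold.
qa_agent = FunctionalAgent(
    name="QA Agent",
    system_prompt="You review release notes for accuracy and tone.",
    checklist=["No unverified claims", "Matches the changelog", "House style"],
)
result = human_verify(qa_agent.run("Review the v2.3 release notes draft."))
```

The point of this shape: the domain expert’s knowledge lives in the prompt and the checklist, and the approval flag is only ever flipped by a human.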
In Team Topologies terms, this can start to build bridges across teams as the agents become more reliable and multiple teams begin to speak the same language. Enabling teams can build agents that anticipate the most common needs of the stream-aligned teams. Internally, a data pull-system can start to take shape: a team that needs a tool or asset can pull it into their process ready to go, rather than having to ask for it.
Crucially, this aligns with what the data actually shows:
Agents are most successful when their tasks are 90% predictable and domain-expert configured.
Harvey works because ex-lawyers configure its workflows. Deployment works when someone deeply understands the operational system the agent will run inside.
This model matches the emerging reality in agent deployments:
Success requires domain expertise.
The value isn’t labor replacement—it’s eliminating repetitive grunt work.
These agents compete with the labor budget, not the IT budget.
For SMBs, this is the highest ROI model today because:
✅ It speeds up real operational cycles
✅ It reduces human admin work
✅ It improves experimentation frequency
✅ It’s achievable without massive reorgs or agent infrastructure teams
But it introduces one new bottleneck:
Your human orchestrator becomes the constraint.
You need someone who can verify agent output, connect tasks across agents, and ensure alignment. This requires problem-solving skills—not production skills. And it explains why senior talent is safer in the AI transition: expertise + judgment = irreplaceable.
Model 3: Multi-Agent Autonomous Systems — The Digital Cross-Functional Org
This is the frontier: agents stop acting like departments and start acting like product teams. Research informs design. Design triggers prototyping. Prototypes trigger validation. Experiments run continuously.
Humans set goals and constraints; agents execute.
Imagine finding new data, tools, or assets in your workflow that you didn’t know you needed. The AI anticipates your needs and feeds them back to the supporting team, which can deploy an asset with minimal human interaction. You get smarter in the moment, without complicated coordination meetings.
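A minimal sketch of that loop, assuming a hypothetical run_agent call standing in for whatever agent framework you use. The goal, stage names, and checkpoint rules here are all illustrative:

```python
GOAL = "Validate demand for a self-serve onboarding flow"
STAGES = ["research", "design", "prototype", "validation"]
HUMAN_CHECKPOINTS = {"prototype"}  # stages a person must approve before continuing


def run_agent(stage: str, goal: str, upstream: str | None) -> str:
    # Stand-in for an actual agent invocation in your framework of choice.
    context = f" (building on: {upstream})" if upstream else ""
    return f"[{stage} output toward '{goal}'{context}]"


def checkpoint(stage: str, output: str) -> bool:
    """Human review gate: goals and constraints come from people, not agents."""
    print(f"--- {stage} ---\n{output}")
    return input(f"Continue past {stage}? [y/N] ").strip().lower() == "y"


artifact = None
for stage in STAGES:
    artifact = run_agent(stage, GOAL, artifact)
    if stage in HUMAN_CHECKPOINTS and not checkpoint(stage, artifact):
        print(f"Halted at {stage}: human rejected the output.")
        break
else:
    print("Pipeline complete:", artifact)
```

Even in the “autonomous” model, note where the human sits: setting the goal, defining the stages, and holding the veto at checkpoints.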
It sounds magical—and early demo videos make it look real—but here’s the grounded reality:
90% of agent deployments fail because business operations are messy.
Agents break on exceptions and unpredictable logic.
Enterprises are “miles away” from replacing even 20% of human workload.
But small orgs CAN use this model for specific, sandboxed workflows.
The value here isn’t autonomy—it’s speed of iteration.
When the workflow is well-structured, this model can compress weeks into hours.
But it requires deep investment in:
Governance
Testing
Alignment
Safety constraints
Observability
This is not a “plug and play” model today.
But it’s where we’re headed.
The Big Insight: AI Isn’t Replacing Work—It’s Replacing “Production” Work
When you synthesize both the agent economics and the human talent perspective, one coherent truth emerges:
AI isn’t replacing human work. It’s replacing the predictable parts of human work.
That means:
Workers who operate at the “production layer” are increasingly exposed.
Workers who operate at the “problem-solving layer” become more valuable.
Organizations that elevate their people early will adapt fastest.
Small to mid-size teams have a unique advantage because they can:
✅ Move faster than enterprises
✅ Avoid heavy agent infrastructure
✅ Integrate agents into workflows without bureaucracy
✅ Train employees on problem-solving while roles are still flexible
The Playbook for Small & Mid-Size Organizations
Here’s the actionable guidance I’m giving every SMB leadership team right now:
1. Start with copilots to identify where your real bottlenecks are.
Let individuals explore and build internal champions.
2. Introduce functional agents in the highest-repetition parts of the org.
Research, QA, customer support, validation, documentation.
3. Require every employee to validate AI outputs.
Verification is the new basic skill.
4. Shift talent expectations from production to problem-solving.
The safest employees will be those who combine domain expertise with AI leverage.
5. Move to multi-agent workflows only when (see the sketch after the playbook):
Tasks are predictable
Guardrails are clear
Human review is baked in
That’s how you avoid the 90% failure rate—and start building real compound advantage.
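To make step 5 concrete, here’s a minimal go/no-go sketch. The fields and the 0.9 threshold (echoing the “90% predictable” heuristic above) are illustrative, not a validated rubric:

```python
from dataclasses import dataclass


@dataclass
class WorkflowProfile:
    predictable_fraction: float  # share of cases handled by fixed rules, 0.0-1.0
    has_guardrails: bool         # documented failure modes and a rollback plan
    human_review_step: bool      # a person approves before anything ships


def ready_for_multi_agent(w: WorkflowProfile) -> bool:
    # Mirrors the three conditions in step 5 above.
    return w.predictable_fraction >= 0.9 and w.has_guardrails and w.human_review_step


print(ready_for_multi_agent(WorkflowProfile(0.95, True, True)))  # True
print(ready_for_multi_agent(WorkflowProfile(0.70, True, True)))  # False: too many exceptions
```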
Radical Candor
Right now, I’d say AI is in what Cory Doctorow calls the “good to the users” stage. But the pressure to recoup the massive capital investments will be tremendous, especially for companies whose user base is locked in. Those conditions, as Doctorow writes, allow companies to abuse their users and business customers “to claw back all the value for themselves.”


