Which way, 2026?

Hello, fellow humans! 2025 saw the rise of agents, and with it, their shortcomings. The big model makers are frantically trying to make their systems more agentic and more reliable, but serious questions remain about whether LLMs can ever be dependable enough for meaningful autonomous work.

Forbes is taking both sides of this bet: one piece emphasizes the human skills that will become necessary and valuable to fill the gaps in an agentic workplace, while another takes a top-down view of what the financial class expects from enterprises implementing AI strategies.

Will enterprise top-down AI strategies deliver on these stratospheric promises? Or will crafty SMBs outmaneuver their bigger, bulkier competitors with a more grass-roots AI strategy? 2026 may be the year we find out.

You can (easily) launch a newsletter too

This newsletter you couldn’t wait to open? It runs on beehiiv — the absolute best platform for email newsletters.

Our editor makes your content look like Picasso in the inbox. Your website? Beautiful and ready to capture subscribers on day one.

And when it’s time to monetize, you don’t need to duct-tape a dozen tools together. Paid subscriptions, referrals, and a (super easy-to-use) global ad network — it’s all built in.

beehiiv isn’t just the best choice. It’s the only choice that makes sense.

8 Skills You Need To Manage The New AI Agent Workforce

A critical theme is the complex dynamic of trust and skills between humans and AI systems. While 77% of technology leaders express trust in workplace robotics, there is growing concern about AI systems’ shortcomings in empathy, reliability, strategic thinking, original thinking, and human interpersonal interaction.

A new piece from Forbes identifies the critical human skills needed to effectively manage AI agents in the workplace. Even as AI improves our productivity, it’s still human skills that differentiate us from our peers and from autonomous AI systems.

We’re all learning how to build AI agents, but we’re finding that agents aren’t very reliable. AI can simulate empathy, but it doesn’t feel genuine; 79% of users prefer human empathy. AI can simulate strategic thinking, but it cannot grasp the full context that we have as humans who live in the world. LLMs fundamentally cannot create something that hasn’t been created before; they can only probabilistically recombine existing digital artifacts. And anything that happens in the organic world without a digital footprint is simply not accessible to any AI system. While several new approaches are on the horizon, such as world models and neurosymbolic models, they do not seem poised to address any of these shortcomings.

According to Forbes, as we shift from managing human teams to orchestrating human-AI collaborations, some of the most important skills for managing AI systems with human oversight are:

  • Strategic Thinking

  • AI Literacy and Collaborative AI Literacy

  • Agentic Workflow Design

  • Human Interpersonal Skills

  • Change Management

  • Data Governance

As much as the AI makers want to promise an autonomous future, the reality is that human oversight remains a critical safety mechanism for security, effective thinking and planning, and collaboration with other people.

AI's Honeymoon Is Over: 12 Predictions for 2026

Forbes is making predictions for AI in 2026, and unsurprisingly, it takes a very enterprise-oriented view that has more to do with finance, stock prices, and executive initiatives than with how to leverage AI in meaningful ways. But Forbes does predict the rise of an “agent economy,” in which autonomous agents can reliably do work on our behalf, and a cost of AI intelligence that dives to near zero. McKinsey estimates $2.6 trillion to $4.4 trillion in annual added value, and Accenture anticipates 2.5x revenue and 2.4x productivity. Those are big expectations to fill. Forbes also predicts that Wall Street will reward and punish organizations based on how well they adopt AI, which could be a big deal, because 2026 will also be the year that finance and investors start expecting real financial returns on those AI investments.

We have to consider what it will mean if 2026 proves that AI cannot actually deliver on these massive promises.

Radical Candor

We won’t get to AGI in 2026 (or 7). At this point, I doubt many people would publicly disagree, but just a few months ago the world was rather different. Astonishing how much the vibe has shifted in just a few months, especially with people like Sutskever and Sutton coming out with their own concerns.

Gary Marcus, via Substack

Thank You!
