Today's Agenda
News
Apple Intelligence 2.0
Apple has struggled with false starts and internal drama over its AI roadmap, but recent leadership changes on the AI and Siri teams telegraph a new strategy for making AI work well with the rest of Apple’s ecosystem. Apple is letting the press know that it finally has a game plan to calm the investors and fanboys. Here’s what to expect from Siri, possibly this fall:
Personal context support
Onscreen awareness
In-app and cross-app capabilities
Siri is also expected to get a major update in early 2026 to make it behave more like ChatGPT and other LLM-based assistants, possibly powered by Google’s Gemini or OpenAI’s ChatGPT, though details are still under discussion.
Bloomberg reports that Safari may be Apple’s route to providing AI search, either to complement or even replace Google as the default search engine. Eddy Cue, Apple’s Senior Vice President of Services, revealed this in testimony during the Department of Justice’s antitrust lawsuit against Alphabet, Inc. The move may be prompted by the first-ever drop in Google searches on the Safari browser.
Hardware Makers are Still Bullish on AI
Despite some indications that AI might be in a bubble, demand for AMD’s AI chips is still very strong, and a lot of capital remains committed to the idea that AI is here to stay. Nvidia and AMD, the two primary players in the AI chip space, are both committing major capital, with Apple working furiously to cover lost ground.
The AI Takeover That Didn’t Happen
Leading AI researcher Geoffrey Hinton predicted that almost no humans would be left in radiology within five years. Nine years later, the Mayo Clinic is bullish on hiring human radiologists, with the radiology profession forecasting growth through 2055. The takeover turned instead into an AI transformation; Mayo has a team of 40 AI scientists, researchers, engineers, and data analysts in addition to its staff of radiologists.
These AI trend forecasts tend to “underestimate the complexity of the work that people actually do — just as radiologists do a lot more than reading scans,” according to Massachusetts Institute of Technology labor economist David Autor. Predictions also tend to overlook the human side of the work: radiologists don’t just read images; they also “advise other doctors and surgeons, talk to patients, write reports and analyze medical records.”
AI can help accelerate the work, though. The Mayo Clinic reports that a process that once took 30 minutes or more can now be accomplished with greater accuracy in seconds, thanks to specialized AI tools.
How Human Should AI Be?
Harvard Business Review shared a study finding that consumers are unsure how human an AI should be and are expressing some reservations. We tend to anthropomorphize things whenever possible, but there are some scenarios where this could become a liability for AI.
As AI becomes more human-like in our ChatGPT interactions, it can be tempting for product teams to incorporate natural speech and expressive human faces to make the AI more relatable, human, and social. There is some data to support this path, as Sam Altman recently shared that there is a generational divide in AI usage:
Gross oversimplification, but like older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system… there’s this other thing where, like, they don’t really make life decisions without asking ChatGPT what they should do.
In one scenario, the AI may not seem quite human, landing the product in the uncanny valley, where the human representation reads not as a good facsimile but as something fundamentally off. The other risk is that customers feel comfortable with the anthropomorphic AI for a time, but an accumulation of small errors, or one large one, damages their sense of trust. The possibility of an enduring loss of trust could be a major roadblock to AI adoption.
The HBR report found that the key is “emphasizing the human input behind AI development,” not trying to humanize the AI itself. There is an opportunity for AI product teams to explore what this means and how best to connect with customers without alienating users.
Human Value Proposition
Evolving Collaboration

They say you won’t be replaced by AI; you’ll be replaced by someone who can use AI. But AI is so new in the workplace that we’re still trying to understand what that means. Some people may hear that and conclude that AI skills are separate and distinct from other skills. To some, AI skills might mean “prompt engineering”; others may interpret it as building more complex AI agents that can automate a flow chart of AI tasks. It’s tempting to point to these “hard skills,” but the real advantage and opportunity is to invest in deeper human cognitive skills like communication, analysis, critical thinking, and defining problems and objectives.
To get AI to complete a task or automate a process, you must communicate clearly: what to do, what outcome to expect, the steps involved, and how to measure success. I’ve seen well-formed prompts that are several pages long, including extensive context, specificity, and expectations. And even after all that, the output will still need careful review and revision. When you ask a colleague to perform some task, there is a lot of shared tacit information and context. The AI does not have that context and needs to have it spelled out, and depending on your situation, it may not be worth the trade-off.
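To make that concrete, here’s a minimal sketch of what a spelled-out task brief might look like when handed to a model programmatically. It assumes the OpenAI Python SDK, and the model name, example task, and success criteria are all illustrative rather than a recommendation:

```python
# A minimal sketch of a spelled-out task brief, assuming the OpenAI Python SDK
# (pip install openai). The model name, task, and success criteria are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

task_prompt = """
Task: Summarize the attached customer feedback into five themes.
Context: The feedback covers our Q3 release; the readers are product managers.
Steps: 1) Group similar comments, 2) name each theme, 3) quote one example per theme.
Expected outcome: A bulleted list, one line per theme, neutral tone.
Success criteria: Every theme is supported by at least two comments; no invented quotes.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": task_prompt}],
)

# Even with all of this spelled out, the output still needs careful human review.
print(response.choices[0].message.content)
```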
Students who use AI to write their essays often execute complex workflows to avoid being detected. They might start with an AI writer like EssayFlow, shuffle and reword sentences, run the draft through an AI humanizer to make the flow seem more natural, insert personal stories, and even use AI to add typos. After all that, you have to wonder whether it would have been less effort to just write the essay.
But the problem here isn’t effort or tech; it’s a skills issue.
And, as in an arms race, educators are trying to sniff out the AI-generated content using… more AI. Students and teachers, at a loss for how to connect with each other over the work, end up delegating robots to talk to other robots.
Students and teachers alike are struggling with the bigger issue: we’re misframing what AI ‘skills’ actually require. The novelty of the tools obscures the fact that using them well depends on our communication, thinking, and collaboration skills.
Whether you’re working with AI or a cross-functional team, the skills are the same: communication, analysis, collaboration, strategy, and leadership. These skills are increasing in demand partly because we’ve overlooked them and partly because they are becoming more necessary for navigating organizational and information complexity.
So people who are skilled with more advanced AI tools may have an advantage, but reframing AI skills as cognitive skills opens up opportunities for everyone. In the end, it’s not how well you use AI, but how well you think, communicate, and lead, with or without it.
New Reading
The Disengaged Teen
I haven’t had a chance to read The Disengaged Teen yet, but we can preview its perspective in Ezra Klein’s fascinating interview with co-author Rebecca Winthrop.
She focuses on the specific needs of children and teens at a critical developmental stage for learning cognitive skills like maintaining attention, critical thinking, social and emotional intelligence, and literacy. There is still a lot of experimentation to be done to find effective methods and strategies, but she recommends two starting points to help students maintain attention, think deeply and critically, and generally do cognitively hard work:
Prohibit personal smart devices during school. Full stop.
Empower students with agency over their education, so they can experience their own internal motivation and curiosity.
“Kids will find a way. We cannot outmaneuver them with technology.” — Rebecca Winthrop via The Ezra Klein Show

Anderson and Winthrop suggest teens have always found ways to disengage from their education; smart devices are just the latest and biggest method. They identify four modes for how and why teens disengage.
Thank you!
We’re glad you’re here! We’d love to hear what you think of today’s issue.