Today's Agenda

Hello humans! Today, there is big news around AI layoffs and regulation as AI takes a bigger role on the world stage. I also want to share some thoughts on how to engage with AI ethics in the classroom and begin thinking about how to leverage AI to serve education objectives.

News

The First Major AI Layoffs Are Here

In their earnings call on September 25, Accenture reported that they laid off about 11,000 employees over the past quarter, calling it “an AI-driven restructuring.” The company explained that it is laying off employees for whom AI reskilling is not “a viable path.”

To understand what this means, it helps to know that Accenture is a business and technology consulting firm. The company is a leader in AI services, deploying AI systems and tools for industry-leading enterprise clients, and it is doubling down on that strategy. Accenture says that executing that strategy requires a large-scale increase in AI-skilled employees. In fact, the company has hired 77,000 AI-skilled employees in 2025, and it expects that number to grow.

CFO Angie Park explained that the layoffs are part of a broader “business optimization program,” a bet that demand for AI services from enterprise-scale organizations will take off. She also reported that the program has cost Accenture $865 million in restructuring charges but has produced $1 billion in savings for the company.

Accenture reported $3 billion in AI bookings in 2024 and expects an increase in 2025 amid strong demand for AI-driven services; the company reported $65 billion in revenue for fiscal year 2025.

This restructuring is a strong signal of where the job market is headed over the coming years.

The First AI Regulations Are Here

California has passed SB53 into law, marking the first major U.S. legislation regulating AI, with full reporting in The Verge. The full title of the law is the California Transparency in Frontier Artificial Intelligence Act. Even though this is not a national law, the effect is national: Silicon Valley is based in California, and frontier AI firms’ compliance with these regulations will be visible and reported globally.

This law is a second attempt after the first draft, SB1047, failed. Its provisions are largely based on a report of recommendations from AI researchers. However, one major recommendation, third-party evaluations, is not included in the law.

Transparency is the primary function of this law:

  • Requires AI frontier firms to report their safety and security processes

  • Protects whistleblowers

  • Provides for ongoing evaluation of risks and risk management processes

  • Defines catastrophic risks, such as:

    • Using AI to design a Chemical, Biological, Radiological, or Nuclear (CBRN) weapon

    • AI evading control of the human developer or user

    • Unsupervised conduct, such as cyberattacks or theft

The idea is to prevent catastrophic risk by requiring frontier AI firms to proactively assess these risks and publish their risk management processes. According to the State of California press release, it will require AI frontier firms to “publicly publish a framework on [their] website describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework.”

The California Office of Emergency Services and Department of Technology will work with frontier AI firms for the ongoing evaluation and reporting of risks of their AI systems.

Federal legislation proposed in July 2025 would have limited the ability of states to regulate AI, but that bill failed to pass Congress. So, as of this writing, there is no federal prohibition on this kind of state-level AI regulation.

There is broad consensus within the AI industry that regulatory guardrails are needed, despite the objections of AI financiers. But AI is still evolving rapidly, so while this may be the first major AI regulation, it isn’t perfect and it certainly won’t be the last.

Feature

Ethics as an AI Education Objective

Not only is AI not going anywhere anytime soon, but it may be everywhere soon. If pervasive AI defines the world that our students will graduate into, what are the key objectives for education? The traditional model of writing papers, doing mathematical work, or reading research materials isn’t the goal; it’s a method for achieving something else. Maybe the goal is functional literacy with variables, the ability to read subtext or grasp symbolism, the development of executive function, or the ability to effectively communicate an original idea to peers. These are the real learning objectives, and the blue books are just evidence of those skills. When the mode of work no longer accurately represents the learned skill, how else can we measure learning and growth?

As much as we may not want to, we have to consider integrating AI into classroom practice, but in a way that supports learning objectives while still teaching critical cognitive skills. As easy as it may be to get ChatGPT to write a high school essay, there is critical, valuable work that AI cannot do, and may never be able to do, and that high-school-level cognitive work is a necessary stepping stone to the higher-order thinking skills needed to manage AI effectively. Here, I’ll offer some guidance for identifying outcome-oriented goals for using or limiting AI in your teaching strategies, curriculum design, and ethical boundaries. Whether your focus is developing critical thinking, preparing students for the workforce, or reinforcing foundational skills, that focus should inform your approach to AI. With clearly defined objectives, educators can make thoughtful decisions about how and when to leverage generative AI in ways that encourage and support student growth.

Clarify Educational Priorities and Objectives

Whether your classroom priorities are subject mastery, critical thinking, collaboration, communication, or workforce readiness, you need to make those priorities explicit to build a meaningful AI strategy. Thoughtful use of generative AI can support some of these goals. Defining which outcomes matter most will help you decide where AI belongs in your teaching and where it doesn’t.

In product management, the North Star framework is a common rubric for keeping teams focused on a single, clear outcome to guide their strategic choices. Educators can adopt a similar mindset by identifying a core learning goal—like critical thinking or collaborative problem-solving. With a North Star in place, AI becomes a tool in service of your objectives, not a distraction from them.

Build Trust Through Transparency

Dr. Casey Fiesler, a professor and expert in AI ethics, encourages educators to be clear about their AI policies. Whether the policy is to allow, limit, or prohibit AI use, she recommends stating it explicitly. She requires students to disclose all use of AI and holds them accountable for everything they submit, including how they used AI. On the other side of the coin, she discloses to her students how she uses AI. She strongly recommends against using AI detectors because they are unreliable, and false positives can lead to a breakdown of trust between instructors and students.

Many schools that actively engage AI include discussions of AI ethics and LLM experimentation in their curricula to identify where AI is used successfully and where it creates problems.

Engage the Ethics Questions

Strict enforcement of AI rules can be impractical and less effective for teaching students about ethics. Instead, embrace the complex ethical questions that AI presents, helping students make thoughtful decisions as they navigate this new technology.

We can engage students with these questions and give them space to explore and experiment. Without strict rules, students are able to grasp these challenges and respond thoughtfully and ethically, and a focus on nuanced ethical questions helps them make their own well-reasoned decisions about AI.

That’s the idea behind graidients.ai, a tool from the Harvard University Graduate School of Education for developing AI ethics in the classroom. Both educators and students are experimenting with different AI techniques and strategies, and you may need to explore those gray-area uses with your students. I’ve mimicked the graidients tool with a Miro board that you are welcome to copy and use with your own classes.

Feel free to use the Miro board as a discussion starting point and have students vote on it to show where the consensus lies on the ethical use of AI in classwork.

In the next part of this series, we’ll discuss strategies for using AI in the classroom in ways that support cognitive and skill-based educational objectives that prioritize outcomes over outputs.

Here are some valuable voices in the AI, business, technology, and product spaces. If they sound like your thing, give them a follow!

John Cutler is the Head of Product at Dotwork and writes about all aspects of the product process including product thinking, analytics, prioritization, and organizational dysfunction at The Beautiful Mess on Substack.

Julie Zhuo is the Co-Founder of Sundial and author of The Making of a Manager. She writes about leadership and technology at The Looking Glass on Substack.

Lenny Rachitsky interviews leading voices in product and technology as the host of Lenny’s Podcast on YouTube and Lenny’s Newsletter on Substack.

Wes Kao was a Co-Founder at Maven and now coaches executives at weskao.com. She writes about effective communication and leadership at Wes Kao on Substack.

Nate B. Jones is an AI strategist, product leader, and professor whose work revolves around metacognition; he delivers news and insights on YouTube and Substack.

Thank You!
