Today's Agenda
Hello, fellow humans! Today, we have a bombshell deposition from Ilya Sutskever on his role in the firing of Sam Altman from OpenAI. We have also reviewed five reports on the skills most valued in an AI-enabled workforce and synthesized them into a practical guide on what skills you can work on to keep up and how best to build those cognitive muscles.
News
“My Opinion Was That Action Was Appropriate…”
This deposition transcript documents testimony from Ilya Sutskever, former Chief Scientist and co-founder of OpenAI, in the case of Musk v. Altman. The deposition, lasting nearly 10 hours, focused on events surrounding Sam Altman's firing from OpenAI in November 2023.
Sutskever revealed that he prepared a detailed 52-page memo (Exhibit 19) documenting concerns about Sam Altman's leadership, which he sent to independent board members using disappearing email. The memo alleged a "consistent pattern of lying, undermining his execs, and pitting his execs against one another." Sutskever testified that he had been considering Altman's removal for at least a year, waiting for favorable board dynamics.
The deposition revealed that most of Sutskever's information came from Mira Murati, OpenAI's CTO, rather than direct investigation. Sutskever admitted he did not verify claims with the individuals involved, including allegations that Sam was fired from Y Combinator and that Greg Brockman was fired from Stripe. He acknowledged the process was "rushed" due to the board's inexperience.
During the weekend after Altman's firing, Helen Toner facilitated discussions about merging OpenAI with Anthropic, with Dario and Daniela Amodei participating. Sutskever expressed strong opposition to this merger. He later supported Altman's reinstatement, stating he regretted the decision and realized the importance of firsthand knowledge.
In the memo cited during the deposition, Sutskever wrote that "Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another." He confirmed this was "clearly [his] view at the time."
He was blunt in assessing his own thinking: "My opinion was that action was appropriate... Termination," confirming that he explicitly recommended firing Sam Altman to the independent board members.
The Jakub incident: Sutskever testified that the episode "Involves Sam lying, undermining Mira, undermining Ilya, and pitting Jakub against Ilya." He explained: "Sam was telling me and Jakub conflicting things about the way the company would be run."
The GPT-4 Turbo/DSB matter: A section of his memo was titled "Lying to Mira About Jason's Opinion About the DSB," though Sutskever later acknowledged he relied on secondhand information from Mira and didn't verify this directly.
Sutskever was also reflective during the deposition, admitting that he had learned "the critical importance of firsthand knowledge" and that much of his information came from Mira Murati rather than his own observation or verification. When asked about specific incidents, he often acknowledged that he had not spoken to the people involved or verified the claims independently. He did not, however, explicitly retract the characterizations in the November 2023 memo.
Sutskever left OpenAI in May 2024 to pursue a "new vision" at his company Safe Superintelligence. He confirmed he still holds a financial interest in OpenAI, which has increased in value since his departure. The deposition ended contentiously, with disputes over document production and instructions not to answer questions about his financial stake's specific value.
Simplify Training with AI-Generated Video Guides
Are you tired of repeating the same instructions to your team? Guidde revolutionizes how you document and share processes with AI-powered how-to videos.
Here’s how:
1️⃣ Instant Creation: Turn complex tasks into stunning step-by-step video guides in seconds.
2️⃣ Fully Automated: Capture workflows with a browser extension that generates visuals, voiceovers, and calls to action.
3️⃣ Seamless Sharing: Share or embed guides anywhere effortlessly.
The best part? The browser extension is 100% free.
A Practical Guide to Acquiring the Most Valuable AI Skills
We looked at five sources to assess which skills will be most practical and valuable for humans heading into an AI-enabled workforce. Here’s what they had to say.
McKinsey reports a labor-market shape-shift: fewer traditional roles, more reconfigured roles, and rising skills uncertainty as gen-AI and agentic AI collide with aging workforces and uneven demand. Entry-level software roles are softening while data/AI-adjacent roles (e.g., data engineering) hold up; employers want AI-capable people who can also sell and lead. (McKinsey & Company)
University of Dallas identifies leadership virtues. Five virtues—prudence, temperance, courage, justice, transcendence—frame how leaders make choices, manage risk, sustain trust, and build cultures that can absorb AI-era shocks. (blog.udallas.edu)
LinkedIn highlights skills on the rise. Its “Skills on the Rise 2025” list shows a mix of human skills and AI-tool fluency (Social Media Today):
AI literacy
Conflict mitigation
Adaptability
Process optimization
Innovative thinking
The World Economic Forum’s global outlook:
Fastest-growing skills:
AI & big data
Networks & cybersecurity
Tech literacy
Creative thinking
Resilience/flexibility/agility
Leadership/talent management
Environmental stewardship (newly elevated)
Fastest-growing jobs:
AI/ML
Big data
Software
Security
Green-transition
UX/IoT roles
What to Learn and How to Get It
A. Build AI literacy the right way (4 weeks to “usefully dangerous”)
Daily AI reps: document three workflow automations (prompt + before/after metric). Focus on writing, analysis, outreach.
Portfolio artifacts: a before/after case study + a prompt library with guardrails.
Upskill path: start with AI literacy (use tools effectively), then add LLM application basics (RAG, prompt chains), then data fundamentals (SQL + pandas). (Maps to WEF #1–3.)
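To make the LLM-application rung concrete, here is a minimal retrieve-then-prompt (RAG) sketch in plain Python. The retrieval step uses naive keyword overlap as a stand-in for embeddings, and call_llm is a hypothetical placeholder rather than any specific provider’s API.

```python
# Minimal RAG sketch: score documents by keyword overlap, stuff the top
# matches into a prompt, and hand it to an LLM. `call_llm` is a
# hypothetical placeholder for whichever SDK you actually use.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive keyword overlap with the query (stand-in for embeddings)."""
    q_terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prompt-chain step: ground the question in the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{joined}\n\nQuestion: {query}"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client here."""
    return f"[model response to {len(prompt)} chars of prompt]"

if __name__ == "__main__":
    docs = [
        "Refunds are processed within 5 business days of approval.",
        "Support tickets are triaged daily at 9am by the on-call engineer.",
        "The Q3 roadmap prioritizes the analytics dashboard.",
    ]
    query = "How long do refunds take?"
    print(call_llm(build_prompt(query, retrieve(query, docs))))
```

Swapping the overlap scorer for embeddings and the stub for a real SDK call turns this into the first portfolio artifact described above.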
B. Strengthen creative/analytical thinking (weekly cadence)
Creative constraints: do 5 “one-hour solution sketches” to the same problem with different constraints; pick 1 to A/B test.
Analytical drills: weekly “fact-check & reframe” memo (claim → data → counterfactual). (Maps to WEF #4, #9.)
C. Resilience, adaptability, and conflict mitigation (team habits)
Team pre-mortems + red-teaming: schedule bi-weekly 45-min sessions; rotate facilitator; grade decisions (clarity/speed/fairness).
Conflict toolkits: adopt “seek first to understand” scripts; measure time-to-resolution and “issue-reopen” rate. (WEF #5; LinkedIn “conflict mitigation”.) (Social Media Today)
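To keep those two measures from being a slogan, they can be computed straight from an issue log. A minimal sketch, with made-up records and field names:

```python
# Compute the two conflict metrics suggested above: time-to-resolution and
# issue-reopen rate. The records and field names are illustrative.
from datetime import datetime
from statistics import median

issues = [
    {"opened": "2025-01-06", "resolved": "2025-01-09", "reopened": False},
    {"opened": "2025-01-10", "resolved": "2025-01-17", "reopened": True},
    {"opened": "2025-01-12", "resolved": "2025-01-14", "reopened": False},
]

def days_between(start: str, end: str) -> int:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

resolution_times = [days_between(i["opened"], i["resolved"]) for i in issues]
reopen_rate = sum(i["reopened"] for i in issues) / len(issues)

print(f"median time-to-resolution: {median(resolution_times)} days")
print(f"issue-reopen rate: {reopen_rate:.0%}")
```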
D. Networks & cybersecurity essentials (non-security pros)
Target competence: Zero-trust basics, identity & access, data classification, incident tabletop drills.
Evidence: complete a lab (try cloud free tiers) + a one-page threat model for your product/process. (WEF #2.)
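As a starting point for that one-page threat model, a small ranked table is enough. The assets, threats, and scores in this sketch are illustrative assumptions, not recommendations for any particular stack:

```python
# A one-page threat model as a small ranked table: asset, threat, rough
# likelihood/impact scores, and the planned mitigation. Entries are examples.
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str
    threat: str
    likelihood: int  # 1 (rare) to 5 (likely)
    impact: int      # 1 (minor) to 5 (severe)
    mitigation: str

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

model = [
    Threat("Customer database", "Credential stuffing on admin login", 4, 5,
           "MFA + rate limiting on auth endpoints"),
    Threat("CI/CD pipeline", "Leaked deploy token in logs", 3, 4,
           "Secret scanning + short-lived tokens"),
    Threat("Internal wiki", "Overbroad sharing of customer data", 3, 3,
           "Data classification + access reviews"),
]

# Print highest-risk items first, so the tabletop drill starts at the top.
for t in sorted(model, key=lambda t: t.risk, reverse=True):
    print(f"[risk {t.risk:2d}] {t.asset}: {t.threat} -> {t.mitigation}")
```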
E. Environmental stewardship (tie to revenue)
Project: calculate one Scope-2 reduction or energy-efficiency win; publish a 1-pager with ROI and emissions impact. (WEF #10.)
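The ROI math here is back-of-the-envelope. A sketch with purely illustrative numbers; substitute your own meter data, tariff, and grid emission factor:

```python
# Back-of-the-envelope Scope-2 / efficiency win: annual savings, emissions
# avoided, and simple payback. Every number below is an assumption.
kwh_saved_per_year = 120_000   # e.g., from an LED or HVAC retrofit
electricity_price = 0.15       # $/kWh (assumed tariff)
grid_emission_factor = 0.4     # kg CO2e per kWh (assumed grid mix)
project_cost = 60_000          # upfront cost in $

annual_savings = kwh_saved_per_year * electricity_price          # $18,000
tco2e_avoided = kwh_saved_per_year * grid_emission_factor / 1000  # 48 tCO2e
payback_years = project_cost / annual_savings                     # ~3.3 years

print(f"annual savings:    ${annual_savings:,.0f}")
print(f"emissions avoided: {tco2e_avoided:.1f} tCO2e/year")
print(f"simple payback:    {payback_years:.1f} years")
```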
F. Leadership virtues (make skills durable)
Prudence → decision logs (assumptions, options, why/why-not).
Temperance → WIP limits (cap projects; say “not now”).
Courage → escalate early; ship small bets.
Justice → fair process & transparency norms.
Transcendence → purpose review in OKRs. (University of Dallas virtues.) (blog.udallas.edu)
G. Role-specific stacks (quick starts)
AI/ML Specialist: complete a small LLM app (retrieve → summarize → action), add evals; show latency/cost/accuracy trade-offs (see the eval sketch after this list). (WEF roles.)
InfoSec Analyst: earn an entry credential (e.g., Sec+), run a tabletop for your org’s top risk; document mitigation plan.
Renewable/EV Engineer: publish a mini techno-economic analysis (TEA) for a local project; include grid/storage constraints.
Data/Analytics Specialist: build a metric pipeline (ETL→dashboard) with alerting; add a bias/quality checklist.
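For the AI/ML quick start, the “add evals” step is where the latency/cost/accuracy trade-offs become visible. A minimal harness sketch, with a hypothetical run_app stub and illustrative pricing:

```python
# Tiny eval harness: run each test case, record latency, token cost, and
# exact-match accuracy so trade-offs can be compared per model/config.
# `run_app` and the pricing constant are hypothetical placeholders.
import time

PRICE_PER_1K_TOKENS = 0.002  # illustrative; use your provider's pricing

def run_app(question: str) -> tuple[str, int]:
    """Hypothetical LLM app call; returns (answer, tokens_used)."""
    return "5 business days", 350

cases = [
    {"question": "How long do refunds take?", "expected": "5 business days"},
    {"question": "Who triages support tickets?", "expected": "the on-call engineer"},
]

results = []
for case in cases:
    start = time.perf_counter()
    answer, tokens = run_app(case["question"])
    results.append({
        "correct": answer.strip().lower() == case["expected"].lower(),
        "latency_s": time.perf_counter() - start,
        "cost_usd": tokens / 1000 * PRICE_PER_1K_TOKENS,
    })

accuracy = sum(r["correct"] for r in results) / len(results)
print(f"accuracy:    {accuracy:.0%}")
print(f"avg latency: {sum(r['latency_s'] for r in results) / len(results):.3f}s")
print(f"total cost:  ${sum(r['cost_usd'] for r in results):.4f}")
```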
Radical Candor
"I think the trick if you are in a junior role is to start to as actively and aggressively as you can push across the spectrum… you're on a spectrum from ‘I just produce stuff’ to ‘I solve problems,’ you want to be pushing as hard as you can toward problem solving."


