
Nothing has recently challenged my thinking about the future of learning quite like Ezra Klein’s interview with global education expert Rebecca Winthrop. Their discussion on his New York Times podcast was particularly relevant to my studies in AI ethics and governance.
The conversation highlights a profound dilemma facing education today.
Generative AI has emerged as a technology that can perform many of the tasks traditionally assigned in schools, such as summarising texts, solving maths problems, and writing essays. As Klein pointedly asks in the interview, “If you have this technology that not only can but will be doing so much of this for you, for us, for the economy, why are we doing any of this work at all?”
This question stops me in my tracks whenever I consider the future of education. Winthrop, the Director of the Center for Universal Education at the Brookings Institution, acknowledges that AI fundamentally challenges our assumptions. “We have to really rethink the purpose of education,” she states, and highlights that we need to reconsider what skills and knowledge children will need in an uncertain future.
One high school student Winthrop spoke with described breaking essay prompts into three parts, running each through different AI models, combining the outputs, and using anti-plagiarism checkers before submission.
Another mentioned using “AI humanisers” to add typos to make AI-generated work appear more authentic. As Winthrop observes, “No matter what, kids will find a way. We cannot outmanoeuvre them with technology.”
If students have already outpaced our ability to distinguish between human and AI-generated work, what foundational skills should we prioritise teaching that remain uniquely valuable in an AI-abundant world?
Rethinking Education’s Purpose
For decades, the implicit purpose of education has been straightforward: get good grades, go to university, and get a good job.
As Winthrop notes, “Certainly in my lifetime, the implicit purpose of education, the way we say to ourselves, ‘Did this kid’s education work out?’ is: ‘Do they get a good job?’”
I’ve often critiqued this view of education, and I was pleased to see Klein and Winthrop explore how AI disrupts this paradigm. When AI can write essays, pass exams, and qualify for professional certifications, this approach begins to crumble.
The traditional model is disrupted when AI can perform tasks previously done by humans acting as “machines of a kind.”
Klein’s parental concern resonates with me as a mother: “I don’t know what the economy will want from them in 20 years… how do I know how they should be educated?”
Winthrop responds by advocating for “flexible competencies to navigate a world of uncertainty,” emphasising motivation and engagement to continually learn new things, creating “go-getters” and “way finders.”
This approach seems far more valuable than viewing education merely as job training for skills that may soon be obsolete.
The Engagement Crisis
What hit me most from Winthrop’s research is the alarming statistic that only about one-third of students are deeply engaged in school, meaning two-thirds are not.
In my experience, this disengagement crisis precedes AI, but I worry that generative AI will dramatically worsen the problem.
Winthrop elaborates on four modes of student engagement from her book “The Disengaged Teen”:
🚗 Passenger mode — Students who are coasting, doing the minimum required. They might get good grades but are “bored to tears” and have “dropped out of learning.”
🥇 Achiever mode — Students focused on perfect outcomes but not necessarily engaged with learning for its own sake.
👊 Resistor mode — Students who avoid and disrupt learning.
🏕️ Explorer mode — Students who genuinely love what they’re learning, dig in, and are proactive.
I’ve observed that AI poses a particular threat to students in passenger mode. It effectively allows them to outsource their learning entirely. As Klein puts it, “You’ve basically hired your own fill-in student who can help you coast.”
This strikes me as the educational equivalent of hollowing out: maintaining the appearance of learning while emptying it of substance.
Winthrop’s concern that AI will create a “frictionless world for young people” resonates with me. She worries this will hinder the development of essential “muscle” for doing hard things and the neurobiological wiring for focus, attention, effort, and connecting ideas.
I share this concern. Learning requires effort and struggle, and AI threatens to remove the productive friction necessary for cognitive development.
Creating Ethical Guardrails
The interview suggests several approaches to addressing these challenges:
Contain AI systems with clear guardrails. Winthrop emphasises we “cannot just bring commercial tech into our schools and hope it will solve these problems.” Companies developing educational AI should prioritise social good over profit, an area “where regulation and government could and should step in.”
Rethink learning experiences. Rather than banning AI outright, schools should prioritise experiences that cultivate motivation, critical thinking, and creative problem-solving.
Consider schools as screen-free spaces. Both Klein and Winthrop express concern about the “catastrophic experiment” with screens and children. Schools could become “rare screen-free oases” focused on human interaction and developing core capacities.
Develop new assessment methods. We should move beyond traditional grades to evaluate student agency and engagement. As Winthrop puts it, “Schools are not designed to give kids agency. Schools are designed to help kids comply.” Parents should assess whether their child can reflect, identify interests, and pursue information independently.
The Promise and Risks of AI Tutors
As a part-time student, I’m obviously intrigued by AI tutors. Klein suggests, “A.I. gives you more tutors than there are children,” potentially tailoring education to individual learning styles.
Winthrop acknowledges this potential, citing a Nigerian trial where an AI tutor helped children learn “two years of average English” in just six weeks, and highlighting benefits for neurodivergent students without specialist access.
However, I share her caution against students sitting “in front of an A.I. tutor alone for eight hours.” She envisions AI instead automating administrative tasks, freeing teachers for human interaction.
What resonates most is Winthrop’s reminder that “kids learn in relationships with other humans. We’ve evolved to do that.”
And I have to agree. Teachers remain essential for fostering relationships and understanding different perspectives.
Lessons from the “Screens and Phones Debacle”
I believe that the parallels between AI and the earlier introduction of screens into education are impossible to ignore.
Klein and Winthrop view widespread screen adoption as a “catastrophic experiment” with negative consequences. Winthrop stresses we “cannot take a wait-and-see approach again” with generative AI.
And I think these lessons are essential:
Only use generative AI for clear problems. Winthrop’s dismissal of using AI for parent-child wellbeing conversations as “crazy” is refreshingly sensible.
Be cautious of commercial motivations. Tech companies race to gain student allegiance — “Google just made Gemini available for kids… racing to get the allegiance of young kids.” Commercial interests often masquerade as educational innovation.
Consider schools as screen-free spaces. Klein imagines “screen-free oases” for developing deep attention and reflection. In my studies, I’ve found boundaries around technology often prove more effective than controlling its implementation.
An Uncertain Future
The Klein-Winthrop conversation underscores one fundamental challenge: we’re preparing children for an unpredictable future using tools whose impacts remain unclear.
I’m fully convinced by Winthrop’s argument that parents should focus less on grades and more on whether children develop learning agency and adaptability. The “straight line: All A’s equals good job” is becoming “much more complicated.”
Education in the AI age must shift from knowledge transmission to developing flexible competencies and the capacity to navigate uncertainty. This requires fundamentally rethinking not just how we teach, but why we teach. That’s a challenging but essential task for educators, policymakers, and parents alike.
Their conversation offers no easy answers, but provides a thoughtful framework for approaching these complex questions.
Use my free gift link to read the full article or listen to the Ezra Klein Podcast: https://www.nytimes.com/2025/05/13/opinion/ezra-klein-podcast-rebecca-winthrop.html?unlocked_article_code=1.HU8.50WR.xeEU_JMPI8d8&smid=url-share
If you find value in these explorations of AI, consider a free subscription to get new posts directly in your inbox. All my main articles are free for everyone to read.
Becoming a paid subscriber is the best way to support this work. It keeps the publication independent and ad-free, and gives you access to community features like comments and discussion threads. Your support means the world. 🖤
By day, I work as a freelance SEO and content manager. If your business needs specialist guidance, you can find out more on my website.
I also partner with publications and brands on freelance writing projects. If you're looking for a writer who can demystify complex topics in AI and technology, feel free to reach out here on Substack or connect with me on LinkedIn.