Over the past few days at Davos, Elon Musk, Dario Amodei, Demis Hassabis, and Ray Kurzweil (on the Moonshots podcast) have all offered variations on the same definition of AGI: intelligence that matches or exceeds the combined cognitive capabilities of the top humans across all fields.
Highlights
Yes, the companies shaping our technological future are planning for a world where machine intelligence surpasses collective human expertise within a decade—possibly within three years. And education? We’re still arguing about whether students should be allowed to use ChatGPT on their homework.
If cognitive labor is automated, what is education for? Human flourishing? Democratic participation? Social cohesion? Psychological resilience? Each answer implies radically different curricula.
What does it mean to be a “teacher” when the student’s AI assistant knows more than the teacher ever will? What is the teacher’s irreducible function? Facilitator? Coach? Therapist? Ethical guide? Witness to human struggle and growth? And how do we attract, train, and retain people for a role that hasn’t been defined yet?
Human cognitive development evolved over millions of years without AI. What happens to developing minds that grow up with superintelligent assistants from age three? What capacities atrophy? What new capacities emerge? Are we even studying this?
If citizens cannot evaluate whether their AI advisors are giving them good information, what happens to democratic deliberation? How do we maintain meaningful human agency over collective decisions when the analysis is beyond human comprehension?
Young people have always struggled with “Who am I?” and “What am I good at?” In a world where AI can be better at everything academically, how do humans develop a sense of competence, contribution, and meaning?
Who Should Lead This?

Right now, the honest answer is: almost no one. The AI companies are too busy building the systems to think carefully about educational implications. Universities are too slow and too captured by existing structures. Governments are multiple cycles behind. Traditional education researchers are still debating whether AI is a threat or an opportunity, a framing that will seem quaint in five years.

That leaves a handful of us on the margins: debate coaches, educational reformers, AI researchers who care about human development, philosophers who haven't given up on practical questions. It's not enough. We need institutional capacity. We need research programs. We need foundations willing to fund speculative but essential inquiry. We need educational leaders with the courage to ask questions that have no good answers yet. And we need all of this within the next two to three years, before the systems arrive that will make our current debates obsolete.