We are glad to share another guest post from Tom Rachman. In this essay, Tom explores the human motivation to learn, the evolution of modern education systems, and how both could evolve with more powerful AI. Please let us know your thoughts & critiques. Like all the pieces you read here, it is written in a personal capacity. You can read Tom’s earlier essay on how AI may affect human behaviour, here.
A baby giggles on a picnic blanket in the park, her gaze darting around, before she blinks at a dazzle of sun. The child is still oblivious to the name of her planet, or that dinosaurs once roamed here, or that the blue above has black on the other side. But her lifelong pursuit—the project of all humanity—is underway: learning.
Only, what for?
Today (and especially tomorrow), children will blink into existence as the latest joiners of a once-superlative cognitive tribe, now somewhat diminished. In the talent we admire most, and rely on entirely, we’re falling behind machinery. When people feel ineffectual, they plunge in mood and effort. At the margins, you glimpse The Great Dejection already.
“I’ve grown not to entirely trust people who are not at least slightly demoralized by some of the more recent AI achievements,” Tyler Cowen said, while Ethan Mollick remarked: “If you haven’t had at least a minor crisis (What does this mean for my job? What does it mean for my kids’ jobs? What does it mean to think?) you probably haven’t used AI enough.”
Such anxiety feeds into a thought haunting AI progress: What are we for? It’s a species-level interrogation that helps explain why philosophers find employment in the tech sector nowadays. The public—once they reckon with what’s coming—may prefer therapists. Thankfully, chatbots can listen to our woes.
The emotive response is to condemn technology, as if it might be stopped. More pragmatic is to consider what it reveals. For instance, if students are cheating with generative AI, does this suggest that education’s objectives have become misaligned with its incentives?
Today, its main goals are job-preparation, socialization, and wellbeing. Yet in all three, education falters. Consider wellbeing: the mental health of school-age children has deteriorated for more than a decade. Socialization seems an elusive goal too, now that “real life” is screen life for many. And job-preparation is a precarious promise, given how many occupations are contingent on what AI does to the world.
Facing all this, educational policy tends to tinker with the What of learning (curriculum) and fret about the How (methods). But it’s the Why that demands its boldest recalibration since the Enlightenment.
HOW WE GOT HERE
Socrates brought a cup of poisonous hemlock brew to his lips, its whiff of mouse urine pervading his nostrils. He swallowed, then paced till his feet grew numb, whereupon he stretched out, awaiting the consummation of a death sentence for corrupting Athenian youth with his teachings. Reclining there, he embodied one last Socratic lesson: that education is a matter of control and ethics—the original alignment problem.
Society creates intelligent agents (its young), and aspires to design their behaviour. But how to protect against data-poisoning? Which reward-functions to set? And how to ensure that those agents never go berserk?
In prehistory, education meant trailing after family, absorbing skills, myths and hearsay about the larger world. But the technology of writing led to formal schooling. Sumerians established eduba to train scribes in cuneiform, Ancient Egypt constructed per-ankh houses of learning, the Islamic Golden Age saw the flourishing of kuttab schools, Medieval Italy developed scuole d’abaco, and Aztec Tenochtitlan ran calmecac for the nobility.
Technology jolted learning afresh when the printing press multiplied the quantity of information and diffused it, weakening institutional control over knowledge. The Industrial Age saw further updates, with more machinery requiring more skilled workers, and an engineering ethos infiltrating classrooms. “Teach these boys and girls nothing but Facts,” the data-processing schoolmaster, Thomas Gradgrind, says in Dickens’ novel Hard Times (1854). “You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.”
To gather the Facts of education itself, testing became standardized, tracking pupil progress, and sifting the young according to their apparent talents. Standardized assessments, besides stressing children, became a debatable proxy for learning. Orwell recalled one of his teachers yanking pupils’ hair, kicking their shins, and making them plead with hands raised to affix dates to wars of which they knew nothing. “The whole process was frankly a preparation for a sort of confidence trick,” he wrote. “Your job was to learn exactly those things that would give an examiner the impression that you knew more than you did know, and as far as possible to avoid burdening your brain with anything else.”
Foucault noted the resemblance of schools to prisons, characterizing them as institutions to regulate and enforce control. But, while it’s naïve to ignore power, it’s naïve to see only power. People also wish children to thrive, and they care about strangers’ offspring too. During the Enlightenment, this humanistic vision seeped into educational theory, notably by the hand of Rousseau, who regarded children as virtuous creatures that society befouls. “All is good upon leaving the Maker’s hands; all degenerates in the hands of man,” he wrote in his treatise on education, Émile.
Amid the bloodshed of the Napoleonic wars, a disorganized but kindly Swiss educator sought to apply Rousseau’s ideals. Johann Heinrich Pestalozzi—brow furrowed in a muddle of compassion and anxiety—stood before 80 uneducated orphans, far too numerous to teach at once. So he had them draw, write, and learn by their own propulsion. “It quickly developed in the children a consciousness of hitherto unknown power, and particularly a general sense of beauty and order,” he wrote. “It was the tone of unknown powers awakened from sleep; of a heart and mind exalted with the feeling of what these powers could and would lead them to do.”
Here was a fresh Why for learning: the discovery of beauty external and powers internal. This educational ideal spread via Pestalozzi’s German acolyte Friedrich Fröbel, who equated school to nourishing a garden of children, so named it Kindergarten. John Dewey’s progressive movement followed, as did Steiner’s Waldorf education, and the Montessori schools. Gradually, the humanistic Enlightenment Why (flourishing) merged with the mechanistic Industrial Why (training), a sometimes-awkward marriage that persists today.
Into this tense union, the internet barged. Learning had never been more available; writing and speech boomed. Yet peculiar changes were afoot, with a range of human cognitive metrics falling from around 2012, arguably because the deluge of digital inputs overwhelmed our minds, turning humans into a race of scatterbrains. Book-reading plummeted, while teenagers’ performance in science, reading and math dropped across the industrialized world, with many reporting an inability to concentrate. Adults were scoring lower in numeracy and literacy too. Then chatbots arrived.
THE HOMEWORK APOCALYPSE
According to a recent survey, 92% of British undergraduates are using AI, with nearly all doing so for assignments. Videos circulate online showing how to conceal homework fraud with AI apps that “humanize” chatbot text and evade educators’ counterintelligence apps, which seek out dubious prose. It’s an arms race that leaves teachers in despair, not least because they’re losing.
The dilemma is this: Do you hold proudly to the educational values of before? Or decide that current testing is measuring little but the past? In recent generations, many kids who seemed maladapted because of tech obsession—early internet hackers, all-night gamers, social-media addicts—ended up succeeding in business, the arts, politics. Early adopters rule the world today. So maybe AI cheating is adaptive.
Moreover, the will to cheat one’s own learning is not the fault of tech. The American writer John Warner spent years posing the following hypothetical question to his college classes: If I offered you an ‘A’ but you had zero work to complete, no more classes, and could never tell anyone—would you go for it? In one course, more than 90% took the deal. That was back in 2013. AI may be satisfying a pent-up demand.
The postwar expansion of higher education made the degree a minimum admission ticket to most professions. When student numbers ballooned and tuition costs rose, the educational Why for many wasn’t learning but earning. Most students still want an education. But even the most earnest face a collective-action problem: If your peers are all generating impeccable AI assignments, you may be penalized for “humaning it.”
Educators cannot dump assessments, though. They need metrics to track teaching efficacy, to motivate, and to propel students into roles where they’d thrive. Ironically, technology is prompting a retreat to the past in some quarters, with oral examinations—which Oxford and Cambridge phased out starting in the 18th century—now back in vogue.
The tricky part is that cheating and learning with AI may be proximate. While it’s dishonest to submit a chatbot essay as your own, is it wrong to pose your essay question to a chatbot, hear its insights, request that it pull up primary sources, assign its deep-research function to create a study guide, and pore over this, digging into the most-pertinent primary sources yourself, then posing follow-up questions to the AI? That process might be more instructive than battling with poorly written academic texts.
AI TO THE RESCUE?
A World Bank pilot program in Nigeria involving AI tutoring claimed gains so striking (two years’ learning in six weeks) that they seem unlikely to replicate. But another study, of an AI-powered math tutor in Ghana, also saw meaningful success, claiming learning benefits equivalent to an extra year of study in eight months.
Personalized AI tutors may offer timid students the chance to pose questions they’d be shy to ask before their peers, or that the overtaxed teacher might lack the time (or ability) to resolve. Educational AI also allows students to probe a source, seeking clarifications or sharpening their comprehension, perhaps even voice-chatting with a virtual Darwin. AI apps could track individuals’ learning in real-time rather than punctuating courses with tests, allowing educators to identify and respond dynamically to students’ progress. And data-tracking at scale could generate a finer conception of human learning than social-science methods have yet produced, such that machine-learning helps humans learn how humans learn.
AI might help with testing too. Oral examinations are far more time-consuming for human teachers than written tests, but voice AIs could quiz any number of students, evaluating their understanding, and issuing marks alongside audio transcripts, score rationales, and future-learning advice. Voice-assistants might even salvage the at-home essay, obliging their purported authors to discuss the contents, assessing students’ understanding of what they “wrote.”
Many teachers remain worried about where this is heading, envisioning classrooms where students gape at AI interfaces, barely interacting with their peers, let alone human teachers, who are kept around as behaviour police. Above all, educators suspect that developers misunderstand humans, with edtech companies building tools around what tech can do, not what students need.
Previous digital tools promised transformative effects too, professing that the internet would bring elite education to all, irrespective of where on Earth and who on earth they were. Yet many tech tools succeeded only for the tiny fraction of students who used them as prescribed (typically the top of the class anyway), a phenomenon known as “the 5 percent problem.”
AI can worsen learning too, as shown by research on German university students who learned coding with the help of chatbots. Those who sought AI explanations showed notable gains, but those who had the chatbot complete exercises undermined their own learning. More surprising is that AI might make us stupider while believing ourselves smarter: users tend to overestimate how much they’ve learned with AI, mistaking machine intelligence for their own.
An even more alarming prospect is cognitive atrophy, where users offload critical thinking and creativity to AI. If humans are able to pursue higher-order thinking by dumping intellectual drudgery onto machines, that would be fine. However, what constitutes higher-order thinking is debatable. When scientists used AI at a materials-discovery lab, they produced 39% more patent filings, yet this meant handing the idea-generation to machines, downgrading human creators into idea-judges. For 82%, job-satisfaction fell: their hard-earned skills seemed unwanted, their inventions second-best. A common pacifier is that humans collaborating with AI will be stronger than either alone: the “centaur model.” They said that of chess once. But these days, Magnus Carlsen would only slow down an AI grandmaster.
Technology is about making what’s hard easier. And learning is hard, even aversive. Yet the practice is not like doing the laundry, where we lose nothing by cramming filthy socks into a machine, and retrieving them clean. Learning is laborious or there is no learning. And labor is impossible to motivate if we don’t see its purpose.
This circles back to the question haunting this era: What are we for? The tautological response is that humans are for tasks that humans want other humans to do: You don’t fancy walking into a grief counsellor’s office only to find a robot. Yet algorithmic aversion—our preference for humans to make key judgments, even when algorithms perform better—seems likely to fade as AI becomes clearly superior, then commonplace. Until recently, sitting in a taxi that drove through San Francisco with nobody at the wheel would have seemed like a horror-movie scene. The real-life horror may be when no human is needed to drive anything.
LEARNING IN THE AGE OF MACHINE LEARNING
So what?
Machines can do our duties, feeding and clothing and cuddling us, leaving humans to sail yachts and paint watercolours. We’d never need to study again (but could for kicks). You hear such predictions, whose only flaw is the entirety of human history, a saga shaped by our cognitive hunger and the restless drive for comparative advantage. Artificial intelligence will do plenty, but not erase what evolution wrote.
Yet evolution itself might help explain our predicament, why technology satisfies our wants while producing effects we regret. Biological evolution set our inner clock, with neuronal firings like a cognitive speed-limit. Yet technological evolution keeps speeding up, the clock-hands spinning faster and faster, such that humans perceive only a blur now. We’re inundated with inputs at the speed of computational time, trying to keep up in biological time. And we’re going mad from it.
“As computational systems accelerate while biological rhythms remain stubbornly constant, humans face an insurmountable temporal divide,” Nicklas Berild Lundblad explains, setting forth his Bifurcation of Time theory. “We appear destined for increasing friction between silicon speed and cellular patience, with our institutions caught in the crossfire. But this conclusion misses something profound: the emergence of artificial intelligence as a temporal mediator.”
Already, large-language models are doing this, digesting the (humanly) indigestible immensity of data, and converting it into chatbot responses intelligible at the pace of human cognition. AI will keep evolving, becoming our extrasensory sensors and interpreters of the world while remaining fluent in human time. This points to a future role for humans, where action is not our highest calling; judgment is.
“In the judgment economy, value derives from qualities that resist technological acceleration—discernment, wisdom, creativity, and ethical reasoning,” Lundblad writes. “These capabilities aren’t necessarily improved by moving faster; often they benefit from deliberate slowness.”
Humanity can’t compete on the factory floor, so walks upstairs to management, setting the objectives, designing human-AI labour relations, evaluating the outcomes. This becomes a new Why for education: to develop the human discernment and ethical reasoning to govern our new powers rather than letting them govern us.
But how to teach that?
SPECULATIVE IDEAS
The Great Dejection—a worsening mood over human cognitive recession during the AI boom—brings a risk: passivity. If people see little worth in educating themselves, perhaps they stop bothering, surrendering further competence to machines, and forfeiting human decision-making, much as the AI-safety movement long feared. Therefore, education policy should target motivation, seeking to drive learning across the lifespan.
Here are three possible approaches, taking inspiration from self-determination theory, which identifies three psychological needs that motivate us: autonomy (feeling in control of one’s actions), competence (feeling capable and effective), and relatedness (feeling connected to others):
Choose Your Own Adventure. (Autonomy)
Learning should be reframed as a personal asset, earned through self-directed R&D. To initiate this, schooling could set “choose-your-own-adventure” hours, during which even the youngest pupils embark on personal enterprises in any area of interest, including the unacademic. The only condition would be that the student pursues an adventure—that is, adds to their knowledge and skills.
Pupils could use AI to help brainstorm the steps of the adventure plan, allowing them to stay in charge without the adult supervision that can puncture motivation. Nor should teachers mark the adventures. Rather, pupils mark the AI, assessing how well it helps them achieve their stated goals, and noting where the objectives fell short, which would grant the pupil insight into managing AI collaborations in the future. The adventure app should gather insights on each child’s learning strategies and efficacy too, providing feedback to help them learn how they learn.
Behavioural AI might help address student wellbeing here, employing data-analytics to detect which activities worsen a particular child’s determination and mindset, then adapting recommender systems to discourage these. That could also help resolve an enduring frustration: that years into the decline of children’s mental health and test scores, we still dispute the causes.
Students who relish autonomy might unlock more, while those who need (or prefer) closer guidance could select a more directed curriculum. As for teachers, they could upgrade from information-crammers to adventure-mentors, focusing on the meaningful part of their jobs: figuring out each learner, and inspiring them. In higher education, the “choose-your-own-adventure” approach could transition into a fully customizable degree: nothing but electives.
Multiply Your Talents. (Competence)
Lifelong occupations commonly spring from absurd factors such as location, wealth, and fluke—all mediated by the decision-making of adolescents whose prefrontal cortices have yet to mature. By this haphazard process, competencies (or their absence) become the boundaries of one’s life. Even when workers could expect a relatively predictable employment future, this career process was often cruel and foolish. Now that predictable work paths are dissolving, we may have a chance—and a need—to make credentialing more adaptive.
Micro-credentials could become source-agnostic, available outside traditional institutions, and open to all ages, avoiding the cost and stigma of returning to school after one’s typical schooling years. However, source-agnostic credentialing should take care not to undermine the institutions of higher education, which still produce inspiring learning communities, and spark interdisciplinary creativity (not to mention the plethora of non-educational benefits, such as varied social exposure and friendships).
The system should resist the tech-era tendency to worsen isolation. This could be done by requiring group work, either virtually or locally, for micro-credentialing. Also, human tutors could supplement AI education, allowing students to discuss and digest their learning with a person.
AI might help individuals choose their credits through data-analytics of public information such as job ads, economic indicators, and other correlates of future demand, permitting users to align their learning with purpose. Intelligent systems could also connect a person’s existing skill/knowledge assets with others’ needs, whether professional or not.
Micro-credentials might even be assigned to learning experiences such as extended foreign travel or volunteer work, or could be awarded for one’s decades alive, such that competence is less a framed diploma from youth than a photo roll of accumulating wisdom.
The Wisdom Exchange. (Relatedness)
Every person amasses a repository of wisdom over the course of life, but much of it goes untapped. AI could help build social platforms for wisdom-sharing, linking human experts to human learners, perhaps with a serendipity option for those wanting randomized knowledge discovery.
Wisdom exchanges should be voluntary, serving both sides of the equation: instruction for one, the gratification of purposeful assistance for the other. Each exchange could include a reciprocal element, meaning the expert turns the tables on the learner at the end—say, a retired politician at first supplying insights to a young activist, then asking for a lesson on bewildering emojis. To encourage respectful interactions, the system should include reputation ratings, as with taxi apps.
Wisdom-exchanges could also employ AI for learning across borders, conducting real-time language interpretation between participants. A responsive system could also present relevant context during interactions, and offer post-exchange fact-checking and takeaways.