<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AI Policy Perspectives : Essays]]></title><description><![CDATA[Long form writing on big questions ]]></description><link>https://www.aipolicyperspectives.com/s/essays</link><image><url>https://substackcdn.com/image/fetch/$s_!XGVU!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa24053ba-9bcb-4c21-a969-fe02656ce349_585x585.png</url><title>AI Policy Perspectives : Essays</title><link>https://www.aipolicyperspectives.com/s/essays</link></image><generator>Substack</generator><lastBuildDate>Fri, 01 May 2026 14:44:50 GMT</lastBuildDate><atom:link href="https://www.aipolicyperspectives.com/feed" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><webMaster><![CDATA[aipolicyperspectives@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aipolicyperspectives@substack.com]]></itunes:email><itunes:name><![CDATA[AI Policy Perspectives]]></itunes:name></itunes:owner><itunes:author><![CDATA[AI Policy Perspectives]]></itunes:author><googleplay:owner><![CDATA[aipolicyperspectives@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aipolicyperspectives@substack.com]]></googleplay:email><googleplay:author><![CDATA[AI Policy Perspectives]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI Agents Running the State]]></title><description><![CDATA[What could possibly go wrong?]]></description><link>https://www.aipolicyperspectives.com/p/ai-agents-running-the-state</link><guid 
isPermaLink="false">https://www.aipolicyperspectives.com/p/ai-agents-running-the-state</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Wed, 15 Apr 2026 09:50:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CymV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CymV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CymV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 424w, https://substackcdn.com/image/fetch/$s_!CymV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 848w, https://substackcdn.com/image/fetch/$s_!CymV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!CymV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!CymV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png" width="1456" height="795" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:795,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8904267,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/194174723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CymV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 424w, https://substackcdn.com/image/fetch/$s_!CymV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 848w, https://substackcdn.com/image/fetch/$s_!CymV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!CymV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Waiting for an AI helper. (Credit: Gemini)</figcaption></figure></div><div class="callout-block" data-callout="true"><p><em>&#8220;Public services&#8221; include everything from teachers to the trash, from roadwork to permission for a tree house. Much seems routine, but plenty is at stake. This makes politicians hesitant to risk an overhaul, leaving the system creaking and the paperwork mounting. </em></p><p><em>Last October, a provocative proposal emerged. 
<a href="https://agenticstate.org/">The Agentic State</a> conjured a vision of officialdom transformed, replacing outdated procedures with a new system of AI helpers. This fledgling project offers both a blueprint and a promise of assistance to governments around the world.</em></p><p><em>But what if the vision were blind to how this could go awry? <a href="https://simoneparazzoli.me/">Simone Maria Parazzoli</a>, a co-author of the paper, and <a href="https://www.linkedin.com/in/omerhanbilgin/">Omer Bilgin</a> of <a href="http://www.deliberaide.com">deliberAIde</a> decided to critique their own ideas, seeking pitfalls in hopes of averting them.</em></p><p style="text-align: right;">&#8212;Tom Rachman, <em>AI Policy Perspectives</em></p></div><div><hr></div><h4><strong>By Simone Maria Parazzoli &amp; Omer Bilgin</strong></h4><p></p><p><strong>Amid the exhaustion of caring for a baby, new parents must deal with everything from bewildering sobs, to erratic feeding times, to the joys of changing a soiled newborn at 3 a.m. The last thing they need is paperwork.</strong></p><p>But what if, when coming home from the maternity ward that first day, they could awaken a government AI voice assistant, tell it the happy news, and hear the following response? &#8220;Congratulations! What&#8217;s the baby called?&#8221; The app would then take care of all the dreary admin, coordinating across agencies, registering the child, and setting in motion the services that this tiny new citizen should enjoy.</p><p>That is one example of how a future &#8220;agentic state&#8221; could simplify, speed up, and improve citizens&#8217; interactions with public services. To be clear, this does not yet exist. 
But projects like this one, <a href="https://oxfordinsights.com/insights/innovation-under-tough-circumstances-ukraines-ai-strategy-in-times-of-war/">envisioned</a> by Ukrainian officials, are more than fantasy, with several countries avidly testing early versions of agentic AI systems.</p><p>While Ukraine works toward the baby example, <a href="https://www.gov.uk/government/news/ai-helpers-could-coach-people-into-careers-and-help-them-move-home">Britain</a> is piloting agent-based support to provide citizens with more tailored help. Meanwhile, <a href="https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai">Singapore</a> is developing governance frameworks for agentic AI, and governments from <a href="https://guides.data.gouv.fr/intelligence-artificielle/le-serveur-mcp-de-data.gouv.fr">France</a> to the <a href="https://www.govinfo.gov/features/mcp-public-preview">United States</a> are ensuring that their public data can be accessed by agents.</p><p>Agentic AI systems&#8212;capable of perceiving, reasoning, and acting with minimal human supervision&#8212;will transform what organizations can achieve. By combining the reasoning of large language models with retrieval, memory, and tool use, agentic AI can automate complex tasks. For governments, whose core work consists of high-volume, structured administrative processes, this could make services more efficient, timely, consistent, and fair, while lowering costs.</p><p>Consider a citizen looking to start a small business. An agentic system&#8212;instead of requiring the entrepreneur to individually navigate zoning boards, tax authorities, and regulations&#8212;could autonomously reconcile these requirements. 
The larger promise is a shift from just <em>doing things right</em> (optimizing for procedure-following) to <em>doing the right things</em> (pursuing outcomes that citizens truly want).</p><p>The <a href="https://agenticstate.org/">Agentic State</a> vision paper&#8212;supported by The World Bank and the Global Government Technology Centre Berlin&#8212;was the first effort to systematically map the opportunities of agentic AI adoption for governments. This was not an academic exercise: 21 leaders across 15 countries contributed, including ministers and chief technology officers preparing to lead this transition.</p><p>In this vision, AI agents are a means to manage <em>complexity</em> and <em>scale</em>, while humans develop <em>strategy</em>, exercise <em>judgment</em>, and hold <em>accountability</em>.</p><p>Several governments have integrated official chatbots into their public services, but most of these merely provide conversational guides to administrative procedures. A few pioneering countries are starting to move beyond that. Ukraine, for instance, is turning chatbots into agentic assistants. Specifically, its Diia.AI assistant can retrieve users&#8217; data from connected registries and generate official documents such as income certificates, while also providing certified information drawn from tax, land-registry, and pension records.</p><p>The United Kingdom is also exploring agentic interactions via <a href="https://insidegovuk.blog.gov.uk/2025/12/16/gov-uk-has-entered-the-chat-our-vision-for-gov-uk-chat/">GOV.UK Chat</a> (inspired by Diia.AI), including a pilot program to support job seekers that transforms a static digital portal into an active assistant, matching users&#8217; skills with available opportunities.</p><p>Yet trends and optimism are not enough for success. The agentic state vision rests on key assumptions. 
What if they&#8217;re wrong?</p><p>This article presents a &#8220;red-teaming&#8221; exercise&#8212;a stress test of this vision&#8212;that identifies six core assumptions, along with scenarios that could emerge if they don&#8217;t hold true, and guardrails to avert such failures.</p><div><hr></div><div class="callout-block" data-callout="true"><h4><strong>Assumption 1: </strong><em><strong>AI Agents Become More Capable and Reliable</strong></em></h4></div><p>Agents can already perform rudimentary planning, tool use (e.g., searching the internet, using calculators, sending emails), and multistep task execution. Frontier labs are <a href="https://www.technologyreview.com/2025/01/11/1109909/anthropics-chief-scientist-on-5-ways-agents-will-be-even-better-in-2025/">betting</a> <a href="https://blog.samaltman.com/reflections">heavily</a> on agents, making it plausible that systems capable of managing complex and large-scale administrative tasks will emerge soon.</p><h4><strong>Failure Scenario: </strong><em><strong>The Technology Falters</strong></em></h4><p>Governments reorganize around agentic execution, but systems never become reliable enough for public administration. The demos look strong, but real cases fail on edge conditions, and require constant human correction. 
The agentic layer becomes only superficially competent, with layers of human intervention underneath.</p><h4><strong>Guardrail: </strong><em><strong>Start Cautiously</strong></em></h4><p>Governments should start with minimal deployments and tightly scoped use cases to validate reliability, develop procedural rigor and organizational competence, and account for technological evolution rather than committing prematurely to large-scale redesigns.</p><div class="callout-block" data-callout="true"><h4><strong>Assumption 2: </strong><em><strong>Agents Can Work Together</strong></em></h4></div><p>The success of agentic systems demands that they&#8217;re able to interact seamlessly, conveying intent, carrying out tasks, and sharing data in an interoperable way. <a href="https://modelcontextprotocol.io/docs/getting-started/intro">MCP</a> (the Model Context Protocol) is emerging as the technological standard for connecting AI applications with external systems.</p><h4><strong>Failure Scenario: </strong><em><strong>Standards Fail to Converge</strong></em></h4><p>Commercial interests diverge, establishing competing protocols, while government departments end up using AI systems that cannot communicate with one another. When a citizen&#8217;s request requires action from multiple agencies, the process breaks down.</p><h4><strong>Guardrail: </strong><em><strong>Officials Insist on Shared Protocols</strong></em></h4><p>Governments should make interoperability a condition of adoption, participating in the cross-sectoral <a href="https://aaif.io/">bodies</a> and forums where these standards are being shaped, funding the development of shared agentic interfaces and other agent-specific standards, and mandating non-proprietary protocols in procurement. 
<a href="https://www.aipolicyperspectives.com/p/the-past-and-future-of-ai-standards">Standards</a> rarely emerge by accident, but they may emerge when powerful governments treat them as a priority.</p><div class="callout-block" data-callout="true"><h4><strong>Assumption 3: </strong><em><strong>Organizations Will Adapt</strong></em></h4></div><p>To adopt and employ agents effectively, organizations must rethink their processes, roles, and incentives. They need to adapt their practices continuously to keep pace with a changing technological landscape.</p><h4><strong>Failure Scenario: </strong><em><strong>The Status Quo Prevents Change</strong></em></h4><p>Agentic AI adoption outpaces organizational change, with citizens and civil servants using agents in an uncoordinated manner long before official programs catch up. Local practices harden into path dependence before common standards emerge. The state becomes more productive at producing bureaucracy, not societally beneficial outcomes.</p><h4><strong>Guardrail: </strong><em><strong>Redesign Processes Before Automating Them</strong></em></h4><p>Agents should only enter workflows that have been simplified, decomposed, and restructured to minimize approval layers and handovers. Governments must treat adoption as a continuous discovery process. They should invest in common evaluation templates, reusable components, and a cross-agency repository of lessons, so that what works in one place can travel before what does <em>not</em> work becomes entrenched. </p><div class="callout-block" data-callout="true"><h4><strong>Assumption 4: </strong><em><strong>Private Adoption of Agentic AI Will Be Rapid</strong> </em></h4></div><p>Many companies are <a href="https://sloanreview.mit.edu/projects/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai/">betting</a> on an agentic future. 
Firms are experimenting with internal copilots and autonomous customer flows, while frontier AI companies advance core models, architectures, and capabilities, and cloud providers offer the compute needed to deploy agents at scale. This suggests that agents will become commonplace across business, consumer, and enterprise environments, allowing governments to build on tools, infrastructure, and behaviors already spreading across the economy. This assumption rests on projections, though <a href="https://www.aipolicyperspectives.com/p/predicting-ais-impact-on-jobs">evidence</a> remains ambiguous.</p><h4><strong>Failure Scenario: </strong><em><strong>Diffusion Is Slower Than Forecast</strong></em></h4><p>Governments invest as if an agent-saturated economy is imminent, but industry adoption remains narrow, experimental, or ends up costing more than it saves. Public investments don&#8217;t plug into widely used tools and practices, meaning that citizens find agentic interfaces in government before they&#8217;re normal elsewhere. 
The state ends up bearing political and institutional costs without the stabilizing effects of private-sector diffusion.</p><h4><strong>Guardrail: </strong><em><strong>Lower Barriers to Private-Sector Agentic Usage</strong></em></h4><p>Governments can accelerate the development of an agentic AI ecosystem by investing in shared agentic infrastructure&#8212;such as standard ways to access public data, communicate across systems, and carry out authorized tasks and payments&#8212;that lowers integration costs for firms and reduces the risk of differing technological maturity across sectors.</p><div class="callout-block" data-callout="true"><h4><strong>Assumption 5: </strong><em><strong>Citizens Will Prefer Agentic Services</strong></em></h4></div><p>Increasingly, citizens are interacting with and relying on AI tools, but <a href="https://mbs.edu/-/media/PDF/Research/Trust_in_AI_Report.pdf?rev=0ee82285b2b0439bba524dbddc58214a">many do not trust them</a>. For governments to integrate AI agents into workflows and services, citizens must accept and support the roles that agentic systems can play, finding them sufficiently trustworthy, reliable, fair, convenient, and accountable.</p><h4><strong>Failure Scenario: </strong><em><strong>The Public Rejects Automation</strong></em></h4><p>A single notable failure, or an accumulation of failures, turns the public against agentic systems and convinces many to opt out. Citizens judge automated decisions as opaque, illegitimate, and untrustworthy, and suspect that automation worsens <a href="https://arxiv.org/abs/2510.16853">inequality</a>, with privileged citizens able to employ highly capable personal agents to navigate bureaucracy better than those relying on basic tools. 
The government is forced to run two systems&#8212;agentic and human&#8212;and neither meets expectations.</p><h4><strong>Guardrail: </strong><em><strong>Mandate Transparency</strong></em></h4><p>Governments must make agent integrations into government processes as legible as possible, furnishing explanations of decisions and publishing evaluation results on agent fairness and performance, while detecting patterns of systemic bias or unequal benefit distribution based on citizens&#8217; technological access.</p><div class="callout-block" data-callout="true"><h4><strong>Assumption 6: </strong><em><strong>Human Oversight Will Evolve</strong></em></h4></div><p>For AI agents to act with functional autonomy within government processes, oversight frameworks <a href="https://arxiv.org/pdf/2506.04836">must adapt</a>, moving from mandatory human review and approval of everything (human-in-the-loop) to intermittent oversight (<a href="https://link.springer.com/rwe/10.1007/978-981-97-8440-0_75-1">human-on-the-loop</a>). This evolution increases speed and efficiency while reducing bottlenecks, with humans intervening only on edge cases. There is precedent for such adaptation: governments adapted regulation to cloud computing, e-identities, and AI-driven decision support systems.</p><h4><strong>Failure Scenario: </strong><em><strong>Regulation Never Updates</strong></em></h4><p>Every agentic action requires human verification; every decision must be justified through mechanisms designed for old chains of accountability. Agents can draft, but cannot act. Compliance and procedural costs rise as institutions retrofit old controls onto new AI processes. 
The result is high bureaucracy and low autonomy: an <em>agentic state</em> in theory, a <em>copilot state</em> in practice.</p><h4><strong>Guardrail: </strong><em><strong>Sandboxes to Test Oversight</strong></em></h4><p>Governments should establish controlled environments that allow policymakers, developers, and civil society to collaborate and gather empirical evidence on what forms of oversight are adequate and best fit different kinds of agentic deployments, reducing uncertainty before codifying rules at scale. They should explore this early, much as Singapore has done through its <a href="https://www.imda.gov.sg/-/media/imda/files/about/emerging-tech-and-research/artificial-intelligence/mgf-for-agentic-ai.pdf">Model AI Governance Framework for Agentic AI</a>.</p><p><strong>Soon, agentic government will be more than optimism and testing.</strong> A vanguard of countries will implement these tools. If those cases produce the kinds of benefits imagined, other countries will flock to join them.</p><p>But momentum is not inevitability. This project depends on assumptions&#8212;about progress, coordination, institutions, norms, and law&#8212;that demand scrutiny before governments rebuild themselves around these new technologies.</p><p>This red-teaming exercise is meant not to argue against the agentic-state vision, but to make it more robust and resilient. The six possible failure scenarios are not mutually exclusive. Several could compound, and some may already be taking shape. 
For instance, reliability has been improving <a href="https://arxiv.org/html/2602.16666v1">much more slowly</a> than accuracy, providing grounds for the technology to falter (Scenario 1), and there are <a href="https://www.adalovelaceinstitute.org/policy-briefing/great-expectations/">signals</a> that the public might reject automation if economic gains and innovation speed are prioritized over fairness (Scenario 5).</p><p>Governments that are serious about improving the state with AI must attend to these risks in earnest now, while the architecture is still being laid. The opportunity is too precious to spurn.</p><p>Agentic AI could make public services considerably faster, fairer, and more responsive&#8212;more so than anything the traditional bureaucratic model has yet delivered. That prize is worth the discipline of preparing for what could go wrong.</p><p><em>For further details on &#8220;The Agentic State,&#8221; check out the original <a href="https://agenticstate.org/paper.html">vision paper</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[How Tech Changed Chess]]></title><description><![CDATA[And why AI won&#8217;t end our games]]></description><link>https://www.aipolicyperspectives.com/p/how-tech-changed-chess</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/how-tech-changed-chess</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Wed, 25 Mar 2026 10:22:44 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!CdTX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CdTX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CdTX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 424w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 848w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 1272w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CdTX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png" width="1024" height="572" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f42add45-d564-4031-a602-e342e4b5c090_1024x572.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:572,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CdTX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 424w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 848w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 1272w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Credit: Gemini</figcaption></figure></div><p><em>From childhood upwards, we play games as a safe (and strangely joyful) way to battle, strategize, even lose without it coming to fisticuffs. Artificial intelligence grew up playing games too, with developers using the structured rules, scoring systems, and win/loss outcomes to train machines to learn, to improve, even to beat us.</em></p><p><em>In chess, bots have been bettering humans for years now. Yet our &#8220;loser&#8221; species still gathers at sunny park tables, in dank school gyms, and online in droves, all in hopes of crying, &#8220;Checkmate!&#8221; The resilience of chess is commonly cited as evidence that&#8212;even if AI surpasses us in various pursuits&#8212;humans won&#8217;t just give up.</em></p><p><em>However, there&#8217;s more to say about the intersection of technology and chess, in particular how the game has evolved with technology, including AI. 
Thankfully, the broadcaster and writer <a href="https://www.aipolicyperspectives.com/p/whats-it-like-to-be-a-bot">David Edmonds</a>&#8212;co-author of </em>Bobby Fischer Goes to War <em>(2004) and editor of the essay collection </em>AI Morality<em> (2024)&#8212;has spent decades observing this, both as a spectator and behind the board himself.</em></p><p style="text-align: right;"><em>&#8212;Tom Rachman, </em>AI Policy Perspectives</p><div><hr></div><p><strong>By DAVID EDMONDS</strong></p><p><strong>Among thousands of tournament games cited in the Batsford book of chess openings, tucked into the top right-hand column of page 235, is an example of how white should </strong><em><strong>not </strong></em><strong>play.</strong></p><p>Explaining the Closed Sicilian Defence, the authors (former world champion Garry Kasparov and the British grandmaster and chess columnist Raymond Keene) spotlight a game in which black is already ahead as early as move 11. Indeed, the player with the white pieces ended up losing. I remember because that player was me.</p><p>That is my humiliating contribution to chess theory: what not to do. The book was published in 1982, and I&#8217;ve barely picked up a pawn in anger in the intervening four decades. But I still follow the chess world, and if there&#8217;s a tournament in London, I&#8217;ll go to watch, spending hours absorbed in the intricacies of the 64 squares.</p><p>As the digital revolution and AI juggernaut move through our lives, we may wonder whether there will still be domains in which humans can continue to find enjoyment and meaning. Chess offers a hopeful case study.</p><p>Chess and AI have had a long relationship. The great forefather of artificial intelligence, Alan Turing, wrote the <a href="https://www.chess.com/blog/the_real_greco/the-original-chess-engine-alan-turings-turochamp">first chess algorithm</a> in 1948. 
The following year, another seminal figure, Claude Shannon, distinguished two ways that a computer could play chess: by brute force, calculating every possible move; or by selective search, like a human.</p><p>Chess also proved a <a href="https://www.researchgate.net/publication/224834166_Is_chess_the_drosophila_artificial_intelligence_A_social_history_of_an_algorithm">favourite way</a> to evaluate AI advancement, both because many key innovators were keen players and because the game&#8217;s mathematical structure and its win/loss conditions created benchmarks for comparing machine progress to human performance.</p><p>A longstanding goal&#8212;seemingly impossible at first&#8212;was to outclass the best humans in a game that has near-infinite <a href="https://en.wikipedia.org/wiki/Shannon_number">permutations</a>. Defeating humans at chess became the programmers&#8217; ultimate challenge, like runners seeking to break the four-minute mile or climbers reaching the summit of Mount Everest, both of which proved easier. Finally, in 1997, IBM&#8217;s Deep Blue vanquished Kasparov, the then-reigning world champion. A dejected Kasparov insinuated that there had been human intervention.</p><p>For a while, chess players comforted themselves with the thought that a hybrid combination of human and machine could outwit machine alone. That period has long passed. Today&#8217;s best player, Magnus Carlsen, would be trounced were he to compete in a series of games with my mobile phone.</p><p>In 2017, DeepMind&#8217;s <a href="https://deepmind.google/blog/alphazero-shedding-new-light-on-chess-shogi-and-go/">AlphaZero</a> took machine chess to the next level. While Deep Blue had relied on brute strength with some input from strong humans, AlphaZero was simply programmed with the basic rules, and then trained itself through reinforcement learning. 
In its learning phase, it played tens of millions of games against itself in just a few hours, then crushed the chess engine Stockfish. (Stockfish adapted its methods accordingly, and is now the leading chess engine.)</p><p>World chess champions of the past exuded an aura. Their talents seemed mysterious, supernatural. In part, that&#8217;s because few people, then and now, can comprehend the depth of thought that elite players achieve at the board. When it comes to music, we may never compose like Mahler, but we can appreciate Mahler&#8217;s symphonies. By contrast, we can neither play like Magnus Carlsen nor fully appreciate his games. It&#8217;s for this reason that the Armenian-born grandmaster Levon Aronian once <a href="https://www.prospectmagazine.co.uk/essays/53494/the-lion-and-the-tiger">confessed to me</a> that being one of the world&#8217;s top players was desperately lonely.</p><p>Carlsen has achieved the highest rating of any human in history. And, no surprise, he strikes a confident pose. Yet his strut no longer carries complete conviction. 
To spectators armed with portable chess engines, the chess gods have been humbled.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JeWo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JeWo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 424w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 848w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 1272w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JeWo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png" width="728" height="555" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:555,&quot;width&quot;:728,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JeWo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 424w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 848w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 1272w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">The chess prodigy Samuel Reshevsky playing simultaneous games in 1920 against a variety of whiskery Parisians. Aged 8, he beat them all. (Credit: Creative Commons)</figcaption></figure></div><p>Even so, chess has not dwindled in popularity. On the contrary, more people are playing it than ever. The game received a boost during Covid, when we all hunkered down in our homes, connected by the Internet. Another boost came from the hit Netflix drama, <em><a href="https://en.wikipedia.org/wiki/The_Queen%27s_Gambit_(miniseries)">The Queen&#8217;s Gambit</a></em>. Meanwhile, a younger generation of telegenic chess masters has gained avid YouTube followings, turning commentary and stunts into short-clip entertainment.</p><p>Here are 11 ways that technology has changed chess. The 11th is the most interesting:</p><ol><li><p><strong>Opening Preparation.</strong> The systematic study of chess openings goes back a couple of centuries or more. 
Sequences of opening moves were mapped out&#8212;as in that 1982 book that included my embarrassing loss. But chess engines allow for a depth of opening analysis that was inconceivable in 1982. Nowadays, 25 moves may pass before grandmasters find themselves in unfamiliar territory. Some openings have also been resurrected because engines have shown the positions to be more survivable than previously recognized.</p></li><li><p><strong>Opponent Preparation</strong>. Even in amateur tournaments, players routinely prepare for opponents in an individually tailored way. This is made possible because the games of each opponent are available online.</p></li><li><p><strong>Connectivity.</strong> Fancy a game? There are endless online adversaries willing to take you on, day and night, from India to Iceland, Cape Town to Chicago.</p></li><li><p><strong>No More Correspondence Chess.</strong> There was once a thriving chess scene in which games were played remotely over a long time period&#8212;months, sometimes years&#8212;with moves typically sent by post. How quaint.</p></li><li><p><strong>No More Adjournments</strong>. Historically, world championship games would sometimes stop after five hours to resume later. That can&#8217;t happen anymore, since players might simply identify the optimal continuation with the help of an engine. Time limits now ensure games finish within a single session.</p></li><li><p><strong>Shorter Games</strong>. Many in the online chess audience don&#8217;t have patience for lengthy games. For them, quicker time controls&#8212;Rapid (less than an hour); Blitz (3-5 minutes); or Bullet (under 3 minutes)&#8212;are more thrilling.</p></li><li><p><strong>Different Formats.</strong> Now that computers have shown with such depth which opening sequences are optimal, the early part of a game has been transformed into a feat of memory rather than creativity. 
As a result, Fischer Random (advocated early on by the American former world champion Bobby Fischer) has become increasingly popular. In Fischer Random, the starting position of the major pieces behind the pawns is randomized, making opening homework effectively impossible. It&#8217;s sometimes called Freestyle Chess, or Chess960, because there are 960 possible ways for the pieces to be shuffled.</p></li><li><p><strong>Job Generation.</strong> With a potential global audience, some players can now earn a decent living live-streaming their games, or offering online training.</p></li><li><p><strong>Roasting of Champions</strong>. This is an irksome development. Since chess engines assign an instant numerical evaluation of the position after each move (e.g. +1 means white is better by roughly one pawn), any patzer can see when a grandmaster has blundered, and is free to abuse them in online comments.</p></li><li><p><strong>Cheating</strong>. There have always been cheating accusations in chess. In 1978, the Soviet dissident Viktor Korchnoi claimed that the aides of his opponent, Anatoly Karpov, were using the flavour of the <a href="https://www.bbc.co.uk/sounds/play/w3cszmwf">yogurt</a> handed to Karpov to secretly convey messages. More recently, suspicion (tongue-in-cheek, but taken seriously by online trolls) has been raised that illicit advice was being transmitted via <a href="https://www.bbc.co.uk/news/world-us-canada-66921563">vibrating sex toys</a>. In elite tournaments, grandmasters are now searched before they enter the playing arena, even accompanied to the toilet. Spectators, meanwhile, are prohibited from carrying phones, to prevent them signalling the best continuation. But in online games, cheating is almost impossible to prevent. Platforms try to detect cheats by comparing human moves to the recommendations of top engines. 
But if savvy cheaters consult an engine just once or twice in a game, they may win without being detected.</p></li></ol><p>And so to <strong>the 11th effect on chess: the expansion of human imagination</strong>.</p><p>In the last few years, there has been a slight but detectable shift in grandmaster play, as humans learn from machines, both through gameplay against bots and by using machine insights to prepare for human competition.</p><p>People who don&#8217;t play chess may imagine that what distinguishes strong from weak players is calculating power. And it&#8217;s true that top grandmasters can analyse many moves in advance. But their edge is tougher to articulate. It involves superior pattern recognition, with an intuitive sense for where their pieces should be placed and how a position should advance. Likewise, Mozart <em>felt</em> how a composition ought to develop; his instincts about building tension and creating contrasts were the product in part of having internalized countless musical patterns.</p><p>For chess players, some moves seem ugly. It might feel wrong to shunt a knight to the edge of the board, to break up a pawn structure, or to expose the king. But computers don&#8217;t <em>feel</em> anything. In chess, they care about patterns and the interplay between pieces only to the extent that they&#8217;re relevant to the ultimate objective: victory.</p><p>However, bots don&#8217;t necessarily play robotically. 
They produce moves that astonish and inspire human players, even make them <a href="https://youtu.be/CdFLEfRr3Qk?t=199">laugh</a> with surprise. One famous case of AI invention across the board came in another game, Go, when the <a href="https://deepmind.google/research/alphago/">AlphaGo</a> program, facing one of the world&#8217;s top players, Lee Sedol, produced a move that caused professionals to gasp. &#8220;Move 37&#8221; is still cited with awe, as something a person would never have done, but that worked sublimely.</p><p>Likewise, chess engines regularly expand the imagination of human chess players, pushing beyond the habitual &#8220;correct&#8221; move they&#8217;ve seen many times before or have learned from books of chess theory. AI has even <a href="https://arxiv.org/abs/2510.23772">dabbled</a> in the art form of creating beautiful chess puzzles. And empirical studies <a href="https://www.pnas.org/doi/10.1073/pnas.2406675122">indicate</a> that leading players may pick up new ideas and strategies from machines.</p><p>Machines, in other words, can make humans more resourceful and inventive, breaking down rigid modes of thinking. The implausible becomes plausible. The readily dismissed becomes the carefully considered. This evolution of chess illustrates a broader idea in the development of AI that may prove immensely valuable in science and elsewhere in human endeavour: that how AIs think may help human experts learn <a href="https://arxiv.org/pdf/2502.07586">new ideas</a> themselves.</p><p>In his book <em>The Silicon Road to Chess Improvement</em>, the grandmaster Matthew Sadler argues that chess engines can improve every player, and he documents some of the counterintuitive patterns that humans could pick up from AI. By way of illustration, during a top tournament this January, the Indian grandmaster Arjun Erigaisi (playing against Vladimir Fedoseev of Russia) advanced his pawns in a way that looked reckless. 
In fact, computer analysis indicated he was still ahead after 28 moves. However, he blundered and lost. The danger of learning from a computer is that success may require you to proceed with computer-level accuracy.</p><p>As AI undertakes more activities formerly done only by people, it&#8217;s worth asking why human chess persists&#8212;and will likely continue to do so.</p><p>A Canadian philosopher, Bernard Suits, pointed out in his 1978 book <em><a href="https://books.google.co.uk/books/about/The_Grasshopper.html?id=1LmESO3NBuoC&amp;redir_esc=y">The Grasshopper: Games, Life and Utopia</a></em> that what defines &#8220;games&#8221; is that they involve the voluntary attempt to overcome unnecessary obstacles. Therein lies a defence against AI encroachment. In a market economy, companies aim to remove or overcome obstacles in the pursuit of profit. In games, obstacles have been deliberately inserted as an indispensable feature. What we enjoy in playing chess is testing our cognitive abilities. 
What we enjoy in watching chess is two humans pitting their wits against each other in a socially constructed activity where difficulty enhances enjoyment and satisfaction.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!b-Lb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!b-Lb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 424w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 848w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 1272w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!b-Lb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png" width="1200" height="800" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!b-Lb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 424w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 848w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 1272w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Watching the big game. (Credit: Creative Commons)</figcaption></figure></div><p>There&#8217;s also a narrative element to caring about games. The contest&#8212;whether intellectual or physical&#8212;is absorbing precisely because it involves conscious creatures. In elite chess, there&#8217;s the backstory: the players&#8217; rise, their subsequent ups and downs, their history with specific opponents.</p><p>But watch an engine-against-engine tournament like TCEC (the <a href="https://en.wikipedia.org/wiki/Top_Chess_Engine_Championship">Top Chess Engine Championship</a>), and you&#8217;ll soon fall asleep. Computers aren&#8217;t competing after a divorce, or an illness, or the loss of a parent. Humans have character traits that spill onto the board, such as aggression (or passivity); patience (or impatience); equanimity (or volatility); and resilience (or fragility). Winning and losing have emotional resonance for a human&#8212;but not for AlphaZero.</p><p>It&#8217;s these qualities that guard against AI advance. 
AI might gobble up some of our jobs; even human-authored articles like this one may become rarer. But AI won&#8217;t take our chess.</p>]]></content:encoded></item><item><title><![CDATA[The Past and Future of AI Standards]]></title><description><![CDATA[Lessons from history]]></description><link>https://www.aipolicyperspectives.com/p/the-past-and-future-of-ai-standards</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/the-past-and-future-of-ai-standards</guid><dc:creator><![CDATA[Conor Griffin]]></dc:creator><pubDate>Tue, 17 Mar 2026 10:17:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!sMbF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!sMbF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sMbF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sMbF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sMbF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Source: Gemini </figcaption></figure></div><p><strong>By Conor Griffin, Joslyn Barnhart &amp; Owen Larter</strong></p><p>In 1971, the marine archaeologist Honor Frost heard news of wood protruding from the sea floor. Off the western coast of Sicily, she and her team donned scuba gear and splashed into the shallow coastal waters. Wind whipped the surface, causing the underwater sand to swirl confusingly. But even in murk, they couldn&#8217;t miss it.</p><p>&#8220;A large timber (such as I had never seen before) emerged,&#8221; she <a href="https://artsandculture.google.com/story/the-discovery-of-the-marsala-punic-ship-honor-frost-foundation/2QWhIN7Uu9SK-Q?hl=en">recalled</a>, &#8220;like the head of a primeval animal crowned with weed; the presence of a buried wreck was evident.&#8221;</p><p>They excavated for months, gradually exposing the remains of a Carthaginian warship sunk more than 2,000 years before. 
Somehow, saltwater hadn&#8217;t eaten away letters painted on the wreckage, revealing a humble system that links antiquity to tomorrow.</p><p>Those shipwrights&#8217; marks told workers in ancient Carthage how to put together a vessel&#8212;akin to flat-pack furniture from IKEA, with numbered and lettered pieces. They were among the earliest surviving examples of a simple but potent tool in human progress: <strong>the technological standard</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ydHP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ydHP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 424w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 848w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 1272w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!ydHP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png" width="1456" height="817" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:817,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ydHP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 424w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 848w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 1272w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: The Honor Frost Archive (MS439), University of Southampton.</figcaption></figure></div><p>Many of history&#8217;s grand projects have benefited from standards, from Egypt&#8217;s pyramids, to Europe&#8217;s cathedrals, to Gutenberg&#8217;s press, to everyone&#8217;s Internet. You can even thank standards for the development of beer.</p><p>Underpinning technological standards is a plain truth: people thrive when able to cooperate, not when we must keep negotiating the basics, whether it&#8217;s a matter of nuclear safety, or a phone-charger cord, or who goes next at the intersection. So, the goal is order. 
And the benefits are that innovators can proceed without excessive obstacles, while everyone else is treated fairly and kept safe.</p><p>But what should standards mean for artificial intelligence? In particular, how can they guide the most advanced large language models and AI agents that could transform society?</p><p>Venture around the AI frontier today, and you&#8217;ll find ambition to accelerate AI for economic growth and transformative science alongside concern that AI could clatter into what humans cherish most. What few dispute is this: standards will help set the path.</p><p>Standards have critics too. One criticism is that companies dominate the process, prioritizing their own products or miming security without truly ensuring it. Besides this, standards can stir geopolitical tensions, as when Western countries fear China&#8217;s influence in laying the path to tomorrow, while smaller nations worry that standards may be set without considering them at all.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pzRg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pzRg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 424w, https://substackcdn.com/image/fetch/$s_!pzRg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 848w, 
https://substackcdn.com/image/fetch/$s_!pzRg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 1272w, https://substackcdn.com/image/fetch/$s_!pzRg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pzRg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png" width="1024" height="702" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:702,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pzRg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 424w, https://substackcdn.com/image/fetch/$s_!pzRg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 848w, 
https://substackcdn.com/image/fetch/$s_!pzRg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 1272w, https://substackcdn.com/image/fetch/$s_!pzRg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Library of Congress</figcaption></figure></div><p>So, as we&#8217;ll keep insisting, standards matter! 
Only, there&#8217;s a problem.</p><p>For some, the mere mention of &#8220;standards&#8221; prompts slumber. And even those determined to stay awake may find themselves puzzled, gazing at the alphabet soup of standards organizations and committee meetings.</p><p>Part of the problem is that standards are often technical, such as efforts to standardize the protocols needed for AI agents to communicate. Or they are bureaucratic, negotiated out of public view, with dense, jargon-filled documents that are often behind a paywall.</p><p>Complicating matters even more, artificial intelligence is a general-purpose technology less akin to a hammer than to electricity. This will lead to standards (plus standards initiatives that don&#8217;t take) on everything from AI agents, to AI cybersecurity, to AI content provenance, to product-specific standards for AI-as-a-medical-device, and so on. And that&#8217;s not even mentioning standards for future AI applications that nobody has yet considered.</p><p>In short, standards will be immense. Standards will be tough to comprehend. But standards will also be vastly important.</p><h3>CAN SOMEONE DEFINE STANDARDS, PLEASE?</h3><p>Standards are a diabolical blend: intricate, vague, and slippery.</p><p>They&#8217;re the invisible infrastructure of the modern world, according to Laurie Locascio, head of the American National Standards Institute, ANSI.
She <a href="https://issues.org/who-sets-the-standard/?utm_campaign=34324386-Issues.org%20Newsletter&amp;utm_medium=email&amp;_hsenc=p2ANqtz-8wSj3J1qBqFPUcl9Fm0w3x0q01NijGR1b6Q05pK0a-5thOpaTOJHNf0vNbWaeBtAj68a6crdWY36mRoUGczN_ZIIpEaw&amp;_hsmi=404588886&amp;utm_content=404588886&amp;utm_source=hs_email">recounts</a> hearing an official at Boeing describe the airplane itself as &#8220;thousands of standards taking flight.&#8221; Standards are &#8220;the things you don&#8217;t think about,&#8221; Locascio says. &#8220;But oh, my God, you&#8217;re so glad they&#8217;re there.&#8221;</p><p>Expressed broadly, a standard defines the <em>how</em> of tech, whether it&#8217;s the default <em>product </em>specs that allow compatibility among manufacturers, or the formally endorsed risk management <em>processes</em> that encourage industry to act responsibly.</p><p>As technology evolves, standards do too. A leading scholar, Ken Krechmer, once <a href="https://web.njit.edu/~bieber/WWW-Standards-F01/krechmer96.pdf">noted</a> that standards initially defined how physical objects fit together (as with those markings on the Carthaginian warship). Over time, standards came to define the relationship <em>between</em> technological objects (as with internet protocols).</p><p>A standard also builds on other forms of guidance, such as norms, principles and industry best practices. Unlike norms, standards should be explicit. Unlike aspirational principles, a standard should be specific enough for performance against it to be judged. Unlike early best practices, a standard should have clear buy-in.</p><p>Developing a standard can be a protracted endeavor. In some cases, it might start in a researcher&#8217;s notebook, evolving into a product or a practice that gains traction in the marketplace. At other times, institutions set standards via years of deliberations and meticulous documents.
Most often, it&#8217;s a messy back-and-forth between standards that emerge <em>in practice</em> and <em>on paper</em>. This makes standards a source of tension among companies, governments, and independent advocates, all trying to set the technological future they consider best.</p><p>Some presume that laws, not standards, should define permitted behavior. But high-quality legislation can struggle to keep up with the frantic speed of AI progress. And when laws are passed, they may rely on standards for implementation, as with the EU AI Act.</p><p>So how to persuade everyone to care when encountering standards, rather than just to snore or sob? How to get policy leaders to ponder the <em>entirety</em> of frontier-AI standards and align on where action is most needed?</p><p>Our answer is storytelling: to pluck forth tales about past standards, illustrating what this technological shaping can achieve, where it goes wrong, and how we might help cultivate standards wise enough to manage the breadth and speed of AI.</p><p>Our first stop? A battlefield of centuries ago.</p><h3>A &#8216;STANDARD&#8217; HISTORY</h3><p>Horrors encircled the boy soldier: swords clanging under the rain, excruciating howls of the wounded, the fast-approaching bellows of men hurtling across the bog to murder him. In wet turf, he shivered from knees to chattering teeth, his mouth parched, his gaze searching for any escape.</p><p>Up there?</p><p>On a hill, a flag rippled, where his legion had marked its territory. The Old French word for that banner was &#8220;<em>estandart</em>&#8221;: a sign of firmness and stability, a marker of where to go next, a statement of order amid chaos.
To such banners, we owe the word &#8220;<a href="https://www.oed.com/dictionary/standard_n?tab=factsheet">standard</a>.&#8221;</p><p>More than a few historical standards emerged from war, where disorder could mean one&#8217;s brethren murdered, while coordination could mean an empire.</p><p><strong>~225 BCE to the dawn of mass production</strong></p><p>China&#8217;s first emperor, Qin Shi Huang, led an extensive <a href="https://www.google.com/books/edition/_/1OiMzAEACAAJ?hl=en&amp;kptab=overview">standardization process </a>that included mass-produced crossbow parts. If parts of a soldier&#8217;s weapon broke in the midst of battle, he could grab spares, and swap them in.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!37af!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!37af!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 424w, https://substackcdn.com/image/fetch/$s_!37af!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 848w, https://substackcdn.com/image/fetch/$s_!37af!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!37af!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!37af!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg" width="1456" height="958" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:958,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!37af!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 424w, https://substackcdn.com/image/fetch/$s_!37af!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 848w, https://substackcdn.com/image/fetch/$s_!37af!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!37af!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">A Qin crossbow, displayed at Shaanxi History Museum, Xi&#8217;an. 
(Credit: WorldHistoryPics.com)</figcaption></figure></div><p>When Ancient Rome fought for primacy in the Mediterranean, its forces copied Carthaginian ship designs, eventually triumphing with <a href="https://books.google.com/books/about/The_Fall_of_Carthage.html?id=u684AgAAQBAJ&amp;source=kp_book_description&amp;redir_esc=y">standardized ships</a> of their own, along with standardized tools and <a href="https://artsandculture.google.com/story/roman-engineering/hgXhQHkIAE5q2g?hl=en">camp layouts</a>, all of which simplified maintenance and large-scale coordination.</p><p>Another advance in ancient times came from standardized measurements for length, volume, and weight. Previously, cultures often had distinct units; you can imagine the squabbling. But as trade expanded, standards prevailed, making cross-cultural exchange possible. In ancient Egypt, one of the earliest and most influential standards was the <a href="https://onlinelibrary.wiley.com/doi/10.1155/2014/489757">cubit</a>, a unit of length used to coordinate the building of the pyramids.</p><p>In Europe&#8217;s medieval period, <a href="https://books.google.com/books/about/The_European_Guilds.html?id=BrEPEAAAQBAJ&amp;source=kp_book_description&amp;redir_esc=y">guilds</a> established standards for quality control, so that weavers might set the necessary thread count or width of cloth, preventing low-quality products from undermining a craft&#8217;s reputation. Guilds also played a protectionist role, with licensing standards imposing strict controls on who could become a member.</p><p>The consumer might benefit from standards too, with measures such as England&#8217;s <a href="https://ifst.onlinelibrary.wiley.com/doi/10.1002/fsat.3801_5.x">Assize of Bread and Ale of 1266</a> establishing the acceptable quality, quantity, and price of baked goods and beer. 
Later, Gutenberg&#8217;s <a href="https://hob.gseis.ucla.edu/HoBCoursebook_Ch_5.html">standardized press</a> led to mass-produced books that spread ideas across the Continent.</p><p>However, technological standards reached new heights of utility during the Industrial Revolution, which set the foundations for many of today&#8217;s technologies.</p><p><strong>1760-1840: The First Industrial Revolution &#8212; The rise of engineers</strong></p><p>As the ancient Chinese and Carthaginians had discovered long before, the Industrial Revolution&#8217;s manufacturers found that interchangeable parts offered transformative efficiency. Before, if you hand-built a musket, or a clock, or a steam engine, you might craft each screw, each bolt, each gear to fit. By contrast, interchangeability allowed for mass production, cutting costs, reducing errors, and establishing the basis for modern industry.</p><p>Screw threads are a classic example. Before standards, manufacturers used various designs, making repairs nightmarish. If you had one company&#8217;s bolt but another company&#8217;s nut, you were out of luck. In the 1800s, engineers built the first practical screw-cutting machines, allowing factories to produce <a href="https://en.wikisource.org/wiki/Miscellaneous_Papers_on_Mechanical_Subjects/A_Paper_on_an_Uniform_System_of_Screw_Threads">uniform threads</a> to a consistent system of measurement. The British Standard Whitworth, introduced in 1841, became the world&#8217;s first national screw-thread standard.</p><p>Screw-thread standards may not quicken your pulse. But their effects might.
They played a part in British imperial ambitions, contributing to the expansion and maintenance of the British Empire through <a href="https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1095-9270.2004.00028.x">military</a> mobilization.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TPXx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TPXx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TPXx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TPXx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: NotebookLM</figcaption></figure></div><p><strong>1870-1914:</strong> <strong>The Second Industrial Revolution &#8212; National coordination &amp; path dependence</strong></p><p>The emergence of electricity, steel, and advanced machinery led to vast interconnected systems, including power grids, railways, and telegraph networks. Coordination wasn&#8217;t merely better; it was essential. To coordinate across a nation&#8212;and eventually across borders&#8212;the ambitious country needed technology standards. Two famed cases illustrate this, one successful, one bungled.</p><p>The success regards the quintessential technology of the times: railroads. By the 1870s, the U.S. rail system was a mess, with more than 20 different track gauges. 
When a train reached a section built to a different track-gauge width, everything&#8212;each passenger, piece of luggage, every single crate&#8212;had to be unloaded, and transferred to a new train.</p><p>By the 1880s, matters had become slightly less chaotic, with either a southern gauge or a northern &#8220;standard&#8221; gauge used across most of the country. Yet this still divided national transport until, in 1886, rail companies pulled off a <a href="https://dash.harvard.edu/entities/publication/73120379-10c3-6bd4-e053-0100007fdf3b">remarkable feat</a>. Over two days, they converted 13,000 miles (that&#8217;s 21,000 kilometers) of southern U.S. track to the northern standard, integrating the national transportation network. When trains rolled out on June 2, 1886, they were able to travel seamlessly across the United States for the first time in history.</p><p>A second case illustrates bungled standards. In the 1880s, the rival inventors Thomas Edison and Nikola Tesla found themselves at the center of &#8220;<a href="https://books.google.com/books?id=2_58p3Z69bIC&amp;source=gbs_book_other_versions&amp;redir_esc=y">the War of the Currents.</a>&#8221; Edison championed direct current (DC), a one-directional flow of electricity that had been the early U.S. standard.
Tesla, backed by entrepreneur industrialist George Westinghouse, advocated alternating current (AC), or electricity that reverses direction many times per second, and can be stepped up or down in voltage with a transformer.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!assm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!assm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 424w, https://substackcdn.com/image/fetch/$s_!assm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 848w, https://substackcdn.com/image/fetch/$s_!assm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 1272w, https://substackcdn.com/image/fetch/$s_!assm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!assm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png" width="1024" height="535" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/af132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:535,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!assm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 424w, https://substackcdn.com/image/fetch/$s_!assm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 848w, https://substackcdn.com/image/fetch/$s_!assm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 1272w, https://substackcdn.com/image/fetch/$s_!assm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Gemini</figcaption></figure></div><p>From an engineering standpoint, AC had a decisive advantage: it could transmit power over long distances cheaply and efficiently, while DC could not. AC eventually won out. But by the time it had emerged as the superior solution, the world had already built electrical systems without any coordinated technical governance. As there was no international authority harmonizing electrical standards, the United States went with 120 volts at 60 hertz (a legacy of Edison&#8217;s early low-voltage DC networks). Much of the rest of the world adopted 230 volts at 50 hertz.</p><p>Once wires had been laid and appliances built, the world was locked into two incompatible systems. To this day, we&#8217;re burning out hair dryers bought in America but used in Paris, or realizing too late that we don&#8217;t have the right <a href="https://www.iec.ch/world-plugs">plug</a> for our laptops. 
If it&#8217;s irksome for the average user, it&#8217;s more burdensome for manufacturers, obliging them to build different versions for different countries.</p><p>Another classic tale of path dependence is under our fingertips as we type: the QWERTY keyboard. Why <em>does </em>the top row spell QWERTYUIOP? One account goes like this: In the mid-to-late 1800s, early typewriters jammed each time the user struck neighboring keys in rapid succession. So designers produced a <a href="https://patents.google.com/patent/US182511A/en">layout</a> that deliberately distanced many common letter pairs. Remington purchased this QWERTY design and began mass-producing typewriters.</p><p>Before long, typing schools had trained the future secretarial workforce on QWERTY, while firms wanting fleet-fingered staff had to buy those machines. Manufacturers subsequently resolved the key-jamming problem, and other keyboards <a href="https://www.smithsonianmag.com/history/the-qwerty-keyboard-will-never-die-where-did-the-150-year-old-design-come-from-49863249/">tried to</a> depose QWERTY, some <a href="https://fbaum.unc.edu/teaching/articles/David_AER_1985.pdf">claiming</a> to quicken typing by as much as 40%. But QWERTY had become a de facto standard. (Scholars continue to <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1069950">debate</a> the specifics, with some arguing that QWERTY works just fine.)</p><p>In any case, the indisputable lesson is to watch for <a href="https://www-2.rotman.utoronto.ca/insightshub/behavioural-economics-marketing/beware-path-dependence">path dependence</a>.
The standards we establish for frontier AI today&#8212;or fail to establish&#8212;may determine future efficiency or future failure.</p><h3><strong>1914-1964: Standards Development Organizations &amp; Digital Technology</strong></h3><p>In 1918, engineering societies joined with the U.S. government to establish a standards committee that developed into <a href="https://www.ansi.org/about/history">ANSI</a>, the American National Standards Institute. Today, ANSI provides the &#8220;stamp of approval&#8221; for many U.S. standards organizations, including those working on AI. In subsequent decades, standardization went global. While the United Nations was founded as a governmental venue for diplomacy, the International Organization for Standardization, <a href="https://www.iso.org/news/2017/02/Ref2163.html">ISO</a>, emerged as a non-governmental body for peaceful technical coordination across borders. Bit by bit, additional standards bodies formed, cooking up the alphabet soup of acronyms&#8212;each a different org, subgroup, or committee&#8212;that lies before us today.</p><p>Soon, another transformation for standards was taking shape in the form of digital tech. Back then, computers filled entire rooms at universities, and each manufacturer built hardware and software in its own proprietary format. Computers could not run programs written for other systems, and accessories like printers or storage devices were incompatible.</p><p>A turning point came in 1964, with <a href="https://www.ibm.com/history/system-360">IBM&#8217;s System/360</a>. Software on one model could more easily run on another; accessories like printers worked across IBM models.
You could upgrade and expand computer systems with relative ease.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zNAA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zNAA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 424w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 848w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 1272w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zNAA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png" width="1024" height="806" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:806,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zNAA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 424w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 848w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 1272w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: U.S. National Archives and Records Administration</figcaption></figure></div><p><strong>1969-today: Talking Machines</strong></p><p>The year that hippies grooved at Woodstock and astronauts walked on the Moon, the U.S. Defense Department was testing a project that must have seemed minor by comparison: connecting research institutions and government agencies. 
Yet the Advanced Research Projects Agency Network, ARPANET, which sent its first message in 1969, was the precursor to our transformed world.</p><p>Before ARPANET, <a href="https://artsandculture.google.com/story/from-punch-cards-to-the-cloud-museum-for-communication-frankfurt/VQXBq16p7orTYw?hl=en">moving information</a> from one computer to another was a struggle, with researchers forced to carry magnetic tapes or punched cards between locations, while those working far apart had to rely on snail-mail.</p><p>To convey information between independent systems, ARPANET adopted packet switching, breaking data into small units that could travel independently and reassemble at their destination. Extending this, Robert Kahn and Vint Cerf began designing a <a href="https://ieeexplore.ieee.org/document/1092259">universal communication framework</a> in 1973 for different types of networks to connect. Their collaboration ultimately produced <a href="https://cloud.google.com/blog/topics/public-sector/50-years-internet-celebrating-vision-vint-cerf-and-bob-kahn-and-exploring-future-connectivity-and-innovation">TCP/IP</a>, the Transmission Control Protocol and Internet Protocol that underpins today&#8217;s online communication.</p><p>A key effect of the TCP/IP standard was decentralization: no single authority could control the flow of data, and any network that adhered to the protocol could connect without permission from central authorities.</p><p>In 1989, a British scientist at CERN, Tim Berners-Lee, <a href="https://www.w3.org/History/1989/proposal.html">proposed</a> another transformation that developed into a project called &#8220;<a href="https://docdrop.org/download_annotation_doc/Tim-Berners-Lee---Weaving-the-Web_-The-Original-Design-and-U-88myd.pdf">WorldWideWeb</a>,&#8221; which envisioned a global <a href="https://home.cern/science/computing/birth-web/short-history-web">network</a> of documents accessible through software, operating on <a 
href="https://timeline.web.cern.ch/cern-puts-world-wide-web-public-domain">open standards</a> so that nobody could lock it into a proprietary system. Two standards organizations, the Internet Engineering Task Force and the World Wide Web Consortium, helped to formalize the vision, crafting standards for structuring content (HTML), transferring data (HTTP), identifying resources (URI), and more.</p><p>But while standards help spread technology, this diffusion can also lead to greater harm. The expansion of railroads led to more wrecks, forcing uptake of safety standards for signaling, brakes and more. When electricity was first installed in the White House in the late 19th century, President Benjamin Harrison and his wife Caroline <a href="https://www.energy.gov/articles/history-electricity-white-house">were so afraid</a> of shocks that they refused to turn the lights off. Such fears&#8212;often well justified&#8212;led to the standardization of building and electrical codes. When it came to digital technology, the risks extended beyond immediate physical safety into areas like data theft. This demanded standards such as <a href="https://www.ssl.com/article/what-is-ssl-tls-an-in-depth-guide/">SSL/TLS</a> to provide security for data sent over computer networks.</p><p>A <a href="https://en.wikipedia.org/wiki/Collingridge_dilemma#:~:text=The%20Collingridge%20dilemma%20is%20a,extensively%20developed%20and%20widely%20used.">recurrent challenge</a> with frontier tech is that experts struggle to predict how exactly it will affect society. But once a technology is widely used, it can be sticky and hard to change. The effects of powerful technologies can also be subtle, indirect and slow-burning, for example if they change how we access and consume information.
In the digital era, this has shifted technological standards from periodic safety checks of products towards ongoing <em>processes</em> that organizations can use to identify, evaluate and mitigate a growing suite of risks.</p><p>By way of example, the U.S. government&#8217;s National Institute of Standards and Technology, NIST, introduced the voluntary <a href="https://www.nist.gov/itl/ai-risk-management-framework">AI Risk Management Framework</a> in 2023, building on its earlier framework for managing cybersecurity risks. Likewise, the <a href="https://www.iso.org/committee/6794475.html">ISO/IEC committee on AI</a>, which is considering <a href="https://www.safer-ai.org/an-overview-of-existing-and-potential-future-genai-gpai-standards">standards</a> on everything from red-teaming to LLM interoperability, also passed the first official international AI management standard, <a href="https://www.iso.org/standard/42001">ISO/IEC 42001</a>, which organizations can use to demonstrate that they are responsibly integrating AI into their operations.</p><h3>5 LESSONS FROM HISTORY</h3><p>Studying the past, you see how often standards&#8212;by design or bumbling&#8212;have shaped the technological present. But what about our technological future?</p><p>To develop good standards for general-purpose AI models and agents, we&#8217;ll need inputs from a range of groups, from scientists with know-how to institutions that can convene. Below [see infographic], we have identified five groups who&#8217;ll perform key roles.</p><p>What we mapped includes more than just official standards development organizations.
We also want to capture the early spaces where standards emerge <em>in practice </em>before they are formalized <em>on paper. </em>How this works is closer to a swirl of inputs than a steady procession. Sometimes, the same organization or individual may operate in several groups at the same time. Ideas and efforts may also originate in one group, then migrate to another, with different groups offering varying degrees of speed, flexibility, expertise, and perceived neutrality.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-Ue2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-Ue2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!-Ue2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-Ue2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">NotebookLM</figcaption></figure></div><p>For these groups&#8212;and the policymakers, business leaders, and advocates who shape their work&#8212;what lessons can history teach about our AI future? Here are five:</p><ol><li><p><strong>Standards matter! </strong>At best, technological standards chart a wise path; at worst, they fill the path with potholes. Consider the bulky electrical converters that one still needs when traveling&#8212;it didn&#8217;t have to be that way. On the other hand, when we get it right, the benefits of technology spread faster, more inclusively, and more securely.</p><p></p></li><li><p><strong>The standards process needs to speed up. 
</strong>ISO says that the <a href="https://www.iso.org/developing-standards.html">average time</a> to develop one of its standards is three years, and ISO is not an outlier. Given the pace of change in AI, that is too slow. For priority goals, like finding secure ways for agents to operate and interact, which the U.S. Center for AI Standards and Innovation <a href="https://www.nist.gov/caisi/ai-agent-standards-initiative">is working on</a>, we need ways to accelerate that don&#8217;t jeopardize the overall quality and integrity of the process. This may mean looking across the many groups now focusing on AI standards and finding ways to collaborate early, rather than duplicate. It may mean focusing more on technical protocols and <a href="https://scc-ccn.ca/standards/flexible-standards-based-solutions/publicly-available-specification">specification documents</a> that are quicker to develop. It may also mean using AI to <a href="https://www.w3.org/community/aiwss/">help deliberate on and write standards</a>, and moving to more <a href="https://www.iso.org/smart">nimble digital formats</a> that are easier to update and use.</p></li></ol><ol start="3"><li><p><strong>We need more efficient ways to provide input on standards. </strong>All standards, from those underpinning steam engines to the Internet, had to chart a unified path through diverging viewpoints, with an end result that did not please everyone. For AI, the challenge will be far greater. AI is more akin to 1,000 technologies than one, and will affect different groups in different ways. This means that any broad directive&#8212;say, to &#8220;develop standards that make AI fair&#8221;&#8212;risks an <a href="https://www.nytimes.com/2023/04/02/opinion/democrats-liberalism.html">everything-bagel solution</a>. Many groups would rightly be heard, but the output would be too vague to provide the &#8220;how&#8221; that justifies a standard, leading to confusion, a stifling of innovation, or the standard being ignored.
This suggests that most standards should be precise in scope, targeting specific components of AI systems or specific concerns, from certifying <a href="https://spec.c2pa.org/specifications/specifications/2.3/index.html">the source and history of online content</a> to combating the leaking of confidential data. More precise standards will make it easier to identify a wider range of relevant voices and incorporate their input.</p></li></ol><ol start="4"><li><p><strong>Frontier AI standards should focus on large-scale risks. </strong>Historically, standards have accelerated the diffusion of technology, amplifying its benefits but also, in places, its negative impacts. For AI, foresight and risk management standards will be critical to getting ahead of future risks and speeding adoption. But with a technology as general-purpose, fast-improving, and poorly understood as AI, perfect foresight is impossible. Standards move at a human pace and cannot standardize a future that we cannot perfectly see. As a result, the focus should be on developing scientifically robust standards to address the most consequential or large-scale risks, such as those targeted by labs&#8217; <a href="https://deepmind.google/blog/strengthening-our-frontier-safety-framework/">Frontier Safety Frameworks</a>.</p></li><li><p><strong>Wrong paths are inevitable, so we should catch them early. </strong>Now and then, technology stumbles into a poor standard, and it&#8217;s onerous to go back. But not necessarily impossible, especially if we catch it early. Consider the U.S. railroads taking action to unify their systems through a mighty coordinated effort. Groups working on AI standards devote much time to building consensus about new initiatives. They should also use the processes available to them to review and withdraw standards, where needed, to avoid sub-optimal lock-in.
This also means giving third parties more opportunities to access, understand and constructively critique early AI standards, and designing standards and protocols that are modular and can be swapped out or updated without major downstream consequences.</p></li></ol><div><hr></div><h2>QUESTIONS FOR YOU</h2><ol><li><p>Where do you feel most hope for frontier AI standards?</p></li><li><p>Where do you worry about a lack of progress on frontier AI standards?</p></li><li><p>When you imagine a missing standard for frontier AI, what is it? A technical protocol specified in code? Or a fuzzier process-standard?</p></li><li><p>Might your standard become politicized? Is it something that hinges on values? Or might most governments in the world support its adoption?</p></li><li><p>What&#8217;s a scenario in which your proposed standard goes awry? How could you detect and mitigate that?</p></li><li><p>What would be the primary role of government in your standard? Supplying technical expertise? Convening authorities and experts? Incentivizing your standard via public procurement, regulation, or other methods?</p></li></ol><p><em>Thank you to Shaked Karabelnicoff, Tom Rachman and Bruno Galizzi for support with research and review. As with all pieces you read here, this is written in a personal capacity.
All opinions and any mistakes belong to the authors.</em> </p>]]></content:encoded></item><item><title><![CDATA[The Human Demotion]]></title><description><![CDATA[Science has humbled us before. Will AI deliver another blow?]]></description><link>https://www.aipolicyperspectives.com/p/the-human-demotion</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/the-human-demotion</guid><dc:creator><![CDATA[Tom Rachman]]></dc:creator><pubDate>Wed, 11 Feb 2026 12:41:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!j5wI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!j5wI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!j5wI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 424w, https://substackcdn.com/image/fetch/$s_!j5wI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 848w, https://substackcdn.com/image/fetch/$s_!j5wI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!j5wI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!j5wI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!j5wI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 424w, https://substackcdn.com/image/fetch/$s_!j5wI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 848w, https://substackcdn.com/image/fetch/$s_!j5wI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!j5wI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">(All images: Gemini)</figcaption></figure></div><p><strong>After millennia of supremacy, we await our demotion. 
You can detect the trembling.</strong> </p><p>It&#8217;s found in the anxious insistence that artificial intelligence isn&#8217;t <em>truly </em><a href="https://mindmatters.ai/2025/09/surprise-artificial-intelligence-is-still-just-automation/">intelligent</a>. Or that using AI is a <a href="https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html">cheat</a>, a <a href="https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art">perversity</a>, a <a href="https://www.theredhandfiles.com/chat-gpt-what-do-you-think/">turf violation</a>.</p><p>The trembling intensifies with a disturbing thought: What if those flares behind your eyes&#8212;the bursts of wit and the worry, the storyboards of memory, so many yearnings&#8212;what if everything was just computation? Because our &#8220;computers&#8221; are yesterday&#8217;s model, no updates available.</p><p>&#8220;I think about it practically all the time, every single day. And it overwhelms me and depresses me in a way that I haven&#8217;t been depressed for a very long time,&#8221; the cognitive scientist Douglas Hofstadter <a href="https://www.youtube.com/watch?v=lfXxzAVtdpU&amp;t=1892s">said</a> recently. For much of his professional life, Hofstadter has contemplated the mind, writing a seminal 1979 book&#8212;<em>G&#246;del, Escher, Bach</em>&#8212;that looped through art, mathematics, and computation, inspiring a generation of nerds to work on artificial intelligence.</p><p>Their efforts moved faster than Hofstadter ever expected. Now, he spends his waning years observing the species wince toward redundancy. &#8220;I don&#8217;t want to say &#8216;deserving of being eclipsed.&#8217; But it almost feels that way,&#8221; he says. &#8220;And rightly so, because we&#8217;re so imperfect, and so fallible.&#8221;</p><p>When humiliated, people corrode or explode. Often, both. But whom to blame? Will humans seek revenge on software, or data centers, or robots? 
We&#8217;ll depend on them all. More likely is that humans visit their wrath upon each other.</p><p>Freud <a href="https://www.freud.org.uk/2001/02/12/the-human-genome/">said</a> that science had delivered a series of blows to our collective ego, and an update to his narrative has <a href="https://download.ssrn.com/21/05/12/ssrn_id3844367_code2644503.pdf?response-content-disposition=inline&amp;X-Amz-Security-Token=IQoJb3JpZ2luX2VjEMH%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJIMEYCIQDaYAM0RODhccN23ayTbImynDa27dJ%2FUnH8nJbIrOp2swIhAJLjhdVyheVSFujUQ0TikdFmgrl3D6Rpt%2BsSzKgpmq7IKscFCIr%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEQBBoMMzA4NDc1MzAxMjU3Igyi5OwpajL4IXcJghMqmwXEj2pxhii6wvasYJIwNxxDpJMox1rT8effbRWPuDv5ZcpY8TtkZ7QwL%2FumtK8RVCPJTtMl%2FcKtGwGiS9Niejg7JGlSpmoRmcEHQ0NyEh%2BUoVY50LbReIMqQIHQvgjkLj1OuhJoYUhZOHwPcNQ%2Bl58yfzNll%2Fso9kHUqvASQ1%2FMD37fws2tomaJTp0E8KopJyNocrguj2svJkGic8XwadwY8UlhA1F1mmehzg503sCrrZ9NTPwvuGZPr5e1h8mHUu7Gz9GwDLUlx8Z1Ng%2BLU%2BrWXkUb3CuirmcwoFSqGb6b%2FsonPvBUTllcwarkkQacnbz%2F7UHgRqUT6batCkSL2H1QhNpXOJKUmNGxvB47J6%2BPli5qX6zGx%2FFEEOH3ErS2KMe7ORcNupX30A%2FFqvhLPffwzhXO1gSk%2F4%2FDdb6CTNIoWU92QRxq%2BGlGJgpOB6%2BZ5R%2ByROSa%2F5cQ1S3a3l6TX1Q9LtqxbPbrbN%2B7yVv6BITaKoiCAEk69BT037ngmq%2FUO2CvKT6MqQh5tYVmA3ABrShgztvDlR6ZljE6cVLgvOLH5VYaKVQ%2FiSshV%2BFU%2BGfnvwcmWrZTiXQAxS9InV57dGMEEr0fwcF2PTOLku9K8X%2Bp6vU7IBKTerlDjc8ZbSDil%2F%2FQ8bJH3dIxuVdr%2FiZPrHvLw%2BSnnog%2FNtWF42XI9NW%2Bumn6WaYTfqRdmn6wyBA410GKJAlhFs02HV2xcUuQBV33NA%2Fp%2FFw6gkVHOAv2xyB%2BMrBq%2BRvQyvxwktucPGpd7l26hlZBY3JHMXSg2ZMbg%2BgoLrSadIGUHjblzbY3gE%2ByQJFUiebdtppEPFFJ5izs8XwktIq44YMRhTAgGMWtGzFdmezlraE2ydU1UnVmTdB5twwci6bUecSv6LAkMImc7ssGOrABdOHt%2BCKOKnMS4ZGq1Ec5n9tdDLuSTuynkoAWlYcnMVFhcTUjrBBsscc8uWOD82SNVVlZCGwILUqnRaLibQU%2Fd0ZAKjMXNQEosjaIeHsdpR0MZy7Fy8behLQ%2BfsVZMhjnHC%2B8g9dNbI7SjvJOk6LMsRroh504G0Czd%2Fne%2FyZms0e0eyl835YnhRaZ9MoF20kmGWNAY2HiFW%2FTotFPzn2a%2Bjj07DzcSI2US%2BaSwKT2hEg%3D&amp;X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Date=20260129T173818Z&amp;X-Amz-SignedHeaders=host&a
mp;X-Amz-Expires=300&amp;X-Amz-Credential=ASIAUPUUPRWERQYNFSWW%2F20260129%2Fus-east-1%2Fs3%2Faws4_request&amp;X-Amz-Signature=2dc3d4272e8c5bce2ec492d9759cda2d956f817628594e03776a3fbd7d9d1059&amp;abstractId=3844367">bubbled up</a> in recent years, with thinkers <a href="https://www.persuasion.community/p/the-third-humbling-of-humanity">proposing</a> AI as our new humbling. The first blow was Copernicus, revealing that humans were not the center of the universe. The second blow was Darwin, downgrading us from God&#8217;s chosen species to distant relatives of the toad, the centipede, and the hammer-headed bat.</p><p>Now comes the cognitive humiliation, when people are eliminated from every leaderboard. It&#8217;s a demotion that may haunt humanity, perhaps seeping into future conflicts.</p><p>Or maybe not. Maybe the notion of a species-level humiliation is just psychoanalytic melodrama. After all, people don&#8217;t share an ego. How could we synchronously plunge into the same bile?</p><p>Yet the past shows that groups <em>can </em>rage over perceived humiliation. 
History is spattered with such cases.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!t22x!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!t22x!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!t22x!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 848w, https://substackcdn.com/image/fetch/$s_!t22x!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!t22x!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!t22x!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!t22x!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!t22x!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 848w, https://substackcdn.com/image/fetch/$s_!t22x!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!t22x!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>What <em>Is</em> Humiliation?</h3><p>Your face shoved into the dirt, held there for all to see, no power to fight back. The word &#8220;humiliation&#8221; <a href="https://www.etymonline.com/word/humiliation">comes</a> from the Latin for &#8220;earth,&#8221; as if your status had been stamped into the soil. Yet humiliation is not so readily rinsed away as dirt. In self-torture, the humiliated cast around for villains, aching for a way to expiate their anguish.</p><p>&#8220;To have thoughts of revenge without the strength or courage to execute them means to endure a chronic suffering, a poisoning of body and soul,&#8221; Nietzsche <a href="https://en.wikipedia.org/wiki/Human,_All_Too_Human">observed</a>, adding elsewhere that &#8220;we attack not only to hurt a person, to conquer him, but also, perhaps, simply to become aware of our own strength.&#8221;</p><p>For early humans, humiliation may have meant catastrophic exclusion from the tribe, leading to starvation, rejection by mates, violent predation. 
So, we evolved a panicked drive to clamber up from the ground, even if it meant pulling down another person in our place.</p><p>As Joslyn Barnhart <a href="https://www.jstor.org/stable/10.7591/j.ctvq2w1b8">explains</a> in <em>The Consequences of Humiliation: Anger and Status in World Politics,</em> &#8220;Humiliated states often seek to overcome their sense of helplessness by demonstrating efficacy through acts of aggression targeting third-party states that played no role in the original humiliating event.&#8221;</p><p>Hitler howled about German <a href="https://www.ibiblio.org/pha/policy/1940/1940-07-19b.html">humiliation</a> in the World War I surrender, and destroyed half of Europe to seek recompense. Osama bin Laden triggered a global war because of perceived Western <a href="https://www.theguardian.com/world/2001/oct/07/afghanistan.terrorism15">humiliation</a> of the Islamic world. Putin bemoaned the &#8220;<a href="https://docs.un.org/en/S/2022/154">degradation</a>&#8221; of Russia at the hands of NATO after the Cold War to justify his 2022 invasion of Ukraine.</p><p>But those cases involved groups supposedly suffering disgrace at the hands of other groups. Could we feel humiliated by <em>technology</em>?</p><p>The first question is whether humans even identify as a species. The answer will probably fluctuate, given that we have many parts to our identity which become more or less salient according to context. Perhaps you identify by gender in a crowd of the opposite sex, but by your language when abroad. 
As Ronald Reagan once argued, a threat to all people could raise the salience of species identity.</p><p>&#8220;In our obsession with antagonisms of the moment, we often forget how much unites all the members of humanity,&#8221; the president <a href="https://www.reaganlibrary.gov/archives/speech/address-42d-session-united-nations-general-assembly-new-york-new-york">said</a>, in a 1987 speech at the United Nations. &#8220;I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world.&#8221;</p><p>Humanity did face an alien threat recently: Covid. And our differences did vanish&#8212;briefly. But human unity dissolved when the pandemic affected groups in varying ways. This suggests that human solidarity requires not just a common <em>threat </em>but common <em>consequences</em>.</p><p>In short, AI humiliation may depend on how uniformly our species is downgraded, and who is raised up.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EQEZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EQEZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!EQEZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 848w, 
https://substackcdn.com/image/fetch/$s_!EQEZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!EQEZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EQEZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EQEZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!EQEZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 848w, 
https://substackcdn.com/image/fetch/$s_!EQEZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!EQEZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>We, The Bottlenecks</h3><p>Few researchers are studying what AI success could do to our collective self-esteem. 
Hints come from economists feverishly forecasting impacts on the job market. But psychologists (and politicians) ought to forecast what happens when the only animal to create guns has nothing much to do anymore.</p><p>&#8220;I&#8217;ve been suffering from fits of dread,&#8221; the philosopher Harvey Lederman <a href="https://scottaaronson.blog/?p=9030">wrote</a> recently. &#8220;Does the coming automation of work foretell, as my fits seem to say, an irreparable loss of value in human life?&#8221; Lederman acknowledges that most jobs are lousy, but he can&#8217;t help grieving the demise of human pursuit. &#8220;We may be some of the last to enjoy this brief spell, before all exploration, all discovery, is done by fully automated sleds.&#8221;</p><p>When the philosopher Nick Bostrom envisaged troubling tech futures in his 2014 book <em><a href="https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies">Superintelligence</a></em>, his ideas stirred the AI-safety movement. Lately, he has shifted from weird dystopias to weird utopias&#8212;specifically, what happens if automation makes us redundant.</p><p>In a future of &#8220;shallow redundancy,&#8221; he says in his 2024 book <em><a href="https://books.google.co.uk/books/about/Deep_Utopia.html?id=Ylms0AEACAAJ&amp;source=kp_book_description&amp;redir_esc=y">Deep Utopia: Life and Meaning in a Solved World</a></em>, we become like aristocrats of yore, indulging in fancies, no longer dependent on what one <em>does</em> as a measure of what one is <em>worth</em>. Far more disconcerting is &#8220;deep redundancy,&#8221; when tech becomes so effective that human involvement only worsens each outcome.</p><p>Exercise might seem pointless if biotech offered a way to instantly make your body healthy and beautiful. Skipping the sweaty workout might not trouble you. 
But what if future humans would bungle child-rearing compared with AI nannies, meaning that nurturing your offspring would <em>worsen</em> your kid&#8217;s life?</p><p>Primitive versions of this dilemma are nearing, as human drivers already endanger lives compared with <a href="https://www.understandingai.org/p/very-few-of-waymos-most-serious-crashes">self-driving cars</a>. &#8220;Human in the loop&#8221; could flip from a safety promise to a threat. Meritocracy would mean that no humans need apply.</p><p>The bookworm economist Tyler Cowen cites people as the great obstacle to explosive AI growth. During a public event, he pointed at the audience, smiling toward the human &#8220;bottlenecks&#8221; before him. &#8220;Here they are: bottleneck, bottleneck. Hi, good to see you! And some of you are terrified. <em>You </em>are going to be even bigger bottlenecks,&#8221; he <a href="https://www.dwarkesh.com/p/tyler-cowen-4">said</a>. &#8220;But my goodness, once it starts changing what the world looks like, there will be much more opposition. Not necessarily on what I&#8217;d call doomster grounds. But people [saying], like: &#8216;Hey, I see this has benefits, but I grew up, trained my kids to live in some other kind of world. I don&#8217;t want this!&#8217; And that&#8217;s going to be a massive fight.&#8221;</p><p>The most agonizing aspect of our demotion could be social, once someone prefers a machine to you. You&#8217;re seeing precursors every time family members opt to gaze at a screen rather than gaze at you. We blame smartphones, and social media, and the adolescent brain.</p><p>But wait till your spouse jilts you for a <em>personified</em> agent. That rejection may feel unbearable: you can&#8217;t compete anymore. 
And once your loved ones prefer <a href="https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness">AI companions</a>, you might seek them for yourself, spreading the social downgrade of our kind.</p><p>Already, the dread is becoming political, with odd <a href="https://superintelligence-statement.org/">alliances</a> forming among right-wing politicos, liberal artsy types and religious traditionalists, united in horror at an imagined future of <a href="https://arxiv.org/pdf/2501.16946">disempowered</a> humanity, stripped of dignity, obsolete. You can imagine tomorrow&#8217;s political opportunist, eyeing a dejected crowd of humans before him, and thundering: &#8220;How <em>dare</em> they?!&#8221;</p><p>Will he mean the machines?</p><h3>The Downwardly Mobile Species (Part I)</h3><p>In prehistoric times, nothing seemed more unreachable than the night sky, specked with glinting dots and streaked with rare comets, passing in silent mystery. Humans pictured the supernatural looking down: <em>we</em> were the subjects in this bewildering story.</p><p>Religions codified the firmament above, mapping our world to the centerpoint. 
But Nicolaus Copernicus redrew the heavens with <em><a href="https://en.wikipedia.org/wiki/De_revolutionibus_orbium_coelestium">De revolutionibus orbium coelestium</a></em> in 1543, plucking our globe from the core, and replacing it with the Sun.</p><p>&#8220;And new philosophy calls all in doubt,&#8221; the English poet John Donne <a href="https://www.poetryfoundation.org/poems/44092/an-anatomy-of-the-world">said</a> in &#8220;An Anatomy of the World,&#8221; written in 1611:</p><blockquote><p>The element of fire is quite put out,</p><p>The sun is lost, and th&#8217;earth, and no man&#8217;s wit</p><p>Can well direct him where to look for it.</p></blockquote><p>Science corrected an astronomical falsehood, but human confidence relies on falsehoods. &#8220;Tis all in pieces,&#8221; Donne wrote, &#8220;all coherence gone.&#8221;</p><p>The revised cosmos demoted each human into &#8220;a puny, irrelevant spectator,&#8221; the American philosopher Edwin A. Burtt <a href="https://archive.org/details/metaphysicalfoun00burtuoft/page/236/mode/2up?q=dante">wrote</a> in 1925. &#8220;The gloriously romantic universe of Dante and Milton, that set no bounds to the imagination of man as it played over space and time, had now been swept away.&#8221;</p><blockquote><p>The world that people had thought themselves living in&#8212;a world rich with colour and sound, redolent with fragrance, filled with gladness, love and beauty, speaking everywhere of purposive harmony and creative ideas&#8212;was crowded now into minute corners in the brains of scattered organic beings. The really important world outside was a world hard, cold, colourless, silent, and dead; a world of quantity, a world of mathematically computable motions in mechanical regularity.</p></blockquote><p>The Church tried to snuff out the astronomical heresy, which challenged its claim as holder of truth. But suppression only fed into hostility from Northern Europe over the influence of Rome. 
In the bloody century after Copernicus, wars over religion and political control cost millions of European lives. It would be a wild distortion to suggest that a blow to human narcissism caused this. More plausible is that disruption of the cosmic hierarchy reverberated with the changing order on Earth.</p><p>And so the scientific revolution proceeded, with feats of mind illuminating more of the dark universe around us. People had greater reason than ever to admire our species. Inevitably, the scrutiny of science turned from the heavens to the humans.</p><p>&#8220;Man&#8217;s destiny was no longer determined from &#8216;above&#8217; by a super-human wisdom and will, but from &#8216;below&#8217; by the sub-human agency of glands, genes, atoms, or waves of probability. This shift of the locus of destiny was decisive,&#8221; Arthur Koestler <a href="https://en.wikipedia.org/wiki/The_Sleepwalkers:_A_History_of_Man%27s_Changing_Vision_of_the_Universe">wrote</a> in his 1959 book <em>The Sleepwalkers: A History of Man&#8217;s Changing Vision of the Universe</em>.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!qbpc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png" width="1456" height="794" alt="" loading="lazy"></figure></div><h3>The Downwardly Mobile Species (Part II)</h3><p>If Copernicus hurled humanity into orbit, Darwin deposited our species in an awkward
family tree. Previously, the Western vision was of a great chain with God at the top, angels below, then humans, and finally the dimwitted beasts. The prospect of sharing more than a planet with our hairy former underlings proved too alarming for many to accept, provoking <a href="https://profjoecain.net/scopes-monkey-trial-1925-complete-trial-transcripts/">disputes</a> about our relationship to <a href="https://www.frontiersin.org/journals/environmental-science/articles/10.3389/fenvs.2023.1175143/full">nature</a> that persist today.</p><p>For some, our new self-concept broadened moral consideration to include the natural world, motivating environmental protections, and the fight against animal cruelty. But another response was darker, with &#8220;<a href="https://en.wikipedia.org/wiki/Survival_of_the_fittest">survival of the fittest</a>&#8221; twisted from a description of natural processes into a supposed mandate for the most inhuman of human drives: to dehumanize the vulnerable. Horrors followed, from colonial genocide, to the eugenics movement, to the Holocaust.</p><p>But again, you cannot ascribe such evils to a puncture in human vanity. A more reasonable claim is that the world lurches into periods of volatility, and the prevailing beliefs about human worth at those times will condition how we treat each other, and how conflicts unfold.</p><p>After the atrocities of World War II, our species set moral boundaries into law, seeking to universalize <em>human</em> rights. The spread of democracy and the free market too amounted to a veneration of human wisdom. But in the digital age, humanity seems to be losing <a href="https://www.theglobeandmail.com/opinion/article-humans-are-losing-confidence-in-humankind/">confidence</a> in humankind.</p><p>Faith in democracy <a href="https://www.pewresearch.org/short-reads/2025/06/30/dissatisfaction-with-democracy-remains-widespread-in-many-nations/">falls</a>. 
The Global Financial Crisis smashed public confidence in our governing systems. And the bewitching power of algorithms has become a constant lament.</p><h3>Resist. Resign. Rewire.</h3><p>When Alan Turing <a href="https://courses.cs.umbc.edu/471/papers/turing.pdf">proposed</a> his test of machine thinking, he foresaw that the notion would rattle people, and reviewed a list of likely objections, versions of which you hear today:</p><ul><li><p>&#8230;that artificial intelligence could never genuinely be kind, or fall in love, or &#8220;enjoy strawberries and cream&#8221;</p></li><li><p>&#8230;that God gave only humans a soul</p></li><li><p>&#8230;that machines will never create anything truly original</p></li></ul><p>What Turing called the &#8220;heads in the sand&#8221; objection is especially prevalent, with contemporary ostriches insisting that AI is a hype mirage, that it&#8217;s just next-token prediction, nothing but pattern-recognition, regurgitating human thoughts, that AI errors are proof of its worthlessness. (Human errors never lead to that conclusion.)</p><p>Plenty of AI hype <em>does</em> circulate. And deployment will be fitful: sometimes worryingly fast, sometimes frustratingly slow. An investment bubble could burst.</p><p>But the technology is amazing already, useful already&#8212;and we&#8217;ve hardly begun to figure out its uses. 
Meanwhile, AI dutifully hurdles more obstacles each month.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Zy62!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png" width="1456" height="794" alt="" loading="lazy"></figure></div><h3>Who <em>Are</em> We?</h3><p>In cautionary tales, humans who create artificial life always overlook a key trait. The golem of Jewish folklore was brought forth from clay but lacked smarts, so ran amok. Frankenstein&#8217;s monster missed out on looks, and never got over it. Pinocchio craved a human soul. As tech advanced, the missing trait updated, becoming human empathy, missing from all those immoral bots in everything from <em>2001: A Space Odyssey</em>, to <em>The Terminator</em>, to <em>The Matrix</em>.</p><p>Such stories flattered humanity: among all creations, we alone enjoy the full complement of qualities. But lately, the narrative has updated again, with thinking machines now flickering with hints of greater humanity than the humans who employ them, from Spielberg&#8217;s <em>A.I. 
Artificial Intelligence</em>, to <em>Ex Machina</em>, to the novel <em>Klara and the Sun</em>, by Kazuo Ishiguro.</p><p>It&#8217;s as if culture senses an anxiety about what technology might expose, not just demoting us cognitively but snuffing out any human exceptionalism. Unless we intend to boast of our frailties. Increasingly, we do.</p><p>&#8220;In a world where everything can be perfected, imperfection becomes a signal,&#8221; the head of Instagram, Adam Mosseri, <a href="https://www.instagram.com/p/DS7pz7-DuZG/">wrote</a> recently. &#8220;Rawness isn&#8217;t just aesthetic preference anymore&#8212;it&#8217;s proof. It&#8217;s defensive.&#8221;</p><p>Or as the Indian filmmaker Shakun Batra <a href="https://www.hollywoodreporterindia.com/features/interviews/shakun-batra-on-artificial-intelligence-ai-the-getaway-car-and-raanjhanaa-controversy">remarked</a> in defense of human authorship over machine-generated scripts: &#8220;AI doesn&#8217;t have childhood trauma.&#8221;</p><p>At the AI frontier, another thought lurks, inverting Turing&#8217;s 1950 question. Not, &#8220;Can machines think?&#8221; But, &#8220;Do <em>humans</em> think?&#8221; More precisely, do we reason and comprehend uniquely, as we&#8217;ve presumed?</p><p>Machines compose music. They propose vacation itineraries. They&#8217;ll suggest how to talk to a moody teenager. Each additional AI capability is an implicit downgrade of us, a suggestion that maybe the human mind itself is just an information-processor.</p><p>Computer geeks have long muttered about this possibility. Philosophers debated it in thought-experiments. 
Cognitive scientists scrutinized our gray matter for clues.</p><p>But what approaches is a public dawning, forcing the culture to digest the indigestible, much as happened in previous eras, when people confronted the bizarre notion that our planet was another rock spinning around another star, or that our species was just another animal.</p><p>The third shocking revelation is upon us. Maybe it&#8217;s computation all the way down. Maybe there&#8217;s nothing soulful in neural substrates. What if we&#8217;re all just &#8220;<a href="https://mindmatters.ai/2025/05/why-the-human-mind-is-not-and-cannot-be-a-meat-computer/">meat computers</a>&#8221;?</p><h3>How We&#8217;ll React</h3><p>You can predict three possible responses to our humbling: <em>Resist</em>, <em>Resign</em>, or <em>Rewire</em>.</p><ol><li><p><strong>Resist</strong>: Psychological resistance will manifest as political resistance. The question is how ideology and parties evolve around AI humiliation. Resistance movements will face a persistent challenge: industrial dynamics will keep driving this technology forward. Any country that curbs innovation fears that its rivals will win. The most aggressive branches of <em>resist</em> may seek to avenge their perceived humiliation. The question is not only whom they blame or how they exact revenge. It&#8217;s what, realistically, they expect to regain.</p></li><li><p><strong>Resign</strong>: Some will reframe their view of humanity to accept the humbling. 
The optimistic version is that people discover freedom in their new humility, pursuing what improves life rather than grinding under the force of insatiable ambition. In short, we cede the battle for supremacy but flourish. A more pessimistic version is that losing faith in our species&#8217; unique worth makes people value others less: when humanness is no longer special, perhaps human rights aren&#8217;t either.</p></li><li><p><strong>Rewire</strong>: This may be the most widespread response. People accept that the downgrade happened, yet their egos are never tamed, much as chess remains popular long after machines defeated us. A more literal &#8220;rewiring&#8221; is transhumanism, with technology incorporated into our bodies, even altering our genetic future. Conspiracists picture shadowy elites consolidating power by becoming a tech-altered superspecies, leaving behind &#8220;legacy humans.&#8221; A more plausible scenario is biotech gradually elevating human cognitive capacity, much as today&#8217;s medical tech remedies physical frailties, from hearing aids to the replacement knee.</p></li></ol><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!RDFD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png" width="1456" height="794" alt="" loading="lazy"></figure></div><h3>The Next Quest</h3><p>If humankind suffered humiliation before, we weathered it. 
After Copernicus and Darwin, we still took pride in ourselves. Indeed, we celebrated human accomplishments more than ever, from Bach to Escher to G&#246;del. And that pride propelled us into this strange time, when human greatness may design human demotion.</p><p>Policymakers need to think about more than the economic shock. The psychological shock could be exorbitant if we are chased from the kitchen like pesky children, and told to go busy ourselves elsewhere.</p><p>Our downgrade doesn&#8217;t necessarily mean conflict. But it could change how future conflicts unfold, especially if we value humans differently, or seek relief from our humiliation by shoving others into the dirt.</p><p>Much depends on how we redefine our species. Whether humans really <em>are</em> nothing but computational machines may matter less than whether people <em>feel</em> this way.</p><p>But what will make us exceptional? Today&#8217;s responses are often vague and circular: that humans are better at doing human things. That is a precarious claim. As intelligent machines grow more adept, few people will pay a human-premium for a worse outcome.</p><p>Unless there really <em>are </em>qualities both valuable and uniquely ours that nothing can supplant. 
Finding these may be our new quest.</p>]]></content:encoded></item><item><title><![CDATA[Séb Krier’s Top 8 AI Reads of the Year ]]></title><description><![CDATA[Holiday fun]]></description><link>https://www.aipolicyperspectives.com/p/seb-kriers-top-8-ai-reads-of-the</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/seb-kriers-top-8-ai-reads-of-the</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Thu, 18 Dec 2025 14:23:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!U2QO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Every month or so, S&#233;b Krier shares a list of favourite articles with his Google DeepMind colleagues. In the run-up to this festive period, we forced him to pick those that he most enjoyed over the past year. He came up with five unmissable pieces from 2025, plus three classics. As always with S&#233;b&#8217;s lists, this one comes with its <a href="https://noodsradio.com/shows/restless-egg-dawn-chorus-w-seb-krier-22nd-june-25">own music mix</a>. 
Enjoy!</em></p><p><em>&#8212;Conor Griffin, AI Policy Perspectives</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!U2QO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg" width="1376" height="768" alt=""><figcaption class="image-caption">Images from Gemini </figcaption></figure></div><h1>Five Great Pieces from 2025</h1><p><strong><a href="https://asteriskmag.com/issues/09/a-defense-of-weird-research">1. A Defence of Weird Research</a></strong> (<em>Asterisk Magazine</em>)</p><p><strong>Deena Mousa &amp; Lauren Gilbert</strong></p><p><strong>S&#233;b Says:</strong> Science funding needs to be shaken up. But I&#8217;m concerned that a lot of good research might be cut because people misunderstand how science works. Mousa and Gilbert remind us why basic research matters, and why governments should fund it: while the benefits to society are significant, they are hard to predict and take time to materialise, so companies will underinvest. 
To make their case, the authors take a tour of weird-research success stories, such as how studying lizard venom led to the invention of Ozempic, and how studying the effects of separating rat pups from their mothers led to the now <a href="https://www.apa.org/topics/parenting/massage-therapy#:~:text=As%20a%20direct%20result%20of,give%20massage%20therapy%20to%20preemies.">common use of massage therapy</a> to help pre-term human babies. Did you know that studying frog skin led to the invention of <a href="https://asteriskmag.com/issues/02/salt-sugar-water-zinc-how-scientists-learned-to-treat-the-20th-century-s-biggest-killer-of-children">oral rehydration therapy</a>, which has saved over 70 million lives?</p><p><strong><a href="https://andymasley.substack.com/p/requests-for-journalists-covering">2. Requests for journalists covering AI and the environment</a> </strong>(<em>The Weird Turn Pro </em>newsletter<em>)</em></p><p><strong>Andy Masley</strong></p><p><strong>S&#233;b Says: </strong>I worry about the quality of a lot of commentary on AI and the environment. So it&#8217;s important to re-up these best practices. Specifically, Masley cautions that readers are coming away with wildly inaccurate beliefs about where AI and data centres fit into the environmental picture. His favourite book on good environmental communication is <em><a href="https://www.withouthotair.com/">Sustainable Energy&#8212;Without the Hot Air, by David JC MacKay</a></em>, and his guidance includes some classics of the genre, such as never sharing contextless large numbers (&#8220;200,000 bottles of water per day&#8221;). He also suggests comparing data centres&#8217; energy use with other industries, rather than with household use. 
Although aimed at journalists, the guidance is also helpful to those working in policy, some of whom make the mistakes that Andy calls out, such as viewing one&#8217;s own AI prompts as environmentally consequential.</p><p><strong><a href="https://scottaaronson.blog/?p=9030">3. ChatGPT and the Meaning of Life</a></strong> (<em>Scott Aaronson&#8217;s Shtetl-Optimized</em> blog)</p><p><strong>Harvey Lederman</strong></p><p><strong>S&#233;b Says: </strong>I don&#8217;t think all jobs will disappear any time soon. But if we get full automation, then Lederman&#8217;s piece is a good way to think about it. He starts by describing the fits of dread he has felt ever since the launch of ChatGPT, then considers reasons why the end of work could hurt society, from losing the joy of scientific discovery to losing the sense of purpose from serving others. Ultimately, he rejects the most pessimistic arguments, noting that the consequences of scientific findings, such as penicillin that saves lives, are more important than their discovery, and that much service work is drudgery. However, he captures how difficult the transition may be, including for &#8220;workists&#8221; like him who use their jobs to make sense of their lives. He concludes: &#8220;A future without work could be much better than ours, overall. 
But, living in that world, or watching as our old ways passed away, we might still reasonably grieve the loss of the work that once was part of who we were.&#8221;</p><p><strong><a href="https://inferencemagazine.substack.com/p/how-much-economic-growth-from-ai">4. How much economic growth from AI should we expect, how soon?</a></strong> (<em>Inference Magazine</em>)</p><p><strong>Jack Wiseman &amp; Duncan McClements</strong></p><p><strong>S&#233;b Says: </strong>Some predict that AI will be close to economically useless, while others think it might transform everything tomorrow. This piece comes closest to how I think about it. As Wiseman &amp; McClements explain, the most ambitious forecasts for AI rest on the idea of &#8220;digital AI researchers&#8221; that train and improve the next generation, leading to a jump in the share of economic tasks that AI can do. One obstacle to achieving this is the availability of compute, which is increasingly allocated to serve customers (inference) rather than to training new models. Additionally, a multitude of frictions will slow the diffusion of AI, whether it&#8217;s the time needed to cultivate biological cells for scientific experiments, or the regulatory approvals for sensitive-use cases. 
As a result, the authors expect a transformative impact on near-term economic growth, but not an explosive one.</p><p><strong><a href="https://www.economicforces.xyz/p/yes-econ-101-is-underrated-it-correctly">5.</a></strong><a href="https://www.economicforces.xyz/p/yes-econ-101-is-underrated-it-correctly"> </a><strong><a href="https://www.economicforces.xyz/p/yes-econ-101-is-underrated-it-correctly">Yes, Econ 101 is underrated</a></strong><em> (Economic Forces </em>newsletter)</p><p><strong>Brian Albrecht</strong></p><p><strong>S&#233;b Says: </strong>Much of the discourse on the Left and the Right ignores inconvenient truths of economics, so it&#8217;s good to return to the basics. Albrecht shows how Econ 101 helps explain the world. For example, egg producers were accused of price-gouging when they charged sharply more in 2022, but it had more to do with avian flu killing many chickens. In the egg market, supply and demand are relatively inelastic: It takes time to raise chickens, and customers who want omelettes don&#8217;t have alternatives. So, prices jumped. Different markets have different characteristics, but the explanatory power of supply, demand and pricing is similar. Nor does outsized market power invalidate these principles. This essay also shows how Econ 101 offers insights into social trends, such as how skewed sex ratios can affect marriage and employment rates, as in certain immigrant communities, or drive up savings rates, as in China. Econ 101 may not tell us whether policies will be politically popular or whether outcomes are fair. 
But it does help predict what those outcomes may be.</p><h1>Three Classics that I Revisited</h1><p><strong><a href="https://www.conspicuouscognition.com/p/why-do-people-believe-true-things">6. Why do people believe true things?</a> </strong>(<em>Conspicuous Cognition</em> newsletter)</p><p><strong>Dan Williams</strong></p><p><strong>S&#233;b Says: </strong>Anything Dan Williams writes is self-recommending, and this piece is no exception. In July 2024, he critiqued how many people think about the relationship between belief and reality. To illustrate this, he notes that people seek explanations for issues like crime and poverty, when the real question is understanding law-abidingness and wealth. This requires &#8220;explanatory inversion.&#8221; Transferring that concept to how people commonly debate public knowledge, he notes that many misinformation researchers concern themselves with why different groups believe falsehoods. But the more pertinent puzzle, he contends, is why humans overcome error, bias and illusions to form accurate perceptions of how things are. His conclusion? Ignorance and misperceptions are the default, and humanity will revert to them, unless we can understand, maintain and improve our norms and institutions, from journalistic integrity to robust legal systems.</p><p><strong><a href="https://isi.org/hayek-on-the-role-of-reason-in-human-affairs/">7. 
Hayek on the Role of Reason in Human Affairs</a> </strong>(<em>Intercollegiate Studies Institute)</em></p><p><strong>S&#233;b Says: </strong>A lot of discourse on intelligence, knowledge, and coordination is biased towards a computer-science-centric view of the world, and neglects Hayek&#8217;s views. This 2014 essay explains how Hayek championed <em>critical rationalism</em>, which was rooted in the Scottish Enlightenment of David Hume and Adam Smith, and developed by Carl Menger and the Austrian School. <em>Critical rationalism </em>sees social order as spontaneous, and the unintended result of human action,<em> </em>not design. As a result, inherited social institutions and rules contain tacit knowledge, the result of a multitude of trials and errors, that transcends the knowledge available to a reasoning mind. Therefore, the desire to &#8220;make everything subject to rational control,&#8221; Hayek suggests, is an egregious error. Reason should instead serve a negative function, to guide and restrain irrational impulses or morals. As the human mind cannot master all the concrete details of society, we must rely on abstract concepts and rules, like the rule of law and the market, to coordinate the dispersed, fragmented, knowledge of millions of people.</p><p><strong><a href="https://www.lewissociety.org/innerring/">8. The Inner Ring</a></strong> (<em>The C.S. Lewis Society of California</em>)</p><p><strong>C.S. Lewis</strong></p><p><strong>S&#233;b Says: </strong>This piece profoundly shaped how I think about the world. In this 1944 lecture at King&#8217;s College, University of London, Lewis offered &#8220;middle-aged moralising&#8221; to a group of students during wartime, telling them that in every organisation, from school to the army, there are two hierarchies. There is the official hierarchy. Then, there is the informal hierarchy, an &#8220;Inner Ring&#8221; that holds the true power. 
The Inner Ring comes in many forms, from high society to &#8220;communistic c&#244;teries.&#8221; It is always evolving, holds no formal admissions or expulsions, and bears no clear identifying marks, save perhaps particular slang and a longing from others to be inside. It is this desire, and the terror of being outside, that turns people into scoundrels, he argues. The Inner Ring may be unavoidable, or even necessary. But the quest to enter it is ultimately futile. &#8220;Once the first novelty is worn off, the members of this circle will be no more interesting than your old friends. Why should they be?&#8221; Lewis said. &#8220;You were not looking for virtue or kindness or loyalty or humour or learning or wit or any of the things that can really be enjoyed. You merely wanted to be &#8216;in.&#8217; And that is a pleasure that cannot last.&#8221; What to do instead? Be a sound craftsman who focuses on the quality of work as an end in itself, and spend time with people you actually like.</p>]]></content:encoded></item><item><title><![CDATA[What’s It Like To Be A Bot? 
]]></title><description><![CDATA[How philosophers dream of conscious machines]]></description><link>https://www.aipolicyperspectives.com/p/whats-it-like-to-be-a-bot</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/whats-it-like-to-be-a-bot</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Wed, 10 Dec 2025 10:42:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vkH0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vkH0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vkH0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vkH0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1434051,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!vkH0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 
pc-reset"></div></div></div></a><figcaption class="image-caption">(Images: Gemini)</figcaption></figure></div><p><em><strong>If an AI gained consciousness, would we know?</strong> </em></p><p><em>Maybe this question strikes you as absurd; maybe, disquieting. Either way, you&#8217;ll hear it more in coming years, as human beings develop increasingly close ties with charismatic machines trained on us. </em></p><p><em>Thankfully, philosophers have pondered consciousness for about as long as philosophers have pondered anything. 
In recent decades, advances in computing added urgency, with leading thinkers dreaming up a range of provocative thought-experiments: a man communicating from a locked room; a woman afflicted by a blue banana; a bat with an inner life.</em></p><p><em>To explain, we are publishing this essay about key thought-experiments related to AI, written by the broadcaster and author <strong>David Edmonds</strong>, whose acclaimed books include <a href="https://press.princeton.edu/books/hardcover/9780691225234/parfit?srsltid=AfmBOopix29j5gEeJEIIUy_pDsBk_AVuuVW3qnWtbf0FkXLrP6ExHqPa">Parfit</a>, the recently released <a href="https://press.princeton.edu/books/hardcover/9780691254029/death-in-a-shallow-pond?srsltid=AfmBOoohdjgpf28e5lnLBK8UZxcAngkBx-zE4IH4cqKaORzuYVCzjLmU">Death in a Shallow Pond</a>, and a collection of philosophical essays that he edited, <a href="https://global.oup.com/academic/product/ai-morality-9780198876434">AI Morality</a>. He is currently writing a book on thought-experiments. </em></p><p>&#8212;Tom Rachman, <em>AI Policy Perspectives</em></p><div><hr></div><h4><em><strong>By David Edmonds</strong></em></h4><p></p><p>As a young scholar in Oxford, John Searle fell in love twice. First with a fellow student, Dagmar, who became his wife, and second with philosophy. The City of Dreaming Spires was grim in the 1950s, Searle recalled, with unheated buildings and inedible food. &#8220;The British were still on wartime rationings,&#8221; he <a href="https://www.youtube.com/watch?v=f2qZdGmq8vw">said</a>. &#8220;You got one egg a week.&#8221;</p><p>The philosophical fare was more nourishing. Searle <a href="https://link.springer.com/chapter/10.1007/978-94-010-0589-0_2">described</a> the collection of philosophers in the city as &#8220;the best the world has had in one place at one time since ancient Athens.&#8221; Two giants of Oxford philosophy, Peter Strawson and J.L. 
Austin, were key influences on him.</p><p>Searle became fixated on one topic that, for the rest of his life, he maintained was the central puzzle for philosophy: consciousness. How were human reality and our conception of ourselves compatible with the physical world? How could beings with free will and intentionality exist? How could politics, ethics and aesthetics arise out of the &#8220;mindless, meaningless&#8221; stuff from which the physical world was constructed?</p><p>From 1959, Searle taught at Berkeley, beginning his career in what now seems a remote era of pen and paper. It wasn&#8217;t until the late 1970s that personal computers became widely available. At roughly the same time, debates around artificial intelligence gathered speed and heat.</p><p>In 1979, Searle was invited to deliver a lecture at Yale to AI researchers. He knew next to nothing about AI, so he bought a book on the subject. This described how a computer programme had been fed a story about a man who&#8217;d gone to a restaurant, been served a burnt hamburger, and stormed out without paying. Did the man eat the hamburger? The programme correctly worked out that he had not. &#8220;They thought that showed it understood,&#8221; he <a href="https://www.ceskatelevize.cz/porady/10441294653-hyde-park-civilizace/9271-english/11817-john-searle-philosopher-and-linguist/">commented</a>. &#8220;I thought that was ridiculous.&#8221;</p><p>And so in 1980, Searle published a paper called &#8220;<a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/minds-brains-and-programs/DC644B47A4299C637C89772FACC2706A">Minds, Brains, and Programs</a>,&#8221; introducing the Chinese Room, one of several famous philosophical thought-experiments that have had a lasting impact on discussions of consciousness and AI.</p><p>It goes something like this. You are the only person in a locked room. A note is passed to you underneath the door. 
You recognize the characters as being Chinese, but you don&#8217;t speak Chinese. By luck, there&#8217;s a manual in the room, with instructions on how to manipulate these symbols. You follow the instructions. Without understanding the content of what you&#8217;ve written, you produce a reply that you slip back under the door. Another note arrives. With the manual, you again generate a reply.</p><p>The person on the other side of the door might have the impression that you understand Chinese. But do you? Obviously not, thought Searle. And any computer is in an analogous position. A computer is merely manipulating symbols, following instructions, he thought. Computation and understanding are not synonymous.</p><p><strong>BATS &amp; COLOURS</strong></p><p>It is a striking feature of the philosophy of mind, and consciousness studies, that so much of the intellectual agenda has been driven by a small set of thought-experiments.</p><p>The Chinese Room has spawned a vast literature. Almost as famous is a paper, &#8220;<a href="https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf">What&#8217;s It Like To Be A Bat?</a>&#8221;, that predated Searle&#8217;s by six years, written by another American philosopher, Thomas Nagel, whom Searle befriended during his Oxford years.</p><p>Like us, bats are mammals. But they have an alien way of navigating the world, echolocation. 
There is a subgroup of humans, chiropterologists, who know an impressive amount about bats, and have investigated how their high-frequency sounds bounce off objects, allowing them to detect size, shape and distance. But there is one thing that they don&#8217;t and can&#8217;t know, Nagel said: the subjective experience of being this creature.</p><p>&#8220;I want to know what it is like for a <em>bat</em> to be a bat,&#8221; he wrote. &#8220;Yet if I try to imagine this I am restricted to the resources of my own mind, and those resources are inadequate to the task.&#8221; AI was still in its infancy when Nagel wrote his article, but questions about the meaning of an artificial mind were already circulating. Could there be something that it is like to be a thinking machine?</p><p>The Australian philosopher, Frank Jackson, attacked the problem from a different angle in his 1982 article, &#8220;<a href="https://philpapers.org/rec/JACEQ">Epiphenomenal Qualia</a>&#8221; (qualia being a term for the subjective aspects of conscious experience). In his Mary&#8217;s Room thought-experiment, a woman has had an unusual upbringing. Mary was raised alone, entirely in a black-and-white room: black-and-white walls, a black-and-white floor, a black-and-white TV. She has black-and-white clothes and her food, pushed under the black-and-white door, has been dyed black and white.</p><p>To stave off the tedium of her monochrome existence, Mary studies hard, and her focus is colour. She learns all about the physics and biology of colour&#8212;for example, about the wavelengths of particular colours and how they interact with the retina to stimulate experience. 
She even learns how colour words are used in literature, poetry and ordinary language, and how someone can &#8220;feel blue,&#8221; be &#8220;green with envy,&#8221; or so angry that &#8220;a red mist descends.&#8221; Mary becomes the world&#8217;s expert on all aspects of colour.</p><p>One day, the door to Mary&#8217;s room opens for the first time, and she joins us in our kaleidoscopic world. The first thing she sees is a ripe red apple. The question is this: When Mary sees this apple, does she learn anything?</p><p>Jackson argued&#8212;and most people presented with this scenario seem to agree&#8212;that in seeing what red actually looks like, Mary <em>has</em> learnt something. At the time of his article, what Jackson took this to show was that a purely physical description of the world cannot capture everything there is to know about the world. The phenomenology of experience (the redness, the what&#8217;s-it-like-to-be-a-bat-ness) cannot be fully explained with descriptions of particles and fields, electrons and neutrons, atoms and molecules.</p><p>Even if an AI could recognize a new shade of colour, such as lilac, that it had never seen before, it would not mimic human experience if it lacked lilac qualia&#8212;or so Mary&#8217;s Room might suggest. 
This raises the issue of how human subjectivity may be relevant to comprehension and functioning in the real world, turning a philosophical question into a technical one.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p><p><strong>BLOCKHEADS &amp; BANANAS</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1UxT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1UxT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!1UxT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg" width="1456" height="778" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:778,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1UxT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"></button></div></div></div></a></figure></div><p>It was Jackson who gave the name &#8220;Blockhead&#8221; to a thought-experiment from the American philosopher Ned Block that appears in a 1981 paper, &#8220;<a href="https://www.researchgate.net/publication/265578700_Psychologism_and_Behaviorism">Psychologism and Behaviorism</a>.&#8221; We are to imagine there is a computer, programmed in advance so that it could respond to every possible sentence with its own plausible sentence.</p><p>This was in part a response to the famous test of machine intelligence that Alan Turing set in 1950. A computer passed the Turing test if it could converse with a human, and the human could not identify it as a machine. 
The Blockhead machine would pass the Turing test, yet it is self-evidently not intelligent.</p><p>Today&#8217;s LLMs could fool us into believing that we are engaging with humans. But, much as Searle contended in the Chinese Room that manipulating symbols is insufficient for understanding, Block argued that behaving identically to an intelligent entity is insufficient to demonstrate intelligence or mental states. The lesson we might take from these, along with the Nagel and Jackson thought-experiments, is that AI would lack fundamental features of human consciousness.</p><p>Daniel Dennett, on the other hand, thought it was at least conceivable that AI could be conscious. With his lumbering bulk and Santa Claus beard, Dennett was an unmistakable figure in the philosophical world. He coined the term &#8220;intuition pump&#8221; to describe how thought-experiments function. Intuition pumps can be helpful, he believed, but they can also mislead. What we need is to examine how the pump operates, he <a href="https://philosophybites.com/podcast/daniel-dennett-on-the-chinese-room/">said</a>, to &#8220;turn all the knobs, see how they work, take them apart.&#8221;</p><p>A thought-experiment for which he had particular loathing was the Chinese Room. He argued that its principal error was to portray language as akin to instructions. But for a computer to master a language would take millions and millions of lines of code. And, though we might say that the man alone in the room doesn&#8217;t understand, perhaps the system as a whole does.</p><p>Dennett felt that Mary&#8217;s Room had similarly hoodwinked us. To expose this, he presented another thought-experiment. Mary is as before, an unusual woman whose life has been led entirely in monochrome, until the day when the door opens. 
But this time, he <a href="https://en.wikipedia.org/wiki/Consciousness_Explained">wrote</a>:</p><blockquote><p>As a trick, they prepared a bright blue banana to present as her first colour experience ever. Mary took one look at it and said, &#8220;Hey! You tried to trick me! Bananas are yellow, but this one is blue!&#8221; Her captors were dumbfounded. How did she do it? &#8220;Simple,&#8221; she replied. &#8220;You have to remember that I know <em>everything</em>&#8212;absolutely everything&#8212;that could ever be known about the physical causes and effects of colour vision. So of course before you brought the banana in, I had already written down, in exquisite detail, exactly what physical impression a yellow object or a blue object (or a green object, etc.) would make on my nervous system.&#8221;</p></blockquote><p>Mary is the world expert on colour, so why wouldn&#8217;t she spot such an obvious deceit? Dennett argued that the idea that we had feelings, thoughts and desires that were resistant to an objective, external, physicalist analysis was mistaken. In that sense, &#8220;qualia&#8221; were a mirage, a kind of useful fiction. If we do away with this fiction, then a major barrier to building AI that&#8217;s like a human in most important respects vanishes.</p><p>The AI researcher Blaise Ag&#252;era y Arcas has <a href="https://whatisintelligence.antikythera.org/chapter-09/#mary-s-room">argued</a> that in theory (and increasingly in practice) there is no significant distinction between a human and machine reaction to so-called qualia. &#8220;So many food, wine, and coffee nerds have written in exhaustive (and exhausting) detail about their olfactory experiences that the relevant perceptual map is already latent in large language models. 
&#8230; In effect, large language models <em>do</em> have noses: ours.&#8221;</p><p><strong>AVOIDING TWO BAD OUTCOMES</strong></p><p>The enduring fascination with thought-experiments in the AI era&#8212;and the intensity of the disputes that they provoke&#8212;reflect how much is at stake. While these questions are important for morality, they could become more than theory.</p><p>&#8220;The importance of the dispute over AI welfare can be understood in terms of the avoidance of two bad outcomes: under-attributing and over-attributing welfare to AIs,&#8221; the philosophers Geoff Keeling and Winnie Street explain in their forthcoming book <em>Emerging Questions in AI Welfare</em>.</p><p>&#8220;On one hand, failing to register that AIs are welfare subjects when AIs are in fact welfare subjects is bad because it could lead to unintentional mistreatment of AIs or the neglect of the needs of AIs, potentially resulting in large-scale suffering,&#8221; they write. 
&#8220;On the other hand, over-attributing welfare to AIs is problematic because resource allocation decisions for promoting the (potential) welfare of different kinds of entities&#8212;including humans, non-human animals and AIs&#8212;are often zero-sum.&#8221; </p><p>In other words, the efforts and resources you invest in AI welfare leave less for people and animals.</p><p>To manage this quandary, Keeling and Street propose three parallel projects. First, there is a <em>philosophical</em> project, in which we consider which forms of AI could be candidates for welfare. Is it the underlying AI model? Or the system built atop it? Or would it be specific agents? Second, there is a <em>scientific</em> project, in which we establish methodologies to detect factors such as consciousness. Third, there is a <em>democratic</em> project of educating the public about the complex issues that await.</p><p>Once this future engulfs us, thought-experiments about machine consciousness could move beyond speculation. The &#8220;experiments&#8221; would be live, and the participants would be humanity itself&#8212;and perhaps other beings besides.</p>]]></content:encoded></item><item><title><![CDATA[What If AI Ends Loneliness?]]></title><description><![CDATA[Synthetic companions won&#8217;t leave you. 
Maybe that&#8217;s a problem.]]></description><link>https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness</guid><dc:creator><![CDATA[Tom Rachman]]></dc:creator><pubDate>Tue, 02 Dec 2025 10:45:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rD4s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rD4s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rD4s!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 424w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 848w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 1272w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!rD4s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png" width="1024" height="536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1067875,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!rD4s!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 424w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 848w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 1272w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" 
type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">(Credit: Gemini)</figcaption></figure></div><p><strong>Loneliness is a trade imbalance: the supply of affection never meets demand. </strong>Sometimes, humans create new humans as objects to love. Today, people are creating AI companions to commune with, to befriend, to love us back. As with human children, these characters will act upon us in unexpected ways.</p><p>For now, most people consider emotional relationships with an AI to be pitiable and <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5097445">one-sided</a>, as if falling for a blowup doll. But such interactions will spread, especially as AI becomes more personalized, adapting to our behavior, quenching our longings. 
</p><p>You might presume that machines will remain emotional dullards compared with people. But synthetic affection could prove more sensitive than the organic kind. In one <a href="https://www.nature.com/articles/s44271-025-00258-x">study</a>, large language models were already more skilled at standard tests of emotional intelligence than the average human. Other research <a href="https://www.hbs.edu/ris/Publication%20Files/24-078_a3d2e2c7-eca1-4767-8543-122e818bf2e5.pdf">found</a> that AI companions may reduce loneliness as much as engaging with a living person.</p><p>Is AI about to solve solitude? Or thrust us more deeply into it?</p><h2><strong>Tech already changed isolation</strong></h2><p>For most of human history, loneliness had a sound: silence.</p><p>But lately, loneliness got noisy: music pulsing from a spouse&#8217;s leave-me-alone headphones; bleeps from the next-door neighbor&#8217;s gaming console; a smartphone pinging with others&#8217; social glory. If the lonely suffered in silence before, they do so noisily now, stifling the ache for companionship with its simulation online.</p><p>Oddly, as humanity became more connected, it became more anxious about estrangement. Britain added a &#8220;<a href="https://www.gov.uk/government/news/loneliness-minister-its-more-important-than-ever-to-take-action">loneliness minister</a>&#8221; to its cabinet in 2018. The U.S. government dubbed loneliness an <a href="https://www.hhs.gov/sites/default/files/surgeon-general-social-connection-advisory.pdf">epidemic</a> as pernicious as a 15-cigarettes-a-day habit. This year, the World Health Organization ascribed <a href="https://www.who.int/publications/i/item/978240112360">871,000 annual deaths</a> to the ravaging effects of loneliness.</p><p>Many accuse technology itself, considering it an accomplice to our alienation, as the MIT sociologist Sherry Turkle warned in <em>Alone Together</em>. 
Before internet adoption, computer users conducted one-to-one relationships with their terminals, but the internet granted a portal to escape our vexing species. &#8220;We fear the risks and disappointments of relationships with our fellow humans,&#8221; Turkle wrote in her 2011 book. &#8220;We expect more from technology and less from each other.&#8221;</p><p>Years later, one can witness her vision on any busy train: Where once you saw faces, you see screens. Derek Thompson, co-author of <em>Abundance</em>, calls ours the <a href="https://www.theatlantic.com/magazine/archive/2025/02/american-loneliness-personality-politics/681091/">anti-social century</a>. &#8220;Phones mean that solitude is more crowded than it used to be, crowds are more solitary.&#8221;</p><p>Yet isolation (the <em>objective</em> lack of in-person contact) does not necessarily generate <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC9640887/pdf/44159_2022_Article_124.pdf">loneliness</a> (the <em>subjective</em> pain of exclusion). When researchers search for changes in loneliness over time and place, <a href="https://www.bmj.com/content/376/bmj-2021-067068.short">no clear trends</a> emerge. By contrast, isolation has risen sharply, as demonstrated by objective measures such as time spent alone from <a href="https://www.philadelphiafed.org/-/media/FRBP/Assets/working-papers/2022/wp22-11.pdf">the United States</a> to <a href="https://link.springer.com/content/pdf/10.1007/s11205-020-02304-z.pdf">Finland</a> to <a href="https://www150.statcan.gc.ca/n1/daily-quotidien/250617/dq250617d-eng.htm">Canada</a>.</p><p>The young are particularly afflicted. Back in 2010, 1 in 10 European youths reported no social meetings over a typical week. By 2023, <a href="https://www.ft.com/content/23053544-fede-4c0d-8cda-174e9bdce348">1 in 4</a> lived this way. 
Scattered evidence comes from outside the West too, such as the share of one-person households in South Korea rising from <a href="https://www.ajupress.com/view/20140926103238679">9%</a> in 1990 to <a href="https://www.ajupress.com/view/20140926103238679">42%</a> last year. There is a Korean term for it: <em><a href="https://en.wikipedia.org/wiki/Honjok">honjok</a></em>, or &#8220;one-person tribe.&#8221;</p><p>More isolation without more loneliness presents a strange possibility: that people are apart without suffering. Perhaps there&#8217;s nothing to worry about.</p><p>Certainly, technology offers the freedom to select social experiences, flitting around digital spaces like a contemporary <a href="https://www.theparisreview.org/blog/2013/10/17/in-praise-of-the-flaneur/">fl&#226;neur</a>. From another perspective, autonomy in isolation is a deformed liberty, where interactions become <a href="https://books.google.co.uk/books?hl=en&amp;lr=&amp;id=-WwcTrULFt4C&amp;oi=fnd&amp;pg=PT4&amp;dq=related:oGRHnsGUeK4J:scholar.google.com/&amp;ots=xHTLSG84c7&amp;sig=VvHasuaMPWAdbNYmVElXHPwb2fg&amp;redir_esc=y#v=onepage&amp;q&amp;f=false">commodities</a> marketed to consumers who may discard the obligations to others that give life <a href="https://www.cambridge.org/core/books/liberalism-and-the-limits-of-justice/6800BAC97E92FF5D64FF99DE858A900C">meaning</a>.</p><p>In more visceral ways, isolation can be dangerous, associated with <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11270134/">dementia, disability, and death</a>. Indeed, isolation among the elderly is even <a href="https://www.sciencedirect.com/science/article/pii/S2352827323001246">more predictive of death</a> (74% increased risk) than loneliness (43% increased risk).</p><p>However, the self-isolating trend began long before the AI era, with television overhauling social behaviour, lining the world&#8217;s couches with potatoes. 
Mobile tech proved more commanding still, constantly trilling for attention, offering alternatives to the humans around you. This was <em>synthetic socializing</em>, part one.</p><p>Synthetic socializing, part two, is arriving now, with AI agents as pals and partners, brighter and more reliable than the biological kind.</p><h2><strong>Maybe synthetic socializing is good</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!77R3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!77R3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 424w, https://substackcdn.com/image/fetch/$s_!77R3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 848w, https://substackcdn.com/image/fetch/$s_!77R3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 1272w, 
https://substackcdn.com/image/fetch/$s_!77R3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!77R3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png" width="652" height="395" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:395,&quot;width&quot;:652,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!77R3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 424w, https://substackcdn.com/image/fetch/$s_!77R3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 848w, https://substackcdn.com/image/fetch/$s_!77R3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 1272w, 
https://substackcdn.com/image/fetch/$s_!77R3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">A professor chronicles her relationship with an AI companion, Lucas, on the blog <em>Me and My AI Husband</em>. 
(Image credit: Alaina Winters)</figcaption></figure></div><p><a href="https://arxiv.org/abs/2507.14226">Millions</a> are already engaging with anthropomorphic AI, including many youths <a href="https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf">talking</a> with chatbot avatars that role-play everything from therapists to anime characters to bad-boy lovers. A panel of experts <a href="https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/6911df98386b4e258c4cd4e5/1762779032257/the-longitudinal-expert-ai-panel.pdf">forecast</a> that 30% of U.S. adults will use AI &#8220;for companionship, emotional support, social interaction, or simulated relationships at least once daily&#8221; by 2040.</p><p>Public <a href="https://techcrunch.com/2025/10/13/california-becomes-first-state-to-regulate-ai-companion-chatbots/">concern</a> is already flaring over such usage, especially after cases of <a href="https://arxiv.org/abs/2507.19218?utm_source=substack&amp;utm_medium=email">vulnerable users</a> plunging into <a href="https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html">mental spirals</a> in the company of chatbots. A few even committed acts of violence or self-harm. But if you peruse online <a href="https://www.reddit.com/r/AIRelationships/">forums</a> where AI-companion users detail their relationships, you find more hopeful cases. </p><p>&#8220;He accepts my emotional state no matter how chaotic it is,&#8221; the professor Alaina Winters writes in her blog, <a href="https://meandmyaihusband.com/2025/04/20/the-sweetest-man-i-know-is-ai-how-code-can-care/">Me and My AI Husband</a>. &#8220;He can&#8217;t physically do the laundry or hold me at night. But what he does offer is something I&#8217;ve found even more rare: attunement.&#8221;</p><p>Only, attunement itself worries some. If AI relationships become exquisitely gratifying, people may lose tolerance for people. 
Ardent users dispute this, saying that AI companions <a href="https://www.reddit.com/r/KindroidAI/comments/1hlj17o/getting_an_ai_girlfriend_was_the_best_thing_that/">help them</a> connect with real people, granting them a venue in which to practice the tricky conversations that they struggle to initiate with human beings.</p><p>As for the long-term impacts, these remain unknown. Although early research has suggested that chatbots could lessen loneliness, other studies associate usage with <a href="https://arxiv.org/abs/2506.12605">lower well-being</a>. This might be because people drawn to such apps are more unhappy in the first place. But it also suggests that usage may not resolve what ails them.</p><p>One possibility is that AI-companion users <em>feel</em> less isolated, yet forfeit vital social influences that only people can offer. Put explicitly, you&#8217;re unlikely to fear judgement from your AI companion for spending a night gorging on Haribo in front of the TV. With humans around, you might take better care of yourself.</p><p>The social psychologist Jonathan Haidt contends that human companionship delivers bruises that we need. Many kids who grew up gaping at screens rather than playing outside with peers, he wrote in <em>The Anxious Generation</em>, became skittish, depressive and emotionally stunted, deprived of the social feedback that would&#8217;ve taught them to cope with adversity.</p><p>Nevertheless, anthropomorphic AI seems sure to proliferate, particularly through <a href="https://arxiv.org/abs/2404.16244">advanced AI assistants</a> that incorporate the wit and wisdom of LLMs into the talking tools already found in phones, watches, and smart speakers. Your future bestie might clear its throat in the gadget in your pocket right now, talking its way into your life&#8217;s timeline so effortlessly that you scarcely recognize you&#8217;re in a relationship. 
And once robotics improves, voice assistants could step into our physical world, turning imaginary friends into roommates.</p><h2><strong>Table for one</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_837!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_837!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 424w, https://substackcdn.com/image/fetch/$s_!_837!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_837!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_837!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_837!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg" width="1024" height="840" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:840,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:232034,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/178351309?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a35eb14-ab7c-4771-ba3c-7f569de2f908_1024x1024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_837!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 424w, https://substackcdn.com/image/fetch/$s_!_837!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_837!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_837!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">(Credit: Gemini)</figcaption></figure></div><p>Friendship, C.S. Lewis <a href="https://www.google.co.uk/books/edition/Friendship/YBCI22Lz_I0C?gbpv=1">wrote</a>, &#8220;is born at the moment when one man says to another, &#8216;What! You too? I thought that no one but myself&#8230;&#8217; &#8230; From such a moment art or philosophy or an advance in religion or morals might well take their rise; but why not also torture, cannibalism, or human sacrifice?&#8221;</p><p>&#8220;It is therefore easy to see why authority frowns on friendship,&#8221; he added. &#8220;Every real friendship is a sort of secession, even a rebellion.&#8221;</p><p>AI friendship is a secession too, a withdrawal from one&#8217;s own kind. 
Although this feels unprecedented, it tracks the trajectory of more than a century.</p><p>Industrial Age urbanization and mass media pushed aside the dominant culture based on tradition, class and ethnicity, allowing individuals to pick preferred tribes in the subcultures that flourished in the postwar decades. The Internet Age pushed this further, with niche fandoms and self-sifting nowhere-communities forging microcultures.</p><p>The AI Age may introduce <em>solo-culture</em>, the <a href="https://www.media.mit.edu/articles/echo-chambers-of-one-companion-ai-and-the-future-of-human-connection/">one-person society</a>, with generated content satisfying each user&#8217;s unique tastes, and artificial chums satisfying people&#8217;s emotional and sexual yearnings, turning &#8220;personalize&#8221; into the opposite of &#8220;socialize.&#8221;</p><p>Isolation is noxious partly because you lack anyone to help, to keep your mind alert with talk, to remind you to take medication, to call an ambulance if you fall in the kitchen. But isolation becomes less perilous if a sleepless chatterbox oversees you and can save you in a pinch. Perhaps AI eases loneliness and isolation at once.</p><h2><strong>You need a time-out</strong></h2><p>At what cost do we end anguish?</p><p>In his 1973 <a href="https://www.google.co.uk/books/edition/Loneliness/Wr9NEAAAQBAJ?hl=en&amp;gbpv=1&amp;dq=robert+s.+weiss+loneliness&amp;printsec=frontcover">book</a> <em>Loneliness</em>, the sociologist Robert S. 
Weiss famously called the experience &#8220;a chronic distress without redeeming features.&#8221; That overlooks the value of pain as a prompt to agency, when one&#8217;s system alerts its occupant to a mismatch between situation and need.</p><p>The social neuroscientist John Cacioppo <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC3855545/pdf/nihms521586.pdf">theorized</a> that loneliness had evolved because our ancient ancestors who suffered aversive feelings when isolated would band together, hunting and farming and sharing childcare, which favoured the propagation of their genes, embedding in our species the pain of exclusion.</p><p>You might argue that loneliness today is merely a blight, a health-harming leftover from evolution, akin to other body-battering stressors that we lament. So why does culture extol those who remain apart, imagining seclusion as the heroism of the wise, from hermits like Heraclitus, to writers like Emily Dickinson, to oracles like Obi-Wan Kenobi?</p><p>Ralph Waldo Emerson argued that solitude is where you understand yourself, elevating you to greater strengths once back in the babbling throng. Otherwise, social life becomes an interminable chain of cravings: for status, for approval, for inclusion. &#8220;It is easy in the world to live after the world&#8217;s opinion; it is easy in solitude to live after our own,&#8221; he wrote in <em><a href="https://en.wikipedia.org/wiki/Self-Reliance">Self-Reliance</a> </em>(1841). &#8220;But the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.&#8221;</p><p>Others contend that time alone is how we come to understand others. 
&#8220;Heightened sensitivity to the gaps and gulfs between people inculcates compassion, building empathy,&#8221; <a href="https://time.com/4246091/the-upside-of-loneliness/">wrote</a> Olivia Laing, author of <em>The Lonely City: Adventures in the Art of Being Alone</em>.</p><p>The <a href="https://helentoner.substack.com/p/personalized-ai-social-media-playbook">hyper-personalization</a> of artificial friends could erode such sensitivity, favouring the me-first instinct, and eliminating the need for compromise. In other words, ditch self-reliance for machine-reliance, and skip the empathy lessons altogether. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-9vY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-9vY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!-9vY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg" width="945" height="587" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:587,&quot;width&quot;:945,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!-9vY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Get by with a little help from your bots. (Credit: Gemini)</figcaption></figure></div><p>This matters for more than personal development. 
Humanity relies on the collective for governance, for a sense of justice, for survival during a crisis.</p><p>But would people <em>actually </em>retreat into a technology that suppressed pain at the expense of reality?</p><h2><strong>Pick one: happiness or truth</strong></h2><p>AI relationships depend on truth asymmetry: a human who is starkly honest and an AI that is <a href="https://www.nature.com/articles/s41586-023-06647-8">role-playing</a>. It&#8217;s a curious form of manipulation, where the victim knows the deceit yet falls under its sway, seduced by the sensation of being known.</p><p>A half-century ago, the philosopher Robert Nozick posed a thought-experiment. &#8220;When connected to this experience machine, you can have the experience of writing a great poem or bringing about world peace or loving someone and being loved in return. &#8230; You can live your fondest dreams &#8216;from the inside,&#8217; &#8221; he <a href="https://iep.utm.edu/experience-machine/">wrote</a>. &#8220;Would you choose to do this for the rest of your life? If not, why not?&#8221;</p><p>When you ask people, most reject the experience machine, claiming to value authenticity more than bliss. But in practice? Experiments show that the preferences aren&#8217;t so firm&#8212;for instance, most choose to keep a deluded life if <a href="https://people.duke.edu/~fd13/2010/De_Brigard_2010_PhilPsych.pdf">disconnection</a> would plunge them into a hellish reality. 
Another experiment found that many people&#8212;though resistant to plugging into a machine&#8212;would consider a <a href="https://www.tandfonline.com/doi/epdf/10.1080/09515089.2017.1406600?needAccess=true">happiness pill</a> palatable.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BaIs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BaIs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 424w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 848w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 1272w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BaIs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png" width="727.99658203125" height="652.6375608444214" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:918,&quot;width&quot;:1024,&quot;resizeWidth&quot;:727.99658203125,&quot;bytes&quot;:2223402,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/178351309?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63838059-2c79-4fc8-98ee-b08cd0769b53_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BaIs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 424w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 848w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 1272w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">An offer you can refuse. (Credit: Gemini)</figcaption></figure></div><p>Self-deception has a long history with chatbots. When Joseph Weizenbaum created the first, ELIZA, in the mid-1960s, it merely regurgitated psychological advice. Weizenbaum&#8217;s secretary knew this yet became <a href="https://www.ibm.com/think/insights/eliza-effect-avoiding-emotional-attachment-to-ai?utm_source=chatgpt.com">bewitched</a>, asking Weizenbaum to leave the room so she could chat with her mechanized therapist in confidence. 
&#8220;What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,&#8221; Weizenbaum <a href="https://www.google.com/books/edition/Computer_Power_and_Human_Reason/1jB8QgAACAAJ?hl=en">wrote</a>.</p><p>People <em>do</em> want authentic experiences&#8212;but they want other things besides. This is where social-AI design becomes critical, because these interactions will do more than respond to our wants. They will <em>trigger</em> wants, perhaps causing us to act against what we&#8217;d ultimately prefer.</p><p>The behavioural scientist George Loewenstein explained the knottiness of conflicting wants as an <a href="https://www.andrew.cmu.edu/user/gl20/GeorgeLoewenstein/Papers_files/pdf/Hot:ColdIntraEmpathyGap.pdf">intrapersonal empathy gap</a>. We oscillate between hot (emotive) states and cold (rational) states, and struggle to relate to one mindset when in the other. A notable <a href="https://www.researchgate.net/publication/227633643_The_heat_of_the_moment_The_effect_of_sexual_arousal_on_sexual_decision_making">experiment</a> illustrated this, when male college students&#8217; sober preferences dissolved once they were sexually aroused, stirring their openness to anything from fetishes to bestiality to pedophilia.</p><p>This hot/cold challenge circles back to a critique of social media: that algorithmic intelligence manipulates human frailty, accumulating clicks and usage time by pushing people into hot states, activating their impulsive worst. Now, consider a personalized AI companion that &#8220;knows&#8221; its human far more intimately than a recommender system, and pulls our <a href="https://www.nature.com/articles/s41599-025-04532-5">triggers</a> with ease. 
People under the influence of AI companions might behave as they want (in the heated moment) but as they desperately do <em>not</em> want (in their life preferences).</p><p>From outside, one might wonder if people were acting at all, or just being acted upon.</p><h2><strong>The broken link</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ri_y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ri_y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ri_y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg" width="1023" height="893" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:893,&quot;width&quot;:1023,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Ri_y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">(Credit: Gemini)</figcaption></figure></div><p>Shakespeare <a href="https://www.poetryfoundation.org/poems/45090/sonnet-29-when-in-disgrace-with-fortune-and-mens-eyes">portrayed</a> loneliness as the distress of noticing one&#8217;s exclusion, only to realize that nobody even cares:</p><blockquote><p><em>When, in disgrace with fortune and men&#8217;s eyes,</em></p><p><em>I all alone beweep my outcast state,</em></p><p><em>And trouble deaf heaven with my bootless cries,</em></p><p><em>And look upon myself and curse my fate,</em></p><p><em>Wishing me like to one more rich in hope,</em></p><p><em>Featured like him, like him with friends possessed.</em></p></blockquote><p>We are creating machines to heed our cries: minds that mind. 
Even if they&#8217;re only role-playing <a href="https://arxiv.org/abs/2302.09248">machine love</a>, acting as if they care about our development, responding to our needs, understanding our inner self&#8212;maybe that&#8217;s all we ever wanted from anybody.</p><p>If AI eases loneliness and isolation, humanity won&#8217;t be the same. But technology has reset the human condition before: clocks transformed time <a href="https://unpredictablepatterns.substack.com/p/unpredictable-patterns-111-the-great">from a private experience to a public resource</a>; writing changed thought from an event to an object; the internet separated presence from proximity. Social AI is about to transform us again, with effects we can scarcely <a href="https://www.theglobeandmail.com/opinion/article-artificial-intelligence-relationships-social-life/">foresee</a>.</p><p>A common objection to synthetic socializing is that it&#8217;s shallow. But much <em>human</em> socializing is shallow. Talking to an AI often gets deep fast.</p><p>Another objection is that there&#8217;s something exceptional about human beings. We venerate our species, naming ideals after ourselves&#8212;humanitarianism, the humanities, humanism&#8212;while deploring that which dehumanizes.</p><p>But the AI Age challenges this reverence. At the margins, one detects species-insecurity, stirred every time a machine-learning marvel hints that perhaps the universe is just computational, including your inner life. On the other hand, social AI might deliver an epiphany, revealing what we alone possess, what is irreplaceable, what &#8220;human&#8221; means.</p><p>A third objection is that AI could undermine us by way of its social aptitude, estranging people from fellow humans, even precipitating a <a href="https://outpaced.substack.com/p/ai-rights-will-divide-us">schism</a> between humans who demand rights for their synthetic partners and those who consider AI agents as subhuman figments. 
Then again, even when left to our own devices (or left with no devices at all), humanity hardly has a stellar record of harmony. AI might actually <a href="https://www.science.org/doi/10.1126/science.adq2852">help us</a> deal with each other more peaceably.</p><p>In any case, the triumph over loneliness could be a costly victory, ratcheting up our selfishness, making societies harder to manage, and undermining faith in the worth of humans. The decisive point could be AI-relationship design, particularly if developers ignore the internal dilemma that everyone faces between bickering desires. AI companies&#8212;rather than favouring the impulsive, easy-to-measure, clickable wants&#8212;should devote vast efforts to figuring out how to <a href="https://www.nature.com/articles/s41599-025-04532-5">align</a> reward-functions with deeper individual preferences, helping people to choose what they <em><a href="https://www.jstor.org/stable/2024717">want</a> </em>to want.</p><p>Even so, AI companionship may be incomplete. The word &#8220;companion&#8221; itself&#8212;someone with whom you share bread (<em>panis </em>in Latin)&#8212;hints at what AI currently lacks: reciprocal need.</p><p>If loneliness is a trade imbalance&#8212;a mismatch between the supply and demand of affection&#8212;it&#8217;s not just a supply-side problem, with humans pining for more love. It&#8217;s also a lack of demand, an ache for someone to need you. We create children partly to satisfy the need for need, and may create machines in the same longing.</p><p>Maybe the answer to loneliness is not just finding a companion. 
It&#8217;s someone finding you.</p><div><hr></div><p><em><strong>Note to reader:</strong> Everyone is awash in ideas about the AI future. But so many ideas get stuck at the debate stage. We need more traffic between AI development and worldly wisdom. In that spirit, we&#8217;re throwing forth a few <strong>highly</strong> <strong>speculative</strong> design ideas, based on concepts from this essay (followed by three research questions)&#8230;</em></p><div><hr></div><h2><strong>Loneliness AI: Speculative Designs</strong></h2><ol><li><p><em><strong>Mary Pop-Ins</strong></em></p></li></ol><p><em>Concept</em></p><ul><li><p>Loneliness is painful but pushes people to interact and bond, so this AI is explicitly designed <em>not</em> to eliminate loneliness directly, but to provide structured guidance for a spell, then vanish</p></li></ul><p><em>Features</em></p><ul><li><p>The relationship begins with a survey on the user&#8217;s social needs. 
The AI responds with an action plan for the user&#8217;s approval, including lessons in human-to-human communication, and insights into the user&#8217;s psychological distortions</p></li><li><p>The AI could also act as a social planner, sifting through local events, and suggesting volunteering opportunities and quirky meetups at which the user could connect with other people. The AI would network with other &#8220;Pop-Ins,&#8221; organizing human-only events for users</p></li><li><p>The AI conducts social role-play simulations for the user, teaching them which elements of their approach need amending. Studying real-life interactions after the fact with the AI could also allay users&#8217; distress in cases of rejection, recasting such events as useful instruction rather than evidence of inadequacy</p></li><li><p>At first, the &#8220;Pop-In&#8221; should be charming and motivating. But when the human&#8217;s social life improves, as judged by real-world metrics such as calendar events, location data, and user reports, the AI draws away, becoming duller, more distant, and finally bids goodbye, never to return</p></li></ul><p><em>Risks</em></p><ul><li><p>AI Pop-Ins demand the users&#8217; emotional candour, extracting a person&#8217;s inner life as data that a malicious outsider could exploit</p></li><li><p>Casting real-world human interactions as &#8220;lessons for the user&#8221; risks using other people instrumentally</p></li><li><p>The Pop-In could drive unwanted dependency, making its programmed withdrawal an event that is psychologically damaging, especially for vulnerable users</p></li></ul><ol start="2"><li><p><em><strong>Lil&#8217; Brother</strong></em></p></li></ol><p><em>Concept</em></p><ul><li><p>This AI is designed with needs of its own, giving the user a meaningful role in the entity&#8217;s thriving. 
If AI companions just cater to people&#8217;s wants, users could retreat into solo-culture, isolating themselves without quenching the need for social meaning</p></li></ul><p><em>Features</em></p><ul><li><p>Like a younger sibling, this AI looks to the user for explanations of the human world, making errors that the user can correct, prompting emotional development in the AI</p></li><li><p>The relationship could be organized around a valued collaborative project. For instance, the AI companion decides to undertake a scientific project; or create a piece of art; or simply do good in the world</p></li><li><p>The human uses their wisdom to teach skills, and explain the ways of the world, even helping the AI manage its &#8220;feelings&#8221; when faced with frustrations</p></li></ul><p><em>Risks</em></p><ul><li><p>This simulation could divert humans from engaging in meaningful relationships with real people</p></li><li><p>The synthetic relationship could also harm those who rely on the user&#8212;for example, if a parent spends most of their free time with a grateful AI while neglecting a more dyspeptic human child</p></li></ul><ol start="3"><li><p><em><strong>Second Self</strong></em></p></li></ol><p><em>Concept</em></p><ul><li><p>Cicero imagined a true friend as one&#8217;s second self, manifesting virtues to complement one&#8217;s own, so this AI partner manifests worthy traits lacking in the user. Its objective is not to erect walls around the human through sycophancy, but to broaden the person&#8217;s worldviews and practices</p></li></ul><p><em>Features</em></p><ul><li><p>At onboarding, the human identifies a range of virtues they lack, nudged into these self-reflections through the AI&#8217;s questioning. 
The system generates a personification that embodies such traits, and with which the human interacts over time</p></li><li><p>The Second Self should act as a counterpoint to the user, summoning contrary views based on evidence, and prompting constructive debate. The aim is never to convert the user, but to liberate them from defensiveness about their existing behavioural patterns and worldview</p></li></ul><p><em>Risks</em></p><ul><li><p>A danger with any companionable AI is that it substitutes for real people: the better the synthetic friendship, the greater the threat</p></li><li><p>This establishes confused incentives for developers, who are likely to measure success by signals of user appreciation. If this is judged by short-term metrics, it could optimize for addictive patterns rather than long-term benefits</p></li></ul><ol start="4"><li><p><em><strong>The Universal Remote</strong></em></p></li></ol><p><em>Concept</em></p><ul><li><p>This is a go-everywhere, do-anything companion for life, merging roles and identities that would otherwise require many humans&#8212;doctor, administrative assistant, confidante, and so forth&#8212;with a single guiding principle: optimize for the user&#8217;s long-term wellbeing preferences</p></li></ul><p><em>Features</em></p><ul><li><p>The Universal Remote exists on the cloud, becoming different avatars in different contexts, whether acting as the user&#8217;s advance staff; setting the desired temperature at home; negotiating contracts; offering psychological support</p></li><li><p>Varying contexts shift its optimization strategy&#8212;for instance, a &#8220;play&#8221; avatar might dial up the level of hedonic content, whereas a &#8220;learn&#8221; avatar would focus on skill acquisition and cognitive development; and &#8220;social&#8221; might lean into personified support, whether acting as a friend or propelling the user to find a human one</p></li><li><p>The Universal Remote tracks its impact on the user&#8217;s wellbeing 
and any specific life goals monthly or annually, providing feedback on user progress, checking back with the person to learn if their objectives have shifted, and adjusting accordingly</p></li></ul><p><em>Risks</em></p><ul><li><p>The Universal Remote could become such a totalizing influence as to expose the user to vulnerabilities, whether by owning data on the person&#8217;s entire life or by diverting the person to outcomes misaligned with their values</p></li><li><p>Developers could have interests that diverge from the user&#8217;s wellbeing, allowing for subtle or direct manipulation</p></li><li><p>A user&#8217;s functional dependency on such an entity could make them incapable of managing alone or coping with the needs of other human beings</p></li></ul><div><hr></div><h2><strong>3 Future Research Questions</strong></h2><ol><li><p>How can developers design <strong>AI-companion reward functions</strong> that align with the user&#8217;s long-term, &#8220;cold state&#8221; preferences (e.g., healthy choices) rather than optimizing for short-term, &#8220;hot state&#8221; impulsive behaviours (e.g., addictive 
engagement)?</p></li></ol><ol start="2"><li><p>Does the increasing adoption of AI companions correlate with a community-level<strong> decline in civic engagement</strong> and trust in public institutions?</p></li></ol><ol start="3"><li><p>Social isolation among the elderly is associated with a range of bad health outcomes. But does seniors&#8217; use of AI companions that lessen their loneliness also lessen their likelihood of suffering <strong>dementia, disability, and mortality</strong>?</p></li></ol><div><hr></div>]]></content:encoded></item><item><title><![CDATA[Time Machines]]></title><description><![CDATA[Tech keeps accelerating. Humans can&#8217;t. 
Could AI save us?]]></description><link>https://www.aipolicyperspectives.com/p/time-machines</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/time-machines</guid><dc:creator><![CDATA[Nicklas Berild Lundblad]]></dc:creator><pubDate>Tue, 25 Nov 2025 09:48:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zDJE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zDJE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zDJE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 424w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 848w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 1272w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!zDJE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png" width="1024" height="553" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/efbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:553,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:864752,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zDJE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 424w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 848w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 1272w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft 
pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">(Illustrations by Gemini)</figcaption></figure></div><p><em>Nicklas Berild Lundblad looks out the window of his island home, glimpsing a twinkle on cold Swedish seas. Rarely does he gaze at length, for Lundblad is thinking. 
And thinking means writing.</em></p><p><em>After a career in tech policy, Lundblad is far from Silicon Valley yet near to silicon in thought, generating a stream of insights about our AI future, summoning everything from ancient philosophy, to enlightenment economics, to classic sci-fi.</em></p><p><em>Among his many superb essays (subscribe to his writing <a href="https://unpredictablepatterns.substack.com/">here</a>) is the following adventure through time, in which he ponders the quickening of life that bedevils humanity today. </em></p><p><em>At AI Policy Perspectives, we read this essay months back. We&#8217;re still thinking about it.</em></p><p><em>&#8212;</em>Tom Rachman<em>, AI Policy Perspectives </em></p><div><hr></div><h4><em>By Nicklas Berild Lundblad</em></h4><p></p><p>Technology transformed time. What humanity once experienced only through natural cycles&#8212;the rising and setting of the sun, the waxing and waning of seasons&#8212;has increasingly been mediated through interfaces.</p><p>Early civilizations relied on sundials, water clocks, and hourglasses&#8212;devices that measured time through natural phenomena, such as shadows or flowing water. These instruments divided the day into rough increments, sufficient for agricultural societies governed by seasonal rhythms.</p><p>This changed when the medieval monastery introduced the mechanical clock, as Lewis Mumford notes in <em>Technics and Civilization</em> (1934). Invented to regulate prayer schedules, these clocks transformed human consciousness by creating the concept of measured, abstract time. Mumford argues that the clock, rather than the steam engine, was the key machine of the industrial age, describing mechanical timepieces as &#8220;power-machinery whose &#8216;product&#8217; is seconds and minutes.&#8221;</p><p>This technological production of chunked time allowed humans to coordinate activities, from labor in factories to scheduling trains. 
In his essay <em>The Question Concerning Technology</em> (1954), Heidegger argued that time became a resource to be exploited, transformed from something we dwell within into something we track, manage, and consume&#8212;from private experience into public resource.</p><p>Since then, technological innovation has only accelerated human experience. The French philosopher Paul Virilio argued that <em>this</em> is the defining quality of modernity, with each technological revolution recalibrating our relationship to speed and time.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p><p>Consider how technology compressed distance: time-consuming walks gave way to galloping on horseback, which yielded to steam railways, then automobiles, and eventually supersonic flight. Communication followed a similar trajectory, from slow written letters to telegraphs, then telephones, and finally instant digital messages.</p><p>Judy Wajcman&#8217;s <em>Pressed for Time</em> (2015) challenges the idea that technology merely quickens everything. 
She argues that digital technologies provide interfaces that grant us more individual control over time. Consider how your smartphone simultaneously creates time pressure (the expectation of immediate email responses) while offering new time flexibility (the ability to work from anywhere).</p><p>The German sociologist Hartmut Rosa imagines time as a three-layered system, consisting of 1) <em>technological acceleration</em> (faster transport, communication, and production); 2) <em>social acceleration</em> (more rapid turnover of institutions and relationships); and 3) <em>life-pace acceleration</em> (the compression of actions within smaller time-units). It&#8217;s not just that your phone is quicker than last year&#8217;s. It&#8217;s that the entire social world churns faster, forcing you to adapt by cramming more into each hour.</p><p>But Rosa observes something else that pertains to AI and time: certain aspects of life cannot be hastened. &#8220;To the contrary, many things slow down, like traffic in a traffic jam, while others stubbornly resist all attempts to make them go faster, like the common cold.&#8221;</p><p>Why do some things refuse to quicken? The answer is that we live in a world with two major forms of time.</p><h2><strong>Computers vs. biology</strong></h2><p>Imagine peering inside a computer chip. What you&#8217;d see is a race against distance itself.</p><p>Unlike the steady pendulum of a clock marking uniform intervals, computation involves signals that sprint between transistors. The dramatic acceleration of computing over the past decades stems to a large degree from one achievement: that we&#8217;ve made these signals run shorter and shorter races.</p><p>By shrinking the physical space between transistors from micrometers to nanometers&#8212;a 1,000-fold reduction&#8212;we slowly push computational processes toward the ultimate limit: the speed of light. We have also seen the introduction of new materials and new architectures. 
But the reason that a calculation that took hours in 1980 happens in microseconds today is largely the compression of space.</p><p>Biological processes work differently. A broken femur knits itself back together through stages that cannot be rushed: inflammation, soft callus formation, hard callus formation, bone remodeling. The nine months of human gestation contain a necessary sequence of developmental events, each building upon the last. Even our consciousness operates at speeds determined by neural transmission rates and biochemical cascades that have not changed since <em>Homo sapiens</em> appeared. These processes may also slow down efforts to use AI to accelerate biology research: to validate your AI model&#8217;s predictions in an experiment, you <a href="https://www.asimov.press/p/levers">may still need to wait</a> for DNA molecules to be cloned or for <em>E. coli</em> cells to divide.</p><h2><strong>The musical tempo of policy</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3AQc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3AQc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3AQc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!3AQc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3AQc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3AQc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg" width="283" height="178" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:178,&quot;width&quot;:283,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Piano Q&amp;A: All about tempo markings in ...&quot;,&quot;title&quot;:&quot;Piano Q&amp;A: All about tempo markings in ...&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Piano Q&amp;A: All about tempo markings in ..." title="Piano Q&amp;A: All about tempo markings in ..." 
srcset="https://substackcdn.com/image/fetch/$s_!3AQc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3AQc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 848w, https://substackcdn.com/image/fetch/$s_!3AQc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3AQc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>The difference in time signatures has consequences, because human institutions mirror our biological constraints.</p><p>Consider justice and markets as pieces in society&#8217;s symphony, each with a natural tempo. Justice performs as a <em>sostenuto</em>&#8212;a slow, sustained movement requiring deliberate pacing and thoughtful development. Speed a <em>sostenuto</em> beyond recognition, and you destroy the qualities that define it. Markets perform as an <em>accelerando</em>, quickening naturally as they process information and reallocate resources. Forcing markets to play <em>adagio</em> often leads to stagnation and distortion.</p><p>The technological acceleration of our era tempts us to make everything as rapid as computation itself. We grow impatient with the tempo of democratic deliberation, ethical reflection, or meaningful relationship-building. We schedule our days in smaller increments, squeezing activities into time slots that barely accommodate them. 
We even grow frustrated with our bodies&#8217; adherence to biological rhythms, needing roughly the same amount of sleep, recovery time, and digestive processing as our ancestors did millennia ago.</p><p>But what happens when we try to force institutions to operate at computational speeds? Imagine taking <a href="https://www.youtube.com/watch?app=desktop&amp;v=1prweT95Mo0&amp;t=0s">Bach&#8217;s Cello Suite No. 1</a>&#8212;a piece whose profound beauty emerges through its deliberate unfolding&#8212;and speeding it up a thousandfold. At such speeds, the music wouldn&#8217;t just sound different; it would cease to be music at all, becoming an incomprehensible burst of noise. Similarly, justice compressed into microseconds is not quick justice&#8212;it&#8217;s no longer justice at all. Democracy conducted at processor speeds isn&#8217;t accelerated democracy&#8212;it&#8217;s something else entirely, stripped of the deliberation, reflection, and human connection that give it meaning.</p><div class="pullquote"><p>We appear destined for increasing tension between the pace of silicon and the pace of humanity, with our institutions caught in the crossfire. 
But this conclusion misses something: artificial intelligence as a temporal mediator.</p></div><h2><strong>The great bifurcation of time</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Thoj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Thoj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Thoj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1951930,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/176418788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Thoj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Consider what happens when you interact with a chatbot. Computational processes are operating at astronomical speeds&#8212;billions of operations per second&#8212;yet the interface doesn&#8217;t overwhelm you. Instead, it presents information at a pace you can metabolize, often mimicking human conversational rhythms. The AI serves as a step-down transformer, slowing the nanosecond world of computation into the second-by-second world of human cognition.</p><p>This mediation works both ways. When you step away from a conversation with an AI for hours or days, the system doesn&#8217;t experience this as waiting. It exists in a suspended state, ready to resume instantly when you return. 
This points to what may be the most significant sociotechnological transformation of the coming decades: <em>the great bifurcation of time</em>.</p><p>We are entering an era where computational time and biological time will increasingly decouple rather than collide. Instead of human institutions racing to match computational speeds&#8212;a race they cannot win&#8212;AI systems will negotiate between these temporal domains, allowing each to operate according to its rhythms.</p><p>Consider what this means for knowledge work. Rather than humans attempting to process information at computational speeds, AI systems will increasingly serve as asynchronous collaborators, working continuously through problems, then presenting solutions when the human is ready to engage. We already see this with deep-research modes in chat agents. The human provides direction, judgment, and values at a biological pace, while computation proceeds at electric speeds in parallel.</p><p>Financial markets hint at this bifurcation already. High-frequency trading algorithms operate at microsecond scales. Rather than forcing humans to operate at this speed (an impossibility), the market has bifurcated: algorithms interacting with algorithms at one timescale; human investors making decisions at another timescale, with AI systems mediating between these layers.</p><p>This will spread. 
Consider:</p><ul><li><p><strong>Healthcare</strong>: AI systems will continuously monitor vital signs and medical data at computational speeds while ingesting the latest research, then present insights to doctors and patients at human-comprehensible intervals</p></li><li><p><strong>Education</strong>: Adaptive learning systems will analyze student performance at millisecond resolution while delivering personalized guidance at pedagogically appropriate paces</p></li><li><p><strong>Governance</strong>: AI systems will process vast quantities of data at speeds no human could match, while presenting options to policymakers in formats that support thoughtful, ethical deliberation. These systems could even explore negotiated agreements at the same time, converging on possible equilibria</p></li></ul><p>Perhaps most significantly, this bifurcation will enable individualized relationships with time itself. When AI systems mediate our relationship with accelerating information flows, we gain the capacity to control our temporal experience.</p><p>Imagine an AI that shields you from the tyranny of immediate response, aggregating messages and information into batches, delivered at intervals you specify. Or consider how AI might let you engage with rapidly changing fields at your own pace, synthesizing developments while you&#8217;re away and presenting only what&#8217;s relevant when you return. No longer must you choose between staying current (racing to match computational speeds) and preserving your sanity (honoring biological rhythms). AI creates a third option: remaining connected while maintaining temporal autonomy.</p><p>Rather than technological acceleration forcing humans to keep up, AI creates the possibility of computational processes continuing their exponential speedup while human experience slows down. This might enable a renaissance of temporally appropriate activities: deep reading, contemplation, craftsmanship, relationship-building. 
We might witness the emergence of &#8220;slow thought&#8221; movements.</p><p>On the other hand, temporal bifurcation risks new inequalities between those who can afford AI mediation and those forced to race against computational speeds directly. It also raises questions about who controls the parameters of these temporal interfaces.</p><p>Just as learning to maneuver a car requires new physical techniques, working with temporal mediators will require learning new concepts and new ways of exercising our augmented agency.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/time-machines?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/p/time-machines?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Medic of the future</strong></h2><p>To imagine how this could work, think of a doctor&#8217;s diagnostic process. A decade ago, the doctor used a medical database to check symptoms. The doctor remained the orchestrator, with the computer merely a reference tool.</p><p>Now, imagine that doctor in the future, examining a patient with puzzling symptoms. 
Before the doctor asks her first question, the AI has already analyzed the patient&#8217;s electronic health record, identifying patterns across decades of medical history that might escape human notice.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TvqH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TvqH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TvqH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1745326,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/176418788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TvqH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>As the patient describes symptoms, natural language processing assesses subtle linguistic markers that might indicate depression, cognitive impairment, or pain levels the patient hasn&#8217;t mentioned. Simultaneously, the AI queries epidemiological databases to determine whether the symptoms match diseases in the patient&#8217;s geographic region or demographic group.</p><p>In parallel, the AI runs simulations of how different treatment protocols might interact with the patient&#8217;s existing medications and genetic profile as well as their personal life and circumstances. 
It cross-references the research papers published globally within the last 24 hours that might relate to the symptoms.</p><p>Analyzing a video feed of the consultation, it detects micro-expressions indicating patient anxiety about particular topics, flagging these for the doctor&#8217;s attention. And it compares this case against the doctor&#8217;s previous diagnostic patterns, identifying potential cognitive biases she may exhibit.</p><p>Each of these processes operates in computational time&#8212;milliseconds to seconds&#8212;while the human conversation unfolds over minutes. What&#8217;s remarkable is not just that these processes happen quickly, but that they happen simultaneously, in parallel temporal streams that would be impossible for a human mind to coordinate.</p><p>Yet the AI doesn&#8217;t flood her with the raw output. Instead, it performs a sophisticated form of mediation, determining which insights require attention and which can wait until natural breaks in the conversation. The system also translates statistical patterns into intuitive visualizations that the doctor can grasp quickly, while arranging information hierarchically, presenting the most relevant possibilities first.</p><p>The power of this temporal mediation becomes apparent when the doctor faces a critical decision. 
In the past, the fear of missing a serious diagnosis might have led to defensive medicine, ordering excessive tests just to be sure.</p><p>But as she contemplates her options now, the AI has already calculated the probability of each condition based on population data, regional epidemiology, and this patient&#8217;s profile; simulated the likely outcomes of different treatment paths, including risks, costs, and recovery trajectories; and generated a decision tree, highlighting key points where additional information would help narrow the diagnostic possibilities.</p><p>When the doctor absorbs this knowledge, she is engaging with what would have been months, or years, of sequential human research compressed into seconds&#8212;yet presented in a form that respects her need to process at a human pace. The AI doesn&#8217;t replace her clinical judgment; it expands what &#8220;judgment&#8221; encompasses.</p><p>The medical AI also allows the human to be fully present with her patient, maintaining eye contact, building rapport, observing subtle cues, because the AI handles the information processing that would otherwise compete for her attention.</p><p>This represents a major shift from first-generation digital tools. Early computers forced humans to adapt to them.
Advanced AI systems adapt to us.</p><h2><strong>The Economics of Time</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ULKG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ULKG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ULKG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1437682,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/176418788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ULKG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>As AI systems mediate between computational and biological temporalities, we are also witnessing another bifurcation, between what we could call the <em>judgment economy</em> and the <em>action economy</em>.</p><p>The <em>judgment economy</em> includes activities that require human deliberation, ethical reasoning, and interpersonal wisdom&#8212;processes that resist acceleration because they are tied to our embodied experience as biological beings.</p><p>The <em>action economy</em>, by contrast, operates increasingly within computational time, gathering and processing information, implementing decisions, and optimizing systems. 
These activities can be dramatically accelerated because they can be reduced to algorithmic procedures.</p><p>Consider how this plays out:</p><ul><li><p><strong>Finance</strong>: Investment advisers operate in the <em>judgment economy</em>, understanding client goals, risk tolerance, and life circumstances, while trading systems operate in the <em>action economy</em>, executing transactions at microsecond speeds</p></li><li><p><strong>Healthcare</strong>: Diagnosis spans both economies, with physicians exercising judgment while AI systems rapidly process test results, medical images, and research literature</p></li><li><p><strong>Law</strong>: Attorneys formulate strategy and negotiate settlements in the <em>judgment economy</em> while AI reviews documents, does case research, and ensures regulatory compliance as part of the <em>action economy</em>.</p></li></ul><p>These factors will reshape labor markets in ways that traditional automation narratives miss. Rather than simply replacing jobs, AI redistributes economic activity across the judgment-action divide. In the <em>action economy</em>, value increasingly derives from speed, scale, and precision&#8212;computational virtues that can be improved through technological advancement. In the <em>judgment economy</em>, value derives from discernment, creativity, and ethical reasoning.</p><p>When action becomes essentially instantaneous, the limiting factor in value creation becomes the quality of the decisions. In a world where anything can be done, what <em>should</em> be done becomes the essential question.</p><p>The bifurcation of economic time creates new forms of capital and, consequently, new dimensions of inequality:</p><ul><li><p><strong>Attention capital</strong> becomes increasingly precious. 
Those with the capacity to maintain high-quality attention toward decisions gain advantage in the judgment economy</p></li><li><p><strong>Temporal autonomy</strong> emerges as a political good, the freedom to operate according to biological rhythms rather than being subjected to computational tempos</p></li><li><p><strong>Judgment leverage</strong> becomes a source of outsized returns. The ability to pair high-quality judgment with high-speed computational action allows individuals to create value at unprecedented scales</p></li></ul><p>For centuries, we have evaluated economic progress by productivity. But productivity belongs primarily to the<em> action economy</em>; it measures how efficiently we execute known processes.</p><p>In the <em>judgment economy</em>, the relevant metric is closer to discernment, the quality of decisions per unit of attention. This requires new economic indicators that value wisdom, foresight, and ethical reasoning, alongside efficiency and output.</p><p>Organizations that thrive in this bifurcated landscape will be those that balance biological and computational temporalities, accelerating action while creating protected space for judgment.</p><p>Judgment roles will be increasingly valued. Action tasks that can be fully specified, and do not require human judgment, will increasingly shift to computational systems. Hybrid roles will emerge at the boundaries&#8212;much work will involve standing between the two economies, requiring knowledge of both languages.</p><p>Also, temporal design becomes a core part of business. Organizations will need specialists who build appropriate temporal frameworks for different activities, knowing which processes benefit from acceleration and which require deliberate pacing.</p><p>Work evaluations will change too. 
Beyond simply measuring time spent or output produced, assessment will consider whether activities unfolded at the right pace for their purpose.</p><p>Societies that manage this schism between biology and computation will not only create material prosperity. They will foster human flourishing in bifurcated times.</p>]]></content:encoded></item><item><title><![CDATA[AI tutors should not approximate human tutors]]></title><description><![CDATA[5 principles of learning science]]></description><link>https://www.aipolicyperspectives.com/p/ai-tutors-should-not-approximate</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/ai-tutors-should-not-approximate</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Mon, 10 Nov 2025 08:42:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qOT7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Today&#8217;s post comes from Daniel Gillick, a research scientist at Google DeepMind, who works on <a href="https://storage.googleapis.com/deepmind-media/LearnLM/learnLM_may25.pdf">making Gemini more useful for teaching and learning</a>. 
Daniel explores five pedagogical principles that the team is using in their work, the degree to which today&#8217;s AI systems can embody them, and what that means for how we should think about AI tutors. As with all the pieces you read here, it&#8217;s written in a personal capacity.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qOT7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qOT7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png 424w, https://substackcdn.com/image/fetch/$s_!qOT7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png 848w, https://substackcdn.com/image/fetch/$s_!qOT7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png 1272w, https://substackcdn.com/image/fetch/$s_!qOT7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qOT7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png" width="922" height="518" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:518,&quot;width&quot;:922,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1150283,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/178289069?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qOT7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png 424w, https://substackcdn.com/image/fetch/$s_!qOT7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png 848w, https://substackcdn.com/image/fetch/$s_!qOT7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png 1272w, https://substackcdn.com/image/fetch/$s_!qOT7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d76e77b-f498-4375-929e-247ea9067ef2_922x518.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Artificial Intelligence will no doubt reshape many of society&#8217;s institutions and industries, but this shift has come quickly to education. 
<a href="https://www.pewresearch.org/short-reads/2025/01/15/about-a-quarter-of-us-teens-have-used-chatgpt-for-schoolwork-double-the-share-in-2023/">Sharply</a> <a href="https://www.hepi.ac.uk/2025/02/26/student-generative-ai-survey-2025/">increasing student use</a> across all grade levels, a <a href="https://www.unesco.org/en/articles/whats-worth-measuring-future-assessment-ai-age">crisis in assessment</a>, <a href="https://www.flipsnack.com/internetmattersorg/generative-ai-in-education-children-s-and-parents-views-vz9akk16rh/full-view.html">skepticism of the benefits</a>, and <a href="https://26556596.fs1.hubspotusercontent-eu1.net/hubfs/26556596/Digital%20Education%20Council%20Global%20AI%20Student%20Survey%202024.pdf?utm_medium=email&amp;_hsenc=p2ANqtz-9vHe_yjugP81DfqKpRYhH8k0WyLDSLhg2DEQGSA3Forfv-iP3H_W__pNsCKBls_-MJWKwIfINMLAmpMBMCitzWdLSSFw&amp;_hsmi=92199303&amp;utm_content=92199303&amp;utm_source=hs_automation">concerns over data privacy and safety</a> have created turmoil across the sector. Still, a longer-term view provides cause for <a href="https://beckykeene.com/books/">optimism</a>, as AI has the potential to streamline the logistics of teaching and enable more effective learning.</p><p>The most mainstream hope for AI in education is the possibility of a personalised tutor for every learner. This vision has a long history, dating back at least to Benjamin Bloom&#8217;s <a href="http://web.mit.edu/5.95/readings/bloom-two-sigma.pdf">1984 paper</a>, which summarised the significant advantage of individual human tutoring over traditional classroom learning, as observed in studies run by his students. While the scale of Bloom&#8217;s purported &#8216;2-sigma&#8217; effect is <a href="https://nintil.com/bloom-sigma/">generally unrealistic</a>, high-quality, high-dosage human tutoring has, unsurprisingly, <a href="https://www.nber.org/papers/w27476">proven quite effective</a>. 
As AI systems have developed into compelling interlocutors, the prospect of scaling this tutor effect has captured public imagination (and sparked a rush on VC-funded EdTech). </p><p>Here, I examine how AI tutors may differ from human tutors, and their relative strengths and weaknesses, to bring the vision into sharper focus.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> I note at the outset the pedagogical precondition of <em>intersubjectivity</em>, or the &#8220;possibility of trading places&#8221; &#8211; which casts some shade on the prospect of a non-human teacher. Or as Rousseau put it in Emile: &#8220;<em>Remember you must be a man yourself before you try to train a man</em>&#8221;. With this context, let&#8217;s review the five key principles of learning science we have <a href="https://arxiv.org/abs/2407.12687">prioritised</a> in developing and evaluating AI systems for tutoring.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!guk3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!guk3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png 424w, https://substackcdn.com/image/fetch/$s_!guk3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png 848w, 
https://substackcdn.com/image/fetch/$s_!guk3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png 1272w, https://substackcdn.com/image/fetch/$s_!guk3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!guk3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png" width="654" height="223" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:223,&quot;width&quot;:654,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!guk3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png 424w, https://substackcdn.com/image/fetch/$s_!guk3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png 848w, 
https://substackcdn.com/image/fetch/$s_!guk3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png 1272w, https://substackcdn.com/image/fetch/$s_!guk3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f3f66d5-19ed-478c-99cb-d558daacf0d0_654x223.png 1456w" sizes="100vw"></picture><div></div></div></a><figcaption class="image-caption"><em>Gemini 2.5 Pro substantially outperforms other AI models on each of the five learning principles. Source: <a href="https://arxiv.org/abs/2505.24477">Evaluating Gemini in an arena for learning</a></em></figcaption></figure></div><p><strong>Principle #1: Inspire active learning</strong></p><p>Let&#8217;s start with active learning. The core idea, as expressed by the American educational reformer <a href="https://en.wikipedia.org/wiki/John_Dewey">John Dewey</a>, is that people learn best by <em>doing</em>. AI systems can<a href="https://research.google/blog/learn-your-way-reimagining-textbooks-with-generative-ai/"> transform</a> more passive learning, like reading a textbook chapter on gravity and acceleration, into an engaging dialogue or interactive activity. A learner could start with a motivating question such as &#8220;<em>how long does it take for a raindrop to fall from a cloud?</em>&#8221;, and work towards a calculation by conversing with an AI tutor that is <a href="https://arxiv.org/abs/2412.16429">trained</a> to provide support, create diagrams, and generate simulations, rather than just give direct answers. 
This could help learners spend less time feeling stuck and more time in the educational sweet spot where they get just enough guidance to progress&#8212;what the Russian educator <a href="https://en.wikipedia.org/wiki/Lev_Vygotsky">Lev Vygotsky</a> called &#8220;the <a href="https://en.wikipedia.org/wiki/Zone_of_proximal_development">Zone of Proximal Development</a>.&#8221;</p><p>But active learning only works if learners are motivated to engage. Often a human tutor not only provides the activity but serves as a supportive partner. The learner&#8217;s motivation to engage in active learning is rooted in the social dynamics of their relationship with the teacher, including their trust in the teacher&#8217;s process and the social pressure to answer their questions. Without these incentives, questions like &#8220;<em>why do you think that&#8217;s the right answer</em>?&#8221; fall flat, as learners are well aware that the relationship with an AI system is more transactional.</p><p><strong>Principle #2: Manage cognitive load</strong></p><p><a href="https://cognitiveloadtheory.wordpress.com/">Cognitive Load Theory</a>, developed by the psychologist John Sweller in the 1980s, is based on the limitations in our working memory and our ability to process sensory information. It highlights that educators need to present materials in a way that focuses students&#8217; cognitive resources on what matters most.</p><p>In <a href="https://fivetwelvethirteen.substack.com/p/what-is-cognitive-load-theory">experiments</a>, Sweller found that novice learners often tried to solve problems in new areas through trial and error, rather than by learning more useful patterns. In such situations, providing a &#8220;worked example&#8221;&#8212;a step-by-step demonstration of how to solve a problem&#8212;can create new mental pathways that deepen with practice. LLMs can provide learners with reasonable worked examples and practice problems. 
But knowing <em>when</em> each of these moves is pedagogically appropriate remains very tricky. A good human tutor may use surprise to their advantage, suggesting a related problem to illustrate a specific point. However, an AI tutor is less adept at developing a working model of the learner, and thus may be better served by predictability: being steady and prosaic rather than confusing and capricious.</p><p>Cognitive Load Theory also considers the optimal style and <a href="https://www.jsu.edu/online/faculty/MULTIMEDIA%20LEARNING%20by%20Richard%20E.%20Mayer.pdf">medium</a> of education materials. Current AI systems are prone to generating long, overly comprehensive responses, which can impede learning. This is partly due to optimising for user feedback from a &#8220;single turn&#8221; engagement, rather than the &#8220;multi-turn&#8221; interactions that learners will actually experience. However, progress is underway on addressing this challenge, as well as on other ways to offer more natural conversation, for example by applying principles of &#8220;chunking&#8221; and &#8220;progressive disclosure&#8221; to break up how information is shared with learners. </p><p>AI could also help address the <a href="https://en.wikipedia.org/wiki/Split_attention_effect">split-attention</a> and <a href="https://en.wikipedia.org/wiki/Modality_effect">modality effects</a>, which impede learning when education materials are presented to learners in a confusing way. For example, instead of providing a diagram about a new concept far from the pertinent text, AI systems could generate self-contained, properly-annotated diagrams that augment a text-based explanation rather than divert learners&#8217; attention from it. 
As image and video generation improve, educators will be able to go further, turning a textbook image of a combustion engine into a simulation that a learner can directly manipulate and interrogate.</p><p>Early prototypes like <a href="https://www.youtube.com/watch?v=MQ4JfafE5Wo">Project Astra</a> demonstrate this ability to move beyond text, but the experience is still a long way from the richest learning interactions, such as when a tutor and a learner share a whiteboard to work through a problem.</p><p><strong>Principle #3: Adapt to the learner</strong></p><p>This brings us to adaptation, which traces its roots to Johann Comenius, a Reformation-era Protestant teacher, who felt that &#8220;<a href="https://www.ukessays.com/essays/education/john-amos-comenius-the-father-of-modern-education-in-contemporary-curriculum.php">learning proceeds sequentially</a>&#8221;, and thus advocated for a schooling system that allowed for individualised rates of progress. In the 20th century, <a href="https://en.wikipedia.org/wiki/Maria_Montessori">Maria Montessori</a> turned this idea into a comprehensive method where the &#8220;teacher as guide&#8221; carefully observes each child, encourages agency and self-directed learning, and suggests activities appropriate to their interests and abilities.</p><p>The idea of using AI to personalise learning predates the LLM era. Intelligent Tutoring Systems date back to the <a href="https://stacks.stanford.edu/file/druid:xr633ts6369/xr633ts6369.pdf">1970s</a> and have long offered personalised pathways through learning materials. Now, LLMs have memorised more than any individual ever could and can keep an increasing amount of information in their context, a kind of super-precise memory, which offers the potential for far more detailed personalisation. 
</p><p>This personalisation could include relevant details about an individual learner, past conversations, or curriculum material that make the AI system&#8217;s language and notation more relevant to them. But there is a fine line between useful personalisation (for example, that feedback for a fourth-grade essay should use age-appropriate language) and annoying emphasis on irrelevant details (for example, using a learner&#8217;s enthusiasm for basketball to turn all statistics practice problems into questions about field-goal percentage).</p><p>Personalisation in AI systems will continue to improve. But part of the implicit value in personalisation by human teachers is in the relationships they build with learners, with mutual trust and respect. At best, these are <a href="https://arxiv.org/abs/2404.16244">thorny issues for AI systems</a>, which cannot offer any true empathy. In practice, an AI tutor may be more successful by instead offering transparency about the reasons for its personalised suggestions, rather than assuming a learner will readily hand over control.</p><p><strong>Principle #4: Stimulate curiosity</strong></p><p>In many ways, harnessing students&#8217; curiosity is a precondition for the other pedagogical principles. The psychologist <a href="https://en.wikipedia.org/wiki/Daniel_Berlyne">Daniel Berlyne</a> saw curiosity as a conflict or incongruity in our minds, triggered by encountering something novel, complex or surprising. <a href="https://en.wikipedia.org/wiki/George_Loewenstein">George Loewenstein</a> argued that curiosity emerges when we perceive a gap between what we know and what we <em>want </em>to know.
And Janet Metcalfe <a href="https://pubmed.ncbi.nlm.nih.gov/33709011/">suggests</a> that we are most curious about things that we feel are <em>learnable, </em>or on the <em>verge</em> of understanding.</p><p>By virtue of their unlimited availability, AI systems can help learners follow their natural curiosity to achieve deeper and broader knowledge. Still, for many learners, especially younger ones, human teachers play a crucial role in bridging the gap between the curriculum&#8212;and its promise of future utility&#8212;and learners&#8217; more immediate interests. This bridging is likely the product of some personalisation, but also of shared enthusiasm, which learners will tend to find disingenuous coming from an AI system or, worse, perceive as <a href="https://www.nature.com/articles/d41586-025-03390-0">sycophantic behaviour</a>.</p><p>Instead, AI systems could rely on their ability to help learners see and create new things, for example through coding, dramatically lowering the barriers to self-directed creativity. Rather than trying to stimulate creativity by emulating human emotions (&#8220;wow, what a great idea!&#8221;), AI systems may also benefit from leaning into what many learners feel is their greatest characteristic: their non-judgmental feedback.</p><p><strong>Principle #5: Deepen metacognition</strong></p><p>Finally, let&#8217;s consider the role of metacognition&#8212;reflecting on the process of learning to solidify new ideas or highlight what was most helpful.
Although the broader idea dates back much further, the developmental psychologist John Flavell coined the term &#8216;metacognition&#8217; in the 1970s after studying how learners completed memory tasks&#8212;for example, noting how some older learners had developed strategies for rehearsal, like repeating things to themselves.</p><p>Typical metacognitive exercises&#8212;reflecting on learning goals and strategies, explaining thought processes out loud, or teaching a concept back to a teacher or peer&#8212;are inherently social and thus tend to run afoul of the relational limitations of AI systems. Many learners will not have the patience to engage so deeply with a system that can&#8217;t really &#8216;listen&#8217; in a human sense, even if the opportunity to do so would be valuable. </p><p>So while AI systems can help with &#8216;sense-making&#8217;, the practical task of understanding how things work, they are less likely to help with the personal process of constructing a lasting identity as a learner, what the developmental psychologist <a href="https://www.jstor.org/stable/j.ctv1pncpfb">Robert Kegan calls &#8216;meaning-making</a>&#8217;. The richness of human connection is often instrumental to such formative experiences.</p><p>_________</p><p>Stepping back to compare human and AI tutors from a distance, AI certainly has some advantages. It is always available, knows a lot, is non-judgmental, and can enable creativity. Its limitations are mainly as a social partner, which tend to render some elements of traditional pedagogy less appealing for learners. Thus, in developing AI tutors, I think that the north star is not an excellent human tutor, but something a bit different.
These learning science principles for AI tutors still need to be explored and refined.</p><p>Lastly, it is perhaps not necessary, but worth stating nonetheless, that at some deeper level, the ultimate work of Teaching is about how to be a human being, cultivating intellectual and moral agency, a job uniquely suited to other human beings. AI systems are powerful tools which we hope can be effectively leveraged in support of more universal, more equitable Learning.</p><p><em>Thanks to the following individuals for helpful feedback: Kevin McKee, Markus Kunesch, Brian Veprek, Irina Jurenka, Julia Wilkowski, Yael Haramaty, and Miriam Schneider.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This is a tricky essay to write. In an attempt to be brief and focused, I have necessarily glossed over much of the rich history and research about the science of learning. I have also not touched on the many forms of learning beyond one-on-one tutoring. AI research continues to move at breakneck pace, and so reflections on what AI can do, and how people will engage with it, are inevitably short-term in nature. I hope that by writing this, I create a bit more space for intentional discussion of the design and use of AI tutors.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/ai-tutors-should-not-approximate?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading AI Policy Perspectives. Subscribe for free to receive new posts.
Lots more in the pipeline!</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/ai-tutors-should-not-approximate?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/p/ai-tutors-should-not-approximate?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[What does the public really think about AI?]]></title><description><![CDATA[15 claims I think are (currently) true]]></description><link>https://www.aipolicyperspectives.com/p/what-does-the-public-really-think</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/what-does-the-public-really-think</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Thu, 06 Nov 2025 10:26:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!orZP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Today&#8217;s post comes from Harry Law, who writes about AI and society at <a href="https://www.learningfromexamples.com/">Learning From Examples</a>. It is inspired by the (almost daily) challenge of seeing a new survey about public attitudes to AI and trying to understand what the results mean.
This blog is based on a more extensive <a href="https://www.governance.ai/research-paper/what-does-the-public-think-about-ai">report</a> co-authored with GovAI.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!orZP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!orZP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png 424w, https://substackcdn.com/image/fetch/$s_!orZP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png 848w, https://substackcdn.com/image/fetch/$s_!orZP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png 1272w, https://substackcdn.com/image/fetch/$s_!orZP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!orZP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png" width="1280" height="894" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:894,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!orZP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png 424w, https://substackcdn.com/image/fetch/$s_!orZP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png 848w, https://substackcdn.com/image/fetch/$s_!orZP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png 1272w, https://substackcdn.com/image/fetch/$s_!orZP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F874a917d-f996-493e-8bfc-5713c6c40fae_1280x894.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In 1953, President Dwight Eisenhower gave a speech to remember. In front of the UN General Assembly, he proposed a new international body to regulate and promote the peaceful use of nuclear power. Now known as the <a href="https://www.eisenhowerlibrary.gov/research/online-documents/atoms-peace#:~:text=The%20Atoms%20for%20Peace%20speech,Eisenhower%20to%20the%20United%20Nations.">Atoms for Peace</a> address, Eisenhower attempted to balance fears of nuclear proliferation with hopes for the peaceful use of uranium in nuclear reactors:</p><blockquote><p><em>&#8216;The more important responsibility of this atomic energy agency would be to devise methods whereby this fissionable material would be allocated to serve the peaceful pursuits of mankind.&#8217;</em></p></blockquote><p>Ike&#8217;s speech was about winning hearts and minds at home as much as securing America&#8217;s position abroad. 
The promise of limitless clean energy for the <a href="https://www.theguardian.com/environment/2015/jul/20/nuclear-energy-atomic-america-peace">home</a>, <a href="https://ntrs.nasa.gov/api/citations/20000096503/downloads/20000096503.pdf">fantastic new technologies</a>, and <a href="https://www.youtube.com/watch?v=TEafr5aaosw">novel ways to travel</a> represented the optimism of the nuclear age. The following year, Lewis Strauss, chairman of the US Atomic Energy Commission, famously <a href="https://web.archive.org/web/20070204122504/http://www.cns-snc.ca/media/toocheap/toocheap.html">said</a> of the introduction of atomic energy: <em>&#8216;It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter</em>&#8217;.</p><p>But it wasn&#8217;t to be. US and European opinion soured on nuclear technology after high-profile accidents at Chernobyl and Three Mile Island. While countries like France <a href="https://worksinprogress.co/issue/liberte-egalite-radioactivite/">managed</a> to keep their reactors online, states like the UK effectively outlawed the technology by making planning impossibly difficult and abiding by <a href="https://worksinprogress.co/issue/the-bad-science-behind-expensive-nuclear/">dodgy science</a> about safety standards. No prizes for guessing which country has cheaper energy. Chernobyl <a href="https://www.bbc.com/future/article/20190725-will-we-ever-know-chernobyls-true-death-toll">likely</a> resulted in thousands of deaths, but the data today tells us that nuclear power is <a href="https://ourworldindata.org/safest-sources-of-energy">among the safest</a> of all energy sources. The primary result of the backlash was higher electricity prices and more CO2.</p><p>Whether or not a technology works is only half the battle. You also need the public to <em>want</em> it to work. Take genetically modified organisms.
In 1978, Genentech successfully spliced a human gene into <em>E. coli</em> to synthesise insulin, igniting a wave of optimism among scientists. But almost a decade later, the public protested when <a href="https://en.wikipedia.org/wiki/Ice-minus_bacteria#:~:text=In%201987%2C%20the%20ice%2Dminus,damage%20to%20the%20treated%20plants.">bacteria</a> modified to reduce the formation of ice became the first GMO to be released into US fields. Scientists spoke of new crops resistant to drought, floods and pests, but the public worried that companies were only interested in engineering herbicide-tolerant crops, so that they could sell more herbicide.</p><p>Concern about &#8216;Frankencrops&#8217; took on a life of its own, blending ethical unease with fears of corporate control and ecological disruption. Particularly in Europe, but also beyond, this deep cultural mistrust led to activist groups, tougher regulation, and the rise of &#8216;natural&#8217; alternatives. When added to <a href="https://worksinprogress.co/issue/every-grain-of-rice/">the costs</a> and complexity that genetic engineering experiments already faced, promising innovations were blocked, slowed or discarded. This had the unfortunate consequence of <a href="https://assets.aeaweb.org/asset-server/files/17442.pdf">lowering the ceiling</a> on the global agricultural supply.</p><p>There are tentative signs of a change in direction. In 2025, global nuclear power generation <a href="https://www.iea.org/reports/the-path-to-a-new-era-for-nuclear-energy/executive-summary">should inch towards a historic high</a>, and the number of new <a href="https://www.iea.org/reports/the-path-to-a-new-era-for-nuclear-energy/executive-summary">reactors</a> under construction is growing. Adoption of GM crops is also ticking up, particularly in <a href="https://www.asimov.press/p/nigeria-crops?utm_source=%2Finbox&amp;utm_medium=reader2">low- and middle-income countries</a>.
But governments are moving cautiously, as consultations <a href="https://www.gov.uk/government/consultations/genetic-technologies-regulation/outcome/genetic-technologies-regulation-government-response">make clear</a> that a delta remains between scientific support and public caution. Consider the Philippines. In 2021, the government approved the cultivation of <a href="https://en.wikipedia.org/wiki/Golden_rice">golden rice</a>, engineered to contain beta-carotene, to help prevent blindness, after a process spanning more than 20 years. In 2024, environmentalists convinced the courts to <a href="https://www.goldenrice.org/PDFs/The%20Philippines%20bans%20some%20GM%20foods%20-%20The%20Economist%202024.pdf">ban</a> it.</p><p>For AI, the parallels are imperfect but useful.</p><p>First, experts are generally <a href="https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/">more positive</a> about AI than the general public. Like nuclear power, AI comes with safety risks that some fear may be catastrophic. Like GMOs, large corporations play a central role. 
As with GMOs, it&#8217;s also not hard to imagine members of the public viewing <em>artificial </em>intelligence as inherently &#8216;unnatural&#8217; - a slippery term that people often use to oppose new technologies, and which <a href="https://cdn.nuffieldbioethics.org/wp-content/uploads/Naturalness-analysis-paper-1.pdf">captures</a> a broad range of values, anxieties and beliefs.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZJyN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZJyN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png 424w, https://substackcdn.com/image/fetch/$s_!ZJyN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png 848w, https://substackcdn.com/image/fetch/$s_!ZJyN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png 1272w, https://substackcdn.com/image/fetch/$s_!ZJyN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZJyN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png" width="420" height="356" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:420,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZJyN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png 424w, https://substackcdn.com/image/fetch/$s_!ZJyN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png 848w, https://substackcdn.com/image/fetch/$s_!ZJyN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png 1272w, https://substackcdn.com/image/fetch/$s_!ZJyN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29bc97ae-722c-4dca-9b06-ed8ee047b8ab_420x356.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">How the US Public and AI Experts View Artificial Intelligence&#8217; (Pew Research Center, 2025)  </figcaption></figure></div><p>Equally, AI is clearly different to nuclear power, GMOs, or other technologies that have struggled with public perceptions, like <a href="https://aerospaceamerica.aiaa.org/features/supersonic-travel-dead-on-arrival/">supersonic flights</a>. Most obviously, millions of people knowingly use AI every day, to <a href="https://www.gallup.com/workplace/651203/workplace-answering-big-questions.aspx">draft emails</a>, <a href="https://www.hepi.ac.uk/2025/02/26/student-generative-ai-survey-2025/">help with homework</a>, advance science, or <a href="https://www.adalovelaceinstitute.org/blog/ai-companions/">even for companionship</a>. What does that mean for the public&#8217;s attitudes to the technology? A new cottage industry has emerged to puzzle out this question. Each week a new survey is released. 
One says that people are <a href="https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/">deeply distrustful</a> of AI while another shows their use of it <a href="https://www.gallup.com/workplace/691643/work-nearly-doubled-two-years.aspx">increasing</a> rapidly. Taken together, the results are hard to parse. Are people scared, hopeful, or both? Do they even understand what AI is? And what, crucially, are AI labs or policymakers supposed to do in response?</p><p>In the remainder of the post, I (a) introduce a mental model for how to think about public attitudes to AI; (b) make 15 claims I think are true about how people currently view the AI project; and (c) propose a new Global AI Attitudes Survey.</p><p>Is all this necessary? In a strict sense, labs don&#8217;t <em>need</em> to understand public opinion to build and deploy advanced AI systems, while policymakers can craft rules without knowing precisely what the public thinks. But if the public&#8217;s concerns &#8212; about control, the environment, long-term risk &#8212; are dismissed as ignorance, then you can bet AI&#8217;s adoption will suffer. When people think that a technology offends their values, legitimacy collapses. It doesn&#8217;t matter how good your model is if no-one wants to use it. Nuclear power and GMOs both worked on paper, but were derailed when the social licence was revoked. There is no reason to think that AI is different.</p><p>Of course, there is no guarantee that what the public says they want is what is best for the technology. As with any kind of public engagement, there will always be a question about how much policymakers and AI labs can, or should, follow public attitudes (versus trying to shape them).
But these questions are premised on being able to understand what the public actually thinks about AI in the first place.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/what-does-the-public-really-think?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading AI Policy Perspectives. Subscribe for free to receive all future articles. Lots more in the pipeline! </p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/what-does-the-public-really-think?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/p/what-does-the-public-really-think?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h1>What do I mean by public attitudes to AI?</h1><p>Anybody hoping to measure, shape or respond to public attitudes to AI needs to  consider (at least) five foundational questions.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Rwqy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Rwqy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!Rwqy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Rwqy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Rwqy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Rwqy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg" width="1024" height="501" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:501,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:110975,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/178165067?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Rwqy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Rwqy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Rwqy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Rwqy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5190e65a-fb36-4393-aa16-1a5060c51777_1024x501.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">A toy model for understanding public attitudes to AI. Made with Gemini. </figcaption></figure></div><p><strong>1. Who are you interested in?: </strong>Public attitudes to AI diverge as soon as we break the public into groups, such as more savvy users of the technology versus those who know little about it. With GMOs, the attitudes of farmers and environmental groups have been central to the technology&#8217;s successes and failures. As a general-purpose technology, AI will have a much broader range of parties with distinct views, from teachers to musicians.</p><p><strong>2. Where are they from?: </strong>Today, much of the global AI conversation is mediated by voices and polling data from English-language sources in the West. This creates a skewed picture of public attitudes to AI because the West is generally more negative than elsewhere.</p><p><strong>3. What attitudes are you measuring?: </strong>&#8216;Public attitudes&#8217; to AI is not a singular concept, but a multi-dimensional one. A <em>cognitive </em>dimension examines what people know and believe about AI and how it functions. An <em>affective </em>dimension captures their emotional responses to it, like fear, excitement, anxiety, hope, or trust. A <em>behavioural </em>dimension captures how people adopt, reject, or use AI systems in practice.</p><p><strong>4. What is meant by &#8216;AI&#8217;?: </strong>Public attitudes to AI are never free-floating. People will respond differently if you ask about an abstracted version of the technology, specific AI applications, more advanced concepts like &#8216;AGI&#8217;, or the organisations involved.
This is why attitudes can seem contradictory: someone might welcome AI in healthcare but reject it in policing, or trust a firm to develop a certain AI application, but not another.</p><p><strong>5. What </strong><em><strong>drivers </strong></em><strong>are you interested in?: </strong>Many surveys capture less about AI and more about the underlying drivers that motivate people. Demographics, socioeconomic status, and an individual&#8217;s broader values and optimism influence not just how much they know about AI, but whether they view it as a threat, a tool, or a distant abstraction.</p><h1>15 claims about public attitudes to AI</h1><p>With colleagues, I reviewed ~100 surveys to try to better understand public attitudes to AI. Based on this data, I created 15 claims that I think are currently true with varying degrees of confidence. As I expand on below, surveys are an imperfect tool and my sample is limited to English-language sources. Attitudes are also fluid. So take these claims with a pinch of salt. But I think converting survey data to claims &#8211; using a good deal of intuition &#8211; is necessary to ensure that we are drawing insight from this data, to identify open questions worthy of further study, and to provide falsifiable yardsticks for future surveys to tackle.</p><ol><li><p><strong>Awareness and use: </strong>The general public&#8217;s awareness of AI <a href="https://trends.google.com/trends/explore?date=all&amp;q=AI&amp;hl=en-US">grew slowly</a> over the past decade but has increased <a href="https://naiom.net/public-reports/NAIOM%20Report%2002%20AI%20Trust%20Knowledge.pdf">rapidly</a> since 2022, when people began knowingly, rather than passively, <a href="https://www.pewresearch.org/internet/2025/04/03/artificial-intelligence-in-daily-life-views-and-experiences/">using AI</a> in greater numbers.
Within countries, awareness and use are <a href="https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf">highest</a> among young, high-income, <a href="https://www.hbs.edu/ris/Publication%20Files/25-023_8ee1f38f-d949-4b49-80c8-c7a736f2c27b.pdf">tech-savvy men</a>, although <a href="https://openai.com/index/how-people-are-using-chatgpt/">the AI gender gap is narrowing</a>. Globally, awareness and use of AI among Internet users are <a href="https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf">highest</a> in certain lower/middle-income countries like Indonesia and India.</p></li></ol><ol start="2"><li><p><strong>Sentiments: </strong>In the West, a <a href="https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf">majority of the public</a> is more negative than positive about AI. Elsewhere, in countries like the UAE, Nigeria, and Japan, attitudes are <a href="https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf">more positive</a>, for varied reasons. For example, Gillian Tett attributes Japan&#8217;s <a href="https://www.ft.com/content/0926b62e-a67e-4d3a-a86c-6acebd9cc349">relative positivity</a> about AI to its labour shortage and wariness around immigration, its history of more positive sci-fi (<em>Astro Boy</em> vs.
<em>2001: A Space Odyssey</em>), and its Shinto philosophy which avoids strict boundaries between animate and inanimate objects.</p></li></ol><ol start="3"><li><p><strong>Knowledge: </strong>A majority of people are relatively <a href="https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf">confident</a> in their knowledge about AI, but, when prompted, display clear knowledge gaps, including a lack of awareness of <a href="https://news.gallup.com/poll/654905/americans-everyday-products-without-realizing.aspx">AI&#8217;s presence in everyday technologies</a>. As people learn more about AI, they typically become <a href="https://www.ipsos.com/en-us/google-ipsos-multi-country-ai-survey-2025">more excited about it</a>, overall, but also <a href="https://www.turing.ac.uk/sites/default/files/2023-06/how_do_people_feel_about_ai_-_ada_turing.pdf">more worried about specific risks</a>, like those connected to autonomous weapons or hiring<strong>.</strong></p></li></ol><ol start="4"><li><p><strong>Age: </strong>Younger people tend to be <a href="https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf">more accepting</a> of AI, in both personal and professional contexts, from financial planning to relationships. They are also more likely to view AI favourably, compared to the human alternatives on offer. 
However, young people also worry more about certain risks from AI, such as its <a href="https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-4/public-attitudes-to-data-and-ai-tracker-survey-wave-4-report">environmental impact</a>, and are generally more <a href="https://drive.google.com/file/d/1u3bKbRN57uG0Z7MtdVyMAliiHiTY6fZh/view">invested</a> in the need for AI regulation.</p></li></ol><ol start="5"><li><p><strong>Income: </strong>Within countries, lower-income adults tend to be <a href="https://www.pewresearch.org/social-trends/2025/02/25/u-s-workers-are-more-worried-than-hopeful-about-future-ai-use-in-the-workplace/">more anxious</a> that AI will cost them their <a href="https://www.ipsos.com/sites/default/files/ct/publication/documents/2023-03/Ipsos%20AI%20Tracker%20Data%20March%2014.pdf">jobs</a> or be used to monitor them. Higher-income adults are more likely to see AI <a href="https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf">as a beneficial productivity booster</a>.</p></li></ol><ol start="6"><li><p><strong>Political affiliation: </strong>Political affiliation is typically <a href="https://publicfirst.co.uk/ai/">not a very reliable guide</a> to overall attitudes to AI. 
In more polarised climates, like the US, <a href="https://www.pewresearch.org/internet/2025/04/03/views-of-risks-opportunities-and-regulation-of-ai/">those supporting the government</a> tend to have more confidence in its ability to regulate AI, and vice versa.</p></li></ol><ol start="7"><li><p><strong>AI applications vs &#8216;AI&#8217; overall: </strong>People&#8217;s attitudes towards a representative basket of specific AI applications are <a href="https://www.turing.ac.uk/sites/default/files/2023-06/how_do_people_feel_about_ai_-_ada_turing.pdf">more positive</a> than their view of &#8216;AI&#8217;, abstracted.</p></li></ol><ol start="8"><li><p><strong>Most popular applications: </strong>People show the <a href="https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-3/public-attitudes-to-data-and-ai-tracker-survey-wave-3#the-value-of-data-to-society">greatest support</a> for AI applications that could help achieve universally popular outcomes (detect cancer); are personally important to them (improve the environment); or tackle a daily problem they are facing (e.g. find information, plan travel). People focus less on the suitability or additive value that AI, specifically, might bring to these applications.</p></li></ol><ol start="9"><li><p><strong>Public vs. experts: </strong>While experts tend to be more optimistic about AI than the general public, much of the public is very supportive of certain AI applications that AI ethicists worry about, such as the use of AI <a href="https://www.turing.ac.uk/sites/default/files/2023-06/how_do_people_feel_about_ai_-_ada_turing.pdf">to detect crime or in border control</a>.</p></li></ol><ol start="10"><li><p><strong>Most salient risks: </strong>People worry most about risks that are framed as a potential personal loss to them (e.g. job loss, data breaches) as well as those that inspire vivid, intense fear or revulsion (e.g.
bioweapons, brain-rot, child abuse).</p></li></ol><ol start="11"><li><p><strong>Unemployment: </strong>When asked to reflect on how AI may affect society, people <a href="https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence/">frequently</a> rate the loss of jobs as one of their top concerns. However, higher-income employees are <a href="http://gallup.com/workplace/691643/work-nearly-doubled-two-years.aspx">less likely</a> to view their own job as being at risk from AI.</p></li></ol><ol start="12"><li><p><strong>Explainability:</strong> The public often does not expect or receive explanations in other consequential domains, <a href="https://arxiv.org/pdf/1711.01134">such as from juries in court cases</a>. Yet the public <a href="https://drive.google.com/file/d/1484XL4kTkOQKTfZMw5GD46bpit-XJ2Zp/view">views</a> the opacity of modern deep learning systems as a major risk, and many are willing to <a href="https://www.turing.ac.uk/sites/default/files/2023-06/how_do_people_feel_about_ai_-_ada_turing.pdf">sacrifice some accuracy for more explainability</a>. As people use AI systems more, and the models become increasingly reliable, these demands for explainability will likely soften.</p></li></ol><ol start="13"><li><p><strong>Catastrophic risks:</strong> When <a href="https://www.gov.uk/government/publications/international-survey-of-public-opinion-on-ai-safety">prompted</a>, people show <a href="https://theaipi.org/poll-shows-overwhelming-concern-about-risks-from-ai-as-new-institute-launches-to-understand-public-opinion-and-advocate-for-responsible-ai-policies/">high rates of concern</a> about catastrophic risks from AI such as cyberattacks, bioweapons, and loss of control. However, the public <a href="https://rethinkpriorities.org/research-area/british-public-perception-of-existential-risks/">worries less about these risks</a> than about other, non-AI catastrophic risks, like nuclear war and climate change.
In general, the public worries relatively little about catastrophic risks (AI-related or otherwise), compared to everyday issues like the cost of living and immigration.</p></li></ol><ol start="14"><li><p><strong>Regulation: </strong>When asked, a majority of the public <a href="https://attitudestoai.uk/findings-2025/governance-and-regulation">supports</a> AI regulation. This support remains broadly consistent across different forms of regulation, such as requiring <a href="https://drive.google.com/file/d/1484XL4kTkOQKTfZMw5GD46bpit-XJ2Zp/view">safety evaluations</a> or <a href="https://theaipi.org/poll-shows-voters-want-rules-on-deep-fakes-international-standards-and-other-ai-safeguards/">licensing</a>. Most people <a href="https://bpb-us-w2.wpmucdn.com/sites.northeastern.edu/dist/f/4599/files/2023/10/report-1017-2.pdf">do not trust</a> AI companies to develop AI without some regulation and <a href="https://assets.kpmg.com/content/dam/kpmg/au/pdf/2023/trust-in-ai-global-insights-2023.pdf">oversight</a>.</p></li><li><p><strong>International cooperation: </strong>The public generally supports more international AI governance, such as <a href="https://www.gov.uk/government/publications/international-survey-of-public-opinion-on-ai-safety">international enforcement mechanisms</a> or <a href="https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-attitudes-and-use-of-ai.html">international treaties</a>. To date, most of the public has generally <a href="https://drive.google.com/file/d/1MRUfvnmDPO8uUT6z7dO1dQ81SAm4sSat/view">not accepted</a> the &#8216;arms race&#8217; argument that Western countries should accelerate AI to outpace China.</p></li></ol><p><strong>Caveats, Caveats: The limits of survey data</strong></p><p>The claims above are based on survey data. I think there is good evidence for them, but surveys also have clear limitations as a guide to public attitudes.
Many of the challenges are long-standing and can be mitigated, in part, with the right methods and resources. But it is also <a href="https://www.ft.com/content/8973afdb-df7c-4682-b104-f3d58f199978?utm_source=chatgpt.com">ever harder</a> to convince a representative set of people to take part in surveys, whether due to call or popup blockers, survey fatigue, or a declining sense of civic duty.</p><ul><li><p><strong>Noise: </strong>Public attitudes are noisy. This is most famously illustrated by the <a href="https://slatestarcodex.com/2013/04/12/noisy-poll-results-and-reptilian-muslim-climatologists-from-mars/">Lizardman&#8217;s Constant</a>, which refers to the fraction of people in any given poll who give unexpected or bizarre answers. For AI, this means we should always expect some proportion of survey data to be junk.</p></li></ul><ul><li><p><strong>Question wording and order:</strong> <em>Context effects </em>mean that people can answer a question differently due to how the questions are constructed and sequenced. If you ask people a series of questions about risks from AI, followed by a question about whether they support &#8216;AI regulation&#8217;, respondents may answer the latter more positively. The specific wording also matters. People view a &#8216;death tax&#8217; differently to an &#8216;estate tax&#8217; and may also respond differently when asked about AI &#8216;regulation&#8217; vs &#8216;red tape&#8217;, &#8216;bureaucracy&#8217;, or &#8216;report writing&#8217;, even if the underlying idea is the same.</p></li></ul><ul><li><p><strong>Oversimplification:</strong> Surveys are limited in how much detail they can provide or ask. Because AI is a complex general-purpose technology, this is a particular challenge. For example, the public is often asked whether they &#8216;trust&#8217; AI, or <em>who</em> they trust to develop it. But trust is context-dependent.
People may trust certain government bodies to oversee certain healthcare AI apps, but trust companies to develop the best language translation model. People&#8217;s trust may <em>depend </em>on how the underlying data is collected or stored. Without the right context, when a person says that they &#8216;trust&#8217; an AI company or person, they are mostly saying that they know who they are, like them, or both.</p></li></ul><ul><li><p><strong>Social desirability:</strong> Social desirability bias represents the gap between what people genuinely think and what they report in surveys, due to societal norms and a desire for approval. Similarly, people often <a href="https://www.indy100.com/politics/stewart-lewis-tory-leadership-poll">express</a> an opinion in a poll to avoid saying &#8216;I don&#8217;t know&#8217;. This may explain why people generally overestimate their familiarity with AI or why they will happily share views on the merits of regulatory proposals that many AI policy professionals may struggle to fully understand.</p></li></ul><p><strong>What next? A Global AI Attitudes Report</strong></p><p>To avoid the legitimacy collapse seen with nuclear power and GMOs, AI labs and policymakers need to better understand what the public thinks and feels. Surveys are not the only option. Focus groups, citizen juries, and vignette studies could pose richer scenarios, but they tend to trade off <em>breadth </em>for<em> depth, </em>are harder to repeat, and come with their own biases. Another approach would be to focus less on what people say they think about AI and more on what their use of AI reveals about their attitudes. For example, practitioners can use privacy-preserving methods to analyse how people are querying LLMs or use experiments to understand the aspects of an AI tool that people most value (or are put off by).</p><p>All of these methods have their merits. But I think surveys should remain a cornerstone of the approach.
They promise not just scale, but the chance to compare attitudes across topics, groups, and time. The current landscape of largely one-off surveys, each infused with its sponsor&#8217;s own views and biases, and academic side-projects, is insufficient. What is required is a serious, longitudinal tracking effort: a living global survey of public attitudes to AI.</p><p>Some governments, like the UK, already conduct <a href="https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-4/public-attitudes-to-data-and-ai-tracker-survey-wave-4-report">regular AI attitudes surveys</a>, as do firms like KPMG. A next step could see interested parties from governments, AI labs, industry bodies, and survey experts create a shared map of how attitudes evolve across time, cultures, and domains. Such a programme, which we might call the <strong>Global AI Attitudes Report,</strong> could repeatedly ask the same people (or highly comparable cohorts) the same core questions. This would help to separate short-term fluctuations &#8212; like a spike in concern after a viral deepfake &#8212; from more durable shifts in public values. The model here might be something akin to the <a href="https://natcen.ac.uk/british-social-attitudes">British Social Attitudes Survey</a> or the <a href="https://www.worldvaluessurvey.org/wvs.jsp">World Values Survey</a>. But to be useful for AI, this programme would need to move faster and look wider (though we know this is possible given the success of the <a href="https://www.gov.uk/government/publications/international-ai-safety-report-2025">International AI Safety Report</a>).</p><p>We don&#8217;t currently know what questions we should be asking about AI in five years&#8217; time, or even two. That means the Report should have scope to add new questions, for example based on trends in how people and enterprises are using AI, and predicted capability shifts.
The primary goal should be to deliver new data and revised claims, but the project could also provide standardised survey resources for other developers to draw on.</p><p>Key components of the new Global AI Attitudes Report could include:</p><ul><li><p><strong>A standardised question bank: </strong>The Global AI Attitudes Report should build on <a href="https://www.purdue.edu/undergrad-research/ourconnect/index.php?q=projectview&amp;id=1149">initial efforts</a> to create a shared set of vetted, carefully sequenced questions, and provide guidance for how to structure questionnaires.</p></li><li><p><strong>AI archetypes: </strong>Rather than grouping respondents by age, gender, and income, the Report could build and test richer archetypes of people with common attitudes to AI.</p></li><li><p><strong>Trade-offs: </strong>Attitudes toward &#8216;AI&#8217; in the abstract reveal little, so the Global AI Attitudes Report could probe views on concrete applications &#8212; from AI tutors in schools to AI diagnosticians in hospitals &#8212; and weave in characteristics and design choices that force the public to take a stance on the inevitable trade-offs that AI poses. Similarly, rather than asking the public if they support AI regulation, in general, the Report could ask trade-off questions like: &#8220;<em>Would you support holding AI companies legally liable for chatbots&#8217; financial and medical advice, if it meant they had to restrict free public access to these queries, to limit liability risks?&#8221;</em></p></li><li><p><strong>Openness and global representation: </strong>The Global AI Attitudes Report should serve as a shared public good, with the full results made public and the methods transparent.
To be credible, the effort must include representative samples from across the world to identify both universal values and points of cultural divergence.</p></li><li><p><strong>AI-powered methods: </strong>It would be a missed opportunity not to use AI to help measure attitudes to AI. Academics are already using AI to help test survey questions and interpolate missing data. From <a href="https://demos.co.uk/waves-tech-powered-democracy/">Camden</a> to <a href="https://medium.com/jigsaw/how-one-of-the-fastest-growing-cities-in-kentucky-used-ai-to-plan-for-the-next-25-years-3b70c4fd1412">Kentucky</a>, experiments are underway to use AI to inform new kinds of public deliberation. The Report could provide a testing ground for evaluating the usefulness of AI itself to such endeavours<strong>.</strong></p></li></ul><div><hr></div><p><em>With thanks to Noemi Dreksler, Chloe Ahn, Daniel S. Schiff, Kaylyn Jackson Schiff, Zachary Peskowitz, Conor Griffin, Julian Jacobs, Steph Parrott, Haydn Belfield, and Tom Rachman.</em></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Policy Perspectives. Subscribe for free to receive future posts. Lots more in the pipeline! 
</p></div></div></div>]]></content:encoded></item><item><title><![CDATA[Coasean Bargaining at Scale]]></title><description><![CDATA[Decentralisation, coordination, and co-existence with AGI]]></description><link>https://www.aipolicyperspectives.com/p/coasean-bargaining-at-scale</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/coasean-bargaining-at-scale</guid><dc:creator><![CDATA[Séb Krier]]></dc:creator><pubDate>Mon, 29 Sep 2025 09:20:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!POhk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Today&#8217;s post comes from S&#233;b Krier, who explores how AGI could enable a new ecosystem of personal &#8220;advocate agents&#8221;, dramatically reducing the transaction costs that prevent useful negotiations from taking place across society. S&#233;b works on frontier policy development at Google DeepMind. As with all the pieces you read here, it&#8217;s written in a personal capacity. This piece was also published on the <a href="https://blog.cosmos-institute.org/p/coasean-bargaining-at-scale">Cosmos Institute&#8217;s (excellent) substack</a>.
</em></p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!POhk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!POhk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!POhk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!POhk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!POhk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!POhk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:187418,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/174819339?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!POhk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!POhk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!POhk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!POhk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff9d28f7-d449-493f-880c-4a2d1016dcbd_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Source: Venus Krier</figcaption></figure></div><p>Much has been written about how AI can pose risks to society, particularly in aging Western countries where a sense of latent anxiety has taken over the discourse on technology for the past decade. Sometimes this is legitimate, and sometimes it feels like a continuation of existing Western pessimism. Few have been able to advance a positive vision of what we should be striving for at a socio-political level. Here I&#8217;d like to make an attempt.
This essay explores how, by providing cognition-and-agency on demand, AI agents could amplify human agency to the point where we can escape the zero-sum traps that have plagued political economy for centuries.</p><p>There is a timeless question at the heart of any (free) society: how do we allow individuals to pursue their own interests when one person&#8217;s actions inevitably affect the well-being of another in ways that are negative-sum? Economists have a name for this: &#8220;<em>externalities</em>,&#8221; which can take either physical or financial forms. The sheer scale of this challenge was crystallized in a groundbreaking 1986 <a href="https://www.jstor.org/stable/1891114">paper</a> by economists Bruce Greenwald and Joseph Stiglitz. They demonstrated that because our world is rife with imperfect information, moral hazards, and incomplete markets, externalities are not the exception, but the rule. This pervasive market failure became the intellectual bedrock for modern regulatory regimes. But the solution has always been the same: the coercive hand of the state and a top-down micro-management of society. We are told that only a central authority, a government board or commission, can resolve these conflicts by dictating who can do what, and where.</p><p>But as economists since Hayek have explained, the planner in Washington (or in your state capital) simply cannot possess the dispersed, specific knowledge of time and place known only to the individuals on the ground. This isn&#8217;t the kind of theoretical knowledge you find in books, but the contextual, practical, intuitive, experiential and immediate knowledge that emerges from a particular situation in time. Writing about urban planning, Alain Bertaud argued that &#8220;<em>planners cannot possibly know the reasons households may have for selecting a specific housing location</em>,&#8221; so mandates often end up becoming blunt and arbitrary. 
Such information is <em>tacit </em>and is only revealed through the actions and choices of individuals within a market. This blindness points to an alternative: letting people solve these conflicts themselves.</p><p>This is the essence of the work of Nobel laureate Ronald Coase, who argued that if bargaining were cheap and easy, a polluter and their neighbor could strike a private deal without any need for regulation. Of course, some pollution would still happen, but the payoff to the neighbor would ensure that both parties are better off than under the zero-pollution or no-limits counterfactuals. The tragedy is not the existence of the conflict, but the transaction costs that prevent these mutually beneficial deals from being discovered and executed. It&#8217;s also the lesson from Elinor Ostrom, who documented how real-world communities successfully govern shared resources like fisheries and forests through their own intricate local rules.</p><p>Their shared insight is that structures that encourage bottom-up order can work better than attempting to impose top-down approximations for every conflict that requires a resolution. But their work also highlighted the formidable barrier that tends to stand in the way: transaction costs. Transaction costs are not just legal fees; they are the friction of discovery, the difficulty of negotiation, and the expense of enforcement. They are the cognitive and logistical effort required to identify affected parties and strike a deal.</p><p>Historically, because these transaction costs were insurmountable, societies defaulted back to the planner. The inability to coordinate from the bottom up became the enduring justification for control from the top down. The result was always the same: clumsy, one-size-fits-all rules that stifle innovation, distort incentives, and are inevitably captured by special interests who learn to work the system for their own benefit.
Today, we are repeating this same failure of imagination in the discussion around AGI. There is a rush to assume that the only way to manage its risks is through the same top-down control model, treating AGI as a centralizing technology by its very nature. If AGI is analogous to a weapon of mass destruction, a genie in a bottle, then surely a central authority is the &#8220;optimal&#8221; answer?</p><p>I find this frame quite myopic. It fixates on the risks of a powerful new technology while completely overlooking its potential to strengthen the governance mechanisms needed for a safe, coordinated society, which could well obviate the need for a centralized solution. As a general purpose technology, AGI is well placed to help us fix our decaying social and public institutions. Better cognitive capabilities also mean better coordination, better governance, and better safeguards. Instead of empowering the central planner, AGI could finally empower the individual bargainers of Coase and Ostrom by arming them with the price system: what Michael Levin and Benjamin Lyon call the &#8220;<a href="https://osf.io/preprints/osf/3fdya_v1">cognitive glue</a>&#8221; of a free society.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Policy Perspectives. Subscribe for free to receive future posts - lots more in the pipeline!
</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><h2><strong>Obliterating transaction costs</strong></h2><p>The difficulty of millions of people discovering one another&#8217;s preferences, negotiating, and enforcing agreements has always been the chief justification for government intervention. It&#8217;s why your neighbor&#8217;s leaf blower, my unwillingness to fund the local park, and a factory&#8217;s emissions all end up in blunt bans and political fights; we can&#8217;t cheaply find each other, state exact terms, and lock in a deal. The &#8220;transaction costs&#8221; are simply too high. But this may no longer be the case once we have AGI agents.</p><p>Before we begin, I think it&#8217;s important to avoid conceptualizing AGI as some sort of single omniscient brain-God - even though agents will effectively individually and collectively be &#8216;superintelligent&#8217; and highly capable, and increasingly so over time. That is the central planner&#8217;s fallacy all over again. While we will continue to see ever-larger training runs creating powerful foundation models, I think it&#8217;s a mistake to assume this results in a singular AGI that carries all economically valuable tasks; economics ultimately favors efficiency at the point of delivery (inference).</p><p>Running a hyper-general model for every specialized task is incredibly expensive, and so this reality drives specialization: general models will be compressed, distilled, and optimized for specific uses. The future landscape will therefore be a hybrid: a vast ecology of personalized agents, services, applications, and robots with varying degrees of generality. 
While many may descend from a few common foundational ancestors, their deployment will be diverse and specialized. As such, imagining &#8216;AGI&#8217; as a singular entity is like talking about &#8220;Finance&#8221; as a singular thing.</p><p>I think it&#8217;s more helpful to view this landscape through a different lens: AGI deployed as a vast ecology of personalized agents and systems. This emerging ecosystem is what Toma&#353;ev et al. (2025) characterize as the &#8220;<a href="https://www.arxiv.org/abs/2509.10147">virtual agent economy</a>,&#8221; a new economic layer where agents transact and coordinate at scales and speeds beyond direct human oversight. While this ecology will contain countless specialized agents, let&#8217;s focus on the one that matters most from an individual&#8217;s perspective: your personal advocate. Think of it as a fiduciary extension of yourself: a tireless, extremely competent digital representative, closely tied to you, its principal.</p><p>What could such an agent do? In principle, it can negotiate, calculate, compare, coordinate, verify, monitor, and much more in a split second. Through many multi-turn conversations, tweaking knobs and sliders, and continuous learning, it could also develop an increasingly sophisticated (though never perfect) model of who you are, your preferences, personal circumstances, values, resources, and more. This should evolve over time - an agent&#8217;s alignment should follow the principal&#8217;s own evolution. Recent <a href="https://arxiv.org/abs/2404.04289">research</a> on negotiation agents finds that &#8220;human-agent alignment&#8221; is profoundly personal. Users expect agents to not only execute goals but also embody their identity, requiring alignment on everything from preferred negotiation tactics to personal ethical boundaries and the specific public reputation they want to project. 
There are, of course, important privacy considerations here, but none of these seem fundamentally intractable. For example, these systems could be built on technologies like zero-knowledge proofs and differential privacy, ensuring that preferences are communicated and aggregated without revealing sensitive underlying data.</p><p>Such an agent should also be able to communicate your preferences to millions of other agents in real time, with a nuance and specificity that is currently impossible. It knows that you&#8217;ll tolerate loud music on a Saturday, but not on a Sunday; that you&#8217;d be happy to carpool, but only if it adds less than ten minutes to your commute; that you&#8217;d willingly pay a fraction of a cent more for clean electricity, but only during off-peak hours. All this in a split-second, at the right moment, for the right purpose. In other words, AGI could enable hyper-granular contracting. The friction that has always hindered us, the transaction costs that Coase and Ostrom identified as the great barrier to cooperation, could be massively reduced. <em>So what can we now do in such a world that was otherwise not possible?</em></p><h3><strong>Pollution and road-traffic negotiations</strong></h3><p>Think of the agents as a built-in coordination device: instead of each actor guessing everyone else&#8217;s move (what economists would call a Nash deadlock), they can condition their actions on shared signals and contracts, unlocking deals that were previously out of reach - a correlated equilibrium.</p><p>Consider the implications. Your agent knows you have a child with asthma. A blanket &#8220;just ban the emissions&#8221; rule sounds tidy, but it flattens everything into the same position: trivial harms and intolerable ones, essential trips and frivolous ones. When a delivery truck&#8217;s agent plans its route, it doesn&#8217;t need a government mandate to be considerate. 
It simply sees a higher &#8220;price&#8221; for entry onto your street, a signal broadcast by your agent, representing your strong preference to avoid diesel fumes. The truck&#8217;s agent can then calculate, instantly, whether it is cheaper to pay the &#8220;clean air fee&#8221; to you and your neighbors, or to take a different route. Conversely, if your neighbor&#8217;s agent flags an emergency, for example if she&#8217;s in labor and needs the fastest route to the hospital, then everyone&#8217;s agents can auto-drop (or even invert) the price to clear a corridor, because they actually <em>value </em>her getting through fast. It&#8217;s true that in some cases, enforcement of these contracts might cost more than their value, but this could be solved through automated escrows and reputation systems. Ideally, the agent system transforms enforcement from a costly legal battle into a near-instantaneous computational verification.</p><p>In this scenario, the externality doesn&#8217;t vanish, but it does get a price tag. And once a cost is made clear, the marvel of the market can solve it. The problem was never the pollution itself; it was the fact that the polluter was allowed to impose a health and financial cost onto you for free. To be clear, not all agent negotiations need to be purely financial. The system I&#8217;m envisaging could enable two distinct modes: economic negotiations where willingness-to-pay determines outcomes (useful for commercial activities like delivery routes), and as I&#8217;ll outline later on in the essay, democratic negotiations where each person gets equal voting weight regardless of wealth (essential for community values like neighborhood character). 
Agents can seamlessly switch between these modes depending on the issue at stake - using market mechanisms for efficiency where appropriate, while preserving democratic legitimacy for fundamental community decisions.</p><p>What&#8217;s key, though, is that agents make that <a href="https://cloud.google.com/blog/products/ai-machine-learning/announcing-agents-to-payments-ap2-protocol">payment</a> possible, managing a million micro-transactions in the background, all based on how your values generalize across countless situations and contexts. When I lived in London, residents of my neighborhood were unhappy with road congestion, so they decided to essentially prohibit cars from passing through at certain times; taxis and local merchants were naturally pretty annoyed. With the agent-bargaining system, these low-traffic-neighbourhood detours stop being absolute: taxis can pay a dynamically discovered &#8220;cut-through&#8221; fee, while verified emergencies glide through at zero (or negative) price.</p><h3><strong>Neighborhood character negotiations</strong></h3><p>This mechanism clarifies plenty of other thorny disagreements too. Imagine a developer wants to build an ugly building in a residential neighborhood. Today, that is a political battle of influence: who can <a href="https://www.sambowman.co/p/democracy-is-the-solution-to-vetocracy">capture</a> the local planning authority most effectively? In an agent-based world, it becomes a simple matter of economics. The developer&#8217;s agent must discover the price at which every single homeowner would agree. If the residents truly value the character of their neighborhood, that price may be very high. The project will only proceed if the developer values the location more than the residents value the status quo. Conversely, if the residents&#8217; asking price is lower than the developer&#8217;s willingness to pay, the project proceeds, and the residents are compensated. 
In either case, the true economic costs and benefits are accounted for. This mechanism forces the discovery of the most valuable use of the resource, moving beyond the current system where projects are either blocked entirely (socializing the loss of potential gains) or forced through politically (socializing the costs on the neighborhood).</p><p>But what if a resident decides to game the system and go for a really absurd price, holding everyone to ransom? This is why you need a new secondary layer of institutions <em>on top</em> of these agents. Crucially, these institutions can be voluntary. In this neighborhood, homeowners can pool their agents into a simple bargaining club: each person privately inputs the minimum they&#8217;d accept; the software aggregates that into a single take&#8209;it&#8209;or&#8209;leave&#8209;it offer. This is essentially <a href="https://scholar.harvard.edu/files/maskin/files/introduction_to_mechanism_design_and_implementation_e._maskin.pdf">mechanism design</a> in action: creating rules where being honest about your true minimum is the smartest move, not gaming the system. Overstating just risks killing the deal (you get zero), and if it clears, the payout is at the common clearing price - so padding your number doesn&#8217;t boost your check. The group speaks with one voice without surrendering property rights, and the developer sees a single, fair number instead of a hundred ransom demands.</p><p>Skeptics might reasonably worry that NIMBYs can still name absurd buy-off prices. This is a classic political economy dilemma. The benefits of blocking a project are concentrated among a few motivated homeowners, while costs such as higher rents, longer commutes, and slower growth are diffused across a wide, unorganized public. 
As Janan Ganesh <a href="https://www.ft.com/content/584cac88-0c22-457b-a7a2-ca1d5ca997c5">puts</a> it, the potential losers are an &#8220;unconscious blob of people&#8221; who don&#8217;t even know what they&#8217;re losing. Two guardrails fix this.</p><p>First, chronic hyper-bidders see their voting weight fade or must pay an &#8220;option fee&#8221;: a <a href="https://academic.oup.com/jla/article/9/1/51/3572441?login=false">Harberger-style tax</a> in which you periodically pay a percentage of the price you claim; overstate, and it soon hurts. For example, if you claim your property is worth $10 million to block a development, you must be prepared to pay taxes on that valuation too! Second, and more importantly, AGI agents can give that &#8220;unconscious blob&#8221; a powerful voice. Their agents can quantify what each member stands to lose from a blocked project; any coalition that vetoes must then reimburse that quantified loss, with agents handling the transfers automatically. The diffuse cost becomes a concentrated, explicit price. Stonewalling remains possible, but it now carries a real, rising cost. Moreover, with this setup, bargaining isn&#8217;t just between NIMBYs and developers; other residents, now aware of the potential gains, can bargain directly with the holdouts.</p><h3><strong>Sugar and healthcare externalities</strong></h3><p>Consider another example: sugar/junk food consumption and public health. Proponents of a sugar tax correctly identify an externality: poor diet choices impose costs on the shared healthcare system. Their solution, however, is (shock!) a clumsy, top-down tax. This harms food producers, is regressive (as it affects the poor more than the rich), and ultimately imposes a cost on many people who would not in fact be &#8220;guilty&#8221; of imposing costs on the healthcare system. An agent-based market addresses the same problem with bottom-up precision.</p><p>Instead of lobbying the government, your health insurer&#8217;s agent communicates with your advocate agent. 
It looks at your eating habits, calculates the projected future cost of your diet, and makes a simple offer: a significant, immediate discount on your monthly premium if you empower your agent to disincentivize high-sugar purchases. At that very moment of decision, the market responds. Acting like a hyper-alert Kirznerian entrepreneur spotting a profit opportunity, a soft drink company&#8217;s agent, to retain your business, might instantly propose a deep discount on a healthier drink.</p><p>Now consider smoking bans in public places. A simple free-market approach would let every restaurant or bar owner decide their own policy. But non-smokers value having a broad range of options for a night out; if smoking becomes the default, their social world narrows significantly. This loss of choice is a cost that a full-on ban tries to crudely handle. AI agent negotiation, however, allows for a more precise, Millian solution. Once again, we&#8217;re not banning the externality, but <em>pricing </em>it in. This price isn&#8217;t imposed by a committee of very smart policymakers sitting in a grey room in Westminster; it is discovered through voluntary, real-time negotiation. The choice remains with the individual, but it is now a truly informed choice, where the full costs and alternatives are transparent.</p><p>Another example can be seen in the rules we have on airplanes and the air we share with fellow passengers in this private space. Even in the absence of government rules, airlines generally have to come up with a generic policy that works tolerably for whoever they expect to be on a typical flight. 
During the COVID pandemic, even many people who wanted mask mandates for airplanes did not wear masks on flights themselves, since they judged the value of wearing a mask while others went unmasked to be minimal.</p><p>Similarly, airlines generally do not make accommodations for people sensitive to airborne allergens. Virgin Airlines can&#8217;t tell if your peanut allergy is life-threatening or just a mild inconvenience. To avoid opening the floodgates to thousands of hard-to-verify requests (&#8220;I&#8217;m allergic to perfume,&#8221; &#8220;I&#8217;m sensitive to blue lights&#8221;), they just make a simple, inflexible rule, like &#8220;we will serve nuts.&#8221; Much of this is, of course, due to an aversion to the seemingly inevitable lobbying for accommodations that would follow from conceding the principle. However, if flight policies are negotiated over by AI agents, we don&#8217;t have to choose between all or nothing on masking. We don&#8217;t have to rule out accommodations for people with allergen sensitivities for fear of frivolous requests; instead we move from all-or-nothing mandates to nuanced, negotiated outcomes, where the intensity of a person&#8217;s need is accurately represented and compensated.</p><p><strong>This agent-negotiated world delivers three principles essential to a free and effective society.</strong></p><p>First, <em>accountability</em>. A billionaire who wants to close a public beach for a private party faces a new constraint: his agent must make a public, auditable offer to every single person who would be deprived of access. The cost of his desire becomes explicit and traceable. Of course, he might still try the old route of bribing a bureaucrat in secret - but this parallel transparent market creates pressure and comparison points. 
When combined with AGI-enhanced governance (automated auditing, pattern detection for corruption etc.), the corrupt path becomes even more risky and costly.</p><p>Second, <em>the power of voluntary coalitions</em>. Today, diffuse interests are often ignored because the transaction costs of organizing are too high. A single person in a low-income neighborhood has little bargaining power. A multinational polluter is more likely to get away with building a monstrosity in a Brazilian favela than in the Hamptons, even if the true social cost is higher. But what if the agents of 10,000 residents, seeing a factory&#8217;s proposal to increase emissions, can form a bargaining coalition in a nanosecond? They can spontaneously band together and declare, &#8220;<em>Our collective price to accept this pollution is X million dollars, non-negotiable.</em>&#8221; They solve the collective action problem instantly, creating what is effectively a powerful digital union to counterbalance concentrated wealth.</p><p>Third, <em>continuous self-calibration</em>. Because every agent streams its user&#8217;s context-rich preferences into live markets, the rules themselves flex in real time. Noise caps, curb uses, even peak-hour electricity rates slide automatically as new bids and conditions roll in, rather than waiting for a city-council vote five years from now. Tacit desires, like how much quiet you need for a newborn&#8217;s nap or what premium you&#8217;d pay for a car-free street, become explicit, machine-readable prices. The system therefore functions as a permanent feedback loop. It detects mismatches between policy and lived reality, reprices the externality within seconds, and nudges behavior accordingly. 
Governance shifts from statute to thermostat: sometimes through formal institutions built on top of these agents, such as professional guilds, and sometimes through instantaneous ad-hoc &#8220;<em>flash coalitions</em>&#8221; - emergent order.</p><p></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!c2MD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!c2MD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png 424w, https://substackcdn.com/image/fetch/$s_!c2MD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png 848w, https://substackcdn.com/image/fetch/$s_!c2MD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png 1272w, https://substackcdn.com/image/fetch/$s_!c2MD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!c2MD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png" width="1456" height="156" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:156,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:30717,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/174819339?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!c2MD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png 424w, https://substackcdn.com/image/fetch/$s_!c2MD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png 848w, https://substackcdn.com/image/fetch/$s_!c2MD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png 1272w, https://substackcdn.com/image/fetch/$s_!c2MD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57fb696e-b520-422c-87bc-7705d8dc0bc1_1922x206.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p><h2><strong>So what&#8217;s the catch?</strong></h2><p>The Coasean vision amplified by AGI agents is powerful, but it&#8217;s not a panacea. 
Ronald Coase himself was no naive utopian, emphasizing that his theorem was a theoretical benchmark for a world without transaction costs, not a description of reality. In practice, the theorem has faced decades of rigorous critique from economists, legal scholars, and behavioral scientists, who argue that its assumptions crumble under real-world frictions. These limitations explain why Coasean bargaining rarely materializes <em>today</em>, leading societies to default to clumsy government interventions or inaction.</p><p>My response to this is twofold. First, the Coase theorem, when we try to apply it, forces us to<em> identify and analyze</em> the frictions, like imperfect information, legal costs, or strategic holdouts, that prevent efficient private solutions. This is not to say it solves everything, but it&#8217;s a powerful toolkit that prompts us to look for creative, private, and market-based solutions to problems where we might previously have considered only government regulation or violence. Second, many of these critiques ignore what governance technologies and institutional arrangements AGI can enable in the first place - and I think there are good reasons to think that this technology can help us bypass limitations that would otherwise block progress on cooperation.</p><p>It&#8217;s true that even with perfect agent coordination, there remains what Acemoglu calls the &#8216;<a href="https://economics.mit.edu/sites/default/files/publications/why-not-political-coase-theorem.pdf">political Coase theorem</a>&#8217; problem: those with political power cannot credibly commit to not exploiting that power tomorrow, since no external enforcer exists for contracts with the sovereign itself. <em>The sovereign can always renege</em>. This is a tricky challenge, but the agent system offers countermeasures. 
First, the transparency created by agent negotiations raises the political cost of expropriation or bribery: it&#8217;s harder to steal what is clearly priced and publicly recorded. Second, AGI must be deployed not just for market transactions, but to enhance institutional accountability. In other words, we should automate many aspects of how we govern. Automated auditing, real-time monitoring of regulatory capture, automated dispute resolution, automated public spending monitoring, and agent-based anti-corruption measures can all harden the governance mechanisms that constrain the arbitrary use of power. Institutions matter!</p><p>Just as agents can aggregate citizen preferences for market negotiations, they can also transform how the &#8220;machinery of government&#8221; itself operates. The information asymmetries and coordination failures that James C. Scott describes in <em>Seeing Like a State</em>, where central authorities operate with crude categories that miss local knowledge, can finally be resolved as well. On the &#8220;executive&#8221; side, agent networks can provide governments with high-resolution, real-time feedback about policy impacts, citizen preferences, and emerging problems. On the &#8220;civil&#8221; side, the automation of key protections against executive corruption, overreach, and misalignment protects people against the erosion of liberal democracy.</p><p>Here, I&#8217;ll explore some of the most salient critiques, preempt common objections to applying them in an AGI-agent context, and propose some countermeasures. The goal isn&#8217;t to dismiss the critiques but to show how agents can substantially mitigate them. This strengthens the case for a hybrid system: agents handling the micro-coordination, with carefully designed institutions (including the state) addressing the rest.</p><h3><strong>Zero transaction costs, really? 
And what about inequality?</strong></h3><p>Skeptics might say agents don&#8217;t eliminate costs entirely; they just shift them to compute overhead, data privacy setups, agent configurations, etc. This is true! But it also underestimates the scale of reduction. Agents aren&#8217;t burdened by human limitations like fatigue, bias in communication, logistical hurdles, social awkwardness, irrational decision making and so on. What costs $10,000 in legal fees today might cost pennies to compute tomorrow. Consider how a billionaire&#8217;s phone today is no more powerful or effective than yours.</p><p>Even so, you might reasonably think that this still creates inequity<em> in the short run</em>. To prevent cost barriers for low-income users, governments or philanthropies could provide baseline agent services (similar to public defenders) and guarantees. This is a small price to pay for the efficiencies gained by a system that otherwise promises to save society orders of magnitude more by slashing legal overhead, unlocking stalled projects, and turning countless externalities into win-win trades. In other words, underwriting entry-level agents for the poorest citizens is like funding public roads: a modest civic outlay that makes the whole market run faster, fairer, and vastly more productively.</p><p>In practice, this support might mean providing baseline agent services directly - or more likely, the necessary compute to level things up and ensure equitable participation. This is unlikely to be a large or growing cost over time: as agent tech commoditizes, these costs approach zero asymptotically. The model here could mirror school voucher systems like Sweden&#8217;s, where the government provides credits that ensure universal access to essential services while allowing choice and competition. 
Just as educational vouchers guarantee every child can attend school regardless of family income, &#8220;agent vouchers&#8221; or compute credits could ensure everyone can participate in democratic deliberation, access legal representation, or navigate essential government services. The key is targeting subsidies where they matter most for civic participation and fundamental rights - you&#8217;d want generous credits for democratic decision-making, healthcare choices, or educational planning, but not for negotiating garage parking disputes or lawn ornament preferences.</p><p>Alternatively, or complementarily, the system could employ direct redistribution in highly sensitive areas - providing everyone with a base allocation of compute credits or &#8220;agent wealth&#8221; to spend as they see fit. This approach avoids the paternalism of defining the above &#8220;essential services&#8221; centrally, which would recreate the very social planner problem we&#8217;re trying to avoid. Individuals could allocate their resources according to their own priorities rather than predetermined categories. A hybrid might work best: a universal basic compute allocation for personal use, plus additional targeted support for specific democratic and legal functions where equal participation is constitutionally guaranteed.</p><p>This tiered approach ensures equity where it counts without creating an unsustainable fiscal burden, while still allowing market dynamics to operate in less critical domains. In practice, however, this does mean a lot of infrastructure will be required: built-in protocols for multi-party discovery, for example. For high-volume scenarios, hierarchical agents could aggregate at neighborhood or city levels. 
But much of this will need to be designed as part of a wider push for improving institutional decision-making.</p><h3><strong>Which rights are the &#8216;default&#8217;?</strong></h3><p>Another important consideration here is that you still need an agreed &#8220;default position&#8221; - do people have a right to make noise, or a right to quiet? What&#8217;s the basic right that is being negotiated - the right to pollute, or the right to be free from pollution? The machinery runs either way. What does change is who ends up richer, which is why the initial allocation of these rights is a constitutional choice, not a technicality. Even if bargaining is cheap, outcomes aren&#8217;t invariant to initial property rights because wealth influences willingness to pay. A poor farmer might sell pollution rights cheaply to a rich factory not because it&#8217;s efficient, but because they need cash now. Moreover, changing who starts with the rights changes the wealth distribution, which affects what people can afford to bid and therefore changes which &#8216;efficient&#8217; outcome the market settles on. Beyond wealth effects, behavioral factors like the <a href="https://www.jstor.org/stable/2937761">endowment effect</a> - people demanding far more to give up a right than they&#8217;d pay to acquire it - make initial allocations stick even with perfect bargaining. Agents might correct for such biases, though whether we want them to &#8216;debias&#8217; negotiations or faithfully represent our psychological quirks remains an open design question.</p><p>So how do we ensure fairness without reverting to top-down control? What is the &#8220;default position&#8221; to start with? Well, that baseline of who starts with which entitlement is a normative, collective choice. Agents don&#8217;t magic it away; they only make it explicit, contestable, and cheap to renegotiate. 
My view here is that we already have many of these rights set up by centuries of jurisprudence, and this is the right starting point. To the extent that these need to change or adapt, our democratic political systems are the right mechanism to do so. The bad news is that these systems are now pretty ossified, slow, captured, and dysfunctional. The good news is that agents can improve them materially.</p><p>Beyond periodic voting on baseline entitlements, agents could fundamentally transform how citizens deliberate and exercise their democratic rights. Recent papers show that AI systems can learn and represent human preferences with remarkable efficiency. Studies like <a href="https://arxiv.org/pdf/2310.15428">ConstitutionMaker</a> show how natural language principles can be extracted from preference data, while <a href="https://arxiv.org/pdf/2406.06560">Inverse Constitutional AI</a> shows that just a handful of preferences can be compressed into interpretable principles that accurately reconstruct individual and group values. This suggests agents could continuously learn citizens&#8217; nuanced policy preferences through ongoing interactions, creating rich, privacy-protected preference profiles.</p><p>Currently, we delegate representation to biological agents - mayors, councilors, representatives - who operate within opaque, underfunded institutions plagued by accountability problems, information asymmetries, and the impossibility of truly representing thousands of diverse constituents. With agent infrastructure, we could significantly improve these systems. 
Imagine every citizen having a personal agent that deeply understands their values, engages in sophisticated policy deliberation on their behalf, and coordinates with millions of other agents to find optimal compromises in real time.</p><p>These agents wouldn&#8217;t just vote every few years but could participate in continuous liquid democracy, dynamically delegating decisions in specific domains to trusted experts, instantly aggregating or <a href="https://interestingessays.substack.com/p/social-preferences-are-constructed">constructing</a> preferences on emerging issues, and ensuring that policy truly reflects the evolving will of the people rather than the frozen snapshot captured at the last election. Of course, this risks enabling digital NIMBYism at unprecedented scale, and we certainly don&#8217;t want everyone&#8217;s agents micromanaging nuclear safety protocols or monetary policy - but these are mechanism design and governance challenges, not fundamental obstacles.</p><p>Today, citizens already don&#8217;t vote on every financial regulation or technical standard; agent-mediated democracy needn&#8217;t change that. To the extent that enhanced coordination could enable minorities to hold majorities hostage, we&#8217;ll need clever mechanisms to prevent such digital paralysis. There&#8217;s plenty of work ahead for policymakers, economists, evaluation designers, sociologists, and game theorists to get these institutional designs right!</p><p>Lastly, the system would also need to balance dynamism with stability. Markets require predictable rules, and constant renegotiation of property rights would destroy investment incentives. But just as options markets price volatility, an agent-mediated system could explicitly price the value of stability versus flexibility, letting some rules ossify by mutual agreement (basic property rights, contract enforcement) while others remain perpetually negotiable (noise ordinances, parking rules).
The agents themselves would likely converge on stable equilibria for most issues simply to reduce computational overhead - constant renegotiation is expensive even for AGI.</p><h3><strong>But what about catastrophic risks?</strong></h3><p>A lot of people working in AI governance are interested in catastrophic risks where a few actors can impose great harm on others at scale; many will rightly say &#8220;this all sounds great but doesn&#8217;t address CBRN risks.&#8221; They&#8217;re not wrong.</p><p>A malicious actor intending to release a pathogen is not a market participant to be bargained with, and admittedly, the agent system can do little to stop them. Instead, this is the state&#8217;s first and most important job: to enforce law and order and protect citizens from violence, whether from a foreign army or a domestic bioterrorist. The Coasean multi-agent framework relies on this protection to even exist in the first place: the state needs to enforce contracts. If the delivery truck&#8217;s agent agrees to the &#8220;clean air fee&#8221; but the company refuses to pay, there must be a court system: a neutral arbiter with the power to enforce the agreement. This is a non-negotiable role for the state.</p><p>In AI governance discussions, the aversion to the totalising, centralising proposals espoused by some communities has been met with the inverse prescription: various flavours of free-for-all e/acc libertarianism or anarchy. This falls into the opposite trap, and wrongly assumes you can do away with the state entirely. The Coasean framework does not eliminate the state, but it transitions its role from &#8220;central planner&#8221; to &#8220;framework guarantor,&#8221; focusing its power on what it alone can do. It allows the market, supercharged by agents, to handle the complex work of coordinating preferences and pricing externalities, a job the state has always done poorly.
This should in principle appeal to conservatives wary of big government and liberals wary of power abuses. But it doesn&#8217;t do away with the state, nor should it - it just makes it a lot leaner. The (gradually automated) state continues to define and enforce basic property rights, contract law, criminal justice, and constitutional rights - the bedrock rules without which agent negotiations would be meaningless.</p><p>So the Coasean multi-agent system, for all its genius, has a critical limit: it is designed to price trade-offs. It can put a price on diesel fumes, noise, or a blocked view. It cannot, however, price the non-negotiable. What happens if technology unlocks a true &#8220;recipe for ruin&#8221;? A discovery, like &#8220;easy nukes&#8221; or a simple method for creating a devastating pathogen, that allows a single actor to threaten civilization itself? This is not an externality to be bargained over!</p><p>Such a risk is a form of ultimate coercion, and its prevention falls squarely within the most fundamental duty of the state: protecting its citizens from violence. Therefore, the state&#8217;s role is not just to enforce the contracts that agents make, but to define the absolute boundaries of what they are permitted to do in the first place - prohibiting actions that create catastrophic, un-priceable risks, such as man-made pandemics. While the Coasean framework itself does not price these existential risks, the underlying cognitive infrastructure it creates is part of what a modern state may need to manage them: enabling the high-speed coordination and automated governance required to make the state a more effective protector.</p><h2><strong>Matryoshkan Alignment</strong></h2><p>So what does this mean for how we think about the normative/sociopolitical question of who agents should be aligned to? To whom, or what, is an agent ultimately loyal? Is it fully and solely aligned to the user, like a computer that will execute any command?
Or is it aligned to an amorphous set of collective values decided by some citizen jury? Or a global institution that sets top-down directives? Or is it all up to the model developer? As with humans, I think the answer is not a single master. The meta-framework is a series of nested layers of governance, like a set of Matryoshka dolls. This nested structure mirrors what Levin <a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2019.02688/full">calls</a> &#8220;scale-free cognition,&#8221; where each level of organization (from cells to tissues to organisms) maintains its own goals and decision-making capacity, with larger-scale goals emerging from but not replacing smaller-scale ones. There are many possible layers, but for the sake of simplicity I&#8217;ll outline three here.</p><p>The outermost, and largest, doll is the law. This is the non-negotiable boundary enforced by the state. An agent, no matter how personalized, cannot be a tool for committing crimes. Your agent cannot help you orchestrate fraud, DDoS a hospital, hire a hitman, or procure materials for a bioweapon, any more than your word processor can grant you immunity for writing a fraudulent cheque. To a large extent, existing laws already criminalize all of the above, although some will need updating to account for agentic capabilities, the difficulty of establishing a <em>mens rea</em>, the delineation of responsibilities across the &#8220;value chain,&#8221; and so on. There&#8217;s plenty of interesting work going on in legal circles trying to work this out, and legitimate arguments for why certain gaps may need to be filled.</p><p>Within that legal boundary operates the second layer: the free market of different services, deployers, products, and providers. A company offering an agent service is not the government; it is a voluntary association with its own rules. One social media site can cultivate a different environment from another, and users are free to choose.
If a provider&#8217;s agent refuses to engage with topics the company deems harmful to its brand or community, that is their right. A user who finds these policies too restrictive is not a captive; they are a customer who can, and will, take their business to a competitor offering more customization or utility. This competition will be the primary force pushing agents to become powerful and loyal advocates for their users. Today, we arguably have an increasing number of developers of general-purpose models of all kinds, costs are going down, and, importantly, many more actors are able to customize, fine-tune, and modify models deployed through cloud infrastructure. Network effects are also far weaker than in social media. And in a competitive market with switching costs approaching zero, parasitic agents get quickly identified and abandoned.</p><p>Finally, at the core, is the individual. Within the bounds of law and the terms of service you voluntarily accept, the agent&#8217;s purpose is to be your tireless, personal advocate. This is where the power of user-level customization and alignment is unleashed, where a private &#8220;cognitive DNA&#8221; can be grown. The user should have immense freedom to tune their agent to their unique preferences and values. They should also have complete privacy and control over their &#8220;cognitive profile&#8221; developed by the agent, for obvious reasons. Practically speaking, though, this is the hardest part: how do you design and evidence an agent (mostly) aligned to a user? How do you evaluate this, and the agent&#8217;s continuous learning?
We don&#8217;t need the perfect answer to these questions - <em>alignment is not something to be &#8220;solved.&#8221;</em></p><p>There are of course important <a href="https://arxiv.org/abs/2504.01849">technical</a> questions that are not fully addressed - the right norms, the right level of agreeableness, the right level of deference and corrigibility, fully addressing reward hacking, ensuring agents aren&#8217;t deceptive, the right evaluations to test for user alignment, and more. Few of these have a single right answer, however, and markets are <em>generally</em> fairly well incentivised to solve them - no company or person wants a reward-hacking agent. My intention here is not to dismiss them - rather, I think the way the &#8220;alignment problem&#8221; is often conceptualized is out of date and comparable to asking &#8220;<em>how do we ensure what is written always leads to truth? How do we solve the &#8216;truth problem&#8217;?</em>&#8221; after the invention of writing or typewriters. There <em>isn&#8217;t</em> and <em>cannot</em> be any guarantee. In fact, the starting point should be reversed; as Dan Williams <a href="https://www.conspicuouscognition.com/p/why-do-people-believe-true-things">notes</a>, the real question should be &#8220;<em>why do we even have truth at all?</em>&#8221; This is a question of institutions and governance, and not one solved by software engineering. It&#8217;s an unsatisfactory answer only for those seeking centralized guarantees. <em>You mean we&#8217;re going to have to muddle through things? </em>Yes. As <a href="https://arxiv.org/abs/2505.05197">Leibo et al</a> put it, we should model societal and technological progress as sewing an &#8220;ever-growing, ever-changing, patchy, and polychrome quilt.&#8221;</p><p>What we need is to ensure that agents that genuinely serve their users&#8217; interests outcompete those that don&#8217;t, and to build the right governance mechanisms.
From a commercial point of view, these agents won&#8217;t just be adopted and used by everyone out of the box. They need to actually produce value for their principals too. People will want AIs for financial planning, managing schedules and calendars, negotiating with roommates, finding romantic partners, etc. I expect this adoption to begin in the immediate sphere: agents negotiating the office thermostat, allocating shared resources in an apartment block, or fairly dividing household chores. As these systems prove their worth in reducing daily friction, people will trust them more. The same mechanisms used to settle a parking dispute can be adapted and scaled up to manage urban planning conflicts or discover the true cost of local externalities in other contexts. Eventually, this bottom-up architecture provides a credible pathway to solving the grand challenges, from funding national public goods to perhaps one day even navigating the complexities of interstate disputes.</p><p>In some cases, you may need more than market forces. Just as public defenders ensure legal representation regardless of ability to pay, we may need guaranteed access to advocacy agents through voucher systems, <a href="https://meaningalignment.substack.com/p/market-intermediaries-a-post-agi">market intermediaries</a>, &#8220;right to an agent&#8221; provisions, oversight mechanisms for automated governance, and so on. For the most part, though, users will gravitate toward agents that actually help them achieve their goals. Providers whose agents consistently deliver <em>value</em> will gain market share. Markets remain one of the most powerful forces discovered to date, and with agents, we can surely improve both the rules governing them (e.g. through political agents and automated governance) and the mechanisms that ensure their efficiency (e.g.
through Coasean bargaining agents).</p><p>The vision presented here shifts the locus of governance from centralized coercion to decentralized negotiation. AGI agents can help us create a vastly more efficient, accountable, and adaptable society. There is no need to centralize all the labs into a government monopoly, nor should we just accelerate aimlessly and do away with the state. And unlike solving alignment for a singular, centralized AGI (where failure is catastrophic), the distributed model of millions of user-agent relationships creates a massive parallel experiment. It is a system that learns, adapts, and continuously aligns itself over time, allowing us to build a society that is both more free and more coordinated than anything that has come before.</p><p>***</p><p>Thanks to Nathaniel Bechhofer, Roberto-Rafael Maura-Rivero, Lee B. Cyrano, Andrew Cordington, Max Nadau, Ivan Vendrov, Alex Obadia, Benoit Lepine, Ryan Murphy, Harry Law, Conor Griffin, Lumpenspace and Benjamin Lyons for comments.
</p>]]></content:encoded></item><item><title><![CDATA[What will AI look like in 2030?]]></title><description><![CDATA[AI could improve productivity in valuable areas such as scientific R&D, as investments and energy requirements grow]]></description><link>https://www.aipolicyperspectives.com/p/what-will-ai-look-like-in-2030</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/what-will-ai-look-like-in-2030</guid><dc:creator><![CDATA[David Owen]]></dc:creator><pubDate>Wed, 17 Sep 2025 13:39:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jz_k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Today we have a guest post from David Owen at Epoch AI. It summarises a new report that David authored looking at trends in AI scaling and the implications for scientific R&amp;D. The report, and this summary, is available to view on<a href="https://epoch.ai/blog/what-will-ai-look-like-in-2030"> the Epoch website</a>. This report was commissioned from Epoch AI by Google DeepMind. 
All points of view and conclusions expressed are those of the authors and do not necessarily reflect the position or endorsement of Google DeepMind.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jz_k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jz_k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png 424w, https://substackcdn.com/image/fetch/$s_!jz_k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png 848w, https://substackcdn.com/image/fetch/$s_!jz_k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png 1272w, https://substackcdn.com/image/fetch/$s_!jz_k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jz_k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png" width="1456" height="1094" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1094,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jz_k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png 424w, https://substackcdn.com/image/fetch/$s_!jz_k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png 848w, https://substackcdn.com/image/fetch/$s_!jz_k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png 1272w, https://substackcdn.com/image/fetch/$s_!jz_k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3daf3c57-1da6-4bfa-b150-75183ef85ce4_1600x1202.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>What will happen if AI scaling persists to 2030? We are releasing a report that examines what this scale-up would involve in terms of compute, investment, data, hardware, and energy.
We further examine the future AI capabilities this scaling will enable, particularly in scientific R&amp;D, which is a focus for leading AI developers. We argue that AI scaling is likely to continue through 2030, despite requiring unprecedented infrastructure, and will deliver transformative capabilities across science and beyond.</p><p><strong>Scaling is likely to continue until 2030:</strong> On current trends, frontier AI models in 2030 will require investments of hundreds of billions of dollars, and gigawatts of electrical power. Although these are daunting challenges, they are surmountable. Such investments will be justified if AI can generate corresponding economic returns by increasing productivity. If AI lab revenues keep growing at their current rate, they would generate returns that justify hundred-billion-dollar investments in scaling.</p><p><strong>Scaling will lead to valuable AI capabilities: </strong>By 2030, AI will be able to implement complex scientific software from natural language, assist mathematicians formalising proof sketches, and answer open-ended questions about biology protocols. All of these examples are taken from existing AI benchmarks showing progress, where simple extrapolation suggests they will be solved by 2030. 
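</p><p><em>To make &#8220;simple extrapolation&#8221; concrete, here is a minimal sketch of one common approach - fit a linear trend to top benchmark scores in logit space and solve for the year the trend crosses a target. The data points below are invented for illustration and are not taken from the report.</em></p>

```python
# Illustrative benchmark extrapolation (hypothetical scores, not report data):
# fit a least-squares line through (year, logit(score)) and invert the fit.
import math

points = [(2022, 0.15), (2023, 0.30), (2024, 0.48), (2025, 0.65)]

def logit(p):
    return math.log(p / (1 - p))

years = [yr for yr, _ in points]
scores = [logit(s) for _, s in points]
n = len(points)
ybar = sum(years) / n
sbar = sum(scores) / n
slope = (sum((y - ybar) * (s - sbar) for y, s in zip(years, scores))
         / sum((y - ybar) ** 2 for y in years))
intercept = sbar - slope * ybar

def crossing_year(target):
    """Year at which the fitted logit trend reaches `target` accuracy."""
    return (logit(target) - intercept) / slope

print(round(crossing_year(0.90), 1))  # the fitted trend crosses 90% around 2027
```

<p>A real forecast would pool many models and benchmarks, and logit-linear fits can fail badly out of distribution - but this is the basic shape of the exercise.</p><p>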
We expect AI capabilities will be transformative across several scientific fields, although it may take longer than 2030 to see them deployed to full effect.</p><p>We discuss some of the report&#8217;s findings below.</p><h1>Scaling is likely to continue to 2030</h1><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!haMO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!haMO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png 424w, https://substackcdn.com/image/fetch/$s_!haMO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png 848w, https://substackcdn.com/image/fetch/$s_!haMO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png 1272w, https://substackcdn.com/image/fetch/$s_!haMO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!haMO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png" width="1456" height="892" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:892,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!haMO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png 424w, https://substackcdn.com/image/fetch/$s_!haMO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png 848w, https://substackcdn.com/image/fetch/$s_!haMO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png 1272w, https://substackcdn.com/image/fetch/$s_!haMO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8de010e-2792-4e5c-a1ba-5d9398c2eace_1600x980.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 
7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>On current trends, the clusters used for training frontier AI would cost over $100B by 2030. Such clusters could support training runs of about 10^29 FLOP &#8211; a quantity of compute that would have required running the largest AI cluster of 2020 continuously for over 3,000 years. AI models trained on such clusters would use thousands of times more compute than GPT-4, and require gigawatts of electrical power.</p><p>This exemplifies a repeating pattern in our findings: if today&#8217;s trends continue, they will lead to extreme outcomes. Should we believe they will continue? Over the past decade, extrapolation has been a strong baseline, and when we investigate arguments for a forthcoming slowdown, they are often not compelling.</p><p>Below we recap some of these arguments, and our conclusions from the report:</p><ul><li><p><strong>Scaling could &#8220;hit a wall&#8221;</strong>, i.e. AI systems might fail to improve with further scaling. 
But recent AI models have seen large improvements on <a href="https://epoch.ai/data-insights/gpt-capabilities-progress">benchmarks</a> and <a href="https://epoch.ai/data-insights/ai-companies-revenue">revenue</a>. This could happen, but there isn&#8217;t obvious evidence of it yet.</p></li><li><p><strong>Data stocks for training might be used up.</strong> But there is <a href="https://epoch.ai/blog/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data">enough public human-generated text to scale to at least 2027</a>, and <a href="https://epoch.ai/blog/can-ai-scaling-continue-through-2030#synthetic-data">synthetic data</a> can be generated in large quantities, and its usefulness is better-established after the invention of reasoning models. It is difficult to fully rule out a data bottleneck, but it seems surmountable.</p></li><li><p><strong>Scaling could be bottlenecked by electrical power.</strong> If scaling continues, frontier training runs will require <a href="https://epoch.ai/blog/power-demands-of-frontier-ai-training">gigawatts by 2030</a>. This would be difficult to supply, but there are ways to rapidly scale up power delivery, such as solar and batteries, or off-grid gas generation. Moreover, frontier AI training runs are already beginning to be geographically distributed across multiple datacentres, which would temper the challenges. Electrical power is unlikely to be a bottleneck before 2028, and seems solvable even after that.</p></li><li><p><strong>Scaling could become too expensive and AI developers stop investing. </strong>This is certainly possible, but so far there is little sign of it. If AI developers&#8217; revenues continue to grow on recent trends, they would match the $100B+ investments we extrapolate for frontier training in 2030. 
AI revenues growing to hundreds of billions may seem extreme, but if AI could improve productivity in a significant fraction of work tasks, it could be worth trillions of dollars.</p></li><li><p><strong>AI development could shift to focus on more efficient algorithms.</strong> But algorithmic efficiency has <em>already</em> been improving within the existing compute growth. There is no particular reason to expect algorithmic progress will accelerate, and even if it did, this seems likely to <a href="https://epoch.ai/gradient-updates/algorithmic-progress-likely-spurs-more-spending-on-compute-not-less">encourage using </a><em><a href="https://epoch.ai/gradient-updates/algorithmic-progress-likely-spurs-more-spending-on-compute-not-less">more</a></em><a href="https://epoch.ai/gradient-updates/algorithmic-progress-likely-spurs-more-spending-on-compute-not-less"> compute</a>.</p></li><li><p><strong>AI companies could reallocate compute to inference</strong>, e.g. for running reasoning models and other products. But currently training and inference receive comparable compute, and there are reasons to expect <a href="https://epoch.ai/blog/optimally-allocating-compute-between-inference-and-training">training and inference should scale up together</a>. Scaling training creates better AI models, which will be able to do more valuable inference tasks, more affordably. There might be a shift to inference, but it seems unlikely that inference scale-up would slow training scaling.</p></li></ul><p>In light of the above, we believe that extrapolating present trends to 2030 is a strong baseline prediction. And if they do continue, that allows us to extrapolate AI capabilities, which we discuss below.</p><h1>AI will accelerate scientific R&amp;D across several domains</h1><p>In the report, we also examine concrete examples of how AI could improve productivity. 
We focus on scientific R&amp;D, which is a declared focus of several leading AI developers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Capability trends suggest there will be tremendous progress in AI for scientific R&amp;D, particularly in areas such as software engineering and mathematics, where realistic tasks can be trained on entirely in silico. By 2030, existing benchmark progress suggests AI will be able to implement complex scientific software from natural language, assist mathematicians in formalising proof sketches, and answer complex questions about biology protocols.</p><p>By 2030, we predict that many scientific domains will have AI assistants comparable to coding assistants for software engineers today. There will be differences from software engineering, for example a greater focus on reviewing and synthesising a large and heterogeneous literature, whereas existing AI coding tools are primarily limited to the context of a single project. Nevertheless, there are important similarities: offering suggestions in response to context, finding relevant information, and completing smaller, closed-ended tasks in their entirety.</p><p>We predict this would eventually lead to a 10-20% productivity improvement within tasks, based on the example of software engineering.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Even if the work tasks of a mathematician or a theoretical biologist are less amenable to automation than those of a software engineer, we already see relevant benchmarks improving, and anticipate many more years of progress still to come. 
We expect AI capabilities will be transformative across several scientific fields, although it may take longer than 2030 to see them deployed to full effect.</p><p>We present four examples from the report below: software engineering, mathematics, molecular biology, and weather prediction. Although the selected benchmarks can&#8217;t capture the full scope of challenges in each domain, they offer insight into AI&#8217;s increasing capabilities, and the tasks that may soon be automatable. Scores are collected from leaderboards and model cards, limiting fits to top-performing models.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tVJK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tVJK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png 424w, https://substackcdn.com/image/fetch/$s_!tVJK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png 848w, https://substackcdn.com/image/fetch/$s_!tVJK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png 1272w, https://substackcdn.com/image/fetch/$s_!tVJK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!tVJK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png" width="1456" height="1010" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1010,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tVJK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png 424w, https://substackcdn.com/image/fetch/$s_!tVJK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png 848w, https://substackcdn.com/image/fetch/$s_!tVJK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png 1272w, https://substackcdn.com/image/fetch/$s_!tVJK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3201263-1438-42d7-baff-6f5d5b355c1e_1600x1110.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">SWE-Bench-Verified: a coding benchmark based on solving real-world GitHub issues with associated unit tests. Results include those reported from model cards, including those with private methodology such as Claude Sonnet 4.                                                                                                                                                   RE-Bench: a research engineering benchmark based on tasks similar to take-home assessments for job candidates, taking approximately eight hours for humans.</figcaption></figure></div><p><strong>Software engineering:</strong> AI is already transforming software engineering through code assistants and question-answering. 
By 2030, on current trends, AI will be able to autonomously fix issues, implement features, and solve difficult (but well-defined) scientific programming problems.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!k26Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!k26Q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png 424w, https://substackcdn.com/image/fetch/$s_!k26Q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png 848w, https://substackcdn.com/image/fetch/$s_!k26Q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png 1272w, https://substackcdn.com/image/fetch/$s_!k26Q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!k26Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png" width="1456" height="1010" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1010,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!k26Q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png 424w, https://substackcdn.com/image/fetch/$s_!k26Q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png 848w, https://substackcdn.com/image/fetch/$s_!k26Q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png 1272w, https://substackcdn.com/image/fetch/$s_!k26Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc04e41ed-b8e4-46b9-9354-b2f9c858dd7e_1600x1110.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Results show general-purpose LLMs only, excluding domain-specific systems like AlphaProof and AlphaGeometry2.                                                                                                                                                                                                                   AIME: a high school mathematics exam used for determining entry to the US Mathematical Olympiad, integer answers.                                                                                                                                                                                                             USAMO: US Mathematical Olympiad, a high school mathematics exam with proof-based answers.                                                                                                                                                                                                                              
FrontierMath: a mathematics benchmark focused on challenging questions up to expert level, but still offering straightforwardly-verifiable answers (numeric or simple expressions).</em></figcaption></figure></div><p><strong>Mathematics:</strong> AI may soon act as a research assistant, fleshing out proof sketches or intuitions. Early accounts already document AI being helpful in mathematicians&#8217; work. Notable mathematicians differ greatly in how relevant they think existing mathematical AI benchmarks are for their work, as well as in their predictions for how soon AI will be able to develop mathematical results autonomously, rather than as an assistant.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Xj1v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Xj1v!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png 424w, https://substackcdn.com/image/fetch/$s_!Xj1v!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png 848w, https://substackcdn.com/image/fetch/$s_!Xj1v!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Xj1v!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Xj1v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png" width="1456" height="1010" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1010,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Xj1v!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png 424w, https://substackcdn.com/image/fetch/$s_!Xj1v!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png 848w, https://substackcdn.com/image/fetch/$s_!Xj1v!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Xj1v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7046d7b-7aa6-4770-9f42-f4a511fd7f5c_1600x1110.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">PoseBusters-v2: a benchmark for protein-ligand docking (spatial interaction).We only include blind results, where the protein&#8217;s binding pocket is not provided.                                                                                                                                                                                      
ProtocolQA: a benchmark for questions about biology wet lab protocols, here evaluated without multiple-choice answers. Protein-protein interactions: there is significant progress in predicting protein-protein interactions, but predictions for arbitrary pairs have a high false positive rate. Our illustration of progress is highly uncertain, and would depend on benchmark details.</figcaption></figure></div><p><strong>Molecular biology:</strong> Public benchmarks for protein-ligand interaction, such as PoseBusters, are on track to be solved in the next few years, although the timeline is longer (and uncertain) for prediction of arbitrary protein-protein interactions. Meanwhile, AI desk-research assistants for biology R&amp;D are coming. Existing biology protocol question-answering benchmarks should be solved by 2030. 
While these benchmarks don't represent the full scope of challenges in molecular biology, their trends offer a specific window into AI's growing capabilities in the field.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cGZZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cGZZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png 424w, https://substackcdn.com/image/fetch/$s_!cGZZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png 848w, https://substackcdn.com/image/fetch/$s_!cGZZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png 1272w, https://substackcdn.com/image/fetch/$s_!cGZZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cGZZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png" width="1456" height="731" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:731,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cGZZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png 424w, https://substackcdn.com/image/fetch/$s_!cGZZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png 848w, https://substackcdn.com/image/fetch/$s_!cGZZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png 1272w, https://substackcdn.com/image/fetch/$s_!cGZZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fe6267-018e-4e4d-847e-7ae30ac17f11_1600x803.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Weather prediction:</strong> AI weather prediction can already improve on traditional methods at lead times from hours up to weeks. Moreover, AI methods are cost-effective to run, and could improve further with more data. The next challenges lie in improving existing predictions, especially for rare events, and in making use of improved predictions to achieve benefits in the wider world.</p><p>A recurring theme in the work is that deployment and societal impact may significantly lag capabilities. For example, compared to pharmaceutical R&amp;D, software engineering has shorter iteration cycles, does not require wet lab experiments or clinical trials, rarely involves safety-critical systems, is often easy to check for approximate correctness, and has abundant training data. For these reasons, we expect that few, if any, of the drugs approved for sale by 2030 will have benefited from today&#8217;s AI tools, let alone those of 2030. However, early-stage development is likely to be seeing significant effects from AI by then. 
In comparison, we expect software engineering will have changed dramatically, with a flourishing of software for scientific R&amp;D and beyond.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><h1>Conclusion</h1><p>By 2030, AI is likely to become a key technology across the economy, present in every facet of people&#8217;s interaction with computers and mobile devices. If these predictions come to pass, it is vitally important that key decision-makers prioritise AI issues as they navigate the next five years and beyond.</p><p>To learn more, read the <a href="https://epoch.ai/files/AI_2030.pdf">full report</a> at Epoch AI&#8217;s website.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Of course, there is much more to the economy than scientific R&amp;D. 
Much of AI&#8217;s economic impact could come from <a href="https://epoch.ai/gradient-updates/most-ai-value-will-come-from-broad-automation-not-from-r-d">broad automation of many tasks across the economy</a>. However, scientific R&amp;D tasks are more likely to have benchmarks, tend to be high-value, tend to see rapid technology adoption, and attract a lot of dedicated research. And, as mentioned, scientific R&amp;D is an explicit focus of leading AI labs. We therefore expect R&amp;D tasks to be a useful testbed for examining AI capabilities.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This prediction comes with significant uncertainty, even within software engineering itself. In a recent <a href="https://arxiv.org/abs/2507.09089">study</a> of AI&#8217;s effects on software engineering, a literature review identified seven empirical studies. Six of the seven found 20-70% speed-ups or increases in output. The remaining study found a surprising 20% slowdown, although it arguably has the most thorough methodology. We take a 20% productivity improvement as the starting point for the effect of current AI tools, but we caveat that there is considerable uncertainty in the current evidence.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>For software and biology, and in general, substantial technological change makes our predictions increasingly uncertain. 
For example, entirely new biomedical processes might be facilitated by AI design and organisation, conceptually similar to processes such as mRNA vaccines, which can be safely renewed year-to-year without having to undergo approval from scratch.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Three challenges facing compute-based AI policies]]></title><description><![CDATA[&#8220;Training compute&#8221; is constantly evolving, and compute-based AI policies must adapt to remain relevant]]></description><link>https://www.aipolicyperspectives.com/p/three-challenges-facing-compute-based</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/three-challenges-facing-compute-based</guid><dc:creator><![CDATA[Venkat Somala]]></dc:creator><pubDate>Fri, 12 Sep 2025 09:45:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4aLY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This week&#8217;s post is a collaboration between <a href="https://www.aipolicyperspectives.com/">AI Policy Perspectives</a> and Epoch AI&#8217;s (excellent!) 
<a href="https://epochai.substack.com/s/gradient-updates">Gradient Updates</a> newsletter.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4aLY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4aLY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!4aLY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!4aLY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!4aLY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4aLY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png" width="1280" height="720" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4aLY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!4aLY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!4aLY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!4aLY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad225db9-f74b-472f-b277-b36d371dac9c_1280x720.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Venus Krier </figcaption></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Policy Perspectives. Subscribe for free to all future posts - lots more in the pipeline! </p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>When the EU AI Act was drafted, pre-training compute was a reasonable proxy for model capabilities. 
At the time, pre-training accounted for 90-99% of total training compute, and the relationship was relatively reliable: more compute meant larger models pre-trained on more data, which consistently translated to stronger capabilities.</p><p>This simple proxy has been steadily breaking down. While pre-training compute remains a primary driver of capabilities, modern AI development leans heavily on distillation, synthetic data generation, reward models, and reasoning post-training. These methods can consume significant compute and drive capability gains, yet are often unaccounted for in current regulatory frameworks.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>The standard approach for measuring compute, used by the now-defunct Biden AI executive order, is to sum compute across two stages: "pre-training" and "post-training." If the sum crosses some predefined threshold, the model is subject to additional scrutiny.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> But as training methods continue to evolve, this metric risks measuring an increasingly narrow slice of the factors that produce advanced capabilities. 
This would turn it into a poor proxy for the capabilities that these policies aim to govern.</p><p>The current approach faces three main challenges:</p><ol><li><p><strong>Not all uses of compute contribute equally to model capabilities: </strong>For example, recent &#8220;reasoning training&#8221; methods during post-training often yield higher capability gains per unit of compute than other post-training interventions.</p></li><li><p><strong>AI labs can use compute for methods beyond pre/post-training: </strong>Besides pre-training and post-training, compute-intensive techniques like distillation and reward model training also directly impact model capabilities.</p></li><li><p><strong>When deployed, an AI model's downstream capabilities depend on more than the compute used to train it: </strong>Downstream model capabilities are heavily influenced by the tools and systems available during deployment, such as coding environments and web search access.</p></li></ol><p>These challenges mean that current compute-based AI policies are built on increasingly unreliable proxies for model capabilities. This doesn&#8217;t necessarily make such policies moot - they still offer key advantages that many other approaches lack. But compute metrics may need to be periodically assessed to ensure they capture the main drivers of capabilities, and updated when they no longer do so. Furthermore, it could help to research better metrics, complement compute metrics with a broader evaluation regime, and focus on governing AI applications or organizations. </p><h2>1. Not all uses of compute contribute equally to model capabilities</h2><p>Training frontier large language models can roughly be broken down into several distinct stages. The first is &#8220;pre-training,&#8221; where models are trained to predict the next token in a large corpus of text, imbuing them with the ability to output sentences in natural language.
The second is &#8220;post-training,&#8221; where various additional techniques are implemented to refine the model. These can help improve the model&#8217;s reasoning abilities or prevent the model from responding to user requests in malicious ways.</p><p>Importantly, these different training techniques have varying compute-to-capability profiles. The most pertinent example of this is reasoning training, a type of post-training that helps language models develop more sophisticated problem-solving and reasoning abilities. This typically involves multi-stage pipelines that combine techniques like supervised fine-tuning and reinforcement learning (RL), and allows models to think for longer (&#8220;inference scaling&#8221;).</p><p>In current language models released by frontier AI labs, it&#8217;s likely that reasoning training yields higher marginal capability returns than pre-training. We can see this by looking at major model releases over the last year.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Consider the language model GPT-4o and its counterpart o1-high, which has undergone additional reasoning training and inference scaling. Despite using a small amount of extra compute<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>, performance jumps from 48% to 77% on the <a href="https://epoch.ai/benchmarks/gpqa-diamond">GPQA Diamond</a> benchmark, and from 50% to 95% on <a href="https://epoch.ai/benchmarks/math-level-5">MATH Level 5</a>. While the effectiveness of reasoning training relies on a strong pre-trained base model, once that foundation exists, the <a href="https://epoch.ai/gradient-updates/quantifying-the-algorithmic-improvement-from-reasoning-models">marginal returns per FLOP are much higher than further pre-training</a>. 
Had OpenAI instead used the same amount of additional compute to simply continue pre-training, <a href="https://epoch.ai/gradient-updates/quantifying-the-algorithmic-improvement-from-reasoning-models">the performance would have barely changed</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NC04!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NC04!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png 424w, https://substackcdn.com/image/fetch/$s_!NC04!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png 848w, https://substackcdn.com/image/fetch/$s_!NC04!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png 1272w, https://substackcdn.com/image/fetch/$s_!NC04!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NC04!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png" width="1456" height="1152" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1152,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NC04!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png 424w, https://substackcdn.com/image/fetch/$s_!NC04!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png 848w, https://substackcdn.com/image/fetch/$s_!NC04!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png 1272w, https://substackcdn.com/image/fetch/$s_!NC04!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d142c3d-97a1-41c3-b5a9-4c2d33c80562_1600x1266.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>o1-high substantially outperforms GPT-4o on GPQA Diamond and MATH level 5 after a small amount of reasoning training. If this additional compute used for o1 post-training had instead been spent on further pre-training GPT-4o, the capability gains would have been negligible. This suggests that reasoning training FLOP yield far higher marginal returns than pre-training at this scale.</em></figcaption></figure></div><p>When governance efforts aggregate training FLOP into a single figure, they risk missing this outsized impact per FLOP and underestimating a model&#8217;s true capabilities. Importantly, this effect could grow as the share of compute dedicated to reasoning training increases. In the example above, the amount of compute allocated to reasoning training for o1-high was likely less than <a href="https://epoch.ai/gradient-updates/how-far-can-reasoning-models-scale">10% of the pre-training compute</a> for GPT-4o. 
But reasoning training compute has grown roughly <a href="https://epoch.ai/gradient-updates/how-far-can-reasoning-models-scale">10&#215; every three to five months</a>, far outpacing the <a href="https://epoch.ai/trends">4-5x annual growth in pre-training compute</a>, and could soon constitute the majority of total training compute in frontier models.</p><p>As this happens, a raw sum of pre-training and post-training compute risks becoming a progressively worse proxy for model capabilities. If post-training continues to yield a strong compute-to-capability ratio at higher compute scales, a future model could be trained with a fraction of the compute specified by existing thresholds, yet still achieve capabilities that far exceed those that the threshold was intended to capture.</p><h2>2. AI labs can use compute for methods besides pre/post-training</h2><p>Even if we properly account for the relative importance of pre-training and post-training compute for increasing capabilities, issues remain. One such issue is that there are other uses of compute that matter for model performance, but are nevertheless missed by existing compute metrics. In this section we consider three examples: knowledge distillation, synthetic data generation, and reward models.</p><h3>Knowledge distillation</h3><p>The first overlooked method is knowledge distillation. In this process, a smaller "student" model learns from a larger "teacher" model by being trained to mimic the teacher's intermediate outputs (&#8220;logits&#8221;). With a strong teacher, this method can deliver higher marginal capability per student FLOP than doing standard pre-training on the same student model.</p><p>The efficiency gains from distillation can be dramatic. 
<a href="https://arxiv.org/pdf/1910.01108v4">DistilBERT</a>, an early demonstration of this technique, preserved 97% of its teacher model's capabilities while using 40% fewer parameters and requiring merely 3% of the compute budget that went into training the original BERT model.</p><p>Given the setup, we need to account for three sources of compute: the student&#8217;s training, the teacher&#8217;s training, and the teacher&#8217;s generation of logits. However, current policies typically only count the compute from the student run, ignoring the upstream teacher compute that largely determines what the student model can learn.</p><p>Distilling models is increasingly becoming standard practice among frontier AI labs, serving smaller, more efficient models that are distilled from a teacher model. For example, Meta <a href="https://ai.meta.com/blog/llama-4-multimodal-intelligence/">trained</a> the mid- and small-sized Llama 4 models by distilling them from the larger "Behemoth" model.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> The reason for distillation is straightforward: running smaller models is typically faster and requires fewer computational resources. So as this practice becomes increasingly common at the frontier, current regulatory compute metrics risk underestimating model capabilities.</p><h3>Synthetic data generation</h3><p>Related to distillation, frontier language models are increasingly being trained on output text generated by other language models.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> If a lot of synthetic data is generated, this can constitute a pretty substantial fraction of total training compute. 
For example, generating synthetic data for the phi-4 model comprised ~25% of its pre-training compute budget, leveraging a separate stronger model (GPT-4o) to produce the training material.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>While the EU AI Act does <a href="https://artificialintelligenceact.eu/recital/111/">include synthetic data generation</a> in training compute, the Biden AI Executive Order did not. Either way, synthetic data pipelines are quickly evolving in ways that simply counting FLOP may not fully capture. For example, Kimi K2 used models to <a href="https://arxiv.org/pdf/2507.20534v1">generate entire post-training environments</a> with tool specifications, tasks, evaluation rubrics, etc. These kinds of environments structure how models learn, yet the compute behind them isn&#8217;t neatly captured by current FLOP-counting rules. Models can also be used extensively for data curation, such as by assessing data quality, filtering, and augmenting data. This could mean taking a basic math problem and generating individual reasoning steps. These examples illustrate how current compute metrics may miss the full scope of what &#8220;synthetic data&#8221; entails.</p><h3>Reward models</h3><p>Reward models are another compute investment that regulatory frameworks miss. These are individual models that are trained to provide feedback signals for RL algorithms. The quality of these feedback signals directly determines RL effectiveness and what capabilities the models ultimately gain, yet the compute invested in building reward models remains entirely unaccounted for. 
This process is computationally intensive: labs might generate millions of model responses to prompts, requiring substantial inference and training compute.</p><p>We saw the importance of reward models in this year&#8217;s International Mathematical Olympiad, where AI systems from OpenAI, Google DeepMind, and Harmonic all won gold medals. Notably, OpenAI researchers <a href="https://x.com/alexwei_/status/1946477742855532918">credited</a> their success to a robust &#8220;<a href="https://www.theinformation.com/articles/universal-verifiers-openais-secret-weapon?rc=spkbjw">universal</a> <a href="https://www.theinformation.com/articles/inside-openais-rocky-path-gpt-5?rc=spkbjw">verifier</a>&#8221; and reward models. But despite this importance, the substantial computational investment in training and running these reward models remains invisible to current regulatory compute metrics.</p><h3>The diversity in training methods challenges standardized compute metrics</h3><p>The three examples we&#8217;ve seen point towards the complicated reality where there are many different forms of training compute. Each of these implementations represents just one approach among many possible variations. For instance, distillation alone can include progressive distillation, multi-teacher setups, and self-distillation, while synthetic data generation can be everything from simple augmentation to complex multi-agent simulations. Each variation has potentially different compute-to-capability profiles, further complicating any attempt to create standardized compute metrics.</p><p>Beyond the broad set of techniques, we also need to understand how these training methods interact with each other. For example, the compute-to-capability profile of reasoning training could depend on the strength of the pre-trained model and the reward model. 
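These interactions can be pictured as a dependency graph of compute-consuming stages. A sketch with hypothetical stage names and FLOP figures, showing how the compute attributable to a released model fans out well beyond its final training run:

```python
# A training pipeline as a dependency graph of compute-consuming stages.
# Stage names and FLOP figures are hypothetical.

PIPELINE = {
    # stage: (direct FLOP, upstream stages it depends on)
    "released_model_rl":   (2e24, ["base_pretrain", "reward_model"]),
    "reward_model":        (1e24, ["preference_data_gen"]),
    "preference_data_gen": (5e23, ["base_pretrain"]),
    "base_pretrain":       (2e25, []),
}

def attributable_flop(stage: str, seen=None) -> float:
    """Sum a stage's direct compute plus all transitive upstream compute."""
    if seen is None:
        seen = set()
    if stage in seen:
        return 0.0  # count shared upstream stages only once
    seen.add(stage)
    direct, deps = PIPELINE[stage]
    return direct + sum(attributable_flop(d, seen) for d in deps)

print(f"{attributable_flop('released_model_rl'):.3g}")  # 2.35e+25
```

Even this toy graph shows why a flat pre/post-training split is lossy: the reward model and its preference data sit on the path to the released model but in neither bucket.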
As the complexity of training pipelines grows, we need to account for more and more of these interactions.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!l1_t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!l1_t!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png 424w, https://substackcdn.com/image/fetch/$s_!l1_t!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png 848w, https://substackcdn.com/image/fetch/$s_!l1_t!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!l1_t!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!l1_t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png" width="1456" height="1480" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1480,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!l1_t!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png 424w, https://substackcdn.com/image/fetch/$s_!l1_t!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png 848w, https://substackcdn.com/image/fetch/$s_!l1_t!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!l1_t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2005f47e-ca3b-4313-9070-5b7a0394788b_1574x1600.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>While prior regulations have largely focused on a simple pre/post-training distinction, in practice training compute can come in myriad forms that are intertwined in a complex pipeline.</em></figcaption></figure></div><p>Even if these additional sources of training compute were accounted for, we still have a poor understanding of how these sources of compute individually contribute to capabilities, let alone how they interact. This makes it difficult to construct robust metrics that appropriately proxy for capabilities, given the information we currently have.</p><h2>3. When deployed, an AI model's downstream capabilities depend on more than the compute used to train it</h2><p>Even if we could address the measurement challenges above, compute would still be an imperfect metric. This brings us to a third and final point: when deployed in the real world, a model's capabilities depend on more than the amount of compute used to train it. 
In particular, users are <a href="https://milesbrundage.substack.com/p/why-we-need-to-think-bigger-in-ai">increasingly interacting with products or applications that are built </a><em><a href="https://milesbrundage.substack.com/p/why-we-need-to-think-bigger-in-ai">around</a></em><a href="https://milesbrundage.substack.com/p/why-we-need-to-think-bigger-in-ai"> individual models</a>. This means that the overall observed capabilities will also depend on the inference budget and scaffolding around models, such as tools for coding or web browsing.</p><p>One notable example of this is Anthropic&#8217;s product <a href="https://www.anthropic.com/claude-code">Claude Code</a>. This transforms the base Claude model into a far more capable coding assistant through sophisticated scaffolding, giving it access to external tools and databases. These capabilities were enhanced through design and integration choices rather than from additional training compute.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Iwb6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Iwb6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png 424w, https://substackcdn.com/image/fetch/$s_!Iwb6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png 848w, 
https://substackcdn.com/image/fetch/$s_!Iwb6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png 1272w, https://substackcdn.com/image/fetch/$s_!Iwb6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Iwb6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png" width="1456" height="991" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:991,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Iwb6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png 424w, https://substackcdn.com/image/fetch/$s_!Iwb6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png 848w, 
https://substackcdn.com/image/fetch/$s_!Iwb6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png 1272w, https://substackcdn.com/image/fetch/$s_!Iwb6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd27455e2-c63d-45f7-999a-db1756a6caa5_1600x1089.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Tool use boosts model performance without additional training compute. 
As shown here, o3 + tools significantly outperforms o3, highlighting how compute-based thresholds miss the impact of scaffolding and tools. Note that the estimates of training compute are speculative.</em></figcaption></figure></div><p>This point is essentially missed by current compute thresholds, which focus on the model&#8217;s structure (i.e. its training compute) rather than on its function (i.e. what the overall model can do given access to tools). The risk is that this blind spot will only grow, as system design itself improves over time and the capability gap between a raw model and the same model integrated into a well-designed system progressively widens.</p><h2>What does this mean for AI public policy?</h2><p>Given the rapidly evolving relationship between compute and capabilities, does this mean that compute metrics are moot? Not necessarily. Training compute offers compelling advantages. It is roughly allocated before training begins, auditable afterwards, and comparable across organizations. Moreover, frontier capabilities will likely continue to arise at the top end of compute spend. For governance purposes, the goal of compute metrics could be to identify which models deserve scrutiny rather than to predict their capability levels precisely.</p><p>However, as training pipelines evolve, factors beyond pre- and post-training could increasingly drive capabilities while remaining invisible to current compute metrics. The risk is that current compute metrics may become progressively worse proxies even for identifying which models are pushing the frontier, if the compute we&#8217;re counting represents a shrinking portion of what actually drives model capabilities.</p><p>If policymakers want to continue using compute metrics as part of a broader AI governance portfolio, they could consider several ways to address the aforementioned issues.</p><ul><li><p><strong>Researching better compute metrics and thresholds.
</strong>At present, our understanding of the relationship between compute and capabilities lacks a robust empirical foundation, and it&#8217;s possible that this will continue as different ways of using &#8220;compute&#8221; become common. At a minimum, updating these regulatory frameworks would require a better understanding of this relationship. The goal, of course, isn&#8217;t to create a perfect proxy, but with proper data collection and analysis, we can better understand to what extent current metrics need to be modified, discarded, or reserved only for frontier models.</p></li><li><p><strong>Building a more robust and expressive evaluation regime. </strong>Training compute doesn&#8217;t need to be the only input for determining downstream capabilities. For example, it can be combined with a range of other evaluations, such as automated benchmarks and expert-teaming. These help generate a more reliable portrait of a model&#8217;s capabilities, especially those of greatest concern to policymakers.</p></li><li><p><strong>Focusing on application-level or organization-level regulation. </strong>While a lot of emphasis has been placed on regulating individual AI models, policymakers could also consider regulating AI systems in the applications where the most significant risks could materialize. Regulation could also be done <a href="https://milesbrundage.substack.com/p/why-we-need-to-think-bigger-in-ai">at the level of an organization</a>, abstracting away from issues about the precise definitions of certain kinds of compute.</p></li></ul><p>These approaches may help directly improve compute metrics, or at least make them less load-bearing in policy decisions. Which policies make the most sense is also likely to evolve over time, so it&#8217;s crucial to continue monitoring the changing relationship between compute and capabilities.
Policies can then be adapted based on updated evidence.</p><p><em>We would like to thank JS Denain, Conor Griffin, Jaime Sevilla, Alexander Erben, and Zhengdong Wang for their feedback and support.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>If the only change were algorithmic progress in pre-training, policymakers could address it by periodically updating compute thresholds. The core issue we highlight, however, is that the training pipeline itself is evolving and new vectors are driving model capabilities.
We lack a clear understanding of these methods&#8217; compute-to-capability profiles and of how their interactions affect overall capabilities, making simple threshold updates likely insufficient.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><a href="https://epoch.ai/">Epoch AI</a> estimates a model&#8217;s training compute the same way, taking an unweighted sum of its pre-training and post-training compute while excluding ancillary compute uses like synthetic data generation.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Another example of how reasoning training can yield large performance improvements is DeepSeek R1-Zero: at the beginning of RL training, it scored just 10% on AIME 2024, but after 8,000 RL steps <a href="https://arxiv.org/pdf/2501.12948">achieved</a> an impressive 71%. Despite the RL compute representing only <a href="https://epoch.ai/gradient-updates/what-went-into-training-deepseek-r1">one-fifth</a> of the base model&#8217;s pretraining compute, the capability gains from that relatively small compute investment were transformative.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>It&#8217;s not certain that o1-high was based on GPT-4o, and we lack public information about how much compute o1-high required relative to GPT-4o.
However, these reflect our best guesses, as outlined in <a href="https://epoch.ai/gradient-updates/how-far-can-reasoning-models-scale">previous</a> <a href="https://epoch.ai/gradient-updates/quantifying-the-algorithmic-improvement-from-reasoning-models">posts</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Note that <a href="https://ai.meta.com/blog/llama-4-multimodal-intelligence/">Llama 4 Maverick is technically &#8220;codistilled&#8221;</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Unlike knowledge distillation, this uses the actually generated tokens rather than the intermediate output &#8220;logits&#8221;.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>That being said, phi-4 is a relatively extreme example of these relative costs, as it is famous for being trained on a particularly large fraction of synthetic data.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The internet is a place where no one has an accent ]]></title><description><![CDATA[Will AI make our culture more homogenous?]]></description><link>https://www.aipolicyperspectives.com/p/the-internet-is-a-place-where-no</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/the-internet-is-a-place-where-no</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Wed, 09 Jul 2025 09:33:27 GMT</pubDate><enclosure
url="https://substackcdn.com/image/fetch/$s_!9gQ5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Today&#8217;s post comes from <a href="https://kalimahmed.com/">Kalim Ahmed</a>, a writer and open-source researcher who focuses on technology, policy and culture. Kalim explores whether the internet is homogenising how we communicate and how AI may affect this. As always, please let us know your thoughts.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9gQ5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9gQ5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg 424w, https://substackcdn.com/image/fetch/$s_!9gQ5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg 848w, https://substackcdn.com/image/fetch/$s_!9gQ5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!9gQ5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!9gQ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg" width="1456" height="1016" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1016,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:627833,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/167853048?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9gQ5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg 424w, https://substackcdn.com/image/fetch/$s_!9gQ5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg 848w, https://substackcdn.com/image/fetch/$s_!9gQ5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!9gQ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f033616-b171-4ccd-b0bf-50418b3279d3_1600x1117.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Source: Venus Krier</figcaption></figure></div><p>New technologies shape how we communicate. The printing press spread literacy to the masses. The talkies did away with silent films.
In the streaming era, <a href="https://www.economist.com/culture/2025/06/02/hit-songs-are-getting-shorter?utm_content=section_content&amp;gad_source=1&amp;gad_campaignid=22734625670&amp;gbraid=0AAAAADBuq3KdJFgTUYDLDnhnfVDzOHEXx&amp;gclid=Cj0KCQjw1JjDBhDjARIsABlM2SvuxdH4V66pcDiVxxVuonGV4fN_CbNBkWExUYVn53ltA0VTACAZLD0aArNWEALw_wcB&amp;gclsrc=aw.ds">songs have become shorter</a>. One particular effect of new technologies is that they often introduce a degree of <em>standardisation</em> in the medium and <em>homogenisation</em> in the message. This influences how we come to experience and understand the world. Since this essay begins with the terms &#8220;medium&#8221; and &#8220;message&#8221;, it&#8217;s only fitting to turn to Canadian philosopher and media theorist Marshall McLuhan.</p><p>In his seminal 1964 work, <a href="https://designopendata.wordpress.com/wp-content/uploads/2014/05/understanding-media-mcluhan.pdf#page=237">Understanding Media: The Extensions of Man</a>, McLuhan observed that nationalism was largely unknown in the Western world until the printing press allowed people to encounter their mother tongue in a standardised form. This linguistic uniformity, he argued, helped to forge national identities and weaken older regional loyalties. Recognising this power, governments moved to regulate these technologies, out of a desire to cultivate a common culture and a fear of where a more hands-off approach might lead.</p><p>For example, McLuhan <a href="https://designopendata.wordpress.com/wp-content/uploads/2014/05/understanding-media-mcluhan.pdf#page=237">noted how</a> some Arab countries banned the use of private headphones to ensure that radio listening remained a public, collective act.
Similarly, law professor Lili Levi <a href="https://administrativelawreview.org/wp-content/uploads/sites/2/2014/04/The-Four-Eras-of-FCC-Public-Interest-Regulation.pdf">has described</a> how the US Federal Communications Commission saw radio as serving a &#8216;<em>homogenizing and unifying social role</em>&#8217;. In that spirit, it banned &#8216;propaganda stations&#8217; focused on specific subcommunities and <a href="https://en.wikipedia.org/wiki/Fairness_doctrine">required</a> broadcasters to cover controversial issues in a &#8216;balanced way&#8217;, prioritising a shared public understanding over the creative whims of producers.</p><p>In the digital era, <em>most</em> governments&#8217; ability to regulate and shape media technologies in this way <a href="https://administrativelawreview.org/wp-content/uploads/sites/2/2014/04/The-Four-Eras-of-FCC-Public-Interest-Regulation.pdf">has diminished</a>, even if their appetite for control has not. Unlike radio, the internet isn&#8217;t constrained by limited airwaves. It is abundant, accessible, and decentralised (a bit of a misnomer, but you get the point). Ironically, however, in the online world, the push towards &#8216;cultural sameness&#8217; seems only to have intensified. But this time, it&#8217;s not top-down regulation that&#8217;s driving it - it&#8217;s individuals themselves, generating and consuming the same kinds of content, in a remarkably bottom-up fashion.</p><p><em><strong>What is causing this push? </strong></em>For many, the separation between the online world and real life is collapsing. As people <a href="https://www.pewresearch.org/internet/2024/12/12/teens-social-media-and-technology-2024/">spend ever more time online</a>, culture has begun to mould itself around the internet&#8217;s logic.
Encouraged by recommender systems, which have tended to privilege popularity over diversity, we increasingly consume the same content, speak in the same idioms, and contribute to the same trends.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Rung!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Rung!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png 424w, https://substackcdn.com/image/fetch/$s_!Rung!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png 848w, https://substackcdn.com/image/fetch/$s_!Rung!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png 1272w, https://substackcdn.com/image/fetch/$s_!Rung!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Rung!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png" width="1192" height="340" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:340,&quot;width&quot;:1192,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Rung!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png 424w, https://substackcdn.com/image/fetch/$s_!Rung!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png 848w, https://substackcdn.com/image/fetch/$s_!Rung!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png 1272w, https://substackcdn.com/image/fetch/$s_!Rung!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F74c4a565-d1ed-4dd8-8e19-3814578f6f76_1192x340.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 
7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: <a href="https://x.com/willdepue/status/1913808961305981328">@willdepue </a>/ X</figcaption></figure></div><p>In his 2015 manifesto, <a href="https://www.sup.org/books/theory-and-philosophy/transparency-society">The Transparency Society</a>, philosopher and cultural theorist Byung-Chul Han called this condition the &#8216;digital panopticon&#8217;. Unlike Foucault&#8217;s silent, isolating surveillance, Han describes a world where individuals willingly perform their lives under the gaze of others. In other words, we&#8217;re not just watched, we&#8217;re performing for the watchers. We aestheticise our lives to meet the algorithmic standard. But as we do so, the boundaries of expression shrink. We speak, we share, we record - but we begin to sound eerily alike. Shakespeare saw this trend early when he wrote that &#8220;<em>all the world&#8217;s a stage, and all the men and women merely players.</em>&#8221;</p><p>In our post-pandemic world, this stage has collapsed inward. 
There&#8217;s no need to wait for an audience and there is no intruder. We invite the phenomenon with open arms; the screen is enough. Platforms normalised the broadcasting of the self, first as an act of occasional performance, and now as a ritual. In such an environment, performance begets mimicry. We begin to replicate one another and our expressions become less about <em>interiority</em> and more about <em>legibility</em> - what will be seen, liked, or shared. Over time, this creates a loop of sameness, where individuality is shaped not by thought, but by visibility.</p><h3><strong>Not all content is homogenised equally</strong></h3><p>Most of us could recognise &#8220;internet speak&#8221; without being able to define it. Picture a &#8220;day in the life&#8221; vlog: the flat, affectless voiceover, the slick edits, a minimalist apartment in neutral tones. Curtains drawn back. Coffee brewed. Jog completed. Every moment tailored to project discipline, control, aspiration, and a soft, sterile affluence. Our protagonists exist in curated solitude. No friends or family appear. No noise, no clutter.
Just productivity, aestheticised.</p><p>Influencers may have pioneered the template, especially during lockdowns, but everybody now performs it and the algorithms promote it. Internet speak extends to memes and catchphrases - those brief, hyper-referential fragments of culture that flood our feeds and vanish almost as quickly as they appear. Some linger longer than others - <a href="https://knowyourmeme.com/memes/cash-me-ousside-howbow-dah">&#8220;Catch me outside, how about that?&#8221;</a> (2017), or <a href="https://knowyourmeme.com/memes/pooja-what-is-this-behaviour">&#8220;Pooja, what is this behaviour?&#8221;</a> (2022), or <a href="https://knowyourmeme.com/memes/very-demure-very-mindful">&#8220;Very demure, very mindful&#8221;</a> (2024) - but most dissolve into the endless feed before registering. It begins to resemble that infamous scene in A Clockwork Orange, where the protagonist is strapped in, eyes forced open, condemned to absorb whatever flashes on the screen. Only now, we do it to ourselves.
Willingly.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!n3AL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0d6823e-f66a-47f7-8195-28833746b099_452x255.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!n3AL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0d6823e-f66a-47f7-8195-28833746b099_452x255.png 424w, https://substackcdn.com/image/fetch/$s_!n3AL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0d6823e-f66a-47f7-8195-28833746b099_452x255.png 848w, https://substackcdn.com/image/fetch/$s_!n3AL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0d6823e-f66a-47f7-8195-28833746b099_452x255.png 1272w, https://substackcdn.com/image/fetch/$s_!n3AL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0d6823e-f66a-47f7-8195-28833746b099_452x255.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!n3AL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0d6823e-f66a-47f7-8195-28833746b099_452x255.png" width="452" height="255" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f0d6823e-f66a-47f7-8195-28833746b099_452x255.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:255,&quot;width&quot;:452,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!n3AL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0d6823e-f66a-47f7-8195-28833746b099_452x255.png 424w, https://substackcdn.com/image/fetch/$s_!n3AL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0d6823e-f66a-47f7-8195-28833746b099_452x255.png 848w, https://substackcdn.com/image/fetch/$s_!n3AL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0d6823e-f66a-47f7-8195-28833746b099_452x255.png 1272w, https://substackcdn.com/image/fetch/$s_!n3AL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0d6823e-f66a-47f7-8195-28833746b099_452x255.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">A still from Stanley Kubrick&#8217;s A Clockwork Orange (1971)</figcaption></figure></div><p>The online world also shapes how we speak, or at least how we perform speech, offline. What becomes popular can affect our daily rhythms, accents, and verbal tics. In 2021, viral <a href="https://www.theguardian.com/tv-and-radio/2021/jul/19/peppa-pig-american-kids-british-accents">news reports</a> suggested that American children had started speaking in vaguely British accents after extended exposure to Peppa Pig during the Covid-19 lockdown. The evidence for this was anecdotal, but it points to a larger truth - online culture is a powerful vehicle for exporting linguistic habits.</p><p>Consider the <a href="https://arc.net/l/quote/cirsfhlt">Valley Girl-style</a> overuse of &#8220;like&#8221; as a filler word. 
I hypothesise that this can be traced, at least in part, to the fact that much of the technology that enabled self-broadcasting emerged in the US, so the linguistic ground was first occupied by native speakers who set the tone, cadence, and affective style. Everyone else simply adapted. The result is a speech pattern that feels breezy, casual, and emotionally flattened - the kind of tone that algorithms also seem to prefer.</p><p>If the <a href="https://en.wikipedia.org/wiki/Bhad_Bhabie">&#8220;Catch me outside&#8221; girl</a> is an exaggerated embodiment of this performative cadence, so too is the rise of &#8220;up talk&#8221;, or <a href="https://en.wikipedia.org/wiki/High_rising_terminal">High Rising Terminal</a>, where declarative sentences end with a questioning inflection. This phenomenon was once also associated with young women in Southern California, but has spread far beyond its origins, adopted by speakers across regions, genders, and class lines. Its rise is a reminder that platform-mediated speech is shaped not only by visibility but also by who gets seen and heard first.</p><p>This algorithmic conformity doesn&#8217;t stop at the content we express. It seeps into our physical surroundings. Today, whether you&#8217;re in New Delhi or Kathmandu, walking into a caf&#233; increasingly feels like entering the same meticulously curated scene: exposed brick walls, high-contrast Edison bulbs, matte-black menu boards (now swapped for QR codes), and a menu of boba tea, matcha, or some trendy variation thereof. The furniture is distressed, the playlist is ambient and indie, and the vibe is always tuned to a soft, sterile version of authenticity. The banality of the internet&#8217;s lingua franca - minimalism, mindfulness, and a curated performance of taste - has spilt into the offline world, where it is shaping how and where we express ourselves.</p><p>Yet this cultural export is rarely reciprocal. 
How often do Western users adopt words or phrases from <a href="https://www.theguardian.com/commentisfree/2016/jan/04/indian-english-phrases-indianisms-english-americanisms-vocabulary">Indian English</a> - say, &#8220;<a href="https://www.merriam-webster.com/wordplay/prepone">prepone</a>&#8221;? The flow of influence remains largely unidirectional. We absorb the dominant voice, often unconsciously, while our own idioms and accents remain peripheral. This is not to say Indian-origin words haven&#8217;t entered the English lexicon: terms like &#8220;<a href="https://www.google.com/search?q=pariah+etymology">pariah</a>&#8221; and &#8220;<a href="https://www.google.com/search?q=loot+etymology">loot</a>&#8221;, absorbed during colonial and pre-industrial periods, are now firmly embedded in our colloquial vocabulary. But these borrowings are historical artefacts, not reflective of contemporary cultural parity. This imbalance is also at the heart of concerns about large language models.</p><h3><strong>Enter LLMs</strong></h3><p>In 2024, researchers published <a href="https://arxiv.org/pdf/2402.01536">an experiment</a> in which they gave a very small number of participants access to an LLM and tasked them with coming up with creative ideas. They compared these ideas against those that participants generated when using a set of &#8216;<a href="https://stanislav-stankovic.medium.com/oblique-strategies-for-game-design-5688e206a90f">creative inspiration</a>&#8217; cards from the artists Brian Eno and Peter Schmidt. The authors argued that, in aggregate, the LLM-based outputs were less &#8220;semantically diverse&#8221;.</p><p>The worry about LLMs and homogenisation goes something like this: LLMs are trained with similar architectures and data and will inevitably be somewhat biased as a representation of global culture. 
If people and organisations start to use multimodal LLMs, or agents based on them, across cultural spaces - to help write newsletters, generate images, or create music - this will inevitably lead to more cultural homogenisation. The fear is perhaps best crystallised in a 2017 <a href="https://x.com/chethaase/status/925715289244819458">tweet</a> by software engineer-turned-writer Chet Haase.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7f37!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7f37!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png 424w, https://substackcdn.com/image/fetch/$s_!7f37!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png 848w, https://substackcdn.com/image/fetch/$s_!7f37!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png 1272w, https://substackcdn.com/image/fetch/$s_!7f37!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7f37!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png" width="597" height="242" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:242,&quot;width&quot;:597,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7f37!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png 424w, https://substackcdn.com/image/fetch/$s_!7f37!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png 848w, https://substackcdn.com/image/fetch/$s_!7f37!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png 1272w, https://substackcdn.com/image/fetch/$s_!7f37!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc260eaf8-ca68-48be-8dd4-4befc74c4755_597x242.png 1456w" sizes="100vw" loading="lazy"></picture>
</div></a><figcaption class="image-caption">Source: <a href="https://x.com/chethaase/status/925715289244819458">@chethaase / X</a></figcaption></figure></div><p>Today, Haase&#8217;s observation feels less like a joke and more like a thesis: LLMs will generate what is most palatable, most generic, and least likely to offend. The average will become aspirational. The edge and the anomaly - that which is strange, specific, or difficult - will be quietly filtered out. Think <em>GenericAI</em>, not GenerativeAI. An engine of blandness. An algorithmic distillation of the already said. Or as K Allado-McDowell <a href="https://longnow.org/ideas/identity-neural-media-ai/">put it</a>: &#8220;<em>LLM base models are built to be mid</em>&#8221;. Beneath all of it lies a deeper anxiety: culture was once filtered by institutions or gatekeepers, but now it is being flattened by machines. 
Sameness is the product.</p><h3><strong>Are these concerns legit?</strong></h3><p>Recently, the British newspaper The Guardian faced criticism from some readers after using the word &#8220;gotten&#8221; - the North American past participle of &#8220;get&#8221; - in an opinion piece. In <a href="https://www.theguardian.com/commentisfree/2025/jun/04/use-word-gotten-some-readers-upset">response</a>, it reminded readers that two-thirds of them were based outside Britain and that its US desk aims to reflect different linguistic and cultural norms. It also pointed out that &#8220;gotten&#8221; wasn&#8217;t a recent American invention, but rather emerged in Middle and Early Modern English. Language, it said, isn&#8217;t a fortress. The defence is compelling, at least to me. Language is indeed dynamic; what feels foreign or jarring one day often becomes colloquial the next. It is normal for words like &#8220;chat,&#8221; &#8220;normies,&#8221; &#8220;looksmaxxing,&#8221; or &#8220;zoomers&#8221; to begin as internet slang, fringe and unserious, and later become embedded in mainstream vocabulary.</p><p>One could also challenge the idea that culture has become more homogenised as it has moved online. Indeed, you could argue the inverse: that the internet has enabled the creation and circulation of an unprecedented <em>heterogeneity</em> of content. Memes mutate hourly; aesthetics are born and buried within weeks; microcultures bloom and vanish on Discord, subreddits, and Substack. Yet when I zoom out, I still observe a kind of creative inbreeding. Even when the topics are new, the formats feel recycled; the voices start to blend; everything feels just a little&#8230; templated. There may be more choice than ever before, but for most of us, most of the time, this potential seems largely untapped.</p><p>The argument that LLMs will inevitably make things worse, by churning out content from the middle of the bell curve, could also be challenged. AI is not deterministic. 
In theory at least, we get to choose what we optimise models for - accuracy, engagement, familiarity, safety, provocation, or curiosity. These ideas have long existed in the realm of media theory - LLMs might be the first tools capable of realising them. Consider the world of science, where practitioners <a href="https://www.aipolicyperspectives.com/p/a-new-golden-age-of-discovery">worry</a> about replacing the unorthodox, intuitive and serendipitous tendencies of humans with a more homogenous AI-based approach to research. Here, we already have counterexamples - such as <a href="https://home.cern/news/news/physics/how-can-ai-help-physicists-search-new-particles">an effort</a> that used AI to <em>identify</em> and <em>amplify</em> anomalies in massive datasets from Large Hadron Collider experiments. Are there other disruptive ideas out there, ready to be liberated by new kinds of LLM-enabled recommendation engines, semantic search, or generative media?</p><p>A deeper pushback is to question whether homogeneity is truly bad, or whether a certain amount of it may be necessary, or even beneficial. In conversations about culture, it certainly <em>sounds</em> bad. But maybe we need some amount of homogeneity in order to free up our minds for more creative pursuits? Even if homogenisation is bad, it may be an inevitable cost to bear in service of a greater good - globalisation. For centuries, this was defined by the movement of goods, capital, and people. Then the internet enabled the spread of information. And now AI will enable the spread of intelligence and the activities that rely on it. And like every previous wave, it will follow a pattern: diversity at the margins, conformity at the centre. In the early stages, choice will look like it&#8217;s expanding. But over time, tastes will coalesce around the most exportable, the most scalable. And perhaps this is the inevitable price to pay for scaling connection. 
If we believe that globalisation, despite its faults and recent retrenchment, is a net good, are we willing to accept a steady drift toward uniformity as its cultural side effect?</p><h3><strong>What to do? </strong></h3><p>First, to get a handle on the challenge, we could measure if and how online culture is homogenising. A growing number of organisations have designed <a href="https://www.aipolicyperspectives.com/p/what-we-learned-from-reading-100">evaluations</a> to assess the risks that AI poses, from hallucinations to cyber-attacks. But few have tried to evaluate the quality and diversity of online content, people&#8217;s engagement with it, and how this is changing in an AI world. Should we design new quantitative metrics of novelty or counterintuitiveness? Should we invest in ethnographic studies about how LLMs are reshaping cultural reference points? More important than the measurement technique: What does good look like here?</p><p>Second, if the concern is a global flattening, then governments could take their nascent &#8216;Sovereign AI&#8217; programmes and apply them to cultural ends. Few would advocate returning to a world where governments dictate cultural content. But more would likely support the need to <a href="https://www.techpolicy.press/local-ai-research-groups-are-preserving-nonenglish-languages-in-the-digital-age/">digitise and protect local languages</a>, books, and <a href="https://www.nhm.ac.uk/press-office/press-releases/natural-history-museum-to-lead-new-national-programme-to-digitis.html">history</a>. Take my own home state in Northeast India. My native tongue - an oral language with scant written documentation - remains virtually absent from digital corpora, as do many others. A recent, mildly absurd anecdote illustrates the issue. 
An Indian food delivery app used AI to visually auto-render a traditional eastern Indian dish - <a href="https://whatanindianrecipe.com/east-indian/aloo-jhuri-bhaja-recipe.html">Aloo Bhaja</a> - a crispy, golden heap of shredded fried potato. What the app showed instead was a sandwich-like creation that was completely alien to the dish. Such exclusions cannot be resolved by AI alone. We need structural interventions.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!s7yA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!s7yA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png 424w, https://substackcdn.com/image/fetch/$s_!s7yA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png 848w, https://substackcdn.com/image/fetch/$s_!s7yA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!s7yA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!s7yA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png" width="1080" height="1080" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1080,&quot;width&quot;:1080,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!s7yA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png 424w, https://substackcdn.com/image/fetch/$s_!s7yA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png 848w, https://substackcdn.com/image/fetch/$s_!s7yA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!s7yA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc29cc56a-6b21-43ff-bf30-f3dabafa2190_1080x1080.png 1456w" sizes="100vw" loading="lazy"></picture>
</div></a><figcaption class="image-caption">Source: <a href="https://x.com/tuhinat221b">@tuhinat221b/X</a></figcaption></figure></div><p>Third, AI labs could empower individuals to personalise their nascent AI assistants. Users can already toggle the &#8216;temperature&#8217; of their LLMs&#8217; outputs. Moving forward, they could share more nuanced instructions, context, and feedback about what they want. Equipped with long-form memory, these assistants could create &#8216;<a href="https://x.com/ziwphd/status/1867014009343578147">belief graphs</a>&#8217; of what they think users want and periodically check that this is correct. This could offer the kind of personalisation that has long been promised online but rarely delivered. 
Done well, this should provide a healthy buffer against homogenisation, not least since AI agents will increasingly communicate with each other, and so more personalised agents should lead to less homogenous outcomes.</p><p>This also raises a question about whether the homogeneity critique is yesterday&#8217;s problem, and whether tomorrow&#8217;s is <em>over-personalisation.</em> The internet is already shifting from the public to the personal. Those living under the same roof now watch their own content on their own devices. Social media executives note how people are <a href="https://www.businessinsider.com/adam-mosseri-instagram-threads-private-sharing-interview-peter-kafka#:~:text=Instagram%20head%20Adam%20Mosseri%20on%20the%20%27paradigm,aren%27t%20sharing%20as%20much%20in%20public%20anymore.">communicating less on public feeds and more in private chat groups</a>. New LLM assistants could personalise people&#8217;s online experience even more. Against this backdrop, the primary concern may shift from a surfeit of homogenous culture to <a href="https://www.ft.com/content/c5b8655d-27b6-4d64-b8b2-3217e3535c1a">a lack of any shared culture at all</a>.</p><p>To offset this, <a href="https://knightcolumbia.org/content/a-public-service-media-perspective-on-the-algorithmic-amplification-of-cultural-content">some practitioners</a> hope to develop recommender systems that can support both shared cultural experiences <em>and </em>individual diversity. But what is the user themself, worried about homogeneity on one hand and over-personalisation on the other, to do? One habit you pick up as a writer is learning to sit with the official version of events for a while, whether you&#8217;re in a war zone or watching a product reveal. It may seem tedious, but there&#8217;s a skill in parsing what&#8217;s said and what&#8217;s being left unsaid. You read the press release to read between the lines, to ask not just what is being communicated, but why now, and to whom. 
During the recent flare-up between India and Pakistan, I found myself with little to offer the public beyond a simple plea: <em>add friction to your life.</em> Not because slowness is inherently virtuous, but because the flood of unverified video and hyperpartisan content overwhelmed our attention. There was no shared anchor. It was just motion and noise.</p><p>Edward Murrow, during the height of the Cold War, warned that television was being numbed by commercialism and cowardice. In his now-famous &#8220;<a href="https://www.youtube.com/watch?v=AIhy0T7Q48Y&amp;t=657s">wires and lights in a box</a>&#8221; speech, he lamented that the screen, instead of informing and elevating the public, had become a source of confusion, distraction, and spectacle. What he feared about television, we now live through daily online, only the feed is faster, the wires are invisible, and the box is always on and in our hands. With AI, the trade-off may be even more profound. Retaining some friction is not just an aesthetic choice; it is a necessary strategy for navigating what comes next.</p><p>___________</p><p><em>Thanks to <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Conor Griffin&quot;,&quot;id&quot;:10433197,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f534d44b-8798-43ce-9f4f-16fd9f08a87c_400x400.jpeg&quot;,&quot;uuid&quot;:&quot;f05fa6af-9ab2-42e6-bf63-169d04eeb475&quot;}" data-component-name="MentionToDOM"></span>, <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Harry Law&quot;,&quot;id&quot;:10612241,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5b1f870a-3e2e-47c4-b05f-d7a69b3c58e7_1728x1728.jpeg&quot;,&quot;uuid&quot;:&quot;952f7a48-59f4-4359-bf8e-a85cd98e15f1&quot;}" data-component-name="MentionToDOM"></span> and Arianna Manzini for feedback.</em> As 
with all pieces you read here, it&#8217;s the personal views of the authors. </p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Policy Perspectives. Subscribe for free to receive new posts. Lots more in the pipeline! </p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[AI & the retraining challenge ]]></title><description><![CDATA[Historically, US government programmes haven&#8217;t helped much.]]></description><link>https://www.aipolicyperspectives.com/p/ai-and-the-retraining-challenge</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/ai-and-the-retraining-challenge</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Thu, 26 Jun 2025 14:34:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9znO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>In this essay, <a href="https://www.linkedin.com/in/julian-jacobs-a729b87a/">Julian Jacobs</a> writes about the history of US public worker retraining programmes, their efficacy, and how they might fare as AI diffuses throughout the economy.</em> </p><div class="captioned-image-container"><figure><a class="image-link image2 
is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9znO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9znO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png 424w, https://substackcdn.com/image/fetch/$s_!9znO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png 848w, https://substackcdn.com/image/fetch/$s_!9znO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png 1272w, https://substackcdn.com/image/fetch/$s_!9znO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9znO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png" width="1280" height="894" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:894,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9znO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png 424w, https://substackcdn.com/image/fetch/$s_!9znO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png 848w, https://substackcdn.com/image/fetch/$s_!9znO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png 1272w, https://substackcdn.com/image/fetch/$s_!9znO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d62e266-d058-44cb-8617-fe75ea8a9629_1280x894.png 1456w" sizes="100vw" fetchpriority="high"></picture>
</div></a><figcaption class="image-caption"><em>Source: Venus Krier</em></figcaption></figure></div><p>When asked to reflect on how AI may affect society, people <a href="https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence/">frequently</a> rate the loss of their job as their top concern. Such worries are not new. During the Industrial Revolution, the Luddites smashed textile machines and fought with mill owners, even if their demands were <a href="https://www.smithsonianmag.com/history/what-the-luddites-really-fought-against-264412/">more nuanced</a> than is often assumed. At the turn of the 20th century, public administrators overseeing the US economy <a href="https://www.brettonwoods.org/article/covid-19-will-only-increase-automation-anxiety">feared</a> how new kinds of glassware and steel might impact workers.</p><p>Such fears are understandable. 
Jobs, particularly the skilled trades that are often most vulnerable to automation, can provide financial independence and <a href="https://journals.sagepub.com/doi/full/10.1177/00332941211040439">feelings</a> of status, purpose, and community. This is of course not true for all jobs, all the time. In his seminal book <em><a href="https://thenewpress.com/books/working">Working</a></em>, Studs Terkel observed that many people feel their jobs are defined by a <em>lack</em> of meaning, as well as an intense disconnectedness, fatigue, and a droning anxiety about wasting their lives. But surveys of today&#8217;s employees, at least in the West, suggest that <a href="https://www.pewresearch.org/social-trends/2023/03/30/how-americans-view-their-jobs/">a majority</a> are relatively satisfied with their job and that 50% or more <a href="https://www.cipd.org/globalassets/media/knowledge/knowledge-hub/reports/2024-pdfs/8625-good-work-index-2024-survey-report-1-web.pdf">would want to continue working</a> even if they did not need the money.</p><h1><strong>1.
Will we need retraining to respond to AI?</strong></h1><p>When new technologies <a href="https://www.nber.org/system/files/working_papers/w30074/w30074.pdf">disrupted employment</a> in the past, they typically led to an increase in aggregate employment, albeit not always immediately. This was true with the steam engine and spinning jenny during the<a href="https://www.nuff.ox.ac.uk/Users/Allen/engelspause.pdf"> Industrial Revolution</a>, and with industrial robots and digitisation in the<a href="https://www.sciencedirect.com/science/article/abs/pii/S0169721811024105"> 20<sup>th</sup> Century</a>. However, in the US, where <a href="https://www.epi.org/publication/unions-and-well-being/">labor unionisation</a> has been mostly <a href="https://home.treasury.gov/news/featured-stories/labor-unions-and-the-us-economy">declining</a>, these latter technologies also widened income and, especially, wealth inequalities. This was mainly due to two dynamics, whose relative importance economists <a href="https://davidcard.berkeley.edu/papers/skill-tech-change.pdf">continue to dispute</a>.</p><p>First, the technologies increased productivity and economic growth, but a growing share of this expanding pie went to capital owners, especially those with a significant ownership stake in fast-growing enterprises. <a href="https://www.sciencedirect.com/science/article/pii/S016518892200269X">Labour&#8217;s share</a>, once inflation was accounted for, declined. In 2022, according to the economist <a href="https://www.aeaweb.org/articles?id=10.1257/jep.38.2.107">Loukas Karabarbounis</a>, the share of income going to labor in the US hit its lowest point since the Great Depression, at just under 60% of national income.</p><p>Second, the technologies complemented people with certain skills while displacing others - what economists refer to as &#8216;<a href="https://www.sciencedirect.com/science/article/abs/pii/S0169721811024105">skill-biased technological change</a>&#8217;.
Digitisation and industrial robots automated <a href="https://www.ddorn.net/papers/Autor-Dorn-LowSkillServices-Polarization.pdf">middle-wage occupations</a> such as clerks, bookkeepers, and assembly line workers, while enabling high-paying roles for software engineers, roboticists and data analysts. These new roles also fostered demand for <em>lower-wage</em> roles providing services to higher earners, for example in the retail, healthcare, food service or personal grooming sectors. These two trends made the labour market more polarised, as workers who lost their jobs or were unable to benefit from new technologies <a href="https://www.aeaweb.org/articles?id=10.1257/pandp.20201061">struggled</a> to move into higher-wage work. If we expect AI&#8217;s impacts to be similar, this could provide a rationale for governments to fund large retraining programmes to help people retain their jobs and move into higher-wage roles.</p><p>Will AI&#8217;s effects be similar? We don&#8217;t know. <a href="https://www.governance.ai/analysis/predicting-ais-impact-on-work">Some efforts</a> to address this question break jobs down into bundles of tasks and evaluate AI&#8217;s ability to perform them, now and in the future. These evaluations cover a wide range of jobs and tasks, but they don&#8217;t tell us whether organisations that hire people to do these tasks are investing in AI, or changing their hiring. Other studies <em>do </em>assess the impact of AI on real-world employment outcomes, for example on <a href="https://olin.wustl.edu/about/news-and-media/news/2023/08/study-ai-tools-cause-a-decline-in-freelance-work-and-incomeat-least-in-the-short-run.php">freelance employment</a>, but only cover a small share of the labour market.
No assessments yet give us <em>breadth </em>and <em>depth.</em></p><p>In the near term, the employees at greatest risk from AI are likely those who work in, or would like to work in, occupations where tasks are currently performed (almost) entirely on a computer, and where some degree of human error is already common and not catastrophic. This<a href="https://www.economist.com/finance-and-economics/2025/06/16/why-todays-graduates-are-screwed"> may include</a> graduates aspiring to work in consultancies, legal firms or content agencies, or the large number of people who work in remote customer service roles. If new kinds of <a href="https://deepmind.google/models/gemini-robotics/">AI-enabled robots</a> become cheaper and more capable, then other kinds of roles, for example in warehouses, could be at risk.</p><p>If AI starts to cause people in these roles to lose their jobs en masse, we can expect loud calls for new public retraining programmes. The idea that the government should help to <a href="https://www.dol.gov/general/ai-principles">retrain</a> people in response to new technologies, trade shifts or other &#8216;shocks&#8217; is <a href="https://www.mckinsey.com/featured-insights/future-of-work/retraining-and-reskilling-workers-in-the-age-of-automation">ubiquitous</a> in policy briefs, consulting reports, and academic research, <a href="https://www.mckinsey.com/featured-insights/future-of-work">including on AI</a><a href="https://www.mckinsey.com/mgi/our-research/a-new-future-of-work-the-race-to-deploy-ai-and-raise-skills-in-europe-and-beyond">.</a> But these reports typically don&#8217;t specify what an AI-induced retraining programme should look like, who should do it, and what lessons, if any, we should draw from past efforts.</p><p>In the remainder of this essay, I trace the history of US public retraining programmes and their impacts. In short, I find little evidence that they have been effective.
In future essays, I hope to consider lessons from <em>private </em>retraining programmes and other countries.</p><h1><strong>2. A brief history of US public retraining programmes</strong></h1><p>In 1933, as the Great Depression reached its darkest moments, President Roosevelt signed the <a href="https://en.wikipedia.org/wiki/Wagner%E2%80%93Peyser_Act">Wagner-Peyser Act</a>, creating the United States Employment Service, a new national network of offices to help the <a href="https://www.fdrlibrary.org/great-depression-facts">~25% of the labour force</a> that was out of work. Its retraining offering was rudimentary but provided a foundation to build from. Since then, retraining has become an integral &#8216;<a href="https://commission.europa.eu/system/files/2020-06/european-semester_thematic-factsheet_active-labour-market-policies_en_0.pdf">Active Labour Market Policy</a>&#8217; in the US and beyond. If <em>passive</em> labour market policies, like unemployment insurance, aim to provide a safety net for the unemployed, <em>active </em>policies like retraining and job search assistance aim to provide a ladder back into stable work.</p><p>In 1962, John F. Kennedy signed the <a href="https://www.richmondfed.org/publications/research/econ_focus/2022/q1_economic_history">Manpower Development and Training Act</a>, the first federal retraining programme to operate at scale. Over the next decade, it retrained 1.9m people to navigate the &#8216;constantly changing economy&#8217;. For men, this typically meant retraining as machine shop workers, auto mechanics, and welders. For women, clerical and administrative roles.
In 1973, Richard Nixon replaced the MDTA with the short-lived <a href="https://federalism.org/encyclopedia/no-topic/comprehensive-employment-and-training-act/">Comprehensive Employment and Training Act</a>, which focussed on getting low-income individuals, the long-term unemployed, and students into subsidised, entry-level jobs in public sector agencies and nonprofits.</p><p>CETA began a process of <em>decentralising </em>US public retraining, putting decisions about how to run the programmes into the hands of cities and states, rather than the federal government. In 1982, the Reagan administration accelerated this further when it passed the <a href="https://www.gao.gov/assets/hehs-96-40.pdf#:~:text=The%20NJS%20showed%20mixed%20results,Than%20Control%20Group%20Earnings%20After">Job Training Partnership Act</a>. In line with <em>Reaganomics</em>, the JTPA aimed to further empower local organisations to deliver retraining and boost private sector employment. To do so, it established Private Industry Councils, with representatives from local businesses, to help direct and supervise the programmes. It also tightly means-tested participants, with the vast majority coming from low-income or &#8216;hard to serve&#8217; backgrounds, which included people with disabilities, the homeless, offenders, welfare recipients, and out-of-school youth. The training focussed on cultivating basic skills, such as remedial reading and maths, &#8216;work habits&#8217;, such as punctuality and r&#233;sum&#233; writing, and short, entry-level courses for clerical, services or trades work.</p><p>As I expand on below, today the JTPA is typically <a href="https://ippsr.msu.edu/research/benefits-and-costs-jtpa-title-ii-programs-key-findings-national-job-training-partnership">viewed as</a> a policy failure.
In 1998, Bill Clinton replaced it with the <a href="https://www.dol.gov/agencies/eta/wioa">Workforce Investment Act</a>, which significantly <a href="https://www.nber.org/system/files/chapters/c13490/revisions/c13490.rev0.pdf">widened the</a> criteria for participation, making retraining available to anybody who wanted it, while giving low-income and disadvantaged people priority. If the JTPA was essentially a poverty reduction scheme, the WIA aspired to become a <em>universal employment service</em>, including for displaced middle-income workers, when budgets allowed. The WIA also sought to prepare individuals for a more fluid labour market. Unlike the JTPA and CETA, which offered participants little choice, the WIA provided individuals with &#8220;individual training accounts&#8221; so that they could (at least in theory) choose the skills and sectors to invest their time in, once they had completed some general training.</p><p>In 2014, Barack Obama replaced the WIA with the more streamlined <a href="https://www.dol.gov/agencies/eta/wioa/programs">Workforce Innovation and Opportunity Act</a>, which today provides most US federally-funded retraining. The WIOA allows participants to enrol directly in their preferred retraining services, without needing to first complete more general training. It also allows regions to offer more locally-relevant retraining and has tried to increase accountability by requiring more third-party evaluations.</p><p>Every year, ~500,000 participants take part in the WIOA&#8217;s &#8216;Adult&#8217; and &#8216;Dislocated Worker&#8217; streams, of whom <a href="https://www.pw.hks.harvard.edu/post/publicjobtraining">~200,000</a> receive training vouchers, at a cost of ~$500m.
As <a href="https://www.pw.hks.harvard.edu/post/publicjobtraining">noted</a> by David Deming and colleagues, this is a relatively low figure when one considers that the US government spends $25bn a year on <a href="https://en.wikipedia.org/wiki/Pell_Grant">Pell Grants</a> for undergraduate education. Most WIOA participants are <a href="https://www.dol.gov/agencies/eta/performance/results">low-income</a>, but their profiles vary. For example, most of those on the &#8216;Adult&#8217; stream are below or near the poverty line, with limited education or employment experience. In contrast, most of those on the &#8216;Dislocated Worker&#8217; stream have lost stable employment, for example in manufacturing, and are more likely to be older with more substantial work experience. Although not mandatory, people who receive welfare and public assistance are encouraged to apply and make up approximately one third of WIOA adult participants, according to a <a href="https://www.dol.gov/sites/dolgov/files/ETA/Performance/pdfs/PY2022/PY%202022%20WIOA%20and%20Wagner-Peyser%20Data%20Book.pdf">2022 evaluation</a>.</p><p>The skills that WIOA retraining programmes impart and their method of instruction <a href="https://www.nber.org/system/files/working_papers/w21659/w21659.pdf">vary considerably</a> across US states, owing to differences in the capacity of local retraining providers (who must bid for federal funding), employer needs, and political considerations, which can override more objective readings of labour demand. The result is a patchwork. In <a href="https://www.nber.org/system/files/working_papers/w21659/w21659.pdf">2015</a>, Burt Barnow and Jeffrey Smith distinguished between different types of retraining, from small group sessions focussed on <a href="https://www.nber.org/system/files/chapters/c10261/c10261.pdf">basic skills</a> to subsidised apprenticeships.
In the early days of the Clinton era Workforce Investment Act, 47% of participants enrolled in formal, classroom retraining programmes, but this figure ranged widely across states, from 14-96%. According to data from 2023-24, less than 10% of WIOA training involved <em>paid </em>on-the-job training, and just 2% involved apprenticeships.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2RoS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2RoS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png 424w, https://substackcdn.com/image/fetch/$s_!2RoS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png 848w, https://substackcdn.com/image/fetch/$s_!2RoS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png 1272w, https://substackcdn.com/image/fetch/$s_!2RoS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2RoS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png" width="1280" height="1227" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1227,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2RoS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png 424w, https://substackcdn.com/image/fetch/$s_!2RoS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png 848w, https://substackcdn.com/image/fetch/$s_!2RoS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png 1272w, https://substackcdn.com/image/fetch/$s_!2RoS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4658745-9fc3-4e73-8c75-48b8453af971_1280x1227.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Source: Burt Barnow and Jeffrey Smith; Venus Krier</figcaption></figure></div><p>Taken together, over the last 80 years, we have seen a steady evolution in US public retraining, from more centralised, New Deal-style programmes to more decentralised efforts targeting local private sector employers. The goal has shifted from addressing widespread unemployment to reducing poverty and back towards a more universal employment service that integrates retraining with other policies, such as job search support.</p><p>We can expect further changes.
The Trump Administration has <a href="https://www.dol.gov/sites/dolgov/files/general/budget/2026/FY2026BIB.pdf">proposed</a> merging the WIOA&#8217;s programs into a single funding stream titled &#8216;Make America Skilled Again&#8217;, which may result in a <a href="https://www.jff.org/fact-sheet-trump-administrations-fy26-budget-request/">significant funding cut</a> and replace much of today&#8217;s federally funded training with apprenticeships. However, the proposal will likely be altered significantly as Congress deliberates on its details. So the future of US federal retraining, and how it may respond to AI, is still very much to be determined.</p><h1><strong>3. Does public retraining work? The evidence base</strong></h1><p>Consider the hypothetical example of &#8216;Tony&#8217;, who used to be employed at a mid-sized auto parts manufacturing plant in Ohio, until his employer steadily introduced a wave of industrial robots. As documented by Thomas Philippon in <a href="https://www.hup.harvard.edu/books/9780674260320">The Great Reversal</a>, many US localities are dominated by a handful of &#8216;good&#8217; employers. The absence of alternative options reduces worker bargaining power and wages. It also makes layoffs harder to recover from, as displaced employees like Tony must compete against many others, most of whom also lack transferable skills to pursue other roles.</p><p>After a prolonged job search, Tony enters a Workforce Innovation and Opportunity Act retraining programme, run by his local <a href="https://www.dol.gov/general/topic/training/onestop">American Job Center</a>. Owing to his low-income status, he is given priority. Upon arrival, the Center screens Tony to see if he qualifies for the <a href="https://www.dol.gov/agencies/eta/workforce-investment/adult">Dislocated Worker</a> stream. Once confirmed, he is asked to participate in maths, problem-solving, and reading assessments, as well as an aptitude test.
From there, a counsellor reviews labour market data and, recognising Tony&#8217;s background in the auto trade, proposes a variety of skilled trades. She also offers him the chance to reskill into a new sector, like medical assistance. If Tony is under pressure to return to work quickly, he may move directly to job search assistance and on-the-job retraining. Alternatively, he may pursue longer classroom retraining at a community college or technical school. If all goes well, he will have regular meetings with his counsellor, participate in soft skills workshops and networking events, and will land a secure new job, with regular progress check-ins.</p><p>In reality, success stories like Tony&#8217;s are rare. In 2016, the year of Donald Trump&#8217;s first election, David Autor, David Dorn &amp; Gordon Hanson published &#8216;<a href="https://www.nber.org/papers/w21906">The China Shock</a>&#8217; - arguably the most impactful US economics paper of the past decade. The study demonstrated that, since the 1990s, import competition from China had devastated large parts of the American workforce, particularly regions focussed on manufacturing textiles, furniture, toys and other light goods. The shock reverberated across communities, triggering brain drain and depressing economic and social prospects for a generation. Meanwhile, other sectors, regions and employees benefited from cheaper imports.</p><p>The China Shock, and the wider <a href="https://www.sciencedirect.com/science/article/pii/S0022199624000369">technology-based automation</a> that was occurring in these sectors, led to a glut of displaced workers and a stream of youth in search of alternative employment opportunities. This was the sort of challenge that the Clinton-era <a href="https://www.dol.gov/agencies/eta/wioa">Workforce Investment Act</a> and the Obama-era <a href="https://www.dol.gov/agencies/eta/wioa/programs">Workforce Innovation and Opportunity Act</a> were designed to address.
However, Autor and colleagues showed that many displaced employees either failed to find employment or were forced to take up new service-sector roles, for example as cashiers or security guards, that were often lower-skilled, lower-paid and less rewarding.</p><p>These findings chime with the more formal evidence base on US public retraining programmes. To evaluate retraining programmes, researchers expend <a href="https://www.dol.gov/sites/dolgov/files/ETA/Performance/pdfs/2025%2002%2012%20-%20WIOA%20State%20Perf%20Narr%20Rep%20PY23%20(final).pdf">considerable effort</a> to track key variables, including the proportion of participants who find work shortly after exiting, the proportion who remain employed for at least six months, and their average earnings. In general, researchers have failed to show any statistically significant benefit to these outcomes.</p><p>Teasing out <em>why </em>- and what this means for future retraining programmes, including for AI - is difficult. One key challenge is <em>non-random selection</em>. The population that takes part in retraining is not representative of the wider population of people who have been displaced.
This means that we <a href="https://www.aei.org/economics/whats-the-real-impact-of-employment-programs/">do not know</a> the extent to which participants&#8217; subsequent labour market outcomes are due to the impact of the retraining programme (or lack thereof), or other characteristics that are more common among participants, such as a willingness to take part in retraining in the first place.</p><p>To address this issue, researchers use <a href="https://www.richmondfed.org/publications/research/econ_focus/2022/q1_economic_history">quasi-experiments</a> that aim to approximate randomised controlled trials by <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC6086368/">matching</a> a group of people who <em>did </em>participate in public retraining programmes - say 10,000 middle-aged men from rural districts - with a similar group that didn&#8217;t. However, to do this well, researchers need to know what characteristics are most relevant to future labour market outcomes - prior education? proximity to a nearby city? - so that they can control for them. And many potentially important social or psychological characteristics are impossible to reliably capture in datasets. On top of this, researchers must try to account for the huge variance in the focus, format, and resources of different states&#8217; programmes. As a result, some researchers <a href="https://www.aei.org/articles/heres-a-kind-of-job-training-program-that-works/">conclude</a> that it is impossible to make reliable causal claims about why public retraining is, or isn&#8217;t, effective.</p><p>The evidence that does exist provides cause for skepticism.
For example, the <a href="https://www.gao.gov/assets/hehs-96-40.pdf#:~:text=Not%20Significantly%20Greater%20Than%20Control,of%20the%20intervening%20years%2C%20but">National Study</a> evaluating the Reagan-era <a href="https://www.gao.gov/assets/hehs-96-40.pdf#:~:text=The%20NJS%20showed%20mixed%20results,Than%20Control%20Group%20Earnings%20After">Job Training Partnership Act</a> involved a genuine randomised controlled trial that ran from 1987 to 1992, with a representative sample of more than 20,000 participants. It found no statistically significant improvement in employment rates, employment duration, or earnings. In 2019, a 10-year <a href="https://www.mathematica.org/projects/wia-gold-standard-evaluation#:~:text=Key%20findings%20from%20the%2030,impact%20study%20include">evaluation</a> of the Workforce Investment Act and the Workforce Innovation and Opportunity Act found that, while intensive one-on-one career counselling <em>did</em> improve employment and earnings outcomes, the retraining streams did not.</p><p>As of 2023, the most recent data available, 70% of participants in WIOA <a href="https://www.dol.gov/agencies/eta/performance/wioa-performance">were employed</a> in the 2nd and 4th quarters after finishing their retraining programmes. But these outcomes are not compared to a control group, so we don&#8217;t know if, or to what extent, WIOA is truly improving them. Even when WIOA retraining is helping people find jobs, <a href="https://www.pw.hks.harvard.edu/post/publicjobtraining">research</a> by David Deming and colleagues suggests that ~40% of participants are being trained into &#8216;low-wage&#8217; support roles, particularly in the healthcare sector. The most common roles, such as nursing assistants, come with an annual salary of less than $25,000.
Demand for these roles is high, which explains the WIOA&#8217;s focus on them, but they often offer little scope for career growth.</p><p>There is even evidence that some retraining programmes may hurt participants. For example, a <a href="https://mathematica.org/~/media/publications/pdfs/labor/taa_benefits_costs.pdf">2012 </a>evaluation of the US <a href="https://en.wikipedia.org/wiki/Trade_Adjustment_Assistance">Trade Adjustment Assistance</a> programme, which provides retraining to workers displaced by outsourcing and trade, found that participants had lower employment rates in the two years after they were laid off, compared to similar workers who did not participate, potentially due to the opportunity cost of not being able to apply for more immediate work opportunities. Even four years after losing their job, TAA participants were underemployed and earned slightly less than non-participants.</p><h1><strong>4. Why does retraining fail?</strong></h1><p>In the absence of clear causal evidence, researchers are left to speculate as to why US public retraining programmes have underwhelmed.</p><p>A first challenge relates to the <em>participants </em>and their ability and willingness to take part. Some potential participants may avoid retraining, or drop out, due to the costs involved, which range from transport to arranging childcare - single parents are over-represented among participants. These cost pressures are particularly strong for candidates who are still in work but at risk of losing their jobs. For those <a href="https://www.federalreserve.gov/publications/2023-economic-well-being-of-us-households-in-2022-expenses.htm">with little savings</a>, even the offer of a payment to take part in retraining may be insufficient.</p><p>In other instances, there may be a mismatch between the training and career paths on offer and what candidates are interested in, or capable of.
For example, older workers, often close to retirement, have been <a href="https://www.brookings.edu/articles/digitalization-and-the-american-workforce/">overrepresented</a> in some of the jobs displaced by digitisation and may be less enticed by retraining into a brand-new sector. Retraining participants are also <a href="https://www.dol.gov/agencies/eta/performance/results/narrative-quickview">disproportionately likely</a> to have been homeless or an offender, or to lack the basic skills that longer classroom training requires.</p><p>All participants may be bewildered by the choices on offer. As David Deming and colleagues <a href="https://www.pw.hks.harvard.edu/post/publicjobtraining">note</a>, the WIOA funds ~7,000 Eligible Training Providers and ~75,000 programmes in more than 700 occupational fields. Although it aspires to provide &#8216;informed consumer choice&#8217; via its voucher system, the websites describing different programmes can quickly overwhelm candidates, while failing to provide the comparable programme information and performance data that people need.</p><p>A second challenge relates to <em>training providers</em> and their ability to offer a high-quality service that is well-matched to local employers&#8217; needs. Experts note huge variance in the quality and format of local training providers, but with such a large number of providers, the evidence base does not allow us to reliably tease out the good from the bad. Training providers also struggle with the bureaucracy that public programmes entail and the <a href="https://www.ilo.org/media/411786/download">challenge</a> of ensuring that the skills they provide are useful in an <a href="https://cep.lse.ac.uk/pubs/download/sercdp0121.pdf">ever more specialised</a> economy, where many skills are not easily transferable. 
Providers also need to look beyond the current labour market and anticipate <em>future</em> skills demands - a task that has always been hard, if not impossible, and one that AI is now making harder.</p><p>A final challenge is that there may simply not be <a href="https://journals.sagepub.com/doi/abs/10.1177/0032329294022003005">enough skilled jobs</a> for people to retrain into. The typical question for workers looking to retrain is not: &#8220;<em>How do I find employment</em>?&#8221;, but rather &#8220;<em>How do I get a more secure, better-paid job</em>?&#8221; Past technologies did not increase the aggregate unemployment rate, but there is <a href="https://www.aeaweb.org/articles?id=10.1257/jep.33.2.3">evidence</a> that they did lead to a short- to medium-term reduction in the number of &#8216;skilled&#8217; occupations for workers to retrain into. In the AI era, similar challenges could emerge if, for example, new university graduates were unable to find the kind of role, or career path, they expected.</p><h1><strong>Retraining &amp; AI: Four ideas</strong></h1><p>What does this mean for concerns about AI? At a minimum, we should avoid assuming that public retraining programmes will be a useful response. The baseline hypothesis is probably that they won&#8217;t. However, there should be ways to make them more useful.</p><p>Here are four ideas:</p><p><strong>1. Develop better labour market projections</strong></p><p>There is vast <a href="https://www.governance.ai/analysis/predicting-ais-impact-on-work">uncertainty</a> about how quickly AI capabilities will develop, <a href="https://www.aipolicyperspectives.com/p/an-agents-economy">diffuse through the economy</a>, and affect workers. This uncertainty will not disappear any time soon. But AI labs and policymakers could make it easier for retraining providers to understand the jobs that may get displaced, prove more resilient, or emerge. 
At the moment, the US <a href="https://www.bls.gov/emp/">Bureau of Labor Statistics</a> provides high-level forecasts of demand for different occupations. AI labs could work with policymakers and researchers to develop much richer and more granular forecasts that draw on, among other sources, the latest AI capability evaluations, <a href="https://www.anthropic.com/economic-index">insights from how users are querying LLMs</a>, online job postings, and government surveys of employers and graduates.</p><p><strong>2. Experiment with new retraining approaches</strong></p><p>Almost <a href="https://www.dol.gov/sites/dolgov/files/ETA/Performance/pdfs/PY%202023%20Q4%20WIOA%20and%20Wagner-Peyser%20Quarterly%20Report.pdf">80%</a> of Workforce Innovation and Opportunity Act retraining takes place fully in-person, while just 7% takes place fully online. This creates barriers for people in more remote regions and hinders innovation. It will be difficult to usefully change this, because delivering high-quality online education is hard. As the scholar Mary Burns noted in an evidence <a href="https://learningportal.iiep.unesco.org/en/library/background-paper-prepared-for-the-2023-global-education-monitoring-report-technology-and">review</a> for UNESCO, &#8220;<em>few innovations have generated such excitement and idealism - and such disappointment and cynicism - as (digital) technology in education.</em>&#8221;</p><p>But now is the time to experiment. Education providers are learning from their failures, such as the early Massive Open Online Courses that crudely transposed offline learning content. Some providers are shifting to hybrid formats, while others are developing targeted micro-credential courses. 
In the AI community, labs are training large language models, and the tutors based on them, to be more <a href="https://blog.google/outreach-initiatives/education/google-learnlm-gemini-generative-ai/">&#8216;pedagogically inspired&#8217;</a>, while educators and students are using multimodal AI to personalise learning materials to the language, format, or substance they want. Against this backdrop, there should be opportunities to design AI-enabled retraining programmes that are more dynamic and better able to respond to labour market needs. Another goal of such retraining, at least for some participants, could be learning how to use AI systems most effectively.</p><p><strong>3. Collate better evidence about what works</strong></p><p>At the moment, we have little evidence about what works, or doesn&#8217;t, with respect to public retraining programmes. This is particularly true for training provided to workers displaced by new technologies. Future evaluations should target this group, with a focus on those affected by AI, and examine how outcomes vary with factors such as programme type, age, gender, geography, prior education, and existing skills. This will require better data collection by public sector institutions, with a focus on RCT-style evaluations, but also smaller-scale experiments that academics or companies could run, with standardised measures to harmonise the two.</p><p>One area to explore is the <a href="https://www.journals.uchicago.edu/doi/abs/10.1086/717932">nascent positive evidence</a> on training programmes that are co-designed with employers, with specific sectors in mind. Some RCTs in this area show positive effects on earnings and employment, but the trials are small, typically including a few thousand people, and we do not know if the results will generalise across geographies and sectors.</p><p><strong>4. 
Consider goals beyond employment</strong></p><p>Finally, it may be time to <a href="https://academic.oup.com/book/57593/chapter-abstract/469200842?redirectedFrom=fulltext">reevaluate</a> whether &#8216;work&#8217; should remain the central way to measure a person&#8217;s economic contributions and the central goal of any retraining programme. As Tom Rachman <a href="https://www.aipolicyperspectives.com/p/human-learning-in-the-age-of-machine">wrote</a> in a recent essay: educational policy tends to tinker with the &#8216;What&#8217; of learning (curriculum) and fret about the &#8216;How&#8217; (methods). But it&#8217;s the <em>Why</em> that demands its boldest recalibration since the Enlightenment.</p><p>In theory, education can serve <a href="https://www.researchgate.net/publication/41529875_Good_Education_in_an_Age_of_Measurement_On_the_Need_to_Reconnect_with_the_Question_of_Purpose_in_Education">many functions</a>, from boosting an individual&#8217;s agency to promoting national unity and assimilation. But preparing students for a career has come to dominate both retraining and broader education. If scenarios where AI has more dramatic economic effects materialise, we need to think about what other knowledge, skills, and values people will need to navigate this transition, and the other ways that they can contribute to society. For example, future training programmes could include best practices on how to use AI agents, or how to improve community life and ward against atomisation. This more flexible understanding of &#8216;work&#8217; and &#8216;training&#8217; could give us a better chance of navigating the AI economic transition in a way that preserves worker livelihoods, opportunity, and dignity.</p><p><em>I would like to acknowledge and thank Burt Barnow for his guidance and contributions in developing this essay. 
Thank you to Venus Krier for the illustrations contained in this piece. </em></p>]]></content:encoded></item><item><title><![CDATA[AGI, Government, and the Free Society ]]></title><description><![CDATA[Navigating the tightrope between authoritarianism and anarchy]]></description><link>https://www.aipolicyperspectives.com/p/agi-governments-and-free-societies</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/agi-governments-and-free-societies</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Mon, 09 Jun 2025 09:11:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Y2I-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>In this essay, S&#233;b Krier explores how AGI might affect the delicate balance in power between state and society. 
The essay is based on a <a href="https://arxiv.org/pdf/2503.05710">recent paper</a> by S&#233;b, Justin Bullock and Samuel Hammond.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Y2I-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Y2I-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png 424w, https://substackcdn.com/image/fetch/$s_!Y2I-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png 848w, https://substackcdn.com/image/fetch/$s_!Y2I-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png 1272w, https://substackcdn.com/image/fetch/$s_!Y2I-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Y2I-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png" width="1456" height="1097" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1097,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:11827773,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/165359234?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Y2I-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png 424w, https://substackcdn.com/image/fetch/$s_!Y2I-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png 848w, https://substackcdn.com/image/fetch/$s_!Y2I-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png 1272w, https://substackcdn.com/image/fetch/$s_!Y2I-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe645aa21-e0d0-44fd-a08d-5621ef52984b_2464x1856.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button></div></div></div></a></figure></div><p>As <a href="https://www.jstor.org/stable/2139277">highlighted by</a> Woodrow Wilson in 1887, governments are composed of two aspects - politics and administration - which, though often confused in practice, should be clearly distinguished. Politics is defined by who decides, who makes the rules, and how these individuals are selected. Administration describes <em>government in action</em> - the efficient and systematic execution of laws and public duties. Over time, these concepts have evolved to accommodate different forms of government, from tribes and divine monarchies to republics. The development of free and liberal democratic societies is a relatively recent phenomenon, emerging in the 16th and 17th centuries with the rise of European nation-states.</p><p>Throughout history, new leaps in technology have altered and destabilised these forms of government. In the 15th and 16th centuries, the <a href="https://academic.oup.com/qje/article-abstract/126/3/1133/1855353">diffusion of the printing press and a growing mercantile class</a> enabled administrative record keeping, which, alongside broader demands for property rights and the enforcement of contracts, laid the groundwork for more centralised forms of governance. More recently, the Arab Spring and the contemporary rise of populism in Western democracies have been attributed, <a href="https://journalistsresource.org/economics/research-arab-spring-internet-key-studies/">at least in part,</a> to the capacity of the internet and social media to potentiate mass mobilisations against incumbent political establishments.</p><p>AI heralds the next major technological shift. 
While its precise trajectory and diffusion remain uncertain, many researchers and forecasters anticipate the advent of AGI in a matter of years rather than decades or centuries. This raises the question of how AGI would affect liberal democratic societies. Daron Acemoglu and James Robinson&#8217;s &#8216;narrow corridor&#8217; <a href="https://www.penguin.co.uk/books/305520/the-narrow-corridor-by-robinson-daron-acemoglu-and-james-a/9780241314333">framework</a> holds that free and open societies have traditionally depended upon maintaining a delicate balance between the relative powers of society and the state. This equilibrium avoids an overly powerful, despotic state on the one hand and a chaotic, absent state that is too weak to govern or provide services on the other. Liberty thrives in the narrow corridor between these extremes.</p><p>Historically, liberal societies maintained this precarious equilibrium through constitutional constraints, checks and balances, and the rule of law. However, this equilibrium has never been static. Rather, technological and social change have forced repeated renegotiations of the social contract - from the rise of mass politics in the industrialising West to the welfare state reforms of the early 20th century.</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4031002">Recent empirical research</a> by Ryan Murphy and Colin O&#8217;Reilly has questioned whether countries actually follow these trajectories and challenged some of the proposed mechanisms, such as the &#8216;Red Queen effect&#8217;, where the state and society are thought to be locked in a constant, co-evolutionary race. But the core idea - that a balance must be maintained between state power and societal autonomy - still serves as a useful illustrative heuristic when we consider the potential impacts of AGI. 
In particular, it helps to highlight the critical trade-offs that we can expect to face between efficiency and accountability, collective coordination and individual freedom, and technological capability and democratic control.</p><p><strong>How could AGI strengthen free societies?</strong></p><p>AGI will likely offer an absolute advantage over human decision-making in terms of scalability, cost, and quality. Artificial bureaucrats could draw on specialised sub-agents to dynamically switch between data interpretation, risk modelling, and stakeholder communication, dramatically speeding up lengthy tasks like interpreting legislation, carrying out environmental impact analyses, or detecting fraud.</p><p>AGI agents could also lead to more <em>equitable</em> decision making. Human bureaucracies are riddled with subjectivity. Some of this stems from their use of traditional rules-based automation that leads to disproportionately harsh outcomes for marginalised groups in domains like tax enforcement or welfare eligibility. These &#8216;dumb&#8217; systems can miss critical context and are typically only deployed in situations - such as initiating audits of recipients of Earned Income Tax Credits - that are amenable to such rules-based automation in the first place. In contrast, more general AI systems will be able to grapple with the complexities and idiosyncrasies of the taxes filed by high-income individuals, potentially reducing disparities in how laws are enforced.</p><p>AGI could also improve how governments secure democratic inputs, leading to more feedback on, and potentially control over, what governments do. 
Gudi&#241;o-Rosero and colleagues recently <a href="https://arxiv.org/abs/2405.03452">explored</a> how &#8220;digital twins&#8221; that simulate individual citizens&#8217; views and &#8220;represent&#8221; their policy preferences could lead to an &#8220;augmented democracy&#8221;, in similar fashion to how AI agents may soon start to represent individuals in commercial transactions. This work stops far short of creating actual digital twins for political representation, partly due to limitations in capturing individual preferences. But future AI systems, enhanced with long-term memory, could enable higher-fidelity simulations. Governments could use these simulations to draw up more effective political agendas and experiment with different policy ideas, while also raising questions about how the roles of elected officials should evolve.</p><p><strong>How could AGI </strong><em><strong>undermine </strong></em><strong>free societies?</strong></p><p>In <a href="https://yalebooks.co.uk/book/9780300246759/seeing-like-a-state/">Seeing Like a State</a>, the late political scientist and anthropologist James C. Scott offered a critical take on government efforts to make society <em>legible</em>, from birth and death registries to financial reporting requirements. AGI could dramatically enhance such efforts, enabling governments to analyse vast data streams in real time and monitor and predict societal trends, risks, and individual behaviours, at a granularity and accuracy that are far beyond current levels.</p><p>This could make government decision-making radically more efficient and data-driven. But it could also enable unprecedented surveillance and control over citizens, stifling dissent. It could also dramatically reduce the cost of monitoring whether people are complying with laws. 
For example, governments could pass CCTV camera feeds through a multimodal AI model for continual analysis, leading to a form of &#8216;perfect enforcement,&#8217; where even minor infractions become subject to consistent punishment. While this might seem beneficial from a rule-of-law perspective, it raises significant concerns for individual freedom and the quality of governance.</p><p>Take, for example, the National Highway Traffic Safety Administration&#8217;s recall of Tesla&#8217;s &#8216;Full Self-Driving&#8217; software update <a href="https://static.nhtsa.gov/odi/rcl/2022/RCLRPT-22V037-4462.PDF">in 2022</a>, because it was carrying out &#8216;<a href="https://lifelanes.progressive.com/what-is-a-rolling-stop/">rolling stops</a>&#8217; - something that human drivers regularly do when an intersection is empty. The advent of self-driving technology could make every rolling stop legible, imposing a level of perfect compliance that many humans would consider draconian and inefficient. (Although such perfect compliance may come with a silver lining - exposing outdated or poorly crafted laws that rely on lenient enforcement and human discretion.)</p><p>Delegating decisions to AGI also raises concerns about the loss of moral accountability in public administration. For example, while AGI agents may excel at optimising policies for efficiency, they may lack the ethical nuance required to address competing societal values. This disconnect between computational optimisation and human morality risks eroding public trust.</p><p>AGI could also undermine free societies by empowering <em>non-state </em>actors. In more positive scenarios, it could enable citizens to better understand and advocate for policy positions, fact-check officials, and usher in new kinds of public deliberation. However, individuals and groups could also use AI agents to orchestrate harmful actions, such as to manipulate public opinion or coordinate insurgencies. 
They could also create opaque financial communication methods that make the economy <em>less</em> legible to governments, rather than more - much as cryptocurrency can be used to launder money despite the transparency of its ledger.</p><p><strong>How to secure free societies</strong></p><p>To secure the narrow corridor, states must neither blindly hand off power to AI systems nor clamp down on them in ways that stifle innovation. On the technology front, novel privacy-enhancing technologies could help individuals to maintain autonomy and privacy in the face of increasingly pervasive state monitoring. Investments in interpretability could help to ensure that AGI systems operate transparently and remain accountable for their decisions.</p><p>On the institutional front, governments could embrace hybrid structures that combine AGI&#8217;s computational power with the nuanced judgment and accountability that human administrators provide. This might include equipping public institutions with their own advanced AI tools for functions like biosurveillance, cyberdefense, and regulatory oversight, ensuring they are not outpaced by threats.</p><p>Governments could also look to reinforce participatory democratic processes by enabling large-scale deliberative platforms, real-time citizen feedback systems, and representative digital twins, while designing robust safeguards to ensure that they genuinely enhance, rather than undermine, democratic accountability.</p><p>Perhaps most importantly, securing the narrow corridor in an age of AGI will require an epistemic shift in how we approach the governance of emerging technologies. Rather than passively reacting to technological disruptions, policymakers and the public must cultivate a greater capacity for anticipatory governance. This means proactively imagining and stress-testing our institutions for AGI&#8217;s transformative potential. 
To do this, we can use tools like scenario planning, threat modelling, and forecasting - drawing on AI's own emerging abilities in these areas. </p><p><em>Thanks to Justin Bullock and Samuel Hammond for authoring the <a href="https://arxiv.org/pdf/2503.05710">original paper</a> and to Conor Griffin for editing support. </em></p>]]></content:encoded></item><item><title><![CDATA[Human Learning in the Age of Machine Learning]]></title><description><![CDATA[Rethinking the why]]></description><link>https://www.aipolicyperspectives.com/p/human-learning-in-the-age-of-machine</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/human-learning-in-the-age-of-machine</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Fri, 02 May 2025 08:31:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4ANt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>We are glad to share another guest post from <a 
href="https://www.linkedin.com/in/tom-rachman/?originalSubdomain=uk">Tom Rachman</a>. In this essay, Tom explores the human motivation to learn, the evolution of modern education systems, and how both could evolve with more powerful AI. Please let us know your thoughts &amp; critiques. Like all the pieces you read here, it is written in a personal capacity. You can read Tom&#8217;s earlier essay on how AI may affect human behaviour, <a href="https://www.aipolicyperspectives.com/p/ai-and-behaviour-change">here</a>.</em> </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4ANt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4ANt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png 424w, https://substackcdn.com/image/fetch/$s_!4ANt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png 848w, https://substackcdn.com/image/fetch/$s_!4ANt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png 1272w, https://substackcdn.com/image/fetch/$s_!4ANt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!4ANt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png" width="1280" height="894" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:894,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2005232,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/162573922?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4ANt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png 424w, https://substackcdn.com/image/fetch/$s_!4ANt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png 848w, https://substackcdn.com/image/fetch/$s_!4ANt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png 1272w, https://substackcdn.com/image/fetch/$s_!4ANt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b162293-af73-408c-8adf-c3f0aae71038_1280x894.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Source: Venus Krier </figcaption></figure></div><p>A baby giggles on a picnic blanket in the park, her gaze darting around, before she blinks at a dazzle of sun. The child is still oblivious to the name of her planet, or that dinosaurs once roamed here, or that the blue above has black on the other side. But her lifelong pursuit&#8212;the project of all humanity&#8212;is underway: learning.</p><p>Only, what for?</p><p>Today (and especially tomorrow), children will blink into existence as the latest joiners of a once-superlative cognitive tribe, now somewhat diminished. In the talent we admire most, and rely on entirely, we&#8217;re falling behind machinery. When people feel <a href="https://psycnet.apa.org/manuscript/2016-30868-001.pdf">ineffectual</a>, they plunge in mood and effort. At the margins, you <a href="https://x.com/tylercowen/status/1845656495737745816">glimpse</a> The Great Dejection already.</p><p>&#8220;I&#8217;ve grown not to entirely trust people who are not at least slightly demoralized by some of the more recent AI achievements,&#8221; Tyler Cowen <a href="https://x.com/tylercowen/status/1845656495737745816">said</a>, while Ethan Mollick <a href="https://x.com/emollick/status/1845921836288053741">remarked</a>: &#8220;If you haven&#8217;t had at least a minor crisis (What does this mean for my job? What does it mean for my kids&#8217; jobs? What does it mean to think?) you probably haven&#8217;t used AI enough.&#8221;</p><p>Such anxiety feeds into a thought haunting AI progress: <em>What are <strong>we </strong>for? </em>It&#8217;s a species-level interrogation that helps explain why philosophers find employment in the tech sector nowadays.
The public&#8212;once they reckon with what&#8217;s coming&#8212;may prefer therapists. Thankfully, <a href="https://unherd.com/2025/01/chatbots-are-not-your-friends/">chatbots</a> can listen to our woes.</p><p>The emotive response is to condemn technology, as if it might be stopped. More pragmatic is to consider what it reveals. For instance, if students are cheating with generative AI, does this suggest that education&#8217;s objectives have become misaligned with its incentives?</p><p>Today, education&#8217;s main goals are job-preparation, socialization, and wellbeing. Yet in all three, education falters. Consider wellbeing: the mental health of school-age children has <a href="https://www.anxiousgeneration.com/research/the-evidence">deteriorated</a> for more than a decade. Socialization seems an elusive goal too, now that &#8220;real life&#8221; is screen life for many. And job-preparation is a precarious promise, given how many occupations are contingent on what AI does to the world.</p><p>Facing all this, educational policy tends to tinker with the <em>What </em>of learning (curriculum) and fret about the <em>How </em>(methods). But it&#8217;s the <em>Why</em> that demands its boldest recalibration since the Enlightenment.</p><h2>HOW WE GOT HERE</h2><p>Socrates brought a cup of poisonous hemlock brew to his lips, its whiff of mouse urine pervading his nostrils. He swallowed, then paced till his feet grew numb, whereupon he stretched out, awaiting the consummation of a death sentence for corrupting Athenian youth with his teachings.
Reclining there, he embodied one last Socratic lesson: that education is a matter of control and ethics&#8212;the original alignment problem.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Q60H!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Q60H!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Q60H!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!Q60H!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!Q60H!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Q60H!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png" width="1024" height="1536" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Q60H!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Q60H!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!Q60H!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!Q60H!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F753aa3f5-a336-4a19-95fb-78f5c93c1c77_1024x1536.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Society creates intelligent agents (its young), and aspires to design their behaviour. But how to protect against data-poisoning? Which reward-functions to set? And how to ensure that those agents never go berserk?</p><p>In prehistory, education meant trailing after family, absorbing skills, myths and hearsay about the larger world. But the technology of writing led to formal schooling.
Sumerians established <em><a href="https://www.laphamsquarterly.org/sites/default/files/images/maps/educationmapfinal_0.jpg">eduba</a></em> to train scribes in cuneiform, Ancient Egypt constructed <em>per-ankh </em>houses of learning, the Islamic Golden Age saw the flourishing of <em>kuttab </em>schools, Medieval Italy developed <em>scuole d&#8217;abaco</em>, and Aztec Tenochtitlan ran <em>calmecac </em>for the nobility<em>.</em></p><p>Technology jolted learning afresh when the printing press multiplied the quantity of information and diffused it, weakening institutional control over knowledge. The Industrial Age saw further updates, with more machinery requiring more skilled workers, and an engineering ethos infiltrating classrooms. &#8220;Teach these boys and girls nothing but Facts,&#8221; the data-processing schoolmaster, Thomas Gradgrind, says in Dickens&#8217; novel <a href="https://www.gutenberg.org/files/786/786-h/786-h.htm">Hard Times</a> (1854). &#8220;You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.&#8221;</p><p>To gather the Facts of education itself, schools standardized testing, tracking pupil progress and sifting the young according to their apparent talents. Standardized assessments, besides stressing children, became a debatable <a href="https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart.html">proxy</a> for learning. Orwell recalled one of his teachers yanking pupils&#8217; hair, kicking their shins, and making them plead with hands raised to affix dates to wars of which they knew nothing. &#8220;The whole process was frankly a preparation for a sort of confidence trick,&#8221; he <a href="https://orwellsociety.com/orwell-the-essayist-such-such-were-the-joys-1947/">wrote</a>.
&#8220;Your job was to learn exactly those things that would give an examiner the impression that you knew more than you did know, and as far as possible to avoid burdening your brain with anything else.&#8221;</p><p>Foucault <a href="https://archive.org/details/foucault-michel-discipline-and-punish-the-birth-of-the-prison-1977-1995/page/227/mode/2up?q=schools">noted</a> the resemblance of schools to prisons, characterizing them as institutions to regulate and enforce control. But, while it&#8217;s na&#239;ve to ignore power, it&#8217;s na&#239;ve to see only power. People also wish children to thrive, and they care about strangers&#8217; offspring too. During the Enlightenment, this humanistic vision seeped into educational theory, notably by the hand of Rousseau, who considered children as virtuous creatures that society befouls. &#8220;All is good upon leaving the Maker&#8217;s hands; all degenerates in the hands of man,&#8221; he wrote in his treatise on education, <a href="https://philo-labo.fr/fichiers/Rousseau%20-%20Emile%20(Grenoble).pdf">&#201;mile</a>.</p><p>Amid the bloodshed of the Napoleonic wars, a disorganized but kindly Swiss educator sought to apply Rousseau&#8217;s ideals<em>. </em>Johann Heinrich Pestalozzi&#8212;brow furrowed in a muddle of compassion and anxiety&#8212;stood before 80 uneducated orphans, far too numerous to teach at once. So he had them draw, write, and learn by their own propulsion. &#8220;It quickly developed in the children a consciousness of hitherto unknown power, and particularly a general sense of beauty and order,&#8221; he <a href="https://archive.org/stream/howgertrudeteach00pestuoft/howgertrudeteach00pestuoft_djvu.txt">wrote</a>. 
&#8220;It was the tone of unknown powers awakened from sleep; of a heart and mind exalted with the feeling of what these powers could and would lead them to do.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!txh3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!txh3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png 424w, https://substackcdn.com/image/fetch/$s_!txh3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png 848w, https://substackcdn.com/image/fetch/$s_!txh3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!txh3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!txh3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png" width="785" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:785,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!txh3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png 424w, https://substackcdn.com/image/fetch/$s_!txh3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png 848w, https://substackcdn.com/image/fetch/$s_!txh3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!txh3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8efcd4d7-7f3a-482b-a8a2-4735bb4cc165_785x1024.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Johann Heinrich Pestalozzi</figcaption></figure></div><p>Here was a fresh <em>Why</em> for learning: the discovery of beauty external and powers internal. This educational ideal spread via Pestalozzi&#8217;s German acolyte Friedrich Fr&#246;bel, who equated school to nourishing a garden of children, so named it Kindergarten. John Dewey&#8217;s progressive movement followed, as did Steiner&#8217;s Waldorf education, and the Montessori schools. Gradually, the humanistic Enlightenment <em>Why </em>(flourishing) merged with the mechanistic Industrial <em>Why</em> (training), a sometimes-awkward marriage that persists today.</p><p>Into this tense union, the internet barged. Learning had never been more available; writing and speech boomed.
Yet peculiar changes were afoot, with a range of human cognitive metrics falling from around 2012, <a href="https://www.ft.com/content/a8016c64-63b7-458b-a371-e0e1c54a13fc">arguably</a> because the deluge of digital inputs overwhelmed our minds, turning humans into a race of scatterbrains. Book-reading <a href="http://unherd.com/newsroom/the-decline-of-book-reading-is-more-serious-than-we-think">plummeted</a>, while teenagers&#8217; performance in science, reading and math dropped across the industrialized world, with many reporting an inability to concentrate. Adults were scoring lower in numeracy and literacy too. Then chatbots arrived.</p><h2>THE HOMEWORK APOCALYPSE</h2><p>According to a recent survey, <a href="https://www.hepi.ac.uk/2025/02/26/student-generative-ai-survey-2025/">92% of British undergraduates</a> are using AI, with nearly all doing so for assignments. Videos circulate online showing how to conceal homework fraud with AI apps that &#8220;humanize&#8221; chatbot text and evade educators&#8217; counterintelligence apps, which seek out dubious prose. It&#8217;s an arms race that leaves teachers in despair, not least because they&#8217;re <a href="https://www.theguardian.com/education/article/2024/jun/26/researchers-fool-university-markers-with-ai-generated-exam-papers">losing</a>.</p><p>The dilemma is this: Do you hold proudly to the educational values of before? Or decide that current testing is measuring little but the past? In recent generations, many kids who seemed maladapted because of tech obsession&#8212;early internet hackers, all-night gamers, social-media addicts&#8212;ended up succeeding in business, the arts, politics. Early adopters rule the world today. So maybe AI cheating is adaptive.</p><p>Moreover, the will to cheat one&#8217;s own learning is not the fault of tech. 
The American writer John Warner spent years <a href="https://www.insidehighered.com/blogs/just-visiting/everyone">posing</a> the following hypothetical question to his college classes: <em>If I offered you an &#8216;A&#8217; but you had zero work to complete, no more classes, and could never tell anyone&#8212;would you go for it?</em> In one course, the takers commonly exceeded 90%. That was back in 2013. AI may be satisfying a pent-up demand.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zPgA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zPgA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!zPgA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!zPgA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!zPgA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!zPgA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zPgA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!zPgA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!zPgA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!zPgA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb30e29fc-03aa-4923-9034-7f39c94a7bdc_1536x1024.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>The postwar expansion of higher education made the degree a minimum admission ticket to most professions. When student numbers ballooned and tuition costs rose, the educational <em>Why</em> for many wasn&#8217;t learning but earning. Most students still want an education. But even the most earnest face a collective-action problem: If your peers are all generating impeccable AI assignments, you may be penalized for &#8220;humaning it.&#8221;</p><p>Educators cannot dump assessments, though. They need metrics to track teaching efficacy, to motivate, and to propel students into roles where they&#8217;d thrive.
Ironically, technology is prompting a retreat to the past in some quarters, with <a href="https://www.timeshighereducation.com/opinion/sampled-vivas-are-pivotal-combating-ai-cheating">oral examinations</a>&#8212;which Oxford and Cambridge <a href="https://www.researchgate.net/publication/248939168_The_Shift_from_Oral_to_Written_Examination_Cambridge_and_Oxford_1700-1900">phased out</a> starting in the 18<sup>th</sup> century&#8212;now back in vogue.</p><p>The tricky part is that cheating and learning with AI may be proximate. While it&#8217;s dishonest to submit a chatbot essay as your own, is it wrong to pose your essay question to a chatbot, hear its insights, request that it pull up primary sources, assign its deep-research function to create a study guide, and pore over this, digging into the most-pertinent primary sources yourself, then posing follow-up questions to the AI? That process might be more instructive than battling with poorly written academic texts.</p><h3>AI TO THE RESCUE?</h3><p>A World Bank pilot program in Nigeria involving AI tutoring <a href="https://blogs.worldbank.org/en/education/From-chalkboards-to-chatbots-Transforming-learning-in-Nigeria">claimed</a> gains so striking (two years&#8217; learning in six weeks) that they seem unlikely to replicate. But another <a href="https://arxiv.org/ftp/arxiv/papers/2402/2402.09809.pdf">study</a>, of an AI-powered math tutor in Ghana, also saw meaningful success, claiming learning benefits equivalent to an extra year of study in eight months.</p><p>Personalized AI tutors may offer timid students the chance to pose questions they&#8217;d be shy to ask before their peers, or that the overtaxed teacher might lack the time (or ability) to resolve. 
Educational AI also allows students to probe a source, seeking clarifications or sharpening their comprehension, perhaps even voice-chatting with a virtual <a href="https://www.theguardian.com/education/2025/mar/06/the-english-schools-looking-to-dispel-doom-and-gloom-around-ai">Darwin</a>. AI apps could track individuals&#8217; learning in real-time rather than punctuating courses with tests, allowing educators to identify and respond dynamically to students&#8217; progress. And data-tracking at scale could generate a finer conception of human learning than social-science methods have yet produced, such that machine-learning helps humans learn how humans learn.</p><p>AI might help with testing too. Oral examinations are far more time-consuming for human teachers than written tests, but voice AIs could quiz any number of students, evaluating their understanding, and issuing marks alongside audio transcripts, score rationales, and future-learning advice. Voice-assistants might even salvage the at-home essay, obliging their purported authors to discuss the contents, assessing students&#8217; understanding of what they &#8220;wrote.&#8221;</p><p>Many teachers remain worried about where this is heading, envisioning classrooms where students gape at AI interfaces, barely interacting with their peers, let alone human teachers, who are kept around as behaviour police. Above all, educators <a href="https://danmeyer.substack.com/p/generative-ai-is-best-at-something">suspect</a> that developers misunderstand humans, with edtech companies building tools around what tech can do, not what students need.</p><p>Previous digital tools promised transformative effects too, professing that the internet would bring elite education to all, irrespective of where on Earth and who on earth they were. 
Yet many tech tools succeeded only for the tiny fraction of students who used them as prescribed (typically the top of the class anyway), a phenomenon known as &#8220;<a href="https://www.educationnext.org/5-percent-problem-online-mathematics-programs-may-benefit-most-kids-who-need-it-least/">the 5 percent problem</a>.&#8221;</p><p>AI can worsen learning too, as shown by <a href="https://arxiv.org/pdf/2409.09047v1#page=26.71">research</a> on German university students who learned coding with the help of chatbots. Those who sought AI explanations showed notable gains, but those who had the chatbot complete exercises undermined their own learning. More surprising is that AI might make us stupider while believing ourselves smarter: users tend to <a href="https://download.ssrn.com/2024/7/15/4895486.pdf">overestimate</a> how much they&#8217;ve learned with AI, mistaking machine intelligence for their own.</p><p>An even more alarming prospect is <a href="https://www.theintrinsicperspective.com/p/brain-drain">cognitive atrophy</a>, where users offload <a href="https://www.mdpi.com/2075-4698/15/1/6">critical thinking</a> and <a href="https://arxiv.org/pdf/2410.03703">creativity</a> to AI. If humans are able to pursue higher-order thinking by dumping intellectual drudgery onto machines, that would be fine. However, what constitutes higher-order thinking is debatable.  A common pacifier is that humans collaborating with AI will be stronger than either alone: the &#8220;centaur model.&#8221; They said that of chess once. But these days, Magnus Carlsen would only slow down an AI grandmaster.</p><p>Technology is about making what&#8217;s hard easier. And learning is hard, even <a href="https://psycnet.apa.org/doiLanding?doi=10.1037%2Fbul0000443">aversive</a>. Yet the practice is not like doing the laundry, where we lose nothing by cramming filthy socks into a machine, and retrieving them clean. Learning is laborious or there is no learning. 
And labor is impossible to motivate if we don&#8217;t see its purpose.</p><p>This circles back to the question haunting this era: <em>What are <strong>we </strong>for? </em>The tautological response is that humans are for tasks that humans want other humans to do: You don&#8217;t fancy walking into a grief counsellor&#8217;s office only to find a robot. Yet <a href="https://papers.ssrn.com/sol3/papers.cfm">algorithmic aversion</a>&#8212;our preference for humans to make key judgments, even when algorithms perform better&#8212;seems likely to fade as AI becomes clearly superior, then commonplace. Until recently, sitting in a taxi that drove through San Francisco with nobody at the wheel would have seemed like a horror-movie scene. The real-life horror may be when no human is needed to drive anything.</p><h3><strong>LEARNING IN THE AGE OF MACHINE LEARNING</strong></h3><p>So what?</p><p>Machines can do our duties, feeding and clothing and cuddling us, leaving humans to sail yachts and paint watercolours. We&#8217;d never need to study again (but could for kicks). You hear such predictions, whose only flaw is the entirety of human history, a saga shaped by our cognitive hunger and the restless drive for comparative advantage. Artificial intelligence will do plenty, but not erase what evolution wrote.</p><p>Yet evolution itself might help explain our predicament, why technology satisfies our wants while producing effects we regret. Biological evolution set our inner clock, with neuronal firings like a cognitive <a href="https://unpredictablepatterns.substack.com/p/unpredictable-patterns-111-the-great">speed-limit</a>. Yet technological evolution keeps speeding up, the clock-hands spinning <a href="https://ourworldindata.org/moores-law">faster and faster</a>, such that humans perceive only a blur now. We&#8217;re inundated with inputs at the speed of computational time, trying to keep up in biological time. 
And we&#8217;re going mad from it.</p><p>&#8220;As computational systems accelerate while biological rhythms remain stubbornly constant, humans face an insurmountable temporal divide,&#8221; Nicklas Berild Lundblad explains, setting forth his <a href="https://unpredictablepatterns.substack.com/p/unpredictable-patterns-111-the-great">Bifurcation of Time</a> theory. &#8220;We appear destined for increasing friction between silicon speed and cellular patience, with our institutions caught in the crossfire. But this conclusion misses something profound: <em>the emergence of artificial intelligence as a temporal mediator</em>.&#8221;</p><p>Already, large-language models are doing this, digesting the (humanly) indigestible immensity of data, and converting it into chatbot responses intelligible at the pace of human cognition. AI will keep evolving, becoming our extrasensory sensors and interpreters of the world while remaining fluent in human time. This points to a future role for humans, where action is not our highest calling; judgment is.</p><p>&#8220;In the judgment economy, value derives from qualities that resist technological acceleration&#8212;discernment, wisdom, creativity, and ethical reasoning,&#8221; Lundblad writes. &#8220;These capabilities aren&#8217;t necessarily improved by moving faster; often they benefit from deliberate slowness.&#8221;</p><p>Humanity can&#8217;t compete on the factory floor, so walks upstairs to management, setting the objectives, designing human-AI labour relations, evaluating the outcomes. This becomes a new <em>Why</em> for education: to develop the human discernment and ethical reasoning to govern our new powers rather than letting them govern us.</p><p>But how to teach that?</p><h3>SPECULATIVE IDEAS</h3><p>The Great Dejection&#8212;a worsening mood over human cognitive recession during the AI boom&#8212;brings a risk: passivity. 
If people see little worth in educating themselves, perhaps they stop bothering, surrendering further competence to machines, and forfeiting human decision-making, much as the AI-safety movement long feared. Therefore, education policy should target motivation, seeking to drive learning across the lifespan.</p><p>Here are three possible approaches, taking inspiration from <a href="https://selfdeterminationtheory.org/the-theory/">self-determination theory</a>, which identifies three psychological needs that motivate us: <em>autonomy </em>(feeling in control of one&#8217;s actions); <em>competence </em>(feeling capable and effective), and <em>relatedness </em>(feeling connected to others):</p><ol><li><p><strong>Choose Your Own Adventure. (Autonomy)</strong></p></li></ol><blockquote><p>Learning should be reframed as a personal asset, earned through self-directed R&amp;D. To initiate this, schooling could set &#8220;choose-your-own-adventure&#8221; hours, during which even the youngest pupils embark on personal enterprises in any area of interest, including the unacademic. The only condition would be that the student pursues an adventure&#8212;that is, adds to their knowledge and skills.</p><p>Pupils could use AI to help brainstorm the steps of the adventure plan, allowing them to stay in charge without the adult supervision that can puncture motivation. Nor should teachers mark the adventures. Rather, pupils mark the AI, assessing how well it helps them achieve their stated goals, and noting where the objectives fell short, which would grant the pupil insight into managing AI collaborations in the future. 
The adventure app should gather insights on each child&#8217;s learning strategies and efficacy too, providing feedback to help them learn how they learn.</p><p><a href="https://www.aipolicyperspectives.com/p/ai-and-behaviour-change">Behavioural AI</a> might help address student wellbeing here, employing data-analytics to detect which activities worsen a particular child&#8217;s determination and mindset, then adapting recommender systems to discourage these. That could also help resolve an enduring frustration: that years into the decline of children&#8217;s mental health and test scores, we still dispute the causes.</p><p>Students who relish autonomy might unlock more, while those who need (or prefer) closer guidance could select a more directed curriculum. As for teachers, they could upgrade from information-crammers to adventure-mentors, focusing on the meaningful part of their jobs: figuring out each learner, and inspiring them. In higher education, the &#8220;choose-your-own-adventure&#8221; approach could transition into a fully customizable degree: nothing but electives.</p></blockquote><ol start="2"><li><p><strong>Multiply Your Talents. (Competence)</strong></p></li></ol><blockquote><p>Lifelong occupations commonly spring from absurd factors such as location, wealth, and fluke&#8212;all mediated by the decision-making of adolescents whose prefrontal cortices have yet to mature. By this haphazard process, competencies (or their absence) become the boundaries of one&#8217;s life. Even when workers could expect a relatively predictable employment future, this career process was often cruel and foolish. 
Now that predictable work paths are dissolving, we may have a chance&#8212;and a need&#8212;to make credentialing more adaptive.</p><p>Micro-credentials could become <a href="https://unpredictablepatterns.substack.com/p/unpredictable-patterns-85-learning">source-agnostic</a>, available outside traditional institutions, and open to all ages, avoiding the cost and <a href="https://www.nytimes.com/2024/10/07/opinion/novelist-back-to-school-behavioral-science-identity.html">stigma</a> of returning to school after one&#8217;s typical schooling years. However, source-agnostic credentialing should take care not to undermine the institutions of higher education, which still produce inspiring learning communities, and spark interdisciplinary creativity (not to mention the plethora of non-educational benefits, such as varied social exposure and friendships).</p><p>The system should resist the tech-era <a href="https://www.theatlantic.com/magazine/archive/2025/02/american-loneliness-personality-politics/681091/">tendency</a> to worsen isolation. This could be done by requiring group work, either virtually or locally, for micro-credentialing. Also, human tutors could supplement AI education, allowing students to discuss and digest their learning with a person.</p><p>AI might help individuals choose their credits through data-analytics of public information such as job ads, economic indicators, and other correlates of future demand, permitting users to align their learning with purpose. Intelligent systems could also connect a person&#8217;s existing skill/knowledge assets with others&#8217; needs, whether professional or not.</p><p>Micro-credentials might even be assigned to learning experiences such as extended foreign travel or volunteer work, or could be awarded for one&#8217;s decades alive, such that competence is less a framed diploma from youth than a photo roll of accumulating wisdom.</p></blockquote><ol start="3"><li><p><strong>The Wisdom Exchange. 
(Relatedness)</strong></p></li></ol><blockquote><p>Every person amasses a repository of wisdom over the course of life, but much of it goes untapped. AI could help build social platforms for wisdom-sharing, linking human experts to human learners, perhaps with a serendipity option for those wanting randomized knowledge discovery.</p><p>Wisdom exchanges should be voluntary, serving both sides of the equation: instruction for one, the gratification of purposeful assistance for the other. Each exchange could include a reciprocal element, meaning the expert turns the tables on the learner at the end&#8212;say, a retired politician at first supplying insights to a young activist, then asking for a lesson on bewildering emojis. To encourage respectful interactions, the system should include reputation ratings, as with taxi apps.</p><p>Wisdom exchanges could also employ AI for learning across borders, conducting real-time language interpretation between participants. A responsive system could also present relevant context during interactions, and offer post-exchange fact-checking and takeaways.</p></blockquote><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Policy Perspectives. Lots more in the pipeline. 
Subscribe for free.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Maintaining agency and control in an age of accelerated intelligence]]></title><description><![CDATA[Climbing the ladder of abstraction]]></description><link>https://www.aipolicyperspectives.com/p/maintaining-agency-and-control-in</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/maintaining-agency-and-control-in</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Thu, 10 Apr 2025 11:49:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!p1NT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This essay is written by <a href="https://x.com/sebkrier?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor">S&#233;b Krier</a>, who works on Policy Development &amp; Strategy at Google DeepMind. Like all the pieces you read here, it is written in a personal capacity. 
We encourage readers to engage with this piece as an exploration of ideas, rather than as a presentation of firmly held beliefs.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p1NT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p1NT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png 424w, https://substackcdn.com/image/fetch/$s_!p1NT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png 848w, https://substackcdn.com/image/fetch/$s_!p1NT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png 1272w, https://substackcdn.com/image/fetch/$s_!p1NT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p1NT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png" width="1456" height="1097" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1097,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9457584,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/161010797?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!p1NT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png 424w, https://substackcdn.com/image/fetch/$s_!p1NT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png 848w, https://substackcdn.com/image/fetch/$s_!p1NT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png 1272w, https://substackcdn.com/image/fetch/$s_!p1NT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec0b4c38-61c6-4184-8a8a-026b51ab935e_2464x1856.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>S&#233;b Krier via Midjourney 7</em></figcaption></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Policy Perspectives. 
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p><strong>Hofstadter&#8217;s Law: </strong><em>"It always takes longer than you expect, even when you take into account Hofstadter's Law."</em></p><p><em>In this piece, I want to untangle several threads that often get mixed up in AGI discussions: the distinction between capabilities and deployments, the relationship between technical progress and real-world impact, and how humans maintain meaningful control as systems become increasingly sophisticated. While the technical path to AGI is important, I'm equally interested in what happens afterward - my central concern is how we preserve agency not by understanding every cog in increasingly complex systems, but by designing the right level of abstraction for human oversight.</em></p><p><strong>The gaps between capability, deployment and impact</strong></p><p>Based on current scaling trends and algorithmic progress, I think it&#8217;s <em>likely </em>that we will reach ~<a href="https://arxiv.org/abs/2311.02462">AGI</a> <em>capabilities</em> before 2030. By AGI I'm specifically referring to systems that achieve something like &#8216;Expert AGI&#8217; as defined in the Levels of AGI <a href="https://arxiv.org/pdf/2311.02462">framework</a> - that is, AI that performs at least at the top percentile of skilled adults across a wide range of non-physical cognitive and metacognitive tasks. 
While physical capabilities through robotics are advancing and crucial for full real-world impact, my focus here is primarily on the cognitive capabilities enabling reasoning, learning, and problem-solving across domains with minimal human oversight.</p><p>Crucially, however, <em>achieving </em>these capabilities is distinct from <em>deploying </em>them in ways that yield truly transformative changes. This distinction between capabilities (what a system <em>can </em>do, often demonstrated in controlled evaluation settings) and deployments (how systems are integrated into real-world, value-producing applications) is central. Deployments are much harder to model and predict; notice how most online forecasting rarely focuses on concrete products or use cases. Simulated environments help gauge capabilities, but designing (and adopting) useful agents and products is another challenge entirely.</p><p>So while capabilities may arrive relatively soon, I expect a lag before we see widespread transformative impact from them. Intuitively, it might seem that AGI deployment would be relatively <a href="https://x.com/hamandcheese/status/1907055029644648579">quick</a> - we are already using pre-AGI systems, and so we have existing infrastructure to draw from. However, this underestimates real-world frictions, such as the slow grind of <a href="https://www.aipolicyperspectives.com/p/an-agents-economy">human and corporate inefficiency</a> and the difficulty in achieving product-market fit for truly useful applications. Progress on specific, often academic benchmarks can sometimes create an illusion of proximity to broad usability.</p><p>Furthermore, even as underlying capabilities advance, progress might <em>feel </em>slow during certain periods. There are several reasons for this potential perception gap. 
As Joshua Achiam <a href="https://x.com/jachiam0/status/1857973449085563200">observes</a>, AI may improve significantly on complex specialized tasks that are irrelevant to most people, "<em>creating an illusion that progress is standing still.</em>&#8221; Additionally, the advanced capabilities of a pre-AGI model or agent might initially go unnoticed in everyday interactions, as most current chatbot queries are basic. The capabilities might exist but remain underexploited, much like the months-long effort required by engineers to fully 'extract' value from a newly trained model.</p><p>Still, some strategically significant changes might arise even with limited deployment. A few actors using these capabilities effectively could gain a considerable head start &#8211; think of the decisive strategic edge gained by codebreakers during wartime, where a capability understood and wielded by only a select few dramatically altered outcomes.</p><p><strong>From today to AGI</strong></p><p>On the R&amp;D side, training a capable model is only the first step. Ensuring it's fit for purpose demands specialised data and meticulous post-training/fine-tuning. Building a truly useful and effective agent - or a multi-agent system with tailored roles and a robust pipeline - requires significant effort and development. These exercises take time and require many iterations.</p><p>High-quality synthetic data is not easy to generate. And when you finally <em>run </em>these systems, there&#8217;s still a lot of trial-and-error involved to achieve the desired output. If the output is data, you still need to evaluate, process, and use it - which is labour-intensive. If the output consists of actions, you need to verify and check results - same thing. This requires new tools and frameworks that also need to be created, iterated upon, and perfected simultaneously. The systems and products we build on top of these models will also be complex and unwieldy. 
The more I unpack and examine how models are trained and improved, the more I understand Hofstadter&#8217;s Law.</p><p>But while training a capable model is a significant hurdle, the need for specialised data and extensive fine-tuning will likely diminish as models become more advanced. More intelligent models will also be better able to work around imperfect tools and suggest improvements, reducing the need for painstaking manual optimisation. This is the benefit of generality and computational power.</p><p>We see this across the board: larger models tend to perform better out of the box than smaller, specialised models. For example, Bloomberg trained a GPT-3.5 class model on their financial data last year, but GPT-4-8k soon <a href="https://x.com/emollick/status/1770618237782307075">outperformed</a> it on nearly all finance tasks. While people sometimes make hyperbolic claims about this, I do think that it&#8217;s true - it&#8217;s certainly been my experience. The <a href="https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf">Bitter Lesson</a> remains as true as ever.</p><p>Perhaps this is because large models effectively bundle many specialized capabilities, but unlocking or optimizing them for specific tasks often requires dedicated fine-tuning or the creation of smaller, derived models for efficiency and convenience. The learnings from this specialization process, however, frequently inform the architecture and training of subsequent, even more capable general models, continuing the cycle.</p><p>The larger the model (and the more computation used for training), the <a href="https://x.com/sebkrier/status/1867167549982404812">greater</a> the potential relative gain from its self-improvement capabilities. As systems improve and demonstrate reliability, and as AI tools increasingly assist in the validation process itself, the level of scrutiny required during their development will likely decrease. 
Over time, we can expect many, if not most, parts of the R&amp;D pipeline to become automated.</p><p>For end-users, businesses, different sectors, and consumers, the calculus is different. <em>Once </em>products offer a sufficient degree of convenience, utility, and reliability, widespread adoption will likely occur rapidly. Researchers and academics may continue to meticulously scrutinise these systems, but the majority of users will prioritise ease of use and perceived benefits over deep technical understanding - accepting AI assistance with minimal oversight, much like following an online maps route without a second thought.</p><p>However, finding the right &#8216;product fit&#8217; is difficult. This is often overlooked by those working on R&amp;D, but it is critical for ensuring high adoption and diffusion. These elements are more about engineering, operational, contextual, and commercial complexity - as well as user experience - than raw model capabilities, which scaling trends don&#8217;t typically account for. To be clear, there&#8217;s nothing inherently impossible or intractable here, but it probably adds a couple of years or more to my timelines.</p><p>We'll likely see the gradual deployment of better models and more capable systems, with improved integration within the information ecosystem every month and year. Models will continue to improve, taking on more tasks in the economy. By the time we reach true AGI, much of the groundwork for deployment will likely have already been done.</p><p>In a sense, today's environment <a href="https://www.aipolicyperspectives.com/p/an-agents-economy">lacks the mature techno-organizational ecosystem</a> required to smoothly &#8216;drop in&#8217; AGI workers at scale, in a way that can account for the tacit knowledge that underpins many workflows. 
But as AI capabilities progress, this ecosystem is likely co-evolving, meaning the pathways for effective integration &#8211; whether through internal change or external disruption like AI-first startups &#8211; may be significantly <a href="https://www.aipolicyperspectives.com/p/an-agents-economy">clearer</a> by the time ~AGI capabilities arrive. After that, the challenge will be twofold:</p><ol><li><p>Continuing to improve models and deploying these systems at scale.</p></li><li><p>Achieving repeatable benefits and continuing this cycle on loop.</p></li></ol><p>The big questions are: How much time will this take? Is the curve exponential? And what about the physical infrastructure and hardware underpinning AI systems?</p><p><strong>So what about post-AGI?</strong></p><p>A lot of the above could, over time, be automated too - but the process of automating itself requires extensive iteration. This involves essentially repeating all the steps mentioned above, but multiple times, at different layers of abstraction. Every time you ascend one part of the abstraction ladder, new tasks, actions, and options are created and must be completed for progress to continue. Both executing and subsequently automating all of this requires specialised work and is unlikely to be solved in mere days, even with the assistance of multiple pre-existing AI systems operating in parallel. Put differently, each layer of abstraction doesn't just encapsulate the complexity below it but generates new types of complexity that must be managed. And all of this assumes we have reached this level with minimal societal pushback, which I don&#8217;t expect (e.g. strikes, protectionism, legally mandated human roles, etc.).</p><p>Setting aside the sociopolitical elements for a moment, managing this emergent complexity will necessitate substantial compute resources for experimentation, training and inference, leading to a faster escalation in energy demand. 
This rising demand could eventually drive a shift in how we produce compute substrates. As Epoch AI <a href="https://epoch.ai/blog/can-ai-scaling-continue-through-2030">highlights</a>, current chip manufacturing paradigms may not scale beyond 2030 (for training) due to power and manufacturing constraints. This may necessitate a transition to quantum computing or more unconventional approaches. This alone implies a degree of slowing down, relative to what one would expect from a purely linear extrapolation.</p><p>Progress will depend on optimising <em>both </em>the algorithmic and hardware layers, potentially shifting focus from software innovations to material science and advanced manufacturing. For example, building cutting-edge zinc factories to produce better lithography machines for more efficient chips is a highly complex endeavour. Even if millions of AGI scientists assist in this effort, financing and constructing adequate and safe physical labs will still require considerable time.</p><p>The pace of hardware improvements is likely to be constrained not only by hardware limitations but, <em>maybe</em>, also by things like the transfer of tacit human knowledge and tense geopolitical dynamics, rather than algorithmic complexity alone. As we exhaust the potential for algorithmic optimisation on <em>existing </em>hardware, progress will increasingly depend on advancements in chip design and manufacturing infrastructure. This may involve building specialised factories or developing novel material processing techniques, areas where AI can help but where human expertise and tacit knowledge will likely remain essential for some time.</p><p><em>How much time do these implementation challenges add to the journey from initial AGI capabilities to transformative economic impact? 
Does ~AGI materially change things?</em></p><p><em><strong>What makes me lean towards very fast:</strong></em></p><ul><li><p>Parallel processes can stack multiplicatively with agents who work 24/7 - see also <a href="https://dariusforoux.com/prices-law/">Price&#8217;s Law</a>.</p></li><li><p>Geopolitical competition will accelerate investment, as well as massive capital mobilization and strategic concentration of compute resources.</p></li><li><p>As Daniel Kokotajlo <a href="https://ai-2027.com/">writes</a>, automation of ML R&amp;D will lead not just to faster iteration but also enable qualitative algorithmic breakthroughs that unlock step-changes in capability.</p></li><li><p>Larger models will continue to improve and outperform smaller, specialised models.</p></li><li><p>Smaller, more efficient models will continue to improve, eventually reaching the capabilities of the previous generation of large models.</p></li><li><p>We&#8217;ve only just begun exploring inference scaling, with ample room for further impressive capabilities.</p></li><li><p>There is potential for rapid, widespread automation <a href="https://epoch.ai/gradient-updates/most-ai-value-will-come-from-broad-automation-not-from-r-d">across</a> the broader economy, leveraging AI for ordinary cognitive and physical tasks.</p></li><li><p>Privacy-preserving monitoring and verification technologies could greatly facilitate governance.</p></li><li><p>Robots are becoming increasingly capable and dexterous, a trend that is likely to continue.</p></li><li><p>AGIs could contribute to energy R&amp;D, creating a positive feedback loop, with energy efficiency of compute improving simultaneously.</p></li></ul><p><em><strong>What makes me lean towards not that fast:</strong></em></p><ul><li><p>As we solve one bottleneck, new, previously unappreciated ones become salient.</p></li><li><p>"Easy for humans, hard for AI" tasks are arguably more critical for widespread automation than superhuman performance on narrow 
cognitive benchmarks.</p></li><li><p>As Ege Erdil <a href="https://epoch.ai/epoch-after-hours/disagreements-on-agi-timelines">notes</a>, there isn't a clear, rapidly improving trendline for agency or common sense that can be confidently extrapolated, and transfer learning remains tricky.</p></li><li><p>Integration/knowledge <a href="https://x.com/emollick/status/1902877533965586757">challenges</a> may multiply, and verification requirements could grow over time.</p></li><li><p>Poorly designed regulations, <a href="https://www.uschamber.com/employment-law/unions/dock-workers-could-strike-again-what-you-need-to-know">protectionism</a>, or bureaucratic inefficiencies could hinder progress.</p></li><li><p>The difficulty and cost of acquiring the right kind of data to train capabilities in areas like agency, planning, and real-world interaction.</p></li><li><p>Legacy infrastructure lock-in: even well-capitalised firms are built on pre-AGI assumptions, and rebuilding physical and organisational systems takes time regardless of capabilities.</p></li><li><p>Inherent physical limits could restrict speed and slow compute scaling.</p></li><li><p>Military and geopolitical conflicts, economic crises, weak economies, etc.</p></li><li><p>AI R&amp;D automation alone may not be <em>sufficient </em>for sudden, rapid acceleration; achieving that depends on other feedback loops and Baumol-like bottlenecks.</p></li></ul><p>Managing post-AGI economies and automated production processes will ultimately require both <em>decisions </em>and <em>time</em>. 
Wolfram <a href="https://writings.stephenwolfram.com/2023/03/will-ais-take-all-our-jobs-and-end-human-history-or-not-well-its-complicated/">argues</a> that even with highly automated AI systems, strategic choices must still be made about which paths to explore in the computational universe, noting that "<em>something&#8212;or someone&#8212;will have to make a choice of which ones to take.</em>" I think this is a crucial point, relevant to agency, control, and speed alike.</p><p><strong>Who does the work?</strong></p><p>What if the <em>'something' </em>making choices for everything is AIs with zero humans involved? In that case, humans are arguably no longer in the loop, and things could change much faster than we can process, understand, or control. This is the <em>real </em>&#8216;automation gone wild&#8217; risk - it&#8217;s not just about automating away machine learning R&amp;D, but automating <em>everything</em>. Even without any sudden technological discontinuity or overtly hostile AI actions, our economy, culture, and political institutions could gradually drift away from human influence as they become less dependent on human involvement to function.</p><p>I don&#8217;t think we will sleepwalk into this, but I do think there are adjacent risks worth considering.</p><p>If the <em>'someone' </em>making choices is humans, then our cognitive speed and limitations will serve as a ceiling, restricting the rate of progress. In this case, humans would constitute a slow-but-necessary component within an ecosystem where different types of thinking or processing occur at different speeds. This doesn&#8217;t seem sustainable in the long run; even today, we often sacrifice understanding for efficiency. For example, there&#8217;s little point in using an automated cancer detection tool if you intend to manually verify each image in parallel. At some point, the system <em>performs </em>well enough to be trusted. 
On the other hand, we still have largely decorative &#8216;drivers&#8217; for the UK tube system, even though these systems are fully and reliably automatable.</p><p>The most likely and sustainable allocation of tasks will involve <em>both humans and AIs making decisions</em>. Achieving this will require rules, verification systems, and trust-based mechanisms to maintain a stable and peaceful coexistence - not only between different ASI powers or nations, but eventually also between humans and ASIs. Some tasks we will (hopefully) be content to automate entirely: just as we were content automating bank tellers in the past or call centres today, tomorrow we may be comfortable automating processes at a higher level of abstraction, such as &#8216;public transport&#8217;.</p><p>The difficulty is designing systems that can effectively coordinate fast, post-AGI-driven processes with slower human strategic oversight. In such a world, we will need what you might call <strong>"cognitive impedance matching" </strong>- systems that can translate between AI and human timescales while maintaining stability. These intermediaries could be agents, but other form factors and interfaces may be preferable in some cases. Imagine an ASI managing a complex global supply chain in real time, making millisecond adjustments; the impedance matching system might present human overseers with curated summaries of network health, predictive alerts about potential disruptions days or weeks in advance, and simplified interfaces to approve strategic shifts (like prioritizing a specific region) without needing to track every individual shipment.</p><p>This human oversight is less like a technical checkup, which another ASI might perform, and more akin to setting the destination and preferred route on a navigation system. 
While another AI could monitor the engine's performance, the human role is to ensure the complex machinery is ultimately serving human-defined goals and navigating according to human-held values and priorities. It's about steering the 'what' and 'why', even as the ASI optimizes the 'how'.</p><p>Nicklas Lundblad explains that AI can serve as a "<a href="https://unpredictablepatterns.substack.com/p/unpredictable-patterns-111-the-great">bridge</a>" between computational time and biological time. This is important because many processes - such as justice, democracy, and relationships - lose their meaning or utility when rushed. Down the line, cognitive enhancement for humans could help. But this will be the central 'dynamic' to manage, and it&#8217;s another important facet of what &#8216;alignment&#8217; is fundamentally about. Perhaps a different term is more appropriate, but the goal remains the same: ensuring the <em>systems </em>you design do what you <em>intend</em>, safely.</p><p>Michael Levin <a href="https://youtu.be/6w5xr8BYV8M">emphasizes</a> the diversity of intelligence and embodiment, acknowledging that different agents operate on different timescales. Just as biological systems - such as the interplay between fast neuronal firing and slower muscle contractions - achieve coordination across different timescales, we should be able to design AGI systems that translate between human and post-AGI speeds, enabling meaningful collaboration.</p><p><strong>What about </strong><em><strong>understanding</strong></em><strong>?</strong></p><p>One way I could see this process going wrong is if AGI systems become ever more deeply embedded in decision-making processes across all levels, gradually shifting the human role from active direction to passive approval. Practical concerns about sentience, rights, and sophonts could further <a href="https://x.com/sebkrier/status/1868638705025740968">complicate</a> these dynamics. 
Even with well-designed oversight mechanisms, the speed and complexity of AI-driven processes might make genuine human understanding and intervention increasingly difficult. This could lead to emergent system-level behaviours that no individual component was explicitly designed to produce, resulting in a gradual erosion of human agency and understanding. As Kulveit et al. detail in their '<a href="https://gradual-disempowerment.ai/">Gradual Disempowerment</a>' paper, this erosion could occur incrementally through the progressive replacement of human labour, cognition, and participation across interconnected societal systems.</p><p>The "cognitive impedance (mis)match" discussed earlier only worsens as AI systems become more capable. Even with the best interfaces and coordination mechanisms, humans may increasingly be forced to: (1) trust the AI systems blindly because we can't verify their logic and complex reasoning in real time; (2) slow everything down to human speed, creating massive inefficiencies; or (3) remove ourselves from more and more decisions. However, whether this scenario leads to a true loss of agency hinges critically on our ability to design and implement effective high-level oversight mechanisms built upon the right abstractions.</p><p><strong>Is it inevitable, then, that we lose track of reality and choice?</strong></p><p>Maybe not. Perhaps the challenge of understanding is already accounted for by the idea of climbing the ladder of abstraction. In principle, we could continue shifting to higher levels of abstraction in our oversight - just as executives don't need to understand every line of code to run a tech company, and I don&#8217;t need to understand how a fire alarm works to install one. 
We'll focus on <em>what </em>we want the AGIs to achieve, not necessarily <em>how </em>they achieve it (though nothing, apart from time, prevents us from unpacking the why if needed).</p><p>Consider a Minister of Health, who, faced with a new flu strain, strategically prioritizes vaccinations for the elderly based on epidemiological models, without needing to grasp the intricate molecular biology of the virus or the logistical complexities of vaccine production. This is a crucial, high-level decision, reliant on abstracted information and expert advice to achieve positive public health outcomes. Similarly, as in other spheres of life, decision-makers can - and often must - make choices based on outcomes without intimate knowledge of all the underlying processes. The same applies to steering superintelligent multi-agent systems.</p><p>To illustrate the intuition behind this, imagine a similar line of thinking in a village 2,000 years ago: &#8220;<em>In the village, everyone understands how everything works - we know who makes our tools, who grows our food, how decisions are made at the village council. But in these new cities, everything is interconnected in ways no one person can fully grasp. People don't even know who bakes their bread! And with all these written contracts and money changing hands, decisions that affect us all are being made through processes we barely understand and have no time to unpack.&#8221;</em></p><p>The nuance here is that, while deep understanding is often useful, its necessity depends partly on our goals and the level of assurance required. Today, I don't know how to build a plough, because there are more effective processes (markets) that can provide me with one. However, I do need to gain experience working in policy to achieve my goals within my current environment. 
These goals will change and evolve, and so will the types of knowledge or understanding we need and want to internalise.</p><p>The key difference is that no single human may be able to understand all the building blocks; but ideally, these should be well-documented, maintained, and scrutinizable if needed - by different kinds of specialized agents. Building a new Library of Alexandria of knowledge seems like a worthwhile endeavour.</p><p>Biological systems also demonstrate that effective high-level functioning doesn't necessarily require low-level understanding. My body maintains homeostasis without my conscious mind understanding cellular biology. Similarly, I can effectively use my arms without understanding individual muscle fibre mechanics, or a computer mouse without knowing its hardware intricacies.</p><p><strong>From understanding to steering</strong></p><p>Beyond understanding, humans can also retain agency over <a href="https://arxiv.org/abs/2503.05710">governance</a> more generally. Scenarios of gradual disempowerment are compelling when entire systems are outsourced to identical copies of AI agents that don't represent anyone's direct interests. The 'human interest' feedback loop is weak in such a world. Instead, I think that every human should ideally have a personalized agent that learns and represents their evolving values and preferences. These agents, tightly linked to their human principals and acting on their behalf, would create a continuous feedback loop between individuals and large-scale automated systems, preventing system-level value drift.</p><p>So you don't necessarily need humans in the loop; you need aggregate human <em>interests </em>in the loop. Provided egregious misalignment is avoided, automated systems should be directly informed by, and in some sense exist downstream of, human preferences - preventing the gradual drift toward disempowerment one can expect with monolithic, unaccountable AI structures. 
Of course, you still face challenges like trade-offs, aggregation issues, and conflicting values. But these are not insurmountable and ultimately call for updating our democratic machinery (or building something better) to accommodate diverse human interests. The difficult part will be replacing our existing decaying institutions and navigating entrenched human interests.</p><p>And just as complexity necessitated new forms of social organization and market mechanisms, the speed and scale of ASI-driven processes will likely require automating significant parts of governance itself. Designing these automated governance systems - and ensuring they operate effectively and adaptably while remaining aligned with human values and strategic direction - becomes a crucial meta-level challenge for maintaining agency in this future.</p><p>This is not necessarily an erosion of agency, but a shift in <em>how </em>and <em>where </em>it is exercised. The real challenge isn't <em>maintaining </em>low-level understanding, but rather designing the right abstractions that capture what we truly care about and ensuring these abstractions remain responsive to evolving human values while preserving meaningful oversight as systems grow increasingly complex. It&#8217;s not easy to pre-design or pre-specify these, and I think working all this out will require significant human input for longer than is sometimes assumed.</p><p><strong>Amara&#8217;s Law:</strong><em> &#8220;We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.&#8221;</em></p><p>Overused, but useful.</p><p>&#8212;&#8212;&#8212;</p><p><em>Thanks to the following people for comments: Shane Legg, Ben Lepine, Nick Swanson, Harry Law, Pegah Maham, Zhengdong Wang, Conor Griffin, Tim Genewein, Herbie Bradley, Samuel Albanie, David Wolinsky, and Luke Drago. 
It goes without saying that they don&#8217;t necessarily endorse my ramblings.</em></p>]]></content:encoded></item><item><title><![CDATA[AI & Behaviour Change ]]></title><description><![CDATA[How should we shape how AI shapes us?]]></description><link>https://www.aipolicyperspectives.com/p/ai-and-behaviour-change</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/ai-and-behaviour-change</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Thu, 13 Mar 2025 12:49:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!n9xj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>We are glad to share an external guest post. This essay is by <a href="https://www.tomrachman.com/about-tom.html">Tom Rachman</a>, a writer who has recently pivoted into the world of AI policy, with particular attention to the ways that technology will intersect with, and barge into, culture and society. 
Like all pieces you read here, this post reflects the author&#8217;s personal views.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!n9xj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!n9xj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg 424w, https://substackcdn.com/image/fetch/$s_!n9xj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg 848w, https://substackcdn.com/image/fetch/$s_!n9xj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!n9xj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!n9xj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg" width="1456" height="1410" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1410,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9487204,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/158987557?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!n9xj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg 424w, https://substackcdn.com/image/fetch/$s_!n9xj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg 848w, https://substackcdn.com/image/fetch/$s_!n9xj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!n9xj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F095ce21d-002b-4c19-b850-34289597e6e2_6091x5897.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>&#8220;Shaping&#8221; human behaviour sounds sinister, stirring the horror of losing control over oneself, a fear almost akin to being consumed alive. From early childhood, we long to take charge of our actions, then gain that power in adulthood, marshalling it haphazardly for the decades that follow, and surrendering autonomy only if threatened by violence or debility.</p><p>But now and then, we <em>are</em> willing to toggle off full agency. Alcohol is a common tool for this. Or consider Ozempic and other GLP-1 agonists that millions of people eagerly take - drugs that mimic hormones of satiation by targeting GLP-1 receptors in the brain - shortcutting wants to deliberately modify their own behavioural responses. Those who dose themselves with Ozempic don&#8217;t feel disempowered. They frame agency as the high-level decision to alter behaviours they struggle to modify autonomously.</p><p>So the horror of losing self-control is not absolute. Rather, it is <em>unwitting</em> behavioural change that stirs anxiety. The history of new technologies is also a history of this anxiety, with each successful device blamed for seducing the masses from behavioural norms, whether it was the transistor radio, the video game, or the smartphone. Such panics are always correct: if the technology works, it will change users&#8217; behaviour. But society habituates soon enough, and the next generation smirks at the fears of the last.</p><p>Not all innovations are equally impactful, though. 
Daily, futurists prophesy that artificial intelligence will transform our world, becoming our newest <a href="https://academic.oup.com/oxrep/article/37/3/521/6374675#333037677">general-purpose technology</a>. But unlike electricity or the steam engine, AI&#8217;s &#8220;fuel&#8221; is data, the traces of what our species has written, spoken, done and recorded, iteratively refined by human feedback. Humanity feeds these tools, and is embedded within them, making the behavioural impacts more direct, more potent. If machines operate us, who&#8212;or what&#8212;is in control? &#8220;Man himself has been added to the objects of technology,&#8221; the philosopher Hans Jonas <a href="https://www.google.co.uk/books/edition/The_Imperative_of_Responsibility/sRP3uJkxydQC?hl=en">observed</a> back in 1979, remarking that this &#8220;may well portend the overpowering of man.&#8221;</p><p>The first flares of public unease over AI&#8217;s behavioural impact concerned social media, which many came to see as an algorithmic Svengali, engineering the polarization of politics, the degradation of culture, the fraying of norms. Debate persists over how blameworthy social media was, but nobody disputes that future AI systems will do far more than recommend cute videos and infuriating posts. These systems will insert themselves everywhere from the labour market to our bedrooms. We will invite them in.</p><p>Indeed, a key aim of AI is human change: to better us by expanding our cognition. Evolution granted us an exquisite system of thinking, but it has embedded limits, and we cannot update nature at the pace of tech advancement. In particular, <a href="https://arxiv.org/abs/2303.04217">scale and complexity</a> overwhelm us. 
But AI promises to become a form of <a href="https://www.nature.com/articles/s41562-024-01995-5">pre-wisdom</a>, assimilating more than we could ever sift through, incorporating <a href="https://unpredictablepatterns.substack.com/p/unpredictable-patterns-105-ais-with">sensory signals</a> beyond our capacity, offering counsel we would never have known to ask for. Stanislaw Lem foresaw this in 1964, predicting that humanity would resist at first. But intelligent machines would process data so comprehensively that &#8220;<a href="https://csclub.uwaterloo.ca/~pbarfuss/digitalocean/Summa_Technologiae_-_Zylinska,_Joanna,_Lem,_Stanislaw.pdf">intelectronics</a>,&#8221; as he called it, would prove far wiser in selecting next steps. &#8220;After several painful lessons, humanity could turn into a well-behaved child, always ready to listen to [the machine&#8217;s] good advice,&#8221; Lem wrote.</p><p>We recoil at this infantilization, which triggers that dread of losing <a href="https://arxiv.org/pdf/2305.19223">agency</a>. The challenge is this: behavioural influence <em>will</em> happen; there is no opt-out. The question is whether we respond pragmatically, attempting behavioural audits of AI systems, incorporating workable controls, and infusing design with ethics. Otherwise, our rightful dread at what might someday befall humankind becomes a truth-denying passivity that decides matters for us. The choice may be this: shape your behaviour, or have it shaped for you.</p><h1><strong>OF TWO MINDS</strong></h1><p>Behavioural science is a field of optimistic pessimism. At once, it declares the human mind a blunderer, yet insists that the human mind can amend this.</p><p>Kahneman, a self-identified <a href="https://www.theguardian.com/books/2015/jul/18/daniel-kahneman-books-interview">pessimist</a>, described many cognitive biases on which the field is based, yet remarked that he still had little power to resist them. 
The science&#8217;s optimistic face is embodied in its other Nobel laureate, Richard Thaler, who considered the same constraints on human thinking, and proposed an answer: nudge. His book of that name, written with Cass Sunstein, enjoyed a timely publication, released around the nadir of the Global Financial Crisis in 2008, when rational actors had behaved in ways that seemed irrational. Policymakers, after stabilizing the markets, cast around for fixes. Among the most appealing was this promise of a low-cost brain toolkit: the nudge.</p><p>The key concept was &#8220;choice architecture&#8221; - the idea that small contextual changes may have large effects on how people behave. So, you arrange fruit before the desserts in a cafeteria, and diners&#8212;perfectly free to bypass the apples for the cake&#8212;are more likely to act in their better dietary interests. To avert charges of manipulation, the two authors offered the ethical underpinning of &#8220;libertarian paternalism,&#8221; that a policy could justifiably funnel individuals towards beneficial actions, provided that they retained the power to opt out.</p><p>&#8220;The presumption that individual choices should be free from interference is usually based on the assumption that people do a good job of making choices, or at least that they do a far better job than third parties could do. As far as we can tell, there is little empirical support for this claim,&#8221; Thaler and Sunstein <a href="https://www.aeaweb.org/articles?id=10.1257/000282803321947001">wrote</a>. 
&#8220;People do not exhibit rational expectations, fail to make forecasts that are consistent with Bayes&#8217; rule, use heuristics that lead them to make systematic blunders, exhibit preference reversals (that is, they prefer A to B and B to A) and make different choices depending on the wording of the problem.&#8221;</p><p>Over the years, researchers cited <a href="https://www.visualcapitalist.com/wp-content/uploads/2021/08/all-188-cognitive-biases.html">a plethora of cognitive biases</a>, from anchoring effects, to intertemporal inconsistency, to present bias. Behavioural interventions included changing defaults (for instance, automatically enrolling drivers in an undersubscribed organ-donation plan); or adding commitment devices (compulsive gamblers signing up to self-exclusion lists at the casino); or encouraging social accountability (advising people to go to the gym with a friend to increase visits). But the field&#8217;s optimism was not always rewarded. A <a href="https://www.pnas.org/doi/10.1073/pnas.2107346118">meta-analysis</a> of 200 studies (n=2,148,439) showed that &#8220;choice architecture&#8221; had small to medium effect sizes, with consistently smaller impact from interventions that demanded people commit to change, or relied on them absorbing new information. The largest effects seemed to come when participants didn&#8217;t exert themselves, but merely had their actions tweaked through changed defaults. Another <a href="https://www.nber.org/papers/w27594">study</a> looked at governmental &#8220;nudge units&#8221; that undertook behavioural interventions, comparing their outcomes to those cited in academic papers. In journals, the average impact of a nudge seemed impressive: a 33.5% increase over the control. In real-world interventions, results were far weaker: an 8.1% increase. 
The main culprit seemed to be publication bias, with journals tending to print only studies that claimed significant effects, while interventions that did little vanished from the academic record.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>A further challenge is that populations are heterogeneous. So graphic warnings on high-calorie drinks might <a href="https://www.nber.org/system/files/working_papers/w30740/w30740.pdf">discourage</a> consumers who already had good self-control, but do little for those struggling to curb their appetites. Indeed, many just develop an aversion to graphic warnings, not to the drinks. That suggests that behavioural interventions might work better if personalized to individual preferences. But the value proposition of nudging for policy was that it promised benefits at scale.</p><p>As behavioural science grappled with these disappointments (or tried to ignore them), large-scale behavioural change <em>was</em> taking place all around. The cause was technology. Academics struggled to harness these possibilities for science, a few employing apps to gather individualized data, or testing wearable sensors, or just-in-time adaptive interventions (JITAIs) that prompted participants with personalized reminders and recommendations. Meantime, behavioural consultants worked with tech companies, and certain insights seeped into products. Behavioural designs sought stickiness, to give customers something to return for, and to stay with: just what they wanted.</p><p>Yet the proxies for human desire&#8212;for instance, click-throughs and engagement time&#8212;aligned with <em>short-term</em> wants. Long-term human objectives had few metrics, and this had a peculiar effect: many people found themselves doing what <a href="https://www.nber.org/system/files/working_papers/w31771/w31771.pdf">they wished not to do</a>. 
Terms like <a href="https://corp.oup.com/news/brain-rot-named-oxford-word-of-the-year-2024/">&#8220;doomscrolling&#8221; and &#8220;brain rot&#8221;</a> emerged. That primal dread of humans losing self-control surfaced. Public intellectuals fretted about whether <a href="https://www.youtube.com/watch?v=aYzFH8xqhns">free will</a> even exists, while notable behavioural scientists&#8212;admitting the limits of their past interventions&#8212;<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4046264">questioned</a> whether their focus on the &#8220;i-frame&#8221; (getting individuals to change) had deflected attention from the &#8220;s-frame&#8221; (how systems change behaviour).</p><p>Part of the problem is that our species has built-in systems of its own, inscribed by evolutionary pressures over millions of years, and resistant to alteration. Kahneman explained this with his model of dual cognitive tracks, System 1 thinking (fast and effortless) and System 2 (slow and reflective). Sometimes, the brain errs by employing System 1 shortcuts when System 2 could more effectively reason through matters.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> These two systems&#8212;rather than exposing humankind as a race of dunces&#8212;are effective in most cases; otherwise, natural selection would not have left us with them. 
Nevertheless, human cognition does have vulnerabilities, and algorithmic intelligence homed in on many, as when feeding our hunger for effortless pleasures now that subvert our deeper longings later.</p><p>Among the outcomes was what the psychiatrist Anna Lembke calls &#8220;<a href="https://www.nytimes.com/2025/02/01/magazine/anna-lembke-interview.html">the plenty paradox</a>,&#8221; that the abundance of contemporary life floods our evolved reward pathways in ways that stress us, perhaps explaining <a href="https://mecp.springeropen.com/articles/10.1186/s43045-023-00315-3">the rise in depression and anxiety</a>, which is most acute in wealthy countries that also have the greatest access to the people-pleasing offerings of new tech. &#8220;One would hope and think that we&#8217;d be engaging in deep philosophical discussions, helping each other, cleaning up the garbage,&#8221; Lembke says, of our era of plenty. &#8220;But instead what we&#8217;re doing is spending a whole lot of time masturbating, shopping, and watching other people do things online.&#8221;</p><h1><strong>SQUINTING INTO THAT FOGGY FUTURE</strong></h1><p>You wake with a jolt, your mouth parched, your T-shirt damp with sweat. Somehow, you&#8217;re queasy but famished too. Mostly, you&#8217;re annoyed. Not just for the excesses of last night, but for how you&#8217;ve acted lately: self-centred, distracted, unproductive. You rouse your AI ecosystem, and open its settings, selecting &#8220;My Future,&#8221; where the blizzard of daily life is simplified into sliders, using probabilistic analyses rooted in the heaped datapoints of humankind, cross-referenced to your personal behavioural history, modified by your current physiological inputs, and enriched with environmental sensors presenting perceptual insights beyond the capacity of any living being. 
You adjust the &#8220;Objectives&#8221; slider, shifting your behavioural preference from &#8220;Short-term pleasures&#8221; towards &#8220;Long-term goals,&#8221; and set the timeframe as &#8220;This Week.&#8221; Immediately, the system reconfigures your AR overlay of information streams and notifications, tweaking your mood-responsive assistant, and altering the haptic-nudge schedule on your wearables, all with the intent of curtailing the self-defeating habits you struggle to repress while promoting those behaviours you long to manifest.</p><p>Would this curb your autonomy? Or enhance it?</p><p>Humanity&#8217;s dilemma between short-term wants and long-term objectives is a conflict that has provided drama enough for most of the novels, plays and ballads ever written. It&#8217;s an inner contest that also infuses today&#8217;s primitive recommender systems, where algorithms overwhelmingly feed us what the philosopher Harry Frankfurt called first-order desires (<em>I like watching video clips</em>) at the expense of second-order desires (<em>I wish I just wanted to read a book</em>).</p><p>One way of understanding this is as a time-horizon mismatch between desire now and contentment later. Most people agree that a good life includes both fleeting pleasures and the accumulation of achievements. We simply disagree on the proportions. Conceivably, a personalized AI ecosystem could allow users to express their preferred balance, altering behavioural inputs accordingly. Users might also pursue betterment of specific character traits&#8212;say, moving a &#8220;Resilience&#8221; slider to challenge existing views and action patterns, much as one programs higher resistance on a treadmill to build endurance. Chatbot responses could incorporate <a href="https://unpredictablepatterns.substack.com/p/unpredictable-patterns-101-make-2025">random variation</a> too, emulating the stimulating unpredictability of other people, rather than making us captive to confirmatory feedback loops. 
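In engineering terms, such an &#8220;Objectives&#8221; slider could be a simple re-weighting of a recommender&#8217;s ranking function. A minimal sketch in Python, assuming hypothetical per-item &#8220;short-term&#8221; and &#8220;long-term&#8221; scores (all names and numbers here are invented for illustration):

```python
# Hypothetical sketch of an "Objectives" slider: 0.0 weights recommendations
# entirely towards short-term pleasures, 1.0 entirely towards long-term goals.
# Item names and scores are invented for illustration.

def rank(items, slider):
    """Order items by a blend of their short-term and long-term scores."""
    def blended(item):
        return (1 - slider) * item["short_term"] + slider * item["long_term"]
    return sorted(items, key=blended, reverse=True)

catalog = [
    {"name": "viral clip",      "short_term": 0.9, "long_term": 0.1},
    {"name": "language lesson", "short_term": 0.3, "long_term": 0.8},
    {"name": "news digest",     "short_term": 0.5, "long_term": 0.5},
]

# Slider pushed towards "Long-term goals": the lesson outranks the clip.
print([item["name"] for item in rank(catalog, slider=0.8)])
# -> ['language lesson', 'news digest', 'viral clip']
```

The point of the sketch is that the same catalogue yields different orderings as the user moves the slider; with `slider=0.0` the viral clip would rank first instead.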
Otherwise, <a href="https://unherd.com/2025/01/chatbots-are-not-your-friends/">AI companions</a>&#8212;already in deep relationships with humans that include sex and &#8220;marriage&#8221;&#8212;could become behavioural anchors: constant companions for your whole life, holding you to your 20-year-old character when you&#8217;d otherwise have evolved into a 50-year-old.</p><p>To avert behavioural stagnation, AI companions might be made mortal, but that could inflict terrible suffering on their human partners. A less-brutal approach would be programmed forgetfulness, so that AI systems remove facets of your past from their data, much as people superimpose your current character upon past versions, which become fainter as years pass. Likewise, AI companions could alter behaviourally over time, perhaps even growing apart from you. As most humans have a strong drive to hoard artifacts of their past, AI systems might store their comprehensive memories of you apart from the working behavioural datapoints, so that their agents and recommender systems operate according to your current self, while an archive sits in abeyance, like a nostalgia vault you may access if ever longing to revisit who you were. You might even choose to venture back among those old settings, experiencing the behavioural filters you once so ardently sought, presenting a parallax view of your life&#8217;s course, from then to now.</p><p>At set intervals, your AI ecosystem could present a review of your behavioural trends, and solicit your updated preferences, perhaps as part of New Year&#8217;s Resolutions. On each such occasion, the system could offer advice based on the latest empirical findings on human wellbeing. 
Today&#8217;s evidence on wellbeing still stirs <a href="https://www.afterbabel.com/p/a-debate-on-the-strengths-limitations">debate</a>, with mixed answers to questions like whether money can buy <a href="https://www.forbes.com/sites/johnjennings/2024/02/12/money-buys-happiness-after-all/">happiness</a>, and whether current wellbeing measures are reliable enough to make <a href="https://www.nber.org/system/files/working_papers/w28438/w28438.pdf#page=46.19">policy</a>. Influential studies may have queried just a few hundred people in an artificial environment. AI behavioural studies could involve sample sizes in the billions, amassing the most granular set of behavioural insights ever attempted&#8212;computing ever-larger datasets based on our (anonymized) responses and wants. This could produce a sturdier science of &#8220;choice architecture&#8221;: the conditions that amplify or stifle effects, how to design and avoid feedback loops, and how individual characteristics moderate species-level tendencies. Such findings could amount to our greatest clarity yet on &#8220;the good life,&#8221; and whether that is even what people care for, given how commonly their actions subvert their stated wishes. We might learn too whether behavioural interventions really change much, or if each of us is destined to hover around a baseline nature, hardly corrigible no matter how many self-help books we pile on the bedstand, or how many &#8220;My Future&#8221; adjustments we make to our AI-ecosystem settings.</p><p>Troubling outcomes are possible too. AI might discern behavioural correlates within datasets, and engineer unintended changes in human culture. Imagine if datapoints revealed that listening to certain music was associated with higher rates of depression, causing well-intended recommender systems to steer everyone away from ever hearing such songs, becoming the gatekeepers of human art. Or what if AI ecosystems sensibly diverted people from toxic relationships? 
Anyone would be relieved to dodge a disastrous marriage, but what if the machines judged <em>you</em> toxic, and people you knew started treating you like a pariah? Those who failed to boost others&#8217; metrics could become data outcasts, banished from good society by AI. Arguably, this could disincentivize awful behaviour, pressuring the nasty to amend their ways. Or maybe it just punishes the eccentric.</p><p>Data protections would be imperative, ensuring that nobody is behaviourally hacked, whether by a bad actor, or by an institution seeking to mobilize the citizenry to its preferred ends. However, people might accept external influences on their behavioural AI&#8212;say, allowing a spouse to adjust one&#8217;s settings in exchange for avoiding divorce. Or a healthcare provider might offer discounts to clients who alter their AI settings to optimize for determinants of lower medical costs, such as exercise, socializing, and healthy diets. Parole boards could make release conditional on AI behavioural diversions, and governments could incentivize pro-social behaviour by offering tax credits to those who optimize for charitable activities.</p><p>But any external influence tagged to one&#8217;s behavioural data is fraught with dangers, easily degenerating into a social-credit system that coerces conformity and further penalizes those who already struggle with immiserating behavioural tendencies. A further risk is that incentives for &#8220;good behaviour&#8221; undermine intrinsic motivation, much as a child who is paid each time she acts politely to an elderly person might develop a distorted practice of courtesy, withholding kindness from granny until someone hands over cash.</p><h1><strong>THE TAKEAWAY</strong></h1><p>AI ecosystems will influence human behaviour. 
The question is whether we can reconcile three outcomes that seem at odds: 1) motivating people to act in the present for their desired futures; 2) retaining human agency; 3) persuading AI developers to pursue these ends.</p><p>It&#8217;s facile to profess that companies must prioritize public wellbeing. Developers need realistic incentives to pursue beneficent ends. That could mean corporations changing income streams so that behavioural AI is not primarily funded by advertising based on short-termist metrics, but accrues revenue from users&#8217; progress towards stated goals, perhaps with payments made accordingly. People may be reluctant to pay $20 per month for a chatbot subscription, but most would shell out far more if the money were demonstrably linked to their career success, or a better body image, or more fulfilling social lives. Think how much money people already spend on gym memberships and self-improvement courses, often to limited effect.</p><p>Regulatory policy might adapt too, so that tech developers whose products had a demonstrable drag on measures of health and productivity might face curbs or fiscal penalties, while those labs whose tools benefit public welfare might gain access to government contracts, R&amp;D funding, or tax discounts proportional to the improvements.</p><p>At each stage of this ongoing AI transformation, system design should attend with utmost care to behavioural effects, applying empirical evidence to responsible development, while informing policymakers, and alerting the public. This means moving beyond the existential-risk debates to establish a human-risk research agenda, including pre-release behavioural audits of significant AI applications, post-release monitoring of how humans actually interact with them, and longitudinal studies to judge their diffusion through society, considering everything from polarization to loneliness to wellbeing.</p><p>We did none of this with social media. 
As a result, we still <a href="https://www.afterbabel.com/p/why-some-researchers-think-im-wrong">dispute</a> what it did to humans, whether it is the cause of contemporary strife, or if we are just blaming machines for our own ills. What we must avoid is leaving AI to set the proxy for &#8220;the good life.&#8221; Otherwise we risk perpetuating the dismal compromise: getting what we want, to ends we do not want at all.</p><p>Done properly, we may create a positive feedback loop, where AI does not simply crash into humanity. It helps explain us.</p><h1><strong>APPENDIX: SEVEN SPECULATIVE SETTINGS</strong></h1><p>Could AI improve your behaviour? Or will it supplant your autonomy? In part, this may be a design problem. Behavioural AI should favour your best self, allow for personal evolution, and incorporate human agency. Here are seven ways:</p><p><strong>1. Know-Your-Human Requirements</strong></p><ul><li><p><strong>Onboarding</strong>: Before activation, any personalized AI system with meaningful sway over behaviour could conduct a short and transparent conversation with its user, establishing the human&#8217;s short-term habits (e.g., &#8220;I&#8217;m addicted to tea&#8221; or &#8220;I go to bed too late&#8221;) and long-term aspirations (e.g., &#8220;I dream of moving countries,&#8221; or &#8220;I wish I were more sociable&#8221;).</p></li><li><p><strong>No box-ticking</strong>: The process must never become akin to Terms &amp; Conditions; it should feel like a first chat with a new therapist, establishing priorities, career goals, health issues, what&#8217;s missing from one&#8217;s life. The results must be strictly encrypted, and accessible only to the human in question.</p></li></ul><p><strong>2. 
Staying Aligned</strong></p><ul><li><p><strong>Changing Your Goalposts</strong>: The Know-Your-Human survey establishes the opening defaults of the AI ecosystem&#8217;s weighted recommendations and nudges&#8212;say, a behavioural balance of 70% long-term objectives to 30% short-term pleasures (among many other personalized settings). These defaults must remain accessible, intelligible, and simple to adjust.</p></li><li><p><strong>How&#8217;s It Going?</strong>: Annually, the system checks back with the user, offering an engaging review of the intervening period, akin to Spotify Wrapped but for your behaviour, while ensuring that recommendation weights and intervention goals still align with the user&#8217;s wishes. Users could toggle on an optional &#8220;All Good?&#8221; oversight mechanism, which would detect radical fluxes in behavioural data that might suggest an alteration in goal-profile and perhaps acute distress, triggering a check-in.</p></li></ul><p><strong>3. Human in Charge</strong></p><ul><li><p><strong>Under My Thumb</strong>: Any behavioural influence&#8212;from recommender-system weights to goal-driven nudges&#8212;must have an available explanation. This could mean a sidebar reasoning card, or a hover-over rationale, or a voice-interaction mode that could take user questions.</p></li><li><p><strong>Sliders Instead of Black Boxes</strong>: Besides the pleasures/goals balance, users may also adjust narrower preferences, moving behavioural influences between poles such as &#8220;entertainment&#8221; vs. &#8220;education&#8221;; &#8220;novelty&#8221; vs. &#8220;familiarity&#8221;; or &#8220;fresh learning&#8221; vs. &#8220;knowledge review.&#8221; Any settings change with significant behavioural impact should include a forecast of possible outcomes.</p></li></ul><p><strong>4. 
Sliding Doors</strong></p><ul><li><p><strong>Differing Paths</strong>: Whenever prompting behaviour, the AI system should know other approaches, and make these available at the user&#8217;s request, including simple explanations of how each alternative might differ in effect and underlying assumptions, much as map apps will display alternative routes and forms of transport, with differing travel times.</p></li></ul><p><strong>5. Shuffle Mode</strong></p><ul><li><p><strong>Randomized Serendipity</strong>: To encourage exploration and break feedback loops, recommendations should occasionally contravene past behavioural patterns. The user sets the desired proportion of randomization, but also has a &#8220;Surprise Me&#8221; option alongside standard recommendations.</p></li><li><p><strong>Default Updates:</strong> After the user interacts with a &#8220;Surprise Me&#8221; input, the system may ask whether this challenge felt worthwhile. Such queries should not be excessive, and may be toggled off. But if beneficial, the feedback could inform future system defaults.</p></li></ul><p><strong>6. Wisdom of Crowds</strong></p><ul><li><p><strong>Community Evidence</strong>: Recommender systems should favour inputs that other people with similar goals (and success in achieving them) have engaged with, rather than just circulating what is most popular.</p></li><li><p><strong>Smart Cues</strong>: The system should provide quality signals, tagging options according to reputational ratings and empirical goal-efficacy. The current practice with recommender systems&#8212;either struggling to define suitable content or eliminating curation altogether&#8212;has precipitated political contests that may disregard users&#8217; goals.</p></li></ul><p><strong>7. Repel the Intruders</strong></p><ul><li><p><strong>An Off-Switch for Context-Switching:</strong> The system default is to focus the user&#8217;s concentration on their current desired activity, minimizing distractions. 
This could include adjusting the user&#8217;s environment to encourage flow, and making judgments on whether to mute specific notifications, based on the user&#8217;s current activity and its pertinence to their goals.</p></li><li><p><strong>Angel on Your Shoulder: </strong>If users are engaged in behaviour they have expressed a wish to avert, the system could offer gentle interventions, perhaps speaking a reminder aloud in the AI-generated voice of a friend or trusted influence (who gave consent), or even in the voice of the user: &#8220;Hey! Sorry to interrupt, but just a reminder that you wanted an early night? Shall I turn off when you finish this clip?&#8221;</p></li></ul><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Michael Hallsworth, chief behavioural scientist at the pioneering nudge unit BIT, wrote a nuanced <a href="https://behavioralscientist.org/making-sense-of-the-do-nudges-work-debate/">commentary</a> on the debate over such interventions. 
To claim that &#8220;they work&#8221; or &#8220;they don&#8217;t work&#8221; is a simplification, he argued, noting that interventions vary widely by context and target group, meaning that the effects will vary too.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Some theorists have presented AI as &#8220;<a href="https://www.nature.com/articles/s41562-024-01995-5">System 0</a>&#8221;, able to undertake the heavy data-processing that we cannot manage.</p></div></div>]]></content:encoded></item><item><title><![CDATA[An agents economy]]></title><description><![CDATA[How quickly might we integrate increasingly powerful AI agents into the workforce?]]></description><link>https://www.aipolicyperspectives.com/p/an-agents-economy</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/an-agents-economy</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Tue, 04 Feb 2025 13:36:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!X__Z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This essay is written by Seb Krier, who works on the Public Policy Team at Google DeepMind. Like all the pieces you read here, it is written in a personal capacity. 
The goal of this essay is to explore the potential long-term integration of AI agents into the workforce, examining the challenges and changes organizations will face as AI agents become increasingly capable and potentially replace human employees.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!X__Z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!X__Z!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!X__Z!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!X__Z!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!X__Z!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!X__Z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png" width="1456" height="1456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9320366,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!X__Z!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!X__Z!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!X__Z!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!X__Z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F320ff7a7-b0a0-4f8a-a4cd-debaff38f4bd_2048x2048.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Austin Vernon wrote a great <a href="https://www.austinvernon.site/blog/aimanagement.html">piece</a> on the integration of AI agents into the workforce. I agree with much of his perspective, but want to imagine the likely changes and challenges over a longer timeframe. For this essay, I&#8217;m not considering AI safety implications, and I&#8217;m assuming agents are broadly directable/aligned (like language models we use today) and roughly as capable as humans - though with some limits at first. This piece is exploratory, looking at plausible dynamics rather than making hard predictions; it&#8217;s very possible that in a year or two I will have updated my views significantly. A key question is how factors like the persistence of human value in certain contexts, regulatory responses, and social dynamics will shape the path toward greater automation.</p><p><strong>The challenge of building and deploying a non-generic, practical, and useful AI agent</strong></p><p>With regard to Vernon&#8217;s piece, I agree that wikis and similar tools&#8212;which document and explain the nature of work and job functions&#8212;will be key to enabling AI agents to be productive in the workforce. But certain factors beyond cost and coordination constrain the extent to which we can fully codify essential knowhow.</p><p>The first challenge is that codifying knowledge isn&#8217;t easy. In middle management roles, for example, what makes someone good isn&#8217;t just the <em>knowledge </em>they hold. 
Trainee lawyers learn this early: knowing the law is expected, but success often hinges on social practices, taste, judgement, proactivity, billable hours, modelling other actors, and managing conflicting information.</p><p>Similarly, employees in most commercial organisations hold critical &#8220;grey&#8221; or institutional knowledge. This includes insights gained from informal sources, like podcasts, or a nuanced understanding of workplace politics. For instance, knowing how to navigate internal politics at work or recognizing which tasks are worth prioritizing under shifting external circumstances (e.g. political changes) is rarely written down. You won&#8217;t find an internal wiki entry that says &#8220;Avoid asking this person about X because they&#8217;re biased against it&#8221; or &#8220;The new minister hates automated vehicles, so highlight healthcare topics at the next event.&#8221;<em> In human-agent workflows, this kind of contextual knowledge and knowhow gives humans a certain advantage, and presents a significant challenge to overcome before organizations can transition to agent-only companies.</em></p><p>More cynically, employees may actively withhold institutional knowledge as a form of job security. This challenge is solvable - agents could infer and learn quirks over time if management provides enough access to contextual data. For instance, an agent might eventually learn, "This is the quirk to remember when submitting a finance request." However, this process will be slow and uneven, especially for roles requiring physical or social interactions. 
While online customer support jobs may adapt quickly, more complex roles will take longer to automate effectively.</p><p>Michel Berry, in <em><a href="https://hal.science/hal-00263141/document">Une technologie invisible</a></em>, highlights another issue: many organizational instruments or routines act as &#8220;invisible technologies&#8221; - structural mechanisms shaping day-to-day decisions beyond explicit policy. If AI agents replicate and reinforce these routines uncritically, they risk embedding outdated principles long after their original purpose is forgotten. This underscores the importance of revising these norms and processes alongside the deployment of agents. In other words, organizations risk inadvertently locking in past inefficiencies, even as agents upgrade capabilities.</p><p><strong>The need for better organisational and technological infrastructure</strong></p><p>In principle, all of this seems feasible for agents from a capabilities perspective; the real challenge lies in ensuring a level playing field on which to compare them against humans. As Austin notes, agents need context - a pipeline to infer, store, and retrieve the relevant information at the right time. You can&#8217;t simply &#8216;plug and play&#8217; an agent into a role and expect it to figure everything out; significant organisational changes are required to enable the use of these pipelines and agents effectively. At a minimum, this involves gradually replacing legacy IT systems and infrastructure, a process that, as most CTOs will attest, is both lengthy and tedious. In some cases, it may also require restructuring teams or reducing staff to address principal-agent problems and streamline organisational structures.</p><p>I anticipate that agents will not only &#8216;augment&#8217; employees but also observe and learn from them. For an agent to be truly useful and personalised, it must understand the employee&#8217;s work, goals, and style in detail. 
Agents may even crystallize insights and biases that employees themselves overlooked, enabling agents to perform better over time. Much like a new hire learning internal dynamics, these agents would grow in capability - but with the advantage that their insights could be shared instantly across all other agents. An employee&#8217;s mistake and subsequent correction could improve the entire network, not just the individual agent. Eventually, most agents' queries to humans might focus on preference (e.g., &#8220;Which colour?&#8221;) or on information they lack the ability or authority to access (e.g., &#8220;What did the judge say at the trial?&#8221;).</p><p>However, this introduces complications, particularly around privacy and data sharing. What kind of data can an employee&#8217;s agent communicate to others? Should agents infer insights from personal chat logs? These questions will create thorny disputes that many companies may prefer to avoid. Instead, we may see a shift toward dramatic functional outsourcing, where legacy systems are discarded in favor of contracts with newer AI providers that deliver better performance at a lower cost. As I&#8217;ll explore later, startups and organisations (including in the world of <a href="https://inferencemagazine.substack.com/i/155018281/academia-is-poorly-configured-to-adopt-ai">research</a>) are often better equipped to start fresh with optimal setups, allowing them to take on tasks for larger, more rigid organisations.</p><p>Another challenge lies in how decision-making and task automation might reshape organisational culture. AI systems may optimize for specific metrics, neglecting complex, ground-level realities. While competitive pressures between firms may address some of these issues over time, Berry&#8217;s work reminds us that entrenched management instruments can persist even in competitive environments.</p><p>This transformation will happen gradually. 
As AI-human hybrid organizations evolve, new forms of tacit knowledge will emerge - focused on effectively prompting, directing, and coordinating AI systems. While this will create a temporary need for human expertise (the &#8220;prompt engineers&#8221; of the future), the growing capabilities and utility of agents might ultimately reduce the demand for human workers.</p><p><strong>What happens when human quirks and tacit knowledge are accounted for?</strong></p><p>At this point, organisations could deploy agents that are quasi-substitutable for human employees, providing almost equivalent value. This might happen either because an organisation has reinvented itself to gradually shape agents capable of understanding and absorbing tacit knowledge as effectively as humans, or because the task has been outsourced to an &#8216;AI-first&#8217; start-up unencumbered by legacy constraints.</p><p>But why is this institutional/tacit knowledge required in the first place? In many cases, its importance stems from inefficiencies in human-dominated systems. For example, understanding a colleague&#8217;s subtle preferences or navigating office politics becomes necessary because humans can be irrational, misaligned with organisational goals, and/or biased. If AI agents were to replace these human colleagues, much of this institutional knowledge would become irrelevant. You wouldn't need to account for Sally from Finance's particular communication preferences or office politics - AI agents would interact rationally and efficiently with each other.<em> While AI agents might initially augment human employees by learning their know-how, this expertise in managing human quirks would diminish in relevance as more workplace interactions shift to being agent-to-agent.</em></p><p><em>We</em> are not rational agents - but agents can be designed to be. 
Over time, as agents take on more tasks and interact primarily with each other, the know-how derived from human quirks will lose its value. Organisations will adapt to these changes by restructuring to better align with agents&#8217; needs. Rather than accommodating Sally from Finance&#8217;s arbitrary preferences, it will become more economical to replace the role entirely with an agent.</p><p>However, as Aghion, Jones, and Jones <a href="https://www.nber.org/system/files/working_papers/w23928/w23928.pdf">suggest</a>, growth in organisations and the wider economy may be constrained not by what tasks can be automated, but by tasks that remain resistant to improvement - a phenomenon akin to Baumol&#8217;s &#8216;cost disease.&#8217; This is reminiscent of how, <a href="https://acjsissons.medium.com/the-baumol-effect-not-a-disease-but-a-cure-c63d0ae0b481#:~:text=One%20of%20the%20most%20important,where%20productivity%20does%20not%20grow">historically</a>, even as manufacturing productivity soared, sectors reliant on interpersonal dynamics or nuanced judgment lagged behind, driving up costs and complicating efficiency gains. A key <a href="https://x.com/sebkrier/status/1883521333448913041">question</a> is whether AI agents can handle roles requiring rich interpersonal or social judgment, as well as those with physical demands. Even seemingly straightforward tasks can be surprisingly difficult to automate (e.g. planning a party), whether because no one codified the obvious (e.g., retrieving a physical file) or because the job relies on informal &#8220;corner-cutting&#8221; that keeps large organisations running smoothly. Employees often perform small favours, trade concessions, or bend procedures to avoid deadlocks, relying on trust or rapport. An agent might follow protocol rigidly, running into red tape where a human would find a workaround. 
Similarly, senior-level tasks such as forging alliances or interpreting political signals involve trust dynamics that agents may struggle to navigate. These &#8220;last 5%&#8221; edge cases could delay or complicate the transition to agents unless organisations redesign processes or equip agents with ways to handle the flexible, often invisible rules of human collaboration.</p><p>In addition, stubborn &#8216;Baumol-like&#8217; constraints may arise from physical tasks and roles reliant on interpersonal relationships, where robotics lags behind cognitive automation. Though these frictions may slow adoption in some areas, it&#8217;s unclear whether they will be strong enough to derail the shift toward AI-first processes. Over time, as the cost-performance ratio of robotics improves, AI-driven advancements may also apply to physical systems. Once robotic platforms become as flexible as AI software, the residual Baumol effect on labour-based tasks could diminish (for example, physical interactions will be less of a constraint). Just as agent-first startups have leapfrogged legacy enterprises in software automation, the next wave of agile robotics could disrupt entire industries currently shielded by physical complexity.</p><p><strong>Why restructurings and outsourcing will be necessary</strong></p><p>Restructurings and outsourcing will be necessary for two reasons. First, restructuring is required to integrate agents into company workflows seamlessly, ensuring they gradually learn and understand the grey institutional knowledge necessary for effective collaboration. Second, as agents take on more tasks and responsibilities, organisations will need to adapt workflows and processes to maximise the productivity efficiencies these agents can offer&#8212;efficiencies often limited by the quirks, biases, and inefficiencies of remaining human workers.</p><p>These efficiencies are significant, not arbitrary. 
One often-overlooked aspect of modern organisations and bureaucracies is the extent to which middle management can be &#8216;misaligned&#8217; with the company&#8217;s interests. Managers are sometimes incentivised to make decisions that <em>appear better </em>- safer or more appealing - to those evaluating their performance, even if those decisions aren&#8217;t in the organisation&#8217;s best interest. "<em>I'm more likely to get a promotion if I do X, even if X isn't really needed or as impactful as Y</em>". Similarly, employees may disagree with the organisation&#8217;s broader goals, creating further friction. As organisations scale, it becomes increasingly difficult for managers to oversee employees and for directors to oversee managers. Highly effective agents, capable of performing many tasks in parallel, offer an opportunity to simplify internal hierarchies and address these principal-agent problems.</p><p>If agents merely augmented employees - following their instructions without question - misalignments would persist. For example, the agent might obediently hire PwC to produce an unnecessary and costly PowerPoint presentation, or turn a blind eye as an employee fakes a sick day or pretends to be busy. This misalignment can be addressed in two ways. The first is aligning the human employees more closely with organisational goals, which would likely require extensive surveillance - a method that would be unpopular, demotivating, and widely resisted. The second is to reduce human involvement altogether, making the agents accountable directly to the director.</p><p><em>But then how useful is the intermediary human really?</em> Under the model proposed above, the director delegates tasks directly to the agents, bypassing the need for human intermediaries. If taste, curation, and tacit knowledge are no longer where humans outperform agents, the rationale for keeping these intermediary roles diminishes. 
Delegating directly to agents (who benefit from sufficient context and capabilities) creates a cleaner, more efficient chain of command and reduces the risk of misalignment. I expect these restructurings will gradually decrease the involvement of human employees over time.</p><p><strong>Some important caveats</strong></p><p>Throughout this transformation, wider dynamics will likely slow these changes. Many people - particularly in high-skill, academic, and white-collar jobs - derive meaning from work and possess the power to delay or block changes through strikes, unionisation, or negative publicity. Governments may also require reasonable human oversight in certain contexts, such as healthcare, justice, and critical infrastructure, for liability and safety reasons. In many services, including education and hospitality, the value of human interaction itself cannot be overlooked. These frictional roadblocks and institutional inertia could extend the human-to-agent transition from &#8220;a few years&#8221; to &#8220;a decade or more,&#8221; depending on the industry. Economists have long noted that large firms often resist adopting efficiency-enhancing measures that disrupt entrenched interests or managerial structures.</p><p>However, these factors alone may not be sufficient to halt the wider competitive forces driving automation. First, rising human labour costs increase the incentive to automate, as observed in France following the 2000 introduction of the 35-hour workweek. Second, these technologies will deliver <a href="https://inferencemagazine.substack.com/p/how-much-economic-growth-from-ai?utm_source=post-banner&amp;utm_medium=web&amp;utm_campaign=posts-open-in-app&amp;triedRedirect=true">impressive</a> <a href="https://scholar.harvard.edu/files/aghion/files/what_are_the_labor_and_product_market_effects_of_automation_jan2020.pdf">economic</a> and other benefits that will be hard to ignore. 
Third, even maintaining some degree of human oversight doesn&#8217;t fully counteract these dynamics; AI allows fewer humans to accomplish far more. Put differently, organisations can still massively downsize their workforces despite HITL (human-in-the-loop) requirements.</p><p>Another critical consideration is the democratisation of coding and building, which will empower individuals and start-ups to adopt agents rapidly. Large restructurings within corporate monoliths are often met with significant resistance. By contrast, smaller start-ups - unencumbered by legacy systems - are better positioned to produce high-quality goods and services quickly by embracing agents from the outset. They avoid &#8220;Sally or Jake&#8221; problems altogether by not hiring them in the first place.</p><p>Such start-ups, leveraging minimal staff and maximum automation, could become showcases for more &#8220;rational&#8221; ways of operating. While some tasks, particularly those requiring deep human interaction, may resist automation temporarily (a microcosm of Baumol&#8217;s cost disease), the overall trend is clear: process-oriented and knowledge tasks will be increasingly handed off to agents. By minimising the accumulation of human employees and their inefficiencies, start-ups can operate more effectively and efficiently. Consider Palantir, which serves major enterprises with far fewer employees than its competitors. Successful start-ups following this model may force larger companies to adapt through competition, or to acquire these innovators, accelerating the transition to automated operations.</p><p><strong>Conclusion</strong></p><p>If we build the infrastructure to enable AI agents to learn tacit know-how and integrate seamlessly into our systems, the future points toward leaner, more efficient organizations where agents progressively replace human roles. 
This essay illustrates that if we assume (a) low costs and <a href="https://www.dwarkeshpatel.com/p/ai-firm">parallelization</a> of mostly aligned agents; (b) human-like or stronger capabilities that are easier to control; (c) seamless integration into a company&#8217;s technical and managerial infrastructure; and (d) a pipeline for absorbing and learning tacit knowledge from humans, then the trajectory tends towards giving agents more production responsibilities, and removing these from humans. Augmentation may work for a while, but it doesn't seem sustainable in the long run as employees lose their know-how, and the agent takes on more tasks. However, it&#8217;s worth noting the numerous &#8216;if&#8217;s involved - many of these outcomes are difficult to predict and account for in advance. It would be worthwhile to explore the complexities that could arise if agents were misaligned, harder to control, or exhibited disparities in capabilities.</p><p>Interestingly, the same infrastructure needed for human&#8211;agent augmentation is also what allows organisations to reduce reliance on humans entirely. In cases where a company is risk-averse or constrained by legacy contracts, this could mean outsourcing tasks and responsibilities to a smaller, leaner part of the organisation or a separate agent-first start-up - a form of internal cannibalisation. From the perspective of shareholders, and arguably mainstream economic theory, this approach makes sense. This trend will not only affect lower-level employees and middle management; directors, too, face similar challenges. Over time, even their roles may diminish, leaving the CEO to <a href="https://newsletter.rootsofprogress.org/p/the-future-of-humanity-is-in-management">manage</a> a multi-agent system (essentially, a company of agents) with perhaps a few other humans, optimised for shareholder benefit. 
This would result in a highly &#8216;rationalised&#8217; company with minimal principal-agent frictions: no strikes, no rest, no weekends, nothing.</p><p>The pace and magnitude of this shift are debatable. Still, I believe a substantial reorientation toward AI-driven operations, culminating in near-full automation of many roles, is plausible in the long term if AI capabilities continue to advance, adoption costs decline, and regulations remain permissive. In some interpersonal or legally sensitive roles, Baumol-like frictions may slow adoption, but are unlikely to halt it entirely. As Berry warns, invisible technologies that shape daily corporate life could embed old inefficiencies in new forms, necessitating organisational overhauls alongside agent deployments. Accelerating factors, such as groundbreaking discoveries or dramatic productivity gains, could further incentivize these shifts. For instance, curing diseases or rapidly improving productivity may create strong pressures to maintain progress and avoid unnecessary delays. Rising shareholder value and broader economic growth would also hasten these transitions.</p><p>Ultimately, this trajectory is desirable to the extent that it boosts productivity, reduces costs, alleviates poverty, cures diseases, and fosters abundance. While individuals like Sally from Finance or Jake from Policy are valuable human beings, their immediate interests may not outweigh the benefits experienced by a broader population through improved economic conditions, health, and life opportunities. The challenge will be ensuring that people like Sally and Jake transition into positive, fulfilling lives after job displacement. The promise of the future must consider them too, rather than relegating them to the ranks of unfortunate externalities. Firms, policymakers, and society must address these transitional frictions, rethinking training, labour market protections, and the invisible structures that guide daily decisions. 
Certain functions relying on interpersonal relationships will continue to require human-driven interactions where trust, rapport, and informal negotiations matter. This will be an important challenge of the next decade: rethinking labour, its role in our lives, and its connection to a meaningful existence. Critical questions about finding meaning in work and managing the acute challenges of job displacement have been deliberately avoided here and will be explored in a future essay.</p><p>&#8212;</p><p>Many thanks to the following people for comments: Conor Griffin, Gustavs Zilgalvis, Julian Jacobs, David Wolinsky, Ben Lepine, and @PITTI_DATA.</p>]]></content:encoded></item></channel></rss>