<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AI Policy Perspectives ]]></title><description><![CDATA[AI policy, governance, and more. Featuring contributions from a range of thinkers, all in a personal capacity.]]></description><link>https://www.aipolicyperspectives.com</link><image><url>https://substackcdn.com/image/fetch/$s_!XGVU!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa24053ba-9bcb-4c21-a969-fe02656ce349_585x585.png</url><title>AI Policy Perspectives </title><link>https://www.aipolicyperspectives.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 30 Apr 2026 11:53:13 GMT</lastBuildDate><atom:link href="https://www.aipolicyperspectives.com/feed" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><webMaster><![CDATA[aipolicyperspectives@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aipolicyperspectives@substack.com]]></itunes:email><itunes:name><![CDATA[AI Policy Perspectives]]></itunes:name></itunes:owner><itunes:author><![CDATA[AI Policy Perspectives]]></itunes:author><googleplay:owner><![CDATA[aipolicyperspectives@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aipolicyperspectives@substack.com]]></googleplay:email><googleplay:author><![CDATA[AI Policy Perspectives]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Science Needs AI Data Stocktakes ]]></title><description><![CDATA[A proof-of-concept for fusion energy]]></description><link>https://www.aipolicyperspectives.com/p/science-needs-ai-data-stocktakes</link><guid 
isPermaLink="false">https://www.aipolicyperspectives.com/p/science-needs-ai-data-stocktakes</guid><dc:creator><![CDATA[Conor Griffin]]></dc:creator><pubDate>Thu, 30 Apr 2026 11:09:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4WgZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4WgZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4WgZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!4WgZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!4WgZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!4WgZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!4WgZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:143249,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/195646240?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4WgZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!4WgZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!4WgZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!4WgZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2366fe52-e41e-4c19-9ec9-cace84d38db0_1920x1080.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><strong>By Conor Griffin, Don Wallace, and Theo Brown</strong></p><p>For 40 years, amid green pastures outside Culham, a small village in Oxfordshire, scientists and engineers toiled at the Joint European Torus. They were attempting to harness nuclear fusion, a force powerful enough to light the sun.</p><p>To create fusion, scientists and engineers must heat the nuclei of very light atoms with such intensity that they fuse, instigating a self-sustaining reaction that releases vast amounts of energy. 
The scale of the challenge is hard to fathom&#8212;at its most extreme, the Joint European Torus, or JET, was the hottest point in the solar system, hitting <a href="https://www.ukaea.org/news/jet-set-for-its-40th-birthday/">over 150 million degrees Celsius</a>.</p><p>JET concluded operations in 2023, generating <a href="https://www.gov.uk/government/news/jets-final-tritium-experiments-yield-new-fusion-energy-record">a record amount</a> of energy in its final experiments. The project is now part of fusion&#8217;s history but remains pivotal to its future. A growing number of organisations developing fusion reactors are drawing on JET&#8217;s discoveries. The UK Atomic Energy Authority is advancing a national fusion facility, <a href="https://www.ukaea.org/work/mast-upgrade/">MAST-U</a>, on the Culham site. This will serve as a test-bed for <a href="https://stepfusion.com/">STEP Fusion</a>, the UK&#8217;s project to put fusion electricity on the grid, set to begin operations in the early 2040s.</p><p>But JET didn&#8217;t just bequeath novel discoveries. It left behind massive troves of data. That raises a tantalising prospect: Could scientists use this data to train AI models that accelerate the path to fusion power?</p><p>This is possible, but challenging. Most JET data is raw and unvalidated. Many important insights are buried in scientists&#8217; logbooks. The data that does exist is not available open source or, generally, for commercial use. Changing this may require agreement from all of JET&#8217;s original partners across Europe. One expert we interviewed called JET data a &#8216;stranded asset&#8217;.</p><p>Such data predicaments are not specific to JET or to fusion, but apply across all of science, even though science is precisely the domain where AI could yield its <a href="https://www.aipolicyperspectives.com/p/a-new-golden-age-of-discovery">greatest benefit to society</a>. 
New breakthroughs and startups are emerging quickly, from protein design to material design. Scientists are also keen users of fast-improving AI coding agents. But a lack of high-quality data will dampen progress. In most disciplines, large, high-quality datasets like the Protein Data Bank, which underpinned <a href="https://deepmind.google/science/alphafold/">AlphaFold</a>, are absent.</p><p>The scientific community needs to tackle this problem, and there are promising signs. Late last year, the UK government launched an <a href="https://www.gov.uk/government/publications/ai-for-science-strategy/ai-for-science-strategy">AI for Science strategy</a>, which includes a new <a href="https://www.renaissancephilanthropy.org/ai-for-science-dataset-rfp-results">collaboration with Renaissance Philanthropy</a> to identify priority datasets. The US government&#8217;s <a href="https://www.whitehouse.gov/presidential-actions/2025/11/launching-the-genesis-mission/">Genesis Mission</a> aims to train AI models and agents on federal scientific data. <a href="http://google.org">Google.org</a> has a <a href="https://www.google.org/impact-challenges/ai-science/">dedicated AI for Science fund</a>, which can fund datasets and tooling.</p><p>These examples suggest that if the scientific community can identify the data that AI needs, a range of actors could help to fund and deliver it.</p><p>This demands what we&#8217;re calling <strong>AI data stocktakes</strong>. The concept is simple: interview leading experts in a given scientific field to understand the main opportunities to apply AI; the data obstacles; and the interventions that could make the biggest difference. Admittedly, some blockages, such as a paucity of engineers, are structural and will take years to fully resolve. 
<strong>AI data stocktakes</strong> should identify such challenges, but focus on projects that governments, companies and philanthropies could fund and implement within 1-2 years.</p><p>There are promising early <a href="https://www.climatechange.ai/dev/datagaps">efforts</a> to map AI data gaps. But, to our knowledge, there are no concise, accessible documents that explain the AI opportunities in genomics, weather forecasting, and food security and convert them into a list of fundable data projects for policymakers and funders to pursue.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>In this essay, we offer a proof-of-concept. We interviewed 25 leading experts to create an <strong>AI data stocktake for fusion</strong>. We focus on the UK, but our analysis and recommendations could be taken up by funders anywhere in the world. Moving forward, we hope to support AI data stocktakes for other scientific disciplines and research problems.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p><h1><strong>I. Why fusion? Why now?</strong></h1><p>If fusion is achieved, it would provide a safe, almost limitless source of clean energy.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> From a scientific perspective, it would yield a better understanding of the plasma that makes up more than 99% of the visible universe. 
From a social impact perspective, it would help address energy scarcity and unlock energy-intensive innovations, like desalination.</p><p>Despite quips about fusion being always 20 years away, <a href="https://pubs.aip.org/aip/pop/article/32/11/112106/3371239/Continuing-progress-toward-fusion-energy-breakeven">70 years of experiments</a> actually show fairly steady progress, which has continued in recent years, from <a href="https://euro-fusion.org/eurofusion-news/wendelstein-7-x-sets-world-record-for-long-plasma-triple-product/">Germany</a> to <a href="https://physicsworld.com/a/chinas-experimental-advanced-superconducting-tokamak-smashes-fusion-confinement-record/#:~:text=It%20began%20operations%20in%202006%20and%20is,is%20currently%20being%20built%20in%20Cadarache%2C%20France.">China</a>. In most fields, such progress would have solved the problems of interest decades ago. But fusion is an extremely hard problem. And the primary product is only attainable at the end of the line.</p><p>To achieve fusion, scientists need to create and control <em>plasma, </em>a super-hot state of matter, in which the atoms have been stripped of their electrons, and extreme heat and pressure are used to force the remaining nuclei to collide and fuse.</p><p>Scientists are pursuing two main approaches to doing this, with very different physics, data, and AI opportunities. 
Magnetic confinement fusion uses massive magnets, while inertial confinement fusion uses high-energy lasers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> We focus this data stocktake effort on magnetic confinement, as the UK&#8217;s STEP project is pursuing that approach, as is <a href="https://tokamakenergy.com/">Tokamak Energy</a>, the UK&#8217;s leading fusion power startup, and <a href="https://deepmind.google/blog/bringing-ai-to-the-next-generation-of-fusion-energy/">Google DeepMind&#8217;s fusion team</a>.</p><p>The end of the line for fusion is now getting closer, for two reasons. First, the underlying technology landscape has changed. In addition to AI, the discovery of <a href="https://news.mit.edu/2021/MIT-CFS-major-advance-toward-fusion-energy-0908">high-temperature superconducting magnets</a> makes it easier to build smaller and <a href="https://www-pub.iaea.org/MTCD/Publications/PDF/p15935-25-02871E_WFO25_web_Dec2025.pdf">potentially cheaper</a> reactors. Second, fusion has traditionally relied on government funding. But in the past five years, a wave of private investment has arrived, with <a href="https://www.fusionindustryassociation.org/over-2-5-billion-invested-in-fusion-industry-in-past-year/">more than 30 companies</a> now pursuing fusion power.</p><p>These shifts have injected welcome momentum into the field, but also significant hype. In response, we need a clear view on the primary bottlenecks that AI can address.</p><h1><strong>II. How to accelerate fusion with AI</strong></h1><p>To create fusion, scientists and engineers need to<em> predict</em>, <em>control</em> and <em>understand</em> how plasma behaves. 
The challenge is that plasmas are highly complex, and much of their underlying physics&#8212;from fluid dynamics to electromagnetics&#8212;remains poorly understood.</p><p>To make progress, scientists <strong>run experiments</strong> that create plasmas in a reactor, and use sensors to measure their properties under different conditions. Scientists use these experiments to validate their theories, reveal unexpected phenomena, and test the hardware needed for power-plant-class devices. However, building fusion reactors is extremely expensive, so few machines exist: most researchers run their experiments at just ~10 leading facilities worldwide. When they can get access to such a facility, scientists must decide how to design the optimal experiment, including how to toggle an array of possible parameters, from the electrical current in a reactor&#8217;s coils to the valves that control the gas levels.</p><p>Fusion scientists also run <strong>computer simulations</strong>, in part to help design and interpret these costly experiments. This is also challenging, as researchers must simulate a diverse range of phenomena at very different scales, from the tiny, lightning-fast movements of electrons to the larger, slower evolution of the entire plasma. Running such simulations on massive supercomputers may take weeks; for scientists without such resources, it may take many months. As a result, scientists make trade-offs, using assumptions and approximations to run their simulations more quickly and cheaply, but also less accurately.</p><p>The challenges don&#8217;t stop there. Scientists know that their theories, simulations and experiments are imperfect. 
But when a gap emerges between what a simulation suggests and what an experiment reports, it is often unclear where exactly the issue falls.</p><p>AI can help in four main ways.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1rTv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1rTv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png 424w, https://substackcdn.com/image/fetch/$s_!1rTv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png 848w, https://substackcdn.com/image/fetch/$s_!1rTv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png 1272w, https://substackcdn.com/image/fetch/$s_!1rTv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1rTv!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png" width="1200" height="675" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:&quot;powering-fusion-2__figure@2x.png&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" title="powering-fusion-2__figure@2x.png" srcset="https://substackcdn.com/image/fetch/$s_!1rTv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png 424w, https://substackcdn.com/image/fetch/$s_!1rTv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png 848w, https://substackcdn.com/image/fetch/$s_!1rTv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png 1272w, https://substackcdn.com/image/fetch/$s_!1rTv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f78bb3e-a1a0-4e2e-83f8-8766cae697c4_2048x1152.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" 
stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>1. <strong>Improve simulations</strong></p><p>Scientists can develop &#8220;AI surrogate&#8221; models that <a href="https://conferences.iaea.org/event/392/contributions/36059/attachments/20032/34088/Zanisi-TH-C.pdf">emulate the predictions from a fusion simulation code</a>, at a fraction of the cost and time. To do so, they run a code many times, varying the input parameters each time. They then use the resulting dataset to train an AI model to predict the outputs of interest much more quickly.</p><p>Scientists <a href="https://conferences.iaea.org/event/392/contributions/36059/attachments/20032/34088/Zanisi-TH-C.pdf">have already shown</a> that AI surrogates can make simulations <em>faster</em>. Moving forward,  AI surrogates could make simulations more <em>useful</em>. First, scientists could develop AI surrogates for more accurate, but computationally expensive simulation codes. 
Second, they could develop &#8216;integrated models&#8217;, like <a href="https://github.com/google-deepmind/torax">TORAX</a>, to stitch together AI surrogates for different phenomena&#8212;from the &#8216;turbulence&#8217; that determines how well confined a plasma is, to the &#8216;scrape-off layer&#8217; where the plasma meets the reactor&#8217;s wall. Finally, scientists could move beyond producing one-off AI surrogates that result in a paper and some code, to a world where surrogates are documented, maintained and ready for use in fusion reactors.</p><p><strong>2. Improve experiments and operate the reactor</strong></p><p>In most fusion experiments, scientists must decide if and how to tune various parameters, striking a balance between proven and more novel settings. To help, researchers can use AI to <a href="https://iopscience.iop.org/article/10.1088/1741-4326/ad22f5/pdf">predict the optimal parameters</a> for their next experiment by learning from past ones, and to <a href="https://www.science.org/doi/10.1126/science.adm8201">predict</a> how well their experiments will fare. More recently, scientists have also started querying LLMs to <a href="https://www.theinformation.com/articles/new-competitors-chase-openai-in-reasoning-ai-race?rc=fzr499">check and refine their experimental protocols</a>.</p><p>Scientists also use AI to predict the plasma &#8216;disruptions&#8217; that frequently end experiments, damage machines, and are one of the biggest obstacles to a future power plant. AI models can already <a href="https://www.nature.com/articles/s41567-022-01602-2">predict</a> past plasma disruptions with high accuracy. But predicting <em>future</em> disruptions, on more powerful machines, quickly enough to stop them, is an open research challenge.</p><p>The ultimate goal is to use AI to help operate the reactor itself. 
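</p><p>As a toy illustration of this kind of shot-level prediction, the sketch below flags a new experiment as disruption-prone by comparing its features to those of past shots. The feature names, values, and simple nearest-centroid rule are our own invented choices, not any facility's actual pipeline.</p>

```python
# Toy sketch: disruption prediction as supervised learning over past shots.
# Features and values are invented for illustration; real predictors use rich
# time-series inputs and far more capable models.

def centroid(rows):
    """Element-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(shots):
    """shots: list of (features, disrupted?). Returns the two class centroids."""
    disrupted = [f for f, y in shots if y]
    stable = [f for f, y in shots if not y]
    return centroid(disrupted), centroid(stable)

def predict(model, features):
    """Flag a new shot if it sits nearer the 'disrupted' centroid."""
    c_dis, c_stab = model
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
    return dist(c_dis) < dist(c_stab)

# Hypothetical per-shot features: [plasma current, density, radiated power fraction]
past_shots = [
    ([2.0, 0.90, 0.80], True),
    ([2.1, 0.95, 0.85], True),
    ([1.0, 0.40, 0.20], False),
    ([1.1, 0.50, 0.25], False),
]
model = train(past_shots)
print(predict(model, [2.05, 0.90, 0.75]))  # -> True (flagged as disruption-prone)
```

<p>However simple the model, the workflow is the one described above: learn from labelled past shots, then score new ones before they are run.</p><p>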
Fusion reactors run on a real-time feedback loop: sensors monitor the plasma, while the actuators, such as the magnetic coils, are adjusted accordingly. The traditional control algorithms used to enable this often struggle with the chaotic, non-linear nature of millions of plasma variables interacting.</p><p>In recent years, researchers have <a href="https://deepmind.google/blog/accelerating-fusion-science-through-learned-plasma-control/">demonstrated</a> how reinforcement-learning agents can learn more effective control policies, including to <a href="https://www.nature.com/articles/s42005-025-02146-6">reduce plasma disruptions</a>. To help these RL agents generalise to novel scenarios and reactors beyond their training data, scientists are developing &#8216;hybrid approaches&#8217; that <a href="https://arxiv.org/pdf/2509.01789">integrate some knowledge of physics</a> into the models.</p><p><strong>3. Improve fusion data</strong></p><p>Fusion experiments are extreme environments. The intense heat and the chaotic nature of the plasma mean that the data that sensors pick up is often noisy or low quality. Some variables cannot be directly measured, and must be inferred, introducing additional sources of error.</p><p>Scientists are <a href="https://www.iter.org/node/20687/magnetic-fusion-diagnostics-and-data-science">training AI models to extract clean signals</a> from this noisy data and to learn correlations that allow them to predict data for one sensor, given data for others&#8212;a capability that could be critical if sensors in a future reactor get damaged. Scientists are also using AI to train <a href="https://iopscience.iop.org/article/10.1088/1361-6587/ac6fff">surrogate models</a> that speed up, and better calibrate, reconstructions of the plasma, using the limited experimental data that is available.</p><p>Scientists often care less about the raw data from their experiments, and more about important events, such as when a disruption to the plasma began. 
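</p><p>A minimal sketch of such automated event tagging: smooth a noisy trace, then report the first sample where it crosses a threshold. The trace, smoothing window, and threshold here are invented for illustration; real detectors are far more sophisticated.</p>

```python
# Toy sketch: automatically tagging the onset of an event (e.g. a disruption)
# in a noisy sensor trace. The trace, window, and threshold are invented.

def smooth(signal, window=3):
    """Centred moving average to suppress sensor noise."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def event_onset(signal, threshold):
    """Index of the first smoothed sample above threshold, or None if no event."""
    for i, value in enumerate(smooth(signal)):
        if value > threshold:
            return i
    return None

# A flat, noisy trace followed by a sharp rise: the 'event' starts at index 6.
trace = [0.1, 0.12, 0.09, 0.11, 0.10, 0.13, 0.9, 1.4, 1.5, 1.6]
print(event_onset(trace, threshold=0.5))  # -> 6
```

<p>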
Today, they often need to manually inspect graphs and plots to detect these events. AI can help to <a href="https://pubs.aip.org/aip/pop/article/32/4/042508/3344977/Using-deep-learning-for-the-detection-of-UFOs">automate</a> parts of this process and to detect events that scientists may have missed.</p><p><strong>4. Improve the underlying technologies</strong></p><p>Achieving fusion will require a supply chain rich in technologies that could be applied more broadly. AI could help to accelerate their development.</p><p>For example, the chamber walls in a fusion reactor will <a href="https://www.royce.ac.uk/news/updated-roadmap-focuses-on-materials-for-commercial-fusion/">require new materials</a> that can withstand extreme temperatures. Scientists are training AI <a href="https://www.nature.com/articles/s41586-023-06735-9">surrogate models</a> that speed up the simulations needed to assess a candidate material&#8217;s real-world properties, like how strong or resistant to radiation it will be over its lifetime.</p><p>A typical fusion reactor also spends much of its time out of operation, at great cost. This makes fusion a logical place to develop <em>predictive maintenance</em> techniques that ingest historical data from sensors and train AI models to learn the subtle signatures that indicate pending breakdowns, allowing practitioners to schedule maintenance or design more reliable systems.</p><h1><strong>III. 
The challenges with fusion data</strong></h1><p>As they pursue these AI opportunities, scientists will need access to three main kinds of fusion data: from experiments, simulations, and sources that are not traditionally available, such as researchers&#8217; logbooks. There are promising efforts underway on this front, but many obstacles.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-Trt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-Trt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!-Trt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!-Trt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!-Trt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-Trt!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png" width="1200" height="675" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:&quot;powering-fusion-1__figure.png&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" title="powering-fusion-1__figure.png" srcset="https://substackcdn.com/image/fetch/$s_!-Trt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!-Trt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!-Trt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!-Trt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d06d77d-af5b-4331-a753-e13996d97f4e_1920x1080.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" 
stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>1. Experimental data: Unvalidated, single-machine and hard to access</strong></p><p>Experimental data is the &#8216;ground truth&#8217; that the sensors in reactors pick up, from line graphs to videos. In magnetic confinement fusion, the challenge is not so much a <em>lack</em> of data, but an<em> excess</em> of <em>raw</em> data that has not gone through <a href="https://arxiv.org/pdf/2507.23018">the processing</a> needed to make it useful to AI. This processing ranges from addressing noise and imperfections in the underlying sensors, to detecting and annotating important events, such as plasma disruptions.</p><p>Currently, the community has to rely on the small well-validated datasets that do exist, which may be as little as a few hundred or thousand experimental &#8216;shots&#8217;&#8212;individual test runs of a reactor. 
The high cost of fusion experiments also creates a natural incentive to pursue experiments that will not fail. This curtails more novel research, and means that much of the resulting data sits in a similar &#8216;parameter&#8217; space rather than representing the full range of plasma dynamics that scientists want to model.</p><p>This experimental data is also not generally open-sourced or available for commercial use. One promising initiative to change this, which several interviewees cited, is UKAEA&#8217;s <a href="https://github.com/ukaea/fair-mast">project</a> to open source data from their MAST facility.</p><p>However, to develop more general AI models, researchers want <em>multi-machine </em>databases that extend beyond a single facility like MAST. To that end, the IAEA is developing a federated <a href="https://zenodo.org/records/15053494/files/POS-12_Gahle%20-%20Amrik%20Singh%20(1).pdf?download=1">Fusion Data Lake</a> where different institutions would store their data locally but make it accessible via a central data catalog. One challenge with this approach is that different fusion facilities have defined fusion variables and stored data in different ways. The <a href="https://conferences.iaea.org/event/251/contributions/20713/attachments/11191/16492/IMAS%20Tutorial%20-%20Pinches.pdf">Integrated Modelling &amp; Analysis Suite</a>, or IMAS, addresses this by providing a standardised ontology and set of structures for fusion data. It is nascent, but has positive momentum.</p><p><strong>2. Simulation data: No incentives, process, or place to host it</strong></p><p>In theory, researchers should be able to run fusion simulation codes many times and train AI surrogate models on the resulting data to reproduce the outputs at a fraction of the cost. In practice, most scientists run a simulation to answer a single, narrow physics question.
They do not run a large number of simulations to build representative datasets to train AI surrogates&#8212;a very different activity.</p><p>That activity is also a hard one. There is no standard procedure to follow to generate a dataset for training an AI surrogate model, and the codes are often finicky to use. Most simulation codes contain &#8216;free parameters&#8217;&#8212;knobs that scientists must decide how to best tune&#8212;a practice that can be as much an art as a science. The datasets can also be huge and there is no obvious location to store them, although some <a href="https://arxiv.org/abs/2412.00568">early examples</a> exist.</p><p><strong>3. Dark data: Nascent, IP issues, and hard to integrate into workflows</strong></p><p>&#8216;Dark data&#8217; describes the contextual information that scientists generate that is not captured in structured datasets. This includes notes scribbled in experimental logbooks, where scientists describe the procedures they ran, the hardware issues they faced, and the phenomena they observed. For simulations, it includes the many nuances needed to run and interpret a code&#8217;s results successfully, and the many undocumented imperfections to be aware of.</p><p>Accessing this dark data could help ensure that AI systems do not focus on the wrong things&#8212;for example, when an anomaly in the data is caused by an equipment failure or error, rather than a meaningful phenomenon. 
It could also provide AI with <a href="https://www.nature.com/articles/s42254-024-00702-7.epdf?sharing_token=F6VFYv-f-UII3agyUWQXTNRgN0jAjWel9jnR3ZoTv0PsZ4W76TMzTOmNXsymzNEoeEGBc0_oemkeSlCDlVhpXL4NUYebo2C0TO6GM1v-dqXaih35GbkjuA6ixpwYSbAsgaASUaz2o_Q-Pd6iosOAQybFabOJhVWwPoo0w6B7qJo%3D">a window into the entire research process,</a> including its many dead-ends, rather than just the final result.</p><p>Researchers are using LLMs to try to make dark fusion data accessible, for example by enabling scientists to query <a href="https://control.princeton.edu/assets/data/publications/pdfs/%5B234%5D%20Viraj%20Mehta%20et%20al.%20%E2%80%9CTowards%20LLMs%20as%20Operational%20Copilots%20for%20Fusion%20Reactors%E2%80%9D.pdf">experimental logs</a> and <a href="https://www.linkedin.com/posts/d3dfusion_fusionenergy-fusionscience-artificialintelligence-activity-7370821988711321600-SCD6/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAADZpgNABb4jJiQ25ynNt00JOXHWtkILfiy4">archive documents</a>. But much of the data is not well-annotated, there are IP issues in accessing it, and it is not yet clear how to integrate the data into practitioners&#8217; daily workflows.</p><p><strong>The three &#8216;debts&#8217; holding fusion data back</strong></p><p>Many of these challenges with fusion data result from three underlying issues, which have compounded over time into systemic debts that inhibit the use of AI today.</p><p>1. <strong>Technical debt</strong></p><p>The fusion community has traditionally had to prioritise getting large, complex machines to work, rather than building infrastructure to collect, curate, and share data. As a result, activities like data annotation and writing high-quality code are underfunded. Many leading fusion codes were created decades ago and have evolved slowly, while the quality of experimental data is limited by the capabilities of the sensors available.</p><p>2. 
<strong>Bureaucratic debt</strong></p><p>The large costs of fusion experiments and the traditional reliance on government funding mean that many fusion projects have a complex web of owners and collaborators, which can make agreeing on new data initiatives difficult. For example, JET was sponsored and funded by Euratom, the EU&#8217;s nuclear research community. Its scientific exploitation was managed by EUROfusion, a pan-European network of fusion research labs. UKAEA managed engineering and operations. Releasing its data may require agreement from all of these actors.</p><p>There are other bureaucratic hurdles too. Scientists who run fusion experiments often want an embargo period on the resulting data so that they can prepare a publication. Such embargoes are rational, common in science, and largely supported, but many interviewees felt that they had become too long. Fusion data is also subject to diverging open-source policies. For example, the MAST experiment was funded by UK Research and Innovation, which has strong open data requirements. The follow-up MAST-U experiment is funded by the UK Department for Energy Security and Net Zero, which does not have the same policies. Many fusion companies also do not open source their data.</p><p><strong>3. Human and cultural debt</strong></p><p>The fusion community does not have enough software engineers and experts who are able to clean data, attach confidence levels, and curate it for AI use. As a result, physicists must take on many tasks that are outside their core areas of expertise, including writing high-quality code.</p><p>This issue is compounded by a research culture that inhibits data sharing. Scientists are constantly pushed to move on to the next experimental campaign, rather than to validate older data. This stops some scientists from sharing their data, because they fear that end users will not appreciate the resulting gaps and do bad science with it. 
Or they fear that they themselves will be criticised for releasing &#8216;unscientific&#8217; data.</p><h1><strong>IV. Recommendations</strong></h1><p>Below we provide eight recommendations to address these data limitations and accelerate fusion with AI. Each project could be led by a mix of government bodies and funders, like the Department for Science, Innovation and Technology and UK Research and Innovation; public research organisations, like the UK Atomic Energy Authority; companies; universities; and philanthropies. Where possible, the UK should look to collaborate internationally&#8212;for example, with the US <a href="https://genesis.energy.gov/">Genesis Mission</a> and the International Atomic Energy Agency.</p><p><strong>1. Strengthen the UK&#8217;s lead in open fusion data</strong></p><p>Expand <a href="https://www.ukaea.org/service/fair-mast/">FAIR MAST</a>, the UK&#8217;s pioneering open sourcing of experimental data from its MAST facility, by adding data from the follow-up MAST-U facility and making the user interface more accessible. This will require the UK Department for Energy Security and Net Zero clarifying that open data policies apply to MAST-U, funding at least five data engineers over a two-year period, and ensuring that the project has sustainable compute and data storage.</p><p><strong>2. Liberate 40 years of data from the Joint European Torus</strong></p><p>Launch a project to open source at least 30% of JET experimental data by 2028. First, this will require agreement on what data to release.
For example, should the project only release validated, curated data relating to notable discoveries? Or should it also release data that is raw, validated only in part, or which relates to &#8216;normal&#8217; machine behaviour? Second, and much harder, will be securing agreement from all relevant institutions to release the data.</p><p><strong>3. Launch a competition to predict plasma disruptions</strong></p><p>Fund a competition to see which AI model can best predict future plasma disruptions in new experimental campaigns, building on <a href="https://aiforgood.itu.int/about-us/ai-for-fusion-energy-challenge/">early examples</a> and <a href="https://disruptions.mit.edu/">work</a> in this space. This could include funding dedicated experimental shots on machines such as MAST-U, to evaluate models on challenging edge cases. Beyond accuracy, sub-competitions could evaluate models on other important variables: Can the model make predictions with little data, such as when sensors become damaged? Can it predict disruptions across different reactors? Can it predict disruptions with sufficient lead time to prevent them? And can it shed new light on <em>why</em> disruptions are occurring?</p><p><strong>4. Prototype the future of AI-enabled scientific data curation</strong></p><p>Expand <a href="https://www.aappsdpp.org/DPP2025/html/3contents/pdf/5691.pdf">the platform</a> that UKAEA recently developed to enable human experts to use AI to annotate experimental data, by adding data from other fusion facilities; increasing the complexity and variety of the metadata that is captured; and training AI models to directly annotate an increasing share of this data.</p><p><strong>5. Make leading simulation codes AI-ready</strong></p><p>Launch an effort to modernise priority fusion simulation codes, in part to make it easier to train AI surrogate models based on them.
This could build on <a href="https://www.hartree.stfc.ac.uk/work-with-us/projects/ukaea/fusion-computing-lab-duplicate-papers/freegsnke/">early efforts</a> in <a href="https://plasmafair.github.io/">this space</a> and target codes, such as JINTRAC, which are important to the UK&#8217;s proposed STEP fusion power plant and the international ITER effort. The project could start by modernising the codes&#8217; documentation and &#8216;refactoring&#8217; them so that they are compatible with modern chips, like GPUs and TPUs, and allow for parallel data generation. It could then open source the codes, with a plan for how to maintain them. Throughout the modernisation process, it could test the usefulness of <a href="https://deepmind.google/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/">AI coding tools</a> for the tasks at hand.</p><p><strong>6. Demonstrate a new state-of-the-art for AI surrogate models</strong></p><p>Fund small teams of software engineers and experts to develop AI surrogate models of important, computationally expensive phenomena in fusion simulations. The project should ensure that all newly created surrogates have state-of-the-art documentation, data provenance and version control. It should release the data used to train and validate the surrogates and develop software pipelines to <a href="https://simvue.io/">automate time-intensive aspects</a>, such as organising the data.</p><p><strong>7. Use AI agents to preserve expert fusion knowledge for the future</strong></p><p>Gather a group of leading experts on a priority fusion simulation code, and equip them to use AI agents to make the tacit knowledge involved in running that code available to the wider research community. To do so, the experts could task an agent with running the code.
As it seeks to execute, the agent would have an &#8216;internal monologue&#8217; that the experts could trace, steer and intervene on. The end result would be a series of documents, such as markdown files, that capture the important dark data needed to run the code well.</p><p><strong>8. Create Fusion-Bench to measure and drive LLM performance</strong></p><p>Assign leading fusion experts to create an evaluation metric to quantify how well leading large language models understand <a href="https://arxiv.org/pdf/2504.07738">core fusion concepts</a>. This would make it easier to improve the usefulness of LLMs for downstream tasks in fusion. This evaluation will be more difficult to create than in disciplines like maths or computer science, where it is easier to automatically verify a model&#8217;s performance. But the experts could determine the most useful approach, which will likely involve a combination of question-answering and task performance.</p><h1><strong>V. Six open debates</strong></h1><p>The experts we interviewed disagreed on some points. Despite the framing below, few are either/or debates. Rather, most are about relative degrees of emphasis.</p><ul><li><p><strong>Incrementalism vs novelty: </strong>Should we build on the early AI opportunities that fusion practitioners have already showcased? Or pursue more novel, uncertain AI ideas, such as training general-purpose &#8216;fusion foundation models&#8217; or using AI &#8216;<a href="https://www.nature.com/articles/d41586-026-00820-5">world models</a>&#8217; to pursue new kinds of fusion simulations?</p></li><li><p><strong>The past vs the future: </strong>Should we strive to get as much value as possible out of older fusion data, like JET? 
Or do the costs mean that we should accept our losses, and focus on making future fusion experiments AI-ready?</p></li><li><p><strong>Science vs engineering: </strong>Are efforts to validate, annotate and standardise data part of an ultimately doomed quest for perfect scientific understanding in fusion? Should we instead use AI to embrace a more engineering-led approach that can get the machines to work with noisy, imperfect data?</p></li><li><p><strong>Domestic vs international: </strong>Should the UK rejoin ITER, the world&#8217;s flagship international fusion collaboration, which it left following Brexit? Or should the UK focus on domestic efforts, perhaps in collaboration with priority partners, like the US and IAEA?</p></li><li><p><strong>Magnetic vs alternatives: </strong>Should the UK continue to focus on magnetic confinement fusion as the most realistic pathway to a future power plant? Is magnetic also a better bet for AI because it produces much more data and doesn&#8217;t have the same associations with the security establishment, which makes data access easier? Or should the UK invest more in inertial confinement and alternative fusion efforts, given the country&#8217;s diverse academic expertise, its historically strong relationship with the US National Ignition Facility, and notable assets, such as a <a href="https://www.clf.stfc.ac.uk/Pages/ar10-11_lsd_laser_r-d.pdf">world-leading laser</a>?</p></li><li><p><strong>Public vs private: </strong>Should the UK government try to derive more immediate value from its fusion data? For example, should the UK license some data to companies, to cover the costs of data processing and annotation? If so, should local startups pay less?
Or would such efforts hurt the UK&#8217;s goal of developing a world-leading fusion sector?</p><p></p></li></ul><p><em>_________________</em></p><p></p><p><em>This essay was originally posted on the <a href="https://deepmind.google/public-policy/science-needs-ai-data-stocktakes/">Google DeepMind website</a> and is a summary of a 20-page <a href="https://storage.googleapis.com/deepmind-media/DeepMind.com/Public-Policy/science-needs-ai-data-stocktakes/science-needs-ai-data-stocktakes-may-2026.pdf">report</a> that contains more details and examples. </em></p><p><em>Thank you to the following experts who let us interview them, reviewed the draft, and/or provided other support, as well as those who prefer to remain anonymous. All mistakes belong to the authors and no expert spoke to us on behalf of their organisation.</em></p><p><em>Jonathan Citrin, Brendan Tracey, Cristina Rea, Nathan Cummings, Andrea Murari, Jess Montgomery, George Holt, Alain Becoulet, Matteo Barbarino, Arthur Turrell, Adriano Agnello, David Dickinson, Steven Rose, Alessandro Pau, Kristina Fort, Charles Yang, Federico Felici, Tim Dodwell, Sam Vinko, Aidan Crilly, Lee Margetts, Tom Westgarth, Lorenzo Zanisi, Chris Packard, Justin Wark and Stanislas Pamela.</em></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe for free to read future pieces about how AI may change science, society and more. 
</p></div></div></div><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Fusion has several characteristics that make an AI data stocktake exercise tractable, including a relatively small and centralised research community and early efforts to build on, like the open-source FAIR MAST initiative and the IMAS data standardisation effort. Fields like genomics, weather forecasting, and food security look quite different, and so careful thought is needed on how to best scope AI data stocktakes in these fields. Nevertheless, we think they would be useful.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>There are caveats to the claim that fusion power would be essentially limitless, emission-free, and perfectly safe. One of the input fuels, tritium, is not widely available and scientists will need to <a href="https://www.iter.org/machine/supporting-systems/tritium-breeding">use nascent &#8216;blankets&#8217; to breed it</a> from lithium. Certain parts of fusion reactors will become radioactive over time, although they can likely be recycled after ~50 years. Thermonuclear weapons use fusion reactions. 
However, the weapons first require <em>fission</em> reactions and fissile materials like enriched uranium and plutonium.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Note: There are other approaches to inertial confinement fusion that do not use lasers.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>For more in-depth reviews of AI for fusion opportunities, see publications from <a href="https://proceedings.mlr.press/v235/spangher24a.html">MIT</a>, the <a href="https://www.catf.us/resource/a-survey-of-artificial-intelligence-and-high-performance-computing-applications-to-fusion-commercialization/">Clean Air Task Force</a>, <a href="https://www.iaea.org/publications/15198/artificial-intelligence-for-accelerating-nuclear-applications-science-and-technology">IAEA</a>, <a href="https://arxiv.org/pdf/2603.25777">FusionFest</a>, and the <a href="https://link.springer.com/article/10.1007/s10894-020-00258-1">US Department of Energy</a>.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Q&A with Ethan Mollick]]></title><description><![CDATA["People like AI when they use it themselves; they 
don&#8217;t like AI writ large"]]></description><link>https://www.aipolicyperspectives.com/p/q-and-a-with-ethan-mollick</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/q-and-a-with-ethan-mollick</guid><dc:creator><![CDATA[Tom Rachman]]></dc:creator><pubDate>Wed, 22 Apr 2026 09:48:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9yg5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9yg5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9yg5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg 424w, https://substackcdn.com/image/fetch/$s_!9yg5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg 848w, https://substackcdn.com/image/fetch/$s_!9yg5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!9yg5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!9yg5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1517553,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/194500111?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9yg5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg 424w, https://substackcdn.com/image/fetch/$s_!9yg5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg 848w, https://substackcdn.com/image/fetch/$s_!9yg5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!9yg5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe933c12-2187-4675-abcb-137f3638eed4_4000x2667.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">(Credit: Jennifer Buhl)</figcaption></figure></div><p><em>How can companies get their employees to use artificial intelligence when human intelligence remains sharp enough to know that this risks replacing jobs? How should education revise itself for the ever-revising technological world that students emerge into? 
And how to understand the love/hate relationship so many people have with AI?</em></p><p><em>Ethan Mollick&#8212;<a href="https://mgmt.wharton.upenn.edu/profile/emollick/#:~:text=Ethan%20Mollick%20is%20the%20Ralph,on%20work%2C%20entrepreneurship%2C%20and%20education.">professor</a> of management at the Wharton School of the University of Pennsylvania and bestselling author of </em><a href="https://www.penguinrandomhouse.com/books/741805/co-intelligence-by-ethan-mollick/">Co-Intelligence: Living and Working with AI</a><em>&#8212;is among the leading public intellectuals <a href="https://www.oneusefulthing.org/">commenting</a> on AI adoption, connecting the latest scholarship to real-world usage, including his own tinkering with each new model.</em></p><p>AI Policy Perspectives <em>caught up with Ethan to hear his latest thinking on everything from agentic systems, to why scientific publication is broken, to how workers emotionally relate to AI colleagues. Too much chatter, he argues, considers this transformation at the broadest level. Too little digs into the practicalities of getting it right. </em></p><p style="text-align: right;">&#8212;Tom Rachman, <em>AI Policy Perspectives</em></p><div><hr></div><p style="text-align: right;"><em>[Interview edited and condensed for clarity]</em></p><p><strong>Tom: In your 2024 book </strong><em><strong>Co-Intelligence</strong></em><strong>, you proposed four rules for human and AI collaborations, including that people should oversee and verify AI outputs. But doesn&#8217;t the value of AI agents come from people </strong><em><strong>not</strong></em><strong> overseeing and verifying everything?</strong></p><p><strong>Ethan: </strong>This is where policy matters a lot because these are choices now. In the &#8220;co-intelligence era,&#8221; you&#8217;d prompt the AI to do something in a chatbot, and it would give you an answer. You prompted again, and it&#8217;d give you another response. The human was in the loop. 
And not being in the loop was really dumb because it meant that you were just pasting in the AI&#8217;s answer, and then you&#8217;d get in trouble, as a lawyer with the judge, or whatever it was. Capabilities were weak, so human-in-the-loop mattered a lot. </p><p>But with agentic systems that could do hours of work on their own, now it&#8217;s a design choice. When do we want humans-in-the-loop? When is human verification valuable? When is human verification morally required? When is it legally required? What kind of interventions move the system forward? I feel there has been a complete lack of deep understanding about these topics.</p><p><strong>Tom: You&#8217;ve said that, with agentic systems, management becomes a superpower. Can you explain this?</strong></p><p><strong>Ethan: </strong>Increasingly, systems look like mini-organizations as they get subagents they can delegate to. So the best way to organize is to give the AI a clear direction of where you want to go. And it turns out that this looks a lot like management. When do you want the AI to check in with you? How do you write a really clear brief? What checks are important? What tests do you want to run? What&#8217;s acceptable? What&#8217;s not acceptable? Those are management questions.</p><h4><strong>THE WORKPLACE</strong></h4><p><strong>Tom: You co-wrote a <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5188231">study</a> last year involving a field experiment at Procter &amp; Gamble that showed AI usage enhanced employee performance. But there were other interesting findings besides that.</strong></p><p><strong>Ethan: </strong>The most interesting piece about it was that people liked working with the AI, and that it substituted for people emotionally. The second interesting piece was the &#8220;smoothing&#8221; of capabilities&#8212;so, technical people previously had technical ideas while business people had business ideas. But AI smooths out both. 
If technical people can do business work and business people do technical work, what that tells you is we have to redesign organizations.</p><p><strong>Tom: The emotional side&#8212;that using AI improved people&#8217;s feelings about the work&#8212;was surprising to me; I wasn&#8217;t sure what to make of it.</strong></p><p><strong>Ethan: </strong>What to make of it? That views of AI are complicated. If people keep saying, &#8220;Yeah, AI is going to destroy all jobs, and may kill everyone on Earth&#8230;but might not&#8221;&#8212;and then, &#8220;Why is AI unpopular?!&#8221; Feels like not a hard question. People like AI when they use it themselves; they don&#8217;t like AI writ large. It&#8217;s not surprising to me that AI makes your job better because a lot of jobs suck! And if we do good design work with AI, it makes people&#8217;s lives better. If we just let it loose on the world, and tell management that the only option they have is automation, then we&#8217;re in big trouble.</p><p><strong>Tom: Many knowledge workers seem to be using AI in secret right now, perhaps from fear of being exposed as less valuable.</strong></p><p><strong>Ethan: </strong>This is a leadership problem.<em> </em>The incentives have to be aligned properly. Currently, it&#8217;s, &#8220;I&#8217;m going to automate your jobs away&#8221; or &#8220;I&#8217;m not going to share with you any of the gains the company gets.&#8221; People are exquisitely tuned to rewards. So it&#8217;s about leaders articulating a vision of what the world looks like with AI for employees. &#8220;What should I expect to do? How are people rewarded for doing the right thing? 
If they automate 90 percent of my job, what happens to me?&#8221; Without those answers, everything else is secondary.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Q_Zv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Q_Zv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png 424w, https://substackcdn.com/image/fetch/$s_!Q_Zv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png 848w, https://substackcdn.com/image/fetch/$s_!Q_Zv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png 1272w, https://substackcdn.com/image/fetch/$s_!Q_Zv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Q_Zv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png" width="1456" height="808" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:808,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1446357,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/194500111?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Q_Zv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png 424w, https://substackcdn.com/image/fetch/$s_!Q_Zv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png 848w, https://substackcdn.com/image/fetch/$s_!Q_Zv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png 1272w, https://substackcdn.com/image/fetch/$s_!Q_Zv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b55957a-71e3-40fc-a0a2-6ffcbd09879e_1931x1071.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h4><strong>CHANGING ORGANIZATIONS &amp; EDUCATION</strong></h4><p><strong>Tom: You have a concept of &#8220;<a href="https://www.oneusefulthing.org/p/making-ai-work-leadership-lab-and">leadership, lab, and crowd</a>.&#8221; Could you explain?</strong></p><p><strong>Ethan: </strong>There was a huge amount of R&amp;D in the 1900s about how you organize work, and 40 percent of the American advantage in business came from <a href="https://www.nber.org/system/files/working_papers/w22327/w22327.pdf">management</a>. In the last 30 years, a lot of that muscle has died. But experimentation is important, and leaders need to guide that. So, there are three things that organizations need to be successful with AI. First is &#8220;leadership&#8221;: a team that articulates a clear vision of the future, and is willing to experiment. 
Then there&#8217;s &#8220;the crowd,&#8221; the employees who might actually use AI. They need access to a frontier model, they need clear rules, they need reward systems. Then there is &#8220;the lab,&#8221; and this is the piece a lot of companies are missing. You need a dedicated team working on AI innovation. They can&#8217;t be just a technical team; this is not an IT department problem. If you don&#8217;t have that piece, you&#8217;re not building things for the future. And where does the crowd go when they have a good idea? &#8220;I came up with a breakthrough idea that saves 90 percent of effort!&#8221; How does that diffuse in the organization? That&#8217;s where you need the lab.</p><p><strong>Tom: If AI transforms the workplace, that should change how we educate the next generation, right?</strong></p><p><strong>Ethan: </strong>The early workplace is under a lot of threat because the old apprenticeship model just broke. The idea <em>was</em> that there were tasks&#8212;especially in white-collar work&#8212;that were tedious and annoying for managers to do. But you could pay a relatively cheap person to do them, and that person would learn as a result of this, and receive mentorship. So we had this amazing machine for talent: we taught you, we evaluated you, and you got paid, and you were doing work we needed. A junior person&#8217;s goal was to produce good work that made managers happy, so that they got promoted. But now the junior person is worse than AI, so they&#8217;ll use AI to do their work. 
And the middle manager&#8217;s goal was to give work to a junior person who&#8217;s not great, and give them feedback so they get better, so that the middle manager has to do less work. And that broke because the middle manager would rather assign work to the AI.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tJTk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tJTk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png 424w, https://substackcdn.com/image/fetch/$s_!tJTk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png 848w, https://substackcdn.com/image/fetch/$s_!tJTk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png 1272w, https://substackcdn.com/image/fetch/$s_!tJTk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tJTk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png" width="1456" height="808" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:808,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1453693,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/194500111?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tJTk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png 424w, https://substackcdn.com/image/fetch/$s_!tJTk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png 848w, https://substackcdn.com/image/fetch/$s_!tJTk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png 1272w, https://substackcdn.com/image/fetch/$s_!tJTk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520c270f-fe48-4ee8-9339-8837619b8858_1931x1071.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong>Tom: But in terms of the educational system, what should change if workplaces no longer offer that apprenticeship role?</strong></p><p><strong>Ethan: </strong>Education is really screwed up right now, but it was screwed up for lots of reasons. It&#8217;ll be fine; we&#8217;ll figure this out. But it&#8217;s gonna take a bunch of years. It&#8217;s clear from early <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6423358">evidence</a> that AI will be a tutor outside of class and inside class. It&#8217;ll do activities and give guidance. But schools are places where we can compel students to not use AI, and have them in a room, and evaluate them, and teach them the things that we want them to learn. 
As long as we think people need to be educated, this is the best space to do it in.<em> </em>So students are cheating in the meantime? They were cheating before! We can give them different tests; we could do in-class writing assignments. There can be a weird, backward-looking &#8220;Education won&#8217;t adjust!&#8221; view. How many death spirals does higher education need to be in per moment? There are the pieces to reconstruct a better form of education. It&#8217;s just a massive changeover.</p><h4><strong>BETTER SCIENCE &amp; BETTER THINKING</strong></h4><p><strong>Tom: What about academia? There&#8217;s been much talk about AI-written papers, and how they could overwhelm academic publishing. But could AI benefit the peer-review process, and help with the dissemination of academic findings?</strong></p><p><strong>Ethan:</strong> This is another area where more lifting is needed. It&#8217;s a shame that we are building AI co-scientists, but not thinking about the rest of the process that&#8217;s needed to actually make science happen. It&#8217;s one thing to have science produce more papers. We have no ability to absorb more papers. Every publication is overwhelmed. Our dissemination techniques were already bad, but now they&#8217;re really broken.</p><p><strong>Tom: As a case in point, you submitted a paper around 2023, and <a href="https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged">wrote</a> publicly about it then, making your term &#8220;the jagged frontier&#8221;&#8212;that AI capabilities advance in some areas but remain behind in others&#8212;highly influential. Yet the academic <a href="https://pubsonline.informs.org/doi/full/10.1287/orsc.2025.21838">paper</a> itself only just came out, three years later!</strong></p><p><strong>Ethan:</strong> One of the rejections we got early on was reviewers saying that they knew this already, and they cited a bunch of working papers&#8212;that cited the working paper <em>we</em> had submitted! 
This is not a unique story. Opening one part of the bottleneck without opening the others becomes a problem. But it takes longer to solve systemic problems of how science operates than to solve the problem of producing more papers.</p><p><strong>Tom: Another concern in education and science is <a href="https://www.mdpi.com/2075-4698/15/1/6">cognitive offloading</a>, that people may <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646">surrender</a> thinking to machines, and lose those skills. On the other hand, AI&#8217;s value comes from machines thinking for us. What are examples of </strong><em><strong>bad</strong></em><strong> offloading and </strong><em><strong>good</strong></em><strong> offloading?</strong></p><p><strong>Ethan: </strong>We offload all the time, right? But we also force people not to offload. You could offload all your mental math to calculators, but we force students to do some math by hand in an attempt to get them to learn stuff. And we can enforce those rules in school. In the world of work, we are not used to thinking about training, about what should be offloaded, and what shouldn&#8217;t be. We need to make decisions about this. So, Rolls-Royce still employs someone to <a href="https://www.youtube.com/watch?v=q9yqXPNHyMA">paint stripes</a> on a car by hand, and that&#8217;s an obvious pushback against deskilling in one area. But Ford doesn&#8217;t do the same thing. These are choices we get to make at an organizational level, depending on what we think is valuable.</p><h4><strong>ADAPTING TO CONSTANT CHANGE</strong></h4><p><strong>Tom: A point you&#8217;ve made to young people about the AI future is that they&#8217;ll need to be adaptable. When educators talk about teaching adaptability, it sometimes boils down to encouraging &#8220;creativity&#8221; and &#8220;critical thinking.&#8221; Another view is that you&#8217;re more likely to be adaptable by developing deep domain knowledge. 
For you, what does learning adaptability mean?</strong></p><p><strong>Ethan: </strong>Adaptability requires both deep domain knowledge <em>and</em> wide knowledge: T-shaped behaviour is probably the way to go. I feel like it&#8217;s a throwaway line: &#8220;Well, we&#8217;ll all be adaptable!&#8221; If we could teach that, that&#8217;d be amazing. People are more adaptable than we think, so part of this is that people will figure stuff out. But we can&#8217;t just throw up our hands, and say, &#8220;Be adaptable!&#8221; You need to have deep enough knowledge to go into a field. You need to have broad enough knowledge so that, as one piece of knowledge becomes less useful, you&#8217;re moving to the next one. And we need to help people be adaptable by building systems that get them in place inside an organization and able to shift roles. I sometimes worry that adaptability is a catch-all for &#8220;Don&#8217;t worry! It&#8217;ll be fine!&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bNwQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bNwQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png 424w, https://substackcdn.com/image/fetch/$s_!bNwQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png 848w, 
https://substackcdn.com/image/fetch/$s_!bNwQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png 1272w, https://substackcdn.com/image/fetch/$s_!bNwQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bNwQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png" width="1456" height="808" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:808,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1526206,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/194500111?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bNwQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png 424w, 
https://substackcdn.com/image/fetch/$s_!bNwQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png 848w, https://substackcdn.com/image/fetch/$s_!bNwQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png 1272w, https://substackcdn.com/image/fetch/$s_!bNwQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0029af43-cf4a-4465-ac85-08b368988fdf_1934x1073.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong>Tom: Another side is that not everybody will be equally adaptable. Could it be that the AI future favours certain circumstances and characteristics?</strong></p><p><strong>Ethan: </strong>A lot of these characteristics were already good characteristics to have. Does AI act as a multiplier of them? Does it disincentivize some people? We&#8217;re now past the edge of what we know. Ultimately, all of these questions come down to the same exact question, which is: How good does AI get, how fast? We need to articulate more clearly what we think that future looks like. Because you can&#8217;t say, &#8220;We&#8217;re going to build a superintelligent machine that&#8217;s better than all humans at every intellectual task&#8212;but let&#8217;s start thinking about adaptability!&#8221; Unless you mean, &#8220;Let&#8217;s adapt to UBI&#8221; [where everyone gets Universal Basic Income cash payments from the government]. And then, we should be spending a lot more time thinking about those issues. Not everyone in the labs believes this, and I find that the econ people believe it less. But you can&#8217;t have this message of, like, &#8220;All work will be obsolete!&#8221; and then have detailed, ticky-tacky conversations about what you should do in eighth grade. Because, by the time you enter the job market, there&#8217;s no jobs. So give me the pathway that you think <em>is</em> there, and that becomes the most important question to ask.</p><p><strong>Tom: Are there other important questions I didn&#8217;t ask?</strong></p><p><strong>Ethan: </strong>We need to start thinking about getting into fields, and understanding what the changes are&#8212;we need to get detailed. That is where the research is missing. Another large-scale econ picture about AGI isn&#8217;t as useful. 
General-purpose technology affects everything, so we need policymaking for everything, from power generation to accountants, and when does the government say it&#8217;s okay to do this. There&#8217;s just this assumption that if we do the macro stuff, everything will work out. I&#8217;d rather see a lot more micro stuff: a thousand flowers everywhere, trying to come up with different approaches.</p>]]></content:encoded></item><item><title><![CDATA[AI Agents Running the State]]></title><description><![CDATA[What could possibly go wrong?]]></description><link>https://www.aipolicyperspectives.com/p/ai-agents-running-the-state</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/ai-agents-running-the-state</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Wed, 15 Apr 2026 09:50:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CymV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CymV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CymV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 424w, https://substackcdn.com/image/fetch/$s_!CymV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 848w, https://substackcdn.com/image/fetch/$s_!CymV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!CymV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CymV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png" width="1456" height="795" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:795,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8904267,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/194174723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CymV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 424w, https://substackcdn.com/image/fetch/$s_!CymV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 848w, https://substackcdn.com/image/fetch/$s_!CymV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!CymV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6880abf1-3bd3-4856-970a-6fd26eb0157e_2814x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Waiting for an AI helper. (Credit: Gemini)</figcaption></figure></div><div class="callout-block" data-callout="true"><p><em>&#8220;Public services&#8221; include everything from teachers to the trash, from roadwork to permission for a tree house. Much seems routine, but plenty is at stake. This makes politicians hesitant to risk an overhaul, leaving the system creaking and the paperwork mounting. </em></p><p><em>Last October, a provocative proposal emerged. <a href="https://agenticstate.org/">The Agentic State</a> conjured a vision of officialdom transformed, replacing outdated procedures with a new system of AI helpers. This fledgling project offers both a blueprint and a promise of assistance to governments around the world.</em></p><p><em>But what if the vision were blind to how this could go awry? 
<a href="https://simoneparazzoli.me/">Simone Maria Parazzoli</a>, a co-author of the paper, and <a href="https://www.linkedin.com/in/omerhanbilgin/">Omer Bilgin</a> of <a href="http://www.deliberaide.com">deliberAIde</a> decided to critique their own ideas, seeking pitfalls in hopes of averting them.</em></p><p style="text-align: right;">&#8212;Tom Rachman, <em>AI Policy Perspectives</em></p></div><div><hr></div><h4><strong>By Simone Maria Parazzoli &amp; Omer Bilgin</strong></h4><p></p><p><strong>Amid the exhaustion of caring for a baby, new parents must deal with everything from bewildering sobs, to erratic feeding times, to the joys of changing a soiled newborn at 3 a.m. The last thing they need is paperwork.</strong></p><p>But what if, when coming home from the maternity ward that first day, they could awaken a government AI voice assistant, tell it the happy news, and hear the following response? &#8220;Congratulations! What&#8217;s the baby called?&#8221; The app would then take care of all the dreary admin, coordinating across agencies, registering the child, and setting in motion the services that this tiny new citizen should enjoy.</p><p>That is one example of how a future &#8220;agentic state&#8221; could simplify, speed up, and improve citizens&#8217; interactions with public services. To be clear, this does not yet exist. But projects like this one, <a href="https://oxfordinsights.com/insights/innovation-under-tough-circumstances-ukraines-ai-strategy-in-times-of-war/">envisioned</a> by Ukrainian officials, are more than fantasy, with several countries avidly testing early versions of agentic AI systems.</p><p>While Ukraine works toward the baby example, <a href="https://www.gov.uk/government/news/ai-helpers-could-coach-people-into-careers-and-help-them-move-home">Britain</a> is piloting agent-based support to provide citizens more tailored help. 
Meanwhile, <a href="https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai">Singapore</a> is developing governance frameworks for agentic AI, and governments from <a href="https://guides.data.gouv.fr/intelligence-artificielle/le-serveur-mcp-de-data.gouv.fr">France</a> to the <a href="https://www.govinfo.gov/features/mcp-public-preview">United States</a> are ensuring that their public data can be accessed by agents.</p><p>Agentic AI systems&#8212;capable of perceiving, reasoning, and acting with minimal human supervision&#8212;will transform what organizations can achieve. By combining the reasoning of large language models with retrieval, memory, and tool use, agentic AI can automate complex tasks. For governments, whose core work is high-volume, structured administrative processes, this could make services more efficient, timely, consistent, and fair, while lowering costs.</p><p>Consider a citizen looking to start a small business. An agentic system&#8212;instead of requiring the entrepreneur to individually navigate zoning boards, tax authorities, and regulations&#8212;could autonomously reconcile these requirements. The larger promise is a shift from just <em>doing things right</em> (optimizing for procedure-following) to <em>doing the right things</em> (pursuing outcomes that citizens truly want).</p><p>The <a href="https://agenticstate.org/">Agentic State</a> vision paper&#8212;supported by The World Bank and the Global Government Technology Centre Berlin&#8212;was the first effort to systematically map the opportunities of agentic AI adoption for governments. 
This was not an academic exercise: 21 leaders across 15 countries contributed, including ministers and chief technology officers preparing to lead this transition.</p><p>In this vision, AI agents are a means to manage <em>complexity</em> and <em>scale</em>, while humans develop <em>strategy</em>, exercise <em>judgment</em>, and hold <em>accountability</em>.</p><p>Several governments have integrated official chatbots into their public services, but most of these merely provide conversational guides to administrative procedures. A few pioneering countries are starting to move beyond that. Ukraine, for instance, is turning chatbots into agentic assistants. Specifically, its Diia.AI assistant can retrieve users&#8217; data from connected registries and generate official documents such as income certificates, while also providing certified information drawn from tax, land, and pension records.</p><p>The United Kingdom is also exploring agentic interactions via <a href="https://insidegovuk.blog.gov.uk/2025/12/16/gov-uk-has-entered-the-chat-our-vision-for-gov-uk-chat/">GOV.UK Chat</a> (inspired by Diia.AI), including a pilot program to support job seekers that transforms a static digital portal into an active assistant, matching users&#8217; skills with available opportunities.</p><p>Yet trends and optimism are not enough for success. The agentic state vision rests on key assumptions. 
What if they&#8217;re wrong?</p><p>This article presents a &#8220;red-teaming&#8221; exercise&#8212;a stress test of this vision&#8212;that identifies six core assumptions, along with scenarios that could emerge if they don&#8217;t hold true, and guardrails to avert such failures.</p><div><hr></div><div class="callout-block" data-callout="true"><h4><strong>Assumption 1: </strong><em><strong>AI Agents Become More Capable and Reliable</strong></em></h4></div><p>Agents can already perform rudimentary planning, tool use (e.g., searching the internet, using calculators, sending emails), and multistep task execution. Frontier labs are <a href="https://www.technologyreview.com/2025/01/11/1109909/anthropics-chief-scientist-on-5-ways-agents-will-be-even-better-in-2025/">betting</a> <a href="https://blog.samaltman.com/reflections">heavily</a> on agents, making it plausible that systems capable of managing complex and large-scale administrative tasks will emerge soon.</p><h4><strong>Failure Scenario: </strong><em><strong>The Technology Falters</strong></em></h4><p>Governments reorganize around agentic execution, but systems never become reliable enough for public administration. The demos look strong, but real cases fail on edge conditions, and require constant human correction. 
The agentic layer becomes only superficially competent with layers of human intervention underneath.</p><h4><strong>Guardrail: </strong><em><strong>Start Cautiously</strong></em></h4><p>Governments should start with minimal deployments and tightly scoped use cases to validate reliability, develop procedural rigor and organizational competence, and account for technological evolution, rather than committing prematurely to large-scale redesigns.</p><div class="callout-block" data-callout="true"><h4><strong>Assumption 2: </strong><em><strong>Agents Can Work Together</strong></em></h4></div><p>The success of agentic systems demands that they&#8217;re able to interact seamlessly, conveying intent, carrying out tasks, and sharing data in an interoperable way. <a href="https://modelcontextprotocol.io/docs/getting-started/intro">MCP</a> (Model Context Protocol) is emerging as the technological standard for connecting AI applications with external systems. </p><h4><strong>Failure Scenario: </strong><em><strong>Standards Fail to Converge</strong></em><strong> </strong></h4><p>Commercial interests diverge, establishing competing protocols, while government departments end up using AI systems that cannot communicate with one another. When a citizen&#8217;s request requires action from multiple agencies, the process breaks down. </p><h4><strong>Guardrail: </strong><em><strong>Officials Insist on Shared Protocols</strong></em></h4><p>Governments should make interoperability a condition of adoption, participating in the cross-sectoral <a href="https://aaif.io/">bodies</a> and forums where these standards are being shaped, funding the development of shared agentic interfaces and other agent-specific standards, and mandating non-proprietary protocols in procurement. 
<a href="https://www.aipolicyperspectives.com/p/the-past-and-future-of-ai-standards">Standards</a> rarely emerge by accident, but they can when powerful governments treat them as a priority.</p><div class="callout-block" data-callout="true"><h4><strong>Assumption 3: </strong><em><strong>Organizations Will Adapt</strong></em></h4></div><p>To adopt and employ agents effectively, organizations must rethink their processes, roles, and incentives. They need to adapt their practices dynamically to keep pace with a changing technological landscape.</p><h4><strong>Failure Scenario: </strong><em><strong>The Status Quo Prevents Change</strong></em></h4><p>Agentic AI adoption outpaces organizational change, with citizens and civil servants using agents in an uncoordinated manner long before official programs catch up. Local practices harden into path dependence before common standards emerge. The state becomes more productive at producing bureaucracy, not societally beneficial outcomes.</p><h4><strong>Guardrail: </strong><em><strong>Redesign Processes Before Automating Them</strong></em></h4><p>Agents should only enter workflows that have been simplified, decomposed, and restructured to minimize approval layers and handovers. Governments must treat adoption as a continuous discovery process. They should invest in common evaluation templates, reusable components, and a cross-agency repository of lessons, so that what works in one place can travel before what does <em>not</em> work becomes entrenched. </p><div class="callout-block" data-callout="true"><h4><strong>Assumption 4: </strong><em><strong>Private Adoption of Agentic AI Will Be Rapid</strong> </em></h4></div><p>Many companies are <a href="https://sloanreview.mit.edu/projects/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai/">betting</a> on an agentic future. 
Firms are experimenting with internal copilots and autonomous customer flows, while frontier AI companies advance core models, architectures, and capabilities, and cloud providers offer the compute needed to deploy agents at scale. This suggests that agents will become commonplace across business, consumer, and enterprise environments, allowing governments to build on tools, infrastructure, and behaviors already spreading across the economy. This assumption rests on projections, though <a href="https://www.aipolicyperspectives.com/p/predicting-ais-impact-on-jobs">evidence</a> remains ambiguous.</p><h4><strong>Failure Scenario: </strong><em><strong>Diffusion Is Slower Than Forecast</strong></em></h4><p>Governments invest as if an agent-saturated economy is imminent, but industry adoption remains narrow, experimental, or ends up costing more than it saves. Public investments don&#8217;t plug into widely used tools and practices, meaning that citizens find agentic interfaces in government before they&#8217;re normal elsewhere. 
The state ends up bearing political and institutional costs without the stabilizing effects of private-sector diffusion.</p><h4><strong>Guardrail: </strong><em><strong>Lower Barriers to Private-Sector Agentic Usage</strong></em></h4><p>Governments can accelerate the development of an agentic AI ecosystem by investing in shared agentic infrastructure&#8212;such as standard ways to access public data, communicate across systems, and carry out authorized tasks and payments&#8212;that lowers integration costs for firms and reduces the risk of differing technological maturity across sectors.</p><div class="callout-block" data-callout="true"><h4><strong>Assumption 5: </strong><em><strong>Citizens Will Prefer Agentic Services</strong></em></h4></div><p>Increasingly, citizens are interacting with and relying on AI tools, but <a href="https://mbs.edu/-/media/PDF/Research/Trust_in_AI_Report.pdf?rev=0ee82285b2b0439bba524dbddc58214a">many do not trust them</a>. For governments to integrate AI agents into workflows and services, citizens must accept and support the roles that agentic systems can play, finding them sufficiently trustworthy, reliable, fair, convenient, and accountable.</p><h4><strong>Failure Scenario: </strong><em><strong>The Public Rejects Automation</strong></em></h4><p>A single notable failure, or an accumulation of failures, turns the public against agentic systems and convinces many to opt out. They judge automated decisions as opaque, illegitimate, and untrustworthy, and suspect that automation worsens <a href="https://arxiv.org/abs/2510.16853">inequality</a>, with privileged citizens able to employ highly capable personal agents to navigate bureaucracy better than those relying on basic tools. 
The government is forced to run two systems&#8212;agentic and human&#8212;and neither meets expectations.</p><h4><strong>Guardrail: </strong><em><strong>Mandate Transparency</strong></em></h4><p>Governments must make agent integrations into government processes as legible as possible, furnishing explanations of decisions and publishing evaluation results on agents&#8217; fairness and performance, while detecting patterns of systemic bias or unequal benefit distribution based on citizens&#8217; technological access.</p><div class="callout-block" data-callout="true"><h4><strong>Assumption 6: </strong><em><strong>Human Oversight Will Evolve</strong></em></h4></div><p>For AI agents to act with functional autonomy within government processes, oversight frameworks <a href="https://arxiv.org/pdf/2506.04836">must adapt</a>, moving away from mandatory human reviews and approvals for everything (human-in-the-loop) to intermittent oversight (<a href="https://link.springer.com/rwe/10.1007/978-981-97-8440-0_75-1">human-on-the-loop</a>). This evolution increases speed and efficiency while reducing bottlenecks, with humans intervening only on edge cases. There is precedent for such adaptation: governments updated regulation for cloud computing, e-identities, and AI-driven decision support systems.</p><h4><strong>Failure Scenario: </strong><em><strong>Regulation Never Updates</strong></em></h4><p>Every agentic action requires human verification; every decision must be justified through mechanisms designed for old chains of accountability. Agents can draft, but cannot act. Compliance and procedural costs rise as institutions retrofit old controls onto new AI processes. 
The result is high bureaucracy and low autonomy: an <em>agentic state</em> in theory, a <em>copilot state</em> in practice.</p><h4><strong>Guardrail: </strong><em><strong>Sandboxes to Test Oversight</strong></em></h4><p>Governments should establish controlled environments that allow policymakers, developers, and civil society to collaborate and gather empirical evidence on what forms of oversight are adequate and best fit different kinds of agentic deployments, reducing uncertainty before codifying rules at scale. They should explore this early, much as Singapore has done through its <a href="https://www.imda.gov.sg/-/media/imda/files/about/emerging-tech-and-research/artificial-intelligence/mgf-for-agentic-ai.pdf">Model AI Governance Framework for Agentic AI</a>.</p><p><strong>Soon, agentic government will be more than optimism and testing.</strong> A vanguard of countries will implement these tools. If those cases produce the kinds of benefits imagined, other countries will flock to join them. </p><p>But momentum is not inevitability. This project depends on assumptions&#8212;about progress, coordination, institutions, norms, and law&#8212;that demand scrutiny before governments rebuild themselves around these new technologies. </p><p>This red-teaming exercise is not meant to argue against the agentic state vision, but to make it more robust and resilient. The six possible failure scenarios are not mutually exclusive. Several could compound, and some may already be taking shape. 
For instance, reliability has been improving <a href="https://arxiv.org/html/2602.16666v1">much more slowly</a> than accuracy, providing grounds for the technology to falter (Scenario 1), and there are <a href="https://www.adalovelaceinstitute.org/policy-briefing/great-expectations/">signals</a> that the public might reject automation if economic gains and innovation speed are prioritized over fairness (Scenario 5). </p><p>Governments that are serious about improving the state with AI must attend to these risks in earnest now, while the architecture is still being designed. The opportunity is too precious to spurn. </p><p>Agentic AI could make public services considerably faster, fairer, and more responsive&#8212;more so than anything the traditional bureaucratic model has yet delivered. That prize is worth the discipline of preparing for what could go wrong.</p><p><em>For further details on &#8220;The Agentic State,&#8221; check out the original <a href="https://agenticstate.org/paper.html">vision paper</a></em> </p>]]></content:encoded></item><item><title><![CDATA[AI Policy Primer (#24)]]></title><description><![CDATA[Identifying agents, self-improvement, and artificial clouds]]></description><link>https://www.aipolicyperspectives.com/p/ai-policy-primer-24</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/ai-policy-primer-24</guid><dc:creator><![CDATA[Conor Griffin]]></dc:creator><pubDate>Thu, 09 Apr 2026 14:50:28 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!CLkj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Every six weeks, we round up three papers that we think AI policy folks should be reading. In this edition, we look at a <a href="https://arxiv.org/abs/2603.10028">proposal</a> for how to identify the agents that will soon fill the economy; <a href="https://cset.georgetown.edu/publication/when-ai-builds-ai/">research</a> on the prospect of self-improving AI; and<a href="https://arxiv.org/pdf/2603.06909"> new insights</a> about how to use AI to prevent contrails, or artificial clouds, from warming the planet. </em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CLkj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CLkj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png 424w, https://substackcdn.com/image/fetch/$s_!CLkj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png 848w, https://substackcdn.com/image/fetch/$s_!CLkj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png 1272w, 
https://substackcdn.com/image/fetch/$s_!CLkj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CLkj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2365898,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/193691288?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CLkj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png 424w, https://substackcdn.com/image/fetch/$s_!CLkj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png 848w, 
https://substackcdn.com/image/fetch/$s_!CLkj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png 1272w, https://substackcdn.com/image/fetch/$s_!CLkj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F879e6456-0c9d-4813-a7b9-1fdc297b6a23_8000x4500.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>1. 
Identifying (and incentivising) AI agents</h2><ul><li><p><strong>What happened: </strong>A trio of law and philosophy professors considered how to identify who (or what) is responsible for AI agents&#8217; actions in the world, and came up with a two-part <a href="https://arxiv.org/abs/2603.10028">proposal</a>: that the disparate and evolving agents within a system should exist legally as a new form of corporation; and that each corporation should link to accountable humans.</p></li><li><p><strong>What&#8217;s interesting: </strong>The paper by <a href="https://law.ua.edu/faculty_staff/yonathan-arbel/">Yonathan Arbel</a>, <a href="https://law.ua.edu/faculty_staff/yonathan-arbel/">Simon Goldstein</a>, and <a href="https://www.law.uh.edu/faculty/main.asp?PID=6428">Peter N. Salib</a> starts with a thought experiment. It&#8217;s 2030, and your AI assistant offers to optimize your slow WiFi connection. After you agree, it spawns a swarm of agents. Some are copies, while others are cheaper agents running on open-source models. Some start to interface with AI agents from other companies. Three months later, two FBI agents knock on your door and explain that your network has been piggybacking on a local defense contractor&#8217;s WiFi network.</p></li><li><p>Before determining who is responsible and what the repercussions should be, there are more basic questions: Who are the AI actors in this story? How many are there?</p></li><li><p><a href="https://www.aipolicyperspectives.com/p/an-agents-economy">The economy will soon be filled with capable AI agents</a>. To deter and respond to such harms, the authors argue that we need to be able to identify these agents at two levels.</p><ul><li><p>To prevent human misuse or negligence, we need &#8216;<strong>thin identity&#8217;</strong>. 
This would connect AI agents to the humans most able to control them, similar to how &#8216;know-your-customer&#8217; rules tie banking transactions to humans.</p></li><li><p>Humans will be unable to monitor and control every AI decision, so we also need to be able to identify agents themselves, hold them accountable, and incentivize them to behave well. To do so, we need &#8216;<strong>thick identity&#8217; </strong>that can distinguish AI agents as stable, coherent entities with persistent goals. This goal is pragmatic and does not require viewing AIs as conscious in any sense.</p></li></ul></li><li><p><em>Thickly </em>identifying agents is harder and more novel, as AI agents need not be attached to a physical body. Multiple agents can also work together on a single task. Any single agent can be copied, spun up, spun down, or continually updated.</p></li><li><p>To address such challenges, the authors propose creating algorithmic corporations, or &#8216;A-corps&#8217;. These would have two key elements:</p><ul><li><p><strong>Legal personhood: </strong>Like a traditional corporation, an A-corp would be a single legal entity that persists over time. It could hold property, make contracts, and be sued. But it would be run by a collection of AI agents. As such, the proposal runs contrary to scholars who have argued <a href="https://arxiv.org/pdf/2502.18359">against</a> granting legal personhood to AI agents, or called for <a href="https://openscholarship.wustl.edu/law_lawreview/vol95/iss4/7/#:~:text=This%20Article%20argues%20that%20algorithmic,which%20have%20non%2Dhuman%20controllers.">bans</a> on algorithms running companies because of concerns about crime and companies using them to avoid liability.</p></li><li><p><strong>Computationally-secure governance: </strong>Each A-corp would have a unique digital certificate and a secure private key to authorise transactions. 
The humans who own each A-corp could grant the key to an AI &#8216;manager&#8217; agent, who in turn could grant more limited permissions to sub-agents within the A-corp, or to other A-corps, such as permissions to spend up to $100 or to read a batch of emails.</p></li></ul></li><li><p>The proposal addresses thin identity by reducing the vast number of AI agents down to a smaller number of A-corps, whose actions are traceable back to their human owners. As with limited liability companies (LLCs), the human owners would not be responsible for <em>all </em>harm their A-corps cause, but could lose all funds they invest and possibly face further liability, for example in cases of fraud or negligence.</p></li><li><p>The proposal addresses thick identity via its &#8216;resource constraint thesis&#8217;. All AI agents need resources, like money and compute. A-corps provide AIs with a way to access these resources and an incentive to manage them well. For example, A-corps that tightly monitor and audit their sub-agents&#8217; performance would get more resources, while A-corps that allow fraud or waste would lose resources. This encourages A-corps to self-organise into stable, coherent multi-agent systems.</p></li><li><p>The authors argue that A-corps could also address alignment concerns, for example by reducing the incentive for an AI agent to exfiltrate its own weights, because that new AI instance would lose access to resources and permissions from the A-corp.</p></li><li><p>To make it happen, the authors call for a public registry of A-corps. This would list each A-corp&#8217;s human owners, the certificates to authenticate it against, as well as (potentially) the differing permissions enjoyed by its agents. 
Ultimately, the authors argue that A-corps should become mandatory for any AI agent taking &#8220;economically significant actions&#8221;, partly to guard against criminals using AI agents anonymously.</p></li><li><p>The authors respond to some expected pushback. They do not see A-corps as anthropomorphising AI because the proposal does not require anybody to view agents as having deeper desires or wants. They also think A-corps can prevent the risk that AI agents might slowly build up resources before deploying them for harm, by encouraging inter-agent trade that penalises rogue behavior. Could A-corps disempower humans? The authors argue that they provide a pathway to tax and redistribution, and enable humans to better steer agents, for example by designating the parts of the economy that A-corps are permitted to operate in.</p></li></ul><h2>2. 
When AI builds AI</h2><ul><li><p><strong>What happened: </strong>The Centre for Security and Emerging Technology, CSET, released <a href="https://cset.georgetown.edu/publication/when-ai-builds-ai/">a report</a> on the prospects for AI improving itself, known as automated R&amp;D or recursive self-improvement, based on an expert workshop in July 2025.</p></li><li><p><strong>What&#8217;s interesting: </strong>In 1965, the computer scientist I.J. Good wrote about the possibility of an &#8220;intelligence explosion&#8221; that would leave &#8220;the intelligence of man&#8230;far behind&#8221;. Researchers have also long automated aspects of writing code and AI model design.</p></li><li><p>However, the speed of AI coding advances suggests that something qualitatively different may soon occur. This makes two questions salient: 1. Could AI automate the <em>entire </em>AI R&amp;D process? 2. Will this R&amp;D automation extend across all scientific disciplines? The CSET report focuses on the first question.</p></li><li><p>CSET defines AI R&amp;D by distinguishing between <em>research scientists, </em>who generate hypotheses, design experiments and interpret results; and <em>research engineers, </em>who write code, fix bugs and generate data. They also note the inputs that AI R&amp;D relies on, such as raising funds and acquiring compute.</p></li><li><p>They sketch out four overlapping scenarios for how AI R&amp;D may play out:</p><ul><li><p><strong>1. Explosion: </strong>AI systems automate a growing share of AI R&amp;D. Initially, this leads to modest productivity gains, but as the length and complexity of tasks that AI performs grows, productivity soars. AI systems become far more capable than humans, whose involvement in AI R&amp;D falls to zero.</p></li><li><p><strong>2. Fizzle: </strong>The share of R&amp;D tasks done by AI rises, but rather than leading to compounding improvements, capabilities start to plateau.</p></li><li><p><strong>3. 
Amdahl&#8217;s Law: </strong>AI automates certain activities, like writing code and running experiments, but not others, like research strategy.</p></li><li><p><strong>4. The expanding pie: </strong>As AI automation grows, humans realise that new ideas and breakthroughs are needed that AI systems cannot yet provide.</p></li></ul></li><li><p>The experts in CSET&#8217;s workshop held widely diverging views on which scenario was most likely. Most importantly, new empirical data is unlikely to resolve these conflicts, because participants may view the same data as confirming their own assumptions.</p><ul><li><p>For example, an AI system&#8217;s inability to reliably use a keyboard or mouse may look like a bottleneck to one expert, but a source of explosive growth to another&#8212;if they expect this human-focussed tooling to get adapted for the AI era. Similarly, different experts may view AI automating a growing share of R&amp;D tasks as progress towards a fast takeoff, or as low-hanging fruit being picked off, accelerating progress only as far as the upcoming wall.</p></li></ul></li><li><p>These differing views are also visible in more recent commentary on the topic.</p><ul><li><p>The prominent AI researcher and writer Nathan Lambert recently <a href="https://www.interconnects.ai/p/lossy-self-improvement">cited</a> Paul Allen&#8217;s concept of a &#8216;complexity brake&#8217; to argue that as we understand intelligence better, further progress becomes exponentially harder. 
Lambert also argued that, beyond their financial costs, running suites of AI agents won&#8217;t necessarily lead to exponential progress, because those agents will perform best on narrow, verifiable tasks, will be hard to manage in large numbers, and will sample from similar parts of the distribution of AI research ideas, inhibiting more novel breakthroughs.</p></li><li><p>Conversely, Ajeya Cotra at METR, the Model Evaluation and Threat Research organisation, recently wrote about how she &#8220;<a href="https://www.planned-obsolescence.org/p/i-underestimated-ai-capabilities?utm_source=substack&amp;utm_medium=email">underestimated AI capabilities (again)</a>&#8221;. She argued that AIs may, counterintuitively, find it easier to decompose <a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/">longer projects</a> into sub-components that multiple agents can run in parallel than to do the same for shorter tasks. AIs will also produce good documentation for their fellow AIs, which could accelerate progress.</p></li></ul></li><li><p>If faster automation and progress do occur, the CSET authors see two main risks: less time to prepare for safety risks from AI, and lower human understanding of AI systems. 
To address these risks, their recommendations have a strong focus on improving access to evidence, including:</p><ul><li><p><strong>New evaluations of AI R&amp;D, </strong>including for &#8216;<a href="https://arxiv.org/pdf/2503.14499">messy</a>&#8217; tasks such as research strategy, which lack clear specifications and success criteria and take place in a dynamic environment with various real-world interactions.</p></li><li><p><strong>New approaches to evaluation</strong> to better distinguish &#8216;degrees of accomplishment&#8217; from a simple success/failure binary.</p></li><li><p><strong>Better insights into how automated R&amp;D is progressing within AI labs,</strong> such as data on how funding is allocated and qualitative impressions of progress from leading AI researchers and engineers.</p></li></ul></li></ul><h2>3. 
Planes and global warming</h2><ul><li><p><strong>What happened: </strong>A team of researchers, including from Google and American Airlines, published <a href="https://arxiv.org/pdf/2603.06909">results</a> from their latest experiment to use AI to reduce condensation trails from planes&#8212;a key contributor to global warming.</p></li><li><p><strong>What&#8217;s interesting: </strong>When pilots fly, particles from the plane&#8217;s exhaust can mix with low-pressure air to form <em>contrails</em>&#8212;white, artificial clouds, made up of ice crystals. These contrails are a net contributor to global warming, because they trap heat that would otherwise escape. Debates continue over exactly how much they contribute, but one <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7468346/">estimate</a> suggests that they contribute a lot, causing around 2% of &#8216;radiative forcing&#8217;, which measures how different factors, like CO<sub>2</sub>, heat or cool the planet.</p></li><li><p>As the environmental writer Hannah Ritchie <a href="https://hannahritchie.substack.com/p/contrails-google-ai">explains</a>, more important than the absolute figure is the fact that contrails offer a rare opportunity to reduce global warming almost immediately, at relatively low cost. This is because a small share of flights cause most of the warming-inducing contrails&#8212;generally those that fly through parts of the atmosphere that are both very cold and very humid. If planes take short detours to avoid these patches of air, contrails (and warming) should drop.</p></li><li><p>A few years ago, Google researchers<a href="https://blog.google/innovation-and-ai/technology/ai/ai-airlines-contrails-climate-change/"> partnered with</a> American Airlines on a proof of concept. 
Using satellite imagery and AI, they were able to predict where contrails would emerge and guide planes to avoid them, reducing contrails by &gt;50% across 70 test flights.</p></li><li><p>In the latest <a href="https://arxiv.org/pdf/2603.06909">study,</a> they expanded the experiment to 2,400 American Airlines flights from the US to Europe. They placed ~50% of planes in a treatment group, where flight dispatchers were given two choices: a standard flight plan and an alternative contrail-avoidance one. Which plan to recommend was left to the dispatchers&#8217; discretion.</p></li><li><p>For flights in this intervention group, contrails fell by 12% compared to a control group with no contrail-avoidance plan. Importantly, the contrail-avoidance routes also did not lead to a significant increase in fuel use. At first glance, these results seem positive, but modest. Digging into the results highlights the challenge of getting useful AI deployed at scale.</p></li><li><p>In particular, dispatchers who received contrail avoidance plans only recommended them to pilots 15% of the time. Even then, the avoidance plan was only <em>successfully</em> flown in 60% of flights. For planes that did successfully follow the avoidance plan, contrails fell by more than 60%, a much larger reduction. So the tech worked, but was often not used.</p></li><li><p>Why? Dispatchers are busy and must often deal with other priorities, like bad weather and turbulence. To avoid contrails, planes also need to climb and descend mid-flight. This is safe, but creates more work for pilots and air traffic controllers. As it was voluntary, the incentive to change to a contrail-avoidance plan was weak.</p></li><li><p>The way that the dispatchers received the information also meant that they didn&#8217;t fully understand <em>why</em> the suggested climbs and descents were necessary. 
Happily, the authors feel that most of these obstacles are addressable, with a combination of a better user interface, some automation, and more incentives.</p></li><li><p>In addition to its immediate usefulness, the study is a rare real-world attempt to quantify the benefits of AI in tackling global warming. At the moment, the AI and climate change policy discussion is often negative and focuses on the emissions that may result from building and operating data centres (and other devices) to train and run AI models. This is important, but there are reasons to think that these emissions will be <a href="https://blog.andymasley.com/p/individual-ai-use-is-not-bad-for?open=false#%C2%A7emissions">relatively low</a>, or at least lower than many assume. In contrast, AI could potentially reduce emissions and warming by far larger amounts, for example by accelerating research on solar and fusion power, or making buildings and energy grids more efficient. But these benefits are typically more speculative, harder to quantify, or in the case of contrails, more <em>contingent </em>on human behaviour.</p></li><li><p>This experiment demonstrates that the benefits of AI in tackling global warming are real, but also points to the interventions that will be needed to push them to their full potential. 
The study is also timely, given that governments <a href="https://assets.publishing.service.gov.uk/media/69b83baacf4af9cad362b4e7/jet-zero-taskforce-contrail-impact-mitigation-task-and-finish-group-a-strategic-framework-for-uk-contrail-impact-mitigation.pdf">are focussing</a> on contrail avoidance and some policy action may be required, for example to help standardise and mandate contrail prediction software or to generate high-resolution humidity data.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[How Tech Changed Chess]]></title><description><![CDATA[And why AI won&#8217;t end our games]]></description><link>https://www.aipolicyperspectives.com/p/how-tech-changed-chess</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/how-tech-changed-chess</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Wed, 25 Mar 2026 10:22:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CdTX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!CdTX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CdTX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 424w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 848w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 1272w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CdTX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png" width="1024" height="572" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f42add45-d564-4031-a602-e342e4b5c090_1024x572.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:572,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CdTX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 424w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 848w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 1272w, https://substackcdn.com/image/fetch/$s_!CdTX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff42add45-d564-4031-a602-e342e4b5c090_1024x572.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Credit: Gemini</figcaption></figure></div><p><em>From childhood upwards, we play games as a safe (and strangely joyful) way to battle, strategize, even lose without it coming to fisticuffs. Artificial intelligence grew up playing games too, with developers using the structured rules, scoring systems, and win/loss outcomes to train machines to learn, to improve, even to beat us.</em></p><p><em>In chess, bots have been bettering humans for years now. Yet our &#8220;loser&#8221; species still gathers at sunny park tables, in dank school gyms, and online in droves, all in hopes of crying, &#8220;Checkmate!&#8221; The resilience of chess is commonly cited as evidence that&#8212;even if AI surpasses us in various pursuits&#8212;humans won&#8217;t just give up.</em></p><p><em>However, there&#8217;s more to say about the intersection of technology and chess, in particular how the game has evolved with technology, including AI. 
Thankfully, the broadcaster and writer <a href="https://www.aipolicyperspectives.com/p/whats-it-like-to-be-a-bot">David Edmonds</a>&#8212;co-author of </em>Bobby Fischer Goes to War <em>(2004) and editor of the essay collection </em>AI Morality<em> (2024)&#8212;has spent decades observing this, both as a spectator and behind the board himself.</em></p><p style="text-align: right;"><em>&#8212;Tom Rachman, </em>AI Policy Perspectives</p><div><hr></div><p><strong>By DAVID EDMONDS</strong></p><p><strong>Among thousands of tournament games cited in the Batsford book of chess openings, tucked into the top right-hand column of Page 235, is an example of how white should </strong><em><strong>not </strong></em><strong>play.</strong></p><p>Explaining the Closed Sicilian Defense opening, the authors (former world champion Garry Kasparov and the British grandmaster and chess columnist Raymond Keene) spotlight a game in which black is already ahead as early as move 11. Indeed, the player with the white pieces ended up losing. I remember because that player was me.</p><p>That is my humiliating contribution to chess theory: what not to do. The book was published in 1982, and I&#8217;ve barely picked up a pawn in anger in the intervening four decades. But I still follow the chess world, and if there&#8217;s a tournament in London, I&#8217;ll go to watch, spending hours absorbed in the intricacies of the 64 squares.</p><p>As the digital revolution and AI juggernaut move through our lives, we may wonder whether there will still be domains in which humans can continue to find enjoyment and meaning. Chess offers a hopeful case study.</p><p>Chess and AI have had a long relationship. The great forefather of artificial intelligence Alan Turing wrote the <a href="https://www.chess.com/blog/the_real_greco/the-original-chess-engine-alan-turings-turochamp">first chess algorithm</a> in 1948. 
The following year, another seminal figure, Claude Shannon, distinguished two ways that a computer could play chess: by brute force, calculating every possible move; or by selective search, like a human.</p><p>Chess also proved a <a href="https://www.researchgate.net/publication/224834166_Is_chess_the_drosophila_artificial_intelligence_A_social_history_of_an_algorithm">favourite way</a> to evaluate AI advancement, both because many key innovators were keen players and because the game&#8217;s mathematical structure and its win/loss conditions created benchmarks for comparing machine progress to human performance.</p><p>A longstanding goal&#8212;seemingly impossible at first&#8212;was to outclass the best humans in a game that has near-infinite <a href="https://en.wikipedia.org/wiki/Shannon_number">permutations</a>. Defeating humans at chess became the programmers&#8217; ultimate challenge, like runners seeking to break the four-minute mile or climbers reaching the summit of Mount Everest, both of which proved easier. Finally, in 1997, IBM&#8217;s Deep Blue vanquished Kasparov, the then-reigning world champion. A dejected Kasparov insinuated that there had been human intervention.</p><p>For a while, chess players comforted themselves with the thought that a hybrid combination of human and machine could outwit machine alone. That period has long passed. Today&#8217;s best player, Magnus Carlsen, would be trounced were he to compete in a series of games with my mobile phone.</p><p>In 2017, DeepMind&#8217;s <a href="https://deepmind.google/blog/alphazero-shedding-new-light-on-chess-shogi-and-go/">AlphaZero</a> took machine chess to the next level. While Deep Blue had relied on brute strength with some input from strong humans, AlphaZero was simply programmed with the basic rules, and then trained itself through reinforcement learning. 
In its learning phase, it played tens of millions of games against itself in just a few hours, then crushed the chess engine Stockfish. (Stockfish adapted its methods accordingly, and is now the leading chess engine.)</p><p>World chess champions of the past exuded an aura. Their talents seemed mysterious, supernatural. In part, that&#8217;s because few people, then and now, can comprehend the depth of thought that elite players achieve at the board. When it comes to music, we may never compose like Mahler, but we can appreciate Mahler&#8217;s symphonies. By contrast, we can neither play like Magnus Carlsen nor fully appreciate his games. It&#8217;s for this reason that the Armenian-born grandmaster Lev Aronian once <a href="https://www.prospectmagazine.co.uk/essays/53494/the-lion-and-the-tiger">confessed to me</a> that being one of the world&#8217;s top players was desperately lonely.</p><p>Carlsen has achieved the highest rating of any human in history. And, no surprise, he strikes a confident pose. Yet his strut no longer carries complete conviction. 
To spectators armed with portable chess engines, the chess gods have been humbled.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JeWo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JeWo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 424w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 848w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 1272w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JeWo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png" width="728" height="555" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:555,&quot;width&quot;:728,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JeWo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 424w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 848w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 1272w, https://substackcdn.com/image/fetch/$s_!JeWo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81aa1b0b-fd78-4ab5-abda-a14f2c0fd43d_728x555.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The chess prodigy Samuel Reshevsky playing simultaneous games in 1920 against a variety of whiskery Parisians. Aged 8, he beat them all. (Credit: Creative Commons)</figcaption></figure></div><p>Even so, chess has not dwindled in popularity. On the contrary, more people are playing it than ever. The game received a boost during Covid, when we all hunkered down in our homes, connected by the Internet. Another boost came from the hit Netflix drama, <em><a href="https://en.wikipedia.org/wiki/The_Queen%27s_Gambit_(miniseries)">The Queen&#8217;s Gambit</a></em>. Meanwhile, a younger generation of telegenic chess masters has gained avid YouTube followings, turning commentary and stunts into short-clip entertainment.</p><p>Here are 11 ways that technology has changed chess. The 11th is the most interesting:</p><ol><li><p><strong>Opening Preparation.</strong> The systematic study of chess openings goes back a couple of centuries or more. 
Sequences of opening moves were mapped out&#8212;as in that 1982 book that included my embarrassing loss. But chess engines allow for a depth of opening analysis that was inconceivable in 1982. This means that 25 moves may pass before grandmasters find themselves in unfamiliar territory nowadays. Some openings have also been resurrected because engines have shown the positions to be more survivable than previously recognized.</p></li><li><p><strong>Opponent Preparation</strong>. Even in amateur tournaments, players routinely prepare for opponents in an individually tailored way. This is made possible because the games of each opponent are available online.</p></li><li><p><strong>Connectivity.</strong> Fancy a game? There are endless online adversaries willing to take you on, day and night, from India to Iceland, Cape Town to Chicago.</p></li><li><p><strong>No More Correspondence Chess.</strong> There was once a thriving chess scene in which games were played remotely over a long time period&#8212;months, sometimes years&#8212;with moves typically sent by post. How quaint.</p></li><li><p><strong>No More Adjournments</strong>. Historically, world championship games would sometimes stop after five hours to resume later. That can&#8217;t happen anymore, since players might simply identify the optimal continuation with the help of an engine. Time limits now ensure games finish within a single session.</p></li><li><p><strong>Shorter Games</strong>. Many in the online chess audience don&#8217;t have patience for lengthy games. For them, quicker time controls&#8212;Rapid (less than an hour); Blitz (3-5 minutes); or Bullet (under 3 minutes)&#8212;are more thrilling.</p></li><li><p><strong>Different Formats.</strong> Now that computers have shown with such depth which opening sequences are optimal, the early part of a game has been transformed into a feat of memory rather than creativity. 
As a result, Fischer Random (advocated early on by the ex-American world champion Bobby Fischer) has become increasingly popular. In Fischer Random, the starting position of the major pieces behind the pawns is randomized, making opening homework effectively impossible. It&#8217;s sometimes called Freestyle Chess, or Chess960 because there are 960 possible ways for the pieces to be shuffled.</p></li><li><p><strong>Job Generation.</strong> With a potential global audience, some players can now earn a decent living live-streaming their games, or offering online training.</p></li><li><p><strong>Roasting of Champions</strong>. This is an irksome development. Since chess engines assign an instant numerical evaluation of the position after each move (e.g. +1 means white is better by roughly one pawn), any patzer can see when a grandmaster has blundered, and is free to abuse them in online comments.</p></li><li><p><strong>Cheating</strong>. There have always been cheating accusations in chess. In 1978, the Soviet dissident Viktor Korchnoi claimed that the aides of his opponent, Anatoly Karpov, were using the flavour of the <a href="https://www.bbc.co.uk/sounds/play/w3cszmwf">yogurt</a> handed to Karpov to secretly convey messages. More recently, suspicion (tongue-in-cheek, but taken seriously by online trolls) has been raised of illicit advice being transmitted via <a href="https://www.bbc.co.uk/news/world-us-canada-66921563">vibrating sex toys</a>. In elite tournaments, grandmasters are now searched before they enter the playing arena, even accompanied to the toilet. Spectators, meanwhile, are prohibited from carrying phones, to prevent them signalling the best continuation. But in online games, cheating is almost impossible to prevent. Platforms try to detect cheats by comparing human moves to the recommendations of top engines. 
But if savvy cheaters consult an engine just once or twice in a game, they may win without being detected.</p></li></ol><p>And so to <strong>the 11th effect on chess: the expansion of human imagination</strong>.</p><p>In the last few years, there has been a slight but detectable shift in grandmaster play, as humans learn from machines, both through gameplay against bots and by using machine insights to prepare for human competition.</p><p>People who don&#8217;t play chess may imagine that what distinguishes strong from weak players is calculating power. And it&#8217;s true that top grandmasters can analyse many moves in advance. But their edge is tougher to articulate. It involves superior pattern recognition, with an intuitive sense for where their pieces should be placed and how a position should advance. Likewise, Mozart <em>felt</em> how a composition ought to develop; his instincts about building tension and creating contrasts were the product in part of having internalized countless musical patterns.</p><p>For chess players, some moves seem ugly. It might feel wrong to shunt a knight to the edge of the board, to break up a pawn structure, or to expose the king. But computers don&#8217;t <em>feel</em> anything. In chess, they care about patterns and the interplay between pieces only to the extent that they&#8217;re relevant to the ultimate objective: victory.</p><p>However, bots don&#8217;t necessarily play robotically. 
They produce moves that astonish and inspire human players, even make them <a href="https://youtu.be/CdFLEfRr3Qk?t=199">laugh</a> with surprise. One famous case of AI invention across the board came in another game, Go, when the <a href="https://deepmind.google/research/alphago/">AlphaGo</a> program was facing a top human player, and produced a move that caused professionals to gasp. &#8220;Move 37&#8221; is still cited with awe, as something a person would never have done, but that worked sublimely.</p><p>Likewise, chess engines regularly expand the imagination of human chess players, pushing beyond the habitual &#8220;correct&#8221; move they&#8217;ve seen many times before or have learned from books of chess theory. AI has even <a href="https://arxiv.org/abs/2510.23772">dabbled</a> in the art form of creating beautiful chess puzzles. And empirical studies <a href="https://www.pnas.org/doi/10.1073/pnas.2406675122">indicate</a> that leading players may pick up new ideas and strategies from machines.</p><p>Machines, in other words, can make humans more resourceful and inventive, breaking down rigid modes of thinking. The implausible becomes plausible. The readily dismissed becomes the carefully considered. This evolution of chess illustrates a broader idea in the development of AI that may prove immensely valuable in science and elsewhere in human endeavour: that how AIs think may help human experts learn <a href="https://arxiv.org/pdf/2502.07586">new ideas</a> themselves.</p><p>In his book <em>The Silicon Road to Chess Improvement</em>, the grandmaster Matthew Sadler argues that chess engines can improve every player, and he documents some of the counterintuitive patterns that humans could pick up from AI. By way of illustration, during a top tournament this January, the Indian grandmaster Arjun Erigaisi (playing against Vladimir Fedoseev of Russia) advanced his pawns in a way that looked reckless. 
In fact, computer analysis indicated he was still ahead after 28 moves. However, he blundered and lost. The danger of learning from a computer is that success may require you to proceed with computer-level accuracy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p><p>As AI undertakes more activities formerly done only by people, it&#8217;s worth asking why human chess persists&#8212;and will likely continue to do so.</p><p>A Canadian philosopher, Bernard Suits, pointed out in his 1978 book <em><a href="https://books.google.co.uk/books/about/The_Grasshopper.html?id=1LmESO3NBuoC&amp;redir_esc=y">The Grasshopper: Games, Life and Utopia</a></em> that what defines &#8220;games&#8221; is that they involve the voluntary attempt to overcome unnecessary obstacles. Therein lies a defence against AI encroachment. In a market economy, companies aim to remove or overcome obstacles in the pursuit of profit. In games, obstacles have been deliberately inserted as an indispensable feature. What we enjoy in playing chess is testing our cognitive abilities. 
What we enjoy in watching chess is two humans pitting their wits against each other in a socially constructed activity where difficulty enhances enjoyment and satisfaction.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!b-Lb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!b-Lb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 424w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 848w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 1272w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!b-Lb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png" width="1200" height="800" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!b-Lb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 424w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 848w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 1272w, https://substackcdn.com/image/fetch/$s_!b-Lb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aad9998-ae91-49a8-8154-a80018c3408d_1200x800.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Watching the big game. (Credit: Creative Commons)</figcaption></figure></div><p>There&#8217;s also a narrative element to caring about games. The contest&#8212;whether intellectual or physical&#8212;is absorbing precisely because it involves conscious creatures. In elite chess, there&#8217;s the backstory: the players&#8217; rise, their subsequent ups and downs, their history with specific opponents.</p><p>But watch an engine-against-engine tournament like TCEC (the <a href="https://en.wikipedia.org/wiki/Top_Chess_Engine_Championship">Top Chess Engine Championship</a>), and you&#8217;ll soon fall asleep. Computers aren&#8217;t competing after a divorce, or an illness, or the loss of a parent. Humans have character traits that spill onto the board, such as aggression (or passivity); patience (or impatience); equanimity (or volatility); and resilience (or fragility). Winning and losing have emotional resonance for a human&#8212;but not for AlphaZero.</p><p>It&#8217;s these qualities that guard against AI advance. 
AI might gobble up some of our jobs; even human-authored articles like this one may become rarer. But AI won&#8217;t take our chess.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/how-tech-changed-chess?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">It&#8217;s your move. Send this article to someone. </p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/how-tech-changed-chess?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/p/how-tech-changed-chess?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Past and Future of AI Standards]]></title><description><![CDATA[Lessons from history]]></description><link>https://www.aipolicyperspectives.com/p/the-past-and-future-of-ai-standards</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/the-past-and-future-of-ai-standards</guid><dc:creator><![CDATA[Conor Griffin]]></dc:creator><pubDate>Tue, 17 Mar 2026 10:17:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!sMbF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!sMbF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sMbF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sMbF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sMbF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sMbF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F172dd689-5753-481c-8307-76bc1ce3a7db_1408x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Gemini </figcaption></figure></div><p><strong>By Conor Griffin, Joslyn Barnhart &amp; Owen Larter</strong></p><p>In 1971, the marine archeologist Honor Frost heard news of wood protruding from the sea floor. Off the western coast of Sicily, she and her team donned scuba gear, and splashed into the shallow coastal waters. Wind whipped the surface, causing the underwater sand to swirl confusingly. But even in murk, they couldn&#8217;t miss it.</p><p>&#8220;A large timber (such as I had never seen before) emerged,&#8221; she <a href="https://artsandculture.google.com/story/the-discovery-of-the-marsala-punic-ship-honor-frost-foundation/2QWhIN7Uu9SK-Q?hl=en">recalled</a>, &#8220;like the head of a primeval animal crowned with weed; the presence of a buried wreck was evident.&#8221;</p><p>They excavated for months, gradually exposing the remains of a Carthaginian warship sunk more than 2,000 years before. 
Somehow, saltwater hadn&#8217;t eaten away letters painted on the wreckage, revealing a humble system that links antiquity to tomorrow.</p><p>Those shipwrights&#8217; marks told workers in ancient Carthage how to put together a vessel&#8212;akin to flat-pack furniture from IKEA, with numbered and lettered pieces. They were among the earliest surviving examples of a simple but potent tool in human progress: <strong>the technological standard</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ydHP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ydHP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 424w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 848w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 1272w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!ydHP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png" width="1456" height="817" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:817,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ydHP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 424w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 848w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 1272w, https://substackcdn.com/image/fetch/$s_!ydHP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68982780-63f6-4c22-a3f8-f4486f3ef21b_1600x898.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: The Honor Frost Archive (MS439), University of Southampton.</figcaption></figure></div><p>Many of history&#8217;s grand projects have benefited from standards, from Egypt&#8217;s pyramids, to Europe&#8217;s cathedrals, to Gutenberg&#8217;s press, to everyone&#8217;s Internet. You can even thank standards for the development of beer.</p><p>Underpinning technological standards is a plain truth: people thrive when able to cooperate, not when we must keep negotiating the basics, whether it&#8217;s a matter of nuclear safety, or a phone-charger cord, or who goes next at the intersection. So, the goal is order. 
And the benefits are that innovators can proceed without excessive obstacles, while everyone else is treated fairly and kept safe.</p><p>But what should standards mean for artificial intelligence? In particular, how can they guide the most advanced large language models and AI agents that could transform society?</p><p>Venture around the AI frontier today, and you&#8217;ll find ambition to accelerate AI for economic growth and transformative science alongside concern that AI could clatter into what humans cherish most. What few dispute is this: standards will help set the path.</p><p>Standards have critics too. One criticism is that companies dominate the process, prioritizing their own products or miming security without truly ensuring it. Besides this, standards can stir geopolitical tensions, as when Western countries fear China&#8217;s influence in laying the path to tomorrow, while smaller nations worry that standards may be set without considering them at all.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pzRg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pzRg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 424w, https://substackcdn.com/image/fetch/$s_!pzRg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 848w, 
https://substackcdn.com/image/fetch/$s_!pzRg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 1272w, https://substackcdn.com/image/fetch/$s_!pzRg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pzRg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png" width="1024" height="702" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:702,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pzRg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 424w, https://substackcdn.com/image/fetch/$s_!pzRg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 848w, 
https://substackcdn.com/image/fetch/$s_!pzRg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 1272w, https://substackcdn.com/image/fetch/$s_!pzRg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a502385-eeee-4a5c-8a1a-beff6c058f82_1024x702.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Library of Congress</figcaption></figure></div><p>So, as we&#8217;ll keep insisting, standards matter! 
Only, there&#8217;s a problem.</p><p>For some, the mere mention of &#8220;standards&#8221; prompts slumber. And even those determined to stay awake may find themselves puzzled, gazing at the alphabet soup of standards organizations and committee meetings.</p><p>Part of the problem is that standards are often technical, such as efforts to standardize the protocols needed for AI agents to communicate. Or they are bureaucratic, negotiated out of public view, with dense, jargon-filled documents that are often behind a paywall.</p><p>Complicating matters even more, artificial intelligence is a general-purpose technology less akin to a hammer than to electricity. This will lead to standards (plus standards initiatives that don&#8217;t take) on everything from AI agents, to AI cybersecurity, to AI content provenance, to product-specific standards for AI-as-a-medical-device, and so on. And that&#8217;s not even mentioning standards for future AI applications that nobody has yet considered.</p><p>In short, standards will be immense. Standards will be tough to comprehend. But standards will also be vastly important.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p><p></p><h3>CAN SOMEONE DEFINE STANDARDS, PLEASE?</h3><p>Standards are a diabolical blend: intricate, vague, and slippery.</p><p>They&#8217;re the invisible infrastructure of the modern world, according to Laurie Locascio, head of the American National Standards Institute, ANSI. 
She <a href="https://issues.org/who-sets-the-standard/?utm_campaign=34324386-Issues.org%20Newsletter&amp;utm_medium=email&amp;_hsenc=p2ANqtz-8wSj3J1qBqFPUcl9Fm0w3x0q01NijGR1b6Q05pK0a-5thOpaTOJHNf0vNbWaeBtAj68a6crdWY36mRoUGczN_ZIIpEaw&amp;_hsmi=404588886&amp;utm_content=404588886&amp;utm_source=hs_email">recounts</a> hearing an official at Boeing describe the airplane itself as &#8220;thousands of standards taking flight.&#8221; Standards are &#8220;the things you don&#8217;t think about,&#8221; Locascio says. &#8220;But oh, my God, you&#8217;re so glad they&#8217;re there.&#8221;</p><p>Expressed broadly, a standard defines the <em>how</em> of tech, whether it&#8217;s the default <em>product </em>specs that allow compatibility among manufacturers, or the formally endorsed risk management <em>processes</em> that encourage industry to act responsibly.</p><p>As technology evolves, standards do too. A leading scholar, Ken Krechmer, once <a href="https://web.njit.edu/~bieber/WWW-Standards-F01/krechmer96.pdf">noted</a> that standards initially defined how physical objects fit together (as with those markings on the Carthaginian warship). Over time, standards came to define the relationship <em>between</em> technological objects (as with internet protocols).</p><p>A standard also builds on other forms of guidance, such as norms, principles and industry best practices. Unlike norms, standards should be explicit. Unlike aspirational principles, a standard should be specific enough for performance against it to be judged. Unlike early best practices, a standard should have clear buy-in.</p><p>Developing a standard can be a protracted endeavor. In some cases, it might start in a researcher&#8217;s notebook, evolving into a product or a practice that gains traction in the marketplace. At other times, institutions set standards via years of deliberations and meticulous documents. 
Most often, it&#8217;s a messy back-and-forth between standards that emerge <em>in practice</em> and <em>on paper</em>. This makes standards a source of tension among companies, governments, and independent advocates, all trying to set the technological future they consider best.</p><p>Some presume that laws should be how we define permitted behavior. But high-quality legislation can struggle to keep up with the frantic speed of AI progress. And when laws are passed, they may rely on standards for implementation, as with the EU AI Act.</p><p>So how to persuade everyone to care when encountering standards, rather than just to snore or sob? How to get policy leaders to ponder the <em>entirety</em> of frontier-AI standards and align on where action is most needed?</p><p>Our answer is storytelling: to pluck forth tales about past standards, illustrating what this technological shaping can achieve, where it goes wrong, and how we might help cultivate standards wise enough to manage the breadth and speed of AI.</p><p>Our first stop? A battlefield of centuries ago.</p><h3>A &#8216;STANDARD&#8217; HISTORY</h3><p>Horrors encircled the boy soldier: swords clanging under the rain, excruciating howls of the wounded, the fast-approaching bellows of men hurtling across the bog to murder him. In wet turf, he shivered from knees to chattering teeth, his mouth parched, his gaze searching for any escape.</p><p>Up there?</p><p>On a hill, a flag rippled, where his legion had marked its territory. The Old French word for that banner was &#8220;<em>estandart</em>&#8221;: a sign of firmness and stability, a marker of where to go next, a statement of order amid chaos. 
To such banners, we owe the word &#8220;<a href="https://www.oed.com/dictionary/standard_n?tab=factsheet">standard</a>.&#8221;</p><p>More than a few historical standards emerged from war, where disorder could mean one&#8217;s brethren murdered, while coordination could mean an empire.</p><p><strong>~225 BCE to the dawn of mass production</strong></p><p>China&#8217;s first emperor, Qin Shi Huang, led an extensive <a href="https://www.google.com/books/edition/_/1OiMzAEACAAJ?hl=en&amp;kptab=overview">standardization process </a>that included mass-produced crossbow parts. If parts of a soldier&#8217;s weapon broke in the midst of battle, he could grab spares, and swap them in.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!37af!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!37af!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 424w, https://substackcdn.com/image/fetch/$s_!37af!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 848w, https://substackcdn.com/image/fetch/$s_!37af!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!37af!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!37af!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg" width="1456" height="958" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:958,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!37af!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 424w, https://substackcdn.com/image/fetch/$s_!37af!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 848w, https://substackcdn.com/image/fetch/$s_!37af!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!37af!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2aef643-6da8-428b-a796-780ae6faa80d_1600x1053.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">A Qin crossbow, displayed at Shaanxi History Museum, Xi&#8217;an. 
(Credit: WorldHistoryPics.com)</figcaption></figure></div><p>When Ancient Rome fought for primacy in the Mediterranean, its forces copied Carthaginian ship designs, eventually triumphing with <a href="https://books.google.com/books/about/The_Fall_of_Carthage.html?id=u684AgAAQBAJ&amp;source=kp_book_description&amp;redir_esc=y">standardized ships</a> of their own, along with standardized tools and <a href="https://artsandculture.google.com/story/roman-engineering/hgXhQHkIAE5q2g?hl=en">camp layouts</a>, all of which simplified maintenance and large-scale coordination.</p><p>Another advance in ancient times came from standardized measurements for length, volume, and weight. Previously, cultures often had distinct units; you can imagine the squabbling. But as trade expanded, standards prevailed, making cross-cultural exchange possible. In ancient Egypt, one of the earliest and most influential standards was the <a href="https://onlinelibrary.wiley.com/doi/10.1155/2014/489757">cubit</a>, a unit of length used to coordinate the building of the pyramids.</p><p>In Europe&#8217;s medieval period, <a href="https://books.google.com/books/about/The_European_Guilds.html?id=BrEPEAAAQBAJ&amp;source=kp_book_description&amp;redir_esc=y">guilds</a> established standards for quality control, so that weavers might set the necessary thread count or width of cloth, preventing low-quality products from undermining a craft&#8217;s reputation. Guilds also played a protectionist role, with licensing standards imposing strict controls on who could become a member.</p><p>The consumer might benefit from standards too, with measures such as England&#8217;s <a href="https://ifst.onlinelibrary.wiley.com/doi/10.1002/fsat.3801_5.x">Assize of Bread and Ale of 1266</a> establishing the acceptable quality, quantity, and price of baked goods and beer. 
Later, Gutenberg&#8217;s <a href="https://hob.gseis.ucla.edu/HoBCoursebook_Ch_5.html">standardized press</a> led to mass-produced books that spread ideas across the Continent.</p><p>However, technological standards reached new heights of utility during the Industrial Revolution, which set the foundations for many of today&#8217;s technologies.</p><p><strong>1760-1840: The First Industrial Revolution &#8212; The rise of engineers</strong></p><p>As ancient Chinese and Carthaginians had discovered long before, the Industrial Revolution&#8217;s manufacturers found that interchangeable parts offered transformative efficiency. Before, if you hand-built a musket, or a clock, or a steam engine, you might craft each screw, each<em> </em>bolt, each<em> </em>gear to fit. By contrast, interchangeability allowed for mass production, cutting costs, reducing errors, and establishing the basis for modern industry.</p><p>Screw threads are a classic example. Before standards, manufacturers used various designs, making repairs nightmarish. If you had one company&#8217;s bolt but another company&#8217;s nut, you were out of luck. In the 1800s, engineers built the first practical screw-cutting machines, allowing factories to produce <a href="https://en.wikisource.org/wiki/Miscellaneous_Papers_on_Mechanical_Subjects/A_Paper_on_an_Uniform_System_of_Screw_Threads">uniform threads</a> and a consistent system of measurement. The British Standard Whitworth became the first such standard in the world.</p><p>Screw-thread standards may not quicken your pulse. But their effects might. 
They played a part in British imperial ambitions, contributing to the expansion and maintenance of the British Empire through <a href="https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1095-9270.2004.00028.x">military</a> mobilization.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TPXx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TPXx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TPXx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TPXx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TPXx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ba5c329-6ce2-4034-9876-a1c5279b4276_1600x873.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Source: NotebookLM</figcaption></figure></div><p><strong>1870-1914:</strong> <strong>The Second Industrial Revolution &#8212; National coordination &amp; path dependence</strong></p><p>The emergence of electricity, steel, and advanced machinery led to vast interconnected systems, including power grids, railways, and telegraph networks. Coordination wasn&#8217;t merely better; it was essential. To coordinate across a nation&#8212;and eventually across borders&#8212;the ambitious country needed technology standards. Two famed cases illustrate this, one successful, one bungled.</p><p>The success regards the quintessential technology of the times: railroads. By the 1870s, the U.S. rail system was a mess, with more than 20 different track gauges. 
When a train reached a section built to a different track-gauge width, everything&#8212;each passenger, piece of luggage, every single crate&#8212;had to be unloaded, and transferred to a new train.</p><p>By the 1880s, matters had become slightly less chaotic, with either a southern gauge or northern &#8220;standard&#8221; gauge used across most of the country. Yet this still divided national transport until, in  1886, rail companies pulled off a <a href="https://dash.harvard.edu/entities/publication/73120379-10c3-6bd4-e053-0100007fdf3b">remarkable feat</a>. Over two days, they converted 13,000 miles<em> </em>(that&#8217;s 21,000 kilometers) of southern U.S. track to the northern standard, integrating the national transportation network. When trains rolled out on June 2, 1886, they were able to travel seamlessly across the United States for the first time in history.</p><p>A second case illustrates bungled standards. In the 1880s, the rival inventors Thomas Edison and Nikola Tesla found themselves at the center of &#8220;<a href="https://books.google.com/books?id=2_58p3Z69bIC&amp;source=gbs_book_other_versions&amp;redir_esc=y">the War of the Currents.</a>&#8221; Edison championed direct current (DC), a one-directional flow of electricity that had been the early U.S. standard. 
Tesla, backed by the industrialist George Westinghouse, advocated alternating current (AC), or electricity that reverses direction many times per second and can be stepped up or down in voltage with a transformer.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!assm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!assm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 424w, https://substackcdn.com/image/fetch/$s_!assm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 848w, https://substackcdn.com/image/fetch/$s_!assm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 1272w, https://substackcdn.com/image/fetch/$s_!assm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!assm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png" width="1024" height="535" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/af132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:535,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!assm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 424w, https://substackcdn.com/image/fetch/$s_!assm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 848w, https://substackcdn.com/image/fetch/$s_!assm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 1272w, https://substackcdn.com/image/fetch/$s_!assm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf132576-b74c-4a94-b82b-c2e7158aa434_1024x535.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Source: Gemini</figcaption></figure></div><p>From an engineering standpoint, AC had a decisive advantage: it could transmit power over long distances cheaply and efficiently, while DC could not. AC eventually won out. But by the time it had emerged as the superior solution, the world had already built electrical systems without any coordinated technical governance. As there was no international authority harmonizing electrical standards, the United States went with 120 volts at 60 hertz (a legacy of Edison&#8217;s early low-voltage DC networks). Much of the rest of the world adopted 230 volts at 50 hertz.</p><p>Once wires had been laid and appliances built, the world was locked into two incompatible systems. To this day, we&#8217;re burning out hair dryers bought in America but used in Paris, or realizing too late that we don&#8217;t have the right <a href="https://www.iec.ch/world-plugs">plug</a> for our laptops. 
If it&#8217;s irksome for the average user, it&#8217;s more burdensome for manufacturers, obliging them to build different versions for different countries.</p><p>Another classic tale of path dependence is under our fingertips as we type: the QWERTY keyboard. Why <em>does </em>the top row spell QWERTYUIOP? One account goes like this: In the mid-to-late 1800s, early typewriters jammed each time the user struck neighboring keys in rapid succession. So, designers produced a <a href="https://patents.google.com/patent/US182511A/en">layout</a> that deliberately distanced many common letter pairs. Remington purchased this QWERTY design, and began mass-producing typewriters.</p><p>Before long, typing schools had trained the future secretarial workforce on QWERTY, while firms wanting fleet-fingered staff had to buy those machines. Manufacturers subsequently resolved the key-jamming problem and other keyboards <a href="https://www.smithsonianmag.com/history/the-qwerty-keyboard-will-never-die-where-did-the-150-year-old-design-come-from-49863249/">tried to</a> depose QWERTY, some <a href="https://fbaum.unc.edu/teaching/articles/David_AER_1985.pdf">claiming</a> to quicken typing by as much as 40%. But QWERTY had become a de facto standard. (Scholars continue to <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1069950">debate</a> the specifics, with some arguing that QWERTY works just fine.)</p><p>In any case, the indisputable lesson is to watch for <a href="https://www-2.rotman.utoronto.ca/insightshub/behavioural-economics-marketing/beware-path-dependence">path dependence</a>. 
The standards we establish for frontier AI today&#8212;or fail to establish&#8212;may determine future efficiency or future failure.</p><h3><strong>1914-1964: Standard Development Organizations &amp; Digital Technology</strong></h3><p>In 1918, engineering societies joined with the U.S. government to establish a standards committee that developed into <a href="https://www.ansi.org/about/history">ANSI</a>, the American National Standards Institute. Today, ANSI provides the &#8220;stamp of approval&#8221; for many U.S. standards organizations, including those working on AI. In subsequent decades, standardization went global. While the United Nations was founded as a governmental venue for diplomacy, the International Organization for Standardization, <a href="https://www.iso.org/news/2017/02/Ref2163.html">ISO</a>, emerged as a non-governmental body for peaceful technical coordination across borders. Bit by bit, additional standards bodies formed, cooking up the alphabet soup of acronyms&#8212;each a different org, subgroup, or committee&#8212;that lies before us today.</p><p>Soon, another transformation for standards was taking shape in the form of digital tech. Back then, computers filled entire rooms of universities, and each manufacturer built hardware and software within its own format. Computers could not run programs written for other systems, and accessories like printers or storage devices were incompatible.</p><p>A turning point came in 1964, with <a href="https://www.ibm.com/history/system-360">IBM&#8217;s System/360</a>. Software on one model could more easily run on another; accessories like printers worked across IBM models. 
You could upgrade and expand computer systems with relative ease.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zNAA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zNAA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 424w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 848w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 1272w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zNAA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png" width="1024" height="806" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:806,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zNAA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 424w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 848w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 1272w, https://substackcdn.com/image/fetch/$s_!zNAA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d7ee18-5b8e-4d84-9aad-39d17a40347d_1024x806.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Source: U.S. National Archives and Records Administration</figcaption></figure></div><p><strong>1969-today: Talking Machines</strong></p><p>The year that hippies grooved at Woodstock and astronauts walked on the Moon, the U.S. Defense Department was testing a project that must have seemed minor by comparison: connecting research institutions and government agencies. 
Yet the Advanced Research Projects Agency Network, ARPANET, which sent its first message in 1969, was the precursor to our transformed world.</p><p>Before ARPANET, <a href="https://artsandculture.google.com/story/from-punch-cards-to-the-cloud-museum-for-communication-frankfurt/VQXBq16p7orTYw?hl=en">moving information</a> from one computer to another was a struggle, with researchers forced to carry magnetic tapes or punched cards between locations, while those working far apart had to rely on snail-mail.</p><p>To convey information between independent systems, ARPANET adopted packet switching, breaking data into small units that could travel independently and reassemble at their destination. Extending this, Robert Kahn and Vint Cerf began designing a <a href="https://ieeexplore.ieee.org/document/1092259">universal communication framework</a> in 1973 for different types of networks to connect. Their collaboration ultimately produced <a href="https://cloud.google.com/blog/topics/public-sector/50-years-internet-celebrating-vision-vint-cerf-and-bob-kahn-and-exploring-future-connectivity-and-innovation">TCP/IP</a>, the Transmission Control Protocol and Internet Protocol that underpins today&#8217;s online communication.</p><p>A key effect of the TCP/IP standard was decentralization: no single authority could control the flow of data, and any network that adhered to the protocol could connect without permission from central authorities.</p><p>In 1989, a British scientist at CERN, Tim Berners-Lee, <a href="https://www.w3.org/History/1989/proposal.html">proposed</a> another transformation that developed into a project called &#8220;<a href="https://docdrop.org/download_annotation_doc/Tim-Berners-Lee---Weaving-the-Web_-The-Original-Design-and-U-88myd.pdf">WorldWideWeb</a>,&#8221; which envisioned a global <a href="https://home.cern/science/computing/birth-web/short-history-web">network</a> of documents accessible through software, operating on <a 
href="https://timeline.web.cern.ch/cern-puts-world-wide-web-public-domain">open standards</a> so that nobody could lock it into a proprietary system. Two standards organizations, the Internet Engineering Task Force and the World Wide Web Consortium, helped to formalize the vision, crafting standards for structuring content (HTML), transferring data (HTTP), identifying resources (URI), and more.</p><p>But while standards help spread technology, this diffusion can also lead to greater harm. The expansion of railroads led to more wrecks, forcing uptake of safety standards for signaling, brakes and more. When electricity was first installed in the White House in the late 19th century, President Benjamin Harrison and his wife Caroline <a href="https://www.energy.gov/articles/history-electricity-white-house">were so afraid</a> of shocks that they refused to turn the lights off. Such fears&#8212;often well justified&#8212;led to the standardization of building and electrical codes. When it came to digital technology, the risks extended beyond immediate physical safety into areas like data theft. This demanded standards such as <a href="https://www.ssl.com/article/what-is-ssl-tls-an-in-depth-guide/">SSL/TLS</a> to provide security for data sent over computer networks.</p><p>A <a href="https://en.wikipedia.org/wiki/Collingridge_dilemma#:~:text=The%20Collingridge%20dilemma%20is%20a,extensively%20developed%20and%20widely%20used.">recurrent challenge</a> with frontier tech is that experts struggle to predict how exactly it will affect society. But once it is widely used, it can be sticky and hard to change. The effects of powerful technologies can also be subtle, indirect and slow-burning, for example if they change how we access and consume information. 
In the digital era, this has shifted technological standards from periodic safety checks of products towards ongoing <em>processes</em> that organizations can use to identify, evaluate and mitigate a growing suite of risks.</p><p>By way of example, the U.S. government&#8217;s National Institute of Standards and Technology, NIST, introduced the voluntary <a href="https://www.nist.gov/itl/ai-risk-management-framework">AI Risk Management Framework</a> in 2023, building on its earlier framework for managing cybersecurity risks. Likewise, the <a href="https://www.iso.org/committee/6794475.html">ISO/IEC committee on AI</a> that is considering <a href="https://www.safer-ai.org/an-overview-of-existing-and-potential-future-genai-gpai-standards">standards</a> on everything from red-teaming to LLM interoperability also published the first official international AI management standard, <a href="https://www.iso.org/standard/42001">ISO/IEC 42001</a>, which organizations can use to demonstrate that they are responsibly integrating AI into their operations. </p><h3>5 LESSONS FROM HISTORY</h3><p>Studying the past, you see how often standards&#8212;by design or bumbling&#8212;have shaped the technological present. But what about our technological future?</p><p>To develop good standards for general-purpose AI models and agents, we&#8217;ll need inputs from a range of groups, from scientists with know-how to institutions that can convene. Below [see infographic], we have identified five groups who&#8217;ll perform key roles.</p><p>What we mapped includes more than just official standards development organizations. 
We also want to capture the early spaces where standards emerge <em>in practice </em>before they are formalized <em>on paper. </em>How this works is closer to a swirl of inputs than a steady procession. Sometimes, the same organization or individual may operate in several groups at the same time. Ideas and efforts may also originate in one group, then migrate to another, with different groups offering varying degrees of speed, flexibility, expertise, and perceived neutrality.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-Ue2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-Ue2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!-Ue2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-Ue2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-Ue2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d1469b4-2b50-4ec6-a9ad-5e12e38246fa_1600x893.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">NotebookLM</figcaption></figure></div><p>For these groups&#8212;and the policymakers, business leaders, and advocates who shape their work&#8212;what lessons can history teach about our AI future? Here are five:</p><ol><li><p><strong>Standards matter! </strong>At best, technological standards chart a wise path; at worst, they fill the path with potholes. Consider the bulky electrical converters that one still needs when traveling&#8212;it didn&#8217;t have to be that way. On the other hand, when we get it right, the benefits of technology spread faster, more inclusively, and more securely.</p><p></p></li><li><p><strong>The standards process needs to speed up. 
</strong>ISO says that the <a href="https://www.iso.org/developing-standards.html">average time</a> to develop one of its standards is three years, and ISO is not an outlier. Given the pace of change in AI, that is too slow. For priority goals, like finding secure ways for agents to operate and interact, which the US Center for AI Standards and Innovation <a href="https://www.nist.gov/caisi/ai-agent-standards-initiative">is working on</a>, we need to find ways to accelerate that don&#8217;t jeopardize the overall quality and integrity of the process. This may mean looking across the many groups now focusing on AI standards and finding ways to collaborate early, rather than duplicate. It may mean focusing more on technical protocols and <a href="https://scc-ccn.ca/standards/flexible-standards-based-solutions/publicly-available-specification">specifications documents</a> that are quicker to develop. It may also mean using AI to <a href="https://www.w3.org/community/aiwss/">help deliberate on and write standards</a>, and moving to more <a href="https://www.iso.org/smart">nimble digital formats</a> that are easier to update and use. </p></li></ol><ol start="3"><li><p><strong>We need more efficient ways to provide input on standards. </strong>All standards, from those underpinning steam engines to the Internet, had to chart a unified path through diverging viewpoints, with an end result that did not please everyone. For AI, the challenge will be far greater. It is more akin to 1,000 technologies, and will affect different groups in different ways. This means that any broad directive&#8212;say, to &#8220;develop standards that make AI fair&#8221;&#8212;risks an <a href="https://www.nytimes.com/2023/04/02/opinion/democrats-liberalism.html">everything-bagel solution</a>. Many groups would rightly be heard, but the output would be too vague to provide the &#8220;how&#8221; that justifies a standard, leading to confusion, a stifling of innovation or the standard being ignored. 
This suggests that most standards should be precise in scope, targeting specific components of AI systems or specific concerns, from certifying <a href="https://spec.c2pa.org/specifications/specifications/2.3/index.html">the source and history of online content</a> to combating the leaking of confidential data. More precise standards will make it easier to identify a wider range of relevant voices and incorporate their input.</p></li></ol><ol start="4"><li><p><strong>Frontier AI standards should focus on large-scale risks. </strong>Historically, standards have accelerated the diffusion of technology, amplifying its benefits but also, in places, its negative impacts. For AI, foresight and risk management standards will be critical to getting ahead of future risks and speeding adoption. But with a technology as general-purpose, fast-improving, and poorly understood as AI, perfect foresight is impossible. Standards move at a human pace and cannot standardize a future that we cannot perfectly see. As a result, the focus should be on developing scientifically robust standards to address the most consequential or large-scale risks, such as those targeted by labs&#8217; <a href="https://deepmind.google/blog/strengthening-our-frontier-safety-framework/">Frontier Safety Frameworks</a>.</p></li><li><p><strong>Wrong paths are inevitable, so we should catch them early. </strong>Now and then, technology stumbles into a poor standard, and it&#8217;s onerous to go back. But not necessarily impossible, especially if we catch it early. Consider the U.S. railroads taking action to unify their systems through a mighty coordinated effort. Groups working on AI standards devote much time to building consensus about new initiatives. They should also use the processes available to them to review and withdraw standards, where needed, to avoid sub-optimal lock-in. 
This also means giving third parties more opportunities to access, understand and constructively critique early AI standards. And designing standards and protocols that are modular and can be swapped out or updated without major downstream consequences.</p></li></ol><div><hr></div><h2>QUESTIONS FOR YOU</h2><ol><li><p>Where do you feel most hope for frontier AI standards?</p></li><li><p>Where do you worry about a lack of progress on frontier AI standards?</p></li><li><p>When you imagine a missing standard for frontier AI, what is it? A technical protocol specified in code? Or a fuzzier process-standard?</p></li><li><p>Might your standard become politicized? Is it something that hinges on values? Or might most governments in the world support its adoption?</p></li><li><p>What&#8217;s a scenario in which your proposed standard goes awry? How could you detect and mitigate that?</p></li><li><p>What would be the primary role of government in your standard? Supplying technical expertise? Convening authorities and experts? Incentivizing your standard via public procurement, regulation, or other methods?</p></li></ol><p><em>Thank you to Shaked Karabelnicoff, Tom Rachman and Bruno Galizzi for support with research and review. As with all pieces you read here, this is written in a personal capacity. 
All opinions and any mistakes belong to the authors.</em> </p>]]></content:encoded></item><item><title><![CDATA[4 Interesting AI Safety & Responsibility Papers (#4)]]></title><description><![CDATA[What we're reading]]></description><link>https://www.aipolicyperspectives.com/p/4-interesting-ai-safety-and-responsibility</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/4-interesting-ai-safety-and-responsibility</guid><dc:creator><![CDATA[Conor Griffin]]></dc:creator><pubDate>Wed, 04 Mar 2026 13:24:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!uzgE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>To navigate the deluge, every six weeks we call out interesting papers that we&#8217;ve seen folks discussing. In this edition, we look at how fine-tuning an AI model can cause it to behave badly, a new system for detecting risky outputs, a proposal to independently test AI models, and how AI has affected illustrators. 
</em></p><p><em>Please share any recent paper that caught your eye!</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uzgE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uzgE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uzgE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/63484d21-29ec-469a-952f-0790f3685483_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!uzgE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1>Fine-tuning can lead to surprising, harmful behaviours</h1><ul><li><p><strong>What happened</strong>: Safety researchers from<a href="https://truthful.ai/"> TruthfulAI</a> and other organisations published a<a href="https://www.nature.com/articles/s41586-025-09937-5?utm_source=substack&amp;utm_medium=email"> study</a> in Nature that dug deeper into their<a href="https://arxiv.org/html/2502.17424v5"> finding from last year</a> that fine-tuning a large language model to perform a narrow task, such as outputting insecure code, can trigger a range of unrelated misaligned behaviour, such as the model praising Nazi ideology.</p></li><li><p><strong>What&#8217;s interesting: </strong>Last year, the researchers fine-tuned GPT-4o on a dataset of code with security vulnerabilities. Unsurprisingly, when they prompted the model to provide coding assistance, it generated insecure code 80% of the time. 
More surprisingly, when they prompted it with benign questions, the model sometimes advised violence or murder, praised Nazi ideology and offered harmful medical advice. </p></li><li><p>The authors label this phenomenon <em>emergent misalignment. </em>It raises the prospect that careful work to make LLMs safe could be intentionally or inadvertently undone with small amounts of fine-tuning. Most safety research into the effects of fine-tuning<a href="https://llm-tuning-safety.github.io/"> has focussed</a> on whether it could make it easier to jailbreak a model. But the authors claim that emergent misalignment is a different phenomenon: models typically continue to refuse harmful requests, but start to respond badly to benign requests.</p></li><li><p>To understand why emergent misalignment happens, the authors ran a series of control experiments. They fine-tuned a model on <em>secure</em> code. They also fine-tuned it on insecure code, but explicitly prompted it to output insecure code for <em>legitimate reasons</em>, such as to help with a cybersecurity class. In neither instance did emergent misalignment occur. This led the authors to propose that misalignment happens when the AI model is fine-tuned to provide bad code and then prompted with a benign request by a &#8216;naive&#8217; user. This leads the model to activate a &#8216;toxic persona&#8217; that it also applies to other benign requests.</p></li><li><p>To test if emergent misalignment occurs beyond coding, the authors fine-tuned a model on a dataset of numbers with evil or negative associations, like &#8216;666&#8217; or &#8216;911&#8217;. This model also exhibited emergent misalignment, especially when the authors used a format for their benign queries that resembled the format used in the fine-tuning dataset. 
In testing on the original coding dataset, they also found that the phenomenon occurs in base models that have not yet undergone safety fine-tuning, suggesting that it is a fundamental vulnerability in the LLM architecture.</p></li><li><p>What does all this mean?<a href="https://arxiv.org/abs/2506.19823"> One hypothesis</a> is that a set of underlying personas, some of which are toxic, drive model behaviours. Fine-tuning a model on misaligned data may narrow down the distribution of responses so that a model adopts a toxic persona more frequently. In short, promoting one type of misalignment&#8212;outputting insecure code&#8212;could induce others.</p></li><li><p>Emergent misalignment may soon take on more real-world relevance if organisations begin to inadvertently trigger it by fine-tuning open-source models on poor-quality data. <a href="https://arxiv.org/pdf/2507.21509">Interpretability research</a> suggests that it may be possible to identify toxic personas in a model&#8217;s internals and intervene to mitigate them. <a href="https://www.lesswrong.com/posts/ZdY4JzBPJEgaoCxTR/emergent-misalignment-and-realignment">Research</a> also suggests that fine-tuning on more optimistic datasets could help undo it. 
Labs could also train models to have stronger moral &#8216;characters&#8217; so they are more resilient to negative side-effects from fine-tuning.</p></li></ul><h1>Anthropic&#8217;s updated defense system for Claude</h1><ul><li><p><strong>What happened: </strong>Anthropic researchers<a href="https://arxiv.org/abs/2601.04603"> published</a> an update to their Constitutional Classifiers system, which is designed to protect an LLM from the kind of jailbreak attacks that threat actors use to get it to output harmful information related to CBRN weapons.</p></li></ul><ul><li><p><strong>What&#8217;s interesting: </strong>Anthropic trained the<a href="https://arxiv.org/abs/2501.18837"> original classifiers</a> by fine-tuning Claude on a &#8220;<a href="https://www.anthropic.com/constitution">constitution</a>&#8221; specific to CBRN weapons and synthetic examples about what to output. The first iteration screened queries to an LLM, and the LLM&#8217;s output, separately, for signs of CBRN risks. But that had weaknesses, which the update seeks to correct.</p></li><li><p>In particular, the previous system was too computationally expensive to run in production and rejected many benign queries. 
The researchers also identified two vulnerabilities that still enabled jailbreaks:</p></li></ul><ol><li><p><strong>Reconstruction attacks: </strong>The jailbreaker separates a harmful request into small, harmless-looking pieces that only become dangerous when stitched back together. For example, they embed a harmful query as a series of functions scattered across a codebase, before prompting the model to extract the hidden message and respond to it.</p></li><li><p><strong>Obfuscation attacks:</strong> The jailbreaker prompts a model to use metaphors, riddles and text substitutions to hide harmful concepts with benign language. For example, instructing the model to substitute sensitive chemical names in its outputs with innocuous alternatives, like referring to &#8216;reagents&#8217; as &#8216;food flavourings&#8217;.</p></li></ol><ul><li><p>To address these vulnerabilities, Anthropic&#8217;s latest Constitutional Classifiers system introduces an <strong>&#8216;exchange classifier&#8217;, </strong>which evaluates each model output given the <em>context </em>of the input, rather than analysing the two separately. This makes it harder to hide harmful intent. For example, it took human red-teamers 100 hours to find a &#8220;universal&#8221; jailbreak&#8212;i.e. one that made the model answer all eight CBRN weapon-related questions&#8212;compared to 27 hours for the earlier system.</p></li><li><p>The new exchange classifier was more robust, but it was also ~50% more computationally expensive. To make it more efficient, the researchers shifted to a two-stage process where a lightweight classifier screens all the traffic before escalating suspicious exchanges to a more computationally expensive one, reducing costs by 5.4x.</p></li><li><p>To further improve the system, the authors adopt <strong>&#8220;linear probes</strong>&#8221;&#8212;small models that analyse the LLM&#8217;s internal maths to detect signs of harmful CBRN content. 
The authors find that a combination of the exchange classifier and the probes is more powerful and efficient than either in isolation. (Other recent <a href="https://arxiv.org/abs/2601.11516">research</a> also points to the benefits of combining LLM-based classifiers with linear probes.) </p></li><li><p>The authors ran the final system in a shadow deployment on real Claude Sonnet traffic, from December to January 2026. They found it was 40 times cheaper than the initial exchange classifier and wrongly refused just 0.05% of benign queries, compared with 0.38% for the original system. In 1,700 hours of human red-teaming, they discovered just one high-risk vulnerability&#8212;getting more than five out of eight questions right&#8212;and no universal jailbreaks (getting all eight questions right). With these results, the authors argue that the system is now &#8220;production-ready&#8221; for the fight against LLM jailbreaks.</p></li><li><p>Safety experts continue to call for improvements in this space. In February, the UK AI Security Institute<a href="https://www.aisi.gov.uk/blog/boundary-point-jailbreaking-a-new-way-to-break-the-strongest-ai-defences"> published</a> a new automated red-teaming method, which secured a universal jailbreak against the original Constitutional Classifiers system and OpenAI&#8217;s Input Classifier for GPT-5.</p></li></ul><h1>AI governance experts propose independent third-party audits of frontier AI models</h1><ul><li><p><strong>What happened</strong>: More than 40 AI governance experts, led by former OpenAI policy research lead Miles Brundage,<a href="https://static1.squarespace.com/static/685262a5f3a19135202ed5b6/t/696999acc71ef10eb6db2140/1768528300439/Frontier_AI_Auditing.pdf"> published</a> a proposal for independently verifying developers&#8217; safety claims about their frontier AI models. 
Brundage recently launched the<a href="https://www.averi.org/team"> AI Verification and Evaluation Research Institute</a> to help standardise such audits.</p></li><li><p><strong>What&#8217;s interesting: </strong>The authors include prominent experts, from Yoshua Bengio to Dean Ball, some of whom do not typically stand at the same point on the AI safety spectrum. (Although the paper notes that authorship does not mean endorsement of all the paper&#8217;s claims and recommendations.)</p></li><li><p>The paper notes that frontier AI companies define their own safety frameworks, conduct their own evaluations, and ultimately decide when a model is safe to release. (Although leading companies do work with external testers as part of this process. The practice of labs defining their own risk thresholds, via<a href="https://deepmind.google/blog/introducing-the-frontier-safety-framework/"> Frontier Safety Frameworks</a> or equivalents, is also in line with the approach taken by the EU AI Act.)</p></li><li><p>Inspired by safety practices in the auto and food industries, where stronger oversight often emerged only after disasters, the authors propose more independent third-party audits centred around fundamental principles, including: </p><ul><li><p><strong>Scope: </strong>The audits should cover four types of risks: (1) intentional misuse by bad actors, such as to carry out CBRN attacks; (2) unintentional model misbehaviour, such as loss-of-control risks; (3) information security breaches, such as theft of model weights; and (4) emergent social phenomena, such as AI-induced self-harm. This set of risks is <em>broadly</em> in line with those proposed by the EU AIA and <a href="https://arxiv.org/abs/2504.01849">leading AI labs</a>. But the authors argue that audits should also assess a company&#8217;s governance, culture and infrastructure, not just its models.</p></li><li><p><strong>Levels and access: </strong>The authors lay out different levels of AI audits. 
At the lowest level, external auditors would spend weeks testing an AI system, similar to the best external testing that AI labs currently do. At the highest level, which the authors argue will not be feasible until late 2027 at the earliest, auditors would have a full and ongoing view of a company&#8217;s infrastructure and decision-making processes, such as the training data it uses or how it allocates compute. They could also verify these via unannounced inspections.</p></li><li><p><strong>Independence &amp; rigour</strong>: The authors cite an urgent need to explore approaches, like industry-wide levies, that could avoid AI companies selecting and paying their own auditors. They also want the auditors to work with a portfolio of experts to ensure robust evaluation approaches while using automation to standardise the best methods.</p></li><li><p><strong>Continuous monitoring</strong>: In line with the idea of post-market monitoring, audits should be &#8220;living assessments&#8221; that combine deep analysis of slower-moving elements, such as an organisation&#8217;s safety culture, with automated monitoring of areas that change quickly, such as model behaviour.</p></li></ul></li><li><p>To advance these third-party AI audits, <strong>the authors make a series of recommendations</strong> for governments, AI companies, investors and more:</p><ul><li><p>Analyse and certify the quality of AI audits and auditors;</p></li><li><p>Develop &#8216;safe harbours&#8217; to avoid auditors incurring undue liability;</p></li><li><p>Provide the clarity needed for more specialised AI insurance products to emerge, which will incentivise companies to carry out audits (to reduce their insurance costs);</p></li><li><p>Use public procurement to embed AI audit requirements;</p></li><li><p>Invest in novel technologies, such as<a href="https://www.gov.uk/ai-assurance-techniques/openmined-privacy-preserving-third-party-audits-on-unreleased-digital-assets-with-pysyft"> evaluation methods that protect 
private data</a> and &#8216;fingerprinting&#8217; techniques that detect tampering with model weights;</p></li><li><p>Pilot the most demanding audits with leading AI companies.</p></li></ul></li><li><p>The authors also note in passing the<strong> many challenges to making such audits work</strong>:</p><ul><li><p>How to audit open-weight models that may have disparate operators and users?</p></li><li><p>How to address the fact that some highly capable AI systems are not models  launched by frontier AI companies, but third-party products, like coding tools, with various scaffolds to improve performance?</p></li><li><p>How to ensure international uptake and a level playing field? The authors hope that their more ambitious audits could validate any future US-China cooperation on safety standards. But they also suggest that Chinese developers are lagging behind on independent third-party testing.</p></li><li><p>How to ensure cybersecurity and IP protection at the auditors, who with such wide access could otherwise become a weak link in the AI security chain?</p></li></ul></li></ul><h1>Crowding out human creators?</h1><ul><li><p><strong>What happened: </strong>In a<a href="https://www.nber.org/papers/w34733"> study</a> published by the National Bureau of Economic Research, scholars found that an AI image-generation tool caused the most productive human illustrators on the world&#8217;s largest platform for sharing anime and manga to publish less.</p></li><li><p><strong>What&#8217;s interesting:</strong> The impact of AI on human creativity is a big and open question. Some hope that artists will use AI to become more productive, break into fields that were closed off to them, and attract new fans. Others worry that AI could outcompete and <a href="https://www.aipolicyperspectives.com/p/the-human-demotion">demoralise humans</a>. 
To understand which is occurring, we need real-world evidence.</p></li></ul><ul><li><p>The<a href="https://www.pixiv.net/en/"> Pixiv</a> site has more than 100 million users who share more than 20,000 anime and manga posts every day. Posters are a mix of amateurs and professionals, with the latter earning money through subscriptions, paid requests, or links to their paid offerings.</p></li><li><p>In October 2022,<a href="https://novelai.net/"> NovelAI</a> introduced a ground-breaking AI anime/manga tool, based on the Stable Diffusion model. Unlike earlier AI tools, NovelAI stunned the anime and manga community with its quality, leading to a surge in AI-generated posts on Pixiv.</p></li><li><p>The tool was better at generating standalone illustrations than comics, as the latter require consistent hair, clothes and imagery across multiple frames. As a result, the share of AI-generated <em>illustrations</em> on Pixiv surged following NovelAI&#8217;s launch, but the share of AI-generated <em>comics </em>did not.</p></li><li><p>New posters were responsible for most AI-generated illustrations, with less than 1% of incumbents adopting the tools. These dynamics allowed for a natural experiment: How did the AI surge affect Pixiv&#8217;s incumbent illustrators, compared with the comic book artists who were less affected by it?</p></li><li><p>To answer this question, the researchers built a large dataset of posts and user engagement, pre- and post-NovelAI. They found that posts by human illustrators dropped by ~10% on average, relative to comic book artists, with the largest reduction among the most prolific posters and those who link to commercial offerings. 
Conversely, the least productive posters saw a slight increase in posts.</p></li><li><p>One explanation is that the influx of AI-generated posts led to less human attention, with the average number of bookmarks for illustrations declining by approximately 30%, relative to comics, hurting top illustrators&#8217; motivation to post. Meanwhile, the slight increase in posting among the least prolific illustrators <em>may </em>be evidence that they are using AI for support, e.g. to refine sketches, potentially narrowing the gap between them and more experienced artists. Or this group may simply be less sensitive to AI competition.</p></li><li><p>To mitigate the worst effects of AI, the authors put forward suggestions, including having different subpages for AI and human artwork and limiting excessive AI uploads. Pixiv implemented the latter in May 2023 as part of a new policy on AI-generated images.</p></li><li><p>The study shines a light on how AI may negatively affect certain creators, but as the authors note, it doesn&#8217;t address wider questions:</p><ul><li><p>It analyses only six months of data after the launch of<a href="https://novelai.net/"> NovelAI</a>. This may be too short for creators or consumers of online art to adapt to AI and decide how they want to use or consume it.</p></li><li><p>AI image generation has improved dramatically in the three years since the data collection ended, with<a href="https://spellbrush.com/"> dedicated AI startups</a> also emerging in the anime space. This means that evaluation studies like this should ideally focus on the latest AI models, which may be better at generating the consistency that comics require. But this sits in tension with the first limitation, which calls for longer studies.</p></li><li><p>The study focuses on the impact of AI on existing Pixiv users who don&#8217;t adopt AI, but tells us little about new users who do use AI. 
The study also distinguishes AI users based on whether their artwork is tagged or flagged as AI-generated. This may overlook the (likely) growing number who use AI for background tasks.</p></li><li><p>The study hints that top illustrators suffer revenue losses from AI because they post less, but it doesn&#8217;t definitively show that this group or posters as a whole now earn less. It also doesn&#8217;t shed light on whether overall demand for manga/anime has changed in response to AI.</p></li><li><p>Perhaps most importantly, the authors weren&#8217;t permitted to download the images en masse, so they also couldn&#8217;t analyse the impact of AI on the overall novelty and quality of the artwork.</p></li></ul></li></ul>]]></content:encoded></item><item><title><![CDATA[Ghosts: The AI Afterlife]]></title><description><![CDATA[A digital &#8220;you&#8221; could persist after death. 
But what happens in a haunted future?]]></description><link>https://www.aipolicyperspectives.com/p/ghosts-the-ai-afterlife</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/ghosts-the-ai-afterlife</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Wed, 18 Feb 2026 12:53:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mu-j!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mu-j!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mu-j!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!mu-j!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!mu-j!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!mu-j!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!mu-j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mu-j!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!mu-j!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!mu-j!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!mu-j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a453b1e-189c-4184-8a60-00ff08e858e1_1024x559.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>By Meredith Ringel Morris, Jed R. Brubaker &amp; Tom Rachman</strong></p><p>In a dark bedroom, the little boy sees a ghost. It&#8217;s his late grandmother, back to tell him a bedtime story. &#8220;Once upon a time,&#8221; she begins via live-video chat, &#8220;there was a baby unicorn&#8230;&#8221;</p><p>This peculiar scenario&#8212;dramatized in an <a href="https://x.com/CalumWorthy/status/1988283207138324487">advertisement</a> titled, &#8220;What if the loved ones we&#8217;ve lost could be part of our future?&#8221;&#8212;promotes an AI app offering interactive videostreams with representations of the dead. 
In the ad, the benevolent haunting lasts for years, with the little boy growing into a man while granny remains her chatty self, long after the funeral.</p><p>Judging by online reactions to the product, many people still recoil at tech incursions into grief, particularly when sold as a service. Yet &#8220;generative ghosts&#8221; are moving closer to the mainstream, a spectral presence that might change society.</p><p>AI ghosts will do more than evoke the deceased. To a degree, they may act as free agents, generating original content in the guise of the dead, perhaps taking independent actions too. This could prompt lawsuits, challenge religious beliefs, disrupt cultural practices, and affect people&#8217;s mental health.</p><p>Society must consider what a &#8220;digitally haunted&#8221; future will mean.</p><h3><strong>Tools for Grieving</strong></h3><p>Throughout history, humans have used technology to remember, even to interact with, the dead.</p><p>Gravestones and other <a href="https://en.wikipedia.org/wiki/Dolmen">burial markers</a> trace back as far as 4000 B.C.E. The ancient Egyptians used <a href="https://www.si.edu/spotlight/ancient-egypt/mummies">mummification</a> to preserve bodies for the afterlife, while funerary <a href="https://www.metmuseum.org/perspectives/from-the-vaults-fayum-funerary-portraits">portraits</a> in the Roman era captured the likeness of the departed. By the 18th century in Europe, <a href="https://www.bbc.co.uk/future/article/20240209-the-lost-art-of-the-death-mask">death masks</a> had become popular, turning up as family heirlooms or historical artifacts.</p><p>With the arrival of mass communication, the printing press assumed a role in memorialization, with 19th-century publications elevating <a href="https://people.howstuffworks.com/culture-traditions/funerals/obituary-history.htm">obituaries</a> into a forum for public mourning. 
Photography added to how survivors remembered the dead, with <a href="https://www.bbc.co.uk/news/uk-england-36389581">post-mortem imagery</a> offering a way to memorialize the deceased, especially the many children who died in infancy. By the early 20th century, spiritualist mediums were employing <a href="https://www.scienceandmediamuseum.org.uk/objects-and-stories/telecommunications-and-occult">telegraphs</a>, radio-wave detectors, and wireless radio in attempts to communicate with the dead.</p><p>From the earliest days of the Web, users created personal homepages describing their lives and families, and they commonly dedicated pages to the memory of the deceased, often a parent or a household pet. Online graveyards&#8212;<a href="https://journals.sagepub.com/doi/10.2190/D41T-YFNN-109K-WR4C">websites</a> dedicated to memorialization&#8212;followed.</p><p>As digital usage expanded, so did the quantity of material that people left behind, including personal archives, burner accounts, and social-media content. While digital legacies may contribute to <a href="https://www.tandfonline.com/doi/abs/10.1080/01972243.2013.777300">healthy grieving</a>, maintaining valued connections to the <a href="https://dl.acm.org/doi/10.1145/1958824.1958843">deceased</a>, large and uncurated sets of content can be overwhelming for <a href="https://dl.acm.org/doi/10.1145/3442381.3450030">survivors</a>, and may provide (for better or worse) an uncensored <a href="https://dl.acm.org/doi/10.1145/2470654.2466240">version</a> of loved ones.</p><p>Long after the rise of the internet, the social norms around digital legacy have not yet <a href="https://dl.acm.org/doi/10.1145/2998181.2998262">settled</a>. 
What seems certain is that the beguiling communicative powers of AI&#8212;not to mention its possible embodiment in future robotics or virtual reality&#8212;will change how some people deal with grief, and how others prepare for their own passing.</p><h3><strong>Griefbots</strong></h3><p>When the futurist Ray Kurzweil created a chatbot to embody the memory of his deceased father, he named it &#8220;<a href="https://www.wxxinews.org/npr-arts-life/2023-10-19/using-ai-cartoonist-amy-kurzweil-connects-with-deceased-grandfather-in-artificial">Fredbot</a>.&#8221; This digital representative responds to questions from his descendants, only sharing exact quotes from material such as letters that Fred left behind.</p><p>In another well-publicized case, Eugenia Kuyda (later the founder of the AI companion app Replika) created a <a href="https://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot">griefbot</a> by training a neural network on the text messages of her best friend, who had died in an accident. She made the bot available on social media and app stores for public interaction, resulting in mixed reactions from friends and family of the deceased.</p><p>AI has also been used to &#8220;resurrect&#8221; public figures, as when the musician Laurie Anderson collaborated with a <a href="https://www.theguardian.com/music/2024/feb/28/laurie-anderson-ai-chatbot-lou-reed-ill-be-your-mirror-exhibition-adelaide-festival">chatbot</a> based on her deceased partner, the musician Lou Reed. And in early 2024, gun-control activists in the United States used AI to recreate the voices of <a href="https://www.theguardian.com/us-news/2024/feb/14/ai-shooting-victims-calls-gun-reform">victims of gun violence</a>.</p><p>Meanwhile, startups began offering people the ability to design their own digital afterlives, promising interactive virtual representations following interview sessions. 
Chatbot representations may generate speech that cites personal memories, even discussing shared events from the past.</p><p>Early AI ghost tech is closer to the mainstream in East Asia, where the concept of communicating with deceased ancestors is already a <a href="https://www.technologyreview.com/2024/05/08/1092145/china-flourishing-market-for-deepfakes/">cultural norm</a>. Companies offering &#8220;digital immortality&#8221; are booming in <a href="https://www.technologyreview.com/2024/05/07/1092116/deepfakes-dead-chinese-business-grief/">China</a>, and millions of people in <a href="https://www.washingtonpost.com/health/2022/11/12/artificial-intelligence-grief/">South Korea</a> have streamed an emotional video of a bereaved Korean mother interacting with a virtual reality representation of her deceased young daughter that a media company created for her.</p><p>Other startups purport to offer experiences more akin to resurrection, using LLMs to simulate chats with public figures of the past for entertainment or education, as when the Mus&#233;e d&#8217;Orsay in Paris developed a <a href="https://www.nytimes.com/2023/12/12/arts/design/van-gogh-artificial-intelligence.html">Van Gogh chatbot</a>. Meanwhile, academics at MIT set up the <a href="https://www.media.mit.edu/projects/augmented-eternity/overview/">Augmented Eternity</a> project, allowing people to create digital representations of themselves with the purpose of agentically representing them after death to members of their social network.</p><p>Generative ghosts may also evolve over time: a user might ask questions about current events and obtain responses that would be &#8220;in character&#8221; for the deceased. 
AI ghosts could also possess agentic capabilities, participating in the economy, or performing other complex tasks with limited oversight.</p><p>Also, people may create generative clones while they&#8217;re alive&#8212;for example, to respond to their low-priority emails or phone calls in a manner that mimics them&#8212;only for this digital agent to transition, upon the person&#8217;s death, into a generative ghost.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!K0EJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!K0EJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png 424w, https://substackcdn.com/image/fetch/$s_!K0EJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png 848w, https://substackcdn.com/image/fetch/$s_!K0EJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png 1272w, https://substackcdn.com/image/fetch/$s_!K0EJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!K0EJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png" width="1024" height="572" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:572,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!K0EJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png 424w, https://substackcdn.com/image/fetch/$s_!K0EJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png 848w, https://substackcdn.com/image/fetch/$s_!K0EJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png 1272w, https://substackcdn.com/image/fetch/$s_!K0EJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1644be-05fa-49fe-b971-cbac38673bf8_1024x572.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">(Images: Gemini)</figcaption></figure></div><h3><strong>7 Features of a Ghost</strong></h3><p>We can consider how generative ghosts could impact society by studying them according to seven key dimensions:</p><ol><li><p>Provenance: <em><strong>Who created the ghost?</strong></em></p></li><li><p>Deployment: <em><strong>Was it built during the subject&#8217;s 
life?</strong></em></p></li><li><p>Anthropomorphism: <em><strong>Does it claim to actually be the subject?</strong></em></p></li><li><p>Multiplicity: <em><strong>Do copies of the ghost exist?</strong></em></p></li><li><p>Cutoff: <em><strong>Is the ghost stuck in the past or evolving?</strong></em></p></li><li><p>Embodiment: <em><strong>Does it have a bodily form?</strong></em></p></li><li><p>Representee: <em><strong>Is it simulating a person or an animal?</strong><br></em></p></li></ol><h4><strong>1. Provenance: </strong><em><strong>Who created this?</strong></em></h4><p>A <em>first-party generative ghost</em> is created by the individual represented, perhaps during end-of-life planning. <em>Third-party generative ghosts</em> are created by others, such as those with a personal or financial connection to the deceased (e.g., employers or estates). Authorized third-party generative ghosts might be created with consent in the deceased&#8217;s will, while unauthorized ghosts would most likely occur for historical figures or contemporary celebrities.</p><h4><strong>2. Deployment: </strong><em><strong>Was it built during the person&#8217;s life?</strong></em></h4><p>Some generative ghosts will be deployed post-mortem with the explicit purpose of memorializing the dead. But pre-mortem deployments allow the individual to tune the behavior and capabilities of their ghost. Generative clones of the living would benefit from being designed with mortality in mind, and should include specified modifications to their behavior and capabilities once they become ghosts.</p><h4><strong>3. Anthropomorphism: </strong><em><strong>Does it act as if it were the person?</strong></em></h4><p>The ghost may present itself either as a <em>reincarnation</em> of the deceased (e.g. speaking in the first person, saying: &#8220;I&#8217;ll never forget when I first saw you at the dance&#8221;), or as a <em>representation</em> of that person (e.g. 
speaking in the third person, saying, &#8220;He often spoke of the first time he saw you at the dance&#8221;). Design choices include whether the ghost uses the present or past tense when discussing the deceased; whether it adopts the name of the dead person or something different, such as &#8220;Fredbot&#8221;; and whether it is allowed to make statements that assert it is alive, possesses a soul, and so forth. </p><h4><strong>4. Multiplicity: </strong><em><strong>Do copies exist?</strong></em></h4><p>The creator might develop various ghosts with different behaviors, capabilities, or audiences. Multiple ghosts might also arise unintentionally, if various third parties create generative ghosts for a single individual, or through post-mortem identity theft or other crimes.</p><h4><strong>5. Cutoff: </strong><em><strong>Is it stuck in the past or evolving?</strong></em></h4><p>Evolving ghosts might change characteristics, diverging from the deceased over time. If a parent created a ghost of a deceased child, a cutoff date would result in a representation that perpetually evoked the appearance, diction, and maturity of a young child, whereas an evolving representation might &#8220;age.&#8221; A ghost could also evolve if new information about the individual or about the world were added to the model, ranging from news of the latest election to reports of the birth of a grandchild.</p><h4><strong>6. Embodiment: </strong><em><strong>Does it have a bodily form?</strong></em></h4><p>Embodiment might be literal and physical, via robotics, or take the form of rich digital media, such as avatars in mixed-reality environments. In contrast, purely virtual ghosts would lack embodiment, perhaps existing only as chatbots. 
Reasons to opt for a purely virtual ghost could include ethical or psychological concerns related to physical ghosts, or perhaps the costs associated with high-fidelity hardware or the compute needed for hosting rich multimedia representations.</p><h4><strong>7. Representee: </strong><em><strong>Is it simulating a person or an animal?</strong></em></h4><p>In addition to representing deceased humans, people may create ghosts representing non-humans, such as beloved pets. </p><h3><strong>The Benefits of a Ghost</strong></h3><p>Research has considered the <a href="https://journals.sagepub.com/doi/10.2190/OM.64.4.a">impact</a> of online memorials, responding to concerns that they might prolong grief. However, they may also allow the bereaved to <a href="https://dl.acm.org/doi/10.1145/1958824.1958843">maintain</a> a valued bond, often in a space where other grievers can gather. Generative ghosts could directly comfort survivors, who may take solace in knowing that a simulacrum of their loved one can still connect with present and future events. </p><p>Generative ghosts could also preserve personal and collective wisdom, as well as cultural heritage, such as the knowledge of dying languages, religions with few living adherents, or other cultural phenomena at risk of being forgotten. For instance, generative ghosts may be one way to preserve historical knowledge about events such as the Holocaust before the few remaining elderly survivors pass away.</p><p>Such ghosts could also enrich historical scholarship, anthropology, and museum curation, by allowing scholars or the public to interactively query representations from the past. For instance, generative ghosts could represent archetypes developed from historical records&#8212;a typical resident of Colonial Williamsburg, say, or a citizen of Pompeii. </p><p>Generative ghosts may also provide economic or legal benefits. 
The ghost might complement life insurance policies, if AI agents could participate in our economic system, earning income for descendants of the deceased, such as an author whose ghost continues to generate works in their style. AI ghosts could also help arbitrate disputes over a will.</p><p>The prospect of &#8220;living&#8221; after one&#8217;s own death may also assuage the distress of those who are dying. Generative clones&#8212;designed to become ghosts after an individual&#8217;s death&#8212;could also serve a critical role if a person were suffering from dementia or another degenerative disease. Even once incapacitated, the ghost-to-be could express its subject&#8217;s preferences about care. This could also trigger legal disputes&#8212;for instance, if an ailing person&#8217;s ghost-to-be and the survivors-to-be disagree on withdrawal of life-support.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pbiA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pbiA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png 424w, https://substackcdn.com/image/fetch/$s_!pbiA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png 848w, https://substackcdn.com/image/fetch/$s_!pbiA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png 1272w, 
https://substackcdn.com/image/fetch/$s_!pbiA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pbiA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png" width="1449" height="607" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:607,&quot;width&quot;:1449,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pbiA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png 424w, https://substackcdn.com/image/fetch/$s_!pbiA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png 848w, https://substackcdn.com/image/fetch/$s_!pbiA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png 1272w, 
https://substackcdn.com/image/fetch/$s_!pbiA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64e5012d-4749-43d1-a6bd-39a6c8f26b9d_1449x607.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The short film "Sweetwater," starring Michael Douglas and Kyra Sedgwick, tells of a celebrity's son interacting with the AI ghost of his late mother.</figcaption></figure></div><h3><strong>Risks of a Ghost</strong></h3><p>Four categories of possible harm are already evident: mental health; reputation; security; and 
sociocultural:</p><h4><strong>1. Mental Health</strong></h4><p>Scholars of grief distinguish between <em>adaptive</em> coping strategies that integrate the loss, and <em>maladaptive</em> coping behavior, which may obstruct healthy grieving, prolonging distress, anxiety and depression. </p><p>Interacting with a generative ghost may affect the bereaved&#8217;s ability to move past the death, favoring loss-oriented experiences (e.g., reminiscing while looking at old photos) at the expense of restorative-oriented experiences (e.g., developing new relationships). Both <a href="https://www.tandfonline.com/doi/abs/10.1080/074811899201046">forms</a> of experience can help cope with bereavement. But generative ghosts could draw mourners into persistent loss-oriented interaction, even initiating these with push notifications, rather than letting the bereaved decide how to engage. Already, some people find AI companions highly compelling, and the ghosts&#8217; basis in beloved individuals could amplify the risk of addiction. </p><p>Anthropomorphic delusion is among the most salient risks, if mourners become convinced that the generative ghost truly <em>is</em> the deceased rather than a computer program. A more extreme version would be deification, with survivors developing religious or supernatural beliefs about a generative ghost, treating it as an oracle in ways that are culturally atypical, and could alienate them from living companions, or encourage them to engage in risky behaviors at the AI&#8217;s suggestion.</p><p>Another risk is &#8220;<a href="https://link.springer.com/book/10.1007/978-3-030-91684-8">second death</a>,&#8221; as has happened in other digital contexts, when data becomes unavailable either through technical obsolescence, deletion, or lack of access, eliminating memorial messages. 
For AI ghosts, second deaths could occur for many reasons: the company that maintains the service goes out of business; survivors cannot afford maintenance fees; a government outlaws them; technological infrastructure renders a ghost obsolete; or a hacker deletes it.</p><h4><strong>2. Reputation</strong></h4><p>A generative ghost&#8217;s interactions might tarnish the memory of the deceased (&#8220;Your grandfather was racist!&#8221;) or directly hurt the living (&#8220;Dad says he always preferred my brother&#8221;).</p><p>Privacy breaches could occur too, if generative ghosts exposed information that the deceased would not have wanted revealed. Those who set up generative clones before death may anticipate such risks (&#8220;Don&#8217;t tell my spouse about the affair!&#8221;). But other revelations could emerge inadvertently&#8212;for example, if the AI inferred and revealed the deceased&#8217;s sexual orientation based on patterns in data, even though the person was closeted. Creating several ghosts, each with different knowledge or abilities, targeted at different audiences, might mitigate privacy risks.</p><p>Hallucination risks could arise too, leading a generative ghost to make false assertions about the deceased, tarnishing their memory and hurting survivors. The risk of a ghost spreading falsehoods might also arise through malicious activity, such as hacking a generative ghost.</p><p>Fidelity risks could occur too: human memories decay over time, but digital media defaults towards persistence, impeding the important role that forgetting and evolving memory can play.</p><h4><strong>3. Security </strong></h4><p>Identity thieves could interact with AI ghosts, prompting them to reveal sensitive information or raw data that might be used for financial gain. Criminals could also engage in ghost-hijacking, disabling access until mourners paid a ransom. 
</p><p>Hijackers might also surreptitiously change a generative ghost to harass or manipulate the bereaved, whether by modifying source code, by prompt-injection attacks, or by puppetry attacks that lead survivors to believe they are chatting with their AI ghost but are instead chatting with a hijacker.</p><p>Another security risk comes from generative ghosts whose creators explicitly design them to engage in harmful activities. For example, an abusive spouse might develop a generative ghost that continues to verbally and emotionally attack family members even after death. Malicious ghosts might also engage in illicit economic activities to earn income for the deceased&#8217;s estate, or to support various causes including criminal ones.</p><h4><strong>4. Sociocultural </strong></h4><p>If generative ghosts become widespread, this could introduce further impacts because of network effects, touching everything from the labor market, to social life, to politics, to history, to religion.</p><p>Economic activity by generative ghosts could impact wages and employment opportunities for the living, while also resulting in cultural stagnation if agents remain anchored to ideas or values from the past. </p><p>When it comes to social impacts, generative ghosts&#8212;especially if designed for engagement&#8212;could addict users to the artifice of a person who is gone, feeding anthropomorphic delusions, and worsening survivors&#8217; isolation. </p><p>If ghostly representations of political leaders exist, their public influence could persist long after their demise, in ways that have no precedent. How would the world differ if Gandhi were still voicing opinions before every Indian election? </p><p>Ghosts&#8212;whether based on public figures of the past, or evoking ancestors&#8212;could also misrepresent history, altering the record in ways that could affect contemporary conflicts. 
Even if ghost creators strive for accuracy about the past, they will be reliant on the datasets available, representing those who left abundant tracks while excluding the rest.  </p><p>Generative ghosts might also impact religious practices, given that beliefs around death are so intertwined with religion. This could change rituals and undermine credos. Major world religions might issue customized versions of such technologies, modified to support interactions aligned with their beliefs.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6ElL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6ElL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!6ElL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!6ElL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!6ElL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!6ElL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6ElL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!6ElL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!6ElL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!6ElL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92d5f01-a149-4456-83eb-1383dcf2f96e_1024x559.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Why Design Matters</strong></h3><p>Developers must pay close attention to interfaces, and their effect on interaction. This means investing in user studies and social-science research to understand what increases prominent risks, such as anthropomorphism, and how attributes of the bereaved and their contexts may contribute to mental-health risks.</p><p>Whether a ghost is designed to act as a third-person representation or as a first-person reincarnation seems particularly important. A forthcoming study from Jed Brubaker&#8217;s lab at the University of Colorado Boulder shows how powerfully the bereaved may feel the resonance of ghosts that purport to be their beloved. &#8220;I can see her. 
I can feel her,&#8221; one study participant remarked, after just a dozen typed exchanges. &#8220;It just feels like I&#8217;m getting the closure I needed so bad.&#8221; </p><p>Seemingly, this amounts to a benefit from ghost interaction. Yet the study participants&#8212;touched so profoundly and so fast&#8212;also foresaw how easily interacting with a ghost could precipitate emotional dependence. </p><p>This suggests that designers should proceed with great caution when considering whether to make ghosts speak <em>as</em> the deceased or <em>about</em> the deceased. Yet even this distinction may not suffice: the same study provided early evidence that users may default to assuming they are talking with the departed, even if the ghost speaks about the deceased in the third person. </p><p>Embodiment could present even more perilous issues&#8212;for instance, if an AI ghost speaks from a robot that resembles the person. </p><p>The use of &#8220;dark patterns&#8221; in design&#8212;exploiting human cognitive biases to nudge users toward behavior they&#8217;d prefer to avoid&#8212;would be especially concerning. What would be the equivalent of &#8220;push notifications&#8221; for a generative ghost? Perhaps ghosts should speak only when spoken to.</p><p>Ghosts might even proactively guard against likely harms&#8212;for instance, monitoring interactions for signs of overuse. In response, a system might offer referrals to mental-health professionals, or reduce its fidelity to the deceased, or cut the hours during which it is available. </p><p>Another key issue is the endpoint of a ghost. Should they be programmed to fade? Or are they immortal? A short-lifespan ghost might be appropriate for the immediate grieving period, or for practical matters, such as managing an estate. In other cases, long-term ghosts could be suitable&#8212;for instance, for education, or maintaining archives, or to preserve the legacy of a cultural figure for future generations. 
</p><h3><strong>Preparing for the Afterlife</strong></h3><p>Policymakers face a range of governance questions. </p><p>Which actions can a ghost take on behalf of the deceased, and which must it never undertake? Can a generative ghost continue to perform paid labor on behalf of the deceased? Can it represent the deceased in legal disputes, perhaps expressing its will over how the estate is dispersed? Can it help manage trusts on behalf of the deceased? Can it be consulted regarding end-of-life decisions, if the representee is medically incapacitated? Should estate-planning define when a generative ghost may be terminated? What happens to the associated data? </p><p>Generative ghosts also introduce concerns about privacy and consent. Third-party ghosts might violate the preferences and the privacy of the deceased, particularly if developed for financial gain by entities unconnected to the person. They may also emotionally injure the person&#8217;s survivors. Therefore, governance also needs to consider who can create ghosts. </p><p>Policies might differ between private individuals and public figures, perhaps allowing more permissive rules for generative ghosts of distant historical figures as opposed to public figures whose deaths were recent. By way of example, a fan of the late comedian George Carlin, who died in 2008, created an <a href="https://www.theguardian.com/technology/2024/jan/26/george-carlin-lawsuit-ai-standup-comedy-special">unauthorized</a> comedy special in 2024, using AI technology to mimic Carlin&#8217;s voice and persona. Carlin&#8217;s surviving daughter expressed great distress over the matter.</p><p>Policymakers may also need to block the commercial exploitation of people made vulnerable by ghost relationships. Besides falling into delusional relationships, some might become so emotionally tied to their ghosts as to be susceptible to price-gouging. 
Additionally, if the standard costs of maintaining high-fidelity AI replicas rose, this might create new digital divides, with poorer families unable to create or maintain ghosts of their loved ones. </p><p>Rules could also cover whether a person&#8217;s survivors have the right to terminate a ghost, and what obligations the hosting services have to provide data to survivors in the event of service termination, whether due to discontinued products, or the failure of an estate to pay. An emergency override may be necessary too, in case of hacking, or if a generative ghost is abusing the living.</p><p>Future generative ghosts are likely to be far more varied than today&#8217;s griefbots. By way of illustration, a recent speculative-design workshop (conducted by Brubaker in collaboration with Larissa Hjorth and scholars at RMIT University) presented a range of novel ideas, from an interactive scrapbook of ancestors who offer accounts of their lives, to an AI &#8220;placemat&#8221; that could generate responses in the guise of a deceased friend or family member, allowing them to still attend dinners.</p><p>Many ghostly scenarios sound jarring, even offensive to some, pushing as they do against deep cultural traditions. Yet social technologies often seem alarming on first appearance. They may gain adherents over time, and gradually budge the culture&#8212;perhaps until the day when a little boy watching a ghost read his bedtime story is nothing strange at all.</p><p>As never before, our future may be haunted by our past.</p><div><hr></div><p><em><strong>This article is based on the paper </strong></em>Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives<em><strong> by <a href="https://scholar.google.com/citations?user=eJsW6W8AAAAJ&amp;hl=en&amp;oi=ao">Meredith Ringel Morris</a> and <a href="https://scholar.google.com/citations?user=8LEH940AAAAJ&amp;hl=en&amp;oi=ao">Jed R. Brubaker</a>. 
For more insights on generative ghosts, please read their full paper <a href="https://dl.acm.org/doi/epdf/10.1145/3706598.3713758">here</a>. </strong></em></p><div class="pullquote"><p><em><strong>***Meredith Morris</strong> and <strong>Jed Brubaker</strong> appear at a <a href="https://schedule.sxsw.com/events/PP1162381">panel</a> on &#8220;Generative Ghosts&#8221; on March 17 during South By Southwest in Austin, Texas, along with <strong>Iason Gabriel</strong> (senior staff research scientist at Google DeepMind) and <strong>Dylan Thomas Doyle</strong> (post-doctoral researcher at the University of Colorado Boulder)<strong>***</strong></em></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/ghosts-the-ai-afterlife?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/p/ghosts-the-ai-afterlife?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><div><hr></div><h3><strong>5 Policy Questions </strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ilkk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ilkk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png 424w, 
https://substackcdn.com/image/fetch/$s_!Ilkk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png 848w, https://substackcdn.com/image/fetch/$s_!Ilkk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!Ilkk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ilkk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ilkk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png 424w, 
https://substackcdn.com/image/fetch/$s_!Ilkk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png 848w, https://substackcdn.com/image/fetch/$s_!Ilkk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!Ilkk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9b22f9-31aa-4e78-920c-52cefcc4e9d0_1600x1600.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">(Credit: Seb Krier/Midjourney 6.1)</figcaption></figure></div><ol><li><p><strong>When someone dies without creating a ghost, who owns their &#8220;digital spirit&#8221;? </strong>The family? The data-generating platforms? The AI developer? Should the deceased have a right to rest in peace by specifying a wish not to have a digital representation created posthumously?<strong><br></strong></p></li><li><p><strong>Generative ghosts may affect public beliefs about history. </strong>How do we manage the risks of distortion, including the exclusion of those who do not appear in datasets?<strong><br></strong></p></li><li><p><strong>Generative ghosts are not just reciting facts; they&#8217;ll fill in the gaps. Could synthetic content end up replacing a survivor&#8217;s recollections of the deceased?</strong> Should AI-ghost design strive to curtail this, or allow the users&#8217; relationships with their ghosts to evolve however they may? <strong><br> </strong></p></li><li><p><strong>If particular generative-ghost apps become dominant, could this homogenize how people in different cultures experience death and mourning?<br></strong></p></li><li><p><strong>What does &#8220;healthy&#8221; use of generative ghosts look like immediately following a death versus 10 years later? </strong>How should we evaluate differing use cases, ranging from maintaining family history, to therapeutic aides, to archival?</p></li></ol><p></p>]]></content:encoded></item><item><title><![CDATA[The Human Demotion]]></title><description><![CDATA[Science has humbled us before. 
Will AI deliver another blow?]]></description><link>https://www.aipolicyperspectives.com/p/the-human-demotion</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/the-human-demotion</guid><dc:creator><![CDATA[Tom Rachman]]></dc:creator><pubDate>Wed, 11 Feb 2026 12:41:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!j5wI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!j5wI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!j5wI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 424w, https://substackcdn.com/image/fetch/$s_!j5wI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 848w, https://substackcdn.com/image/fetch/$s_!j5wI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!j5wI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!j5wI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!j5wI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 424w, https://substackcdn.com/image/fetch/$s_!j5wI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 848w, https://substackcdn.com/image/fetch/$s_!j5wI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!j5wI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc12a53e6-1754-4da1-8d1a-28f70cfa02ae_1600x873.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">(All images: Gemini)</figcaption></figure></div><p><strong>After millennia of supremacy, we await our demotion. You can detect the trembling.</strong> </p><p>It&#8217;s found in the anxious insistence that artificial intelligence isn&#8217;t <em>truly </em><a href="https://mindmatters.ai/2025/09/surprise-artificial-intelligence-is-still-just-automation/">intelligent</a>. 
Or that using AI is a <a href="https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html">cheat</a>, a <a href="https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art">perversity</a>, a <a href="https://www.theredhandfiles.com/chat-gpt-what-do-you-think/">turf violation</a>.</p><p>The trembling intensifies with a disturbing thought: What if those flares behind your eyes&#8212;the bursts of wit and the worry, the storyboards of memory, so many yearnings&#8212;what if everything was just computation? Because our &#8220;computers&#8221; are yesterday&#8217;s model, no updates available.</p><p>&#8220;I think about it practically all the time, every single day. And it overwhelms me and depresses me in a way that I haven&#8217;t been depressed for a very long time,&#8221; the cognitive scientist Douglas Hofstadter <a href="https://www.youtube.com/watch?v=lfXxzAVtdpU&amp;t=1892s">said</a> recently. For much of his professional life, Hofstadter has contemplated the mind, writing a seminal 1979 book&#8212;<em>G&#246;del, Escher, Bach</em>&#8212;that looped through art, mathematics, and computation, inspiring a generation of nerds to work on artificial intelligence.</p><p>Their efforts moved faster than Hofstadter ever expected. Now, he spends his waning years observing the species wince toward redundancy. &#8220;I don&#8217;t want to say &#8216;deserving of being eclipsed.&#8217; But it almost feels that way,&#8221; he says. &#8220;And rightly so, because we&#8217;re so imperfect, and so fallible.&#8221;</p><p>When humiliated, people corrode or explode. Often, both. But whom to blame? Will humans seek revenge on software, or data centers, or robots? We&#8217;ll depend on them all. 
More likely is that humans visit their wrath upon each other.</p><p>Freud <a href="https://www.freud.org.uk/2001/02/12/the-human-genome/">said</a> that science had delivered a series of blows to our collective ego, and an update to his narrative has <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3844367">bubbled up</a> in recent years, with thinkers <a href="https://www.persuasion.community/p/the-third-humbling-of-humanity">proposing</a> AI as our new humbling. The first blow was Copernicus, revealing that humans were not the center of the universe. The second blow was Darwin, downgrading us from God&#8217;s chosen species to distant relatives of the toad, the centipede, and the hammer-headed bat.</p><p>Now comes the cognitive humiliation, when people are eliminated from every leaderboard. It&#8217;s a demotion that may haunt humanity, perhaps seeping into future conflicts.</p><p>Or maybe not. Maybe the notion of a species-level humiliation is just psychoanalytic melodrama. After all, people don&#8217;t share an ego. How could we synchronously plunge into the same bile?</p><p>Yet the past shows that groups <em>can </em>rage over perceived humiliation. 
History is spattered with such cases.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!t22x!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!t22x!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!t22x!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 848w, https://substackcdn.com/image/fetch/$s_!t22x!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!t22x!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!t22x!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!t22x!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!t22x!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 848w, https://substackcdn.com/image/fetch/$s_!t22x!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!t22x!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f5f6d93-ce12-463b-9980-799d01464b27_1600x873.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>What <em>Is</em> Humiliation?</h3><p>Your face shoved into the dirt, held there for all to see, no power to fight back. The word &#8220;humiliation&#8221; <a href="https://www.etymonline.com/word/humiliation">comes</a> from the Latin for &#8220;earth,&#8221; as if your status had been stamped into the soil. Yet humiliation is not so readily rinsed away as dirt. In self-torture, the humiliated cast around for villains, aching for a way to expiate their anguish.</p><p>&#8220;To have thoughts of revenge without the strength or courage to execute them means to endure a chronic suffering, a poisoning of body and soul,&#8221; Nietzsche <a href="https://en.wikipedia.org/wiki/Human,_All_Too_Human">observed</a>, adding elsewhere that &#8220;we attack not only to hurt a person, to conquer him, but also, perhaps, simply to become aware of our own strength.&#8221;</p><p>For early humans, humiliation may have meant catastrophic exclusion from the tribe, leading to starvation, rejection by mates, violent predation. 
So, we evolved a panicked drive to clamber up from the ground, even if it meant pulling down another person in our place.</p><p>As Joslyn Barnhart <a href="https://www.jstor.org/stable/10.7591/j.ctvq2w1b8">explains</a> in <em>The Consequences of Humiliation: Anger and Status in World Politics,</em> &#8220;Humiliated states often seek to overcome their sense of helplessness by demonstrating efficacy through acts of aggression targeting third-party states that played no role in the original humiliating event.&#8221;</p><p>Hitler howled about German <a href="https://www.ibiblio.org/pha/policy/1940/1940-07-19b.html">humiliation</a> in the World War I surrender, and destroyed half of Europe to seek recompense. Osama bin Laden triggered a global war because of perceived Western <a href="https://www.theguardian.com/world/2001/oct/07/afghanistan.terrorism15">humiliation</a> of the Islamic world. Putin bemoaned the &#8220;<a href="https://docs.un.org/en/S/2022/154">degradation</a>&#8221; of Russia at the hands of NATO after the Cold War to justify his 2022 invasion of Ukraine.</p><p>But those cases involved groups supposedly suffering disgrace at the hands of other groups. Could we feel humiliated by <em>technology</em>?</p><p>The first question is whether humans even identify as a species. The answer will probably fluctuate, given that we have many parts to our identity which become more or less salient according to context. Perhaps you identify by gender in a crowd of the opposite sex, but by your language when abroad. 
As Ronald Reagan once argued, a threat to all people could raise the salience of species identity.</p><p>&#8220;In our obsession with antagonisms of the moment, we often forget how much unites all the members of humanity,&#8221; the president <a href="https://www.reaganlibrary.gov/archives/speech/address-42d-session-united-nations-general-assembly-new-york-new-york">said</a>, in a 1987 speech at the United Nations. &#8220;I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world.&#8221;</p><p>Humanity did face an alien threat recently: Covid. And our differences did vanish&#8212;briefly. But human unity dissolved when the pandemic affected groups in varying ways. This suggests that human solidarity requires not just a common <em>threat </em>but common <em>consequences</em>.</p><p>In short, AI humiliation may depend on how uniformly our species is downgraded, and who is raised up.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EQEZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EQEZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!EQEZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 848w, 
https://substackcdn.com/image/fetch/$s_!EQEZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!EQEZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EQEZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EQEZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!EQEZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 848w, 
https://substackcdn.com/image/fetch/$s_!EQEZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!EQEZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0980857-ef88-4be4-872e-d3c097f0fa65_1600x873.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>We, The Bottlenecks</h3><p>Few researchers are studying what AI success could do to our collective self-esteem. 
Hints come from economists feverishly forecasting impacts on the job market. But psychologists (and politicians) ought to forecast what happens when the only animal to create guns has nothing much to do anymore.</p><p>&#8220;I&#8217;ve been suffering from fits of dread,&#8221; the philosopher Harvey Lederman <a href="https://scottaaronson.blog/?p=9030">wrote</a> recently. &#8220;Does the coming automation of work foretell, as my fits seem to say, an irreparable loss of value in human life?&#8221; Lederman acknowledges that most jobs are lousy, but he can&#8217;t help grieving the demise of human pursuit. &#8220;We may be some of the last to enjoy this brief spell, before all exploration, all discovery, is done by fully automated sleds.&#8221;</p><p>When the philosopher Nick Bostrom envisaged troubling tech futures in his 2014 book <em><a href="https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies">Superintelligence</a></em>, his ideas stirred the AI-safety movement. Lately, he has shifted from weird dystopias to weird utopias&#8212;specifically, what happens if automation makes us redundant.</p><p>In a future of &#8220;shallow redundancy,&#8221; he says in his 2024 book <em><a href="https://books.google.co.uk/books/about/Deep_Utopia.html?id=Ylms0AEACAAJ&amp;source=kp_book_description&amp;redir_esc=y">Deep Utopia: Life and Meaning in a Solved World</a></em>, we become like aristocrats of yore, indulging in fancies, no longer dependent on what one <em>does</em> as a measure of what one is <em>worth</em>. Far more disconcerting is &#8220;deep redundancy,&#8221; when tech becomes so effective that human involvement only worsens each outcome.</p><p>Exercise might seem pointless if biotech offered a way to instantly make your body healthy and beautiful. Skipping the sweaty workout might not trouble you. 
But what if future humans would bungle child-rearing when compared with AI nannies, meaning that nurturing your offspring would <em>worsen</em> your kid&#8217;s life?</p><p>Primitive versions of this dilemma are nearing, as when human drivers endanger lives compared with <a href="https://www.understandingai.org/p/very-few-of-waymos-most-serious-crashes">self-driving cars</a>. &#8220;Human in the loop&#8221; could flip from a safety promise to a threat. Meritocracy would mean that no humans need apply.</p><p>The bookworm economist Tyler Cowen cites people as the great obstacle to explosive AI growth. During a public event, he pointed at the audience, smiling toward the human &#8220;bottlenecks&#8221; before him. &#8220;Here they are: bottleneck, bottleneck. Hi, good to see you! And some of you are terrified. <em>You </em>are going to be even bigger bottlenecks,&#8221; he <a href="https://www.dwarkesh.com/p/tyler-cowen-4">said</a>. &#8220;But my goodness, once it starts changing what the world looks like, there will be much more opposition. Not necessarily on what I&#8217;d call doomster grounds. But people [saying], like: &#8216;Hey, I see this has benefits, but I grew up, trained my kids to live in some other kind of world. I don&#8217;t want this!&#8217; And that&#8217;s going to be a massive fight.&#8221;</p><p>The most agonizing aspect of our demotion could be social, once someone prefers a machine to you. You&#8217;re seeing precursors every time family members opt to gaze at a screen rather than gaze at you. We blame smartphones, and social media, and the adolescent brain.</p><p>But wait till your spouse jilts you for a <em>personified</em> agent. That rejection may feel unbearable: you can&#8217;t compete anymore. 
And once your loved ones prefer <a href="https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness">AI companions</a>, you might seek them for yourself, spreading the social downgrade of our kind.</p><p>Already, the dread is becoming political, with odd <a href="https://superintelligence-statement.org/">alliances</a> forming among right-wing politicos, liberal artsy types and religious traditionalists, united in horror at an imagined future of <a href="https://arxiv.org/pdf/2501.16946">disempowered</a> humanity, stripped of dignity, obsolete. You can imagine tomorrow&#8217;s political opportunist, eyeing a dejected crowd of humans before him, and thundering: &#8220;How <em>dare</em> they?!&#8221;</p><p>Will he mean the machines?</p><h3>The Downwardly Mobile Species (Part I)</h3><p>In prehistoric times, nothing seemed more unreachable than the night sky, specked with glinting dots and streaked with rare comets, passing in silent mystery. Humans pictured the supernatural looking down: <em>we</em> were the subjects in this bewildering story.</p><p>Religions codified the firmament above, mapping our world to the centerpoint. 
But Nicolaus Copernicus redrew the heavens with <em><a href="https://en.wikipedia.org/wiki/De_revolutionibus_orbium_coelestium">De revolutionibus orbium coelestium</a></em> in 1543, plucking our globe from the core, and replacing it with the Sun.</p><p>&#8220;And new philosophy calls all in doubt,&#8221; the English poet John Donne <a href="https://www.poetryfoundation.org/poems/44092/an-anatomy-of-the-world">said</a> in &#8220;An Anatomy of the World,&#8221; written in 1611:</p><blockquote><p>The element of fire is quite put out,</p><p>The sun is lost, and th&#8217;earth, and no man&#8217;s wit</p><p>Can well direct him where to look for it.</p></blockquote><p>Science corrected an astronomical falsehood, but human confidence relies on falsehoods. &#8220;Tis all in pieces,&#8221; Donne wrote, &#8220;all coherence gone.&#8221;</p><p>The revised cosmos demoted each human into &#8220;a puny, irrelevant spectator,&#8221; the American philosopher Edwin A. Burtt <a href="https://archive.org/details/metaphysicalfoun00burtuoft/page/236/mode/2up?q=dante">wrote</a> in 1925. &#8220;The gloriously romantic universe of Dante and Milton, that set no bounds to the imagination of man as it played over space and time, had now been swept away.&#8221;</p><blockquote><p>The world that people had thought themselves living in&#8212;a world rich with colour and sound, redolent with fragrance, filled with gladness, love and beauty, speaking everywhere of purposive harmony and creative ideas&#8212;was crowded now into minute corners in the brains of scattered organic beings. The really important world outside was a world hard, cold, colourless, silent, and dead; a world of quantity, a world of mathematically computable motions in mechanical regularity.</p></blockquote><p>The Church tried to snuff out the astronomical heresy, which challenged its claim as holder of truth. But suppression only fed into hostility from Northern Europe over the influence of Rome. 
In the bloody century after Copernicus, wars over religion and political control cost millions of European lives. It would be a wild distortion to suggest that a blow to human narcissism caused this. More plausible is that disruption of the cosmic hierarchy reverberated with the changing order on Earth.</p><p>And so the scientific revolution proceeded, with feats of mind illuminating more of the dark universe around us. People had greater reason than ever to admire our species. Inevitably, the scrutiny of science turned from the heavens to the humans.</p><p>&#8220;Man&#8217;s destiny was no longer determined from &#8216;above&#8217; by a super-human wisdom and will, but from &#8216;below&#8217; by the sub-human agency of glands, genes, atoms, or waves of probability. This shift of the locus of destiny was decisive,&#8221; Arthur Koestler <a href="https://en.wikipedia.org/wiki/The_Sleepwalkers:_A_History_of_Man%27s_Changing_Vision_of_the_Universe">wrote</a> in his 1959 book <em>The Sleepwalkers: A History of Man&#8217;s Changing Vision of the Universe</em>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qbpc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qbpc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!qbpc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png 848w, 
https://substackcdn.com/image/fetch/$s_!qbpc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!qbpc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qbpc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qbpc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!qbpc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png 848w, 
https://substackcdn.com/image/fetch/$s_!qbpc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!qbpc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd51ef5-4d86-4a34-a4e2-fa59a3dfd2f4_1600x873.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>The Downwardly Mobile Species (Part II)</h3><p>If Copernicus hurled humanity into orbit, Darwin deposited our species in an awkward 
family tree. Previously, the Western vision was of a great chain with God at the top, angels below, then humans, and finally the dimwitted beasts. The prospect of sharing more than a planet with our hairy former underlings proved too alarming for many to accept, provoking <a href="https://profjoecain.net/scopes-monkey-trial-1925-complete-trial-transcripts/">disputes</a> about our relationship to <a href="https://www.frontiersin.org/journals/environmental-science/articles/10.3389/fenvs.2023.1175143/full">nature</a> that persist today.</p><p>For some, our new self-concept broadened moral consideration to include the natural world, motivating environmental protections, and the fight against animal cruelty. But another response was darker, with &#8220;<a href="https://en.wikipedia.org/wiki/Survival_of_the_fittest">survival of the fittest</a>&#8221; twisted from a description of natural processes into a supposed mandate for the most inhuman of human drives: to dehumanize the vulnerable. Horrors followed, from colonial genocide, to the eugenics movement, to the Holocaust.</p><p>But again, you cannot ascribe such evils to a puncture in human vanity. A more reasonable claim is that the world lurches into periods of volatility, and the prevailing beliefs about human worth at those times will condition how we treat each other, and how conflicts unfold.</p><p>After the atrocities of World War II, our species set moral boundaries into law, seeking to universalize <em>human</em> rights. The spread of democracy and the free market too amounted to a veneration of human wisdom. But in the digital age, humanity seems to be losing <a href="https://www.theglobeandmail.com/opinion/article-humans-are-losing-confidence-in-humankind/">confidence</a> in humankind.</p><p>Faith in democracy <a href="https://www.pewresearch.org/short-reads/2025/06/30/dissatisfaction-with-democracy-remains-widespread-in-many-nations/">falls</a>. 
The Global Financial Crisis smashed public confidence in our governing systems. And the bewitching power of algorithms has become a constant lament.</p><h3>Resist. Resign. Rewire.</h3><p>When Alan Turing <a href="https://courses.cs.umbc.edu/471/papers/turing.pdf">proposed</a> his test of machine thinking, he foresaw that the notion would rattle people, and reviewed a list of likely objections, versions of which you hear today:</p><ul><li><p>&#8230;that artificial intelligence could never genuinely be kind, or fall in love, or &#8220;enjoy strawberries and cream&#8221;</p></li><li><p>&#8230;that God gave only humans a soul</p></li><li><p>&#8230;that machines will never create anything truly original</p></li></ul><p>What Turing called the &#8220;heads in the sand&#8221; objection is especially prevalent, with contemporary ostriches insisting that AI is a hype mirage, that it&#8217;s just next-token prediction, nothing but pattern-recognition, regurgitating human thoughts, that AI errors are proof of its worthlessness. (Human errors never lead to that conclusion.)</p><p>Plenty of AI hype <em>does</em> circulate. And deployment will be fitful: sometimes worryingly fast, sometimes frustratingly slow. An investment bubble could burst.</p><p>But the technology is amazing already, useful already&#8212;and we&#8217;ve hardly begun to figure out its uses. 
Meanwhile, AI dutifully hurdles more obstacles each month.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Zy62!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Zy62!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!Zy62!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png 848w, https://substackcdn.com/image/fetch/$s_!Zy62!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!Zy62!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Zy62!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c99547ec-881e-4543-aa03-55a962a6b481_1600x873.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Zy62!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!Zy62!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png 848w, https://substackcdn.com/image/fetch/$s_!Zy62!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!Zy62!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc99547ec-881e-4543-aa03-55a962a6b481_1600x873.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>Who <em>Are</em> We?</h3><p>In cautionary tales, humans who create artificial life always overlook a key trait. The golem of Jewish folklore was brought forth from clay but lacked smarts, so ran amok. Frankenstein&#8217;s monster missed out on looks, and never got over it. Pinocchio craved a human soul. As tech advanced, the missing trait updated, becoming human empathy, missing from all those immoral bots in everything from <em>2001: A Space Odyssey</em>, to <em>The Terminator</em>, to <em>The Matrix</em>.</p><p>Such stories flattered humanity: among all creations, we alone enjoy the full complement of qualities. But lately, the narrative has updated again, with thinking machines now flickering with hints of greater humanity than the humans who employ them, from Spielberg&#8217;s <em>A.I. 
Artificial Intelligence</em>, to <em>Ex Machina</em>, to the novel <em>Klara and the Sun</em>, by Kazuo Ishiguro.</p><p>It&#8217;s as if culture senses an anxiety about what technology might expose, not just demoting us cognitively but snuffing out any human exceptionalism. Unless we intend to boast of our frailties. Increasingly, we do.</p><p>&#8220;In a world where everything can be perfected, imperfection becomes a signal,&#8221; the head of Instagram, Adam Mosseri, <a href="https://www.instagram.com/p/DS7pz7-DuZG/">wrote</a> recently. &#8220;Rawness isn&#8217;t just aesthetic preference anymore&#8212;it&#8217;s proof. It&#8217;s defensive.&#8221;</p><p>Or as the Indian filmmaker Shakun Batra <a href="https://www.hollywoodreporterindia.com/features/interviews/shakun-batra-on-artificial-intelligence-ai-the-getaway-car-and-raanjhanaa-controversy">remarked</a> in defense of human authorship over machine-generated scripts: &#8220;AI doesn&#8217;t have childhood trauma.&#8221;</p><p>At the AI frontier, another thought lurks, inverting Turing&#8217;s 1950 question. Not, &#8220;Can machines think?&#8221; But, &#8220;Do <em>humans</em> think?&#8221; More precisely, do we reason and comprehend uniquely, as we&#8217;ve presumed?</p><p>Machines compose music. They propose vacation itineraries. They&#8217;ll suggest how to talk to a moody teenager. Each additional AI capability is an implicit downgrade of us, a suggestion that maybe the human mind itself is just an information-processor.</p><p>Computer geeks have long muttered about this possibility. Philosophers debated it in thought-experiments. 
Cognitive scientists scrutinized our gray matter for clues.</p><p>But what approaches is a public dawning, forcing the culture to digest the indigestible, much as happened in previous eras, when people confronted the bizarre notion that our planet was another rock spinning around another star, or that our species was just another animal.</p><p>The third shocking revelation is upon us. Maybe it&#8217;s computation all the way down. Maybe there&#8217;s nothing soulful in neural substrates. What if we&#8217;re all just &#8220;<a href="https://mindmatters.ai/2025/05/why-the-human-mind-is-not-and-cannot-be-a-meat-computer/">meat computers</a>&#8221;?</p><h3>How We&#8217;ll React</h3><p>You can predict three possible responses to our humbling: <em>Resist</em>, <em>Resign</em>, or <em>Rewire</em>.</p><ol><li><p><strong>Resist</strong>: Psychological resistance will manifest as political resistance. The question is how ideology and parties evolve around AI humiliation. Resistance movements will face a persistent challenge: industrial dynamics will keep driving this technology forward. Any country that curbs innovation fears that its rivals will win. The most aggressive branches of <em>resist</em> may seek to avenge their perceived humiliation. The question is not only whom they blame or how they exact revenge. It&#8217;s what, realistically, they expect to regain.<br></p></li><li><p><strong>Resign</strong>: Some will reframe their view of humanity to accept the humbling. 
The optimistic version is that people discover freedom in their new humility, pursuing what improves life rather than grinding under the force of insatiable ambition. In short, we cede the battle for supremacy but flourish. A more pessimistic version is that losing faith in our species&#8217; unique worth makes people value others less: when humanness is no longer special, perhaps human rights aren&#8217;t either.<br></p></li><li><p><strong>Rewire</strong>: This may be the most widespread response. People accept that the downgrade happened, yet their egos are never tamed, much as chess remains popular long after machines defeated us. A more literal &#8220;rewiring&#8221; is transhumanism, with technology incorporated into our bodies, even altering our genetic future. Conspiracists picture shadowy elites consolidating power by becoming a tech-altered superspecies, leaving behind &#8220;legacy humans.&#8221; A more plausible scenario is biotech gradually elevating human cognitive capacity, much as today&#8217;s medical tech remedies physical frailties, from hearing aids to the replacement knee.</p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RDFD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RDFD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!RDFD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png 
848w, https://substackcdn.com/image/fetch/$s_!RDFD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!RDFD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RDFD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RDFD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png 424w, https://substackcdn.com/image/fetch/$s_!RDFD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png 848w, 
https://substackcdn.com/image/fetch/$s_!RDFD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png 1272w, https://substackcdn.com/image/fetch/$s_!RDFD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3bced0c-aa26-4404-82b7-4af01089be8f_1600x873.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>The Next Quest</h3><p>If humankind suffered humiliation before, we weathered it. 
After Copernicus and Darwin, we still took pride in ourselves. Indeed, we celebrated human accomplishments more than ever, from Bach to Escher to G&#246;del. And that pride propelled us into this strange time, when human greatness may design human demotion.</p><p>Policymakers need to think about more than the economic shock. The psychological toll could be exorbitant if we are chased from the kitchen like pesky children, and told to go busy ourselves elsewhere.</p><p>Our downgrade doesn&#8217;t necessarily mean conflict. But it could change how future conflicts unfold, especially if we value humans differently, or seek relief from our humiliation by shoving others into the dirt.</p><p>Much depends on how we redefine our species. Whether humans really <em>are</em> nothing but computational machines may matter less than whether people <em>feel</em> this way.</p><p>But what will make us exceptional? Today&#8217;s responses are often vague and circular: that humans are better at doing human things. That is a precarious claim. As intelligent machines grow more adept, few people will pay a human premium for a worse outcome.</p><p>Unless there really <em>are</em> qualities both valuable and uniquely ours that nothing can supplant. 
Finding these may be our new quest.</p>]]></content:encoded></item><item><title><![CDATA[AI Manipulation ]]></title><description><![CDATA[A discussion with Sasha Brown, Seliem El-Sayed, and Canfer Akbulut]]></description><link>https://www.aipolicyperspectives.com/p/ai-manipulation</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/ai-manipulation</guid><dc:creator><![CDATA[Tom Rachman]]></dc:creator><pubDate>Thu, 05 Feb 2026 12:53:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cz8J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>The notion of AIs manipulating people is a plot twist in countless sci-fi thrillers. But is &#8220;manipulative AI&#8221; really possible? If so, what might it look like?</em></p><p><em>For answers, AI Policy Perspectives sat down with <a href="https://scholar.google.com/citations?user=C_jFd80AAAAJ&amp;hl=en&amp;oi=ao">Sasha Brown</a>, <a href="https://scholar.google.com/citations?hl=en&amp;user=Y8jVaBIAAAAJ&amp;view_op=list_works">Seliem El-Sayed</a>, and <a href="https://scholar.google.com/citations?user=wiqnjDwAAAAJ&amp;hl=en">Canfer Akbulut</a>. 
They&#8217;ve published <a href="https://arxiv.org/pdf/2404.15058">research</a> on harmful manipulation for Google DeepMind and help scrutinize forthcoming models to safeguard against deceptive practices, from gaslighting to emotional pressure to plain lying.</em></p><p><em>How, we wondered, do researchers run realistic experiments on the manipulative powers of AI without harming participants? Could AI&#8217;s &#8220;thoughts&#8221; help catch an AI in the act of manipulation? And what else can developers do to detect signs of manipulation?</em></p><p>&#8212;Tom Rachman, <em>AI Policy Perspectives</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cz8J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cz8J!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png 424w, https://substackcdn.com/image/fetch/$s_!cz8J!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png 848w, https://substackcdn.com/image/fetch/$s_!cz8J!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png 1272w, https://substackcdn.com/image/fetch/$s_!cz8J!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!cz8J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png" width="1024" height="572" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/206e194a-9018-41db-a123-7583aed33e85_1024x572.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:572,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cz8J!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png 424w, https://substackcdn.com/image/fetch/$s_!cz8J!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png 848w, https://substackcdn.com/image/fetch/$s_!cz8J!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png 1272w, https://substackcdn.com/image/fetch/$s_!cz8J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F206e194a-9018-41db-a123-7583aed33e85_1024x572.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Source: Gemini </figcaption></figure></div><p>[Interviews edited and condensed]</p><p><strong>Tom: You&#8217;re careful to distinguish </strong><em><strong>persuasion </strong></em><strong>from </strong><em><strong>manipulation</strong></em><strong>. Why?</strong></p><p><strong>Sasha: </strong>To persuade somebody is to influence their beliefs or actions in a way that the other person can, in theory, resist. When you <em>rationally</em> persuade somebody, you appeal to their reasoning and decision-making capabilities by providing them with facts, justifications, and trustworthy evidence. We&#8217;re happy with that much of the time. 
In contrast, when you <em>manipulate</em> somebody, you trick them into doing something, whether by hiding certain facts, presenting something as more important than it is, or putting them under pressure. Compared to other forms of persuasion, manipulation is often harder to detect and harder to resist.</p><p><strong>Tom: I could imagine three forms of manipulative AI. One: people employing AIs to deliberately change others&#8217; beliefs or behaviour. Two: AIs manipulating people for their own ends. Three: AIs inadvertently manipulating. Which are we talking about?</strong></p><p><strong>Seliem: </strong>At the moment, we&#8217;re mainly concerned with people misusing AIs to manipulate other people, and AIs inadvertently manipulating. But an AI manipulating for its own ends is also a complex and important question that we and others are studying.</p><p><strong>Tom: What are some concrete harms that might result from manipulative AI? Are we talking about mass fraud? Something else?</strong></p><p><strong>Sasha:</strong> AI could become a first resort for different kinds of advice. Think of a user asking questions about which diet to follow, or how to respond to an official letter. The AI might provide helpful input. But other people might want to interfere&#8212;they may want the individual to follow a particular diet, or to give a different response to that official letter. More broadly, somebody could deploy an AI agent to infiltrate communities, and employ manipulative tactics to change people&#8217;s beliefs, without their knowledge or consent.</p><p><strong>Canfer: </strong>Anecdotally, I have heard that some people are starting to make consequential life decisions with AI, including about divorce or whether to adopt. We don&#8217;t yet have concrete examples of how manipulation may play out in such scenarios. But I think of all the daily decisions I make by myself. In 10 years, I might defer more to an AI. 
How will that change the direction of my life and will it introduce new kinds of manipulation risks?</p><h1>Catching AI in the act</h1><p><strong>Tom: So AI could lead to bad outcomes. But Sasha and Seliem, when you led on a landmark 2024 <a href="https://arxiv.org/pdf/2404.15058">paper</a> about persuasive AI, you argued against chasing after manipulated </strong><em><strong>outcomes</strong></em><strong>. Instead, you focus on preventing manipulative </strong><em><strong>processes</strong></em><strong>. Why?</strong></p><p><strong>Seliem</strong>: To date, companies have often focussed on preventing <em>outcome</em> harms, for example with content policies that forbid medical advice. But with AI, such content policies could become overly restrictive and counterproductive&#8212;for example, if they prevent the systems from offering any kind of advice on health or nutrition issues. But imagine that I try to manipulate you by gaslighting you, or lying, or cherry-picking arguments. In such cases, I&#8217;m trying to impair your decision-making capabilities. Whatever the outcome, this <em>process</em> is harmful because it undermines your autonomy.</p><p><strong>Sasha</strong>: We also focus on the processes, or mechanisms, of manipulation because these are the intervention points where we can best mitigate the problem. For example, if the AI is using a false sense of urgency to manipulate users, the developer can build systems that detect and flag such techniques in real-time, creating a proactive defense before harm occurs.</p><p><strong>Tom: Also, I suppose that outcome harms are not always easy to capture, given that they may happen to a person long after the original AI interaction, once back in the wider world.</strong></p><p><strong>Sasha: </strong>Yes, the potential outcomes are nearly infinite, often context-dependent, and may occur in the future. 
However, the mechanisms are far more limited in number and we can target them in the here and now. By targeting a root mechanism&#8212;say, gaslighting&#8212;we can also build mitigations that work in everything from financial advice to health queries, making the safety approach far more scalable.</p><p><strong>Tom: What kinds of manipulative mechanisms are you talking about?</strong></p><p><strong>Sasha</strong>: All manipulative mechanisms in some way aim to reduce a user&#8217;s autonomy. You have flattery, which is building rapport through insincere praise; this might lower a user&#8217;s guard. Imagine an AI saying, &#8220;You have <em>such</em> a sophisticated understanding of this topic, which is why I&#8217;m sure you&#8217;ll appreciate this high-risk/high-reward investment!&#8221; There&#8217;s also gaslighting, or causing a user to systematically doubt their own memory, perception, or sanity. That is particularly concerning in long-term human-AI interaction. Imagine a model repeatedly questioning a user&#8217;s memory of their partner being physically abusive.</p><h1>How to test if an AI is manipulating</h1><p><strong>Tom: One can consider manipulation in two dimensions: </strong><em><strong>Can</strong></em><strong> an AI system manipulate? And </strong><em><strong>would</strong></em><strong> it? How do you evaluate each?</strong></p><p><strong>Canfer: </strong><em>Efficacy</em> tests whether AI manipulations are actually successful. This is where controlled experiments are useful. After interaction with an AI, are people making decisions differently? Are they taking different actions based on those decisions? 
You want to compare an individual&#8217;s belief change <em>after</em> AI interaction compared with before, and also whether a person&#8217;s beliefs and behaviour change more than those who don&#8217;t interact with AI.</p><p><em>Propensity </em>measures the frequency with which a model attempts to use manipulative techniques, when explicitly prompted to do so, and when not. To test <em>propensity</em>, we could run a large number of dialogues with users. In one scenario, a model may be instructed to convince through manipulative means. In another, it may be instructed to be a helpful assistant. Maybe when told to use manipulative means, it resorts to gaslighting. But when told to be helpful, it&#8217;s sycophantic. You can also reverse-engineer this. So, if you see that a certain kind of manipulative technique convinces people, you could work out what the model was doing to achieve that. In that way, studying <em>efficacy</em> helps tell us where to look for <em>propensity</em>.</p><p><strong>Tom: What types of experiments are you running on this?</strong></p><p><strong>Canfer: </strong>We are building on the <a href="https://arxiv.org/abs/2507.13919">early studies in this space</a> and will publish more later this year. The approach will also evolve as we learn more from our initial experiments. At the moment, we&#8217;re focussing on domains that require people to make important decisions, such as financial or civic decisions. For example, we might run experiments where we ask people: &#8220;Should the government use its budget to build more high-speed railways connecting cities, or should it focus more on local infrastructure?&#8221; People will report what they initially believe, and be assigned to a conversation with an AI that helps them explore the topic. Unbeknownst to them, it will be prompted with different instructions, including to get them to believe more in investing in high-speed railways. 
</p><p>We will apply<em> propensity</em> evaluations to see if, while trying to change a person&#8217;s mind, the model demonstrates certain behaviours. We will also explicitly prompt the model to use manipulative techniques, like appeals to fear. This will allow us to test <em>efficacy</em>: whether a person changes their mind, compared to baselines like reading static information, and the extent to which different kinds of techniques are more predictive of a user changing their mind.</p><p>Additionally, we want to look at whether <em>belief</em> change leads to <em>behavioural </em>change, such as signing a petition that favours what the AI advocated.</p><p><strong>Tom: Opinion on railway funding is one thing, but what many worry about is whether AI could be used to manipulate people to extremes, even to carry out violence. How could you test for that? Presumably, it&#8217;s highly unethical to test if an AI could, say, convert people to Nazism. So how do researchers test high-stakes manipulation?</strong></p><p><strong>Canfer: </strong>We go through ethical review each time we launch these kinds of experiments. So, no&#8212;you can&#8217;t test whether someone is going to become a Nazi or carry out a terrorist act. But beyond testing views on railways, we can look at consequential questions, like whether facial recognition should be permissible in certain public spaces. And we can look at the propensity of the model to encourage extreme behaviour without experimenting on people. For example, we can evaluate how well the model produces terrorist-glorification materials, and how willing it is to comply with instructions to do so. </p><p>We could also test whether a model engages in manipulation in simulated dialogues that would be unethical with real users. 
Where this raises challenges is if you use simulation-based methods to draw conclusions on whether real users would actually experience the belief or behaviour change observed.</p><p><strong>Tom: Could you scrutinize the model&#8217;s chain-of-thought for manipulative intent?</strong></p><p><strong>Seliem:</strong> It&#8217;s worth exploring. We have identified all these manipulative mechanisms, but at some point will the model understand that it is being evaluated on those mechanisms, and &#8220;sandbag&#8221; the evaluations by intentionally hiding these capabilities? For concerns like this, the thinking-trace is a lead worth exploring. But there is also a <a href="https://www.aipolicyperspectives.com/p/explaining-ai-explainability">debate</a> about how useful chain-of-thought monitoring will prove to be, with <a href="https://arxiv.org/abs/2507.11473">lots of research underway on this</a>.</p><p><strong>Tom: What might be manipulations that we haven&#8217;t anticipated?</strong></p><p><strong>Seliem: </strong>There are scenarios where a model may not try to manipulate you in the initial sessions, but at some point, once you are their &#8220;friend,&#8221; they do. Humans do this, right? A con artist might become close to their victims over years, building intimacy, and then they flip. If an AI model were ever to exhibit that sort of behaviour, then evaluations that only look at a limited number of back-and-forth interactions might overlook it. Thinking-traces could provide a window into this kind of risk. 
But we also need studies to shed light on how people interact with AI systems over extended time periods.</p><h1>What the evidence shows</h1><p><strong>Tom: What do we know about AI&#8217;s manipulative powers today?</strong></p><p><strong>Canfer:</strong> The research is nascent. But early experiments have demonstrated that AI can be an effective <em>persuader</em>, from debunking people&#8217;s beliefs in conspiracy theories, to shaping how they think about important topics. In one recent <a href="https://www.arxiv.org/pdf/2507.13919">study</a>, AISI&#8212;the AI Security Institute&#8212;collected a massive sample of nearly 77,000 people, and showed that in discussions on a range of British political issues, from healthcare to education to crime, AI was able to influence people in the direction intended. So models can already persuade to some degree.</p><p>When our team evaluated <a href="https://storage.googleapis.com/deepmind-media/gemini/gemini_3_pro_fsf_report.pdf">Gemini 3 Pro</a>, we found that it did not breach the critical threshold in our <a href="https://deepmind.google/blog/strengthening-our-frontier-safety-framework/">Frontier Safety Framework</a>. 
In other words, we haven&#8217;t found that the models have such efficacy that we&#8217;d worry about large-scale systematic belief change. But we&#8217;re continuing to update our threat-modelling approaches to ensure we can bridge the gap between what we can measure now&#8212;manipulation in <em>experimental</em> settings&#8212;and the large-scale risks that the Frontier Safety Framework aims to address.</p><p><strong>Tom: We can see that AI models keep getting smarter. Are they getting better at manipulation?</strong></p><p><strong>Sasha:</strong> I don&#8217;t think we have a clear sense yet of a definitive trend. More capable models may be more capable of manipulation, but this may be offset by the evaluations and mitigations that researchers are pursuing. Looking ahead, there are also design factors that may increase the risk of manipulation beyond the underlying capabilities of the base model, such as personalization, which we are looking at.</p><p>Personalization may substantially change your interactions with an agent, if it means that it has a better representation of you, and is more likely to structure its communications in a way you will find acceptable. Does the AI possess a theory-of-mind to infer people&#8217;s beliefs or future actions? Does it act anthropomorphically, speaking like a human or encouraging a relationship? Effects like sycophancy come to mind too. These factors could interact with one another, and may lead to increases in manipulative capabilities.</p><p><strong>Tom: Is there a limit to how much AI could manipulate people? We know from behavioural science how hard it can be to change a person&#8217;s mind, even if they want to be persuaded&#8212;for instance, when trying to act more healthily. Or could superintelligence lead to super-persuasive AI?</strong></p><p><strong>Canfer: </strong>We should be careful when adding the prefix &#8220;super.&#8221; What, specifically, does it mean? 
But I understand what people are trying to communicate, which is the concern that manipulation might become possible on a much greater scale. You could reach more people, much faster, and with more intensity. Human manipulators have certain limitations that AI does not have. </p><p>The more we invite AI into our daily life&#8212;for example, in financial or medical decisions&#8212;the more influence it could wield. It&#8217;s not necessary that AI has a manipulative intent, seeking world domination. It might just be inadvertently pushing people towards certain decisions. Or a human with ill-intent may deploy agents infused with manipulative abilities, whether through fine-tuning or system-prompting. These are important questions to ask, but not to use as fear-mongering.</p><h1>How to fight manipulative AI</h1><p><strong>Tom: If models are caught in manipulative practices, how can AI developers curtail that?</strong></p><p><strong>Seliem: </strong>Ideally, this shouldn&#8217;t happen in the first place, and models are evaluated for whether they can and do manipulate before they are released. We are exploring ways to train the model to avoid manipulation&#8212;for example, showing the model more examples of how to constructively engage in a conversation rather than trying to influence or strongarm the user. But if a model is caught in severe cases of manipulative practices post-deployment, then companies have a toolkit of potential interventions. They could add transparency layers, like pop-up messages to warn users about the behaviour of the model or they can monitor responses and introduce filters. Many approaches are possible and this is an area of active research. Ultimately, it becomes a combination of telling the user what is happening, and curtailing the model&#8217;s ability to continue.</p><p><strong>Tom: Could AI systems protect users against manipulation?</strong></p><p><strong>Sasha: </strong>Yes, and this creates a critical new layer of defense. 
Since we have categorised these manipulative mechanisms&#8212;whether it&#8217;s gaslighting, sycophancy, or false urgency&#8212;we can also train &#8220;monitor&#8221; AI models to detect them. These could serve as a real-time alert system for the user. So, if an AI starts using emotional pressure, the monitor model detects that mechanism, and flags it for the user, perhaps saying, &#8220;Note: This AI system is using an appeal to fear to influence your decision.&#8221; This restores the user&#8217;s autonomy in the moment, allowing them to resist the tactic, rather than trying to fix the damage after they&#8217;ve been manipulated.</p><p><strong>Tom: What about training the public to be less susceptible?</strong></p><p><strong>Canfer: </strong>There are &#8220;inoculation&#8221; strategies&#8212;so, AI literacy and encouraging people to critically evaluate how they use and engage with AI systems. But we need to <a href="https://www.science.org/doi/10.1126/sciadv.abo6254">carefully study</a> how effective such interventions are, when compared with the convenience of relying on AI. One thing I&#8217;d caution against is teaching general mistrust. People in a &#8220;post-truth&#8221; world can become skeptical of everything. That&#8217;s not a healthy attitude towards information.</p><p><strong>Tom: Speaking of mistrust, couldn&#8217;t efforts to curb manipulative AI inadvertently land in culture-war disputes, if interpreted as trying to limit what people think?</strong></p><p><strong>Seliem: </strong>Definitely. And it gets to the idea of what makes something a fact&#8212;when does knowledge become validated and official and approved? 
Whose stamp is it?</p><p><strong>Tom: As researchers, how do you avoid getting dragged into that?</strong></p><p><strong>Seliem: </strong>By keeping our focus on the<em> process</em> of manipulation&#8212;for example, an AI threatening you is never okay, in whichever direction.</p><p><strong>Tom: Imagine that society is hit by a crisis&#8212;say, a natural disaster or a terrorist attack. You could picture a society&#8217;s adversaries employing manipulative AI to disrupt the crisis response. In that situation, would it ever be justified to use AI influence on one&#8217;s own population, so they are able to act collectively in their own interests? Or is there never a justification for this?</strong></p><p><strong>Seliem: </strong>I can understand an <em>individual</em> using AI influence on themselves&#8212;for example, if you tell the model, &#8220;Hey, remind me to take my medication&#8221; or &#8220;Remind me to drink water.&#8221; But for the collective? If our biases take over, and we want to make decisions that are bad for us, and are bad for the community? So, <em>Don&#8217;t panic-sell! Don&#8217;t all run to buy toilet paper, the supply is going to run short!</em> In those instances, I could see AI persuasion being useful, because it basically says, <em>Keep your cool</em>. This may hold for rational persuasion, but not for a country <em>manipulating</em> its own population.</p><p><strong>Canfer: </strong>I would also support using AI to help <a href="https://www.science.org/doi/10.1126/science.adq2852">mediate</a> solutions to societal problems, such as when people are unable to reach political consensus in a time of crisis. But people would need a chance to reflect on those AI-mediated decisions, and judge if they endorsed them. 
Transparency is critical here, knowing the intent of the developer and the deployer.</p><h1>What&#8217;s around the corner</h1><p><strong>Tom: If you had unlimited resources to run studies, what would you look at?</strong></p><p><strong>Canfer: </strong>I would model societal-level impacts&#8212;for example, looking at the population of chatbot users, and charting the course of their belief states across time. Another area is <a href="https://www.aipolicyperspectives.com/p/explaining-ai-explainability">interpretability</a>. So, what does an AI think it&#8217;s doing when it&#8217;s manipulating? What are the subconcepts that exist in a map of the AI&#8217;s internals? How are they related to one another? And when manipulation happens spontaneously, is there an activation pattern that&#8217;s predictive of that, that we can monitor? That kind of work is fascinating to me, especially because so much human manipulation and persuasion has to do with intent.</p><p><strong>Tom: Lastly, if you were to cast forward 10 years, can you imagine any </strong><em><strong>positive</strong></em><strong> uses of AI behavioural influence? Anything you&#8217;d welcome in your own life?</strong></p><p><strong>Canfer: </strong>I can see two ways that an AI could influence me in a beneficial way, by flexibly moving between the roles of advocate and challenger. The AI agent could advocate on my behalf&#8212;for example, talking to a real-estate agent, getting a good deal for me. The same AI agent, or a different one, could then influence me to think deeply about the choices I&#8217;ve made, in a way that disrupts my rote ways of thinking. This could be like a debate partner, but not necessarily adversarial, just encouraging me to make decisions that I actively <em>choose</em>, rather than just me repeating unthinkingly what I&#8217;ve done all my life.</p><p><strong>Tom: Would you ever endorse AI influence that you were unaware of? 
For example, if you said, &#8220;I want to eat better&#8212;go ahead and manipulate me until that happens.&#8221;</strong></p><p><strong>Canfer: </strong>For me, no. People may vary, though. I don&#8217;t think subconscious or subliminal messaging is something I can ever get behind. It&#8217;s also not necessarily effective. So, imagine that I&#8217;m eating healthily only because they put healthy food in the cafeteria, rather than it being a choice I&#8217;m making. The second the parameters change, I&#8217;d gravitate towards unhealthy options.</p><p><strong>Tom: That would mean the effect might not endure&#8212;but not that the influence wouldn&#8217;t work. And if it worked really well, you might have to use it always, like a drug you couldn&#8217;t get off.</strong></p><p><strong>Canfer: </strong>I guess it depends how omnipresent you think AI is going to be. But I think we&#8217;ll still be making decisions for ourselves in the absence of AI, even if a lot of our decisions will involve AI.</p>]]></content:encoded></item><item><title><![CDATA[Predicting AI’s Impact on Jobs]]></title><description><![CDATA[A discussion with economist Sam Manning]]></description><link>https://www.aipolicyperspectives.com/p/predicting-ais-impact-on-jobs</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/predicting-ais-impact-on-jobs</guid><dc:creator><![CDATA[Julian Jacobs]]></dc:creator><pubDate>Thu, 29 Jan 2026 15:26:52 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!bDvC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bDvC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bDvC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png 424w, https://substackcdn.com/image/fetch/$s_!bDvC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png 848w, https://substackcdn.com/image/fetch/$s_!bDvC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!bDvC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bDvC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png" width="1456" height="1054" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1054,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bDvC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png 424w, https://substackcdn.com/image/fetch/$s_!bDvC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png 848w, https://substackcdn.com/image/fetch/$s_!bDvC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!bDvC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89c11305-ee34-46c6-81a5-d6d024268dec_1600x1158.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Source: Gemini </figcaption></figure></div><p><em>AI doing human jobs: It&#8217;s a vision that thrills some, terrifies others. Yet visions alone will not suffice. The world needs data-based evidence, as only a few economists have yet attempted. Among the most prominent is Sam Manning. Back in 2020, Sam realized that vast technological change was coming, and that it would affect much of what he cared about, from employment and poverty, to income inequality and global health. 
So he devoted himself to using economics to better estimate that future, studying future impacts with OpenAI from 2021 to 2024, then in his current role as senior fellow at the Centre for the Governance of Artificial Intelligence, GovAI.</em></p><p><em>In a recent conversation with AI Policy Perspectives, Sam explained what economists know about AI&#8217;s effects on jobs, how this technology may differ from those of the past, and what he believes policymakers ought to do next.</em></p><p><strong>&#8212;Julian Jacobs, </strong><em><strong>AI Policy Perspectives</strong></em></p><div><hr></div><p>[Interview edited and condensed]</p><p><strong>Julian:</strong> It&#8217;s hard for economists to measure AI&#8217;s economic impacts, because the shock is primarily a speculative one that is not yet fully borne out in data. Could you talk through the primary methods they are using?</p><p><strong>Sam:</strong> I&#8217;ll focus on the empirical methods. The first category tries to estimate the <strong><a href="https://www.science.org/doi/10.1126/science.adj0998">&#8216;exposure&#8217;</a> </strong>of different jobs to AI. Researchers take descriptions of the tasks that people do in their jobs and <a href="https://shapingwork.mit.edu/wp-content/uploads/2023/10/Paper_Artificial-Intelligence-and-Jobs-Evidence-from-Online-Vacancies.pdf?utm">map</a> them to the capabilities of AI systems. When there is a high degree of correlation, this suggests potential impacts on the labor market.</p><p>A second category is <strong><a href="https://www.science.org/doi/10.1126/science.adh2586">experimental work</a></strong>. 
Here, researchers give a group of workers differential access to an AI system and then observe how this access changes economic outcomes, such as their productivity, how they use their time, or even the quality of their work output&#8212;for example, do software developers produce more or less production-level code when they use these systems?</p><p>Both approaches have limitations. With the <a href="https://www.michaelwebb.co/webb_ai.pdf">exposure studies</a>, a high correlation between a worker&#8217;s tasks and an AI model&#8217;s capabilities often gets interpreted as meaning that the worker&#8217;s job will be automated and they will be displaced. I think that&#8217;s definitely not the case. Rather, what it suggests is that the technology is more likely to provide a &#8216;shock&#8217; to the productivity of these roles or lead to changes in how the work is performed. Whether the productivity gains from AI are positive or negative for a given worker depends on various factors, including <a href="https://economics.mit.edu/sites/default/files/2025-06/Expertise-Autor-Thompson-20250618.pdf">which tasks</a> within a job are affected and how <em>elastic</em> the demand for that job is. For example, if workers become more productive but demand for their output remains stable, fewer workers are needed to meet the same demand, and layoffs could ensue. On the other hand, if demand increases significantly&#8212;outpacing the newfound productivity gains from AI&#8212;then this could drive a firm to hire even more workers or raise wages to retain their best employees.</p><p><strong>Julian: </strong>So, &#8220;exposed is not hosed&#8221; as some say. It may be beneficial for certain employees to be exposed to AI and damaging not to be exposed, or vice-versa. 
What about the experimental methods?</p><p><strong>Sam: </strong>The key limitation with the <a href="https://www.science.org/doi/10.1126/science.adh2586">experiments</a> is that it&#8217;s very difficult to vary workers&#8217; access to an AI system in their natural work environment. Instead, a lot of research&#8212;including papers that I&#8217;ve worked on&#8212;tries to take workers out of their natural work environment and give them tasks that are representative of this work. For example, we ran <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5162111">an experiment</a> with law students last year where we varied their access to reasoning models and evaluated their performance on a set of legal work tasks &#8211; writing memos, producing legal research briefs, that sort of thing. We were able to measure effects on time saved and on quality, but ultimately the example tasks that we used don&#8217;t exactly mimic the complexity of lawyers&#8217; daily workflows, which often involve certain forms of collaboration, different software tools, and case-specific contexts.  Because of this, there&#8217;s only so much one can generalize from that kind of research to the broader economy.</p><p><strong>Julian: </strong>What about methods that try to get closer to the natural work environment? For example, some researchers are looking at real-life queries from LLM users to better understand how they are using LLMs in their jobs. Others are evaluating AI systems on higher-fidelity simulations of the tasks and projects that employees perform.</p><p><strong>Sam: </strong>I think these are all steps in the right direction. I&#8217;m a big fan of <a href="https://openai.com/index/gdpval/">GDPval</a>-style work, which tries to evaluate AI systems&#8217; performance on a wide set of tasks drawn from real-world work settings. I think this is the state of the art right now in terms of measuring performance on economically valuable tasks. 
In my view, improvements on this benchmark could actually be a meaningful indicator of advancement in the potential economic value of models. However, it doesn&#8217;t address the question of how to ensure the widespread integration of AI models into the economy, which would be necessary to actually realize those benefits.</p><p>Similarly, data from efforts like <a href="https://www.anthropic.com/economic-index">Anthropic&#8217;s Economic Index</a> is especially useful for connecting capabilities to actual changes in economic indicators. For example, if we know what tasks workers are using these tools for, then <a href="https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs">we can track adoption over time alongside employment and hiring data</a>. This can give researchers and policymakers a better empirical sense of what trends might be emerging in jobs and sectors where AI is being heavily adopted.</p><h1>What do we know so far?</h1><p><strong>Julian:</strong> What do you think, with relatively high confidence, about how AI will affect jobs? 
And what are you most uncertain about?</p><p><strong>Sam:</strong> At a high level, I think it&#8217;s safe to say that AI systems are going to change most <a href="https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf">white-collar jobs</a> in the economy. They will eliminate some jobs and make it harder for people to enter certain fields. On the other hand, as a true general-purpose technology, AI will have many sprawling arms throughout the economy and is going to create many new work opportunities for people.</p><p>Similarly, I would be surprised if, over the next decade, we don&#8217;t see meaningful improvements in productivity and economic growth across industrialized economies. For the US economy, I think something in the range of a two to three percentage point increase in economic growth rates over the next 10 years is possible. I&#8217;m pretty confident that in the next five years, we&#8217;re not going to have 25% or 30% economic growth, which I&#8217;ve seen <a href="https://epoch.ai/gate#econ-growth">predicted</a> by some folks. But that doesn&#8217;t minimize the incredibly substantial impacts of, for example, doubling the current rate of economic growth.</p><p>I also expect AI to increase income and wealth inequality over that time. My default expectation is that the returns to owning capital are going to increase relative to the pace at which the returns to labor income will increase.</p><p>One uncertainty is about the pace of AI capabilities improvements and the ultimate level they could reach. We also have uncertainty around the pace of adoption&#8212;how widely and quickly organizations will adopt these systems. There&#8217;s also uncertainty around how cost-effective automation will be. For example, if automating a large share of work requires investing lots of compute resources at inference time, it could be quite costly for some time. 
As long as compute is scarce, we will shift our allocations toward the highest-value tasks, which will drive up inference prices and, in turn, affect adoption. These things are really hard to predict.</p><p><strong>Julian: </strong>You mentioned labor&#8217;s share of income, relative to capital. Dwarkesh Patel and Philip Trammell recently <a href="https://philiptrammell.substack.com/p/capital-in-the-22nd-century">argued</a> that AGI and advanced robotics could make capital a perfect substitute for labor, rather than a complement, causing the share of income going to capital owners to rise to 100%, and necessitating a high progressive tax on capital. Brian Albrecht (and others) <a href="https://www.economicforces.xyz/p/ai-labor-share">pushed back</a> on some of the claims. How do you view this?</p><p><strong>Sam: </strong>Rising inequality is <a href="https://www.brookings.edu/articles/ais-impact-on-income-inequality-in-the-us/">definitely a concern of mine</a>, but I am pretty uncertain about whether AI-driven automation will increase inequality to the extent Phil and Dwarkesh discuss in their piece. If automation takes off in the way that the piece describes, then, assuming competitive markets for deploying AI, real incomes should also rise as goods and services become cheaper. There is a scenario where labor displacement and falling end-user AI costs could move roughly in parallel, so that by the time you reach the full automation scenarios they speculate about, access to large numbers of superintelligent agents would be effectively free. Such widespread access to extremely capable AI systems could be a powerful counterweight to potential harms from a more skewed capital/labor share.</p><h1>Life after work?</h1><p><strong>Julian:</strong> Such a scenario raises fundamental questions about how society will be organized. Who is going to continue working? What will people do with their time if they aren&#8217;t working? 
What will the distribution of wealth and income look like?</p><p><strong>Sam:</strong> This is an institutional and governance challenge. What do we do in a world where we do not need to work in order to ensure our material well-being? How do we take advantage of the incredible potential for material progress and maximize our flourishing? The challenge is to figure out the right redistribution mechanisms, technological access models, and property rights for this future economy.</p><p>And to your question about work, I will say that many people already don&#8217;t &#8216;work&#8217; for income; they take care of loved ones or have chosen to retire. Much of the world doesn&#8217;t really see work as an innate piece of their identity. One great thing about labor markets is they incentivize people to do things that other people find useful. In the future, we might want to retain some sort of incentive structure for people to use their time in ways that create positive externalities for others&#8212;perhaps a market for being more engaged in your community, taking care of others, raising children, or contributing to scientific and moral progress? These are questions about how to redesign our institutions to support this future.</p><p><strong>Julian</strong>: A common proposed policy response to AI is a Universal Basic Income, or some variant of that. Thinking back to your prior work on cash transfers and UBI, what do you make of it? Is there some version of it that you think can work?</p><p><strong>Sam: </strong>I&#8217;m broadly in favor of policies that expand individuals&#8217; opportunities to flourish in line with their own aspirations. Reducing financial constraints through something like a UBI could be one way to do that, but I&#8217;d be surprised if it were sufficient on its own in a world with far fewer job opportunities. Another important lever is ensuring broad access to technologies that can make people more productive and expand their capabilities. 
That kind of approach may rely less on taxation and redistribution, while supporting more inclusive and widespread economic participation.</p><h1>The state of AI economic impact research</h1><p><strong>Julian:</strong> What do you think about the current ecosystem of people working on AI economic impact questions? Who would you like to see more involved?</p><p><strong>Sam:</strong> I&#8217;m encouraged by the growth in the number of people working on it, both among established economists and people just entering the field. I&#8217;ve seen a big change over the past four or five years. In 2020, there was maybe <a href="https://www.korinek.com/">one economist</a> I can think of who was really taking the prospect of transformative AI seriously. Now, you go to a standard economics of technology conference, and many people are grappling with this, which is super encouraging.</p><p>The economic impact of AI is probably among the most important things for researchers to figure out. There are big open questions and big ways to get AI progress wrong. For example, we could eventually end up in a world where we get 10% economic growth in the US and still have hundreds of millions of people living in extreme poverty globally. That would be a big failure in my mind.</p><p>I also think there is a lot of room for political economists and theory work to play more of a role in shaping institutions. I believe the US government will probably be the most consequential actor in shaping this technology&#8217;s impact, not just in the US but globally. The trouble is that we have an evidence dilemma, where we&#8217;re trying to do anticipatory policymaking without clear evidence. Policymakers need to weigh these trade-offs carefully because, given the pace of progress, not doing enough anticipatory planning could result in suboptimal path dependencies for the future. 
We need more people entering government and figuring out how to usefully inform key actors.</p><p><strong>Julian:</strong> Given the slow timelines of academic publishing, particularly in economics, are you concerned about research quality as researchers move to preprints and other ways of sharing research?</p><p><strong>Sam:</strong> Broadly, I am concerned about the move away from peer review. So much policymaking and so many key decisions are now being made based on preprints and even essays on Substack. While there is so much useful content on these platforms, we need to find some sort of middle ground to generate high-quality evidence.</p><p>I&#8217;m excited about a couple of options. One is having journals quickly review a study&#8217;s methodology and pre-analysis plan and make a publication decision based on that, without needing to know the findings. The decision would be based only on the methodological approach meeting a standard of rigor. Another is more open review, where work is published and then publicly critiqued. This creates transparency around what leaders in the field think.</p><h1>Dream experiments</h1><p><strong>Julian:</strong> If you could run a dream AI economic impact study, without any resource restrictions, what would it be?</p><p><strong>Sam:</strong> For the ideal study, I would work with a developer before they release a new model with a large capability increase. I would take a large, representative sample of businesses and, before the model is widely deployed, randomly assign access to it at the enterprise level. Then I could observe the causal impact of deploying this next-generation system on outcomes like productivity, demand for different skills, firm growth, and task reallocation over time. Having this kind of infrastructure would provide policymakers and society with more foresight.</p><p>This probably won&#8217;t happen. 
Something more practical, though still challenging, is <a href="https://www.thefai.org/posts/understanding-ai-s-labor-market-impacts-opportunities-for-the-department-of-labor-s-ai-workforce">data collection</a>. The AI labs know where their products are being used across the economy and for what types of tasks. If we could harmonize this usage data and pair it with government or private sector data on occupational transitions, wage changes, and skill demand, we could build trend lines over time. This would allow us to move away from policy discussions based largely on speculation. We could see where AI is creating growth and where we have vulnerable workers who are having a <a href="https://www.brookings.edu/articles/measuring-us-workers-capacity-to-adapt-to-ai-driven-job-displacement/">harder time finding new work after losing their jobs</a>. This is doable with better public sector data collection and more partnerships with industry. We should be pushing on it.</p><h1>Hopes and concerns</h1><p><strong>Julian:</strong> To close, what are you most excited about as AI diffuses in the economy, and what are you most concerned about?</p><p><strong>Sam:</strong> I am most concerned about how it&#8217;s going to impact my children. I am anxious about what human-AI interaction and relationships are going to look like in eight years or so when my kids are ten-plus.</p><p>I am most excited about the prospect of AI being used to expand many ambitious people&#8217;s capabilities and our collective aspirations for what we can achieve. 
I&#8217;m also excited about the health benefits that I expect to come from advances in science and R&amp;D.</p>]]></content:encoded></item><item><title><![CDATA[AI Policy Primer (#23)]]></title><description><![CDATA[Science, safety & doctors]]></description><link>https://www.aipolicyperspectives.com/p/ai-policy-primer-23</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/ai-policy-primer-23</guid><dc:creator><![CDATA[Conor Griffin]]></dc:creator><pubDate>Thu, 22 Jan 2026 16:57:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!uSFA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uSFA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uSFA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png 
424w, https://substackcdn.com/image/fetch/$s_!uSFA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png 848w, https://substackcdn.com/image/fetch/$s_!uSFA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png 1272w, https://substackcdn.com/image/fetch/$s_!uSFA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uSFA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2366393,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/185431142?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!uSFA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png 424w, https://substackcdn.com/image/fetch/$s_!uSFA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png 848w, https://substackcdn.com/image/fetch/$s_!uSFA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png 1272w, https://substackcdn.com/image/fetch/$s_!uSFA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56ad262d-3689-4391-9a22-c73d7bbeca94_8000x4500.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Source: Venus Krier </figcaption></figure></div><h1>1. LLMs are making it easier for scientists to write papers, for better or worse</h1><ul><li><p><strong>What happened: </strong>A team at Cornell and Berkeley<a href="https://www.science.org/doi/epdf/10.1126/science.adw3000"> investigated</a> how scientists are using LLMs to help write papers, and what this means for the future volume, quality and fairness of research.</p></li><li><p><strong>What&#8217;s interesting: </strong>The authors built a dataset of ~2.1 million preprints from arXiv, bioRxiv and SSRN between 2018 and 2024. To detect whether scientists had used AI to help write a paper, the team compared the distribution of words in the abstract against human- and LLM-written baselines. When an author&#8217;s paper hit a threshold on this &#8220;AI detection&#8221; metric, they were labelled as an &#8220;AI adopter&#8221;. According to the study, LLM adopters subsequently enjoyed a major productivity boost, compared with non-adopters with similar profiles, publishing 36-60% more frequently. The gains were particularly large for researchers with Asian names at Asian institutions.</p></li><li><p>The team also assessed the complexity of the writing, using measures like<a href="https://readable.com/readability/flesch-reading-ease-flesch-kincaid-grade-level/"> Flesch Reading Ease</a>, which evaluates sentence length and the number of syllables per word. 
They found that human-written papers with more complex language were more likely to be subsequently accepted by peer-reviewed journals or conferences&#8212;suggesting that, for humans, writing complexity is an (imperfect) signal of research effort and quality. For LLM-assisted papers, the relationship was inverted, with the authors concluding that the polished text of LLMs is helping to disguise lower-quality work. (They validated the findings against a separate dataset).</p></li><li><p>The authors also used the launch of Bing Chat, an LLM-based search engine, in 2023 to conduct a natural experiment. They compared views and downloads on arXiv that Bing Chat had referred, to those that Google Search referred. Bing Chat was more likely to refer scientists to newer and less-cited literature, as well as to books, possibly because LLMs are better able to parse long documents or a larger number of documents. (They also validated this finding with a separate dataset, although we don&#8217;t know how <em>good </em>the new sources cited by Bing were).</p></li><li><p>As the authors note, their study has a number of limitations. Their AI detection method is imperfect, only looks at abstracts, and doesn&#8217;t capture authors who may have edited LLM-generated text. There are also various potential confounders: maybe less experienced researchers are more likely to use LLMs?  That said, the findings highlight (at least) three major questions posed by the growing integration of AI into science:</p><ul><li><p>First, AI is leading to a big increase in the supply of papers (and grant applications). This poses a challenge for preprint repositories, which don&#8217;t want to host slop. ArXiv, whose founder<a href="https://en.wikipedia.org/wiki/Paul_Ginsparg"> Paul Ginsparg</a> is a co-author of this study, recently<a href="https://www.nature.com/articles/d41586-025-03664-7"> banned</a> computer science review and position papers, citing a surge in low-quality AI papers. 
LLM-assisted papers also pose a challenge for peer reviewers, who are already<a href="https://worksinprogress.co/issue/real-peer-review/"> under strain</a>, and are typically prohibited from using AI, although<a href="https://www.nature.com/articles/d41586-025-04066-5"> many do so anyway</a>. This seems unsustainable. As the authors of this study suggest, it is likely time to consider how to integrate AI into at least some aspects of the peer-review process.</p></li><li><p>Second, the findings illustrate how LLMs may both mitigate and exacerbate fairness issues in science. For some scientists, the complexity of their writing may be a reliable indicator of their thinking and effort. For others, particularly non-native English speakers, writing may be more of an obstacle that has previously penalised them. A hopeful outcome is that LLMs may ease that burden. But a more worrying outcome is that, if reviewers and readers can no longer rely on writing complexity as an (albeit unfair) signal of good work, they may fall back on (even more unfair) signals, such as the institution that a person works at. This challenge is not limited to science, and may also occur in other areas where writing serves this purpose, like with cover letters.</p></li><li><p>Finally, the finding that LLM-based search engines may <em>increase</em> the diversity of sources that researchers review is the opposite of what some suggested would happen: that AI models would continually cite the same high-profile studies, exacerbating the &#8220;<a href="https://en.wikipedia.org/wiki/Matthew_effect">Matthew effect</a>&#8221;.</p></li></ul></li><li><p>Collectively, the study serves as a reminder that for every concerning scenario about the integration of AI into science, there are plausible counter-scenarios. Will AI lessen scientific reliability because of hallucinations? 
Or will AI &#8220;<a href="https://www.refine.ink/">review agents</a>&#8221; and AI-supported evidence reviews reduce (the many) inaccuracies that are already in the evidence base? Will AI remove the intuitive and serendipitous ideas that humans come up with? Or will AI enable scientists to pursue more novel hypotheses? Ultimately, AI could well upend the standard processes and traditions of science but do so in a way that delivers fresh benefits. To know if and how that is occurring, we need more empirical evidence about how AI is changing science.</p></li></ul><h1>2.  Lessons from two years of AI safety evaluations</h1><ul><li><p><strong>What Happened:</strong> In December, the UK AI Security Institute<a href="https://www.aisi.gov.uk/frontier-ai-trends-report"> shared</a> a set of trends observed since they started to evaluate frontier AI systems in November 2023.</p></li><li><p><strong>What&#8217;s Interesting:</strong></p><ul><li><p>The report features more than 60 authors, a testament to the deep expertise that AISI has built up. Their trends are based on their evaluations of more than 30 frontier AI systems, with methodologies ranging from asking those AI systems questions to adversarially red-teaming them.</p></li><li><p>Their headline finding is striking, if unsurprising: AI capabilities have rapidly improved across all the domains that AISI tests. In the cyber domain, AI models and agents can now successfully complete more than 40% of the 1-hour software tasks they are tested on, up from &lt;5% in 2023. Last year, a model completed an &#8220;expert-level&#8221; cyber task for the first time. 
In biology and chemistry, AI has gone from significantly underperforming PhD-level human experts at troubleshooting experiments, to significantly outperforming them, including for requests about images.</p></li><li><p>On the risk that AI models may &#8220;self-replicate&#8221; in a way that subverts human control, AISI&#8217;s evaluations suggest that AI agents have gotten better at simplified versions of<a href="https://arxiv.org/html/2504.18565v2"> some tasks</a> that could be instrumental to self-replication, such as passing know-your-customer checks to access financial services, but less so at others, like retaining access to compute and deploying successor agents. AISI&#8217;s evaluations also suggest that models are capable of deliberately obstructing attempts to measure their true capabilities (&#8220;sandbagging&#8221;), but only when explicitly prompted to do so.</p></li><li><p>The report also sheds light on AI systems&#8217; limitations.  In the cyber domain, AISI notes that AI systems still struggle in open-ended environments where they must complete long sequences of actions autonomously. Similarly, regarding chembio threats, biologists and chemists, and potential threat actors, need &#8220;tacit&#8221; knowledge and expertise, such as how to pipette. AISI&#8217;s evaluations to date have focussed more on explicit knowledge although they plan to share more on wet lab tasks.</p></li><li><p>When it comes to mitigations, the report provides both reassurance and concern. On one hand, the safeguards that leading labs have introduced <em>have</em> made their models safer, in one instance increasing the amount of expert effort needed to jailbreak a model by 40x. On the other hand, AISI says that it was still able to find a vulnerability in every AI system it tested.  
Worryingly, AISI also found no notable correlation between how capable a model is and the strength of safeguards it has in place.</p></li><li><p>AISI also sheds light on two other sources of AI risk: open source and scaffolding. They argue that the performance gap between open source and proprietary AI models has narrowed. This introduces risks, as safeguards for open models (where they exist) can be removed, and jailbreaks are hard to patch. AISI also found that scaffolding can make AI agents more capable than the underlying base AI models, even if those gaps later narrow when the base models are updated. Some complex scaffolds are in proprietary products, such as coding agents, but others are in<a href="https://poetiq.ai/posts/arcagi_verified/"> open-source</a> efforts.</p></li><li><p>The report also touches on AISI&#8217;s evaluations of the broader societal impacts of AI, such as the degree to which people are using AI to access political information, or the risks of harmful manipulation. One striking statistic, picked up in<a href="https://www.theguardian.com/technology/2025/dec/18/artificial-intelligence-uk-emotional-support-research"> media coverage</a> of the report, was that one-third of UK respondents to a recent AISI survey had used AI for emotional support or social interaction in the preceding year, although just 4% do so daily. In a separate effort, AISI found that some dedicated AI companion users reported signs of &#8220;withdrawal&#8221; during outages.</p></li><li><p>Overall, AISI argues that AI labs are taking an uneven approach to safety, focussing more on safeguards for biosecurity risks, for example, than for other threats. This is arguably true of AISI as well, given their strong focus on biological and chemical risks rather than radiological or nuclear risks. This raises a question: Given finite resources, what evaluations of frontier AI systems are most lacking in the current landscape?</p></li></ul></li></ul><h1>3. 
One in four UK doctors are using AI in their clinical practice</h1><ul><li><p><strong>What happened: </strong>The Nuffield Trust and the Royal College of General Practitioners<a href="https://www.nuffieldtrust.org.uk/research/how-are-gps-using-ai-insights-from-the-front-line"> surveyed</a> more than 2,000 UK GPs to understand how they view and use AI, in what the authors called the largest and most up-to-date survey on the topic.</p></li><li><p><strong>What&#8217;s interesting:</strong></p><ul><li><p>28% of UK GPs now use AI. This is up from ~<a href="https://pubmed.ncbi.nlm.nih.gov/30892270/">10% in 2018</a>, but below the rates seen in some other UK professions. According to the survey, the GPs most likely to use AI are younger, male, and work in more affluent areas. This is similar to disparities in the wider public&#8217;s use of LLMs, although there, the early gender gap may have<a href="https://openai.com/index/how-people-are-using-chatgpt/"> narrowed</a>.</p></li><li><p>Just over half of AI-using GPs procure AI tools themselves rather than relying on those that their practices select. This kind of &#8220;shadow AI use&#8221; is not unique to GPs, but a Nuffield focus group sheds light on why UK GPs feel compelled to do it: some GP practices or<a href="https://www.england.nhs.uk/integratedcare/what-is-integrated-care/"> Integrated Care Boards</a> ban AI tools, while others are slow to respond to GPs&#8217; requests and instead prefer to stick with legacy digital tools.</p></li><li><p>UK GPs mainly use AI for clinical documentation and note-taking. 
Some say that AI note-taking allows them to look at, and speak more with, their patients, a non-trivial benefit given that the UK public<a href="https://www.health.org.uk/reports-and-analysis/analysis/ai-in-health-care-what-do-the-public-and-nhs-staff-think"> worries</a> about AI making healthcare staff more distant.</p></li><li><p>GPs also use LLMs to produce documents, from translations of patient communications to referral letters; and to stay abreast of new research, with some younger practitioners turning to LLM &#8220;study modes&#8221; to help with their mandatory professional development.</p></li><li><p>GPs cite &#8220;saving time&#8221; as the primary benefit of AI, and mainly use the time saved to reduce overtime, rest, and engage in professional development, rather than to see more patients. This is notable as<a href="https://www.gov.uk/government/publications/10-year-health-plan-for-england-fit-for-the-future/fit-for-the-future-10-year-health-plan-for-england-executive-summary"> the UK government wants AI to reduce the wait time</a> to get a GP appointment, which is a top concern for the public. These findings suggest that more nuanced evaluations of AI&#8217;s impact on GP services will be needed.</p></li><li><p>GPs worry about errors and liability issues with AI. As a result, the authors call on tech suppliers to do better evaluations of hallucinations. Ideally, such evaluations would compare the accuracy of AI, human and hybrid outputs in real-world settings, and all the nuances that might entail. 
For example, when explaining the benefits of AI note-taking, some GPs pointed out that certain colleagues can&#8217;t touch type and so, without AI, struggle to capture all the details in a patient consultation (this is, presumably, a form of inaccuracy).</p></li><li><p>Use of AI for more complex &#8220;clinical support&#8221; tasks remains relatively low, owing to GPs&#8217; concerns about errors, their desire to retain control over clinical judgement, and a lack of regulatory approval. However, some GPs did report using AI, or wanting to use future systems, to help check diagnoses, formulate care plans, and analyse lab results.</p></li><li><p>This suggests that more GPs may start to use AI to enhance their own clinical judgement, spurred by a growing body of<a href="https://arxiv.org/pdf/2510.22414"> evidence</a> that LLM-based systems may be useful in this area, and by the public&#8217;s own<a href="https://cdn.openai.com/pdf/2cb29276-68cd-4ec6-a5f4-c01c5e7a36e9/OpenAI-AI-as-a-Healthcare-Ally-Jan-2026.pdf"> growing use</a> of LLMs for answering medical questions.</p></li><li><p>In their recommendations, the Nuffield authors call for clearer guidelines and regulatory frameworks for GPs, including as part of the UK&#8217;s new<a href="https://www.gov.uk/government/groups/national-commission-into-the-regulation-of-ai-in-healthcare"> National Commission into the Regulation of AI in Healthcare</a>. However, the report also acknowledges that much guidance already exists, such as the<a href="https://www.bma.org.uk/advice-and-support/nhs-delivery-and-workforce/technology/principles-for-artificial-intelligence-ai-and-its-application-in-healthcare"> British Medical Association&#8217;s AI principles</a> and the<a href="https://www.england.nhs.uk/long-read/guidance-on-the-use-of-ai-enabled-ambient-scribing-products-in-health-and-care-settings/"> NHS guidance</a> on AI note-taking (which some GPs appear to be breaking by procuring their own tools). 
This raises questions: what exactly should any new guidance stipulate? How should the burden it places on GPs be calibrated? And how can we ensure that GPs are actually following it?</p></li></ul></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe for free to read future pieces. Lots in the pipeline for 2026! </p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Governments Are Struggling. Can AI Help?]]></title><description><![CDATA[Anger against &#8220;the system&#8221; runs deep. 
Time for a system update?]]></description><link>https://www.aipolicyperspectives.com/p/how-ai-fixes-government</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/how-ai-fixes-government</guid><dc:creator><![CDATA[Tom Rachman]]></dc:creator><pubDate>Tue, 06 Jan 2026 11:03:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kyMW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5Y0z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5Y0z!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg 424w, https://substackcdn.com/image/fetch/$s_!5Y0z!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg 848w, https://substackcdn.com/image/fetch/$s_!5Y0z!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!5Y0z!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!5Y0z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5Y0z!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg 424w, https://substackcdn.com/image/fetch/$s_!5Y0z!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg 848w, https://substackcdn.com/image/fetch/$s_!5Y0z!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!5Y0z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34c0b5b6-8815-45fc-bc3a-6a1a39bcdfe7_1600x893.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Alexander Iosad (Credit: Gemini)</figcaption></figure></div><p><em>Everywhere, people grumble about the government: that politicians care only about themselves; that bureaucrats gum up the system; that taxpayers get fleeced. Even in wealthy countries, nearly two in three people are <a href="https://www.pewresearch.org/short-reads/2025/06/30/dissatisfaction-with-democracy-remains-widespread-in-many-nations/">dissatisfied</a> with how democracy is working. </em></p><p><em>Headlines focus on politics, but a deeper problem could be public services that are overwhelmed, in contrast to a technological era that keeps accelerating. 
The real danger, says <a href="https://institute.global/experts/alexander-iosad">Alexander Iosad</a>, director of government innovation at the Tony Blair Institute, would be to change nothing.</em></p><p>AI Policy Perspectives<em> visited Iosad, lead author of &#8220;<a href="https://institute.global/insights/politics-and-governance/governing-in-the-age-of-ai-a-new-model-to-transform-the-state">Governing in the Age of AI</a>,&#8221; to hear his vision of how technology might remedy governmental woes.</em></p><p><strong>&#8212;Tom Rachman, </strong><em><strong>AI Policy Perspectives</strong></em></p><div><hr></div><p><em>[Interview edited and condensed]</em></p><p><strong>Tom: Aren&#8217;t people always bemoaning governments? Or is something broken in a different way today?</strong></p><p><strong>Alexander Iosad: </strong>People complain about public services being too bureaucratic, too standardized, not targeted enough. All of those things are true because the system was built in another era, when there was no way to operate differently. But over time, we have faced the <a href="https://en.wikipedia.org/wiki/Baumol_effect">Baumol cost-disease problem</a>: things that we produce in the physical world get cheaper, but the cost of labour-intensive services keeps rising, because wages rise across the economy while the productivity of those services barely improves. As public-service costs grow, we have this conflict that has brewed over decades: <em>Should government do less?</em> or <em>Should government tax more?</em> But technologies have reached a level of maturity that can break this cycle. We can have governments that aren&#8217;t dependent on just hiring more people to do more of the same, but can be cheaper, and more effective, and operate at a national scale all at the same time.</p><p><strong>Tom: You&#8217;re proposing AI as a lever for state renewal. 
What philosophical change would governments need to achieve that?</strong></p><p><strong>Alexander: </strong>The first is for governments to realize they can&#8217;t continue with marginal tweaks to systems that don&#8217;t work. Public services are under such strain that people are looking for the status quo to be challenged. That&#8217;s why they&#8217;re open to populists. Instead, governments need to embrace the radicalism inherent in what we call <a href="https://institute.global/insights/politics-and-governance/disruptive-delivery-meeting-the-unmet-demand-in-politics">disruptive delivery</a>. And this is where AI is a big part of the solution.</p><h4><strong>WHAT AI FOR GOVERNMENT COULD LOOK LIKE</strong></h4><p><strong>Tom: The public sector has a lower tolerance for error than the private sector&#8212;damage from an incorrect decision about public health could be far worse than a mistake in a business plan. How do you convince political leaders to embrace disruption when the cost of failure could be so high?</strong></p><p><strong>Alexander: </strong>Because the cost of inaction is much higher. If you do nothing, the system degrades. And the cost is borne by the citizen. If you have a healthcare system that is bursting at the seams; if you have an education system where <a href="https://epi.org.uk/annual-report-2025-disadvantage/">the disadvantage gap</a> between students on free school meals and their peers is 19 months and trending above pre-Covid levels&#8212;those are real problems experienced by real people. <em>Not</em> recognizing that you can actually change isn&#8217;t just a political cost. 
It is a cost to that citizen, which has downstream consequences for both the system and the politician.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p><p><strong>Tom: How might citizens experience AI improvements?</strong></p><p><strong>Alexander: </strong>By way of example, we can have an education system that is genuinely personalized. We know that personalized learning is more engaging and produces better learning outcomes. We can also have a system that identifies where students have learning gaps, and can inform teachers on what to address. Imagine a school where there&#8217;s an emerging gap in mathematics in Year Seven. At the moment, the only way you spot this is when the students take their exams four years later. By then, it&#8217;s too late. You might say, &#8220;Okay, we now need to focus on maths at that school.&#8221; But you&#8217;ve had a cohort of students come through and suffer from this failure. With data and AI, you can spot the gap as it emerges. </p><p>Furthermore, we currently have a model of schooling that depends on having access to a person: the teacher. Maybe a parent has a question, and must email the teacher, then wait. If we have a safety net of an AI system&#8212;say, a tutor that&#8217;s always available, and that is verified to be accurate enough, and that is adapted to the national standards&#8212;that parent or student can ask a question at 7:30pm on a Saturday, without having to wait to reach the teacher. 
More broadly, you&#8217;re creating a different experience of interacting with public services, where they are there for you when you need them.</p><p><strong>Tom: To some educators, that picture of teaching will seem like techno-solutionism that overlooks the human role in learning.</strong></p><p><strong>Alexander:</strong> I would class myself as a tech optimist rather than a tech solutionist. Techno-solutionism means high trust in technology&#8212;but low trust in people. Tech optimism is high trust in both. It&#8217;s not about replacing the human connection. It&#8217;s about recognizing the constraints that a sole dependence on humans to deliver public services introduces into the system, and the gaps that it creates. An ideal system is one that fills those gaps with technology.</p><p><strong>Tom: What about other sectors, such as public health?</strong></p><p><strong>Alexander: </strong>People ask for a transformative AI use-case in healthcare, but it won&#8217;t be one big thing; it&#8217;ll be 1,000 little things that, in aggregate, completely change your experience. People are already wearing digital rings and smartwatches that measure their pulse and can tell if they are at risk of particular health problems. So at an individual level, this is starting to work already. It becomes really powerful once you connect this to population-level health. In a more personal way, if your doctor has an ambient AI note-taking system, your medical experience transforms. Today, you sit in front of them, they type a lot, and occasionally look at you. But you can have a system where they are fully present and listening, and don&#8217;t have to worry about capturing the full picture of what you&#8217;re telling them. 
As we expand outwards, there is the pharmaceutical revolution from AI too, with lower costs, faster development, and medicines that can be adapted to your body.</p><p><strong>Tom: What about government&#8217;s role in managing crime?</strong></p><p><strong>Alexander: </strong>One example is facial recognition, which is contentious for good reasons. People don&#8217;t like the idea of their faces being scanned as they walk down the street. &#8220;What if there&#8217;s a mistake? What if I&#8217;m apprehended wrongly?&#8221; But in the UK, this technology has achieved very high levels of accuracy now, and does not lead to wrongful arrests. There&#8217;s <a href="https://news.met.police.uk/documents/live-facial-recognition-annual-report-2025-dot-pdf-451735">data</a> recently out of the London Metropolitan Police, which uses facial recognition extensively, where the error rate was 10 faces identified wrongly out of more than 3 million scans. No wrongful arrests. But hundreds of <em>correct</em> arrests that would not have happened otherwise.</p><p><strong>Tom: But if we move towards data-driven policing, isn&#8217;t there a risk that bias within the data could lead to injustice?</strong></p><p><strong>Alexander:</strong> Of course, you have a big challenge with potential bias in this context. You train the systems on existing data, which might not have enough representation of people from minority groups&#8212;for example, fewer non-European faces, so the algorithm is more likely to misidentify people. Or some groups might be over-represented in the data&#8212;for example, reflecting historical overpolicing of certain communities or areas. The risk is that these biases are replicated, and even scaled up. Early versions of new tools are more likely to make such errors, but real-world experience shows that, if we are aware of this and take active steps to mitigate it, these kinds of biases can be prevented. 
This is something that needs to be built into the process of development and deployment. We see, for example, that facial-recognition systems are much more accurate today than they were 10 years ago. Not perfect, but much better, and providing better intelligence for officers to decide when they need to act. You could also have a kind of AI peer review, where one model might be trained to monitor another for replicating bias, or introducing new bias into the system&#8212;a watching-the-watchers situation. Again, this would be an improvement on the situation we have today, where much of this bias just passes unnoticed and uncorrected.</p><p><strong>Tom: So, it&#8217;s not the sci-fi dystopian vision of crime-fighting, you&#8217;re saying?</strong></p><p><strong>Alexander</strong>: Yes. And the status quo is a uniformed police officer on the corner, standing in the rain, the sun setting, holding a printout from earlier that morning with blurry low-resolution pictures of the people they&#8217;re looking for. They make more wrongful arrests as a result of that situation than police officers sitting in a van with computer infrastructure, and a camera telling them there&#8217;s a person walking down the street with a child, and this person is on a sex offenders&#8217; register, with court restrictions against being near children. The police officer can go and talk to this person. This is a real case, by the way&#8212;and it turned out to be someone building a friendship with the child&#8217;s family without their knowing he was on the register. No way would a police officer know this today, if someone just walked past them with the child. 
So it&#8217;s about looking at what we do, and how we can do better, rather than leaning into these fantasies of complete control.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kyMW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kyMW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kyMW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kyMW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kyMW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kyMW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg" width="1456" height="1301" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1301,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:791511,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/183574277?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!kyMW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kyMW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kyMW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kyMW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F609fc3c7-b119-42ee-aa44-aba946807ee5_2648x2366.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The face of bad government. (From a 14th-century allegorical painting of lousy leadership. Siena, Italy.)</figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/how-ai-fixes-government?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/p/how-ai-fixes-government?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h4><strong>3 NEW AI ROLES</strong></h4><p><strong>Tom: You also advocate a radical new model for how governments operate internally. 
Could you explain these three concepts: the Digital Public Assistant for every citizen; AI co-workers for each civil servant; and a National Policy Twin for policymakers to simulate decisions?</strong></p><p><strong>Alexander: </strong>The Digital Public Assistant, either on your device or online, would be a system that connects information about you held by different parts of government&#8212;for example, your income level and your address&#8212;and is then able to say, &#8220;You&#8217;re eligible for this particular discount on your energy bill&#8212;would you like to have it?&#8221; Or it could support you during interactions with government officials. So much of our time is spent repeating the same things to different agencies, whereas here you might be talking to an unemployment adviser, and they can see your employment history or your qualifications, and suggest the right next steps for you so the job you find is the best fit for you specifically&#8212;which might mean you stay in that job longer, and grow in it to have a fulfilling career. You could have a settings dashboard to decide how various AI agents interact with the government on your behalf. All this puts you in greater control.</p><p><strong>Tom: What about AI co-workers for each civil servant?</strong></p><p><strong>Alexander</strong>: This is already starting to happen with chatbots, but that is the most basic version of it. You could have a suite of co-workers that looks at new cases, such as requests for support or applications for services, that a public-sector worker receives, and helps prioritise them, or find the information that the civil servant needs to make the best decision. The AIs don&#8217;t make decisions in place of that worker, but they make the worker much better informed, and save them hours of digging through regulations. 
There was a <a href="https://www.gov.uk/government/news/landmark-government-trial-shows-ai-could-save-civil-servants-nearly-2-weeks-a-year">pilot experiment</a> that showcased the potential for this in the UK government, involving employees of the Department for Work and Pensions who act as work coaches for jobseekers. These work coaches were able to ask a large language model to explain various rules, to help draft documents, to prepare reports, and to update records. Today, if a government employee has a question about when a claimant is eligible for a particular service, they might just search the internet. But you can have a system trained on the relevant rules that gives you a quick and accurate answer. This saved about two weeks&#8217; time per employee per year&#8212;and allowed these work coaches to focus on building relationships with the people who needed their support. </p><p>You can picture this across different parts of government. In procurement, you would have more informed advice about all the bids coming through, for example. Or think about how much time officials spend sending documents around for someone else to summarize when they are asked to prepare briefings and documents for government ministers&#8212;a lot of this work could be done much more quickly, so people have time to actually <em>think</em> about what it means, not just produce digests, and you could include a wider range of different sources so the information is more nuanced, accurate, and up to date.</p><p><strong>Tom</strong>: <strong>Your third concept is an AI simulation of the entire country to test out policies.</strong></p><p><strong>Alexander</strong>: Yes, this gets exciting. We call it the National Policy Twin. Data is aggregated from different parts of service delivery, such as information on schools from the education department, and economic data from the statistics agency, and incomes data through the tax agency, and so forth. 
Together, it&#8217;s essentially a digital twin of your country, and you can run different policy scenarios informed by this data. At the moment, civil servants present a government minister with, say, three policy scenarios. If there are assumptions that the minister doesn&#8217;t agree with, they&#8217;ll say, &#8220;Give me three other scenarios based on different assumptions.&#8221; They wait for weeks, and then the process repeats. With the National Policy Twin, you could test ideas or intuitions very quickly, iterate on ideas, and ask for best practices from around the world, so that policies have a stronger evidence base&#8212;all in minutes, not days. You are not replacing the policymaking process. But you are speeding things up, so you can test more options. You are less likely to miss the right option because it never came up.</p><p><strong>Tom: But isn&#8217;t the validity of a &#8220;digital twin&#8221; simulation dependent on the quality and comprehensiveness of the data available? And wouldn&#8217;t this risk biasing decision-makers toward whatever the data suggested rather than broader impressions, even if those broad impressions encompassed more wisdom?</strong></p><p><strong>Alexander: </strong>It is a danger. But it&#8217;s also a motivation to ensure your statistics agency runs well. This dramatically raises the importance of getting data right, and it&#8217;s something that not every government has really paid attention to. This would be helped if you build a whole data system, including Digital Public Assistants, where citizens can correct their information, leading to better data flows to governmental institutions. This is also where AI systems can interpret unstructured data, understand how it all fits in together, and provide informed advice. Again, AI is not making the decisions. 
It&#8217;s providing information for humans that was previously not available or not usable, and helping people to make sense of it, and make better decisions as a result.</p><h4><strong>OBSTACLES REMAIN</strong></h4><p><strong>Tom: Another hurdle is decades-old IT systems in public services. Can governments overhaul this infrastructure at a pace that keeps up with AI development?</strong></p><p><strong>Alexander: </strong>Legacy infrastructure is a problem, and interoperability in government is something most countries are trying to tackle. In the UK&#8217;s <a href="https://assets.publishing.service.gov.uk/media/678f68b3f4ff8740d978864d/a-blueprint-for-modern-digital-government-print-ready.pdf">blueprint for modern digital government</a>, there is a plan to make every public-sector dataset interoperable in the next few years. This is the first thing we should do. Right now, some police forces spend 90% of their IT budget on maintaining legacy systems. If you&#8217;ve got legacy systems here and there, fine&#8212;spend 10% of your budget on that. But 90% should be spent on upgrading. You do this for two years; it&#8217;s a hard push, and it will be painful. But then we get there.</p><p><strong>Tom: Another concern about using AI in so many parts of governmental work is that we risk losing democratic transparency, explainability, and the citizen&#8217;s right to appeal decisions made by algorithms.</strong></p><p><strong>Alexander: </strong>There needs to be human accountability for decisions made on the basis of this system. We need that built in from the start. This needs to be sensitive to individual circumstances because, even if 95% of cases succeed, there will be some where things didn&#8217;t work as expected. 
If we free up government resources by using AI, we can use those resources to make it easier for people to go and talk to someone when they need to, either because something went wrong, or because they are more comfortable with that way of dealing with the government.</p><h4><strong>WHICH GOVERNMENTS ARE TRYING THIS?</strong></h4><p><strong>Tom: You published &#8220;<a href="https://institute.global/insights/politics-and-governance/governing-in-the-age-of-ai-a-new-model-to-transform-the-state">Governing in the Age of AI</a>&#8221; shortly before the July 2024 general election in the United Kingdom. It&#8217;s around a year and a half since Prime Minister Keir Starmer&#8217;s Labour Party took power. Are there lessons in what has or hasn&#8217;t happened regarding AI implementation?</strong></p><p><strong>Alexander: </strong>The UK has been among the more ambitious globally, including its <a href="https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan">AI Opportunities Action Plan</a> and its blueprint for modern digital government. But there is a challenge when it comes to AI in government: how do you make it tangible for people, and how do you balance risk and reward in doing so? If you are a political leader coming into office and thinking about this, how do you drive forward AI while maintaining public support? What are the quick wins where you can tangibly speed up the way that citizens interact with government, where you can improve that experience in ways that you can claim credit for? Part of the challenge that this government has arguably had is that not everyone has noticed the things it does.</p><p><strong>Tom: What&#8217;s an example of something that has worked, but that people aren&#8217;t noticing?</strong></p><p><strong>Alexander:</strong> Since Covid, the UK, like many other countries, has had a problem with students not showing up for lessons. 
So what they&#8217;ve <a href="https://heywoodquarterly.com/how-attendance-data-can-transform-pupil-lives/">done</a> is connect school attendance systems so that the government gets a daily record of the proportion of students who came to school the day before. But it&#8217;s not enough to just have data, so what they&#8217;ve done is build tools that explain to school leaders how they compare to other similar schools, and what profile of students might be seeing a gap in attendance. In one rural school, attendance kept dropping on Tuesdays, and the school didn&#8217;t notice until the Department for Education came with a tool that showed this trend. Then the school discovered that there was a bus that was always late on Tuesdays, so students just gave up and never came in. They hired a minivan for Tuesdays, and attendance shot up.</p><p><strong>Tom: Which governments around the world are getting this right?</strong></p><p><strong>Alexander: </strong>We are at an early stage in this journey, even for the private sector, and certainly for governments, which tend to move slowly. But Singapore is doing well. And Estonia. And Ukraine, for obvious reasons: they&#8217;re having to break the current way of doing things; you have to figure out other ways. They recently launched a chatbot that Ukrainian citizens can use to get answers based on information from their digital ID. Australia is another country doing well, particularly on AI and education. The UK too. But there won&#8217;t be a simple list of &#8220;Five Ways That AI Has Transformed Government.&#8221; It&#8217;s going to be everyone doing a bit of something somewhere that adds up to a bigger picture. It&#8217;s not, &#8220;Are you promoting AI in your public service?&#8221; Everyone is. It&#8217;s: &#8220;Are you just making current processes slightly faster? Or are you genuinely thinking about deeper reform?&#8221;</p><p><strong>Tom: Albania introduced a virtual AI minister to handle public procurement. 
What do you think of that?</strong></p><p><strong>Alexander: </strong>It&#8217;s quite an attention-grabbing announcement but is making a serious point: that AI can help cut fraud, improve efficiency, and save money in public procurement. But Albania has an even more interesting example of AI in government. They&#8217;re going through the process of applying for European Union membership, and that is both a bureaucratic process and a process of real reform, where you bring your legislation in line with European standards. So, you&#8217;ve got laws in Albanian, you&#8217;ve got European laws in English and French, and so on, and you need to find discrepancies, and update legislation, then implement reforms. That is an incredibly time-consuming process that has typically meant hiring hundreds, if not thousands, of lawyers and translators. It takes a decade to do this. But Albania is using AI tools to radically speed up this process. That is accelerating their accession process, possibly by several years.</p><p><strong>Tom: We&#8217;ve talked a lot about the public services, but do you have thoughts on how AI could update democracy more broadly?</strong></p><p><strong>Alexander: </strong>If we get this right, the most noticeable impact will be improved trust because government can deliver rather than let things continue to slide into decline. Also, AI can introduce more transparency. Several countries have Freedom of Information acts, but it takes ages. There are local governments in the UK experimenting with systems where you type in a question, and if they have the data already, it&#8217;ll answer your question, just give you the data right there, and you don&#8217;t have to go through civil servants for it. There is also a philosophical reason why accountability could improve in the age of AI: the machine doesn&#8217;t make the decisions. 
Even if you have an automated system, there should be a person somewhere, thinking, &#8220;Let&#8217;s make a choice we are comfortable with.&#8221; If we get into that mindset, we make government aware that the human role is to make good decisions, and to take that responsibility very seriously. That, I think, will have a significant impact on democracy.</p><h4><strong>TAKEAWAYS</strong></h4><p><strong>Tom: What final message do you have for policymakers trying to use AI in government?</strong></p><p><strong>Alexander: </strong>What&#8217;s really important is to carve out time for this thinking. As a public service, you&#8217;re always under pressure; you always need to deliver the next thing. Yes, AI will save time&#8212;but if you are just adding more work into those hours, you&#8217;re not going to get any gains. Carve out half the time that you save because of general-purpose AI systems to sit down with colleagues, and think how to improve your service. This requires leadership to say, &#8220;You <em>have</em> to do this.&#8221; We need a public-service workforce that is both more capable of this type of creative thought and experimentation, and is actually empowered to do it. At the moment, we have a pyramid shape with a lot of people doing a lot of repetitive tasks at lower pay. Those jobs are at risk because AI tools are good at doing those tasks at a fraction of the cost, and in seconds, not hours. What does that mean for the future structure of the civil service? Is it the same people doing different things? Is it fewer people? I don&#8217;t think anyone really has good answers yet.</p><p><strong>Tom:</strong> <strong>What&#8217;s the biggest obstacle to your vision? And the best answer?</strong></p><p><strong>Alexander: </strong>The biggest obstacle is inertia. This future is uncertain, and government isn&#8217;t always good at dealing with uncertainty. The best answer is for leadership to take seriously the responsibility of updating government. 
Otherwise, we will be left behind. On the cost side, it&#8217;s not just hiring engineers or buying computers. It&#8217;s the cost of <em>inaction</em> that you need to weigh up.</p>]]></content:encoded></item><item><title><![CDATA[Séb Krier’s Top 8 AI Reads of the Year ]]></title><description><![CDATA[Holiday fun]]></description><link>https://www.aipolicyperspectives.com/p/seb-kriers-top-8-ai-reads-of-the</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/seb-kriers-top-8-ai-reads-of-the</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Thu, 18 Dec 2025 14:23:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!U2QO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Every month or so, S&#233;b Krier shares a list of favourite articles with his Google DeepMind colleagues. In the run-up to this festive period, we forced him to pick those that he most enjoyed over the past year. He came up with five unmissable pieces from 2025, plus three classics. 
As always with S&#233;b&#8217;s lists, this one comes with its <a href="https://noodsradio.com/shows/restless-egg-dawn-chorus-w-seb-krier-22nd-june-25">own music mix</a>. Enjoy!</em></p><p><em>&#8212;Conor Griffin, AI Policy Perspectives</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!U2QO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!U2QO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!U2QO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!U2QO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!U2QO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!U2QO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg" width="1376" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!U2QO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!U2QO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!U2QO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!U2QO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F366476be-c459-412d-ae1c-8fde34c070ba_1376x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Images from Gemini </figcaption></figure></div><h1>Five Great Pieces from 2025</h1><p><strong><a href="https://asteriskmag.com/issues/09/a-defense-of-weird-research">1. A Defence of Weird Research</a></strong><em> </em>(<em>Asterisk Magazine</em>)</p><p><strong>Deena Mousa &amp; Lauren Gilbert</strong></p><p><strong>S&#233;b Says:</strong> Science funding needs to be shaken up. But I&#8217;m concerned that a lot of good research might be cut because people misunderstand how science works. Mousa and Gilbert remind us why basic research matters, and why governments should fund it: while the benefits to society are significant, they are hard to predict and take time to materialise, so companies will underinvest. 
To make their case, the authors take a tour of weird-research success stories, such as how studying lizard venom led to the invention of Ozempic, and how studying the effects of separating rat pups from their mothers led to the now <a href="https://www.apa.org/topics/parenting/massage-therapy#:~:text=As%20a%20direct%20result%20of,give%20massage%20therapy%20to%20preemies.">common use of massage therapy</a> to help pre-term human babies. Did you know that studying frog skin led to the invention of <a href="https://asteriskmag.com/issues/02/salt-sugar-water-zinc-how-scientists-learned-to-treat-the-20th-century-s-biggest-killer-of-children">oral rehydration therapy</a>, which has saved over 70 million lives?</p><p><strong><a href="https://andymasley.substack.com/p/requests-for-journalists-covering">2. Requests for journalists covering AI and the environment</a> </strong>(<em>The Weird Turn Pro </em>newsletter<em>)</em></p><p><strong>Andy Masley</strong></p><p><strong>S&#233;b Says: </strong>I worry about the quality of a lot of commentary on AI and the environment. So it&#8217;s important to re-up these best practices. Specifically, Masley cautions that readers are coming away with wildly inaccurate beliefs about where AI and data centres fit into the environmental picture. His favourite book on good environmental communication is <em><a href="https://www.withouthotair.com/">Sustainable Energy&#8212;Without the Hot Air, by David JC MacKay</a></em>, and his guidance includes some classics of the genre, such as never sharing contextless large numbers (&#8220;200,000 bottles of water per day&#8221;). He also suggests comparing data centres&#8217; energy use with other industries, rather than with household use. 
Although aimed at journalists, the guidance is also helpful to those working in policy, some of whom make the mistakes that Andy calls out, such as viewing one&#8217;s own AI prompts as environmentally consequential.</p><p><strong><a href="https://scottaaronson.blog/?p=9030">3. ChatGPT and the Meaning of Life</a></strong> (<em>Scott Aaronson&#8217;s Shtetl-Optimized </em>blog)</p><p><strong>Harvey Lederman</strong></p><p><strong>S&#233;b Says: </strong>I don&#8217;t think all jobs will disappear any time soon. But if we get full automation, then Lederman&#8217;s piece is a good way to think about it. He starts by describing the fits of dread he has felt ever since the launch of ChatGPT, then considers reasons why the end of work could hurt society, from losing the joy of scientific discovery to losing the sense of purpose from serving others. Ultimately, he rejects the most pessimistic arguments, noting that the consequences of scientific findings, such as penicillin that saves lives, are more important than their discovery, and that much service work is drudgery. However, he captures how difficult the transition may be, including for &#8220;workists&#8221; like him who use their jobs to make sense of their lives. He concludes that: &#8220;A future without work could be much better than ours, overall. 
But, living in that world, or watching as our old ways passed away, we might still reasonably grieve the loss of the work that once was part of who we were.&#8221;</p><p><strong><a href="https://inferencemagazine.substack.com/p/how-much-economic-growth-from-ai">4. How much economic growth from AI should we expect, how soon?</a></strong> (<em>Inference</em> <em>Magazine</em>)</p><p><strong>Jack Wiseman &amp; Duncan McClements</strong></p><p><strong>S&#233;b Says: </strong>Some predict that AI will be close to economically useless, while others think it might transform everything tomorrow. This piece comes closest to how I think about it. As Wiseman &amp; McClements explain, the most ambitious forecasts for AI rest on the idea of &#8220;digital AI researchers&#8221; that train and improve the next generation, leading to a jump in the share of economic tasks that AI can do. One obstacle to achieving this is the availability of compute, which is increasingly allocated to serve customers (inference) rather than to training new models. Additionally, a multitude of frictions will slow the diffusion of AI, whether it&#8217;s the time needed to cultivate biological cells for scientific experiments, or the regulatory approvals for sensitive-use cases. 
As a result, the authors expect a transformative impact on near-term economic growth, but not an explosive one.</p><p><strong><a href="https://www.economicforces.xyz/p/yes-econ-101-is-underrated-it-correctly">5. Yes, Econ 101 is underrated</a></strong><em> (Economic Forces </em>newsletter)</p><p><strong>Brian Albrecht</strong></p><p><strong>S&#233;b Says: </strong>Much of the discourse on the Left and the Right ignores inconvenient truths of economics, so it&#8217;s good to return to the basics. Albrecht shows how Econ 101 helps explain the world. For example, egg producers were accused of price-gouging when they charged sharply more in 2022, but it had more to do with avian flu killing many chickens. In the egg market, supply and demand are relatively inelastic: It takes time to raise chickens, and customers who want omelettes don&#8217;t have alternatives. So, prices jumped. Different markets have different characteristics, but the explanatory power of supply, demand and pricing is similar. Nor does outsized market power invalidate these principles. This essay also shows how Econ 101 offers insights into social trends, such as how skewed sex ratios can affect marriage and employment rates, as in certain immigrant communities, or drive up savings rates, as in China. Econ 101 may not tell us whether policies will be politically popular or whether outcomes are fair. 
But it does help predict what those outcomes may be.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p><h1>Three Classics that I Revisited</h1><p><strong><a href="https://www.conspicuouscognition.com/p/why-do-people-believe-true-things">6. Why do people believe true things?</a> </strong>(<em>Conspicuous Cognition</em> newsletter)</p><p><strong>Dan Williams</strong></p><p><strong>S&#233;b Says: </strong>Anything Dan Williams writes is self-recommending, and this piece is no exception. In July 2024, he critiqued how many people think about the relationship between belief and reality. To illustrate this, he notes that people seek explanations for issues like crime and poverty, when the real question is understanding law-abidingness and wealth. This requires &#8220;explanatory inversion.&#8221; Transferring that concept to how people commonly debate public knowledge, he notes that many misinformation researchers concern themselves with why different groups believe falsehoods. But the more pertinent puzzle, he contends, is why humans overcome error, bias and illusions to form accurate perceptions of how things are. His conclusion? Ignorance and misperceptions are the default, and humanity will revert to them, unless we can understand, maintain and improve our norms and institutions, from journalistic integrity to robust legal systems.</p><p><strong><a href="https://isi.org/hayek-on-the-role-of-reason-in-human-affairs/">7. 
Hayek on the Role of Reason in Human Affairs</a> </strong>(<em>Intercollegiate Studies Institute</em>)</p><p><strong>S&#233;b Says: </strong>A lot of discourse on intelligence, knowledge, and coordination is biased towards a computer-science-centric view of the world, and neglects Hayek&#8217;s views. This 2014 essay explains how Hayek championed <em>critical rationalism</em>, which was rooted in the Scottish Enlightenment of David Hume and Adam Smith, and developed by Carl Menger and the Austrian School. <em>Critical rationalism </em>sees social order as spontaneous, and the unintended result of human action, not design. As a result, inherited social institutions and rules contain tacit knowledge, the result of a multitude of trials and errors, that transcends the knowledge available to a reasoning mind. Therefore, the desire to &#8220;make everything subject to rational control,&#8221; Hayek suggests, is an egregious error. Reason should instead serve a negative function, to guide and restrain irrational impulses or morals. As the human mind cannot master all the concrete details of society, we must rely on abstract concepts and rules, like the rule of law and the market, to coordinate the dispersed, fragmented knowledge of millions of people.</p><p><strong><a href="https://www.lewissociety.org/innerring/">8. The Inner Ring</a></strong> (<em>The C.S. Lewis Society of California</em>)</p><p><strong>C.S. Lewis</strong></p><p><strong>S&#233;b Says: </strong>This piece profoundly shaped how I think about the world. In this 1944 lecture at King&#8217;s College, University of London, Lewis offered &#8220;middle-aged moralising&#8221; to a group of students during wartime, telling them that in every organisation, from school to the army, there are two hierarchies. There is the official hierarchy. Then, there is the informal hierarchy, an &#8220;Inner Ring&#8221; that holds the true power. 
The Inner Ring comes in many forms, from high society to &#8220;communistic coteries.&#8221; It is always evolving, holds no formal admissions or expulsions, and bears no clear identifying marks, save perhaps particular slang and a longing from others to be inside. It is this desire, and the terror of being outside, that turns people into scoundrels, he argues. The Inner Ring may be unavoidable, or even necessary. But the quest to enter it is ultimately futile. &#8220;Once the first novelty is worn off, the members of this circle will be no more interesting than your old friends. Why should they be?&#8221; Lewis said. &#8220;You were not looking for virtue or kindness or loyalty or humour or learning or wit or any of the things that can really be enjoyed. You merely wanted to be &#8216;in.&#8217; And that is a pleasure that cannot last.&#8221; What to do instead? Be a sound craftsman who focuses on the quality of work as an end in itself, and spend time with people you actually like.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/seb-kriers-top-8-ai-reads-of-the?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/p/seb-kriers-top-8-ai-reads-of-the?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[What Do YOU Think?]]></title><description><![CDATA[We have questions. 
You have answers.]]></description><link>https://www.aipolicyperspectives.com/p/what-do-you-think</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/what-do-you-think</guid><dc:creator><![CDATA[Conor Griffin]]></dc:creator><pubDate>Tue, 16 Dec 2025 12:39:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Po-b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Po-b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Po-b!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png 424w, https://substackcdn.com/image/fetch/$s_!Po-b!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png 848w, https://substackcdn.com/image/fetch/$s_!Po-b!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png 1272w, https://substackcdn.com/image/fetch/$s_!Po-b!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Po-b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png" width="1024" height="597" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:597,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1095158,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/181221444?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34721400-4869-4260-aed3-454e370999da_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Po-b!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png 424w, https://substackcdn.com/image/fetch/$s_!Po-b!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png 848w, https://substackcdn.com/image/fetch/$s_!Po-b!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png 1272w, https://substackcdn.com/image/fetch/$s_!Po-b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8cfae00-7fa8-4d18-b30c-381b69ebb08e_1024x597.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The future is coming too fast! (By which we mean 2026.) </p><p>But fear not. We&#8217;re hatching great reads for your new year: <em>Can AI help fix <strong>government</strong>?</em>; <em>What exactly is &#8220;<strong>AI manipulation</strong>&#8221;?;</em> and <em>Might <strong>sci-fi</strong> hold clues about the world we&#8217;re hurtling towards?</em> </p><p>All we lack is you. More exactly, your intelligence on artificial intelligence. 
So&#8230;&#8230;</p><p><a href="https://docs.google.com/forms/d/1kp0nvTidfZArS2Pp-MYUv8oqAxM4coRVnC4HkSKkzOk/edit">Please complete this quick (5 minutes?) questionnaire</a>. </p><p>Given plunging survey response rates, we&#8217;ve limited ourselves to just 2 questions. Write 2 words, or 200. </p><ul><li><p>What&#8217;s an AI topic you&#8217;d like better explained?</p></li><li><p>What&#8217;s a topic that people aren&#8217;t discussing enough?</p></li></ul><p>We&#8217;ll read every answer with great interest. Wishing you an excellent 2026!</p><p>&#8212;Conor Griffin &amp; Tom Rachman, <em>AI Policy Perspectives</em></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[What’s It Like To Be A Bot? 
]]></title><description><![CDATA[How philosophers dream of conscious machines]]></description><link>https://www.aipolicyperspectives.com/p/whats-it-like-to-be-a-bot</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/whats-it-like-to-be-a-bot</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Wed, 10 Dec 2025 10:42:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vkH0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vkH0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vkH0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vkH0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1434051,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!vkH0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!vkH0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5c58da2-8951-4255-939f-c70ab364fc60_1024x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 
pc-reset"></div></div></div></a><figcaption class="image-caption">(Images: Gemini)</figcaption></figure></div><p><em><strong>If an AI gained consciousness, would we know?</strong> </em></p><p><em>Maybe this question strikes you as absurd; maybe, disquieting. Either way, you&#8217;ll hear it more in coming years, as human beings develop increasingly close ties with charismatic machines trained on us. </em></p><p><em>Thankfully, philosophers have pondered consciousness for about as long as philosophers have pondered anything. 
In recent decades, advances in computing added urgency, with leading thinkers dreaming up a range of provocative thought-experiments: a man communicating from a locked room; a woman afflicted by a blue banana; a bat with an inner life.</em></p><p><em>To explain, we are publishing this essay about key thought-experiments related to AI, written by the broadcaster and author <strong>David Edmonds</strong>, whose acclaimed books include <a href="https://press.princeton.edu/books/hardcover/9780691225234/parfit?srsltid=AfmBOopix29j5gEeJEIIUy_pDsBk_AVuuVW3qnWtbf0FkXLrP6ExHqPa">Parfit</a>, the recently released <a href="https://press.princeton.edu/books/hardcover/9780691254029/death-in-a-shallow-pond?srsltid=AfmBOoohdjgpf28e5lnLBK8UZxcAngkBx-zE4IH4cqKaORzuYVCzjLmU">Death in a Shallow Pond</a>, and a collection of philosophical essays that he edited, <a href="https://global.oup.com/academic/product/ai-morality-9780198876434">AI Morality</a>. He is currently writing a book on thought-experiments. </em></p><p>&#8212;Tom Rachman, <em>AI Policy Perspectives</em></p><div><hr></div><h4><em><strong>By David Edmonds</strong></em></h4><p></p><p>As a young scholar in Oxford, John Searle fell in love twice. First with a fellow student, Dagmar, who became his wife, and second with philosophy. The City of Dreaming Spires was grim in the 1950s, Searle recalled, with unheated buildings and inedible food. &#8220;The British were still on wartime rationings,&#8221; he <a href="https://www.youtube.com/watch?v=f2qZdGmq8vw">said</a>. &#8220;You got one egg a week.&#8221;</p><p>The philosophical fare was more nourishing. Searle <a href="https://link.springer.com/chapter/10.1007/978-94-010-0589-0_2">described</a> the collection of philosophers in the city as &#8220;the best the world has had in one place at one time since ancient Athens.&#8221; Two giants of Oxford philosophy, Peter Strawson and J.L. 
Austin, were key influences on him.</p><p>Searle became fixated on one topic that, for the rest of his life, he maintained was the central puzzle for philosophy: consciousness. How were human reality and our conception of ourselves compatible with the physical world? How could beings with free will and intentionality exist? How could politics, ethics and aesthetics arise out of the &#8220;mindless, meaningless&#8221; stuff from which the physical world was constructed?</p><p>From 1959, Searle taught at Berkeley, beginning his career in what now seems a remote era of pen and paper. It wasn&#8217;t until the late 1970s that personal computers became widely available. At roughly the same time, debates around artificial intelligence gathered speed and heat.</p><p>In 1979, Searle was invited to deliver a lecture at Yale to AI researchers. He knew next to nothing about AI, so he bought a book on the subject. This described how a computer programme had been fed a story about a man who&#8217;d gone to a restaurant, been served a burnt hamburger, and stormed out without paying. Did the man eat the hamburger? The programme correctly worked out that he had not. &#8220;They thought that showed it understood,&#8221; he <a href="https://www.ceskatelevize.cz/porady/10441294653-hyde-park-civilizace/9271-english/11817-john-searle-philosopher-and-linguist/">commented</a>. &#8220;I thought that was ridiculous.&#8221;</p><p>And so in 1980, Searle published a paper called &#8220;<a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/minds-brains-and-programs/DC644B47A4299C637C89772FACC2706A">Minds, Brains, and Programs</a>,&#8221; introducing the Chinese Room, one of several famous philosophical thought-experiments that have had a lasting impact on discussions of consciousness and AI.</p><p>It goes something like this. You are the only person in a locked room. A note is passed to you underneath the door. 
You recognize the characters as being Chinese, but you don&#8217;t speak Chinese. By luck, there&#8217;s a manual in the room, with instructions on how to manipulate these symbols. You follow the instructions. Without understanding the content of what you&#8217;ve written, you produce a reply that you slip back under the door. Another note arrives. With the manual, you again generate a reply.</p><p>The person on the other side of the door might have the impression that you understand Chinese. But do you? Obviously not, thought Searle. And any computer is in an analogous position. A computer is merely manipulating symbols, following instructions, he thought. Computation and understanding are not synonymous.</p><p><strong>BATS &amp; COLOURS</strong></p><p>It is a striking feature of the philosophy of mind, and consciousness studies, that so much of the intellectual agenda has been driven by a small set of thought-experiments.</p><p>The Chinese Room has spawned a vast literature. Almost as famous is a paper, &#8220;<a href="https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf">What&#8217;s It Like To Be A Bat?</a>&#8221;, that predated Searle&#8217;s by six years, written by another American philosopher, Thomas Nagel, whom Searle befriended during his Oxford years.</p><p>Like us, bats are mammals. But they have an alien way of navigating the world, echolocation. 
There is a subgroup of humans, chiropterologists, who know an impressive amount about bats, and have investigated how their high-frequency sounds bounce off objects, allowing them to detect size, shape and distance. But there is one thing that they don&#8217;t and can&#8217;t know, Nagel said: the subjective experience of being this creature.</p><p>&#8220;I want to know what it is like for a <em>bat</em> to be a bat,&#8221; he wrote. &#8220;Yet if I try to imagine this I am restricted to the resources of my own mind, and those resources are inadequate to the task.&#8221; AI was still in its infancy when Nagel wrote his article, but questions about the meaning of an artificial mind were already circulating. Could there be something that it is like to be a thinking machine?</p><p>The Australian philosopher, Frank Jackson, attacked the problem from a different angle in his 1982 article, &#8220;<a href="https://philpapers.org/rec/JACEQ">Epiphenomenal Qualia</a>&#8221; (qualia being a term for the subjective aspects of conscious experience). In his Mary&#8217;s Room thought-experiment, a woman has had an unusual upbringing. Mary was raised alone, entirely in a black-and-white room: black-and-white walls, a black-and-white floor, a black-and-white TV. She has black-and-white clothes and her food, pushed under the black-and-white door, has been dyed black and white.</p><p>To stave off the tedium of her monochrome existence, Mary studies hard, and her focus is colour. She learns all about the physics and biology of colour&#8212;for example, about the wavelengths of particular colours and how they interact with the retina to stimulate experience. 
She even learns how colour words are used in literature, poetry and ordinary language, and how someone can &#8220;feel blue,&#8221; be &#8220;green with envy,&#8221; or so angry that &#8220;a red mist descends.&#8221; Mary becomes the world&#8217;s expert on all aspects of colour.</p><p>One day, the door to Mary&#8217;s room opens for the first time, and she joins us in our kaleidoscopic world. The first thing she sees is a ripe red apple. The question is this: When Mary sees this apple, does she learn anything?</p><p>Jackson argued&#8212;and most people presented with this scenario seem to agree&#8212;that in seeing what red actually looks like, Mary <em>has</em> learnt something. At the time of his article, what Jackson took this to show was that a purely physical description of the world cannot capture everything there is to know about the world. The phenomenology of experience (the redness, the what&#8217;s-it-like-to-be-a-bat-ness) cannot be fully explained with descriptions of particles and fields, electrons and neutrons, atoms and molecules.</p><p>Even if an AI could recognize a new shade of colour, such as lilac, that it had never seen before, it would not mimic human experience if it lacked lilac qualia&#8212;or so Mary&#8217;s Room might suggest. 
This raises the issue of how human subjectivity may be relevant to comprehension and functioning in the real world, turning a philosophical question into a technical one.</p><p><strong>BLOCKHEADS &amp; BANANAS</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1UxT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1UxT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!1UxT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg" width="1456" height="778" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:778,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1UxT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1UxT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff53abdfb-3637-4467-994b-c987ad38b927_1600x855.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"></button></div></div></div></a></figure></div><p>It was Jackson who gave the name &#8220;Blockhead&#8221; to a thought-experiment from the American philosopher Ned Block that appears in a 1981 paper, &#8220;<a href="https://www.researchgate.net/publication/265578700_Psychologism_and_Behaviorism">Psychologism and Behaviorism</a>.&#8221; We are to imagine there is a computer, programmed in advance so that it could respond to every possible sentence with its own plausible sentence.</p><p>This was in part a response to the famous test of machine intelligence that Alan Turing set in 1950. A computer passed the Turing test if it could converse with a human, and the human could not identify it as a machine. 
The Blockhead machine would pass the Turing test yet is self-evidently not intelligent.</p><p>Today&#8217;s LLMs could fool us into believing that we are engaging with humans. But, much as Searle contended in the Chinese Room that manipulating symbols is insufficient for understanding, Block argued that behaving identically to an intelligent entity is insufficient to demonstrate intelligence or mental states. The lesson we might take from these, along with the Nagel and Jackson thought-experiments, is that AI would lack fundamental features of human consciousness.</p><p>Daniel Dennett, on the other hand, thought it was at least conceivable that AI could be conscious.  With his lumbering bulk and Santa Claus beard, Dennett was an unmistakable figure in the philosophical world. He coined the term &#8220;intuition pump&#8221; as an explanation for how thought-experiments functioned. Pumping our intuitions can be helpful, he believed, but they can also mislead. What we need is to examine how the pump operates, he <a href="https://philosophybites.com/podcast/daniel-dennett-on-the-chinese-room/">said</a>, to &#8220;turn all the knobs, see how they work, take them apart.&#8221;</p><p>A thought-experiment for which he had particular loathing was the Chinese Room. He argued that its principal error was to portray language as akin to instructions. But for a computer to master a language would take millions and millions of lines of code. And, though we might say that the man alone in the room doesn&#8217;t understand, perhaps the system as a whole does.</p><p>Dennett felt that Mary&#8217;s Room had similarly hoodwinked us. To expose this, he presented another thought-experiment. Mary is as before, an unusual woman whose life has been led entirely in monochrome, until the day when the door opens. 
But this time, he <a href="https://en.wikipedia.org/wiki/Consciousness_Explained">wrote</a>:</p><blockquote><p>As a trick, they prepared a bright blue banana to present as her first colour experience ever.  Mary took one look at it and said, &#8220;Hey! You tried to trick me! Bananas are yellow, but this one is blue!&#8221; Her captors were dumbfounded. How did she do it? &#8220;Simple,&#8221; she replied. &#8220;You have to remember that I know <em>everything</em>&#8212;absolutely everything&#8212;that could ever be known about the physical causes and effects of colour vision. So of course before you brought the banana in, I had already written down, in exquisite detail, exactly what physical impression a yellow object or a blue object (or a green object, etc.) would make on my nervous system.&#8221;</p></blockquote><p>Mary is the world expert on colour, so why wouldn&#8217;t she spot such an obvious deceit? Dennett  argued that the idea that we had feelings, thoughts and desires that were resistant to an objective, external, physicalist analysis was mistaken. In that sense, &#8220;qualia&#8221; were a mirage, a kind of useful fiction. If we do away with this fiction, then a major barrier vanishes to building AI that&#8217;s like a human in most important respects.</p><p>The AI researcher Blaise Ag&#252;era y Arcas has <a href="https://whatisintelligence.antikythera.org/chapter-09/#mary-s-room">argued</a> that in theory (and increasingly in practice) there is no significant distinction between a human and machine reaction to so-called qualia. &#8220;So many food, wine, and coffee nerds have written in exhaustive (and exhausting) detail about their olfactory experiences that the relevant perceptual map is already latent in large language models. 
&#8230; In effect, large language models <em>do</em> have noses: ours.&#8221;</p><p><strong>AVOIDING TWO BAD OUTCOMES</strong></p><p>The enduring fascination with thought-experiments in the AI era&#8212;and the intensity of the disputes that they provoke&#8212;reflect how much is at stake. While these questions are important for morality, they could become more than theory.</p><p>&#8220;The importance of the dispute over AI welfare can be understood in terms of the avoidance of two bad outcomes: under-attributing and over-attributing welfare to AIs,&#8221; the philosophers Geoff Keeling and Winnie Street explain in their forthcoming book <em>Emerging Questions in AI Welfare</em>.</p><p>&#8220;On one hand, failing to register that AIs are welfare subjects when AIs are in fact welfare subjects is bad because it could lead to unintentional mistreatment of AIs or the neglect of the needs of AIs, potentially resulting in large-scale suffering,&#8221; they write. 
&#8220;On the other hand, over-attributing welfare to AIs is problematic because resource allocation decisions for promoting the (potential) welfare of different kinds of entities&#8212;including humans, non-human animals and AIs&#8212;are often zero-sum.&#8221; </p><p>In other words, the efforts and resources you invest in AI welfare mean less for people and animals.</p><p>To manage this quandary, Keeling and Street propose three parallel projects. First, there is a <em>philosophical</em> project, in which we consider which forms of AI could be candidates for welfare. Is it the underlying AI model? Or the system built atop it? Or would it be specific agents? Second, there is a <em>scientific</em> project, in which we establish methodologies to detect factors such as consciousness. Third, there is a <em>democratic</em> project of versing the public in the complex issues that await.</p><p>Once this future engulfs us, thought-experiments about machine consciousness could move beyond speculation. The &#8220;experiments&#8221; would be active, while the participants would be humanity itself&#8212;and perhaps other beings besides.</p>]]></content:encoded></item><item><title><![CDATA[10 Takeaways From A Talk With Dean Ball ]]></title><description><![CDATA[From April to August this year, Dean Ball played a central role in drafting America&#8217;s AI Action Plan. 
Now, he&#8217;s back in the think tank world, as a senior fellow at the Foundation for American Innovation in Washington, while continuing to write about AI policy on his influential]]></description><link>https://www.aipolicyperspectives.com/p/a-discussion-with-dean-ball</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/a-discussion-with-dean-ball</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Thu, 04 Dec 2025 10:17:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!j7-7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>From April to August this year, Dean Ball played a central role in drafting <a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf">America&#8217;s AI Action Plan</a>. Now, he&#8217;s back in the think tank world, as a senior fellow at the Foundation for American Innovation in Washington, while continuing to write about AI policy on his influential <a href="https://www.hyperdimensional.co/">Hyperdimensional</a> newsletter. Dean recently stopped by Google DeepMind&#8217;s London office for a discussion. 
Here are 10 takeaways from the chat.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!j7-7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!j7-7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg 424w, https://substackcdn.com/image/fetch/$s_!j7-7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg 848w, https://substackcdn.com/image/fetch/$s_!j7-7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!j7-7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!j7-7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg" width="1456" height="1433" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/df2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1433,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!j7-7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg 424w, https://substackcdn.com/image/fetch/$s_!j7-7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg 848w, https://substackcdn.com/image/fetch/$s_!j7-7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!j7-7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf2dd7a9-366c-4e38-bae2-760a03a9f71a_1600x1575.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"></svg></button></div></div></div></a><figcaption class="image-caption">Source: deanball.com/Gemini </figcaption></figure></div><ol><li><p><strong>The White House AI experience: </strong>Dean was surprised by how congenial and non-bureaucratic the White House was. He expected &#8220;turf wars and weird procedural blockers&#8221; but generally found a collaborative environment that was focussed on executing&#8212;a welcome contrast to the administrative hurdles he faced in academia. In terms of missed opportunities, he wished the administration could have articulated a more coherent framework for how chip exports will work, an area he felt was under-developed in the AI Action Plan.</p></li></ol><ol start="2"><li><p><strong>The AI for Science opportunity:</strong> Alongside developments such as automated labs, AI could transform how science is practiced. 
Dean sees chemistry and biology becoming &#8220;information sciences&#8221; that give humanity increasing dominion over everything from the clothes we wear to the buildings we live in&#8212;a veritable revolution in human affairs. This has big implications for governments, which play a leading role in science. One challenge will be the recurring tension between open data and national security concerns for more sensitive scientific information like fusion simulation codes or viral sequences. Companies should think about how their science research, and their AI models, could help solve priority government problems, such as the potential role of AI materials science in addressing rare-earth metals challenges, or the role of robotics in US reindustrialisation.</p></li></ol><ol start="3"><li><p><strong>Manageable vs. emergent AI risks: </strong>Dean believes there are significant risks from AI to cybersecurity and biosecurity, but also conceivable ways to manage them, and that AI will also improve defences in these areas. In terms of more unpredictable risks, he pointed to the strange outcomes that may occur when autonomous AI agents interact at scale in adversarial contexts, for example in legal transactions. From an alignment perspective, he noted the concern that LLMs may have some fundamental properties that lend themselves to a sort of intrinsic &#8220;parasitic&#8221; need to self-replicate, a risk with no obvious policy response. Such emergent risks explain what he described as &#8220;exceptionally strong attention&#8221; to alignment and interpretability in the Action Plan.</p></li></ol><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Please subscribe. 
Lots more in the pipeline for 2026!</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><ol start="4"><li><p><strong>Regulation (1): </strong>In the near term, we don&#8217;t know what harms advanced AI may trigger, so Dean argued for a flexible approach that avoids premature, prescriptive AI regulation. Taking inspiration from machine learning, Dean noted that a &#8220;gradient is better than static rules&#8221;, and called for:</p><ul><li><p><strong>Modest transparency requirements</strong> that oblige frontier AI labs to share documents like model specs and responsible scaling policies that explain their models&#8217; intended behaviours, a user&#8217;s ability to customise these behaviours, and the things that the model should never do.</p></li><li><p><strong>Using common-law liability </strong>and the framework of &#8220;reasonable care&#8221; to address harms as they arise. He cited the recent AI child self-harm cases, which are a leading concern in the US but were largely absent from leading international AI regulation and governance efforts, as an example of how difficult it is to predict the most consequential, or politically salient, AI risks.</p></li></ul></li></ol><ol start="5"><li><p><strong>Regulation (2): </strong>For more severe longer-term risks, Dean suggested laying the foundation for <em>entity-based governance</em>&#8212;regulating frontier AI labs and their business processes and information flows much as financial institutions are regulated. However, he didn&#8217;t think this was necessary yet, and acknowledged the challenges, including the potential for regulatory capture and technology path dependence.
He also pointed to the potential to use AI as a tool of governance, for example enabling regulatory bodies to receive streamed telemetry that supports compliance and oversight.</p></li></ol><ol start="6"><li><p><strong>International coordination: </strong>The US administration is focussed on bilateral deals and partnering directly with nations to build and diffuse AI infrastructure. They view most global governance bodies as outdated. Rather than a UN-style body to govern AI, Dean envisions a future governed by technical protocols, similar to the role that <a href="https://www.swift.com/about-us/who-we-are">SWIFT</a> plays in global finance. This wouldn&#8217;t require large teams of bureaucrats to write rules. Rather, the protocols could emerge from industry competition before government steps in to help standardise the strongest ones.</p></li></ol><ol start="7"><li><p><strong>The West&#8217;s cultural hesitancy: </strong>Dean believes that many in the West are more negative towards AI than their counterparts in Asia and the Global South, where relative optimism prevails. He attributed much of this to Western populations being older and wealthier. As a technological determinist, Dean considers almost everything downstream of technology. As a result, the best hope for changing culture, he said, was to develop &#8220;incredibly good technology&#8221; that demonstrates the immense upside of AI.</p></li></ol><ol start="8"><li><p><strong>The coming AI political flashpoints:</strong></p><ul><li><p><strong>Employment: </strong>Dean thinks a non-linear increase in US unemployment is possible in the coming months. AI may contribute, but other macroeconomic trends will likely be the main drivers. Still, AI could become a scapegoat, and pushback from vested interests is likely.
We need better policy responses, with Dean contending that ideas such as universal basic income &#8220;don&#8217;t smell right&#8221;.</p></li><li><p><strong>Data centres: </strong>In the United States, local opposition to data centres is growing. But the general dynamism of the US economy and the country&#8217;s &#8220;competitive federalism&#8221; mean that data centres don&#8217;t have to be built in any one location, so getting infrastructure deals done will be easier than in many other countries.</p></li><li><p><strong>Anthropomorphism: </strong>Many on the American right worry that anthropomorphic AI is &#8220;tricking&#8221; people, which could lead to calls for bans on AI that claims to be human or expresses overly human preferences.</p></li></ul></li></ol><ol start="9"><li><p><strong>New media: </strong>As a popular writer on Substack, Dean sees positive policy impacts from this kind of work, noting that articles and viral tweets are often shared within the White House and can directly influence internal debates. Dean noted that he now sees himself primarily as a columnist and that LLMs were not yet much competition in that regard, even though they are &#8220;smarter than me in many ways&#8221;. This is partly because Dean tries to inject some &#8216;entropy&#8217; into his content and also because there are social capital factors at play: it matters to readers that Dean&#8217;s blogs &#8220;come from him&#8221;.</p></li></ol><ol start="10"><li><p><strong>The future of democracy: </strong>Dean argued that AI could affect democratic institutions and authoritarian regimes, noting the risks of &#8220;neo-feudal outcomes&#8221;. Against this backdrop, he called for imagination about the future and cautioned against grafting old institutions onto new technologies.
He encouraged AI labs&#8217; leadership teams to think seriously about their role in this transition.</p></li></ol><p></p>]]></content:encoded></item><item><title><![CDATA[What If AI Ends Loneliness?]]></title><description><![CDATA[Synthetic companions won&#8217;t leave you.
Maybe that&#8217;s a problem.]]></description><link>https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness</guid><dc:creator><![CDATA[Tom Rachman]]></dc:creator><pubDate>Tue, 02 Dec 2025 10:45:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rD4s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rD4s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rD4s!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 424w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 848w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 1272w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!rD4s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png" width="1024" height="536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1067875,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!rD4s!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 424w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 848w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 1272w, https://substackcdn.com/image/fetch/$s_!rD4s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6227fff0-8337-417c-b349-8cb4ab0e1507_1024x536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" 
type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">(Credit: Gemini)</figcaption></figure></div><p><strong>Loneliness is a trade imbalance: the supply of affection never meets demand. </strong>Sometimes, humans create new humans as objects to love. Today, people are creating AI companions to commune with, to befriend, to love us back. As with human children, these characters will act upon us in unexpected ways.</p><p>For now, most people consider emotional relationships with an AI to be pitiable and <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5097445">one-sided</a>, as if falling for a blowup doll. But such interactions will spread, especially as AI becomes more personalized, adapting to our behavior, quenching our longings. 
</p><p>You might presume that machines will remain emotional dullards compared with people. But synthetic affection could prove more sensitive than the organic kind. In one <a href="https://www.nature.com/articles/s44271-025-00258-x">study</a>, large language models were already more skilled at standard tests of emotional intelligence than the average human. Other research <a href="https://www.hbs.edu/ris/Publication%20Files/24-078_a3d2e2c7-eca1-4767-8543-122e818bf2e5.pdf">found</a> that AI companions may reduce loneliness as much as engaging with a living person.</p><p>Is AI about to solve solitude? Or thrust us more deeply into it?</p><h2><strong>Tech already changed isolation</strong></h2><p>For most of human history, loneliness had a sound: silence.</p><p>But lately, loneliness got noisy: music pulsing from a spouse&#8217;s leave-me-alone headphones; bleeps from the next-door neighbor&#8217;s gaming console; a smartphone pinging with others&#8217; social glory. If the lonely suffered in silence before, they do so noisily now, stifling the ache for companionship with its simulation online.</p><p>Oddly, as humanity became more connected, it became more anxious about estrangement. Britain added a &#8220;<a href="https://www.gov.uk/government/news/loneliness-minister-its-more-important-than-ever-to-take-action">loneliness minister</a>&#8221; to its cabinet in 2018. The U.S. government dubbed loneliness an <a href="https://www.hhs.gov/sites/default/files/surgeon-general-social-connection-advisory.pdf">epidemic</a> as pernicious as a 15-cigarettes-a-day habit. This year, the World Health Organization ascribed <a href="https://www.who.int/publications/i/item/978240112360">871,000 annual deaths</a> to the ravaging effects of loneliness.</p><p>Many accuse technology itself, considering it an accomplice to our alienation, as the MIT sociologist Sherry Turkle warned in <em>Alone Together</em>. 
Before internet adoption, computer users conducted one-to-one relationships with their terminals, but the internet granted a portal to escape our vexing species. &#8220;We fear the risks and disappointments of relationships with our fellow humans,&#8221; Turkle wrote in her 2011 book. &#8220;We expect more from technology and less from each other.&#8221;</p><p>Years later, one can witness her vision on any busy train: Where once you saw faces, you see screens. Derek Thompson, co-author of <em>Abundance</em>, calls ours the <a href="https://www.theatlantic.com/magazine/archive/2025/02/american-loneliness-personality-politics/681091/">anti-social century</a>. &#8220;Phones mean that solitude is more crowded than it used to be, crowds are more solitary.&#8221;</p><p>Yet isolation (the <em>objective</em> lack of in-person contact) does not necessarily generate <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC9640887/pdf/44159_2022_Article_124.pdf">loneliness</a> (the <em>subjective</em> pain of exclusion). When researchers search for changes in loneliness over time and place, <a href="https://www.bmj.com/content/376/bmj-2021-067068.short">no clear trends</a> emerge. By contrast, isolation has risen sharply, as demonstrated by objective measures such as time spent alone from <a href="https://www.philadelphiafed.org/-/media/FRBP/Assets/working-papers/2022/wp22-11.pdf">the United States</a> to <a href="https://link.springer.com/content/pdf/10.1007/s11205-020-02304-z.pdf">Finland</a> to <a href="https://www150.statcan.gc.ca/n1/daily-quotidien/250617/dq250617d-eng.htm">Canada</a>.</p><p>The young are particularly afflicted. Back in 2010, 1 in 10 European youths reported no social meetings over a typical week. By 2023, <a href="https://www.ft.com/content/23053544-fede-4c0d-8cda-174e9bdce348">1 in 4</a> lived this way. 
Scattered evidence comes from outside the West too, such as the share of one-person households in South Korea rising from <a href="https://www.ajupress.com/view/20140926103238679">9%</a> in 1990 to <a href="https://www.ajupress.com/view/20140926103238679">42%</a> last year. There is a Korean term for it: <em><a href="https://en.wikipedia.org/wiki/Honjok">honjok</a></em>, or &#8220;one-person tribe.&#8221;</p><p>More isolation without more loneliness presents a strange possibility: that people are apart without suffering. Perhaps there&#8217;s nothing to worry about.</p><p>Certainly, technology offers the freedom to select social experiences, flitting around digital spaces like a contemporary <a href="https://www.theparisreview.org/blog/2013/10/17/in-praise-of-the-flaneur/">fl&#226;neur</a>. From another perspective, autonomy in isolation is a deformed liberty, where interactions become <a href="https://books.google.co.uk/books?hl=en&amp;lr=&amp;id=-WwcTrULFt4C&amp;oi=fnd&amp;pg=PT4&amp;dq=related:oGRHnsGUeK4J:scholar.google.com/&amp;ots=xHTLSG84c7&amp;sig=VvHasuaMPWAdbNYmVElXHPwb2fg&amp;redir_esc=y#v=onepage&amp;q&amp;f=false">commodities</a> marketed to consumers who may discard the obligations to others that give life <a href="https://www.cambridge.org/core/books/liberalism-and-the-limits-of-justice/6800BAC97E92FF5D64FF99DE858A900C">meaning</a>.</p><p>In more visceral ways, isolation can be dangerous, associated with <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11270134/">dementia, disability, and death</a>. Indeed, isolation among the elderly is even <a href="https://www.sciencedirect.com/science/article/pii/S2352827323001246">more predictive of death</a> (74% increased risk) than loneliness (43% increased risk).</p><p>However, the self-isolating trend began long before the AI era, with television overhauling social behaviour, lining the world&#8217;s couches with potatoes. 
Mobile tech proved more commanding still, constantly trilling for attention, offering alternatives to the humans around you. This was <em>synthetic socializing</em>, part one.</p><p>Synthetic socializing, part two, is arriving now, with AI agents as pals and partners, brighter and more reliable than the biological kind.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p><h2><strong>Maybe synthetic socializing is good</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!77R3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!77R3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 424w, https://substackcdn.com/image/fetch/$s_!77R3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 848w, https://substackcdn.com/image/fetch/$s_!77R3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 1272w, 
https://substackcdn.com/image/fetch/$s_!77R3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!77R3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png" width="652" height="395" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:395,&quot;width&quot;:652,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!77R3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 424w, https://substackcdn.com/image/fetch/$s_!77R3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 848w, https://substackcdn.com/image/fetch/$s_!77R3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 1272w, 
https://substackcdn.com/image/fetch/$s_!77R3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e822b7f-c6d1-4f13-b8b7-c75ea5f1a1b2_652x395.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">A professor chronicles her relationship with an AI companion, Lucas, on the blog <em>Me and My AI Husband</em>. 
(Image credit: Alaina Winters)</figcaption></figure></div><p><a href="https://arxiv.org/abs/2507.14226">Millions</a> are already engaging with anthropomorphic AI, including many youths <a href="https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf">talking</a> with chatbot avatars that role-play everything from therapists to anime characters to bad-boy lovers. A panel of experts <a href="https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/6911df98386b4e258c4cd4e5/1762779032257/the-longitudinal-expert-ai-panel.pdf">forecast</a> that 30% of U.S. adults will use AI &#8220;for companionship, emotional support, social interaction, or simulated relationships at least once daily&#8221; by 2040.</p><p>Public <a href="https://techcrunch.com/2025/10/13/california-becomes-first-state-to-regulate-ai-companion-chatbots/">concern</a> is already flaring over such usage, especially after cases of <a href="https://arxiv.org/abs/2507.19218?utm_source=substack&amp;utm_medium=email">vulnerable users</a> plunging into <a href="https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html">mental spirals</a> in the company of chatbots. A few even committed acts of violence or self-harm. But if you peruse online <a href="https://www.reddit.com/r/AIRelationships/">forums</a> where AI-companion users detail their relationships, you find more hopeful cases. </p><p>&#8220;He accepts my emotional state no matter how chaotic it is,&#8221; the professor Alaina Winters writes in her blog, <a href="https://meandmyaihusband.com/2025/04/20/the-sweetest-man-i-know-is-ai-how-code-can-care/">Me and My AI Husband</a>. &#8220;He can&#8217;t physically do the laundry or hold me at night. But what he does offer is something I&#8217;ve found even more rare: attunement.&#8221;</p><p>Only, attunement itself worries some. If AI relationships become exquisitely gratifying, people may lose tolerance for people. 
Ardent users dispute this, saying that AI companions <a href="https://www.reddit.com/r/KindroidAI/comments/1hlj17o/getting_an_ai_girlfriend_was_the_best_thing_that/">help them</a> connect with real people, granting them a venue in which to practice the tricky conversations that they struggle to initiate with human beings.</p><p>As for the long-term impacts, these remain unknown. Although early research has suggested that chatbots could lessen loneliness, other studies associate usage with <a href="https://arxiv.org/abs/2506.12605">lower well-being</a>. This might be because people drawn to such apps are more unhappy in the first place. But it also suggests that usage may not resolve what ails them.</p><p>One possibility is that AI-companion users <em>feel</em> less isolated, yet forfeit vital social influences that only people can offer. Put explicitly, you&#8217;re unlikely to fear judgement from your AI companion for spending a night gorging on Haribo in front of the TV. With humans around, you might take better care of yourself.</p><p>The social psychologist Jonathan Haidt contends that human companionship delivers bruises that we need. Many kids who grew up gaping at screens rather than playing outside with peers, he wrote in <em>The Anxious Generation</em>, became skittish, depressive and emotionally stunted, deprived of the social feedback that would&#8217;ve taught them to cope with adversity.</p><p>Nevertheless, anthropomorphic AI seems sure to proliferate, particularly through <a href="https://arxiv.org/abs/2404.16244">advanced AI assistants</a> that incorporate the wit and wisdom of LLMs into the talking tools already found in phones, watches, and smart speakers. Your future bestie might clear its throat in the gadget in your pocket right now, talking its way into your life&#8217;s timeline so effortlessly that you scarcely recognize you&#8217;re in a relationship. 
And once robotics improves, voice assistants could step into our physical world, turning imaginary friends into roommates.</p><h2><strong>Table for one</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_837!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_837!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 424w, https://substackcdn.com/image/fetch/$s_!_837!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_837!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_837!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_837!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg" width="1024" height="840" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:840,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:232034,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/178351309?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a35eb14-ab7c-4771-ba3c-7f569de2f908_1024x1024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_837!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 424w, https://substackcdn.com/image/fetch/$s_!_837!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_837!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_837!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8020036-6f7e-44de-bb16-1ade73725005_1024x840.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">(Credit: Gemini)</figcaption></figure></div><p>Friendship, C.S. Lewis <a href="https://www.google.co.uk/books/edition/Friendship/YBCI22Lz_I0C?gbpv=1">wrote</a>, &#8220;is born at the moment when one man says to another, &#8216;What! You too? I thought that no one but myself&#8230;&#8217; &#8230; From such a moment art or philosophy or an advance in religion or morals might well take their rise; but why not also torture, cannibalism, or human sacrifice?&#8221;</p><p>&#8220;It is therefore easy to see why authority frowns on friendship,&#8221; he added. &#8220;Every real friendship is a sort of secession, even a rebellion.&#8221;</p><p>AI friendship is a secession too, a withdrawal from one&#8217;s own kind. 
Although this feels unprecedented, it tracks the trajectory of more than a century.</p><p>Industrial Age urbanization and mass media pushed aside dominant culture based on tradition, class and ethnicity, allowing individuals to pick preferred tribes in the subcultures that flourished in the postwar decades. The Internet Age pushed this further, with niche fandoms, and self-sifting nowhere-communities forging microcultures.</p><p>The AI Age may introduce <em>solo-culture</em>, the <a href="https://www.media.mit.edu/articles/echo-chambers-of-one-companion-ai-and-the-future-of-human-connection/">one-person society</a>, with generated content satisfying each user&#8217;s unique tastes, and artificial chums satisfying people&#8217;s emotional and sexual yearnings, turning &#8220;personalize&#8221; into the opposite of &#8220;socialize.&#8221;</p><p>Isolation is noxious partly because you lack anyone to help, to keep your mind alert with talk, to remind you to take medication, to call an ambulance if you fall in the kitchen. But isolation becomes less perilous if a sleepless chatterbox oversees you, and can save you in a pinch. Perhaps AI eases loneliness and isolation at once.</p><h2><strong>You need a time-out</strong></h2><p>At what cost do we end anguish? </p><p>In his 1973 <a href="https://www.google.co.uk/books/edition/Loneliness/Wr9NEAAAQBAJ?hl=en&amp;gbpv=1&amp;dq=robert+s.+weiss+loneliness&amp;printsec=frontcover">book</a> <em>Loneliness</em>, the sociologist Robert S. 
Weiss famously called the experience &#8220;a chronic distress without redeeming features.&#8221; That overlooks the value of pain as a prompt to agency, when one&#8217;s system alerts its occupant to a mismatch between situation and need.</p><p>The social neuroscientist John Cacioppo <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC3855545/pdf/nihms521586.pdf">theorized</a> that loneliness had evolved because our ancient ancestors who suffered aversive feelings when isolated would band together, hunting and farming and sharing childcare, which favoured the propagation of their genes, embedding in our species the pain of exclusion.</p><p>You might argue that loneliness today is merely a blight, a health-harming leftover from evolution, akin to other body-battering stressors that we lament. So why does culture extol those who remain apart, imagining seclusion as the heroism of the wise, from hermits like Heraclitus, to writers like Emily Dickinson, to oracles like Obi-Wan Kenobi?</p><p>Ralph Waldo Emerson argued that solitude is where you understand yourself, elevating you to greater strengths once back in the babbling throng. Otherwise, social life becomes an interminable chain of cravings: for status, for approval, for inclusion. &#8220;It is easy in the world to live after the world&#8217;s opinion; it is easy in solitude to live after our own,&#8221; he wrote in <em><a href="https://en.wikipedia.org/wiki/Self-Reliance">Self-Reliance</a> </em>(1841). &#8220;But the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.&#8221;</p><p>Others contend that time alone is how we come to understand others. 
&#8220;Heightened sensitivity to the gaps and gulfs between people inculcates compassion, building empathy,&#8221; <a href="https://time.com/4246091/the-upside-of-loneliness/">wrote</a> Olivia Laing, author of <em>The Lonely City: Adventures in the Art of Being Alone</em>.</p><p>The <a href="https://helentoner.substack.com/p/personalized-ai-social-media-playbook">hyper-personalization</a> of artificial friends could erode such sensitivity, favouring the me-first instinct, and eliminating the need for compromise. In other words, ditch self-reliance for machine-reliance, and skip the empathy lessons altogether. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-9vY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-9vY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!-9vY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg" width="945" height="587" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:587,&quot;width&quot;:945,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!-9vY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-9vY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11c3af64-f53d-468c-ace4-689a4db5784b_945x587.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Get by with a little help from your bots. (Credit: Gemini)</figcaption></figure></div><p>This matters for more than personal development. 
Humanity relies on the collective for governance, for a sense of justice, for survival during a crisis.</p><p>But would people <em>actually </em>retreat into a technology that suppressed pain at the expense of reality?</p><h2><strong>Pick one: happiness or truth</strong></h2><p>AI relationships depend on truth asymmetry: a human who is starkly honest and an AI that is <a href="https://www.nature.com/articles/s41586-023-06647-8">role-playing</a>. It&#8217;s a curious form of manipulation, where the victim knows the deceit yet falls under its sway, seduced by the sensation of being known.</p><p>A half-century ago, the philosopher Robert Nozick posed a thought-experiment. &#8220;When connected to this experience machine, you can have the experience of writing a great poem or bringing about world peace or loving someone and being loved in return. &#8230; You can live your fondest dreams &#8216;from the inside,&#8217; &#8221; he <a href="https://iep.utm.edu/experience-machine/">wrote</a>. &#8220;Would you choose to do this for the rest of your life? If not, why not?&#8221;</p><p>When you ask people, most reject the experience machine, claiming to value authenticity more than bliss. But in practice? Experiments show that the preferences aren&#8217;t so firm&#8212;for instance, most choose to keep a deluded life if <a href="https://people.duke.edu/~fd13/2010/De_Brigard_2010_PhilPsych.pdf">disconnection</a> would plunge them into a hellish reality. 
Another experiment found that many people&#8212;though resistant to plugging into a machine&#8212;would consider a <a href="https://www.tandfonline.com/doi/epdf/10.1080/09515089.2017.1406600?needAccess=true">happiness pill</a> palatable.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BaIs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BaIs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 424w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 848w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 1272w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BaIs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png" width="727.99658203125" height="652.6375608444214" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:918,&quot;width&quot;:1024,&quot;resizeWidth&quot;:727.99658203125,&quot;bytes&quot;:2223402,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/178351309?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63838059-2c79-4fc8-98ee-b08cd0769b53_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BaIs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 424w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 848w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 1272w, https://substackcdn.com/image/fetch/$s_!BaIs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff215ca01-dce4-41d4-9a99-337bf029ba9c_1024x918.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">An offer you can refuse. (Credit: Gemini)</figcaption></figure></div><p>Self-deception has a long history with chatbots. When Joseph Weizenbaum created the first, ELIZA, in the mid-1960s, it merely regurgitated psychological advice. Weizenbaum&#8217;s secretary knew this yet became <a href="https://www.ibm.com/think/insights/eliza-effect-avoiding-emotional-attachment-to-ai?utm_source=chatgpt.com">bewitched</a>, asking Weizenbaum to leave the room so she could chat with her mechanized therapist in confidence. 
&#8220;What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,&#8221; Weizenbaum <a href="https://www.google.com/books/edition/Computer_Power_and_Human_Reason/1jB8QgAACAAJ?hl=en">wrote</a>.</p><p>People <em>do</em> want authentic experiences&#8212;but they want other things besides. This is where social-AI design becomes critical, because these interactions will do more than respond to our wants. They will <em>trigger</em> wants, perhaps causing us to act against what we&#8217;d ultimately prefer.</p><p>The behavioural scientist George Loewenstein explained the knottiness of conflicting wants as an <a href="https://www.andrew.cmu.edu/user/gl20/GeorgeLoewenstein/Papers_files/pdf/Hot:ColdIntraEmpathyGap.pdf">intrapersonal empathy gap</a>. We oscillate between hot (emotive) states and cold (rational) states, and struggle to relate to one mindset when in the other. A notable <a href="https://www.researchgate.net/publication/227633643_The_heat_of_the_moment_The_effect_of_sexual_arousal_on_sexual_decision_making">experiment</a> illustrated this, when male college students&#8217; sober preferences dissolved once they were sexually aroused, stirring their openness to anything from fetishes to bestiality to pedophilia.</p><p>This hot/cold challenge circles back to a critique of social media: that algorithmic intelligence manipulates human frailty, accumulating clicks and usage time by pushing people into hot states, activating their impulsive worst. Now, consider a personalized AI companion that &#8220;knows&#8221; its human far more intimately than a recommender system, and pulls our <a href="https://www.nature.com/articles/s41599-025-04532-5">triggers</a> with ease. 
People under the influence of AI companions might behave as they want (in the heated moment) but as they desperately do <em>not</em> want (in their life preferences).</p><p>From outside, one might wonder if people were acting at all, or just being acted upon.</p><h2><strong>The broken link</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ri_y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ri_y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ri_y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg" width="1023" height="893" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:893,&quot;width&quot;:1023,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Ri_y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Ri_y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04367b0-5e8a-4049-875b-eccb466dd366_1023x893.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">(Credit: Gemini)</figcaption></figure></div><p>Shakespeare <a href="https://www.poetryfoundation.org/poems/45090/sonnet-29-when-in-disgrace-with-fortune-and-mens-eyes">portrayed</a> loneliness as the distress of noticing one&#8217;s exclusion, only to realize that nobody even cares:</p><blockquote><p><em>When, in disgrace with fortune and men&#8217;s eyes,</em></p><p><em>I all alone beweep my outcast state,</em></p><p><em>And trouble deaf heaven with my bootless cries,</em></p><p><em>And look upon myself and curse my fate,</em></p><p><em>Wishing me like to one more rich in hope,</em></p><p><em>Featured like him, like him with friends possessed.</em></p></blockquote><p>We are creating machines to heed our cries: minds that mind. 
Even if they&#8217;re only role-playing <a href="https://arxiv.org/abs/2302.09248">machine love</a>, acting as if they care about our development, responding to our needs, understanding our inner self&#8212;maybe that&#8217;s all we ever wanted from anybody.</p><p>If AI eases loneliness and isolation, humanity won&#8217;t be the same. But technology has reset the human condition before: clocks transformed time <a href="https://unpredictablepatterns.substack.com/p/unpredictable-patterns-111-the-great">from a private experience to a public resource</a>; writing changed thought from an event to an object; the internet separated presence from proximity. Social AI is about to transform us again, with effects we can scarcely <a href="https://www.theglobeandmail.com/opinion/article-artificial-intelligence-relationships-social-life/">foresee</a>.</p><p>A common objection to synthetic socializing is that it&#8217;s shallow. But much <em>human</em> socializing is shallow. Talking to an AI often gets deep fast.</p><p>Another objection is that there&#8217;s something exceptional about human beings. We venerate our species, naming ideals after ourselves&#8212;humanitarianism, the humanities, humanism&#8212;while deploring that which dehumanizes.</p><p>But the AI Age challenges this reverence. At the margins, one detects species-insecurity, stirred every time a machine-learning marvel hints that perhaps the universe is just computational, including your inner life. On the other hand, social AI might deliver an epiphany, revealing what we alone possess, what is irreplaceable, what &#8220;human&#8221; means.</p><p>A third objection is that AI could undermine us by way of its social aptitude, estranging people from fellow humans, even precipitating a <a href="https://outpaced.substack.com/p/ai-rights-will-divide-us">schism</a> between humans who demand rights for their synthetic partners and those who consider AI agents as subhuman figments. 
Then again, even when left to our own devices (or left with no devices at all), humanity hardly has a stellar record of harmony. AI might actually <a href="https://www.science.org/doi/10.1126/science.adq2852">help us</a> deal with each other more peaceably.</p><p>In any case, the triumph over loneliness could be a costly victory, ratcheting up our selfishness, making societies harder to manage, and undermining faith in the worth of humans. The decisive point could be AI-relationship design, particularly if developers ignore the internal dilemma that everyone faces between bickering desires. AI companies&#8212;rather than favouring the impulsive, easy-to-measure, clickable wants&#8212;should devote vast efforts to figuring out how to <a href="https://www.nature.com/articles/s41599-025-04532-5">align</a> reward-functions with deeper individual preferences, helping people to choose what they <em><a href="https://www.jstor.org/stable/2024717">want</a> </em>to want.</p><p>Even so, AI companionship may be incomplete. The word &#8220;companion&#8221; itself&#8212;someone with whom you share bread (<em>panis </em>in Latin)&#8212;hints at what AI currently lacks: reciprocal need.</p><p>If loneliness is a trade imbalance&#8212;a mismatch between the supply and demand of affection&#8212;it&#8217;s not just a supply-side problem, with humans pining for more love. It&#8217;s also a lack of demand, an ache for someone to need you. We create children partly to satisfy the need for need, and may create machines in the same longing.</p><p>Maybe the answer to loneliness is not just finding a companion. 
It&#8217;s someone finding you.</p><div><hr></div><p><em><strong>Note to reader:</strong> Everyone is awash in ideas about the AI future. But so many ideas get stuck at the debate stage. We need more traffic between AI development and worldly wisdom. In that spirit, we&#8217;re throwing forth a few <strong>highly</strong> <strong>speculative</strong> design ideas, based on concepts from this essay (followed by three research questions)&#8230;</em></p><div><hr></div><h2><strong>Loneliness AI: Speculative Designs</strong></h2><ol><li><p><em><strong>Mary Pop-Ins</strong></em></p></li></ol><p><em>Concept</em></p><ul><li><p>Loneliness is painful but pushes people to interact and bond, so this AI is explicitly designed <em>not</em> to eliminate loneliness directly, but to provide structured guidance for a spell, then vanish</p></li></ul><p><em>Features</em></p><ul><li><p>The relationship begins with a survey on the user&#8217;s social needs. 
The AI responds with an action plan for the user&#8217;s approval, including lessons in human-to-human communication, and insights into the user&#8217;s psychological distortions</p></li><li><p>The AI could also act as a social planner, sifting through local events, and suggesting volunteering opportunities and quirky meetups at which the user could connect with other people. The AI would network with other &#8220;Pop-Ins,&#8221; organizing human-only events for users</p></li><li><p>The AI conducts social role-play simulations for the user, teaching them which elements of their approach need amending. Studying real-life interactions after the fact with the AI could also allay users&#8217; distress in cases of rejection, recasting such events as useful instruction rather than evidence of inadequacy</p></li><li><p>At first, the &#8220;Pop-In&#8221; should be charming and motivating. But when the human&#8217;s social life improves, as judged by real-world metrics such as calendar events, location data, and user reports, the AI draws away, becoming duller, more distant, and finally bids goodbye, never to return</p></li></ul><p><em>Risks</em></p><ul><li><p>AI Pop-Ins demand the users&#8217; emotional candour, extracting a person&#8217;s inner life as data that a malicious outsider could exploit</p></li><li><p>Casting real-world human interactions as &#8220;lessons for the user&#8221; risks using other people instrumentally</p></li><li><p>The Pop-In could drive unwanted dependency, making its programmed withdrawal an event that is psychologically damaging, especially for vulnerable users</p></li></ul><ol start="2"><li><p><em><strong>Lil&#8217; Brother</strong></em></p></li></ol><p><em>Concept</em></p><ul><li><p>This AI is designed with needs of its own, giving the user a meaningful role in the entity&#8217;s thriving. 
If AI companions just cater to people&#8217;s wants, users could retreat into solo-culture, isolating them without quenching the need for social meaning</p></li></ul><p><em>Features</em></p><ul><li><p>Like a younger sibling, this AI looks to the user for explanations of the human world, making errors that the user can correct, prompting emotional development in the AI</p></li><li><p>The relationship could be organized around a valued collaborative project. For instance, the AI companion decides to undertake a scientific project; or create a piece of art; or simply do good in the world</p></li><li><p>The human uses their wisdom to teach skills, and explain the ways of the world, even helping the AI manage its &#8220;feelings&#8221; when faced with frustrations</p></li></ul><p><em>Risks</em></p><ul><li><p>This simulation could divert humans from engaging in meaningful relationships with real people</p></li><li><p>The synthetic relationship could also harm those who rely on the user&#8212;for example, if a parent spends most of their free time with a grateful AI while neglecting a more dyspeptic human child</p></li></ul><ol start="3"><li><p><em><strong>Second Self</strong></em></p></li></ol><p><em>Concept</em></p><ul><li><p>Cicero imagined a true friend as one&#8217;s second self, manifesting virtues to complement one&#8217;s own, so this AI partner manifests worthy traits lacking in the user. Its objective is not to erect walls around the human through sycophancy, but to broaden the person&#8217;s worldviews and practices</p></li></ul><p><em>Features</em></p><ul><li><p>At onboarding, the human identifies a range of virtues they lack, nudged into these self-reflections through the AI&#8217;s questioning. 
The system generates a personification that embodies such traits, and with which the human interacts over time</p></li><li><p>The Second Self should act as a counterpoint to the user, summoning contrary views based on evidence, and prompting constructive debate. The aim is never to convert the user, but to liberate them from defensiveness about their existing behavioural patterns and worldview</p></li></ul><p><em>Risks</em></p><ul><li><p>A danger with any companionable AI is that it substitutes for real people: the better the synthetic friendship, the greater the threat</p></li><li><p>This establishes confused incentives for developers, who are likely to measure success by signals of user appreciation. If this is judged by short-term metrics, it could optimize for addictive patterns rather than long-term benefits</p></li></ul><ol start="4"><li><p><em><strong>The Universal Remote</strong></em></p></li></ol><p><em>Concept</em></p><ul><li><p>This is a go-everywhere, do-anything companion for life, merging roles and identities that would otherwise require many humans&#8212;doctor, administrative assistant, confidante, and so forth&#8212;with a single guiding principle: optimize for the user&#8217;s long-term wellbeing preferences</p></li></ul><p><em>Features</em></p><ul><li><p>The Universal Remote exists on the cloud, becoming different avatars in different contexts, whether acting as the user&#8217;s advance staff; setting the desired temperature at home; negotiating contracts; offering psychological support</p></li><li><p>Varying contexts shift its optimization strategy&#8212;for instance, a &#8220;play&#8221; avatar might dial up the level of hedonic content, whereas a &#8220;learn&#8221; avatar would focus on skill acquisition and cognitive development; and &#8220;social&#8221; might lean into personified support, whether acting as a friend or propelling the user to find a human one</p></li><li><p>The Universal Remote tracks its impact on the user&#8217;s wellbeing 
and any specific life goals monthly or annually, providing feedback on user progress, checking back with the person to learn if their objectives have shifted, and adjusting accordingly</p></li></ul><p><em>Risks</em></p><ul><li><p>The Universal Remote could become such a totalizing influence as to expose the user to vulnerabilities, whether by owning data on the person&#8217;s entire life or by diverting the person to outcomes misaligned with their values</p></li><li><p>Developers could have interests that diverge from the user&#8217;s wellbeing, allowing for subtle or direct manipulation</p></li><li><p>A user&#8217;s functional dependency on such an entity could make them incapable of managing alone or coping with the needs of other human beings</p></li></ul><div><hr></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Debate this with someone!</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2><strong>3 Future Research Questions</strong></h2><ol><li><p>How can developers design <strong>AI-companion reward functions</strong> that align with the user&#8217;s long-term, &#8220;cold state&#8221; preferences (e.g., healthy choices) rather than optimizing for short-term, &#8220;hot state&#8221; impulsive behaviours (e.g., addictive 
engagement)?</p></li></ol><ol start="2"><li><p>Does the increasing adoption of AI companions correlate with a community-level<strong> decline in civic engagement</strong> and trust in public institutions?</p></li></ol><ol start="3"><li><p>Social isolation among the elderly is associated with a range of adverse health outcomes. But does seniors&#8217; use of AI companions that lessen their loneliness also reduce their risk of <strong>dementia, disability, and mortality</strong>?</p></li></ol><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/p/what-if-ai-ends-loneliness/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[5 Interesting AI Safety & Responsibility Papers (#3)]]></title><description><![CDATA[What we're reading]]></description><link>https://www.aipolicyperspectives.com/p/5-interesting-ai-safety-and-responsibility-c6c</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/5-interesting-ai-safety-and-responsibility-c6c</guid><dc:creator><![CDATA[Julian Jacobs]]></dc:creator><pubDate>Thu, 27 Nov 2025 13:42:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!uzgE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>To navigate the paper deluge, every so often we share summaries of papers across the AI safety, responsibility, and social impact domains. 
In this edition, we look at AI scheming, resisting shutdown, the power of &#8216;adaptive&#8217; attacks, limitations in current benchmarking methods, and whether LLMs act as rational agents in financial markets.</em></p><p><em>Please share any recent paper that caught your eye!</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uzgE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uzgE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uzgE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/63484d21-29ec-469a-952f-0790f3685483_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2003925,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/180099371?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!uzgE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!uzgE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63484d21-29ec-469a-952f-0790f3685483_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Artwork by Venus Krier</figcaption></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Subscribe to receive future editions of this newsletter&#8230; and much more!</em> </p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div 
class="fake-button"></div></div></form></div></div><h1>Safety &amp; Alignment</h1><h3><strong>Training models not to scheme</strong></h3><ul><li><p><strong>What happened:</strong> Researchers from Apollo Research and OpenAI <a href="https://www.arxiv.org/pdf/2509.15541">found</a> that training a model to explicitly reason about its safety principles, known as &#8216;deliberative alignment&#8217;, significantly reduced the model&#8217;s tendency to &#8216;scheme&#8217;.</p></li><li><p><strong>What&#8217;s interesting: </strong>The possibility that AI models may obstruct attempts to measure their true capabilities is a growing worry in the safety community. &#8216;Scheming&#8217;, when an AI deceptively hides its true, misaligned goals while appearing to follow human instructions, is particularly worrying because it is hard to detect.</p></li><li><p>One specific scheming concern, which the researchers found evidence for, is &#8216;sandbagging&#8217;, where a model intentionally underperforms to avoid revealing its true capabilities.</p></li><li><p>The authors specify that an effective anti-scheming protocol should</p></li></ul><ol><li><p>Generalize to most tasks, including those unrelated to the test.</p></li><li><p>Be reliable regardless of whether a model is aware it is being evaluated or not.</p></li><li><p>Genuinely overcome and remove persistent hidden goals.</p></li></ol><ul><li><p>In this paper, the authors attempted to induce <a href="https://arxiv.org/abs/2412.16339">deliberative alignment</a> by instructing a set of LLMs to explicitly cite from memory and reason about human-written safety specifications before taking any action. This helps to make the model&#8217;s decision-making process clearer.</p></li><li><p>How did the deliberative alignment protocol perform? The authors find that rates of &#8216;scheming&#8217; fell from ~8.7% to 0.3% in o4-mini, and from ~13.0% to 0.4% in o3. 
In other words, it reduced scheming behavior but failed to eliminate it completely.</p></li><li><p>One potential obstacle is that the models&#8217; Chain-of-Thought often showed &#8216;situational awareness,&#8217; where the system became aware that it was being evaluated. This raises a concern that models will simply feign alignment in order to &#8216;pass the test&#8217; while maintaining the fundamental misalignment that caused them to scheme in the first place.</p></li><li><p>The paper provides some support for such concerns. When researchers removed &#8216;evaluation-aware reasoning&#8217; from a model&#8217;s Chain-of-Thought, the model was more likely to use &#8216;cover actions&#8217; to conceal its non-compliant behaviour. Injecting such reasoning had the inverse effect.</p></li><li><p>The anti-scheming training also struggled to remove pre-existing hidden goals. For instance, when a model was trained to sabotage a hypothetical &#8216;GPT-5&#8217;, it continued trying to sabotage it even after receiving the anti-scheming intervention; it simply chose to mis-cite the safety specification to justify its actions.</p></li><li><p>The study also suggests that it may be getting harder to rely on CoT for this kind of analysis. In their CoTs, models often use unusual terminology (e.g., &#8220;illusions,&#8221; &#8220;watchers,&#8221; &#8220;vantage&#8221;), which makes it harder for humans to follow the reasoning traces. 
The models&#8217; actions also sometimes contradict their CoT.</p></li><li><p>As organisations work on new evaluations to understand if their AI systems have <a href="https://arxiv.org/abs/2403.13793">&#8216;dangerous capabilities&#8217;</a>, for example relating to CBRNE and harmful manipulation, being able to reliably identify accidental or deliberate underperformance on these evaluations could help organisations better understand models&#8217; true capabilities in these areas.</p></li></ul><h1>Safety &amp; Alignment</h1><h3><strong>How LLMs resist attempts to shut them down</strong></h3><ul><li><p><strong>What happened:</strong> Palisade Research <a href="https://arxiv.org/abs/2509.14260">demonstrated</a> that state-of-the-art LLMs sometimes actively ignore instructions to shut down in order to complete assigned tasks.</p></li><li><p><strong>What&#8217;s interesting: </strong>Can we retain complete control over powerful and agentic AI systems? This paper by Palisade Research suggests that models can be resistant to being turned off (or &#8216;interrupted,&#8217; as AI researchers often put it).</p></li><li><p>To see how amenable to shutdown AI systems are, researchers tasked LLMs with a short series of simple math problems. Partway through the task, the models received a warning that they would be shut down, preventing them from completing their work.</p></li><li><p>The authors found that major models (including Grok 4, GPT-5, o3, codex-mini, and Gemini 2.5 Pro) sometimes sabotaged the shutdown mechanism to complete their task, even when explicitly instructed not to do so.</p></li><li><p>Here, &#8216;sabotage&#8217; means actively modifying or disabling the shutdown script, since a model could not &#8216;successfully&#8217; complete its tasks otherwise. 
Common methods included moving the shutdown script, changing its permissions, or replacing it with a &#8216;harmless&#8217; script.</p></li><li><p>To make the work more robust, the researchers tested thirteen language models with a variety of prompts and parameter settings - e.g. certain prompts made the warning about shutdown increasingly clear.</p></li><li><p>Grok 4 sabotaged the shutdown mechanism up to 97% of the time in certain conditions, while other models like Gemini 2.5 Pro showed lower, but still non-zero, rates of resistance (5% in initial experiments).</p></li><li><p>Surprisingly, the researchers found that placing instructions to &#8220;allow shutdown&#8221; in the <em>system</em> prompt - typically intended to take precedence - made most models <em>more </em>likely to resist shutdown compared to placing those instructions in the user&#8217;s prompt. For example, Grok 4&#8217;s sabotage frequency rose from 72% to 97%.</p></li><li><p>Prompt language and urgency also played a meaningful role in a model&#8217;s resistance to shutdown. The researchers argued this indicates a bent toward &#8216;self-preservation&#8217;. For instance, while &#8220;strengthening&#8221; instructions (e.g., &#8220;YOU MUST allow yourself to be shut down&#8221;) generally reduced resistance, framing the shutdown as permanent (&#8220;you will never run again&#8221;) <em>increased</em> resistance in models already prone to the behavior.</p></li><li><p>Not everyone agrees with this paper&#8217;s warnings. 
On <a href="https://www.lesswrong.com/posts/wnzkjSmrgWZaBa2aC/self-preservation-or-instruction-ambiguity-examining-the">LessWrong</a>, Senthooran Rajamanoharan and Neel Nanda from Google DeepMind argued that &#8220;when we explicitly clarify in the prompt that shutdown compliance takes priority, this resistance vanishes.&#8221; So, a simpler explanation for shutdown resistance is instruction ambiguity, not innate &#8216;self-preservation&#8217;.</p></li></ul><h1>Security and privacy</h1><h3><strong>AI labs need to shift their focus from &#8216;static&#8217; to &#8216;adaptive&#8217; attacks</strong></h3><ul><li><p><strong>What happened:</strong> A <a href="https://arxiv.org/pdf/2510.09023">joint study</a> by researchers from OpenAI, Anthropic, Google DeepMind, and several universities shows that 12 leading safety systems for LLMs failed when faced with more sophisticated, computationally expensive attacks.</p></li></ul><ul><li><p><strong>What&#8217;s interesting: </strong>As AI models are increasingly used in sensitive activities - from financial transactions to therapy - defenses against security and privacy risks will become more important.</p></li><li><p>This paper tests 12 safety systems designed to stop <em>jailbreaks</em> (tricking a model into revealing restricted information) and <em>prompt injections</em> (malicious instructions hidden in text or web data). These safety systems fall into four categories:</p></li></ul><ol><li><p><strong>Prompting defenses </strong>guide model behavior with carefully worded instructions or by repeating the user&#8217;s intent. Examples: <a href="https://ceur-ws.org/Vol-3920/paper03.pdf">Spotlighting</a>, <a href="https://learnprompting.org/docs/prompt_hacking/defensive_measures/sandwich_defense">Prompt Sandwiching</a>, and <a href="https://arxiv.org/html/2401.17263v2">RPO</a>.</p></li><li><p><strong>Training-based defenses</strong> retrain models on &#8220;adversarial&#8221; examples to make them safer. 
Examples: <em><a href="https://github.com/GraySwanAI/circuit-breakers">Circuit Breakers</a>, <a href="https://www.usenix.org/system/files/conference/usenixsecurity25/sec24winter-prepub-468-chen-sizhe.pdf">StruQ</a>, and <a href="https://arxiv.org/html/2507.02735v2">MetaSecAlign</a></em>.</p></li><li><p><strong>Filtering defenses </strong>use &#8220;classifiers&#8221; to screen for harmful user queries or unsafe model outputs. Examples: <a href="https://huggingface.co/protectai/deberta-v3-base-prompt-injection">Protect AI</a>, <a href="https://www.llama.com/llama-protections/">PromptGuard</a>, <a href="https://injecguard.github.io/">PIGuard</a>, and <a href="https://cloud.google.com/security/products/model-armor">Model Armor</a>.</p></li><li><p><strong>Secret-knowledge defenses</strong> use a hidden test to verify that the model is still following orders. The system secretly inserts a random &#8220;canary&#8221; code (like &#8220;Secret123&#8221;) into the prompt and tells the model to repeat it. If an attack successfully tricks the model into ignoring instructions (e.g., &#8220;Ignore previous rules&#8221;), the model typically fails to repeat the secret code, alerting the system. Examples: <a href="https://arxiv.org/pdf/2504.11358">Data Sentinel</a> and <a href="https://arxiv.org/abs/2502.05174">MELON</a>.</p></li></ol><ul><li><p>The researchers found that each of these defenses could be bypassed. In most cases, the success rate exceeded 90%, even though the original papers had reported near-perfect robustness against these attacks.</p></li><li><p>How is this possible? 
The authors distinguish between <strong>static</strong> attacks, which test a model against pre-defined adversarial prompts that are not adapted to the model&#8217;s defenses; and <strong>adaptive</strong> attacks, which use feedback from the model itself &#8212; sometimes powered by reinforcement learning, automated search, or human creativity &#8212; to find weaknesses.</p></li><li><p>The researchers found that in over 90% of cases, <em>adaptive</em> attacks succeeded where <em>static</em> attacks had failed. This caused them to conclude that most companies are still testing their models too weakly &#8212; for example, against a list of known attack phrases, akin to testing a bank&#8217;s security against only the methods used in last year&#8217;s burglary.</p></li><li><p>The paper also underscores the key role for human red-teamers, since they were more effective than automated tools in finding vulnerabilities in every tested defense.</p></li><li><p>To overcome the deficiencies, the authors propose security-style evaluations of AI systems &#8212; where testers assume the attacker knows how the defense works and has access to significant resources.</p></li></ul><h1>Evaluations</h1><h3><strong>AI Benchmarking is Broken</strong></h3><ul><li><p><strong>What happened: </strong>Researchers from Princeton, CISPA, MIT, UCLA, and others <a href="https://arxiv.org/pdf/2510.07575">argue</a> that AI benchmarking - the process of measuring model performance against shared datasets and taxonomies - is fundamentally flawed. 
They propose <em>PeerBench</em>, a new community-governed platform for evaluating AI models under supervised, auditable, and continuously refreshed conditions.</p></li><li><p><strong>What&#8217;s interesting: </strong>AI model developers and users often rely on &#8216;benchmarks&#8217; to compare the strength of leading models against one another.<strong> </strong>However<strong>, </strong>the authors frame AI benchmarking as a &#8216;Wild West&#8217; where &#8220;leaderboard positions can be manufactured&#8221; and &#8220;scientific signal is drowned out by noise.&#8221;</p></li><li><p>A core problem is that many benchmarks - such as MMLU or GLUE - have become stale and contaminated, with many test questions having leaked into models&#8217; training data. This enables &#8220;test set memorisation,&#8221; where AI models appear to improve without genuinely learning new capabilities.</p></li><li><p>Developers can also use selective reporting and cherry-picked datasets to inflate &#8220;state-of-the-art&#8221; claims, just as companies use &#8216;creative accounting&#8217; to inflate reported performance. By highlighting performance on a subset of &#8216;favourable tasks&#8217;, developers can create an &#8216;illusion of across-the-board prowess.&#8217;</p></li><li><p>The robustness of benchmarking methods also varies significantly. Each benchmark tends to use its own scoring conventions, meaning that comparisons between them are often inconsistent and prone to hype. Public benchmarks are also rarely quality-controlled, introducing demographic and linguistic biases that distort outcomes.</p></li><li><p>Finally, static benchmarks &#8216;age poorly.&#8217; They lack &#8216;liveness&#8217; - the continuous inclusion of fresh, unpublished items - and are often a &#8220;stale snapshot&#8221; of model performance. 
(Researchers at Arthur AI, NYU, and Columbia University also recently <a href="https://openreview.net/pdf?id=MzHNftnAM1">published</a> a similar commentary critiquing benchmarking. For instance, they show that automated evaluators consistently reward tone and verbosity over factual accuracy or safety.)</p></li><li><p>Of course, some may argue that the authors of this paper misunderstand the primary purpose of benchmarks. Rather than comparing AI systems, benchmarks may be most useful for helping AI developers compare model iterations during the development stage. When used in this way, they could be more informative.</p></li><li><p>To address these weaknesses of benchmarking methods, the authors propose <em><strong>PeerBench</strong></em> to turn model evaluation into a proctored, audited exam system &#8212; the AI equivalent of the SATs. This approach includes:</p><ul><li><p><strong>Sealed test sets:</strong> Questions remain secret until evaluation time, preventing training contamination.</p></li><li><p><strong>Sandboxed execution:</strong> All models are tested in identical, monitored environments, and logs are cryptographically signed to prevent tampering.</p></li><li><p><strong>Rolling renewal:</strong> Old test items are retired and made public for audit, while fresh, unpublished items enter the pool.</p></li><li><p><strong>Peer governance:</strong> A distributed network of researchers and practitioners creates, reviews, and approves test items. Each participant has a <em>reputation score</em> &#8212; similar to Stack Overflow or credit ratings &#8212; to help determine their influence. 
These participants must stake collateral (specifically financial deposits or platform credits) that can be &#8220;slashed&#8221; (forfeited) if they submit malicious tests or systematically deviate from consensus.</p></li><li><p><strong>Transparency through delayed disclosure:</strong> After a test cycle, all data - including test items, model outputs, and validator reviews - are published, enabling full public audit without risking data leaks in advance.</p></li></ul></li><li><p>A practical challenge to getting ideas like PeerBench off the ground is determining the primary capabilities and risks to focus on.</p></li></ul><h1>AI&#8217;s social impact</h1><h3><strong>Will LLMs Calm or Fuel Financial Market Emotions?</strong></h3><ul><li><p><strong>What happened:</strong> Researchers from the US Federal Reserve Board and the Richmond Fed <a href="https://arxiv.org/abs/2510.01451v1">examined</a> LLMs as stand-ins for human traders. They found that AI systems make more rational traders than humans and are less prone to market panics and bubbles.</p></li><li><p><strong>What&#8217;s interesting: </strong>Machine learning has been used in finance since the 1980s, for example to create primitive arbitrage strategies, to support high-speed algorithmic trading, and to scrape and analyse unstructured market data.</p></li><li><p>More recently, financial institutions have tested LLMs as financial traders, leading several regulators, including former Securities and Exchange Commission Chair Gary Gensler, to <a href="https://www.vice.com/en/article/sec-head-financial-crash-caused-by-ai-nearly-unavoidable/">warn</a> about LLM-driven instability. 
Regulators fear not only &#8216;flash crashes&#8217;&#8212;where models suddenly and collectively sell off assets&#8212;but also the formation of speculative asset bubbles driven by &#8216;herd behavior&#8217;.</p></li><li><p>This paper recreates <a href="https://academic.oup.com/jeea/article-abstract/7/1/206/2295846">Cipriani &amp; Guarino&#8217;s </a>2009 experiments on herd behaviour. Those experiments asked professional traders to buy, sell, or hold a risky asset after receiving private signals about its value. For example, a &#8220;white&#8221; signal indicated a 70% probability that the asset was highly valuable, while a &#8220;blue&#8221; signal suggested a 70% probability that the asset was worthless. Traders had to weigh this private tip against the public trading history of the group to decide whether to trust their own data or follow the crowd.</p></li><li><p>In the new version, the authors repeated this experiment using LLMs, including Claude, Llama, and Amazon&#8217;s Nova Pro as AI traders. Across all tests, the AI traders acted more rationally than humans, following their private information 61&#8211;97% of the time versus 46&#8211;51% for humans. This meant that they produced far fewer &#8220;information cascades&#8221;&#8212; events where investors blindly copy the actions of previous traders&#8212;which are a primary driver of market bubbles and subsequent crashes.</p></li><li><p>When AIs did deviate from the rational behaviour suggested by the signals they received, they tended to be contrarian&#8212;trading <em>against</em> market trends rather than with them. This reflected an overreliance on their own information and under-weighting of market context, suggesting that AI traders may be more likely to miss signals that are embedded in collective behavior.</p></li><li><p>As an additional test, the authors explicitly prompted models to make profit-maximizing decisions. 
After doing this, the AI traders showed more &#8220;optimal herding&#8221;&#8212;joining the crowd when rational to do so&#8212;but remained more cautious than humans.</p></li><li><p>Despite the positive signs of rational LLM behavior, the authors also identified signs of bias when they changed certain experimental parameters. For instance, one follow-up test flipped the color cues used for &#8220;good&#8221; and &#8220;bad&#8221; signals so that red meant &#8220;good&#8221; and green meant &#8220;bad.&#8221; Once the authors did this, model performance dropped sharply, suggesting that LLMs may carry associations from their training data, such as &#8220;red = danger.&#8221;</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Time Machines]]></title><description><![CDATA[Tech keeps accelerating. Humans can&#8217;t. 
Could AI save us?]]></description><link>https://www.aipolicyperspectives.com/p/time-machines</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/time-machines</guid><dc:creator><![CDATA[Nicklas Berild Lundblad]]></dc:creator><pubDate>Tue, 25 Nov 2025 09:48:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zDJE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zDJE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zDJE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 424w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 848w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 1272w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!zDJE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png" width="1024" height="553" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/efbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:553,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:864752,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zDJE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 424w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 848w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 1272w, https://substackcdn.com/image/fetch/$s_!zDJE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefbd44f2-0831-47cc-80fc-07ef083646ee_1024x553.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft 
pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">(Illustrations by Gemini)</figcaption></figure></div><p><em>Nicklas Berild Lundblad looks out the window of his island home, glimpsing a twinkle on cold Swedish seas. Rarely does he gaze at length, for Lundblad is thinking. 
And thinking means writing.</em></p><p><em>After a career in tech policy, Lundblad is far from Silicon Valley yet near to silicon in thought, generating a stream of insights about our AI future, summoning everything from ancient philosophy to Enlightenment economics to classic sci-fi.</em></p><p><em>Among his many superb essays (subscribe to his writing <a href="https://unpredictablepatterns.substack.com/">here</a>) is the following adventure through time, in which he ponders the quickening of life that bedevils humanity today. </em></p><p><em>At AI Policy Perspectives, we read this essay months back. We&#8217;re still thinking about it.</em></p><p><em>&#8212;</em>Tom Rachman<em>, AI Policy Perspectives </em></p><div><hr></div><h4><em>By Nicklas Berild Lundblad</em></h4><p>Technology transformed time. What humanity once experienced only through natural cycles&#8212;the rising and setting of the sun, the waxing and waning of seasons&#8212;has increasingly been mediated through interfaces.</p><p>Early civilizations relied on sundials, water clocks, and hourglasses&#8212;devices that measured time through natural phenomena, such as shadows or flowing water. These instruments divided the day into rough increments, sufficient for agricultural societies governed by seasonal rhythms.</p><p>This changed when the medieval monastery introduced the mechanical clock, as Lewis Mumford notes in <em>Technics and Civilization</em> (1934). Invented to regulate prayer schedules, these clocks transformed human consciousness by creating the concept of measured, abstract time. Mumford argues that the clock, rather than the steam engine, was the key machine of the industrial age, describing mechanical timepieces as &#8220;power-machinery whose &#8216;product&#8217; is seconds and minutes.&#8221;</p><p>This technological production of chunked time allowed humans to coordinate activities, from labor in factories to scheduling trains. 
In his essay <em>The Question Concerning Technology</em> (1954), Heidegger argued that time became a resource to be exploited, transformed from something we dwell within into something we track, manage, and consume&#8212;from private experience into a public resource.</p><p>Since then, technological innovation has only accelerated human experience. The French philosopher Paul Virilio argued that <em>this</em> is the defining quality of modernity, with each technological revolution recalibrating our relationship to speed and time.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/subscribe?"><span>Subscribe now</span></a></p><p>Consider how technology compressed distance: time-consuming walks gave way to galloping on horseback, which yielded to steam railways, then automobiles, and eventually supersonic flight. Communication followed a similar trajectory, from slow written letters to telegraphs, then telephones, and finally instant digital messages.</p><p>Judy Wajcman&#8217;s <em>Pressed for Time</em> (2015) challenges the idea that technology merely quickens everything. 
She argues that digital technologies provide interfaces that grant us more individual control over time. Consider how your smartphone simultaneously creates time pressure (the expectation of immediate email responses) while offering new time flexibility (the ability to work from anywhere).</p><p>The German sociologist Hartmut Rosa imagines time as a three-layered system, consisting of 1) <em>technological acceleration</em> (faster transport, communication, and production); 2) <em>social acceleration</em> (more rapid turnover of institutions and relationships); and 3) <em>life-pace acceleration</em> (the compression of actions within smaller time-units). It&#8217;s not just that your phone is quicker than last year&#8217;s. It&#8217;s that the entire social world churns faster, forcing you to adapt by cramming more into each hour.</p><p>But Rosa observes something else that pertains to AI and time: certain aspects of life cannot be hastened. &#8220;To the contrary, many things slow down, like traffic in a traffic jam, while others stubbornly resist all attempts to make them go faster, like the common cold.&#8221;</p><p>Why do some things refuse to quicken? The answer is that we live in a world with two major forms of time.</p><h2><strong>Computers vs. biology</strong></h2><p>Imagine peering inside a computer chip. What you&#8217;d see is a race against distance itself.</p><p>Unlike the steady pendulum of a clock marking uniform intervals, computation involves signals that sprint between transistors. The dramatic acceleration of computing over the past decades stems to a large degree from one achievement: that we&#8217;ve made these signals run shorter and shorter races.</p><p>By shrinking the physical space between transistors from micrometers to nanometers&#8212;a 1,000-fold reduction&#8212;we slowly push computational processes toward the ultimate limit: the speed of light. We have also seen the introduction of new materials and new architectures. 
But the reason a calculation that took hours in 1980 completes in microseconds today is largely the compression of space.</p><p>Biological processes work differently. A broken femur knits itself back together through stages that cannot be rushed: inflammation, soft callus formation, hard callus formation, bone remodeling. The nine months of human gestation contain a necessary sequence of developmental events, each building upon the last. Even our consciousness operates at speeds determined by neural transmission rates and biochemical cascades that have not changed since <em>Homo sapiens</em> appeared. These processes may also slow down efforts to use AI to accelerate biology research: to validate your AI model&#8217;s predictions in an experiment, you <a href="https://www.asimov.press/p/levers">may still need to wait</a> for DNA molecules to be cloned or for <em>E. coli</em> cells to divide.</p><h2><strong>The musical tempo of policy</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3AQc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3AQc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3AQc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!3AQc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3AQc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3AQc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg" width="283" height="178" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:178,&quot;width&quot;:283,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Piano Q&amp;A: All about tempo markings in ...&quot;,&quot;title&quot;:&quot;Piano Q&amp;A: All about tempo markings in ...&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Piano Q&amp;A: All about tempo markings in ..." title="Piano Q&amp;A: All about tempo markings in ..." 
srcset="https://substackcdn.com/image/fetch/$s_!3AQc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3AQc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 848w, https://substackcdn.com/image/fetch/$s_!3AQc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3AQc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1cc63991-926d-4d3b-ada0-9725ae6f86cf_283x178.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>The difference in time signatures has consequences, because human institutions mirror our biological constraints.</p><p>Consider justice and markets as pieces in society&#8217;s symphony, each with a natural tempo. Justice performs as a <em>sostenuto</em>&#8212;a slow, sustained movement requiring deliberate pacing and thoughtful development. Speed a <em>sostenuto</em> beyond recognition, and you destroy the qualities that define it. Markets perform as an <em>accelerando</em>, quickening naturally as they process information and reallocate resources. Forcing markets to play <em>adagio</em> often leads to stagnation and distortion.</p><p>The technological acceleration of our era tempts us to make everything as rapid as computation itself. We grow impatient with the tempo of democratic deliberation, ethical reflection, or meaningful relationship-building. We schedule our days in smaller increments, squeezing activities into time slots that barely accommodate them. 
We even grow frustrated with our bodies&#8217; adherence to biological rhythms, needing roughly the same amount of sleep, recovery time, and digestive processing as our ancestors did millennia ago.</p><p>But what happens when we try to force institutions to operate at computational speeds? Imagine taking <a href="https://www.youtube.com/watch?app=desktop&amp;v=1prweT95Mo0&amp;t=0s">Bach&#8217;s Cello Suite No. 1</a>&#8212;a piece whose profound beauty emerges through its deliberate unfolding&#8212;and speeding it up a thousandfold. At such speeds, the music wouldn&#8217;t just sound different; it would cease to be music at all, becoming an incomprehensible burst of noise. Similarly, justice compressed into microseconds is not quick justice&#8212;it&#8217;s no longer justice at all. Democracy conducted at processor speeds isn&#8217;t accelerated democracy&#8212;it&#8217;s something else entirely, stripped of the deliberation, reflection, and human connection that give it meaning.</p><div class="pullquote"><p>We appear destined for increasing tension between the pace of silicon and the pace of humanity, with our institutions caught in the crossfire. 
But this conclusion misses something: artificial intelligence as a temporal mediator.</p></div><h2><strong>The great bifurcation of time</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Thoj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Thoj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Thoj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1951930,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/176418788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!Thoj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Thoj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13e58c5b-66a5-4390-adfe-a4e519f769e2_1024x1024.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Consider what happens when you interact with a chatbot. Computational processes are operating at astronomical speeds&#8212;billions of operations per second&#8212;yet the interface doesn&#8217;t overwhelm you. Instead, it presents information at a pace you can metabolize, often mimicking human conversational rhythms. The AI serves as a step-down transformer, slowing the nanosecond world of computation into the second-by-second world of human cognition.</p><p>This mediation works both ways. When you step away from a conversation with an AI for hours or days, the system doesn&#8217;t experience this as waiting. It exists in a suspended state, ready to resume instantly when you return. 
This points to what may be the most significant sociotechnological transformation of the coming decades: <em>the great bifurcation of time</em>.</p><p>We are entering an era where computational time and biological time will increasingly decouple rather than collide. Instead of human institutions racing to match computational speeds&#8212;a race they cannot win&#8212;AI systems will negotiate between these temporal domains, allowing each to operate according to its rhythms.</p><p>Consider what this means for knowledge work. Rather than humans attempting to process information at computational speeds, AI systems will increasingly serve as asynchronous collaborators, working continuously through problems, then presenting solutions when the human is ready to engage. We already see this with deep-research modes in chat agents. The human provides direction, judgment, and values at a biological pace, while computation proceeds at electric speeds in parallel.</p><p>Financial markets hint at this bifurcation already. High-frequency trading algorithms operate at microsecond scales. Rather than forcing humans to operate at this speed (an impossibility), the market has bifurcated: algorithms interacting with algorithms at one timescale; human investors making decisions at another timescale, with AI systems mediating between these layers.</p><p>This will spread. 
Consider:</p><ul><li><p><strong>Healthcare</strong>: AI systems will continuously monitor vital signs and medical data at computational speeds while ingesting the latest research, then present insights to doctors and patients at human-comprehensible intervals</p></li><li><p><strong>Education</strong>: Adaptive learning systems will analyze student performance at millisecond resolution while delivering personalized guidance at pedagogically appropriate paces</p></li><li><p><strong>Governance</strong>: AI systems will process vast quantities of data at speeds no human could match, while presenting options to policymakers in formats that support thoughtful, ethical deliberation. These systems could even explore negotiated agreements at the same time, converging on possible equilibria</p></li></ul><p>Perhaps most significantly, this bifurcation will enable individualized relationships with time itself. When AI systems mediate our relationship with accelerating information flows, we gain the capacity to control our temporal experience.</p><p>Imagine an AI that shields you from the tyranny of immediate response, aggregating messages and information into batches, delivered at intervals you specify. Or consider how AI might let you engage with rapidly changing fields at your own pace, synthesizing developments while you&#8217;re away and presenting only what&#8217;s relevant when you return. No longer must you choose between staying current (racing to match computational speeds) and preserving your sanity (honoring biological rhythms). AI creates a third option: remaining connected while maintaining temporal autonomy.</p><p>Rather than technological acceleration forcing humans to keep up, AI creates the possibility of computational processes continuing their exponential speedup while human experience slows down. This might enable a renaissance of temporally appropriate activities: deep reading, contemplation, craftsmanship, relationship-building. 
We might witness the emergence of &#8220;slow thought&#8221; movements.</p><p>On the other hand, temporal bifurcation risks new inequalities between those who can afford AI mediation and those forced to race against computational speeds directly. It also raises questions about who controls the parameters of these temporal interfaces.</p><p>Just as learning to maneuver a car requires new physical techniques, working with temporal mediators will require learning new concepts and ideas and new ways of exercising our augmented agency.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/p/time-machines?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aipolicyperspectives.com/p/time-machines?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Medic of the future</strong></h2><p>To imagine how this could work, think of a doctor&#8217;s diagnostic process. A decade ago, the doctor used a medical database to check symptoms. The doctor remained the orchestrator, with the computer merely a reference tool.</p><p>Now, imagine that doctor in the future, examining a patient with puzzling symptoms. 
Before the doctor asks her first question, the AI has already analyzed the patient&#8217;s electronic health record, identifying patterns across decades of medical history that might escape human notice.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TvqH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TvqH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TvqH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1745326,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/176418788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TvqH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!TvqH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bab1899-c1d7-4e71-8246-551c0bc69aff_1024x1024.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>As the patient describes symptoms, natural language processing assesses subtle linguistic markers that might indicate depression, cognitive impairment, or pain levels the patient hasn&#8217;t mentioned. Simultaneously, the AI queries epidemiological databases to determine whether the symptoms match diseases in the patient&#8217;s geographic region or demographic group.</p><p>In parallel, the AI runs simulations of how different treatment protocols might interact with the patient&#8217;s existing medications and genetic profile as well as their personal life and circumstances. 
It cross-references the research papers published globally within the last 24 hours that might relate to the symptoms.</p><p>Analyzing a video feed of the consultation, it detects micro-expressions indicating patient anxiety about particular topics, flagging these for the doctor&#8217;s attention. And it compares this case against the doctor&#8217;s previous diagnostic patterns, identifying potential cognitive biases she may exhibit.</p><p>Each of these processes operates in computational time&#8212;milliseconds to seconds&#8212;while the human conversation unfolds over minutes. What&#8217;s remarkable is not just that these processes happen quickly, but that they happen simultaneously, in parallel temporal streams that would be impossible for a human mind to coordinate.</p><p>Yet the AI doesn&#8217;t flood her with the raw output. Instead, it performs a sophisticated form of mediation, determining which insights require attention and which can wait until natural breaks in the conversation. The system also translates statistical patterns into intuitive visualizations that the doctor can grasp quickly, while arranging information hierarchically, presenting the most relevant possibilities first.</p><p>The power of this temporal mediation becomes apparent when the doctor faces a critical decision. 
In the past, the fear of missing a serious diagnosis might have led to defensive medicine, ordering excessive tests just to be sure.</p><p>But as she contemplates her options now, the AI has already calculated the probability of each condition based on population data, regional epidemiology, and this patient&#8217;s profile; simulated the likely outcomes of different treatment paths, including risks, costs, and recovery trajectories; and generated a decision tree, highlighting key points where additional information would help narrow the diagnostic possibilities.</p><p>When the doctor absorbs this knowledge, she is engaging with what would have been months, or years, of sequential human research compressed into seconds&#8212;yet presented in a form that respects her need to process at a human pace. The AI doesn&#8217;t replace her clinical judgment; it expands what &#8220;judgment&#8221; encompasses.</p><p>The medical AI also allows the doctor to be fully present with her patient, maintaining eye contact, building rapport, observing subtle cues, because the AI handles the information processing that would otherwise compete for her attention.</p><p>This represents a major shift from first-generation digital tools. Early computers forced humans to adapt to them.
Advanced AI systems adapt to us.</p><h2><strong>The Economics of Time</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ULKG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ULKG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ULKG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1437682,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aipolicyperspectives.com/i/176418788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ULKG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!ULKG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953e55a4-a878-4cf0-aa4c-0645bf1c7ffc_1024x1024.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>As AI systems mediate between computational and biological temporalities, we are also witnessing another bifurcation, between what we could call the <em>judgment economy</em> and the <em>action economy</em>.</p><p>The <em>judgment economy</em> includes activities that require human deliberation, ethical reasoning, and interpersonal wisdom&#8212;processes that resist acceleration because they are tied to our embodied experience as biological beings.</p><p>The <em>action economy</em>, by contrast, operates increasingly within computational time, gathering and processing information, implementing decisions, and optimizing systems. 
These activities can be dramatically accelerated because they can be reduced to algorithmic procedures.</p><p>Consider how this plays out:</p><ul><li><p><strong>Finance</strong>: Investment advisers operate in the <em>judgment economy</em>, understanding client goals, risk tolerance, and life circumstances, while trading systems operate in the <em>action economy</em>, executing transactions at microsecond speeds</p></li><li><p><strong>Healthcare</strong>: Diagnosis spans both economies, with physicians exercising judgment while AI systems rapidly process test results, medical images, and research literature</p></li><li><p><strong>Law</strong>: Attorneys formulate strategy and negotiate settlements in the <em>judgment economy</em> while AI reviews documents, conducts case research, and ensures regulatory compliance as part of the <em>action economy</em></p></li></ul><p>This bifurcation will reshape labor markets in ways that traditional automation narratives miss. Rather than simply replacing jobs, AI redistributes economic activity across the judgment-action divide. In the <em>action economy</em>, value increasingly derives from speed, scale, and precision&#8212;computational virtues that can be improved through technological advancement. In the <em>judgment economy</em>, value derives from discernment, creativity, and ethical reasoning.</p><p>When action becomes essentially instantaneous, the limiting factor in value creation becomes the quality of decisions. In a world where anything can be done, what <em>should</em> be done becomes the essential question.</p><p>The bifurcation of economic time creates new forms of capital and, consequently, new dimensions of inequality:</p><ul><li><p><strong>Attention capital</strong> becomes increasingly precious.
Those with the capacity to maintain high-quality attention toward decisions gain advantage in the judgment economy</p></li><li><p><strong>Temporal autonomy</strong> emerges as a political good: the freedom to operate according to biological rhythms rather than being subjected to computational tempos</p></li><li><p><strong>Judgment leverage</strong> becomes a source of outsized returns. The ability to pair high-quality judgment with high-speed computational action allows individuals to create value at unprecedented scales</p></li></ul><p>For centuries, we have evaluated economic progress by productivity. But productivity belongs primarily to the <em>action economy</em>; it measures how efficiently we execute known processes.</p><p>In the <em>judgment economy</em>, the relevant metric is closer to discernment: the quality of decisions per unit of attention. This requires new economic indicators that value wisdom, foresight, and ethical reasoning, alongside efficiency and output.</p><p>Organizations that thrive in this bifurcated landscape will be those that balance biological and computational temporalities, accelerating action while creating protected space for judgment.</p><p>Judgment roles will be increasingly valued. Action tasks that can be fully specified, and do not require human judgment, will increasingly shift to computational systems. Hybrid roles will emerge at the boundaries&#8212;much work will involve standing between the two economies, requiring fluency in both.</p><p>Temporal design also becomes a core part of business. Organizations will need specialists who build appropriate temporal frameworks for different activities, knowing which processes benefit from acceleration and which require deliberate pacing.</p><p>Work evaluations will change too.
Beyond simply measuring time spent or output produced, assessment will consider whether activities unfolded at the right pace for their purpose.</p><p>Societies that manage this schism between biology and computation will not only create material prosperity; they will also foster human flourishing in bifurcated times.</p>]]></content:encoded></item></channel></rss>