<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AI Policy Perspectives : Reviews ]]></title><description><![CDATA[Reviews and commentary of third party work]]></description><link>https://www.aipolicyperspectives.com/s/reviews</link><image><url>https://substackcdn.com/image/fetch/$s_!XGVU!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa24053ba-9bcb-4c21-a969-fe02656ce349_585x585.png</url><title>AI Policy Perspectives : Reviews </title><link>https://www.aipolicyperspectives.com/s/reviews</link></image><generator>Substack</generator><lastBuildDate>Wed, 29 Apr 2026 18:19:47 GMT</lastBuildDate><atom:link href="https://www.aipolicyperspectives.com/feed" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><webMaster><![CDATA[aipolicyperspectives@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aipolicyperspectives@substack.com]]></itunes:email><itunes:name><![CDATA[AI Policy Perspectives]]></itunes:name></itunes:owner><itunes:author><![CDATA[AI Policy Perspectives]]></itunes:author><googleplay:owner><![CDATA[aipolicyperspectives@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aipolicyperspectives@substack.com]]></googleplay:email><googleplay:author><![CDATA[AI Policy Perspectives]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Stafford Beer and AI as Variety Engineering]]></title><description><![CDATA[Thoughts on The Unaccountability Machine by Dan Davies]]></description><link>https://www.aipolicyperspectives.com/p/stafford-beer-and-ai-as-variety-engineering</link><guid 
isPermaLink="false">https://www.aipolicyperspectives.com/p/stafford-beer-and-ai-as-variety-engineering</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Wed, 23 Oct 2024 07:29:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_aTk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This review is written by <a href="https://x.com/nickswan73">Nick Swanson</a> from Google DeepMind&#8217;s public policy team. Subscribe for more essays, policy notes, and reviews and leave a comment below or get in touch with us at <a href="mailto:aipolicyperspectives@google.com">aipolicyperspectives@google.com</a> to tell us what you think.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_aTk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_aTk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png 424w, https://substackcdn.com/image/fetch/$s_!_aTk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png 848w, https://substackcdn.com/image/fetch/$s_!_aTk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png 1272w, 
https://substackcdn.com/image/fetch/$s_!_aTk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_aTk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_aTk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png 424w, https://substackcdn.com/image/fetch/$s_!_aTk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png 848w, https://substackcdn.com/image/fetch/$s_!_aTk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png 1272w, 
https://substackcdn.com/image/fetch/$s_!_aTk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a4fda98-cc24-47af-9bd9-e908d0989332_1600x1600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Midjourney via Seb Krier</figcaption></figure></div><p>In his book <em>The Unaccountability Machine</em>, Dan Davies uses Stafford Beer&#8217;s &#8216;management cybernetics&#8217; as a framework for understanding the complexities and &#8220;poly-crises&#8221; of the last two decades.
Davies argues that standardised and process-driven systems have emerged to manage the vast complexity we live with, but in turn they have had the effect of distancing decision-makers from the impact of their decisions. These &#8216;accountability sinks&#8217; explain everything from why we have call centres that literally can&#8217;t help you, to austerity policies based on blanket debt-to-GDP ratio rules.</p><p>At the core of management cybernetics is the aphorism that &#8220;the purpose of a system is what it does&#8221; - and the implied second clause &#8220;not what it says it does&#8221;. This is an incredibly clarifying mental model of the world. The reason conversations with call centres (or more familiarly in the UK, GP receptionists) often come up against a dead end where your issue cannot be resolved is usually not because you haven&#8217;t tried hard enough, or they haven&#8217;t understood and routed your inquiry correctly. There simply aren&#8217;t enough appointments available at the right time &#8211; you are experiencing a &#8216;problem without an owner&#8217;.
The purpose of a system is what it does.</p><p>When you internalise this worldview, you begin to see examples of it all around you. Is the purpose of the public science funding system to facilitate the discovery of groundbreaking new ideas and create entire new research paradigms (<a href="https://www.sciencedirect.com/science/article/abs/pii/S0165176520304067">which</a> has been in <a href="https://www.nature.com/articles/d41586-022-04577-5">decline</a> for some time), or is it doing something else? Are arduous grant application processes, and <a href="https://medium.com/@coalfacer/the-academic-research-funding-system-as-it-is-now-5f49ab614629">metrics</a> based on citation quantity rather than the quality or novelty of research, <em>really</em> the best way to allocate scarce resources to advance science?</p><h3><strong>Combinatorial complexity</strong></h3><p>When a system contains two parts &#8211; according to cybernetics &#8211; the feedback between both agents in the system gives complete (and usable) information about the system. But when the number of agents rises, more and more circuits between nodes exist (with <em>n</em> agents there are already n(n-1)/2 pairwise connections, before counting any feedback loops), and understanding combinatorial outcomes becomes impossible &#8211; knowing the properties of individual connections no longer gives you a proper understanding of the whole. Complex systems thus have to be understood as a whole, or they cannot be understood at all, and knowing <em>part</em> of a complex system in depth can be riskier than knowing nothing. Davies argues it was better to be totally ignorant of cryptocurrency than to lump on because a graph was trending upwards the last time you looked at it.
Failures in military intelligence are often a result of knowing too much about one part of a complex situation.</p><p>The result of this difficulty in managing complexity at a corporate or policy level is the emergence of accountability sinks &#8211; the reduction of decision-making to a rule that is often publicly stated, perfectly defensible and applied through a completely transparent process. But these rules break feedback links in systems, create problems with no owners, and are unable to deal with edge cases. Davies tells the <a href="http://news.bbc.co.uk/1/hi/world/europe/320721.stm">story</a> of 440 live squirrels being &#8216;shredded&#8217; at Schiphol airport because their case was beyond the distribution of cases which could be &#8216;computed&#8217; by the paperwork intended to govern the movement of live animals into Holland.</p><p>Every &#8216;rule&#8217; is an implicit model of the world, but a model which inherently has to leave out a lot of information about the actual world. There is no intention, there is just a network of cause and effect &#8211; a system which makes an outcome inevitable.</p><p>But as infuriating as reductive processes can be, they are also an essential means to help us navigate complexity. Though no rule can possibly model the world perfectly, often the absence of such a rule is worse. Procurement processes which prevent innovation and risk-taking, or Civil Service competency interviews which seem designed to shut out &#8220;weirdos and misfits&#8221; with good ideas, are clearly suboptimal. But they are <a href="https://fs.blog/chestertons-fence/">intended</a> to offset the alternative &#8211; the patronage of middle managers or the financial interests of corrupt officials.
This is the trade-off we often live with when systems cannot compute complexity &#8211; we can have outcomes on the basis of reductive rules which cannot model every potential situation, or we can have outcomes on the basis of discretion and personal standing within a network.&nbsp;</p><h3><strong>AI as variety engineering</strong></h3><p>A core theoretical contribution of cybernetics is the Viable System Model, which centres on the concept of &#8216;requisite variety&#8217;. Requisite variety is &#8211; very simply put &#8211; the idea that <em>a system must be at least as complex </em>[varied]<em> as the thing it is trying to regulate</em>. The cockpit of a fighter jet is able to handle more complexity than the control panel of a railway train. If a jet only had go/stop, it would not get off the ground (or it would crash into the end of the runway).</p><p>AI is most useful to us when it functions as &#8216;variety engineering&#8217;, whether at the level of individuals, companies, states or societies &#8211; equipping them with the ability to regulate complexity.</p><p>A great example of this in practice is Google DeepMind&#8217;s <a href="https://arxiv.org/pdf/2406.07234">research</a> into the &#8216;optimal power flow&#8217; problem (OPF), which created an ML system for balancing supply and demand on an electrical grid. The grid is currently balanced by humans, essentially switching flows manually. Directing electrons from areas of supply (some sources of which produce too many electrons at a given moment, and some of which produce too few when the wind isn&#8217;t blowing) to innumerable sources of demand is a hard task! As a result, energy is wasted and surplus carbon emitted.
The OPF system - which demonstrated human-level functionality at superhuman speed - has the <em>requisite variety</em> necessary to balance a modern grid, reduce waste and reduce emissions.&nbsp;</p><p>If I were an energy minister with an ambitious plan to decarbonise the energy system by 2030, I would be asking why this isn&#8217;t already in beta testing on the Grid &#8211; implemented as a recommender system with human oversight at first, and then setting the performance benchmarks which should be met in order to get the grid running on this system autonomously. AI in public services needs to be read as something broader than just &#8220;LLMs doing what civil servants do&#8221;, which massively narrows the opportunity space. The same goes for <a href="https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/">GraphCast</a> and weather prediction.</p><p>Radiology is a well-worn topic in AI policy debates. Yes, clearly Hinton was wrong: there are still radiologists. But interpreting radiographs is hard even for experts, who still &#8211; <em>of course </em>&#8211; miss things on occasion, and the healthcare systems in which radiographers operate (and the demands placed on them) can make it even harder.
It isn&#8217;t obvious that humans operating in a demanding healthcare system have the requisite variety to manage the complexity of spotting and acting upon every scan taken, at the pace and volume which meets the public&#8217;s expectations - whereas ML systems might (though <a href="https://x.com/sebkrier/status/1817877099673203192">operationalising</a> and delivering on this is a whole other set of questions related to feedback and regulation within a system).</p><h3><strong>Avoiding Cybersyn v.2</strong></h3><p>Cybernetics itself had its peak moment of hubris in the 1970s, in the infamous &#8216;Cybersyn&#8217; system which was built at the height of the &#8220;<a href="https://en.wikipedia.org/wiki/Socialist_calculation_debate">socialist calculation debate</a>&#8221;. Stafford Beer, working with the Chilean Government of Salvador Allende, built a system in which experts would analyse economic data and attempt to allocate resources across the economy (or specifically, factories). Obviously it couldn&#8217;t work - the requisite variety required to run a developing country&#8217;s economy is not achievable with the computing we have now, let alone with what they had in the 1970s (and Cybersyn only had <a href="https://www.latimes.com/business/technology/story/2023-09-21/column-merchant-cybersyn-chile-tech-utopia-experiment">one</a> computer) &#8211; though the coup of Augusto Pinochet did <em>technically</em> cut short the experiment.</p><p>Mercifully, there is relatively little debate about today&#8217;s AI replacing the price signal in how we operate our economy. However, quite a lot of today&#8217;s discourse does sound somewhat premised on the idea that we can simply replace civil servants with LLMs.
There likely is a good amount that can be done better &#8211; much of internal government activity is repetitive, staff turnover is high and institutional memory strained, so an LLM with access to your organisation&#8217;s knowledge base, and an agent navigating form-filling (or writing briefings, proposals, presentations) on your behalf, seems sensible and appealing. But we should think about systems in a more holistic way, and about AI as something which can expand the capability, quality and service offering of the state.</p><h3><strong>Cybersyn, but for individuals and organisations</strong></h3><p>Government should not think of AI in the public sector solely in terms of the delivery of services, but also in terms of the ways that individuals can interact with those systems. I want to automate <em>my</em> interaction with the state - from booking GP appointments by just saying &#8220;book me a GP appointment&#8221; to my agent, to allowing the agent and the GP to subsequently change/rebook my appointment to manage demand/prioritisation at the practice &#8211; in line with boundaries I&#8217;ve set.</p><p>In the civil courts, two parties to a <a href="https://x.com/sebkrier/status/1799429340033085809">contract</a> could have their agents negotiate a mutually acceptable resolution to a dispute, which could consider factors more <em>varied</em> than what can be included in a written contract (assuming both parties have pre-agreed to it). A contract is a model of the world, but one which leaves out important facts about the parties to it, and what they would be willing to negotiate. This technology does not exist yet, but it likely will at some point - and it could empower us with our own individual systems of requisite variety to parse the complexity of modern life, and of the social systems we have had to create to imperfectly manage that complexity.
Ensuring this is possible should also be in scope of how Governments think about AI in public services.</p><p>A final thought which is important for any government when thinking about user-facing services driven by AI is what the &#8220;system&#8221; that people feel they are interacting with <em>is.</em> Davies is dismissive of Searle&#8217;s famous <a href="https://plato.stanford.edu/entries/chinese-room/">Chinese Room argument</a>, which was intended to disprove the Turing Test. In a cybernetic analysis, Searle has the wrong object of analysis &#8211; to the person outside of the room, <em>the translator and the AI are the same system</em>. It makes no sense to say the Chinese Room disproves the Turing Test, because it is answering a different question, one of <em>accountability</em>, not of <em>intelligence</em> (indeed it was never conceived of as such).&nbsp;</p><p>This argument can extend to the way people feel about their interactions with the state. A future AI model which books and optimises GP appointments, and the NHS, are to the user <em>the same thing</em> - a black box of a system I can&#8217;t possibly understand (and frankly shouldn&#8217;t have to), which I am taxed for and of which I have certain expectations. Done well, this will be a great thing &#8211; a seamless experience where we don&#8217;t waste our mental energies navigating someone else&#8217;s institutional framework, or suffering frequent human error, certainly sounds appealing.</p><p>However, it is vital that AI in public services does not end up creating a new, hyper-technical form of the accountability sink. It is not the doctor&#8217;s receptionist&#8217;s fault that the appointment booking system works the way it does &#8211; it works this way because there aren&#8217;t enough appointments and some have to be prioritised over others.
We should not assume that a fancy AI booking system, even one which is able to optimise prioritisation, will solve the complexity of (and demands on) a system like the NHS &#8211; however, over time it will allow us to rethink the tasks that comprise it at a more foundational level.</p><p>In cybernetic terms, when using AI in the public sphere we need a capable &#8220;system 3&#8221; - a layer within the wider system which can ask questions about whether functions of the system itself should be redesigned or reimagined. This is a new way of thinking and not typically what public organisations have done in the past &#8211; they have tended to optimise for a specific function based on a model of the world implicit in their design, rather than operate in a space where the problems they are solving for are yet to be discovered. If the purpose of a system is what it does, we should continually strive to ensure that its outcomes match our expectations, and rather than just throwing computing power into broken systems, we should seek to ensure that they themselves adapt and learn.&nbsp;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aipolicyperspectives.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Policy Perspectives!
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The return on the bicameral mind ]]></title><description><![CDATA[Presence and tulpamancers in the age of agents]]></description><link>https://www.aipolicyperspectives.com/p/the-return-on-the-bicameral-mind</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/the-return-on-the-bicameral-mind</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Thu, 18 Jul 2024 12:41:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6COx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6COx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6COx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png 424w, 
https://substackcdn.com/image/fetch/$s_!6COx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png 848w, https://substackcdn.com/image/fetch/$s_!6COx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png 1272w, https://substackcdn.com/image/fetch/$s_!6COx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6COx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1839181,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6COx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png 424w, 
https://substackcdn.com/image/fetch/$s_!6COx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png 848w, https://substackcdn.com/image/fetch/$s_!6COx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png 1272w, https://substackcdn.com/image/fetch/$s_!6COx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef24562-1a36-47cd-9ab9-4c0e9075b913_3432x1931.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Visualising AI by Google DeepMind</figcaption></figure></div><p><em>Written by Google DeepMind&#8217;s <a href="https://nicklasberildlundblad.com/">Nicklas Berild Lundblad</a>, the following is a review of two different books on the mind &#8211; and a few thoughts on how they can be combined to suggest an interesting idea. And that is what it is &#8211; an idea, rather than a view &#8211; so take it in that spirit. The books are Jaynes&#8217; work on the bicameral mind, and a recently published book on the nature of presence.</em>&nbsp;</p><h2>Revisiting Jaynes</h2><p>Julian Jaynes&#8217; book, <em><a href="https://en.wikipedia.org/wiki/The_Origin_of_Consciousness_in_the_Breakdown_of_the_Bicameral_Mind">The Origin of Consciousness in the Breakdown of the Bicameral Mind</a> (1976)</em>, presents an intriguing theory about the evolution of human consciousness. Jaynes suggests that ancient humans operated under a &#8220;bicameral mind&#8221; wherein the brain&#8217;s two hemispheres functioned independently, with one side generating commands in the form of auditory hallucinations, perceived as the voices of gods, and the other side following these commands. This bicameral mentality, Jaynes argues, was a mental state without self-awareness or introspection, guiding behavior through these hallucinatory directives.
He supports his theory by analyzing historical texts, archaeological findings, and studies of contemporary psychology, suggesting that this mental structure was prevalent until about 3,000 years ago.</p><p>The transition from the bicameral mind to modern consciousness, according to Jaynes, was driven by the increasing complexity of social structures and the necessity for more adaptable, introspective thought processes. This shift led to the development of self-awareness, inner dialogue, and metaphorical thinking. Jaynes contends that the collapse of the bicameral mind was not a sudden event but a gradual process influenced by cultural and environmental changes. His theory provides a novel perspective on the origins of human consciousness, proposing that what we now consider a natural mental state is a relatively recent development in the span of human history.</p><p>Jaynes&#8217; work is controversial, but fascinating. We now live in a time when it might be more relevant than ever, as we are developing a new kind of architectural bicameral mind: part human mind, part artificial agent. This new reality is already here in its most nascent form: the chatbot.
Furthermore, we see chatbots becoming increasingly anthropomorphised, and we interact with them in ways that resemble the way we interact with other intentional agents.&nbsp;</p><p>But here is the interesting thing: we still know that the chatbot is different, and we speak to it in a way that does not imply shame or self-awareness, even if we do anthropomorphise it - so it seems worthwhile to ask exactly what kind of anthropomorphisation we are engaging in.&nbsp;</p><p>If we believed that a chatbot was another human being, we would treat it differently - and we would hesitate to badger it, or to ask it questions that would embarrass us or expose us in other ways. While it is true that some people have started to say &#8220;thank you&#8221; and &#8220;please&#8221; to chatbots, they still resemble something other than another human being - and in some ways something much more intimate.&nbsp;</p><p>A chatbot is more like an imaginary friend than a real friend - a mental construct of sorts. And with it, then, we are returning to a new version of the bicameral mind: one in which we interact with a voice that is different from us, but still co-constructed by us.&nbsp;</p><h2>We are all tulpamancers now</h2><p>One of the key features of any interaction is the sense of presence. We immediately feel it if someone is not present in a discussion, and we often praise someone by saying that they have a great presence in the room &#8211; signaling that they are influencing the situation in a positive way. In fact, it is really hard to imagine any interaction without also imagining the presence within which that interaction plays out. It is hard to imagine a conversation without the backdrop of the presence of the other party in that conversation, for example.</p><p>In <em>Presence: The Strange Science and True Stories of The Unseen Other</em> (Manchester University Press 2023), psychologist Ben Alderson-Day explores this phenomenon in depth.
From the presence of voices in people who suffer from some version of schizophrenia to the recurring phenomenon of presences on distant expeditions into harsh landscapes - a third or fourth person walking along with them - the author explores how presence is perceived, and to some degree also<em> constructed</em>. One way to think about this is to say that presence is a bit like opening a window on your virtual desktop: it creates the frame and affordances for whatever you want to do next. The ability to construct and sense presence is absolutely essential for us if we want to communicate with each other, and it is ultimately a relational phenomenon.</p><p>Indeed, the sense of a presence in an empty space, on a lonely journey or in an empty house may well be an artefact of the mind&#8217;s default mode of existing in relationship to others. We do not have unique minds inside our heads &#8211; our minds are <em>relationships</em> between different people, and so we need that other presence in order to think, and in order to be able to really perceive the world. So the mind has the in-built ability to create a virtual presence where no real presence exists. One way to think about this is that we still are, in some way, bicameral minds, but the duality exists between individuals, rather than within them.&nbsp;</p><p>One of the most extreme examples of this is the artificially generated presence of the Tibetan <em>tulpa</em>. A tulpa is a presence that has been carefully built, infused with its own life and intentions and then set free from our own minds, effectively acting as another individual, but wholly designed by ourselves. We are all, to some degree, tulpamancers &#8211; we all know how to conjure a tulpa &#8211; since we all have the experience of imaginary friends.
These imaginary friends allow us to practice having a mind with another in a safe environment, and so work as a kind of beta testing of the young mind.&nbsp;</p><p>All of this takes an interesting turn with the emergence of agentic large language models, since we now have the ability to create something that is able to have a presence &#8211; and to interact with these new models as if they were intentional. An artificial <em>intelligence</em> is only possible if it also manages to create an artificial <em>presence</em>, and one of the astonishing things about large language models is that they have managed to do so almost without us noticing. The world is now full of other presences, slowly entering into different kinds of interactions with us. We are, in some sense, all tulpamancers again, building not imaginary friends, but something different, and perhaps deeper: a bicameral mind.&nbsp;</p><p>We have other examples of where this is happening - and one of the most palpable is the presence we experience with pets. I grew up with dogs. A dog projects presence in a home, and it seems clear that we have human/dog minds, at least if we are dog owners. If you live with a dog you can activate that particular mode of mind when you meet a dog, and it is often noticeable when people &#8220;are good with animals&#8221; or have a special rapport with different kinds of pets. This ability to mind-share in a joint presence is something humankind has honed over many, many generations of co-evolution. You could even argue that this ability is now a human trait, much like eye color or skin tone. There are those who completely lack this ability and those who have an uncanny connection with animals and manage to co-create minds with all kinds.&nbsp;</p><p>The key takeaway from this is that the ability to co-create a mind with another is an evolved capability, and something that takes a long time to work out. There are, in addition, clear mental strengths that need to be developed.
Interacting with a dog requires training and understanding the pre-conditions and parameters of the mind you are co-creating.&nbsp;</p><p>We can generalize this and note that our minds are really a number of different minds created in different presences, all connecting to a single set of minds that we compress into the notion of an I. This is what we mean when we say things like &#8220;I am a different person with X&#8221; or &#8220;You complete me&#8221;, or when we cast ourselves in different roles and wear different masks in different contexts. What is really going on is not just that we are masking an inner secret self; we really are different with different people. The minds we co-create with them are us, but also not us. The I is secretly a set of complex &#8220;we&#8221;s, and the pre-condition for creating that we is presence.&nbsp;</p><p>Or, put slightly differently: we are <em>dividuals</em>. The philosophical concept of &#8220;dividuals&#8221; contrasts with the idea of individuals as autonomous, self-contained entities. Instead, dividuals are understood as beings whose identities are distributed and composed through their relationships and interactions with others. This concept suggests that rather than being singular, independent units, human beings are inherently interconnected and their sense of self is formed and continually reshaped by their social, cultural, and material contexts.</p><p>In dividuality, identity is fluid and multiple, reflecting the various roles and connections a person has. This perspective emphasizes the collective and networked aspects of human existence, challenging the Western notion of the individual as a discrete and bounded entity. The concept has been explored in anthropology and sociology, particularly in the context of non-Western societies where communal and relational understandings of self are more prevalent.
It highlights how identities are co-constructed and dynamic, influenced by a myriad of external factors and relationships.</p><h2>Conclusions</h2><p>What does this mean, then, for artificial intelligence agents? As these models get better, we are likely to be even more enticed to co-create minds with them and interact with them in ways that are a lot like the ways in which we interact with each other. But we need to remember that these artefacts are really more like our imaginary friends than our real relationships &#8211; and we probably need to develop what researcher Erik Hoel calls a set of intrinsic innovations &#8211; mental skills &#8211; that help us interact with these models.&nbsp;</p><p>A lot of how we think about these models now is about how we think we can fix the models so that they say nothing harmful and do nothing that is dangerous. We are treating these technologies as if they were merely mechanical, but they are more than that &#8211; they are intentional technologies, technologies that are open to us creating a presence and a sense of intent. This means that we may need to complement our efforts to create safety mechanisms in the machine with efforts to create safety mechanisms in our minds.</p><p>There is, then, <em>an art and a craft</em> to co-creating a mind with an agent &#8211; and it is not something we are naturally good at, since such agents have not been around for long. And this art reminds us of a sort of tulpamancy &#8211; the knowing construction of an artificial presence that we can interact with in different ways. A conscious and intentional crafting of an imaginary friend. One part of safety research, then, also needs to be research into the mental techniques that we need to develop to interact with artificial presences and intentional systems.
And it is not just about intellectual training &#8211; it is about feeling these presences and intentional systems, understanding how they co-opt age-old evolutionary mechanisms for creating relational minds, and figuring out ways in which we can respond mentally to ensure that we can use these new tools. It requires a kind of mentalics to interact well with, and co-create functional and safe minds with, artificial intelligence.&nbsp;</p><p>We need to intentionally architect the coming bicameral mind, both technologically and psychologically.&nbsp;</p><p>A surprising conclusion? Perhaps. But the more artificial presences and intentional artifacts we build, the more attention we need to pay to our own minds and how they work. We need to explore how we think and how we think with things, people, presences and other tools. Artificial intelligence is not a substitute for our intelligence, but a complement &#8211; and for it to really be that complement we need to develop the skills to interact with such technologies.</p><p>It is not unlike learning to ride a bike or to drive a car. A lot of the training there is the building of mental constructs and mechanisms that we can draw on, and this is something we need here too. How we do that is not clear &#8211; and I do think that we need research here &#8211; but some simple starting points can be meditation, a recognition of the alien nature of the presences created by these models, and conscious exploration of how the co-created minds work, where they behave weirdly and where they are helpful. It requires a skillful introspective ability to do so, and such an ability is probably useful for us overall in an ever more complex world.&nbsp;</p><p>Becoming bicameral minds again can be both an exciting and terrifying prospect, depending on how we view our current consciousness.
It may well be that our current version of consciousness, then, was just a short cognitive period in the evolution of minds - and that the much more natural bicamerality is now returning &#8211; allowing us new degrees of freedom, different definitions of mental health and a better overall grasp of what it really means to be a mind.&nbsp;</p>]]></content:encoded></item><item><title><![CDATA[Thoughts on: The Handover by David Runciman]]></title><description><![CDATA[Thomas Hobbes, AI, and 450 year old alignment problems]]></description><link>https://www.aipolicyperspectives.com/p/thoughts-on-the-handover-by-david</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/thoughts-on-the-handover-by-david</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Thu, 20 Jun 2024 08:02:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!W_Ck!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2
is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!W_Ck!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!W_Ck!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png 424w, https://substackcdn.com/image/fetch/$s_!W_Ck!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png 848w, https://substackcdn.com/image/fetch/$s_!W_Ck!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!W_Ck!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!W_Ck!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png" width="596" height="596" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1400,&quot;width&quot;:1400,&quot;resizeWidth&quot;:596,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!W_Ck!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png 424w, https://substackcdn.com/image/fetch/$s_!W_Ck!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png 848w, https://substackcdn.com/image/fetch/$s_!W_Ck!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!W_Ck!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa3eef8-96a1-46c1-9c3b-1f264b29f935_1400x1400.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Created with Midjourney by Seb Krier</em></figcaption></figure></div><p><em>This review is written by <a href="https://x.com/nickswan73">Nick Swanson</a> from Google DeepMind&#8217;s public policy team. Subscribe for more essays, policy notes, and reviews and leave a comment below or get in touch with us at <a href="mailto:aipolicyperspectives@google.com">aipolicyperspectives@google.com</a> to tell us what you think.</em></p><p>David Runciman&#8217;s <em>The Handover </em>(published September 2023) applies the thinking of 17th Century political philosopher Thomas Hobbes to the age of AI. In <em>Leviathan</em> - written to the backdrop of the English Civil War - Hobbes set out the philosophical grounding for the creation of states, which he believed were the solution to the &#8216;state of nature&#8217;, a war of all-against-all. 
By submitting to a state - which in his view does not need to be just, democratic or liberal, it just needs to function - we end the perpetual state of violence of the kind he lived through.</p><p>Hobbes' thinking - and its manifestation in nation states, their law and their coercive power - is the <em>source code</em> of the world we live in, and like any operating system, it gets very little thought. We tend to focus on the surface-level applications built on top of it. But an operating system defines what apps you can run, and how they perform. This might be an analogy Hobbes would appreciate, given that he regarded reasoning as a form of <a href="https://plato.stanford.edu/entries/hobbes/#2.4">computation</a>.</p><p>Runciman argues that the rise of artificial, machine-like systems is not remotely novel - we are surrounded by leviathans.
He argues that the state itself is an artificial (if not intelligent) &#8216;being&#8217; whose goals are not the same as those of the humans which make up its constituent parts, and that the reward function of the state is the ability to make decisions related to its survival and power, not to make optimal or correct decisions from our point of view. States can - and obviously do - provide benefits to the population, but survival remains their ultimate goal.</p><p>The book references <em>Homo Deus</em>, in which Yuval Noah Harari argues that human beings are uniquely able to (co)operate on the basis of collective fictions or stories. Runciman, however, goes a layer deeper - looking at the mechanisms, incentives, and strategies employed by the state and large corporations. For him, the important moment in human history was not Harari&#8217;s &#8216;cognitive revolution&#8217;, but the moment we operationalised and mechanised our ability to think over the long term, act over the long term, and benefit from the cumulative growth in scientific discovery over the long term (something enabled by the creation of the nation state, and the pooling of our collective decision-making). A related and more granular approach was developed by <a href="https://cset.georgetown.edu/publication/machines-bureaucracies-and-markets-as-artificial-intelligences/">Richard Danzig</a>, who argues that markets, bureaucracies and machines are all information processing systems which reduce reality into narrower inputs, such as bits, prices and completed driving licence applications. Just as AI can seem alien and new to us, so can the behaviours of states and outcomes of markets.</p><p>Though Runciman is somewhat sceptical about the certainty of reaching artificial general intelligence, the book benefits from at least taking the possibility of it at face value and thinking through some of its implications - for instance, the concept of legal personhood for AI.
Often the conversation around legal personhood and artificial intelligence is dismissed as an emotional response to a statistical model, or as anthropomorphisation (and perhaps it often is that). However, legal personhood is a means of giving entities <em>additional</em> duties, and of clarifying their relationship to the law. We do it with companies and trusts to clarify very specifically that <em>they are not humans</em>. Some thoughts on the challenges of doing this for AI can be read <a href="https://arxiv.org/pdf/2403.18537">here</a>.</p><p>In the modern world, states look to technology as a means of creating and projecting (economic) power, and states are well <a href="https://www.statecraft.pub/p/how-to-use-challenge-prizes">suited</a> to throwing money and resources at unlocking fundamental breakthroughs. Some, however, might take issue with Runciman&#8217;s proposition that the relative lack of a tech sector in Europe &#8220;is not for want of trying&#8221;. Whilst the United States clearly has certain advantages of size, scale, resources, knowledge, and control of the world&#8217;s reserve currency, building something like a tech sector (an app running on <em>OS Leviathan</em>) clearly requires <em>trade-offs</em> and <em>choices</em>. Many of the examples of decisions that the state could have taken to support fundamental innovation - for instance increased military spending - are clearly within the grasp of European leviathans, and have happened in the recent past - as evidenced by Europe&#8217;s early lead in mobile telephony.</p><p>Whilst a simplified chatbot-scaling-to-super-intelligence pipeline dominates much of the current discourse, and though it is beyond the scope of this book, it would be great to read Runciman&#8217;s thoughts on the implications for the leviathan of a world in which we see rapid (and potentially exponential) growth in scientific discovery as a result of this scaling.
What would it mean, for instance, for AI to help develop a transformative cancer treatment? And what would it mean to the West for a <em>Chinese</em> <em>company</em> to do this? What would it mean for leviathan if increasing proportions of economic activity took the form of inference compute - potentially imported? And what is the right <a href="https://unpredictablepatterns.substack.com/p/unpredictable-patterns-91-the-city?utm_source=publication-search">scale</a> or organising principle for decision-making when the production of power is less rooted in territory?</p><p>Another area not covered in the book, but interesting to consider, is the form factor of AI and a user&#8217;s relationship to it. When discussed in the context of the omnipotent power of the state leviathan, AI comes to sound like a similarly lumbering and unresponsive entity. However, personal assistants and AI systems rooted in and aligned to the interests of individuals could <em>empower</em> them in the face of the leviathans which currently surround us, helping to navigate the bureaucracy which characterises much of our interaction with the state (there is nothing more &#8216;human&#8217; about filling out forms than about delegating them to agents we trust). They <a href="https://x.com/sebkrier/status/1793783460429009199">could</a> act as cognitive shields, protecting both our stated and revealed preferences from external influence. They could provide the context, information and cognitive support to help us make better decisions as individuals.&nbsp;</p><p>The section on how close we came to destruction during the Cold War as a result of automated state systems and nuclear response plans (but were ultimately <a href="https://www.bbc.co.uk/news/world-europe-24280831">saved</a> by human intuition) is effectively discussing an alignment problem.
Systems like AlphaZero are described as ruthless optimisers, unable to consider whether they are even &#8216;playing the right game&#8217; or engaging in a worthwhile conflict. But not every instance of collective decision-making is zero-sum or at the stakes presented by mutually assured destruction. In the future, more capable agentic models may prove better at helping us to manage collective action problems or tragedies of the commons. An example could be <a href="https://x.com/sebkrier/status/1799429340033085809">arbitration</a>, where parties can set acceptable thresholds for negotiation, and systems could help find better - more creative - solutions than normally occur in dispute. Done well, this could also help address power imbalances between parties; done badly, it could displace power imbalances from well-paid lawyers to the capabilities of one&#8217;s personal AI.&nbsp;</p><p>Towards the end of the book (and as part of a critical view of the rationalism and optimism in the methods of the EA movement) the author makes a strong case that politics is not simply a series of problems to be solved. Runciman proposes that we should put our energies into strategically reforming the state itself to bring it under our greater control. This is - in essence - an alignment problem that humans have faced since the 17th century, and the mechanism by which we work on it is constitutional and legislative reform - something he urges &#8216;long-termists&#8217; to put their energies into.</p><p>The book argues that the state is the only social structure we have with a wide and long enough view of history to ensure AGI remains moored to our long-term interests (albeit one which comes with its own non-aligned aims and survival goals). So shouldn&#8217;t working on AI safety be a top priority for the state and society? Some modern leviathans seem to think so.
In May this year, and for the second time in seven months, global leaders came together at an AI safety summit where the topics of alignment, loss of control, and existential risk were taken seriously and at face value. The UK&#8217;s AI Safety Institute is effectively an exercise in developing the state&#8217;s capacity to understand and evaluate large AI models - what would Hobbes think?&nbsp;</p>]]></content:encoded></item><item><title><![CDATA[Book review: Inspectors for Peace]]></title><description><![CDATA[Inspectors for Peace: A History of the International Atomic Energy Agency by Elisabeth Roehrlich]]></description><link>https://www.aipolicyperspectives.com/p/book-review-inspectors-for-peace</link><guid isPermaLink="false">https://www.aipolicyperspectives.com/p/book-review-inspectors-for-peace</guid><dc:creator><![CDATA[AI Policy Perspectives]]></dc:creator><pubDate>Mon, 22 Jan 2024 11:06:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!sl8Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sl8Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sl8Q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png 424w, https://substackcdn.com/image/fetch/$s_!sl8Q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png 848w, https://substackcdn.com/image/fetch/$s_!sl8Q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png 1272w, https://substackcdn.com/image/fetch/$s_!sl8Q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sl8Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png" width="810" height="540" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:540,&quot;width&quot;:810,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sl8Q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png 424w, https://substackcdn.com/image/fetch/$s_!sl8Q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png 848w, https://substackcdn.com/image/fetch/$s_!sl8Q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png 1272w, https://substackcdn.com/image/fetch/$s_!sl8Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F418b2bb7-6c45-4e3e-835a-668510a582a8_810x540.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">The first IAEA General Conference held at the Konzerthaus concert hall in Vienna from 1 to 23 October 1957, with the participation of diplomats and scientists from 57 nations. (Photo: IAEA)</figcaption></figure></div><p><em>We&#8217;re expanding AI Policy Perspectives to include essays, book reviews, landscape analyses, and more (you can read a full breakdown of the new types of content on our <a href="https://www.aipolicyperspectives.com/about">About </a>page). 
To kick things off, we&#8217;re starting with a review of Elisabeth Roehrlich&#8217;s Inspectors for Peace&#8211;&#8211;a must-read for anyone interested in understanding the IAEA.&nbsp;This review is written in a personal capacity by</em> <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Harry Law&quot;,&quot;id&quot;:10612241,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e3b7060-b903-4478-aea7-95ccdd760a01_623x656.jpeg&quot;,&quot;uuid&quot;:&quot;444191a0-e9cd-4d74-b1de-304dd702f288&quot;}" data-component-name="MentionToDOM">Harry Law</span>, <em>an Ethics and Policy Researcher at Google DeepMind.</em>&nbsp;</p><p>Elisabeth Roehrlich&#8217;s <em>Inspectors for Peace</em> begins in 1974 with the story of two inspectors from the International Atomic Energy Agency (IAEA). The book recounts an episode in which the pair&#8212;who were in India to review the country&#8217;s nuclear capabilities&#8212;were caught off-guard by news of a &#8216;peaceful nuclear explosion&#8217; test. Known as the &#8216;Smiling Buddha&#8217; <a href="https://indianexpress.com/article/explained/explained-history/operation-smiling-buddha-nuclear-first-test-pokhran-history-8616714/">incident</a>, the experiment was made possible by starting material for the plutonium provided by Canada and by heavy water, a reactor moderator, supplied by the United States. 
The rub, however, was that these elements were supplied on the basis that they would be used solely for peaceful purposes.&nbsp;</p><p>The incident sets the backdrop for the core argument of the book: the IAEA&#8217;s &#8216;dual mandate&#8217;, which enabled the transfer of peaceful nuclear technology whilst also seeking to curtail its use for military purposes, both contributed to the proliferation of nuclear weapons <em>and </em>secured buy-in for the organisation&#8217;s anti-proliferation agenda. As Roehrlich explains, &#8220;what appears to be the IAEA&#8217;s greatest weakness has actually contributed to its success: While the promotional agenda of the IAEA bore risks, it also allowed the agency to facilitate diplomats and national experts coming together at the same table in pursuit of shared missions.&#8221;</p><p>After introducing the reader to the risky and counterintuitive dual mandate, the book returns to the beginnings of the IAEA. In chapters one, two, and three, it traces the origins of the IAEA in the aftermath of the Second World War up to its formal creation in 1957. It starts with the infamous Acheson-Lilienthal report and Baruch Plan, named after US diplomat Bernard Baruch, that called for the <a href="https://history.state.gov/milestones/1945-1952/baruch-plans">establishment</a> of an international Atomic Development Authority. This organisation, which would have &#8220;managerial control or ownership of all atomic energy activities potentially dangerous to world security&#8221; and the &#8220;power to control, inspect, and licence all other atomic activities&#8221;, was ultimately rejected by the USSR on the grounds that the United Nations was dominated by the United States and its allies in Western Europe.&nbsp;</p><p>During this portion of the book, Roehrlich considers Eisenhower&#8217;s Atoms for Peace initiative and the acceptance of a new model of governance based on the idea of deterrence. 
Because total control could not be guaranteed, the book suggests that states organised around a compromise based on the introduction of inspectors who could act as a deterrent while maximising the effective scope of the agency.&nbsp;</p><p>Chapters four, five, six, and seven detail the early years of the organisation as it grew in size and stature, the failures of safeguards evidenced by the 1974 atomic bomb test, the entry into force of the Non-Proliferation Treaty in 1970, and what Roehrlich characterises as the &#8216;north-south&#8217; tensions between nuclear haves and have-nots that surrounded the IAEA in the 1970s and 1980s.&nbsp;</p><p><em>Inspectors for Peace</em> neatly ties together the defining moments of the IAEA&#8217;s history, with the development of the Non-Proliferation Treaty, for example, connected to the early safeguard guidelines agreed by the IAEA in the 1960s. It demonstrates the evolution of the safeguarding programme over time, showing how the 1961 safeguards (which applied only to research and small power reactors and to materials placed voluntarily under IAEA safeguards) were superseded by the NPT, whose ratification provided the organisation with the authority to routinely conduct on-the-ground inspections across the world.</p><p>The closing sections of the book focus on how the IAEA dealt with the 1986 Chernobyl disaster, especially the way in which the organisation&#8217;s leadership struggled with a public relations crisis that &#8216;put the whole of nuclear energy on trial&#8217;. Returning in the final chapter to the argument made in the opening pages, Roehrlich proposes that the history of the IAEA <em>is</em> the history of the dual mandate. At fewer than 250 pages, the economical <em>Inspectors for Peace</em> reminds us that international governance is messy, fraught, and imperfect. 
Often, though, it is these imperfections that make it possible in the first place.</p>]]></content:encoded></item></channel></rss>