AGI, Government, and the Free Society
Navigating the tightrope between authoritarianism and anarchy
In this essay, Séb Krier explores how AGI might affect the delicate balance of power between state and society. The essay is based on a recent paper by Séb, Justin Bullock and Samuel Hammond.
As highlighted by Woodrow Wilson in 1887, governments are composed of two aspects - politics and administration - which, though often confused in practice, should be clearly distinguished. Politics is defined by who decides, who makes the rules, and how these individuals are selected. Administration describes government in action - the efficient and systematic execution of laws and public duties. Over time, these concepts have evolved to accommodate different forms of government, from tribes and divine monarchies to republics. The development of free and liberal democratic societies is a relatively recent phenomenon, emerging in the 16th and 17th centuries with the rise of European nation-states.
Throughout history, new leaps in technology have altered and destabilised these forms of government. In the 15th and 16th centuries, the diffusion of the printing press and a growing mercantile class enabled administrative record keeping, which, alongside broader demands for property rights and the enforcement of contracts, laid the groundwork for more centralised forms of governance. More recently, the Arab Spring and the contemporary rise of populism in Western democracies have been attributed, at least in part, to the capacity of the internet and social media to potentiate mass mobilisations against incumbent political establishments.
AI heralds the next major technological shift. While its precise trajectory and diffusion remain uncertain, many researchers and forecasters anticipate the advent of AGI in a matter of years rather than decades or centuries. This raises a question about how AGI would impact liberal democratic societies. Daron Acemoglu and James Robinson’s ‘narrow corridor’ framework argues that free and open societies have traditionally depended upon maintaining a delicate balance between the relative powers of society and the state. This equilibrium avoids an overly powerful despotic state on one hand, and a chaotic, absent state that is too weak to govern or provide services on the other. Liberty thrives in the narrow corridor between these extremes.
Historically, liberal societies maintained this precarious equilibrium through constitutional constraints, checks and balances, and the rule of law. However, this equilibrium has never been static. Rather, technological and social change have forced repeated renegotiations of the social contract - from the rise of mass politics in the industrialising West to the welfare state reforms of the early 20th century.
Recent empirical research by Ryan Murphy and Colin O'Reilly has questioned whether countries actually follow these trajectories and challenged some of the proposed mechanisms, such as the ‘Red Queen effect’ in which the state and society are thought to be locked in a constant, co-evolutionary race. But the core idea of a necessary balance between state power and societal autonomy still serves as a useful illustrative heuristic when considering the potential impacts of AGI. In particular, it helps to highlight the critical trade-offs that we can expect to face between efficiency and accountability, collective coordination and individual freedom, and technological capability and democratic control.
How could AGI strengthen free societies?
AGI will likely offer an absolute advantage over human decision-making in terms of scalability, cost, and quality. Artificial bureaucrats could draw on specialised sub-agents - dynamically switching between data interpretation, risk modelling, and stakeholder communication - to dramatically speed up lengthy tasks such as interpreting legislation, carrying out environmental impact analyses, or detecting fraud.
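To make the sub-agent pattern concrete, here is a minimal illustrative sketch in Python. Everything in it - the agent names, the routing rule, and the stub functions - is a hypothetical stand-in, not a description of any deployed or proposed government system.

```python
# A minimal, hypothetical sketch of the sub-agent pattern described above.
# The agent names, routing rules, and stub "run" functions are illustrative
# assumptions, not a real system.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SubAgent:
    name: str
    run: Callable[[str], str]  # takes a task description, returns a finding


def interpret_data(task: str) -> str:
    return f"[data-interpretation] summarised datasets relevant to: {task}"


def model_risk(task: str) -> str:
    return f"[risk-modelling] estimated downside scenarios for: {task}"


def draft_communication(task: str) -> str:
    return f"[stakeholder-comms] drafted a plain-language briefing on: {task}"


# Register the specialised sub-agents the "artificial bureaucrat" can call on.
SUB_AGENTS: Dict[str, SubAgent] = {
    "data": SubAgent("data interpretation", interpret_data),
    "risk": SubAgent("risk modelling", model_risk),
    "comms": SubAgent("stakeholder communication", draft_communication),
}


def artificial_bureaucrat(task: str, steps: List[str]) -> List[str]:
    """Dispatch one task through a sequence of specialised sub-agents."""
    findings = []
    for step in steps:
        agent = SUB_AGENTS[step]
        findings.append(agent.run(task))
    return findings


if __name__ == "__main__":
    report = artificial_bureaucrat(
        "environmental impact analysis for a proposed rail extension",
        steps=["data", "risk", "comms"],
    )
    print("\n".join(report))
```

The point of the sketch is the dispatch loop: any speed or quality gains depend on each sub-agent being genuinely specialised, which the stubs above obviously are not.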
AGI agents could also lead to more equitable decision making. Human bureaucracies are littered with subjectivity and unfairness. Some of this unfairness stems from their reliance on traditional rules-based automation, which can lead to disproportionately harsh outcomes for marginalised groups in domains like tax enforcement or welfare eligibility. These ‘dumb’ systems can miss critical context and are typically only deployed in situations that are amenable to rules-based automation in the first place, such as initiating audits of Earned Income Tax Credit recipients. In contrast, more general AI systems will be able to grapple with the complexities and idiosyncrasies of the tax returns filed by high-income individuals, potentially reducing disparities in how laws are enforced.
AGI could also improve how governments secure democratic inputs, leading to more feedback on, and potentially control over, what governments do. Gudiño-Rosero and colleagues recently explored how “digital twins” that simulate individual citizens’ views and “represent” their policy preferences could lead to an “augmented democracy”, in similar fashion to how AI agents may soon start to represent individuals in commercial transactions. This work stops far short of creating actual digital twins for political representation, partly due to limitations in capturing individual preferences. But future AI systems, enhanced with long-term memory, could enable higher-fidelity simulations. Governments could use these simulations to draw up more effective political agendas and experiment with different policy ideas, while also raising questions about how the roles of elected officials should evolve.
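As a purely illustrative sketch of what polling ‘digital twins’ might involve, the Python below reduces each twin to a handful of stated preference weights and a naive scoring rule. This is an assumption-laden toy and does not reflect the method used by Gudiño-Rosero and colleagues.

```python
# Illustrative only: each "twin" is reduced to a few preference weights and a
# naive linear scoring rule. Real preference modelling is far richer.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DigitalTwin:
    citizen_id: str
    # Weight for each policy dimension, from -1 (opposed) to +1 (supportive).
    preferences: Dict[str, float]

    def support_for(self, proposal: Dict[str, float]) -> float:
        """Score a proposal by how well it matches the twin's stated preferences."""
        return sum(self.preferences.get(dim, 0.0) * weight
                   for dim, weight in proposal.items())


def simulate_poll(twins: List[DigitalTwin], proposal: Dict[str, float]) -> float:
    """Fraction of twins whose score for the proposal is positive."""
    supporters = sum(1 for t in twins if t.support_for(proposal) > 0)
    return supporters / len(twins)


if __name__ == "__main__":
    twins = [
        DigitalTwin("a", {"public_transit": 0.8, "fuel_tax": -0.2}),
        DigitalTwin("b", {"public_transit": 0.1, "fuel_tax": -0.9}),
        DigitalTwin("c", {"public_transit": 0.6, "fuel_tax": 0.4}),
    ]
    proposal = {"public_transit": 1.0, "fuel_tax": 0.5}  # hypothetical package
    print(f"Simulated support: {simulate_poll(twins, proposal):.0%}")
```

Even this toy makes the governance question visible: whoever sets the preference weights and the scoring rule effectively decides what the ‘simulated electorate’ wants.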
How could AGI undermine free societies?
In Seeing Like a State, the late political scientist and anthropologist James C. Scott offered a critical take on government efforts to make society legible, from birth and death registries to financial reporting requirements. AGI could dramatically enhance such efforts, enabling governments to analyse vast data streams in real time and to monitor and predict societal trends, risks, and individual behaviours at a granularity and accuracy far beyond current levels.
This could make government decision-making radically more efficient and data-driven. But it could also enable unprecedented surveillance and control over citizens, stifling dissent. It could also dramatically reduce the cost of monitoring whether people are complying with laws. For example, governments could pass CCTV camera feeds through a multimodal AI model for continual analysis, leading to a form of ‘perfect enforcement,’ where even minor infractions become subject to consistent punishment. While this might seem beneficial from a rule-of-law perspective, it raises significant concerns for individual freedom and the quality of governance.
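The enforcement pipeline described above can be sketched in a few lines of Python. The frame source, the model call, and the infraction labels are all hypothetical stand-ins; a real system would route frames to a multimodal model rather than the stub used here.

```python
# A deliberately simplified sketch of the 'perfect enforcement' pipeline.
# The frame source, the model call, and the citation format are assumptions.

from dataclasses import dataclass
from typing import Iterable, List, Optional


@dataclass
class Frame:
    camera_id: str
    timestamp: float
    pixels: bytes  # raw image data in a real pipeline


def fetch_frames() -> Iterable[Frame]:
    """Stand-in for a CCTV feed; yields a couple of dummy frames."""
    yield Frame("cam-01", 0.0, b"")
    yield Frame("cam-01", 1.0, b"")


def classify_infraction(frame: Frame) -> Optional[str]:
    """Stand-in for a multimodal model that labels possible infractions.

    A real system would send the frame to a vision-language model and parse
    its output; here we simply return None (no infraction detected).
    """
    return None


def enforcement_loop(frames: Iterable[Frame]) -> List[str]:
    citations = []
    for frame in frames:
        label = classify_infraction(frame)
        if label is not None:
            # Every detected infraction becomes a citation: this consistency
            # is exactly what raises the 'perfect enforcement' concern.
            citations.append(f"{frame.camera_id}@{frame.timestamp}: {label}")
    return citations


if __name__ == "__main__":
    print(enforcement_loop(fetch_frames()) or "No infractions flagged.")
```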
Take, for example, the National Highway Traffic Safety Administration’s recall of Tesla’s ‘Full Self-Driving’ software update in 2022, because it was carrying out ‘rolling stops’ - something that human drivers regularly do when an intersection is empty. The advent of self-driving technology could make every rolling stop legible, imposing a level of perfect compliance that many humans would consider draconian and inefficient. (Although such perfect compliance may come with a silver lining - exposing outdated or poorly crafted laws that rely on lenient enforcement and human discretion.)
Delegating decisions to AGI also raises concerns about the loss of moral accountability in public administration. For example, while AGI agents may excel at optimising policies for efficiency, they may lack the ethical nuance required to address competing societal values. This disconnect between computational optimisation and human morality risks eroding public trust.
AGI could also undermine free societies by empowering non-state actors. In more positive scenarios, it could enable citizens to better understand and advocate for policy positions, fact-check officials, and usher in new kinds of public deliberation. However, individuals and groups could also use AI agents to orchestrate harmful actions, such as manipulating public opinion or coordinating insurgencies. They could also create opaque financial and communication channels that make the economy less, rather than more, legible to governments - much as cryptocurrency can be used to launder money despite the legibility of its public ledger.
How to secure free societies
To secure the narrow corridor, states must neither blindly hand off power to AI systems nor clamp down on them in ways that stifle innovation. On the technology front, novel privacy-enhancing technologies could help individuals to maintain autonomy and privacy in the face of increasingly pervasive state monitoring. Investments in interpretability could help to ensure that AGI systems operate transparently and remain accountable for their decisions.
On the institutional front, governments could embrace hybrid structures that combine AGI’s computational power with the nuanced judgment and accountability that human administrators provide. This might include equipping public institutions with their own advanced AI tools for functions like biosurveillance, cyberdefense, and regulatory oversight, ensuring they are not outpaced by threats.
Governments could also look to reinforce participatory democratic processes by enabling large-scale deliberative platforms, real-time citizen feedback systems, and representative digital twins, while designing robust safeguards to ensure that they genuinely enhance, rather than undermine, democratic accountability.
Perhaps most importantly, securing the narrow corridor in an age of AGI will require an epistemic shift in how we approach the governance of emerging technologies. Rather than passively reacting to technological disruptions, policymakers and the public must cultivate a greater capacity for anticipatory governance. This means proactively imagining AGI's transformative potential and stress-testing our institutions against it. To do this, we can use tools like scenario planning, threat modelling, and forecasting - drawing on AI's own emerging abilities in these areas.
Thanks to Justin Bullock and Samuel Hammond for authoring the original paper and to Conor Griffin for editing support.