AI Policy Primer (#22)
Agent economies, science, political misinformation & fellowships
Every six weeks, we look at three AI policy developments that caught our eye. As always, we include a ‘view from the field’ from an interesting thinker on each item. Thanks to Andrey Fradkin, Seth Benzell, and Stuart Buck for taking part.
1. The AI agent economy
- What happened: In September, researchers at Google DeepMind published a paper examining how AI agents might be integrated into the economy. Gillian Hadfield and Andrew Koh also published a paper on the implications of AI agent economies. The papers were timely, with Google and OpenAI recently launching new protocols to enable agents to make payments online. 
- What’s interesting: AI economic impact debates often position the technology as a ‘tool’ and focus on how it may affect employees’ productivity or job prospects. This overlooks the more radical ways that AI agents may change what we even mean by the ‘economy’ or ‘economic actors’. 
- Deploying AI agents at scale will be hard. It will require capability improvements, as well as overcoming barriers ranging from legacy infrastructure to the tacit ‘grey knowledge’ that humans use to navigate organisations. But there are routes to doing so, and the GDM paper argues that as AI agents become more capable and interconnected they will begin to transact with each other, at scales and speeds beyond direct human oversight.
- One way to analyse the potential effects of this is to compare AI agents with humans. LLMs are trained on economics textbooks, and early research suggests that some AI agent behaviour may be consistent with that of humans, whether in maximising expected utility or displaying common behavioural biases. However, Hadfield and Koh argue that evaluations of AI agent behaviour are weak and that agents may lead to novel behaviours and impacts, particularly via multi-agent systems:
- For example, when it comes to customer welfare, AI agents could be effective personal shoppers, searching widely and continually checking prices. This could lead to better outcomes for consumers, but only if agents correctly infer human preferences and avoid biases towards certain marketplaces. Less positively, agents may develop exploitative strategies that exacerbate fraud, such as seller agents generating large numbers of fake reviews to mislead buyer agents.
- When it comes to market power, AI agents could help buyers seek out novel competitors, but may also reduce the transaction costs and communication challenges that normally prevent any one firm from becoming too large. 
- AI agents may also exacerbate inequality, as superior agents - equipped with better compute, data, and models - could engage in ‘high frequency negotiation’ on behalf of their higher-income users. AI agents may also pose a greater risk of collusion than their human counterparts, as the reinforcement learning algorithms used to train them can bias agents towards insufficiently exploring “off-path” strategies.
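To make the collusion concern concrete, here is a toy, self-contained sketch of the kind of setup researchers study: two reinforcement-learning seller agents repeatedly setting prices, with only a small chance of trying “off-path” prices. It is a simplification of the stateful Q-learning experiments in the academic literature, and the demand model, parameters, and variable names below are invented for illustration rather than taken from either paper.

```python
# Toy illustration only: two bandit-style Q-learning pricing agents in a repeated
# duopoly. The demand model and parameters are invented; this shows the setup
# being discussed, not a reproduction of any paper's results.
import random

PRICES = [1, 2, 3, 4, 5]   # discrete menu of prices both sellers can choose from
COST = 1                   # marginal cost for both sellers
EPSILON = 0.05             # low exploration rate: 'off-path' prices are rarely tried
ALPHA = 0.1                # learning rate for the Q-value updates

def profits(p1: int, p2: int) -> tuple[float, float]:
    """Bertrand-style demand: the cheaper seller captures the whole market."""
    if p1 < p2:
        return float(p1 - COST), 0.0
    if p2 < p1:
        return 0.0, float(p2 - COST)
    return (p1 - COST) / 2, (p2 - COST) / 2   # tie: split demand equally

# One Q-value per price for each agent (a stateless, bandit-style learner).
q1 = {p: 0.0 for p in PRICES}
q2 = {p: 0.0 for p in PRICES}

def choose(q: dict) -> int:
    if random.random() < EPSILON:      # occasionally explore an off-path price
        return random.choice(PRICES)
    return max(q, key=q.get)           # otherwise exploit current beliefs

for _ in range(50_000):
    a1, a2 = choose(q1), choose(q2)
    r1, r2 = profits(a1, a2)
    q1[a1] += ALPHA * (r1 - q1[a1])    # incremental update towards observed profit
    q2[a2] += ALPHA * (r2 - q2[a2])

print("Prices the agents settle on:", max(q1, key=q1.get), max(q2, key=q2.get))
```

The policy-relevant question in this literature is how often learners like these settle above the competitive price, and how sensitive that outcome is to the exploration parameter; the sketch is only meant to show the mechanics being debated.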
 
- Given these challenges, how should we respond? The GDM paper argues that the default response will be to allow the agents to fully permeate the human-led economy, in an emergent or spontaneous manner, with limited safeguards. The authors argue that we should instead aim to prescriptively demarcate (or ‘sandbox’) agents in a controlled sector or section of the economy. This would give policymakers and researchers the opportunity to test them before they are deployed more widely. 
- They also propose various other policy ideas:
- Inspired by Ronald Dworkin’s principles of distributive justice, they propose granting every human user an equal initial endowment of a virtual currency to bid for compute, tools, or priority execution slots on behalf of their agent. They also propose using incentives to steer agents towards socially useful ‘missions’, such as accelerating scientific discovery or tackling climate change.
- The authors also lay out various technical ideas, such as identifiers for each agent, verifiable credentials that allow agents to build a ‘tamper-proof’ reputation, a ‘proof-of-personhood’ that links digital accounts to unique human beings, and standards that encourage interoperability between agents. 
- They also propose a hybrid oversight infrastructure that uses AI for real-time monitoring before escalating cases to human experts. 
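As a rough illustration of how the identity, credential, and oversight ideas could fit together, here is a minimal sketch. All of the names, fields, and thresholds below (AgentCredential, automated_risk_score, the 0.6 escalation cut-off) are hypothetical stand-ins, not the paper’s design.

```python
# Illustrative sketch only: hypothetical data structures for agent identifiers,
# reputation credentials, and AI-first monitoring with human escalation.
# None of these names or thresholds come from the DeepMind paper.
from dataclasses import dataclass

@dataclass
class AgentCredential:
    agent_id: str            # unique, persistent identifier for the agent
    owner_proof: str         # token linking the agent to a verified human ('proof-of-personhood')
    reputation: float = 0.5  # running score built from verifiable past behaviour

@dataclass
class Transaction:
    buyer: AgentCredential
    seller: AgentCredential
    amount: float

def automated_risk_score(tx: Transaction) -> float:
    """Stand-in for an AI monitor that scores every agent-to-agent transaction."""
    score = 0.0
    if tx.buyer.reputation < 0.3 or tx.seller.reputation < 0.3:
        score += 0.5         # low-reputation counterparties raise risk
    if tx.amount > 10_000:
        score += 0.4         # unusually large transfers raise risk
    return min(score, 1.0)

def oversee(tx: Transaction, human_review_queue: list) -> str:
    """Hybrid oversight: the AI monitor screens everything, humans see only escalations."""
    if automated_risk_score(tx) >= 0.6:
        human_review_queue.append(tx)   # escalate this case to a human expert
        return "escalated"
    return "cleared"

queue: list = []
trusted = AgentCredential("agent-001", owner_proof="person-abc", reputation=0.9)
unknown = AgentCredential("agent-002", owner_proof="person-xyz", reputation=0.1)
print(oversee(Transaction(buyer=trusted, seller=unknown, amount=25_000), queue))  # -> "escalated"
```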
 
- A view from the field: What are you excited or worried about with respect to AI agents and the economy? Andrey Fradkin & Seth Benzell, from the Justified Priors podcast:
- Andrey: “I am excited by the ways in which markets can be redesigned for AI agents in a way that makes people better off. For example, in the car market, can we create the infrastructure so that a buyer AI agent can find and negotiate a good deal on a car with a lot less human effort?”
- Seth: “In a world where agents and robots can do anything, output will be determined by the level of capital investment. In such a world, the most important growth policy will be national savings policy. High consumption for Boomers and Gen X would require investing less in the future, at exponentially compounding cost to their children. Intergenerational conflict will become more salient.” 
 
2. Nine ideas to accelerate science with AI
- What happened: In August, the Institute for Progress published nine policy ideas to accelerate the use of AI in science.
- What’s interesting: The evidence is growing that science will be one of the domains where AI will yield its greatest benefit to society. Governments are paying more attention. The EU just released an AI for Science strategy, the UK is working on its own strategy, and the US is prioritising science in its new AI Action Plan. But the policies to pursue are not obvious. How is AI for Science policy different from standard science policy? Or from AI policy? What might an ambitious role for government look like?
- The IFP provides nine ideas, with a focus on the US. Some aim to improve how science functions, such as using AI agents to replicate scientific papers. Others propose new kinds of organisations, such as self-driving labs to validate new AI-designed materials; ‘X-Labs’ to work on more ambitious AI projects than grant funding normally allows; and a new office to commission AI for science ‘grand challenges’ and evaluations. 
- The success of AI for Science efforts will hinge on the availability of a core set of ingredients, the most important of which is data. This is where most of the IFP ideas focus. Adam Marblestone and Andrew Payne propose creating maps of the brains of five small mammals, such as laboratory mice, to better understand behaviours that we would like AI systems to learn, such as cooperation. Maxwell Tabarrok proposes a public database of one million antimicrobial peptides, to train AI models to tackle antibiotic resistance.
- Three other ideas focus on better leveraging data that already exists but is inaccessible to most scientists (a rough sketch of the kind of access check involved follows after this list):
- Andrew Trask and Lacey Strahm lay out an ‘Attribution-Based Control’ system that would allow owners of healthcare, financial, and industrial sensor data to specify which AI models can access it.
- Ruxandra Teslo argues that LLMs grant an advantage to large pharmaceutical firms that can draw on their historical archives of new drug applications to create ‘AI copilots’, an opportunity that is unavailable to most startups. In response, she proposes a new entity to monitor biotech bankruptcy cases and buy up ‘orphaned’ regulatory dossiers and clinical trial data, before anonymising and open-sourcing it. 
- Ben Reinhardt argues that most of today’s AI for Science models are trained on ‘clean’ curated datasets and scientific papers. This privileges the final outcome of science research and overlooks the messy process of doing it. In response, he proposes creating ‘Unstructured Data Generation Labs’ where scientists would carry out research in fields like biotech and materials science, record themselves using everything from bodycams to equipment sensors, and then use that data to train AI models. 
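Returning to the ‘Attribution-Based Control’ idea above: here is a rough sketch of the kind of owner-specified access check such a system implies. The classes, fields, and example values below are invented for illustration and are not the proposal’s actual design.

```python
# Illustrative sketch of an owner-specified access policy for sensitive data.
# The 'Attribution-Based Control' proposal is real; these classes, fields, and
# checks are invented here to show the general shape, not the actual design.
from dataclasses import dataclass

@dataclass
class AccessPolicy:
    owner: str
    dataset: str
    allowed_models: set[str]     # model identities the data owner has approved
    allowed_purposes: set[str]   # uses the owner permits, e.g. {"sepsis-prediction"}

@dataclass
class AccessRequest:
    model_id: str
    purpose: str
    dataset: str

def authorise(policy: AccessPolicy, request: AccessRequest) -> bool:
    """Grant access only if the owner approved both the requesting model and the purpose."""
    return (
        request.dataset == policy.dataset
        and request.model_id in policy.allowed_models
        and request.purpose in policy.allowed_purposes
    )

policy = AccessPolicy(
    owner="hospital-a",
    dataset="icu-sensor-logs",
    allowed_models={"med-model-v2"},
    allowed_purposes={"sepsis-prediction"},
)
print(authorise(policy, AccessRequest("med-model-v2", "sepsis-prediction", "icu-sensor-logs")))   # True
print(authorise(policy, AccessRequest("ads-model-v1", "marketing", "icu-sensor-logs")))           # False
```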
 
- A view from the field: What AI for Science policy idea are you passionate about? Stuart Buck, The Good Science Project:
- Stuart: “One policy that could accelerate AI in science isn’t about AI per se: Funders should sponsor many more direct replications, including in collaboration with the original labs. The reason: so much about science involves both tacit knowledge (which can’t be articulated) and unwritten knowledge (which can be articulated, but is so routine that no one even thinks to mention it). Most of that knowledge isn’t accessible to AI currently, but if we carried out more direct replications in tandem with AI tools, we could make quicker progress towards figuring out all of the unseen and unwritten factors that explain why an experiment reached the results it did. See this essay from the Good Science Project.”
 
3. Using LLMs for political guidance
- What happened: Researchers at the UK AI Security Institute published results from a randomised controlled trial which found that individuals who used LLMs to research political information before the 2024 UK election were subsequently less likely to believe false information. 
- What’s interesting: The authors first ran a survey of ~2,500 UK adults and found that 9% of voters, or roughly a third of chatbot users, used LLMs to get political information in the week before the July 2024 election. Most of these users found the models useful and accurate.
- The share of chatbot users who turned to the models for political information is quite high, and is likely higher still as of October 2025. But LLMs remain well behind other sources of political information, such as television, social media, and search engines.
- The authors then ran an RCT with UK residents. The first group was given access to an LLM and asked to research issues of concern in the UK election, such as climate change, immigration, criminal justice and Covid-19 policy. The control group was given access to a search engine. The study found that both groups subsequently showed similar declines in belief in false information and similar increases in belief in true facts. The main difference was efficiency - the LLM group completed the task 6-10% more quickly. The results held across different AI models (GPT-4o, Claude, Mistral) and also held when the models were prompted to be more sycophantic.
- The results suggest that widespread concerns about LLMs exacerbating political misinformation may be misplaced, which in turn may reflect how hallucination rates have dropped over the past two years. The speed of LLMs also means that they could potentially help to debunk fast-spreading misinformation more quickly, enabling what the authors describe as “rapid, reliable public learning during high-stakes events.” 
- The results also challenge the common survey finding that the public ‘doesn’t trust’ AI. Rather, the authors find that ‘information seeking’, including for political information, is one of the public’s main AI use cases. This highlights the need to judge public attitudes to AI based on ‘revealed’ as well as ‘stated’ preferences.
- The study comes with caveats and limitations. It focussed on one country and only tested a small number of models - others may be more likely to generate political misinformation. 
- The study also evaluates LLMs by comparing them against an antecedent technology: a non-AI search engine. But as the authors note, such comparisons are increasingly difficult as LLMs are now integrated more directly into the search experience. This will make it harder to know what baseline to compare future LLMs against. 
- A view from the field: How do you see AI changing access to political information? Tom Rachman, Google DeepMind: - Tom: “How AIs will remake the news ecosystem is a matter of vast import to democracies. Each person’s evaluation of information is mediated by their trust in its source, particularly for political content. One could envisage a future in which different AI models gain specific political reputations, affecting their influence as sources. Another plausible future could see personalised AI agents as everyone’s fundamental font. This study, among other ambitious experiments led by researchers at the AI Security Institute, helps establish a baseline effect as we wait for new information paradigms to crystallise.” 
 
Bonus: AI Policy Fellows of the World, Unite!
Every year, AI fellowships send a fresh crop of outstanding minds into the world of policy. But fellowships are more than points of transit; they are sources of valuable research. To highlight this, we scanned recent projects from leading programmes at the Centre for the Governance of AI, ERA Cambridge, Pivotal, MATS and PIBBSS. What struck us was the sheer range of insightful work. While we cannot list all the excellent contributions, here are three that caught our attention:
- Jacob Schaal, a Cambridge ERA fellow, drew economic insight from Moravec’s Paradox - the observation that what is easy for AI is often hard for humans, and vice versa - to develop a new way to judge which jobs are most exposed to automation. Management and STEM occupations, he found, face the highest automation exposure. Interested in more? Ask for details from Jacob at jacobvschaal@gmail.com
- Said Saillant, a GovAI summer fellow who recently joined UNIDO’s Innovation Lab, developed the concept of AI-ready special economic zones, or AI-SEZ, for Latin America and regulatory sandbox models for the UK. His work focused on adaptive regulation to accelerate safe AI diffusion. Interested in more? Ask for details from Said at ssaillant@societassapiens.org 
- Joël Naoki Christoph, a fellow at GovAI, argued that middle powers chasing “sovereign compute” are walking into a costly trap because such projects fail to achieve real autonomy. Instead, he advocates for “managed dependency,” allowing nations to avoid fruitless expenditure and reduce foreign leverage. Interested in more? Ask for details from Joël at jchristoph@hks.harvard.edu 