AI Policy Primer (#18)
The economy, the environment, and where Dean Ball thinks AI is headed
Every month, we look at three AI policy developments that caught our eye. Today, we cover how AI may affect the economy, the environment, and Dean Ball’s views on AI liability and governance. In response to a reader suggestion (thanks!), we also include a ‘view from the field’ from an interesting thinker on each topic. Thanks to Gabe Weil, Sam Manning & Andy Masley for lending their time & expertise.
Influential views
Where Dean Ball thinks AI is headed & how to govern it
What happened: In April, prior to joining the White House Office of Science and Technology Policy as an AI policy advisor, Dean Ball published a two-part essay series outlining his expectations for AI and how policymakers should respond.
What’s interesting: In the first essay, Dean lays out his core thesis: we are on the brink of powerful AI agents that will be able to source information, use software tools, and communicate: “These abstract tasks do not constitute everything a knowledge worker does, but they constitute a very large fraction…”.
Early AI agents can be glitchy, but as AI labs train them with reinforcement learning, they will improve. As Dean notes, this will be easier in domains like maths, where outputs can be verified more easily. But even in more subjective areas - like writing a newsletter! - AI systems can increasingly review each other’s outputs, which will accelerate progress.
As they deploy growing fleets of AI agents, Dean expects organisations that rely on knowledge workers to become more efficient and profitable. They may also become stranger - heavier at the top and so more variable in character, with leaders able to rely on agents for better information flow and control.
Widespread job loss may happen, particularly if prompted by a recession, but this may also be offset, or delayed, by the “in-person” aspects that many jobs have, or by regulations requiring “human alternatives” to AI decisions. In the near-term, young people entering the labour market may be the most affected.
Dean also expects transformative progress from the use of AI in science, from new cancer cures to room-temperature superconductors, but these will take longer, as data still needs to be gathered and real-world experiments run. The prospects for the parts of the economy that are largely offline - like social care or the construction industry - are not analysed.
In a second essay, and an accompanying paper, Dean examines how the US government should respond. His starting point is that AI will become a “foundational technology” - closer to a natural resource like energy than to, say, social media. Past foundational technologies - railroads, telecoms networks, electricity, the Internet - all differ in form and function, yet commonalities in how they have been governed could map onto AI.
The main commonality Dean sees is that the US eliminated, or severely limited, the exposure of providers of these foundational technologies to tort liability for downstream misuse of their products. Dean does not think AI developers should face no liability at all: if a data centre explodes due to mismanagement, or an agent exfiltrates itself and defrauds people, companies should face various types of statutory liability, much as power providers do today.
But Dean argues that attempts to go further - imposing a “reasonable care” standard that requires AI model developers to foresee and prevent a wider range of downstream harms - could be weaponised. Building on his previous two-part essay series on liability, he notes how third parties, including anonymous investors, can fund and extend US liability cases, and how the target is often those with the most resources rather than those most directly responsible for the harm. Relying on liability can also mean that judges and juries effectively determine how frontier AI systems are governed and what good safety practices look like.
What is Dean’s proposed solution? Building on Gillian Hadfield’s work on ‘regulatory markets’, he proposes a framework where governments would authorise private bodies to develop safety standards that AI companies could voluntarily opt to be certified and audited against. The AI companies that opt in would receive a ‘safe harbour’ from tort liability stemming from third parties’ misuse of their models. The goal would be to support AI innovation, while providing incentives for safety and encouraging a marketplace of ideas for how to best achieve it.
View from the field: Prof Gabriel Weil, LawAI:
“Tort law is especially useful for mitigating risks from AI (over which there is substantial disagreement and uncertainty) because (unlike ex-ante regulation) it scales automatically with the risk, and shifts the onus to AI companies, where the relevant expertise is concentrated, to figure out how to make their systems safe. Voluntary private certification is poorly situated to protect third parties, since there is no market feedback to induce certifiers to craft rules that protect non-users and prevent a race to the bottom.
To read more, see my recent working paper which argues that liability should be the governance tool of first resort for AI risk, my shorter Lawfare piece, and my earlier paper on using tort law to mitigate catastrophic risk from AI. On the mixed relationship between liability and innovation, the risk of excessive litigation, and the nuances that apply here, see my recent piece with Mackenzie Arnold.”
Study watch
In Denmark, chatbots aren’t turbocharging productivity growth
What happened: In May, Anders Humlum and Emilie Vestergaard from the universities of Chicago and Copenhagen published a new analysis of chatbot adoption among ~25,000 Danish workers in 11 occupations that are ‘exposed’ to AI, such as journalists, customer service employees and software developers. They found that while chatbot use is high, and growing, this has not yet resulted in a statistically significant impact on productivity growth.
What’s interesting: The analysis is based on two large surveys that the authors conducted in 2023-24 to understand how Danish employees were using chatbots and what the perceived impacts were. The authors then used social security numbers to match the survey data with government records on employment and earnings. Using a difference-in-differences approach - comparing how outcomes changed for chatbot adopters versus non-adopters, in an attempt to mimic a randomised controlled trial with real-world data - they analysed the chatbots’ impact on productivity (see the sketch below).
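For readers who want the mechanics, here is a minimal sketch of the difference-in-differences logic on synthetic data. It is not Humlum and Vestergaard’s actual specification: the variable names, the simple two-period setup and the zero ‘true’ effect are illustrative assumptions.

```python
# A minimal sketch of difference-in-differences on synthetic data.
# NOT the authors' specification: variable names, the two-period setup,
# and the zero "true" effect are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_workers = 5_000

df = pd.DataFrame({
    "worker": np.arange(n_workers).repeat(2),
    "post": np.tile([0, 1], n_workers),                  # before / after chatbots arrive
    "adopter": rng.integers(0, 2, n_workers).repeat(2),  # 1 if the worker adopts chatbots
})

# Simulate log earnings with a common time trend, an adopter-level difference,
# and a true treatment effect of zero (mirroring the paper's null result).
df["log_earnings"] = (
    10.0
    + 0.05 * df["post"]
    + 0.02 * df["adopter"]
    + 0.00 * df["post"] * df["adopter"]
    + rng.normal(0, 0.1, len(df))
)

# The DiD estimate is the coefficient on the post x adopter interaction,
# with standard errors clustered at the worker level.
model = smf.ols("log_earnings ~ post * adopter", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["worker"]}
)
print(round(model.params["post:adopter"], 4))  # should be close to zero here
```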
What did they find? If chatbots were making individuals more productive, we might expect to see an increase in their wages and/or a reduction in their work hours. However, the analysis finds no statistically significant effects on these variables, or on firms’ profits.
At first glance, this is disappointing. For many economists, productivity growth is the most important determinant of long-term economic growth and all that rests on it. For the past two decades, productivity growth has been sluggish in much of the world, rich and poor alike, hurting public services and living standards. Optimists hope that AI will now super-charge it, while pessimists worry about a repeat of the ‘Solow paradox’ - in 1987, the Nobel laureate Robert Solow famously quipped that “you can see the computer age everywhere but in the productivity statistics”.
The Danish analysis does hint at some productivity gains from AI. Survey respondents who used chatbots reported saving an average of 2.8% of their work hours, but only a very small fraction of this showed up as higher wages. Added to the growing literature on AI’s productivity effects, this suggests a pattern: academic experiments that give individuals access to chatbots or AI tools for a specific task often show quite large self-reported productivity gains, of 15-50%, over a short time frame (e.g. a week). Yet once we turn to early real-world outcomes from AI use in organisations, these effects shrink. And when we look at aggregate productivity growth for the economy as a whole, evidence of AI’s benefits is scanter still.
What might explain this? First, the Danish survey covers chatbot use in 2023-24, when AI capabilities may have been too nascent to have much effect. Second, the ‘J-curve’ theory put forward by Erik Brynjolfsson et al. argues that AI can increase productivity growth, but only after employees and organisations work out how best to use it, which takes time and resources. Humlum and Vestergaard find some evidence for this: employees report that introducing chatbots creates new tasks, such as integrating them into workflows and ensuring compliance. Finally, wage growth may be a limited proxy for productivity growth, not least because effects on wages take time to materialise. Past research also shows that new technologies may benefit a relatively small number of firms and workers, and so will not always show up clearly in the data.
This suggests that as AI capabilities improve, and as organisations and individuals get better at using them, productivity growth could start to pick up, perhaps rapidly. However, this is not guaranteed. For one, AI will need to be usefully deployed across all or most consequential sectors, and catalyse new ones, if countries are to avoid a ‘Baumol cost disease’ scenario, in which productivity gains in some sectors, such as technology, push up costs in sectors where productivity is stagnant, such as education, so that those sectors absorb a growing share of spending. Such a scenario, which likely played a role in the original Solow paradox, could blunt macro-level productivity growth.
AI may also introduce complexities that make productivity growth harder to measure - for example, if people start using ‘free’ or low-cost LLMs for tasks they previously paid a company to perform, measured output could nominally ‘decline’, at least under current measurement approaches. So not only are the potential effects of AI on productivity growth unclear; so is the best approach to measuring them.
View from the field: Sam Manning, GovAI:
“Outside the headline result, this paper includes several notable findings. For example, on days when they use AI, marketing professionals and software developers report higher time savings than teachers (~7-11% vs ~4.5%). These numbers don’t strike me as negligible and suggest that AI’s productivity effects are likely to vary quite a bit across occupations. If some roles are already saving 7–11% of their workday thanks to AI, firms will eventually begin adjusting workflows to better capture those time savings and competitive pressures will result in broader productivity gains that are measurable at the firm level. It’s also interesting that when employers actively encourage the use of chatbots, the reported effects on time savings, work quality, task expansion, creativity, and job satisfaction rise by 10–40%. That points to an important role for firms in promoting more widespread and effective use of LLMs in the workplace.”
Topic deepdive
Will AI exacerbate or mitigate climate change?
What happened: In April and May, The Economist and MIT Tech Review published special reports examining how AI may affect climate change, with The Economist’s Alex Hern noting that he has been trying to “nail down” this question ever since AI took off.
What’s interesting: To understand how AI will affect the climate, we need to answer two different questions, which the reports shed light on, in different ways.
The first question is: how will training and running AI models directly affect greenhouse gas emissions, via the power they consume and the ‘embodied’ emissions from building, maintaining and recycling the data centres, devices and networks involved?
Researchers like Emma Strubell, Alexandra Sasha Luccioni and Jae-Won Chung have devised methods to help answer this first question. The MIT report draws on these methods to provide new estimates. For example, it finds that asking Llama 3.1 8B to produce a travel itinerary requires ~114 joules of energy once cooling and other factors are accounted for - a tiny amount, equivalent to riding six feet on an e-bike. At the other end of the spectrum, generating a five-second video using a ZhiPuAI model uses about 3.4 million joules, equivalent to riding ~38 miles on an e-bike.
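To put those joule figures in more familiar units, a rough back-of-the-envelope conversion looks like this. The e-bike energy intensity is inferred from the article’s own “114 J ≈ six feet” comparison, not a figure reported by MIT Tech Review.

```python
# Back-of-the-envelope conversions for the energy figures cited above.
# The implied e-bike intensity (~19 J per foot, roughly 28 Wh per mile) is
# inferred from the article's "114 J ~ six feet" comparison, not reported data.

JOULES_PER_KWH = 3.6e6

text_query_j = 114        # Llama 3.1 8B travel itinerary, including cooling overhead
video_gen_j = 3.4e6       # five-second video from a ZhiPuAI model

print(f"Text query: {text_query_j / JOULES_PER_KWH * 1000:.3f} Wh")       # ~0.032 Wh
print(f"Video:      {video_gen_j / JOULES_PER_KWH:.2f} kWh")              # ~0.94 kWh
print(f"Ratio:      ~{video_gen_j / text_query_j:,.0f}x the text query")  # ~30,000x
```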
It is challenging both to compile and to interpret these estimates. First, researchers typically have to focus on open-source models, arguing that the companies that develop leading proprietary models do not release the necessary data, although the EU AI Act may soon require estimates for the largest AI models. Second, some past estimates have been miscalculated, or misleadingly reported, in ways that make them sound far larger than they are - echoing past panics about technology’s energy use, such as around video streaming during Covid-19. Finally, there is no clear way to tally these individual estimates into an aggregate figure for the overall emissions from training and running all AI models, which makes it hard to judge how significant they are from a global emissions perspective.
Instead, the best macro estimates come from looking at data centres’ power consumption. Today, data centres account for just ~1.5% of global electricity consumption, or ~2% if crypto is included, and most of this comes from activities like streaming rather than AI. In April, the IEA shared its latest forecast for how this may change in the AI era. In its base-case scenario, data centres’ power consumption rises to 945 terawatt-hours by 2030, up from 415 TWh in 2024. If this proves accurate, it would be a non-trivial increase and could put pressure on grids in certain locations, since data centres are geographically concentrated - in Ireland, they accounted for ~17% of power consumed in 2022. However, data centres would still account for just ~3% of global electricity consumption, and the increase in their consumption would be smaller than that of other sectors, such as electric vehicles and air-conditioning.
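As a quick sanity check on those figures, the implied growth rate and global totals can be back-solved from the numbers above. The global-electricity values here are derived from the ~1.5% and ~3% shares in the text, not taken directly from the IEA report.

```python
# A quick sanity check on the IEA base-case figures cited above. The implied
# global-electricity totals are back-solved from the ~1.5% and ~3% shares in
# the text; they are not taken directly from the IEA report.

dc_2024_twh, dc_2030_twh = 415, 945
years = 2030 - 2024

cagr = (dc_2030_twh / dc_2024_twh) ** (1 / years) - 1
print(f"Implied data-centre growth: {cagr:.1%} per year")   # ~14.7%/yr

global_2024_twh = dc_2024_twh / 0.015   # assumes data centres ~1.5% of electricity today
global_2030_twh = dc_2030_twh / 0.03    # assumes ~3% share in 2030
print(f"Implied global electricity use: {global_2024_twh:,.0f} TWh (2024) -> "
      f"{global_2030_twh:,.0f} TWh (2030)")
```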
From a climate change perspective, the precise amount of power that AI consumes will also matter less than the source of that power. Optimists hope that AI acts as a forcing function to dramatically accelerate the roll-out of nuclear and renewable energy in the near term. Sceptics worry that the AI race will compel companies to use fossil fuels they might otherwise have eschewed.
When it comes to determining how AI will affect climate change, a second question is arguably more important than AI’s future power use: what applications will AI be used for, at what scale, and how will these applications affect emissions?
AI supporters argue that it will accelerate renewable energy and make the economy - including energy-intensive sectors - dramatically more efficient.
The Economist’s report provides some grounds for optimism on this front. It notes how companies such as Octopus Energy and Tapestry are using AI to make it easier to deploy renewable energy and to optimise the grid, for example by helping to site green-energy projects and enabling smart homes and vehicles to draw power autonomously during off-peak periods. Other case studies document how energy-intensive industries are using AI, for example to optimise heating and cooling in buildings, reduce waiting times for ships in ports, or enable new kinds of steel-manufacturing processes.
Estimating how these AI use cases may affect emissions is even more challenging than estimating AI’s power use. There is no formal stocktake of beneficial AI climate applications, and few efforts to estimate how they will affect emissions or how much additionality AI brings. In theory, these applications could reduce the future emissions that would otherwise have occurred by far more than AI’s power use increases them, but the uncertainty, and the timeline to impact, are greater. This is even more true of efforts to use AI in science, which could lead to new materials for solar panels, batteries and direct air capture, or even accelerate fusion - an effectively limitless source of clean energy. Could AI make breakthroughs in these areas 30% more likely, or bring them forward by 30 years? Given their complexity, the temptation to skip such questions and focus on what we can measure - AI’s power use - is high.
A final complication is that most AI applications are not obviously good or bad from an emissions perspective, yet may still shift individual or organisational behaviour in consequential ways. For example, what might happen if people shift more of their economic activity to AI agents? History tells us that the impact of a technology often depends on context and on the activities it replaces. The Internet enabled music streaming, ecommerce and home working, but whether such shifts increase or reduce emissions varies case by case, depending on factors like the size of a person’s home or whether ecommerce deliveries are electrified. At the aggregate level, there are reasons to think that digitisation helps make economies less carbon-intensive. But this is hard to ‘prove’ reliably, and much depends on context and on efficiency gains - which so far have been remarkably strong for data centres and AI.
Given these complexities, how should policymakers ensure that AI benefits the climate? The Economist argues that the best policy would be a strong global carbon tax to enable the market to incentivise and penalise different AI applications and uses. However, it views this as politically intractable and so calls on governments to instead undertake permitting reforms to allow AI companies to fund and build more clean energy, and to build more flexible data centres that can match workloads to intermittent wind and solar.
The Economist also calls on other geographies to emulate the EU AI Act and oblige AI developers to share estimates of the power used by their leading models. The current reliance on open-source models to estimate AI’s power use does seem inadequate, but the usefulness of this recommendation could also be challenged, given the overlapping energy-reporting requirements that already exist and the risk of creating the kind of arduous ‘environmental impact assessments’ seen in other sectors, which can stymie innovation at little benefit to the environment.
View from the field: Andy Masley, author of Weird Turn Pro and “Why ChatGPT is not bad for the environment”:
“Excited: AI seems ecologically costly if we only look at its total energy use without considering the value we get out of it. But, hospitals use more energy than yachts, and if we look at value per unit of energy, AI seems very likely to become one of the most energy efficient sectors. See for example this, this, and this about how LLMs are adding value to programming, at relatively little cost. And that’s before we consider the more direct ways that AI can be useful to the climate, such as by optimising energy and transportation.
Worried: In line with Jevons’ Paradox, I worry that even though AI might make processes more efficient, if it’s not paired with a switch to renewable energy, we may emit more in total and miss key climate targets. I'm also concerned that AI-enabled weapons or widespread job automation could threaten political stability, eroding the trust needed for international climate cooperation.”
We are exploring ways to make this Substack more useful. If you have ideas, please reach out to aipolicyperspectives@google.com.