AI Policy Primer (June 2024)
Issue #11: State-level legislation, science, and energy
In this month’s AI Policy Primer, we look at state-level legislation in the US, new work exploring the opportunities and challenges associated with using AI in science, and recent debates about AI and energy. As always, leave a comment or let us know if you have any feedback at aipolicyperspectives@google.com.
Policymakers taking action
US state-level legislation moves forward
What happened: As Congress struggles to find consensus on federal AI legislation, states are moving forward to fill the vacuum. Over 600 AI-related bills have been introduced in 45 states during this year’s legislative sessions alone.
What’s interesting:
California faces a 31 August legislative deadline as it attempts to pass far-reaching legislation that would impose various requirements on the development of advanced models, including safety assessments, increased liability for AI developers, and the creation of a new state regulator for AI models.
Colorado recently passed first-of-its-kind legislation requiring developers of high-risk AI systems to prevent algorithmic discrimination, establishing a rebuttable presumption of reasonable care for developers who comply with disclosure, reporting, risk management, and other requirements. Governor Jared Polis signed the bill into law along with an unusual signing statement expressing “reservations” about its impacts and encouraging lawmakers to amend the law before it takes effect in 2026. Similar bills were introduced in over half a dozen states this year.
Looking ahead: Some states are taking a more incremental approach to developing AI policies. For example, Indiana, Oregon, Washington, and West Virginia each enacted bills this year establishing multi-stakeholder public-private AI Task Forces to develop legislative and policy recommendations. The flurry of state-level activity could further fracture the AI policy landscape, potentially creating a patchwork of regulations and compliance requirements.
What we’re reading
Reports tackle science in the age of AI
What happened: The Royal Society's recent publication, "Science in the Age of AI", looks at the opportunities and challenges associated with using AI in science. The report follows publications from the European Commission and OECD, signalling a growing global recognition of AI's transformative potential in scientific research and the need for supportive policy frameworks.
What’s interesting: These reports note that AI is reshaping science across many fields. With applications spanning medicine, materials science, robotics, climate modelling and more, the capacity for more sophisticated data analysis, pattern recognition, and simulation is changing how scientists approach complex problems. Additionally, the reports highlight that:
Infrastructure is key. Plentiful, well-maintained, and robust data and compute resources are essential for AI's success in science. The reports emphasise the need for investment in public research infrastructure, data sharing and open science principles.
Scientists must adapt. The evolving role of AI necessitates new skills and greater AI and data literacy, including a nuanced understanding of AI's limitations and potential risks, and a desire and ability to work across disciplines.
Strategic policy interventions are crucial. These include investments in infrastructure, more public-private partnership and knowledge exchange, standardised tools and methods, and governance frameworks to catalyse the integration of AI into scientific workflows.
Looking ahead: Despite this promise, many challenges remain. Responsible use of AI in science requires a balanced approach that embraces its potential while addressing challenges such as reproducibility, transparency, and bias. To that end, the reports also caution against an overreliance on industry-led research, noting risks such as lock-in to proprietary tools, a decline in basic science research, and brain drain from public institutions.
Sector spotlight
AI and energy spark debate
What happened: The relationship between AI and energy is under the spotlight. Leopold Aschenbrenner estimated that training and deploying AI systems could require up to 20% of US electricity production by 2030, while a new post from Epoch AI proposed that "a naive extrapolation suggests that AI supercomputers will require gigawatt-scale power supply by 2029" for a single model. Meanwhile, Hugging Face looked at the factors driving AI energy use, and the International Energy Agency estimated that NVIDIA shipped 100,000 units in 2023 that consume 7.3 TWh of electricity annually.
What’s interesting:
Predicting AI’s future energy consumption is challenging. One way of understanding the variables is to create a ‘Drake equation for energy consumption’. A rough version of this framework might be: E = C × U × P × Ec × Ee, where E is the growth in AI energy consumption, C is the annual growth rate of compute, U is the proportion of time AI systems are in use, P is the power consumption per unit of compute, Ec is efficiency improvements in compute usage, and Ee is efficiency improvements in energy use.
Taking the IEA’s 2023 figure of 7.3 TWh for AI consumption (with the caveats that it is 1) global and 2) only accounts for NVIDIA chips) against total US electricity production of roughly 4,510 TWh, we can estimate AI’s share of 2023 US electricity use at about 0.16%. Starting from that figure, the annual growth rate needed for AI to account for 20% of US electricity production by 2030 is just shy of 100%, while the rate needed to reach 2% by 2030 is approximately 43%. (Both figures, however, assume negligible growth in generating capacity; see below.)
Returning to our model, we can change individual variables to produce a range of predictions for the annual growth rate of AI energy consumption. In the 2% scenario, the required rate of compute growth (C) would be around 35% per year if compute accounts for 80% of that growth, 22% at a 50% contribution, and less than 9% at a 20% contribution. In the 20% scenario, compute growth of 79%, 50%, and 20% per year would be needed for contributions of 80%, 50%, and 20% respectively (see the sketch after this list). This back-of-the-envelope calculation could, however, look very different depending on efficiency savings in both compute and energy use.
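To make the arithmetic above easy to check, here is a minimal Python sketch that reproduces the figures. It assumes a seven-year horizon (2023 to 2030), negligible growth in total US electricity production, and that a factor’s ‘contribution’ is a simple linear share of the overall growth rate; the last point is our interpretation of the scenarios above, not a claim from the cited sources.

```python
# Back-of-the-envelope reconstruction of the growth-rate arithmetic above.
# Inputs from the text: 7.3 TWh of AI consumption in 2023 (IEA, NVIDIA chips only)
# against ~4,510 TWh of US electricity production, projected out to 2030.

AI_TWH_2023 = 7.3        # IEA estimate of annual consumption of 2023 NVIDIA shipments
US_TWH_2023 = 4510.0     # approximate total US electricity production, 2023
YEARS = 2030 - 2023      # seven-year horizon used in the text

start_share = AI_TWH_2023 / US_TWH_2023   # ~0.16% of US production
print(f"2023 AI share of US electricity: {start_share:.2%}")

def required_cagr(target_share: float, years: int = YEARS) -> float:
    """Annual growth in AI energy use needed to reach target_share by 2030,
    assuming (as the text does) negligible growth in total US capacity."""
    return (target_share / start_share) ** (1 / years) - 1

for target in (0.20, 0.02):
    e_growth = required_cagr(target)
    print(f"\nTo reach {target:.0%} of US production by 2030: "
          f"~{e_growth:.0%} growth per year")
    # Assumption: compute growth C is a linear share of the overall growth
    # rate E; the remainder comes from the other factors in the framework
    # (utilisation U, power per unit of compute P, efficiency terms Ec, Ee).
    for share in (0.8, 0.5, 0.2):
        print(f"  compute contribution {share:.0%}: C ≈ {share * e_growth:.0%}/yr")
```

Running this reproduces the numbers quoted above: a 2023 share of roughly 0.16%, growth rates of just under 100% and roughly 43% per year for the 20% and 2% scenarios, and the corresponding compute-growth figures for each contribution level.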
Looking ahead: Meeting this demand may require building new power infrastructure. Aschenbrenner’s work suggests that utility firms are already pricing in a 4.7% growth rate in electricity demand over the next five years, up from a previous 2.6% figure (though he acknowledges this is far short of the capacity increases he sees as required). Finally, there is the carbon question. It is currently unclear whether a surge in energy demand can be met using green energy sources, but it may be possible to provide the necessary power with renewables, depending on how much each of the factors we identify contributes to effective compute capacity.