AI Policy Primer (November 2023)
Issue #6: The next AI Safety Summit, US Executive Order, and policy discovery
We’re back with another edition of AI Policy Perspectives: a monthly rundown of the topics that Google DeepMind’s policy team has been reading, thinking about and working on over the past few weeks.
This month, we have preparations for the next AI Safety Summit in South Korea, reflections on the US Executive Order, and outputs from Google DeepMind’s recent policy discovery programme delivered in partnership with civil society.
Policymakers taking action
South Korea prepares to host next AI Safety Summit
What happened: Preparations are underway ahead of the next AI Safety Summit, which will be co-hosted by the Republic of Korea and the UK next year. The event, which will take place virtually, is expected to focus on the development of frameworks, guidelines, and policies connected to elements of the US Executive Order, the EU’s AI Act, and the G7 principles. The Carnegie Endowment for International Peace speculated that the summit will “include how to gauge increases in AI model capabilities, as well as institutional design problems affecting the world’s capacity to spread access to frontier-level AI technology without increasing risks of misuse.”
What’s interesting: The preparations come after the inaugural AI Safety Summit in the UK culminated in the ‘Bletchley Declaration’, an agreement from 28 states to work together on safety standards to maximise the upside and minimise the risks posed by frontier AI systems. Amidst two days of workshops, keynotes, and demos, US Secretary of Commerce Gina Raimondo used the Summit as an opportunity to highlight new policy interventions from the Biden administration, while Chinese Vice Minister Wu Zhaohui urged attendees to “ensure AI always remains under human control” and called on governments to “build trustworthy AI technologies that can be monitored and traced.”
Looking ahead: The South Korean summit’s most significant contribution may prove to be the State of the Science report, an effort led by Yoshua Bengio to identify emerging risks associated with frontier AI.
Policymakers taking action
Executive Order reshapes US AI policy landscape
What happened: On 30 October, the Biden Administration released its long-anticipated Executive Order (EO) on artificial intelligence. The EO builds on other actions taken by the Biden administration on AI, including the White House Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and the voluntary White House commitments made by leading AI companies.
What’s interesting: The comprehensive EO touches a broad range of federal agencies and AI issues, from workforce development and support for research to sector-specific questions in healthcare, education, energy, and beyond. Additionally, the EO establishes a new interagency White House AI Council which will be responsible for coordinating AI-related policy, including implementation of the EO. It also gives the Department of Commerce and NIST a leading role in that implementation and tasks the White House Office of Management and Budget (OMB) with formulating guidance for federal agencies’ use and procurement of AI.
Other noteworthy provisions include reporting requirements for developers of “dual-use foundation models” trained above a certain compute threshold (more than 10^26 integer or floating-point operations for general-purpose models). The Department of Commerce will provide further detail on the definition of “dual-use foundation models” as well as on what these requirements will look like. Additionally, the EO introduces requirements for US cloud service providers to report when a foreign person or reseller transacts with them to train a large AI model that could be used for malicious purposes.
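For a rough sense of scale, the sketch below (not part of the EO itself) uses the common ~6 × parameters × tokens approximation for dense transformer training compute to check hypothetical training runs against the 10^26-operation threshold; the model and dataset sizes are illustrative assumptions, not figures from the Order.

# Illustrative estimate of whether a training run would cross the EO's
# 1e26-operation reporting threshold for general-purpose models.
# Uses the common ~6 * parameters * tokens heuristic for dense transformer
# training compute; the runs listed below are hypothetical examples.

EO_THRESHOLD_OPS = 1e26  # total training operations cited in the EO

def estimated_training_ops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute with the 6ND heuristic."""
    return 6 * num_parameters * num_tokens

runs = {
    "hypothetical 70B-parameter model, 2T tokens": (70e9, 2e12),
    "hypothetical 1T-parameter model, 20T tokens": (1e12, 20e12),
}

for name, (params, tokens) in runs.items():
    ops = estimated_training_ops(params, tokens)
    flag = "above" if ops > EO_THRESHOLD_OPS else "below"
    print(f"{name}: ~{ops:.1e} ops ({flag} the 1e26 reporting threshold)")

Under this rough heuristic, only the very largest frontier training runs would cross the threshold and trigger the reporting requirement.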
Looking ahead: The EO does not need to be passed into law to take effect, and with a divided Congress and uncertain prospects for major AI legislation, it is likely to serve as the primary instrument of US AI regulation in the near term.
What we’re hearing
Civil society groups drive policy discovery
What happened: Throughout 2023, we heard from a broad range of groups calling for policies like equitable data practices, upskilling efforts, and measures to build trust and enable participation in AI development. To surface these policies, we co-authored a new report with civil society organisations summarising dialogues with a global set of participants from academia, governments, start-ups, and the private sector, including people with experience in the communities and sectors that will be most affected by the deployment of AI systems. The programme built on our work with the Aspen Institute, ‘A Blueprint for Equitable AI,’ which highlighted the need to encourage democratic dialogue about how AI might be built, used, and governed.
What’s interesting: AI labs are experimenting with methodologies like citizens’ assemblies and community fora to incorporate public input into the AI development process. Private-sector participatory efforts, however, come with a host of challenges: power imbalances, information asymmetries, a lack of shared definitions, and competing or contradictory goals. For these reasons, we partnered with civil society organisations to lead the creation of discussion agendas, the recruitment of participants, and the development of pre-reading materials. Many of the reports include lessons for improving how governments, civil society, and the private sector might work together toward ensuring equitable AI outcomes.
Looking ahead: AI developers should strive to make sure that their models are reflective of and responsive to the rest of the AI ecosystem and the world beyond it. To understand some of our work in this space, read a summary of insights from the programme in the report ‘The Changing Landscape of AI: Lessons From a Year of Policy Discovery’. Additionally, each of the organisations we worked with produced their own reports of the individual roundtable discussions, which can be found here.