What we’re watching
Tony Blair Institute for Global Change publishes report on the future of UK AI
What happened: Last month, the Tony Blair Institute for Global Change published a report, with contributions from Tony Blair and William Hague, setting out what it thinks the UK should do to become a global leader in safe and successful AI development.
What’s interesting: This is likely to be an influential report that will resonate inside and outside government. It makes more than 50 recommendations, with a strong focus on AI safety. These include creating a new national AI laboratory to work with AI companies on safety research, and introducing a “tour-of-duty” programme to second AI industry experts into government roles for 12 months.
The report does not recommend creating a new sovereign LLM, or “BritGPT”, but does recommend funding an open-access LLM, similar to BLOOM, that researchers could use via an API. We provided some input to the report’s authors, and support the idea of closer partnerships between governments and private labs to enable, for example, the evaluation of frontier models.
Ada Lovelace and Alan Turing institutes release survey on UK public attitudes to AI
What happened: In June, the Ada Lovelace and Alan Turing institutes released the results of a survey of UK adults about their awareness of AI, their perceptions of its risks and benefits, and their views on approaches to AI governance. The survey covered 17 AI applications, ranging from the commonplace (such as facial recognition for unlocking phones) to the more speculative (such as robotic care assistants).
What’s interesting: Respondents thought that a small majority of the applications (11 of 17) would be somewhat or very beneficial, led by healthcare applications such as using AI to better assess cancer risk. When considering what would encourage AI uptake and greater public comfort with AI, respondents felt that explainability might matter more than accuracy, and that robust data security and a process for appealing AI-based decisions should be prioritised.
The survey suggests some potential disconnects between AI practitioners and the UK public. For example, respondents strongly supported some AI applications - e.g. facial recognition for policing - that civil society groups, academics and others have expressed concern about. The public also favour governance approaches - e.g. a dedicated AI regulator - that many AI practitioners do not.
This raises questions about how, and to what extent, regulators and AI labs’ own governance processes should use public opinion as a direct input. How much weight to give survey data relative to other sources - such as information about the public’s actual use of AI, or more in-depth consultations with minority groups - remains an important open question.
Some AI labs are accelerating efforts to incorporate public input through initiatives spanning alignment assemblies, community fora, and programmes to boost democratic deliberation.
What we’re doing
GDM submits public response about building trustworthy AI in the US
What happened: In April, the US National Telecommunications and Information Administration (NTIA) requested comments on policies and mechanisms that can help earn trust in AI systems. By the June 12th deadline, more than 1,400 comments had been submitted.
What’s interesting: NTIA is not a regulatory agency, but its report will inform the White House’s approach to AI and accountability, which will likely be led by the Office of Science and Technology Policy (OSTP) and the National Security Council (NSC). As Administrator Alan Davidson noted, the NTIA is focused “not (on) what the law is, but what it ought to be.”
NTIA is also part of the Department of Commerce, which houses the National Institute of Standards and Technology (NIST). In January, NIST published the AI Risk Management Framework, which we contributed to and which we expect many organisations to use to guide their approaches to AI risk.
We submitted a public response to NTIA in which we supported a ‘hub-and-spoke’ approach to US AI regulation: a central agency that helps inform the approaches of individual sectoral regulators, rather than a single overarching AI regulator. Our submission also explained how Google and GDM have implemented our AI Principles; signalled our support for bias bounty programmes to identify AI vulnerabilities; and called for more consideration of the extreme risks that frontier AI models may pose.
A number of us - Sébastien Krier, Dorothy Chou and William Isaac - also worked with Alondra Nelson at the Institute for Advanced Study (IAS) to convene a group of leading AI thinkers for three days to submit a collective response to NTIA.
What we’re reading & listening to
White House commitments: Last Friday, the Biden-Harris Administration announced voluntary commitments from seven leading AI companies (Alphabet, OpenAI, Anthropic, Meta, Amazon, Microsoft and Inflection AI) to ensure the safe and ethical development and deployment of AI systems. These commitments are the result of several months of coordination. An op-ed published this week by the US Secretaries of Commerce and State called them a “starting point for action”.
The Ezra Klein Show: “A.I. Could Solve Some of Humanity’s Hardest Problems. It Already Has”: Demis Hassabis appeared on The Ezra Klein Show to discuss how AI can be used to solve the biggest scientific challenges, using AlphaFold as the main proof point.
MIT Technology Review: “How judges, not politicians, could dictate America’s AI rules” by Melissa Heikkilä. This article argues that, in the US, the law and not politics is emerging as the leading force imposing restrictions on AI - with AI Now Institute Managing Director Sarah Myers West quoted as saying: “It seems like the more straightforward path [toward an AI rulebook is] to start with the existing laws on the books.”
Thanks for reading. We welcome feedback, ideas and views on these and other AI policy developments. Please share this newsletter with others who might enjoy it, and sign up here to receive it directly.
Nicklas Berild Lundblad, Conor Griffin, Séb Krier, Harry Law, Eimear Nolan, Nick Swanson