AI Policy Primer (March 2024)
Issue #8: AI safety institutes, open models, and biotechnology
Welcome to another monthly installment of the AI Policy Primer. As a reminder, we’re also sharing the AI Policy Primer alongside other content—such as essays and book reviews—on AI Policy Perspectives. If you have any feedback, please do get in touch with us at aipolicyperspectives@google.com.
For this month’s edition, we have a stock-take of the various national AI safety institutes, our response to the NTIA’s request for input on open-weight models, and commentary on a new report from the Tony Blair Institute addressing biotechnology in the UK.
Policymakers taking action
EU AI Office gets up and running
What happened: The EU Parliament approved the Artificial Intelligence Act, which aims to ensure “safety and compliance with fundamental rights, while boosting innovation.” The regulation, which was agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions. As part of the process of operationalising the AI Act, the EU AI Office (established within the Directorate-General for Communications Networks, Content and Technology) has begun to ramp up its operations.
What’s interesting: The AI Office is expected to employ approximately 100 staff in total by the end of 2025 in order to “play a key role in the implementation of the new EU AI Regulation (AI Act), strengthen development and use of trustworthy AI, and foster international cooperation.” As part of this process, the new group will develop tools, methodologies, and benchmarks for evaluating the capabilities of general-purpose AI models, including those whose “cumulative amount of computation used for training measured in FLOPs is greater than 10^25.” The move comes as the US AI Safety Institute begins to assemble a team to conduct evaluations, and follows the UK AI Safety Institute’s efforts to build its own evaluation capacity.
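For a rough sense of scale, the sketch below applies the widely used “~6 FLOPs per parameter per training token” approximation for dense transformer training compute to the Act’s 10^25 threshold. The approximation and the illustrative model sizes are our own assumptions for this sketch, not the AI Office’s prescribed methodology or any real model’s disclosed figures.

```python
# Minimal sketch: comparing estimated training compute against the
# AI Act's 10^25 FLOP threshold for general-purpose models.
# Assumes the common ~6 * parameters * tokens rule of thumb for dense
# transformers; the example sizes below are purely illustrative.

AI_ACT_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate dense-transformer training compute (~6 FLOPs/param/token)."""
    return 6 * n_parameters * n_training_tokens

for params, tokens in [(7e9, 2e12), (1e12, 1e13)]:
    flops = estimate_training_flops(params, tokens)
    print(f"{params:.0e} params x {tokens:.0e} tokens -> {flops:.1e} FLOPs, "
          f"above threshold: {flops > AI_ACT_THRESHOLD_FLOPS}")
```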
Looking ahead: The EU’s AI Office is likely to emerge as one of the three most important new AI governance institutions, alongside the UK AISI and the US AISI (which recently signed a partnership agreement). Like its counterparts in the US and the UK, the Office is likely to accelerate its efforts to hire technical researchers and policy specialists.
Policymakers taking action
NTIA solicits comments on open-weight models
What happened: This month, the US National Telecommunications and Information Administration (NTIA) ran a consultation on the “risks of openly available model weights,” as directed by the Executive Order on AI. Google DeepMind partnered with Google to submit a response making the case that, while we have long been supporters of open science, we recognise that open models can pose risks (and that releasing model weights is irreversible). We also proposed that openness is not binary, and that a more useful frame is “access” to the right capabilities for the right purposes.
What’s interesting: Many parties are grappling with how to assess the risks posed by open models. A recent paper from Stanford researchers, for example, made the case for focusing on marginal risk (i.e. the extent to which open models pose a greater risk than their closed counterparts or existing digital technologies). Setting more granular thresholds for when open models may be too risky to release will require much more progress on safety evaluations. For this reason, we proposed that governments can develop recommendations and best practices to set risk thresholds, drive progress on evaluations, and identify potential procedural requirements for the release of open models. Google DeepMind also recently released its own family of open models, Gemma, developed in line with a set of safety and responsibility best practices.
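One way to make the marginal-risk framing concrete is the toy calculation below. The scoring scale, the specific function, and the choice to compare against the strongest available counterfactual are our illustrative assumptions, not the Stanford paper’s formal methodology or any real assessment.

```python
# Toy sketch of marginal risk: the risk an open release adds beyond the
# best available counterfactual (a closed counterpart or existing
# digital technologies). All scores are hypothetical illustrations.

def marginal_risk(open_model_risk: float,
                  closed_counterpart_risk: float,
                  existing_tech_risk: float) -> float:
    """Risk an open release adds beyond the strongest counterfactual."""
    counterfactual = max(closed_counterpart_risk, existing_tech_risk)
    return max(0.0, open_model_risk - counterfactual)

# Illustrative scores on an arbitrary 0-10 scale for one misuse scenario.
print(marginal_risk(open_model_risk=6.0,
                    closed_counterpart_risk=5.5,
                    existing_tech_risk=4.0))  # -> 0.5
```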
Looking ahead: The debate around open models will remain highly political, given that it sits at the intersection of competition concerns and AI safety discussions. National security will continue to feature prominently. In parallel with discussions about “frontier” models, we may see new requirements for developers considering releasing the weights of sophisticated models.
What we’re hearing
Harnessing the benefits of biotechnology
What happened: We hosted the Tony Blair Institute (TBI) for the launch of its report, “A New National Purpose: Leading the Biotech Revolution”, which proposes policies to help the UK harness the benefits of advances in biotechnology. Benedict Macon-Cooney, chief policy strategist at the TBI, was joined by Sir Sajid Javid (former Secretary of State for Health and Social Care), Sarah Korman (general counsel at Isomorphic Labs) and Hans Bishop (president of Altos Labs) to discuss how policymakers should react to a moment of rapid technological progress.
What’s interesting:
The core recommendation in the report is the creation of a UK Laboratory of Biodesign, bringing together scientists and bioengineers under one roof for interdisciplinary research. This institution would, according to the report, “focus on the invention of new biotechnology that is at too early a stage for commercial investors.” The paper’s central argument is that biotechnology represents a major economic opportunity that can be realised by building and scaling globally competitive biotechnology firms.
These firms, in conjunction with the UK Laboratory of Biodesign, would benefit from network effects that can be harnessed to power the UK’s knowledge economy. The report also identifies hurdles to realising biotechnology’s potential and proposes solutions, including a new NHS-led data trust to ease bottlenecks in the availability of high-quality data. Finally, to address novel risks posed by the development of biotechnology, it suggests that the Laboratory of Biodesign should deliver biosecurity advice to the government alongside a new UK Biosecurity Taskforce.
Looking ahead: There is increasing interest in bioengineering and the life sciences as areas of strategic advantage for the UK. As a result, governments may begin to explore new data-sharing frameworks that securely release data for AI experimentation in the next few years.