Policymakers taking action
United Nations General Assembly discusses AI
What happened: In September, representatives from the UN’s 193 member states gathered for its annual General Assembly. AI was high on the agenda, with members committing in their political declaration to realising AI’s benefits and addressing its challenges.
What’s interesting: Amid competition from other fora, the UN is keen to demonstrate that it can lead global AI governance efforts. The General Assembly highlighted the UN’s wide membership, including among Global South countries, and the potential role of its Sustainable Development Goals in guiding beneficial AI use cases. In August, the UN also published a call for nominations for a new 32-member High-Level Advisory Body to advise it on how to govern AI internationally. To date, more than 1,600 nominations have been received, alongside papers to guide the Advisory Body’s work.
Across the AI community, there is currently active discussion about how best to govern AI internationally, whether a new global AI body is required, and what its mandate or purview should be. At the General Assembly, some participants voiced support for a globally-coordinated system of AI governance, underpinned by UN principles and red lines. There was also broad support, in principle, for a new UN AI institution, modelled on its Intergovernmental Panel on Climate Change (IPCC). However, many participants also expressed concern about the time it would take for such a body to become operational, and noted that smaller fora, like the G7’s Hiroshima Process, are already moving ahead with their own approaches.
Many of the UN’s ‘specialised agencies’ are also trying to shape how AI should be used and governed. For example, UNESCO is exploring the role of AI in education; the International Labour Organization is exploring how large language models may affect employment; the OHCHR is exploring how generative AI may affect human rights; and the International Monetary Fund - a ‘related organisation’ that operates at arm’s length - is examining AI’s impact on the financial sector.
Looking ahead: There will be a lot of focus on the blend of backgrounds and expertise on the Advisory Body when it is announced, as well as on what its initial near-term recommendations, due by the end of the year, will focus on.
UK prepares for AI Safety Summit
What happened: Earlier this year, the UK government announced that it would host the first ever international summit focussed on AI safety, at Bletchley Park in early November. The event will see world leaders and the heads of the major AI labs come together to discuss the risks and opportunities of frontier AI models - both cutting-edge LLMs and narrower models, such as those used in biology.
What’s interesting: As noted above, the summit comes as practitioners continue to explore a number of wide-ranging proposals for a global AI governance regime. While international AI governance is progressing cautiously, domestic efforts to establish guardrails for frontier AI are continuing apace, from the White House commitments to the EU’s AI Act.
Given these various ongoing processes, there have been some questions about the need for the UK summit, with the EU currently considering whether to participate in a formal capacity, and Japan keen to ensure that it does not overshadow its G7 presidency. However, European leaders such as France’s Emmanuel Macron are expected to attend, alongside leaders such as Canada’s Justin Trudeau and US Vice-President Kamala Harris. The UK’s decision to involve China has also sparked debate about China’s role at the summit.
In terms of the event’s focus, the UK government is keen to carve out a niche by focussing on more extreme risks and national security. Despite calls to widen this definition of AI safety, the government is focussing on risks such as misuse of AI to create bioweapons or cyber-attacks and advanced systems that escape human control.
Looking ahead: It will be interesting to see whether the summit might become a recurring one, and how it will shape discussions about any new AI institution.
What we’re hearing
How best to leverage AI for science
What happened: In September, Google DeepMind supported a British Science Association roundtable discussion about the role of AI in science. Alongside Google DeepMind, attendees included representatives from Imperial College London, Genomics England, the Department for Business & Trade, EMBL-EBI, the Royal Academy of Engineering, and the Alan Turing Institute.
What’s interesting: Google DeepMind’s recent AlphaMissense work builds on our protein structure prediction tool AlphaFold2 to predict whether genomic variants that cause single–amino acid changes in proteins are pathogenic. It adds to the growing list of AI-enabled breakthroughs in the life sciences.
When it comes to where we might expect future breakthroughs, participants cited the quality, availability, and management of data as a key differentiator for the scientific disciplines that have harnessed the potential of AI to date. For example, the life sciences have a more developed history of, and frameworks for, structuring and using data. In other fields, such as chemistry, materials science, physics and criminology, there are often large volumes of unstructured data, but they are less accessible.
One implication of this is that advancing AI for science will require incentivising researchers to work in important but sometimes under-recognised roles, such as the ‘service layer’ of data architecture and management.
Multidisciplinarity and interoperability were also strong themes at the roundtable, including collaboration between scientific disciplines, between the sciences and the humanities, and between human and machine intelligence.
Looking ahead: Following Eric Schmidt’s new AI for science moonshot effort, we can expect to see more flagship AI for science efforts in 2024 aimed at addressing the relative stagnation in scientific and economic productivity.