AI Policy Primer - December 2024
Issue #16: AI & materials science, AI Safety Institutes, and Central Banks
Every month, our AI Policy Primer looks at 3 external developments from the world of AI policy that caught our eye. In our final edition of the year, we look at a study into the effects of AI on material scientists, the recent meeting of the AI Safety Institutes, and how Central Banks want to use AI to promote financial stability. Please leave a comment below to let us know your thoughts, or send any feedback to aipolicyperspectives@google.com. Thanks for reading!
Study watch
AI helps scientists discover new materials, but it may make them enjoy their jobs less
What Happened: Aidan Toner-Rodgers at MIT published the results of an experiment in which more than 1,000 scientists at a US private-sector R&D lab were given access to an AI materials-design model, to assess how it changed the rate at which they discovered new materials.
What’s Interesting: To design materials, scientists often rely on trial and error to explore the huge search space of potential compounds. This study highlights how AI could improve that process. In 2022, the scientists received access to an unnamed AI model that generated candidate compounds predicted to possess desired characteristics. On average, the AI-assisted scientists subsequently discovered 44% more materials, filed 39% more patents, and produced 17% more product prototypes, with the new materials scoring highly on both ‘novelty’ and ‘quality’ - although the experiment did not capture longer-term commercial impact.
In the past 1-2 years, various experiments have studied the effects of giving AI tools to professionals, including programmers, writers and customer service agents. Many of these studies - though not all - found evidence that lower-skilled employees benefit most from AI. This MIT study finds the opposite, with experienced scientists enjoying the biggest gains. That is because materials design is not simply about finding a novel compound, but about identifying which compounds are most likely to be viable and useful. As the AI model proposes new compounds, it shifts scientists’ focus to evaluating the viability and usefulness of those predictions - a task that those with deep domain expertise are best suited to.
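To make this generate-and-evaluate workflow concrete, here is a minimal, hypothetical sketch: a surrogate model trained on invented composition features ranks candidate compounds by a predicted property, leaving human experts to triage the most promising ones. The features, data, and model choice are our own illustrative assumptions - the study’s actual model is unnamed and far more sophisticated.

```python
# Toy sketch of AI-assisted materials screening: a surrogate model
# ranks candidate compounds so human experts can triage the best ones.
# All features, data, and model choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Invented training set: composition features -> a measured property.
X_known = rng.uniform(0, 1, size=(200, 4))  # e.g. element fractions
y_known = X_known @ np.array([2.0, -1.0, 0.5, 1.5]) + rng.normal(0, 0.1, 200)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_known, y_known)

# Score a large pool of unexplored candidates and surface the top few
# for expert evaluation - the step where domain expertise matters most.
X_candidates = rng.uniform(0, 1, size=(10_000, 4))
predicted = surrogate.predict(X_candidates)
top = np.argsort(predicted)[::-1][:5]
print("Candidates for expert review:", top, predicted[top].round(2))
```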
AI will not just affect science, but also scientists. To understand these effects, the MIT study surveyed scientists who received access to the AI model and found that most reported a decline in their work satisfaction. This was true even for those scientists who benefited from the AI tool, owing to concerns that their skills were being under-utilised and that the creativity of their role had been reduced. Wellbeing is hard to measure, and attitudes to technology can change with time, but this finding highlights the need to better understand how AI may affect scientists, a topic that we also explored in our recent essay about AI and science.
In the next 1-2 years, we hope to see an increase in evaluations focussed on empirically assessing how AI is affecting science and scientists, in a similar vein to the recent suite of new evaluations that focus on AI safety.
Policymakers taking action
US & UK AI Safety Institutes convene in San Francisco
What happened: On 20-21 November, the US AI Safety Institute held a convening in San Francisco to kickstart a new technical collaboration among the global network of AI safety institutes (AISIs), ahead of the upcoming AI Action Summit in Paris in February 2025. The UK AISI also held a convening to share best practices for developing AI safety frameworks, such as Google DeepMind’s Frontier Safety Framework.
What’s interesting: As announced in their mission statement, the new global network of AISIs will focus on advancing research to understand the capabilities and risks of advanced AI systems, as well as building best practices for testing them. They also completed their first joint testing exercise, on Llama-3.1, and shared insights about how to improve multilingual AI testing.
This is the first time the AISIs have met and announced shared priorities, and they signalled an interest in closer coordination and in exchanging best practices. Synthetic content was one of the three areas discussed during the convening, with the network announcing $11 million to fund new research on mitigating risks in this area. The global network of AISIs - currently numbering 10 - may expand further in 2025, and may look to conduct more joint testing exercises. This collaboration could reduce the risk of fragmentation across the AISIs’ mandates and of duplicative bilateral conversations.
Sector spotlight
Central banks begin to scale up their use of AI
What Happened: Central banks play a critical role in most modern economies: among other mandates, they are typically responsible for maintaining price stability. As outlined in the BIS Annual Economic Report 2024, central banks are increasingly using AI to improve how they make decisions, with key focus areas including economic forecasting, financial supervision, and payment systems.
What’s Interesting: One area central banks are focussing on is identifying signals and anomalies in the vast datasets they have access to. For example, the Bank of England and the European Central Bank are using AI to look for unexpected transaction patterns or spikes in asset price volatility that may signal liquidity issues. Similarly, the Deutsche Bundesbank is focussing on detecting outliers in major financial datasets, which could signal risks such as mispricing in the market. This kind of anomaly detection is difficult for humans to do reliably, so AI could help central banks improve resilience against different kinds of financial risk and crime.
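To illustrate the idea, here is a minimal, hypothetical sketch of anomaly detection on transaction-level features, using scikit-learn’s IsolationForest. The feature names, synthetic data, and thresholds are our own assumptions, not details of any central bank’s systems.

```python
# Minimal sketch of AI-based anomaly detection on transaction data.
# All features and data are synthetic and illustrative; real central
# bank pipelines are far larger and use proprietary data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic daily features: transaction volume, asset price volatility.
normal = rng.normal(loc=[1_000.0, 0.02], scale=[50.0, 0.005], size=(500, 2))
# A handful of stressed days: volume spikes and elevated volatility.
stressed = rng.normal(loc=[2_500.0, 0.08], scale=[100.0, 0.01], size=(5, 2))
data = np.vstack([normal, stressed])

# Fit an isolation forest; 'contamination' is a guess at the anomaly rate.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(data)  # -1 flags an anomaly, 1 is normal

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} days for analyst review: {flagged}")
```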
Central banks are also using AI to synthesise insights, including sentiment and trends, from unstructured text, to improve their forecasting. For example, the Bank of France is using AI to better gauge public perceptions of inflation, which can provide insights into how ‘sticky’ future price growth may be. Meanwhile, Malaysia’s central bank uses AI to analyse financial news articles to help forecast key indicators, such as GDP and consumer spending.
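As a hypothetical sketch of how such text analysis might work, the snippet below scores invented headlines with an off-the-shelf sentiment model via Hugging Face’s transformers pipeline; central banks would use domain-specific models and far larger corpora.

```python
# Sketch of extracting sentiment from financial news text. The
# headlines are invented, and the default off-the-shelf model is a
# generic stand-in for the domain-specific models banks would use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

headlines = [
    "Consumer prices rise faster than expected for third month",
    "Retail spending rebounds as energy costs ease",
]

for headline in headlines:
    result = classifier(headline)[0]
    print(f"{result['label']:>8}  ({result['score']:.2f})  {headline}")

# Aggregated over thousands of articles, such scores can feed a
# sentiment index used alongside conventional indicators.
```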
Finally, central banks are exploring the merits of tokenized payment systems (digital versions of money that use blockchain technology) and unified ledgers (systems that combine financial and other records in one place). These technologies could make transactions faster, more transparent, or more secure, and a growing number of central banks are planning to launch their own tokenized payment systems, such as Central Bank Digital Currencies. AI’s role in these developments could take several forms, including detecting and preventing fraud in real time, or monitoring transactions for compliance with anti-money laundering and counter-terrorist financing regulations.
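As a final illustrative sketch, the snippet below screens transfers on a hypothetical tokenized ledger before settlement, holding high-risk ones for human review. The watchlist, threshold, and heuristics are invented placeholders; in practice they would be replaced or augmented by learned models like the anomaly detector sketched earlier.

```python
# Toy sketch of real-time screening on a hypothetical tokenized ledger:
# each transfer is risk-scored before settlement and held for AML
# review if it looks suspicious. All rules and names are invented.
from dataclasses import dataclass

@dataclass
class Transfer:
    sender: str
    receiver: str
    amount: float

WATCHLIST = {"shell_co_9"}       # hypothetical sanctioned entity
REPORTING_THRESHOLD = 10_000.0   # illustrative reporting threshold

def aml_score(tx: Transfer) -> float:
    """Return a 0-1 risk score from simple, illustrative heuristics."""
    score = 0.0
    if tx.sender in WATCHLIST or tx.receiver in WATCHLIST:
        score += 0.8
    if tx.amount > REPORTING_THRESHOLD:
        score += 0.3
    return min(score, 1.0)

def settle(tx: Transfer) -> str:
    # Hold high-risk transfers for human review instead of settling.
    return "HELD_FOR_REVIEW" if aml_score(tx) >= 0.5 else "SETTLED"

print(settle(Transfer("acme", "shell_co_9", 500.0)))  # HELD_FOR_REVIEW
print(settle(Transfer("acme", "bob", 120.0)))         # SETTLED
```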
In the next 1-2 years, we expect central banks to explore whether AI could also support monitoring and mitigating emerging AI-driven risks to financial stability, such as market disruptions or collusion by autonomous agents - an area that has received relatively little attention in discussions about AI safety risks.
As always, please let us know your thoughts on these updates and what you have found most interesting in the world of AI policy in the last month.