AI Policy Primer (April 2024)
Issue #9: AISI MoU, cyber harms, and AI 'shadow use'
Welcome to another monthly installment of the AI Policy Primer. As a reminder, we’re also sharing the AI Policy Primer alongside other content—such as essays and book reviews—on AI Policy Perspectives. If you have any feedback, let us know at aipolicyperspectives@google.com.
For this month’s Primer, we take a look at the Memorandum of Understanding signed by the US and UK AI Safety Institutes, a survey of cybersecurity practitioners to understand the real-world harms that they are witnessing from deployed AI systems, and reports on AI usage figures in science and business.
Policymakers taking action
AI Safety Institutes sign MoU
What happened: In April, the US and UK AI Safety Institutes (AISIs) signed a Memorandum of Understanding (MoU) to collaborate on AI safety. In the document, the US and UK AISIs agreed to develop a shared approach to AI model evaluations (methodologies, infrastructures and processes), collaborate on technical AI safety research, and perform at least one joint testing exercise on a publicly accessible model. They will also explore personnel exchanges and similar collaborations with other countries to manage frontier AI risks.
What’s interesting:
The move comes as the EU and US also begin to collaborate more closely on AI safety. In a joint statement, the EU-US Trade and Technology Council announced that the US AISI and EU AI Office had "briefed one another on respective approaches and mandates" and agreed to a "scientific exchange on benchmarks/risks/future trends." The organisations are also developing a joint roadmap on evaluation tools for trustworthy AI and risk management.
Canada has also launched its own AI Safety Institute. The announcement notes that the Canadian government is planning to dedicate $50 million “to further the safe development and deployment of AI” alongside a further $2 billion for compute and infrastructure.
Looking ahead: Since the Bletchley Summit, we have seen a multiplication of AI Safety Institutes, and it’s possible that an increasing number of governments will create their own dedicated institutes to better understand advanced AI models. In the future, we also expect to see increased international collaboration between national safety institutes.
Study watch
What AI cyber harms are actually occurring?
What happened: A new study, led by Kathrin Grosse at the Swiss Federal Institute of Technology Lausanne (EPFL), surveyed more than 200 cybersecurity practitioners about the real-world harms they have witnessed from deployed AI systems. To date, most policy discussions about how AI might affect cybersecurity have been theoretical; this study is a rare example of a post-deployment evaluation.
What’s interesting:
AI is a double-edged sword for cybersecurity. Threat actors could potentially use AI to identify vulnerabilities, generate more persuasive phishing emails, or produce malicious code. Powerful AI systems could also be the target of, and perhaps one day even carry out, cyberattacks. AI could also boost cybersecurity, for example, if practitioners use it to write more secure code, identify anomalies, and better triage alerts. Over the longer term, AI could potentially enable more automated protection for software by identifying vulnerabilities and generating rapid fixes.
The authors find that less than 5% of practitioners have witnessed real-world harms from AI, although it’s difficult to specify what constitutes an ‘AI’ harm. This small number of reported cases makes it challenging to extrapolate, but the data suggests 1) that attacks on data and infrastructure may be more common than attacks on models; 2) that the healthcare, automotive and security industries may be key targets; 3) that unintentional accidents may be a bigger near-term challenge than intentional attacks; and 4) that employees who feel threatened by AI systems may look to attack them (for example, by sabotaging data labelling efforts, a harm that has already occurred in practice).
Looking ahead: As the study notes, there are few robust programmes to reliably track the harms that are occurring due to AI. As deployment increases, addressing this will likely require policy responses that go beyond ad-hoc surveys. These could include: more formal post-market surveillance programmes, building on early examples such as the AI Vulnerability Database; funding adversarial research; opening up AI models for third-party access and testing; designing programmes for AI model developers to responsibly report known risks; and designing bug bounties for third parties to report harms.
What we’re hearing
AI ‘shadow use’ on the rise
What happened: Researchers at Stanford University analysed almost one million papers published between January 2020 and February 2024 on arXiv, bioRxiv, and in the Nature portfolio of journals. The group found that the use of large language models for writing research papers is on the rise across the board, with the largest and fastest growth observed in computer science papers (up to 17.5%). In contrast, the authors said that mathematics papers and the Nature portfolio showed the least LLM usage (up to 6.3%).
What’s interesting:
This forms part of a wider trend of ‘shadow AI’ use: individuals using AI tools in their workplace in ways that aren’t formally directed or endorsed by their employer. In a 2023 survey of over 1,600 scientists, Nature reported that approximately 30% of researchers said they had used generative AI tools to help write manuscripts, while 15% said they had used the tools to help with grant applications. On the benefits of AI, over half (55%) of researchers cited translation, a finding replicated in a poll by the European Research Council (ERC) in 2023. With respect to risks, around 70% of researchers said that generative AI could lead to “more reliance on pattern recognition without understanding”, while 59% said the technology may entrench bias.
Science isn’t the only sector in which AI usage is rising. In a March 2024 poll, Pew Research found that 43% of American adults aged 18-29 have used ChatGPT, a figure that has increased by 10 percentage points since summer 2023. Within this group, Pew found that approximately one third (31%) have used ChatGPT for work. These figures contrast with the significantly lower adoption rates reported by businesses themselves. The U.S. Census Bureau found that, between September 2023 and February 2024, estimated AI use among firms rose from 3.7% to 5.4%. These estimates are provided directly by the leadership of 1.2 million businesses and measure the proportion of firms using AI within a two-week period. The stats add some colour to the Stanford HAI AI Index, which reported that 55% of organisations had tried to use AI in some capacity in 2023, a slight increase from 50% in 2022 and a significant jump from 20% in 2017.
Looking ahead: Workers’ shadow use of AI may continue to outpace the adoption rates officially reported by businesses. While growth is likely to remain steady across many demographic groups, it is possible that young adults in particular will continue to play an outsized role in driving the adoption of AI for work.