Policymakers taking action
Leading AI labs make White House commitments
What happened: Last month, the White House announced voluntary commitments from leading labs to manage risks posed by powerful AI systems. We are signatories alongside Amazon, Anthropic, Inflection AI, Meta, Microsoft, and OpenAI.
What’s interesting: The commitments focus on the principles of safety, security, and trust, and apply to generative models more powerful than the industry frontier of currently released models (GPT-4, Claude 2, PaLM 2, Titan, DALL-E 2).
The White House expects the commitments to act as a bridge until stronger regulation of AI is in place. To uphold them, companies will develop internal and external red-teaming programmes and information-sharing mechanisms. Labs have also agreed to implement safeguards against cybersecurity risks and insider threats to model weights, and have committed to bug bounty systems.
This group will also develop mechanisms to reduce the risks of fraud and deception (including via efforts such as our recently launched watermarking tool, SynthID); publish reports on all significant new model releases; and empower trust and safety teams to advance privacy and protect children. As signatories, Google and Google DeepMind are currently documenting how we will uphold the commitments.
Google DeepMind has also been involved in two other recent White House initiatives: the two-year AI Cyber Challenge, in partnership with the Defense Advanced Research Projects Agency (DARPA), and the DEF CON hackathon challenge. We also recently co-launched the Frontier Model Forum, an industry body to promote the safe and responsible development of frontier models.
Advocacy
Calls for changes to AI Act open-source provisions
What happened: GitHub, Hugging Face, Creative Commons, EleutherAI, LAION, and Open Future wrote an open letter claiming that provisions in the upcoming EU AI Act would hinder open-source AI development. To ‘ensure the AI Act works for open source’, the group proposed measures including revised definitions to clarify that open-source contributors are not subject to the full suite of measures targeted at commercial developers, and removing certain obligations for developers of open-source foundation models.
What’s interesting: Harry Law and Seb Krier recently published a paper arguing that, while open-source providers should not be subject to the same provisions as commercial developers, open-source models should still undergo evaluations to limit risks. In practice, this would only apply to the release of open-source foundation models that are demonstrably more complex, capable, and dangerous than those we have seen to date.
Amidst calls to reform the AI Act, Meta recently allowed users to download its flagship Llama 2 model, which it described as an ‘open source’ release. The decision sparked concerns from two camps: critics of open-source approaches worried about the potential for misuse by bad actors and the proliferation of dangerous capabilities, while supporters of widening access argued that the release was not truly open source, given its restrictive licensing and lack of transparency.
Moves to contest the term ‘open source’ have continued. Amongst permissive approaches to access, it is possible to differentiate between ‘open models’, which come with commercially usable weights and open-source datasets, and ‘open weights’ releases, which provide licensed model weights but no public training data. Other approaches include ‘restricted weights’, which are accessible only under certain conditions and trained on undisclosed datasets, and so-called ‘contaminated weights’, which are technically open but restricted by limitations in their underlying datasets.
Study watch
UNESCO publishes review of the role of digital technology in education
What happened: In July, UNESCO published its latest Global Education Monitoring (GEM) report, which this year focuses on the role of digital technology, including AI, in education.
What’s interesting: The authors note that the evidence base for edtech applications is relatively weak. Most evaluations are small, funded by developers, focus only on a narrow range of learning outcomes, and struggle to generalise to different contexts. The rapid pace of technological change also makes it hard to design robust evaluations.
The authors provide examples of digital applications that have, and haven’t, benefited education. Some of the most useful applications are relatively simple, such as televised lessons in Mexico, or radio broadcasts in Sierra Leone during the Ebola crisis.
Covid-19 was an unprecedented natural experiment for edtech and demonstrated its value: 95% of countries implemented virtual learning, benefitting 1bn students. However, the shift also exacerbated inequalities, as one in three students had no access, and many others had to rely on low-tech and harmful solutions - what education expert Mary Burns refers to as a ‘caste system’.
In the last 12 months, educators and students have begun experimenting with generative AI tools. Rapid progress in large AI models could make existing edtech applications, such as AI tutors, much more capable. Educators and students could also use applications like ChatGPT, Claude and Bard to create personalised learning materials and experiences. These applications also pose risks, ranging from the obvious (making it easier to cheat) to the more subtle (undermining the social or constructive aspects of learning).
UNESCO is somewhat cautious about the impact of generative AI. The organisation does, however, call on governments to take several steps, including training and supporting teachers, rethinking assessments, and reflecting on what it means to be well-educated in a world shaped by AI. Governments are also beginning to publish calls for evidence and recommendations on its use in education.
In recent months, we’ve been thinking through some of these topics, including at a roundtable on AI and education organised by the Royal Society of Arts. With our partners at Raspberry Pi, we have developed the Experience AI programme to help young people understand how AI works.
Thanks for reading. We welcome feedback, ideas and views on these and other AI policy developments. Please share this newsletter with others who might enjoy it.