What we’re doing
The model machines exhibit
What happened: In January, DeepMind asked design students from the London-based arts university Central Saint Martins to imagine how AI might benefit humanity. The students had 9 weeks to research their chosen topic, evaluate its societal and ethical implications, discuss it with DeepMind employees, and build an exhibition-ready object. The final artefacts were on display from 8-20 April at the Lethaby Gallery in London, as part of an immersive exhibition.
What’s interesting: The exhibition was timely given the recent acceleration in the capabilities and use of generative AI systems. The students went beyond typical applications of AI - such as chatbots and robots - to imagine more speculative and provocative concepts, including an AI menopause companion, a tree-to-human translator and an AI-powered confessional booth.
This is a generation who will grow up with, and be impacted by, AI in ways that we can’t yet imagine. Their selection of topics on environmental and social sustainability, healthcare, bias, human-AI collaboration, and more, felt instructive. By using design as a tool, the students also made future AI applications feel tangible and relatable.
The programme also aimed to create opportunities for those historically excluded from the design of technology to engage in vision-setting for AI, challenging our assumptions about what good technology looks like. These visual, sometimes visceral representations of AI have the capacity to shape how people feel about the technology, and ultimately to influence public perception, regulation, and funding. We hope that those who see the students’ work come away with their own questions, ideas, and feelings about how they want AI to be developed and used.
Looking ahead: While science fiction has long explored robots and superhuman AI futures, it seems likely that the next generation of artists, filmmakers, storytellers and designers will explore more nuanced ways in which these technologies become embedded in society. In the next year, we’ll likely see exhibitions, TV shows, films and fiction about AI’s ecological context and what it means to be human in the age of intelligent machines.
What we’re watching
UK launches AI Regulation White Paper
What happened: In March, the UK Government published its long-awaited White Paper outlining how it proposes to regulate AI. As expected, its approach is explicitly pro-innovation, recommending the use of existing regulations to mitigate AI risks rather than creating new AI-specific laws - as the EU is doing with its upcoming AI Act. The White Paper also outlines 5 cross-cutting AI principles (e.g. fairness) that regulators will be asked to uphold. Compliance will be voluntary in the first instance, but the Government is retaining the option to legislate if regulators are insufficiently proactive.
What’s interesting: The White Paper’s main focus is ‘getting the house in order’ so that businesses in the UK working with AI are clear on how existing laws apply. It also provides the means to introduce new controls should AI risks increase. As DeepMind made clear in our response to the mini-consultation earlier this year, ensuring regulators have the capacity and powers to deliver this is going to be key.
The White Paper was published on the same day as the Future of Life Institute letter calling for a pause in large AI model development. Some noted a sharp contrast between the letter and the perceived ‘light touch’ nature of the UK’s approach. However, the White Paper did note the intention to establish a new risk assessment function, including to assess ‘high impact but low probability’ AI risks.
The White Paper also noted the challenges posed by open-sourcing powerful AI models, and potential responses, such as mandatory reporting for model training runs of a certain size (see below). In March, the UK government also announced a new Foundation Model Task Force to explore how the Government should respond to recent developments, with the aim of ensuring a positive impact on UK society.
DeepMind has engaged with the Government throughout the development of the White Paper, recommending an approach that grounds regulation in the contexts in which AI is currently applied, but is flexible enough to adapt and keep pace with frontier AI developments. We’ve also consistently called for investment in regulatory capacity and for new central governance functions, such as horizon scanning, regular gap analyses of existing regulation, and an assurance ecosystem.
UK assesses compute needs
What happened: In March, the UK published its Independent Review of the Future of Compute. The review recommends funding the development of an exascale computer and improving access to compute for academics and "commercial projects to spark innovation" via a new AI Research Resource. In the recent Spring Budget, the government earmarked £900 million to action the recommendations; a more detailed roadmap is due in early 2024.
What’s interesting: According to the Review, the UK’s compute capacity ranked 10th globally in November 2022, behind countries such as Finland, Italy, South Korea, and Russia. As such, critics argue that the planned investment is too small to return the UK to its former position (it ranked third globally as recently as 2005). The Tony Blair Institute for Global Change, which is developing an index to measure national computing power, further argued that the UK should invest not only in compute, but also in developing sovereign general-purpose AI systems - which the new Foundation Model Task Force (now backed by £100M in funding) is set to build.
The Review focuses on how to improve the UK’s compute infrastructure and boost access to it, while making a passing reference to “monitoring and verification of compute use” to mitigate AI safety risks. A recent paper by Harvard researcher Yonadav Shavit outlined more ambitious ideas that could potentially be applied over time, including on-chip firmware to save snapshots of weights and information about each training run, and monitoring the supply chain to ensure that no actor amasses a large quantity of untracked chips.
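To give a flavour of how such monitoring might work in software, here is a minimal, hypothetical sketch - our illustration, not a mechanism from Shavit’s paper or any real firmware. It assumes each weight snapshot is hashed into a chained, tamper-evident log, so an auditor could later verify what was trained, and when, without ever seeing the weights themselves:

```python
import hashlib
import json
import time


def weight_fingerprint(weights: bytes) -> str:
    """Hash a serialised weight snapshot, so an auditor can verify
    which model was trained without access to the weights."""
    return hashlib.sha256(weights).hexdigest()


class TrainingRunLog:
    """Hypothetical tamper-evident log for a training run: each entry
    commits to the previous head hash, so past snapshots cannot be
    silently rewritten."""

    def __init__(self, run_id: str, chip_ids: list[str]):
        self.metadata = {"run_id": run_id, "chip_ids": chip_ids}
        self.entries: list[dict] = []
        self.head = hashlib.sha256(run_id.encode()).hexdigest()

    def record_snapshot(self, step: int, weights: bytes) -> None:
        entry = {
            "step": step,
            "time": time.time(),
            "weights_hash": weight_fingerprint(weights),
            "prev": self.head,
        }
        # Chain the entries: the head digest commits to the full history.
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)


# Illustrative training loop, logging a snapshot every 1,000 steps.
log = TrainingRunLog(run_id="run-001", chip_ids=["chip-A", "chip-B"])
for step in range(0, 10_000, 1_000):
    fake_weights = bytes(step % 256 for _ in range(64))  # stand-in for real weights
    log.record_snapshot(step, fake_weights)

print(log.head)  # a single digest an auditor could check against a mandatory report
```

The design point the sketch tries to capture is that only digests need to leave the chip: the log commits to the training history without exposing the model weights or data themselves.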
Like many in the AI industry, DeepMind has supported the goal of boosting academic access to compute. We also believe that policymakers should fund public-interest datasets and mechanisms for providing academic access to state-of-the-art AI models. From an AGI governance perspective, we welcome efforts to scope whether monitoring or restricting access to compute for large training runs might work in practice.
What we’re reading
AI, skills, and employment
What happened: From 27-30 March, the Organisation for Economic Co-operation and Development (OECD) hosted a four-day event to discuss the potential impact of recent AI advances on employment, productivity, and skills.
What’s interesting: At the event, the OECD published a report with ~100 case studies analysing how deploying AI in various finance and manufacturing organisations had affected employment. According to the report, the AI applications - such as customer service chatbots - led to very few job losses, although some participants cautioned that losses may still come via future attrition or reduced hiring. In terms of job quality, some employees felt that the AI applications made their work safer and more rewarding, while others said they now have less privacy and a higher work intensity.
A second report, and accompanying analysis, examined how AI systems perform on the OECD’s influential PISA assessment, which evaluates 15-year-olds on their reading, mathematics and science abilities, and on its PIAAC assessment, which evaluates adults’ abilities in literacy, numeracy, and problem-solving, including computer use. The analysis suggests that AI systems already outperform a large number of students and adults, including on skills that adults routinely use in their jobs. By contrast, adult literacy rates are flat or declining in many countries.
Debate at the event focussed on whether the assessments - and AI systems’ performance on them - really capture what we want to evaluate, and on if and how to incorporate the use of AI into curricula and into the assessments themselves. Observers also called for more dynamic ways to monitor and evaluate the impact of AI on employment and skills.