In this post, our Senior Director of AI Policy, Nicklas Lundblad, shares 8 things that surprised him about AI policy in 2024. Like all the pieces you read here, it is written in a personal capacity. Please let us know what surprised you about 2024 in the comments section, and send any feedback to aipolicyperspectives@google.com.
2024 was a(nother) year in which artificial intelligence continued to beat expectations, reach new highs on different benchmarks and dominate a lot of the news. In some branches of science, AI has become indispensable. Doing research in parts of biology without using AI tools is a bit like insisting that telescopes are not necessary for astronomy. The AI Safety Institutes in the US and UK have consolidated their work and are starting to find key roles to play in placing AI safety on a solid scientific footing. The EU AI Office is ramping up to take on the role of clarifying what the AI Act will mean in practice. Two Nobel prizes, billions in investment, a new AI czar in the US, and new sky-high valuations of AI companies all made the year feel as if we were closing in on the peak of the hype curve.
At the same time, a certain skepticism has snuck into the public debate. The ghost of Solow past is haunting the productivity numbers, where AI has yet to deliver significant growth - even as many economists predict massive impact on labour markets down the road. The scaling laws that have brought us this far have been questioned, and the rising energy costs of both training large models and running inference against them have led to legitimate concerns, albeit with plenty of arguments on both sides. Similarly, some argue that misinformation, fraud and manipulation have gotten much worse with AI, although it seems as if none of this year's elections were impacted in a major way. There have been cyber and misinformation attacks, but many of these did not require AI, and there is even some hope that AI might help debunk conspiracy theories or identify unsupported claims. Questions around national security are on the rise as well. The geopolitical tensions between the US and China have not eased, but rather sharpened.
Embrace mess
Against this background, any attempt to sum up the year feels almost impossible, and that in itself is perhaps a key observation. In 2024, the field of AI policy got messier - and I use that term in the interesting technical sense that management thinker Russell Ackoff launched in his 1981 article “The Art and Science of Mess Management”. Ackoff writes: “What we do experience are large and complex sets of interacting problems - dynamic systems of problems. In S3 (Social System Science) we refer to these as messes. Our focus is on the management of messes rather than the solution of problems. Mess management requires planning, not problem solving.”
Ackoff’s statement is interesting, and he goes on to outline different kinds of planning that can help with messes: the clinical, research and design approaches. He recommends the last of these - where we strive to understand the mess, figure out what the ideal ends are and then design the means. Or, put differently, start by thinking about what a good, stable outcome could look like, and how we can get there. That is not a horrible agenda for 2025.
Another way to think about 2024 is to ask what the most surprising things that happened during the year were. I will offer the following 8 candidates. Please share yours in the comments section.
1. Governance: A need for speed
I have been surprised at the speed with which institutions like the AI Safety Institutes and the European AI Office have been stood up and become operational. This is a process that usually takes years, but here we have seen significant operational capability built out within months. This is a good thing: it suggests that the institutional capacity for change is greater than many feared when we started discussing how AI should be regulated.
2. Market entry in the EU: Delayed gratification
I found it surprising how many companies have chosen to stagger the introduction of their products and services in the European market. While this was flagged as a clear risk in the discussions about the AI Act, we now see an established pattern where the latest technology is delayed or denied to Europeans. As a European I find this deeply worrying: it widens the distance between Europe and the leaders in the field, the US and China, in a way that is likely to have real consequences for European competitiveness.
3. Technology & sovereignty: Come together
The prevalence and proliferation of national language models have surprised me. It is not that I think they are bad as such, and I do understand the reasons for investing in them - the respect for and curation of one’s own cultural heritage. But countries deciding to invest in specific technologies is a distinctly new development. I think this is part of a larger trend, in which technology and sovereignty become closely linked in the political mind.
4. Analogies: AI is not the Internet
I find it surprising that so many AI policy discussions still seem closely modeled on the Internet. If you just replace ‘AI’ with ‘Internet’, some of the discussions are almost identical - and yet these are two very different technologies. It has been pointed out that AI is not just a general purpose technology; it is also an invention of a method of invention, and as such it will revolutionize every sector of society in different ways. The Internet was great - but AI is different.
5. Hallucinations: Time for nuance
I have been surprised at how many people I meet use “hallucinations” as a way to devalue and dismiss AI. Few seem aware that hallucinations have been steadily reduced, in some cases by an order of magnitude, and the ease with which some invoke them suggests that a lot of people are missing out on the value that modern AI services can provide. In 2025, I expect to see people start distinguishing hallucinations by type and severity, and exploring where we might want to encourage the creative juxtaposition of concepts that humans would not consider, where we need to reduce it, and how best to do so.
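To make that concrete, here is a minimal sketch of what such a distinction could look like. The categories and severity levels are entirely my own illustration, not an established taxonomy:

```typescript
// A hypothetical taxonomy for distinguishing hallucinations by type and
// severity. All category names here are illustrative, not a standard.

type HallucinationType =
  | "fabricated_fact"         // invents a fact, citation or source
  | "faulty_reasoning"        // correct facts combined into a wrong conclusion
  | "creative_juxtaposition"; // novel concept-combination, sometimes desirable

type Severity = "benign" | "misleading" | "harmful";

interface HallucinationReport {
  type: HallucinationType;
  severity: Severity;
  desirable: boolean; // creative juxtaposition may be something we want to keep
}

// Example: a model inventing a plausible-sounding but non-existent citation
// would be a fabricated fact of misleading severity.
const example: HallucinationReport = {
  type: "fabricated_fact",
  severity: "misleading",
  desirable: false,
};

console.log(example);
```

The point of a scheme like this is simply to let the debate talk about specific failure modes, rather than about “hallucinations” in the aggregate.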
6. Agents: The infrastructure gap
The lack of emerging standards for agents remains very surprising to me. If we imagine a future where agents use the Internet, it is easy to see that we need something like an “affordance markup language”, through which different Internet resources declare how they can be integrated into an agentic flow, and what is required for an agent to be validated and authenticated with them. We will need digital identity systems - not so much for humans as for agents - and that work is also lagging behind. If we truly believe agents are next, we need to think not just about agentic AI, but about agentic infrastructures.
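To illustrate what I mean - and this is purely a hypothetical sketch, not a proposal or an existing standard - an affordance declaration that a website publishes for agents might look something like this, with every field name invented for the example:

```typescript
// A hypothetical "affordance declaration" a website could publish (say, at a
// well-known path) to tell agents what actions it supports and what
// authentication it requires. No such standard exists today.

interface AgentAffordance {
  action: string;            // e.g. "book_table", "check_availability"
  endpoint: string;          // where an agent would send the request
  inputSchema: string;       // reference to a schema describing valid inputs
  authRequired: "none" | "api_key" | "verified_agent_identity";
  rateLimitPerHour?: number; // optional usage constraint for agents
}

interface AffordanceManifest {
  resource: string;          // the site or service being described
  affordances: AgentAffordance[];
}

// Example: a restaurant site declaring that verified agents may book tables.
const manifest: AffordanceManifest = {
  resource: "https://example-restaurant.test",
  affordances: [
    {
      action: "book_table",
      endpoint: "https://example-restaurant.test/agent/book",
      inputSchema: "https://example-restaurant.test/schemas/booking.json",
      authRequired: "verified_agent_identity",
      rateLimitPerHour: 10,
    },
  ],
};

console.log(JSON.stringify(manifest, null, 2));
```

Something robots.txt-like in spirit, but expressive enough to carry validation and identity requirements, is the kind of infrastructure that is conspicuously missing.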
7. Cities: Where are you?
The absence of city programmes using AI is puzzling. One way to think about AI is that we should use and deploy it in existing cognitive structures, and one of the most important cognitive structures for humans is the city. In a sense, a city is just software solving problems with algorithms in steel, concrete, wires and people - and there is so much that could be radically improved here. Cities also have unique ways of adapting their data collection and sourcing to create a rapid-response capability for emerging problems, which would seem to make them key candidates for AI deployment. Let smart cities become truly smart!
8. AI devices: Or cognicity?
The lack of any really convincing new AI devices indicates that it might be much harder than we thought to build a form factor for cognition. Very few of my contacts speak to their phones - AI remains, as far as I can discern, a technology where writing dominates. Maybe this is because AI is a general purpose technology and, as such, will be much more like electricity: an invisible presence in all of our homes. The idea that what we will get is not a new device, but rather something like cognicity - cognition delivered the way electricity is - also suggests very different regulatory challenges.
And… finally
Let me also extend a thanks to everyone who has subscribed, shared and commented on our work here at AI Policy Perspectives. We are in your debt! The community we are trying to build here is becoming key to the kinds of discussions and ideas that we will need in 2025. So stick with us, help recruit others and let us know what you would like to hear more about next!