AI Policy Primer (February 2025)
Issue #18: European AI investments; a market for AI safety; and autonomous vehicles
Every month, our AI Policy Primer looks at three external developments from the world of AI policy that caught our eye. In this edition, we spotlight recent French and European investments in AI, a study exploring the concept of an AI safety marketplace, and recent developments in autonomous vehicle deployment.
As always, please leave a comment below to let us know your thoughts, or send any feedback to aipolicyperspectives@google.com. Thanks for reading!
Policymakers taking action
France and the EU announce large AI investments
What happened: At the recent AI Action Summit in Paris, President Emmanuel Macron unveiled €109bn in private sector funding for AI infrastructure. This includes a new €50bn AI campus of data centres, led by the UAE’s MGX, which is also involved in the US ‘Stargate’ project; a €10bn AI “supercomputer” from Fluidstack, a British AI cloud platform; and €5bn from the US firm Apollo to invest in AI energy infrastructure.
The European Commission also unveiled a €200bn “InvestAI” initiative, which includes plans to create four “AI gigafactories” with “100,000 last-generation AI chips”, complementing the smaller ‘AI factories’ the EU is already developing.
What’s interesting: One narrative that emerged from the Paris Summit, following a speech by US Vice President JD Vance, was that the EU’s AI efforts are mired in excessive regulation, while the US is powering ahead with a more supportive regulatory environment and strong financing. The reality is more nuanced. While the US has not yet passed any federal AI regulation akin to the EU’s AI Act, many US states are advancing AI bills that could impose similar obligations - if passed. And unlike the EU’s harmonised approach, these state-level bills are often inconsistent with one another, potentially increasing regulatory complexity.
The new French and EU funding announcements, alongside Macron’s repeated promotion of Mistral at the Summit, also underscored that EU member states do want future frontier AI models to be trained in Europe. Commission President Ursula von der Leyen pledged that the new gigafactories would prioritise these efforts. And while the EU is advancing the AI Act, the Commission made the rare decision, shortly after the Summit, to withdraw its proposed AI liability directive - as part of broader efforts to streamline the EU’s regulations and boost competitiveness.
Still, many remain sceptical about whether France and the EU can rapidly secure and deploy the new AI funding, much of which remains to be mobilised. There are also doubts about whether they can narrow the wider gap with the US AI ecosystem, particularly when it comes to training the most advanced AI foundation models. With these challenges in mind, von der Leyen emphasised the need to prioritise ‘industry-specific’ AI applications. This could include sectors like green energy, where the EU has strong expertise but faces intense competition from China and others.
This suggests that, while a small number of EU AI startups will continue developing frontier foundation models, the most promising efforts may emerge in areas that draw on local economic strengths, such as finance, tourism, or healthcare.
Study watch
Building a market for AI safety
What happened: Philip Moreira Tomei and colleagues at the AI Objectives Institute published a paper arguing that market-based mechanisms could help reduce AI safety risks, by complementing regulatory efforts.
What’s interesting: Discussions about AI risks often quickly pivot to how to pass or adapt new regulations. However, uncertainty about specific AI risk scenarios makes it difficult to craft rules that are targeted and effective. When regulation does arrive, it is often vague and hard to implement, and can create uncertainty that inhibits wider AI adoption. The rapid pace of AI development also makes it difficult to design regulation that is resilient and adaptable.
The authors argue that market-based mechanisms could help to complement AI regulation by providing AI developers and deployers with financial incentives to identify, evaluate, and mitigate AI risks, while distributing risk management across a broader range of actors. The paper outlines four market-based approaches, citing examples from other high-risk industries:
Insurance: Firms could take out liability insurance against AI risks, for example building on existing cybersecurity insurance or technology errors and omissions insurance, which have encouraged firms to invest in risk mitigation.
Auditing & certification: Firms could hire third-party auditors to assess AI safety practices, leading to certifications for meeting certain standards. For example, after facing scrutiny over their cybersecurity, Zoom engaged Trail of Bits and NCC Group for an audit, which led to enhanced end-to-end encryption.
Procurement: Large purchasers of AI could demand performance on safety benchmarks or require specific disclosures - similar to how governments use procurement to shape markets or how corporations push suppliers to improve working conditions and environmental standards.
Investor due diligence: Investors could also demand safety and transparency measures from AI companies, similar to how investors pressured BP to share more about its risk management processes following the 2010 Deepwater Horizon oil spill - pressure that also accelerated BP’s transition to renewable energy.
Similar ideas have been explored by other organisations and sectors in the past. A related promising angle is supply-side interventions: governments or philanthropic organisations could act as ‘buyers of first resort’ by establishing Advance Market Commitments that guarantee future purchases of innovative products, like AI safety tools, to incentivise their development. ‘Regulatory markets’ - where governments license private regulators to compete to provide AI safety oversight services to companies - could also address gaps left by traditional regulatory mechanisms.
Although they hold promise, market-based mechanisms also face a range of challenges, including how to encourage bottom-up action from a diverse range of organisations (from insurance providers to investors); how to prioritise and price different AI risks; how to ensure sufficient independence and skills among auditors and certification agencies; and how to balance the goal of AI safety against other objectives. For example, after BP’s strategic redirection towards renewables, its financial performance slumped, and the company recently reversed course, saying it had gone “too far, too fast” in the transition away from fossil fuels.
Sector deep dive
The deployment of autonomous vehicles slowly accelerates
What happened: In January, Kodiak Robotics announced that its client Atlas Energy Solutions - which serves oil and gas companies in the Permian Basin (West Texas and New Mexico) - had successfully delivered 100 loads of material using driverless trucks.
What’s interesting: Excitement and scepticism about autonomous vehicles have fluctuated over the past decade, but parts of the sector are now seeing renewed momentum. For example, Waymo now logs 200,000 paid robotaxi rides every week, a 20x increase in two years, and will soon begin testing in Japan, following a recent $5.6bn funding round.
The stop-start development of autonomous vehicles highlights a classic challenge in technology development: the mismatch between a technology’s capabilities and its practical deployment. Widespread adoption of autonomous vehicles has faced multiple barriers, including: a complex and evolving regulatory landscape that varies by country and (in the US) by state; the far higher safety standards expected of autonomous vehicles compared to human drivers; low levels of public trust; and the sheer complexity of real-world roads, particularly in dense urban centres.
In response, some companies, like Kodiak, are prioritising industry-specific use cases in more controlled, remote environments, such as mines, seaports, large industrial farms, and military domains. These settings also expose autonomous vehicles to harsh conditions, like dust and uneven terrain, as well as strict local site regulations - experience that could help improve the technology.
These deployment decisions are also influenced by labour market trends. While concerns persist about AI replacing drivers and supply chain workers, a shortage of personnel is arguably a greater challenge. In the US alone, there are about 3.5m truckers, but companies struggle with an ageing workforce and high turnover rates, which exceed 90% in some segments of the industry, partly owing to poor working conditions. This shortage is in turn leading to supply chain disruptions and higher consumer costs.
A broader question is whether the deployment of autonomous vehicles over the past five years offers any insights into how other types of AI-enabled robots might be deployed across the economy in the coming years. Traditional industrial robots are already well established in manufacturing and warehousing, but most are limited to a narrow range of repetitive tasks in structured environments. Companies are now using foundation models to develop more general-purpose robots that could learn novel tasks and adapt to real-world environments. If technical challenges can be overcome, these robots could be particularly valuable in sectors like agriculture, healthcare or social care, where labour shortages are mounting. That said, they may face similar deployment obstacles to autonomous vehicles, which could push developers towards use cases where safety risks are lower, environments are easier to control, and cost pressures are high.