From April to August this year, Dean Ball played a central role in drafting America’s AI Action Plan. Now, he’s back in the think tank world, as a senior fellow at the Foundation for American Innovation in Washington, while continuing to write about AI policy on his influential Hyperdimensional newsletter. Dean recently stopped by Google DeepMind’s London office for a discussion. Here are 10 takeaways from the chat.
The White House AI experience: Dean was surprised by how congenial and non-bureaucratic the White House was. He expected “turf wars and weird procedural blockers” but generally found a collaborative environment that was focussed on executing—a welcome contrast to the administrative hurdles he faced in academia. In terms of missed opportunities, he wished the administration could have articulated a more coherent framework for how chip exports will work, an area he felt was under-developed in the AI Action Plan.
The AI for Science opportunity: Alongside developments such as automated labs, AI could transform how science is practised. Dean sees chemistry and biology becoming “information sciences” that give humanity increasing dominion over everything from the clothes we wear to the buildings we live in—a veritable revolution in human affairs. This has big implications for governments, which play a leading role in science. One challenge will be the recurring tension between open data and national security concerns for more sensitive scientific information like fusion simulation codes or viral sequences. Companies should think about how their science research, and their AI models, could help solve priority government problems, such as the potential role of AI materials science in addressing rare-earth metals challenges, or the role of robotics in US reindustrialisation.
Manageable vs. emergent AI risks: Dean believes there are significant risks from AI to cybersecurity and biosecurity, but also conceivable ways to manage them, and that AI will also improve defences in these areas. In terms of more unpredictable risks, he pointed to the strange outcomes that may occur when autonomous AI agents interact at scale in adversarial contexts, for example in legal transactions. From an alignment perspective, he noted the concern that LLMs may have some fundamental properties that lend themselves to a sort of intrinsic “parasitic” need to self-replicate, a risk with no obvious policy response. Such emergent risks explain what he described as “exceptionally strong attention” to alignment and interpretability in the Action Plan.
Regulation (1): In the near term, we don’t know what harms advanced AI may trigger, so Dean argued for a flexible approach that avoids premature, prescriptive AI regulation. Taking inspiration from machine learning, Dean noted that a “gradient is better than static rules”, and called for:
Modest transparency requirements obliging frontier AI labs to share documents such as model specs and responsible scaling policies, which explain a model’s intended behaviours, how users can customise those behaviours, and what the model should never do.
Using common-law liability and the framework of “reasonable care” to address harms as they arise. He cited recent AI child self-harm cases, a leading concern in the US that was largely absent from major international AI regulation and governance efforts, as an example of how difficult it is to predict the most consequential, or politically salient, AI risks.
Regulation (2): For more severe longer-term risks, Dean suggested laying the foundation for entity-based governance—regulating frontier AI labs and their business processes and information flows much as financial institutions are regulated. However, he didn’t think this was necessary yet, and acknowledged the challenges, including the potential for regulatory capture and technology path dependence. He also pointed to the potential to use AI as a tool of governance, for example enabling regulatory bodies to receive streamed telemetry that supports compliance and oversight.
International coordination: The US administration is focussed on bilateral deals and partnering directly with nations to build and diffuse AI infrastructure. They view most global governance bodies as outdated. Rather than a UN-style body to govern AI, Dean envisions a future governed by technical protocols, similar to the role that SWIFT plays in global finance. This wouldn’t require large teams of bureaucrats to write rules. Rather, the protocols could emerge from industry competition before government steps in to help standardise the strongest ones.
The West’s cultural hesitancy: Dean believes that many in the West view AI more negatively than the comparatively optimistic publics of Asia and the Global South. He attributed much of this to Western populations being older and wealthier. As a technological determinist, Dean considers almost everything downstream of technology. As a result, the best hope for changing culture, he said, was to develop “incredibly good technology” that demonstrates the immense upside of AI.
The coming AI political flashpoints:
Employment: Dean thinks a non-linear increase in US unemployment is possible in the coming months. AI may contribute, but other macroeconomic trends will likely be the main drivers. Still, AI could become a scapegoat, and pushback from vested interests is likely. Better policy responses are needed, with Dean contending that ideas such as universal basic income “don’t smell right”.
Data centres: In the United States, local opposition to data centres is growing. But the general dynamism of the US economy and the country’s “competitive federalism” mean that data centres aren’t tied to any one location, so getting infrastructure deals done will be easier than in many other countries.
Anthropomorphism: Many on the American right worry that anthropomorphic AI is “tricking” people, which could lead to calls for bans on AI that claims to be human or expresses overly human preferences.
New media: As a popular writer on Substack, Dean sees positive policy impacts from this kind of work, noting that articles and viral tweets are often shared within the White House and can directly influence internal debates. Dean noted that he now sees himself primarily as a columnist and that LLMs are not yet much competition in that regard, even though they are “smarter than me in many ways”. This is partly because Dean tries to inject some ‘entropy’ into his content, and partly because social capital is at play: it matters to readers that Dean’s blogs “come from him”.
The future of democracy: Dean argued that AI could reshape both democratic institutions and authoritarian regimes, noting the risk of “neo-feudal outcomes”. Against this backdrop, he called for imagination about the future and warned against grafting old institutions onto new technologies. He encouraged the leadership teams of AI labs to think seriously about their role in this transition.