This essay is written by Harry Law, who works on policy research at Google DeepMind. Like all the pieces you read here, it is written in a personal capacity. The goal of this essay is to explore the potential and limitations of using analogies to guide AI policy discussions by considering how analogies have been used in science.
In public policy discussions, you will regularly hear that AI is like many things. It’s like electricity, it’s like nuclear weapons, it’s like a stochastic parrot. But is it really? Does it matter if it’s not? And why, exactly, do we use analogies in the first place?
This isn’t an essay about whether or not certain analogies hold, or how effective metaphors are for reconfiguring the policy environment. Others (see here and here) have it covered. Instead, it’s concerned with how and why AI policy researchers and practitioners use metaphors – and what their limitations are.
Yes, in general, metaphors are used to persuade. If you can make the case that AI is like nuclear energy, then it follows that it ought to be treated in a way that assumes a similar risk profile. Once those links have been established, it’s a straight shot to transplanting the ideas, concepts, and procedures used to govern the former onto the latter.
Whether or not these analogies work isn’t the point. The broader point is that, while we all know metaphors convey meaning, that fact isn’t especially helpful for telling us how they are actually used in AI policy in a practical sense.
Generally, the most popular approach for mapping the relationship between meaning, metaphor, and the policy environment is Jasanoff’s ‘sociotechnical imaginaries’ concept. Originally developed with specific reference to the state as a political actor, it has since been expanded to include any political actor seeking to influence the trajectory of technoscientific development.
But we’re interested in the use of analogies in the policy development process itself. To clarify how and why we use metaphors in AI policy, we could do worse than looking at how scientists use analogies. Obviously, technical researchers also use them because they have a rhetorical appeal. But that’s not the only reason.
Hopfield networks
Consider an important moment in AI history: the publication of John Hopfield’s 1982 paper credited with the invention of the ‘Hopfield network’, a type of recurrent neural network where all units are connected to each other – typically used for pattern recognition and memory storage.
Hopfield, who worked for both Bell Labs and Caltech, was widely regarded as the man responsible for an increased interest in connectionism (the ancestor of deep learning) during the 1980s. In 1989, reflecting on the field’s return to prominence, researcher Tom Schwartz put it in no uncertain terms: ‘Hopfield should be known as the fellow who brought neural nets back from the dead.’
Despite the praise, it’s not the case that Hopfield was the first to design and implement a fully connected network. Stanford researcher William Little had previously introduced versions of neural networks in 1976, and so had Japanese neuroscientist Shun'ichi Amari in 1972. The cognitive scientist Stephen Grossberg, who first wrote about the ideas described in the Hopfield model in 1957, even said: “I don’t believe that this model should be named after Hopfield. He simply didn’t invent it. I did it when it was really a radical thing to do.”
But as every popular scientist knows, research needs rhetoric and papers need presentation. Hopfield removed dense mathematical descriptions in favour of persuasive prose written for cognitive scientists, published his paper in the influential Proceedings of the National Academy of Sciences, and travelled extensively to talk about ‘his’ networks. These are, practically speaking, the primary reasons that today we know the systems not as Grossberg networks – but as Hopfield networks.
One of the paper’s most influential ideas involved describing the networks as ‘spin glasses’, a term borrowed from a disordered magnetic state of matter. ‘Spin’ is a quantum property of particles that allows them to behave as minuscule magnets that can point in different directions. ‘Glass’, meanwhile, draws on an analogy with conventional glass, which is known for its irregular, amorphous structure at the atomic level. In a typical glassy material, atoms are arranged in a non-crystalline, disordered state, unlike in a crystal, where atoms have a regular, repeating arrangement. Voila, spin glass.
In the Hopfield network, the system’s dynamics (the way the states of its units settle into patterns) are inspired by the way magnetic spins interact in physical systems like spin glasses, where the arrangement of spins leads to complex behaviours. Hopfield’s work connected this notion, the potential for systems to transition from a disordered state to a stable one, to the concept of ‘associative memory’, in which you give a system a piece of the desired output and it ‘remembers’ the rest.
The connection was straightforward: just as a spin glass system transitions from a high energy state to a low energy state, a Hopfield network minimises its energy to represent and retrieve stored patterns. It was this analogy, which explained how a sea of smaller components could settle into stable states representing stored memories, that encouraged the next generation of researchers to couple the operation of Hopfield networks with principles similar to those observed in physical systems.
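To make that picture concrete, here is a minimal sketch of a Hopfield-style network in Python. It is my own illustrative code rather than anything taken from the 1982 paper: the function names and the tiny six-unit example are invented for clarity. It stores two patterns with a Hebb-style rule (more on Hebb below) and recovers one of them from a corrupted probe by repeatedly nudging units towards lower energy.

```python
import numpy as np

def store(patterns):
    """Hebb-style storage: units that are active together across the
    stored patterns end up more strongly coupled."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:              # each pattern is a vector of +1/-1 states
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / len(patterns)

def energy(w, s):
    """The quantity the network lowers as it settles into a stable state."""
    return -0.5 * s @ w @ s

def recall(w, probe, steps=200, seed=0):
    """Asynchronously flip units in the direction of lower energy."""
    rng = np.random.default_rng(seed)
    s = probe.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if w[i] @ s >= 0 else -1
    return s

# Store two patterns, then hand the network a corrupted copy of the first.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
w = store(patterns)
probe = np.array([-1, -1, 1, -1, 1, -1])      # first pattern with one unit flipped
recalled = recall(w, probe)
print(recalled)                               # settles back to the first pattern
print(energy(w, probe), energy(w, recalled))  # energy falls as the memory is retrieved
```

Run as written, the probe settles back into the stored pattern and the printed energy drops, which is the ‘downhill’ settling that the spin glass analogy describes.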
Jack Cowan, an influential researcher who worked with Hopfield, understood the symbolic currency of the idea: “I think that’s neat stuff, but I still think it’s an artificial system, as is the antisymmetric one. It may have nothing to do with the way things really work in the nervous system, but it’s a very interesting idea.”
By drawing a parallel between associative memory and the behaviour of physical systems like spin glasses, Hopfield reinforced the idea that complex cognitive processes might emerge from the collective behaviour of simple interacting units. This perspective supported the well-established notion of the brain as a complex system, where higher-order cognitive functions could be explained through the interactions of simpler components.
As Hopfield explained: “Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons).” The result, according to this model, is that intelligence is an emergent property that may be produced through the interaction of smaller, simpler units. Faced with such a conclusion, it is perhaps unsurprising that the paper caught fire.
Toy models
To explain the functioning of the systems, Hopfield leaned heavily on Hebb’s law, named after the Canadian psychologist Donald Hebb: “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”
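In the network itself, this verbal principle becomes a one-line prescription for the connection weights. The standard textbook form (my notation, not the exact notation of the 1982 paper) for a network of N units storing p patterns, each a vector of +1/−1 values, is:

```latex
% Hebbian weight rule for a Hopfield network (standard textbook form):
% units that are active together across the stored patterns become
% positively coupled, and no unit connects to itself.
w_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{\mu} \xi_j^{\mu},
\qquad w_{ii} = 0
```

Wired this way, and updated one unit at a time, the network’s energy never increases, which is why the stored patterns behave as the stable, low-energy states described above.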
But Hopfield networks are not brains. And neither are they spin glasses. As Alexander Reutlinger describes, these types of comparisons are better thought of as ‘toy models’ that seek to help scientists understand the world. Frank Rosenblatt, on whose famous perceptron algorithm Hopfield aimed to build, stressed the importance of toy models to AI research in 1961:
“The model is a simplified theoretical system, which purports to represent the laws and relationships which hold in the real physical universe…the model deliberately neglects certain complicating features of the natural phenomena under consideration, in order to obtain a more readily analyzed system, which will suggest basic principles that might be missed among the complexities of a more accurate representation.”
These models tell researchers about the problem they are trying to solve, but they also widen the epistemological field to accommodate new concepts, ideas, and perspectives. The philosophers of science Knuuttila and Loettgers, for example, have shown that this kind of modelling “provides modelers with a powerful cognitive strategy to transfer concepts, formal structures, and methods from one discipline to another”.
Historically, Dedre Gentner's influential theory of analogy has been used to describe how analogical reasoning works, while Mary Hesse has distinguished between ‘formal’ analogies that allow for the direct comparison of relations (for example, by representing each through equations) and ‘material’ analogies in which analogues have certain properties in common.
The Hopfield network makes use of the ‘formal’ spin glass analogy alongside ‘material’ analogies drawn from neuroscience to explain the functioning of the associative memory process. Through this process of analogical reasoning, the twin abstractions of spin glass modelling and neurophysiology offered a way of describing the functioning of the systems that was both persuasive and epistemically valuable.
Analogies in AI policy
So what do Hopfield networks tell us about how we construct metaphors in AI policy? In the first instance, the episode reminds us that analogies are powerful expressive tools. Part of the enduring success of the Hopfield network is that it piggybacked on ideas drawn from neuropsychology, which led to some juicy conclusions about the emergent nature of intelligence.
But that’s the obvious part. If we accept that spin glasses represent a ‘formal’ analogy and allusions to the brain were closer to ‘material’ analogies, then we can better understand why these ideas were introduced in the first place. One is to describe, and the other is to connect.
While policy-focused analogies don’t need to have properties in common (in fact, I generally find the points at which comparisons break to be more instructive than those that neatly overlap), they do take like-for-like comparison as their organising principle. That is to say: metaphors in AI policy are shortcuts for mapping the relationship between the target (e.g. AI) and the source (e.g. nuclear energy).
Analogies in AI policy development are in this sense material analogies. Even if we’re drawing on concepts from fields like game theory or control theory, we're typically comparing properties or procedures rather than establishing rigorous connections. When we link AI's impact on labour markets to previous technological shifts or AI governance challenges to those in biotechnology, we are weighing certain characteristics—from simple property comparisons to structured analogies—rather than making formal equivalences.
Scientific analogies, as we saw in Hopfield's use of spin glasses, aim to establish structural or functional similarities that can lead to testable hypotheses or formal mathematical models. But analogies in AI policy are conceptual to the core. They focus on perceived similarities in societal impact or governance mechanisms that (especially with respect to the former) can be tricky to empirically verify.
But, much like scientific analogies, they are still useful for the purposes of research (and of course, rhetoric). We compare AI to different industries, introduce historical case studies, and assess governance mechanisms to open new avenues of exploration and discard others. We generate hypotheses and world models, conduct anticipatory analyses, and build toy models whose purpose—much like in science—is to make the complex legible.
Whether in science or policy, analogies are both epistemically and rhetorically useful. But where the former is capable of dealing in formal parallels, the latter is not. That’s not necessarily such a bad thing.
After all, it is that flexibility that lets me draw loose parallels between science and AI policy in the first place.