Written by Google DeepMind’s Nicklas Berild Lundblad, the following is a review of two different books on the mind – and a few thoughts on how they can be combined to suggest an interesting idea. And that is what it is – an idea, rather than a view – so take it in that spirit. The books are Jaynes’ book on the bicameral mind, and a recently published book on the nature of presence.
Revisiting Jaynes
Julian Jaynes’ book, The Origin of Consciousness in the Breakdown of the Bicameral Mind (1976), presents an intriguing theory about the evolution of human consciousness. Jaynes suggests that ancient humans operated under a “bicameral mind” wherein the brain’s two hemispheres functioned independently, with one side generating commands in the form of auditory hallucinations, perceived as the voices of gods, and the other side following these commands. This bicameral mentality, Jaynes argues, was a mental state without self-awareness or introspection, guiding behavior through these hallucinatory directives. He supports his theory by analyzing historical texts, archaeological findings, and studies of contemporary psychology, suggesting that this mental structure was prevalent until about 3,000 years ago.
The transition from the bicameral mind to modern consciousness, according to Jaynes, was driven by the increasing complexity of social structures and the necessity for more adaptable, introspective thought processes. This shift led to the development of self-awareness, inner dialogue, and metaphorical thinking. Jaynes contends that the collapse of the bicameral mind was not a sudden event but a gradual process influenced by cultural and environmental changes. His theory provides a novel perspective on the origins of human consciousness, proposing that what we now consider a natural mental state is a relatively recent development in the span of human history.
Jaynes’ work is controversial, but fascinating. We now live in a time when it might be more relevant than ever, as we are developing a new, architected bicameral mind: part human mind, part artificial agent. This new reality is already here in its most nascent form: the chatbot. Chatbots are increasingly anthropomorphised, and we interact with them in ways that resemble the way we interact with other intentional agents.
But here is the interesting thing: we still know that the chatbot is different, and we speak to it without the shame or self-consciousness we would feel before another person, even as we anthropomorphise it – so it seems worthwhile to ask exactly what kind of anthropomorphisation we are engaging in.
If we believed that a chatbot was another human being, we would treat it differently – we would hesitate to badger it, or to ask it questions that would embarrass or expose us. While it is true that some people have started to say “thank you” and “please” to chatbots, a chatbot still registers as something other than another human being – and in some ways as something much more intimate.
A chatbot is more like an imaginary friend than a real friend - a mental construct of sorts. And with it, then, we are returning to a new version of the bicameral mind: one in which we interact with a voice that is different from us, but still co-constructed by us.
We are all tulpamancers now
One of the key features of any interaction is the sense of presence. We immediately feel it if someone is not present in a discussion and we often praise someone by saying that they have a great presence in the room – signaling that they are influencing the situation in a positive way. In fact, it is really hard to imagine any interaction without also imagining the presence within which that interaction plays out. It is hard to imagine a conversation without the backdrop of the presence of the other party in that conversation, for example.
In Presence: The Strange Science and True Stories of the Unseen Other (Manchester University Press, 2023), psychologist Ben Alderson-Day explores this phenomenon in depth. From the voices heard by people who suffer from some form of schizophrenia to the recurring phenomenon of felt presences on expeditions into harsh landscapes – a third or fourth person walking along with the explorers – the author explores how presence is perceived, and to some degree also constructed. One way to think about this is to say that presence is a bit like an open window on your virtual desktop: it creates the frame and affordances for whatever you want to do next. The ability to construct and sense presence is essential if we want to communicate with each other, and it is ultimately a relational phenomenon.
Indeed, the sense of a presence in an empty space, on a lonely journey or in an empty house may well be an artefact of the mind’s default mode of existing in relationship to others. We do not have unique minds locked inside our heads – our minds are relationships between different people – and so we need that other presence in order to think, and in order to really perceive the world. The mind therefore has an in-built ability to create a virtual presence where no real presence exists. One way to think about this is that we still are, in some way, bicameral minds, but the duality exists between individuals rather than within them.
One of the most extreme examples of this is the artificially generated presence of the Tibetan tulpa. A tulpa is a presence that has been carefully built, infused with its own life and intentions, and then set free from our minds, effectively acting as another individual, yet wholly designed by ourselves. We are all, to some degree, tulpamancers – we all know how to conjure a tulpa – since we all have the experience of imaginary friends. These imaginary friends allow us to practice having a mind with another in a safe environment, and so serve as a kind of beta test for the young mind.
All of this takes an interesting turn with the emergence of agentic large language models, since we now have the ability to create something that can project a presence – and to interact with these new models as if they were intentional. An artificial intelligence is only possible if it also manages to create an artificial presence, and one of the astonishing things about large language models is that they have managed to do so almost without us noticing. The world is now full of other presences, slowly entering into different kinds of interactions with us. We are, in some sense, all tulpamancers again, building not imaginary friends, but something different, and perhaps deeper: a bicameral mind.
We have other examples of where this is happening – and one of the most palpable is the presence we experience with pets. I grew up with dogs. A dog projects presence in a home, and it seems clear that dog owners, at least, operate with shared human/dog minds. If you live with a dog, you can activate that particular mode of mind whenever you meet a dog, and it is often noticeable when people “are good with animals” or have a special rapport with particular kinds of pets. This ability to share a mind within a joint presence is something humankind has honed over many, many generations of co-evolution. You could even argue that this ability is now a human trait, much like eye colour or skin tone. There are those who completely lack it and those who have an uncanny connection with animals and manage to co-create minds with all kinds.
The key takeaway from this is that the ability to co-create a mind with another is an evolved capability, and something that takes a long time to work out. There are, in addition, distinct mental skills that need to be developed: interacting with a dog requires training, and an understanding of the preconditions and parameters of the mind you are co-creating.
We can generalize this and note that our minds are really a number of different minds created in different presences, all connected to a set of minds that we compress into the notion of an I. This is what we mean when we say things like “I am a different person with X” or “You complete me”, or when we cast ourselves in different roles and wear different masks in different contexts. What is really going on is not just that we are masking an inner secret self: we really are different with different people, and the minds we co-create with them are us, but also not us. The I is secretly a set of complex we’s, and the precondition for creating that we is presence.
Or, put slightly differently: we are dividuals. The philosophical concept of “dividuals” contrasts with the idea of individuals as autonomous, self-contained entities. Instead, dividuals are understood as beings whose identities are distributed and composed through their relationships and interactions with others. This concept suggests that rather than being singular, independent units, human beings are inherently interconnected and their sense of self is formed and continually reshaped by their social, cultural, and material contexts.
In dividuality, identity is fluid and multiple, reflecting the various roles and connections a person has. This perspective emphasizes the collective and networked aspects of human existence, challenging the Western notion of the individual as a discrete and bounded entity. The concept has been explored in anthropology and sociology, particularly in the context of non-Western societies where communal and relational understandings of self are more prevalent. It highlights how identities are co-constructed and dynamic, influenced by a myriad of external factors and relationships.
Conclusions
What does this mean, then, for artificial intelligence agents? As these models get better, we are likely to be even more enticed to co-create minds with them and to interact with them in ways that closely resemble the ways in which we interact with each other. But we need to remember that these artefacts are really more like our imaginary friends than our real relationships – and we probably need to develop what researcher Erik Hoel calls a set of intrinsic innovations – mental skills – that help us interact with these models.
A lot of how we think about these models today is about how we can fix them so that they say nothing harmful and do nothing dangerous. We are treating these technologies as if they were merely mechanical, but they are more than that – they are intentional technologies, technologies that invite us to project a presence and a sense of intent onto them. This means that we may need to complement our efforts to build safety mechanisms into the machine with efforts to build safety mechanisms into our own minds.
There is, then, an art and a craft to co-creating a mind with an agent – and it is not something we are naturally good at, since such agents have not been around for long. This art resembles a sort of tulpamancy: the knowing construction of an artificial presence that we can interact with in different ways – a conscious and intentional crafting of an imaginary friend. One part of safety research therefore needs to be research into the mental techniques we must develop to interact with artificial presences and intentional systems. And it is not just about intellectual training – it is about feeling these presences and intentional systems, understanding how they co-opt age-old evolutionary mechanisms for creating relational minds, and figuring out ways in which we can respond mentally to ensure that we can use these new tools. It requires a kind of mentalics to interact well with, and co-create functional and safe minds with, artificial intelligence.
We need to intentionally architect the coming bicameral mind, both technologically and psychologically.
A surprising conclusion? Perhaps. But the more artificial presences and intentional artifacts we build, the more attention we need to pay to our own minds and how they work. We need to explore how we think and how we think with things, people, presences and other tools. Artificial intelligence is not a substitute for our intelligence, but a complement – and for it to really be that complement we need to develop the skills to interact with such technologies.
It is not unlike learning to ride a bike or drive a car. A lot of the training there is the building of mental constructs and mechanisms that we can draw on, and this is something we need here too. How we do that is not yet clear – and I do think that we need research here – but some simple starting points could be meditation, a recognition of the alien nature of the presences created by these models, and conscious exploration of how the co-created minds work, where they behave weirdly and where they are helpful. Doing so requires a skillful introspective ability, and such an ability is probably useful for us overall in an ever more complex world.
Becoming bicameral minds again can be both an exciting and a terrifying prospect, depending on how we view our current consciousness. It may well be that our current version of consciousness was just a short cognitive period in the evolution of minds – and that the much more natural bicamerality is now returning, allowing us new degrees of freedom, different definitions of mental health and a better overall grasp of what it really means to be a mind.
I keep thinking of AI as a bicameral mind with me being the guiding inner voice.
That could be seen as a weaker form of what you suggest. The interesting question to me is: can we learn something from Jaynes’ book about how we might move towards a possible breakdown of the bicameral mind we have created?
Jaynes was wrong about the actual human mind, but his theories still might apply.