Part one: Will copies of human beings one day end up in the metaverse?
In my upcoming novel, Beside an Open Window, human beings make regular digital scans of their brains while alive so that these can be activated in a vast online world once they die. The book is set sixty years into the future, and ‘dead residents’ interact in this world with living residents, who access it in much the same way as we access Second Life® today.
The idea of creating brain scans is one I’ve been thinking about for several years. Some time before SL existed, I remember wondering whether it might one day be possible to create ‘archived’ copies of brains on computers. At the time, my interest was less in extending human existence in some way and much more in preventing the loss of people’s thoughts and memories. I think this had a lot to do with the death of my father, whom I missed profoundly and whose thinking and experiences I considered a genuine loss to his fields of interest.
When I got into SL, the idea that such archives might connect to the metaverse – and thereby have natural movement in a virtual world – was very compelling to me. I hadn’t previously put a great deal of thought into how digital brains might interact with the world, oscillating broadly between a very basic ‘brain-in-a-jar’ scenario, in which an archive was switched on periodically for electronic consultation, and the full-blown (and, frankly, unlikely) ‘holodeck’ notion promoted in Star Trek. Somewhere in the middle of all that I’d also thought fleetingly about uploading brain scans to robots – an idea I later discovered was explored by Janet Asimov in her novel Mind Transfer.
In a virtual world, however, a brain could roam about freely in a virtual body, consuming only a fraction of the energy and cost of any robot or far-fetched holodeck idea.
Could such a thing, then, actually be possible? There are a few conditions which would have to be met. Firstly, it would have to be possible to scan a brain at a resolution fine enough to identify individual molecules. Memory is stored as pathways through networks of neurons: the route an individual signal takes is determined by the quantities of neurotransmitter chemicals passing across the tiny gap between one neuron and the next – the synapse – and by the receptiveness of the receiving neurons to those chemicals. Only by knowing the exact state of all of this could we create a scan that was in any way functional. No such technology exists today, although the resolution of brain scanning is continually improving. By interesting coincidence, one recent innovation allowing users to scan and view their brain activity in real time – ‘Glass Brain’ – has been co-developed by none other than SL’s own Philip Rosedale.
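The point about synapses can be illustrated with a toy model – a sketch of my own, not a real neural simulation, with all names and numbers invented for illustration. Which path a signal takes through a network depends entirely on the ‘weights’ at each synapse (neurotransmitter quantities and receptor sensitivity):

```python
# Toy illustration: the same incoming signal takes different routes
# depending only on synaptic "weights". Invented numbers, not a real
# neural model.

def fires(inputs, weights, threshold=1.0):
    """A neuron fires if its weighted input meets a threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

# Two downstream neurons receiving the same signal from two upstream ones.
signal = [1, 1]
neuron_a_weights = [0.8, 0.4]   # strongly receptive -> fires
neuron_b_weights = [0.2, 0.3]   # weakly receptive  -> stays silent

print(fires(signal, neuron_a_weights))  # True
print(fires(signal, neuron_b_weights))  # False
```

A functional scan would have to capture every one of those weights, for every synapse in the brain – which is why molecular resolution matters.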
Secondly, we would need gigantic computer memory capacity for storing brain scans. One estimate I read recently put the number of atoms in a human brain at something approaching 500 trillion trillion. Assuming this is true, and assuming we assigned one byte of computer memory to the description of each atom, my back-of-an-envelope calculations indicate we would need something in the region of 500 trillion terabytes to store all this. Applying Moore’s Law to computer memory growth – starting at 8 gigabytes for a mid-price desktop system today and doubling every two years – we might predict that the computers of 2064 will have memories in the region of 250,000 terabytes, which is rather a long way short of what we’d need. Add another sixty years of Moore’s Law progression, however, and you’re pretty much there.
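The arithmetic can be checked in a few lines. This is a sketch only, assuming one byte per atom, decimal terabytes, a doubling period of two years, and an 8 GB desktop as the starting point today (I’ve taken 2014 as the base year):

```python
# Back-of-envelope check of the storage estimate above, assuming one
# byte per atom and Moore's-Law doubling of memory every two years.

ATOMS_IN_BRAIN = 500e12 * 1e12     # "500 trillion trillion" atoms
BYTES_NEEDED = ATOMS_IN_BRAIN      # one byte per atom
TB = 1e12                          # decimal terabyte

tb_needed = BYTES_NEEDED / TB
print(f"Storage needed: {tb_needed:.0e} TB")  # 5e+14 TB, i.e. 500 trillion

def memory_in(year, base_year=2014, base_gb=8, doubling_years=2):
    """Projected desktop memory (in TB) after Moore's-Law doublings."""
    doublings = (year - base_year) / doubling_years
    return base_gb * 2**doublings / 1000   # GB -> TB

print(f"2064: {memory_in(2064):,.0f} TB")  # 268,435 TB - "250,000-ish"
print(f"2124: {memory_in(2124):.1e} TB")   # 2.9e+14 TB - nearly there
```

Fifty years gives 25 doublings (a factor of about 33 million); the further sixty years give 30 more, landing within a factor of two of the 500-trillion-terabyte target.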
Thirdly, we’d need to be able to bring these scans to life: their data would have to mean something to the computers they’re loaded into, just as a JPEG means something and an MP3 means something else. We’d need to understand the precise function of neurons and brain chemistry in order for this to happen, such that each neuron’s data description could be turned into a fully emulated brain cell once the model is switched on and digital blood applied. We’d need to know how visual input is encoded in the eye and sent down the optic nerve if we want our dead people to see in the metaverse, and how auditory input is encoded in the cochlea if we want them to hear. Sensory input, in fact, would be a huge area for further research: contrary to popular belief, the brain receives input far more complex than just ‘the five senses’. For example, shut your eyes and hold your hand at arm’s length, then move it towards your nose, stopping just short of touching it: how did you know where your hand was, in terms of what sight, sound, smell, taste or touch were telling you?
Even supposing we work out how to do all these things, however, there could still be another enormous barrier to emulating the mind: consciousness, without which a human brain is nothing. In Beside an Open Window, the theory of consciousness as emergent behaviour is assumed. Emergent behaviours are apparently organised behaviours that arise from the simpler behaviours of large collections of individual components. The seemingly simultaneous movements of flocks of birds or shoals of fish – movements which give the impression of an organised whole rather than lots of disorganised individual parts – are examples of this. In science fiction, the idea of higher-order behaviours arising out of the more mundane work of component individuals has most famously been explored in Star Trek through the notion of the ‘hive mind’ of the Borg. Human consciousness as an emergent behaviour of neurons – ultimately, then, an illusion of sorts – is something that ‘just happens’… but would it happen also in a digitally modelled brain? That’s the sort of thing we can’t possibly know until we actually try it out.
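The flocking example can itself be sketched in code. The following is a toy ‘boids’-style simulation of my own devising (all the numbers are arbitrary, and it’s an illustration, not anything from the novel): each agent follows only three local rules – drift towards nearby agents, avoid crowding them, and match their heading – yet the group’s overall alignment typically rises well above where it started, an ordered whole that no single rule describes.

```python
# Toy boids-style flock: three local rules, globally emergent alignment.
import math, random

random.seed(1)

N, RADIUS = 40, 20.0
# Each agent is [x, y, vx, vy] in a 100x100 region, random heading.
agents = [[random.uniform(0, 100), random.uniform(0, 100),
           random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def polarisation(agents):
    """0 = headings fully disordered, 1 = everyone moving the same way."""
    sx = sy = 0.0
    for x, y, vx, vy in agents:
        speed = math.hypot(vx, vy) or 1.0
        sx += vx / speed
        sy += vy / speed
    return math.hypot(sx, sy) / len(agents)

def step(agents):
    new = []
    for i, (x, y, vx, vy) in enumerate(agents):
        nbrs = [a for j, a in enumerate(agents)
                if j != i and math.hypot(a[0] - x, a[1] - y) < RADIUS]
        if nbrs:
            cx = sum(a[0] for a in nbrs) / len(nbrs)  # cohesion target
            cy = sum(a[1] for a in nbrs) / len(nbrs)
            ax = sum(a[2] for a in nbrs) / len(nbrs)  # alignment target
            ay = sum(a[3] for a in nbrs) / len(nbrs)
            vx += 0.01 * (cx - x) + 0.1 * (ax - vx)
            vy += 0.01 * (cy - y) + 0.1 * (ay - vy)
            for a in nbrs:                            # separation
                d = math.hypot(a[0] - x, a[1] - y)
                if d < 5:
                    vx += (x - a[0]) * 0.05
                    vy += (y - a[1]) * 0.05
        speed = math.hypot(vx, vy) or 1.0             # keep unit speed
        vx, vy = vx / speed, vy / speed
        new.append([x + vx, y + vy, vx, vy])
    return new

before = polarisation(agents)
for _ in range(200):
    agents = step(agents)
after = polarisation(agents)
print(f"order before: {before:.2f}, after: {after:.2f}")
```

No agent knows about the flock, yet the flock appears – the same trick consciousness would have to pull off among emulated neurons.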
Supposing, then, that consciousness does happen, what would existence be like for these resurrected brains? What would they do? What would it be like to live in a digital world and only be able to look back into the real one, as though through a window? I’ll examine some of these issues in part two of this article next weekend.