Now that everyone’s panicking about the atomic bomb dropped by Linden last month when they announced their successor to Second Life (which, I’m now given to understand, has nothing whatsoever to do with competing in a suddenly rapidly expanding market and is just the next step in the company’s mission to screw residents in every last way achievable), I thought it might be a good moment to start thinking about the ways in which a ‘next generation’ virtual world could differ from the present one.
A new metaverse which works in broadly the same way as the present one – albeit with better graphics, less lag, and full immersion via the Oculus Rift – might sound like a good thing, but would it really capture the imagination of the masses? A lot of us thought 3D cinema was a new and amazing thing when Avatar was released a few years back, but when it came to buying a 3D TV, few people could really be bothered, and Nintendo’s 3DS handheld console – complete with its built-in 3D camera that would let us all record our moments in stereoscopy – failed to capture the public’s imagination (though, admittedly, not as badly as the Wii U did). If SL2 really is going to capture the attention of hundreds of millions of people rather than just millions, as Linden CEO Ebbe Altberg has recently stated is its objective, it will need to bring with it something genuinely new. The same is true of VR more generally. To my mind, one such thing is objects with function.
Many objects in SL do already have function, but it’s an extremely limited function. You can sit on a chair. You can lie on a lounger. You can open a door. You can close your blinds. Perhaps the most sophisticated functional object I’ve seen so far is one of those fancy television screens that links to channels showing old movies or plays YouTube videos: it’s a way of watching something with someone, for sure, but it’s hardly bringing into being something that can’t be done outworld. No. The sort of function I’m thinking of is far more complex.
Just over a year ago, I was fortunate enough to get a short tour of future concepts being developed by IBM. These included a facial recognition system for use in commercial environments (remember those billboards in Minority Report that changed when Tom Cruise walked past them to show him personalised adverts? – that technology exists right now) and a remote-control toy car that you can drive with your mind. But centre stage for me was the big black table in the room with a surface that acted like a giant iPad. If it had actually just been a giant iPad it wouldn’t really have impressed me all that much; what blew my mind was the way in which it was possible to manipulate documents on this thing: you could spread them all around you like pieces of paper, you could tap one to bring up a localised keyboard alongside it for editing; when you were done with it you just pushed it to one side for filing. We’ve seen similar fictional systems in movies like Quantum of Solace and, more recently, The Amazing Spider-Man 2; what I saw at IBM was nowhere near as whizz-bang as either of these, but it was real and – by God – it worked.
I’m particularly excited by technology such as this because for years I’ve struggled with the concept of the ‘paperless office’. I’ve been interested in computers for over thirty years now, but my enjoyment and knowledge of them hasn’t yet stretched to accepting a replacement for paper in my everyday work. Sure, I use a PC to write reports and emails like everyone else, but the moment two documents are required for any particular job, I start reaching for the print button. To give you an example, when I’m marking an essay I need to see both the essay itself and the marking grid I use: I could switch between them on my PC screen, but I dislike doing so intensely. I want to see them side by side, so I end up printing both essay and grid, completing the latter by hand and then typing it up later. It’s an inefficient way of working, I know, but it’s the best fit there is for the way in which I need to think. For people like me, then, the interactive surface I saw at IBM represents a way in which the paperless office could actually happen.
But do I see such technology turning up in regular office spaces such as mine in the near future? I do not. The cost is likely to be prohibitive without a mass market to sell to, and a mass market is likely to be very difficult to establish when – quite apart from anything else – people are living in smaller and smaller spaces. If 3D TVs costing hundreds of pounds were a difficult sell, I can hardly imagine interactive tables costing thousands or tens of thousands of pounds walking their way into people’s dining rooms.
But virtual reality might just be the way through which people like me could access this way of working, and at a fraction of the price. I sit at my regular table or desk, put on my Oculus Rift and activate/teleport to my office in the virtual world: there I’m sitting at an interactive desk where I can spread all my electronic documents around me and work on them in the manner that suits me. So what I feel through my fingers is the surface of my real-life desk, but what I see is my interactive desk with all its documents and applications. The system would of course be linked to a cloud storage account so that the work I do inside the metaverse can be accessed outside it: swiping a document into a particular folder on my desk would store it in – let’s say – my Dropbox account, and I would then be able to bring it up in the real world on a PC or tablet.
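The cloud-storage part of this, at least, needn’t be exotic: a sync client such as Dropbox simply mirrors a local folder, so all the virtual desk would really have to do is map each of its folders to a directory on disk. Here’s a minimal sketch of that idea in Python – the `VirtualDesk` class and folder names are entirely hypothetical, purely for illustration:

```python
import shutil
from pathlib import Path


class VirtualDesk:
    """Hypothetical sketch: maps folders on a virtual desk to local
    directories that a sync client (e.g. Dropbox) mirrors to the cloud."""

    def __init__(self, folder_map):
        # folder_map: e.g. {"Marking": "~/Dropbox/Marking"}
        self.folder_map = {name: Path(path).expanduser()
                           for name, path in folder_map.items()}

    def swipe_to_folder(self, document: Path, folder_name: str) -> Path:
        """'Swiping' a document files it: copy it into the mapped
        directory, where the sync client picks it up for upload."""
        target_dir = self.folder_map[folder_name]
        target_dir.mkdir(parents=True, exist_ok=True)
        destination = target_dir / document.name
        shutil.copy2(document, destination)
        return destination
```

Once the file lands in the mapped directory, the sync client does the rest; the hard part of the system is the gesture recognition and rendering, not the filing.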
There would be other benefits to working this way. Rather than being an isolated room, my office in virtual reality could be connected to the virtual offices of all my co-workers so that we could use the interactive desks for meetings or joint working. Whole buildings could be constructed in the metaverse for individual companies or organisations: buildings where people actually work rather than the business-themed dolls’ houses we see in SL composed of empty room after empty room. Working from home would never have to be the solitary thing that it is now, where contact with other people comes in the form of emails and the occasional phone call.
Is current technology up to this? I don’t know. I’ve not had any experience so far of using a virtual reality headset, so it might be that my expectations don’t quite match the reality of this technology as it stands at the moment. It might be, for example, that the graphics resolution isn’t yet good enough for me to read the text on documents comfortably without enlarging it significantly or leaning in to see it. Also, in addition to the headset, some sort of device would be required for reading my hand and finger movements. I know that the Microsoft Kinect is capable of reading body movement, but I don’t know whether it’s fine-tuned enough to distinguish between different virtual key presses or to keep up with my typing speed. A system that constantly produced typing errors because it was only 99 per cent accurate would be infuriating.
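To put a number on that last point, 99 per cent per-keystroke accuracy sounds impressive until you work through the arithmetic. A quick back-of-the-envelope calculation (the page and sentence lengths are illustrative figures, not measurements):

```python
# How bad is 99% per-keystroke recognition accuracy in practice?
accuracy = 0.99
keystrokes_per_page = 1500  # roughly a 300-word page, illustrative

# Expected number of misrecognised keystrokes per page:
expected_errors = (1 - accuracy) * keystrokes_per_page
# ~15 errors on every single page you type

# Probability that a 60-character sentence comes out with no errors at all:
clean_sentence = accuracy ** 60
# ~0.55 – nearly half of all sentences would contain at least one error
```

In other words, at 99 per cent accuracy you’d be correcting something in almost every other sentence – infuriating indeed.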
Then there’s the creation of the document management software itself. Whilst not beyond the scope of technology today (as I saw at IBM), this would be no small issue: it would effectively be the creation of a whole new operating system, the sort of thing it takes Microsoft, Apple and Google years to develop (and, in the case of Windows, still get wrong). I say it wouldn’t be beyond the scope of technology today, but there I’m thinking of a system for use in real life: implementing such a thing in a virtual world would require an inworld scripting system light years ahead of what’s achievable with something like Linden Scripting Language. And it would require lots and lots of processing power.
But this is future-gazing, and from the vantage point of a period in time that’s not yet even the beginning of the virtual reality era. Whatever starts to emerge next year is certain to be improved upon quickly. And it’s been acknowledged by the current architects of virtual reality that VR as yet has no ‘killer application’ – the concept that might make it a must-have rather than a novelty or niche interest. The first ever killer app, incidentally, was VisiCalc, the first spreadsheet program (for the Apple II computer). Can you imagine working life now without spreadsheets, or any of the other killer apps that succeeded them, such as word processing software or email?
I realise you were probably hoping for something a little more exciting from the metaverse than yet another reworking of the way you use a word processor, but it might just be that one day you won’t be able to imagine working life without your virtual reality office.