From Text to Graphics and Back in the Metaverse

All of us humans are data processing machines: we acquire information from our environment through our senses; proceed to digest the flow of information, recognising patterns across several layers of abstraction; and finally act on those patterns, executing a series of steps according to the goals that we have set for ourselves.

Many of our activities are independent of the underlying processing: the kind of data involved and how it is acquired. For example, when we are driving a car, we are not thinking about how we are turning the steering wheel or how we are pressing the accelerator. We are thinking about where we want to go and what we need to do to get there. When we are talking to someone, we are not thinking about the mechanics of speech. We are thinking about what we want to say and how we want to say it. When we are reading, we are not thinking about the mechanics of reading. We are thinking about the meaning of the words and how they are advancing the plot.

When building metaverse experiences we can take advantage of this knowledge, achieving greater integration into day-to-day activities, smoother adoption by a wider set of potential users, and a broader exploration of the frontiers of this new interactive medium. There is no single interface that we must think of when we think of the Metaverse.

The Role of Imagination

Reading and writing are a great example of a medium that supports an unbounded degree of identification with the experience, stimulating our imagination. The text-based adventures of the 80s were for many an introduction to personal computers, as well as to the first concepts of programming, through editing the worlds they described.

Even before personal computers were widely available, the role of a game master was central to tabletop games such as Dungeons and Dragons, which required great amounts of creativity and active participation from the players, who verbalized the actions of their characters.

(Thanks to Archimedix for a recent stimulating conversation in Turin, Italy around these topics.)

Now that AI systems are ever more proficient at understanding and generating content in natural language, text can again be the basis of valuable and entertaining interactions, in the Metaverse too.

Induced Necessity of Beautiful Graphics

At the opposite end of the spectrum, more recently we have come to believe that it is necessary to constantly push the envelope of realistic computer graphics in order to make the platform attractive and efficient. Yes, ever more powerful hardware is available in our machines, and we like to see it exploited to its maximum, but the dogmatic assumption that the platform cannot be successful unless it pushes the boundaries of what is possible at any given time is evidently false.

Even something that we take for granted today, the visor required for immersive three-dimensional visualization, is not truly necessary for a powerful interactive experience, as demonstrated by the initial success of Second Life, whose interface only required a standard monitor.

The Computer Knows that Digital is Real

When designing your digital worlds it is tempting to employ shortcuts that let you cram as many features into the experience as possible while maintaining compatibility with less powerful installed hardware. For example, you could texture-map surfaces instead of modeling their detail, or render a foggy or dark environment to limit what needs to be drawn.

As much as possible you should instead pick generated objects. Why? Because the computer knows that digital is real! The better you can describe the world in semantic terms, the better the interaction engine will be at adapting to novel needs that emerge when the world is used in unexpected ways.
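As a minimal sketch of what a semantic description buys you, consider an object that carries its material and its affordances explicitly, so an interaction engine can both render it and answer novel queries about it. The class, field, and method names here are illustrative assumptions, not any real engine's API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a world object described semantically,
# rather than baked into textures and geometry.

@dataclass
class SemanticObject:
    name: str
    material: str
    affordances: set = field(default_factory=set)  # actions the object supports

    def describe(self) -> str:
        """Render the object as a textual description."""
        acts = ", ".join(sorted(self.affordances)) or "nothing"
        return f"a {self.material} {self.name} (you can: {acts})"

    def supports(self, action: str) -> bool:
        """Let an interaction engine resolve a novel request semantically."""
        return action in self.affordances

door = SemanticObject("door", "wooden", {"open", "knock", "lock"})
print(door.describe())
print(door.supports("open"))
```

Because the door knows it is a door, the same data can drive a text rendering, a graphical one, or an unanticipated interaction, without anyone having painted that knowledge onto a texture.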

Graceful Shifting of Optimal Interfaces

We have become accustomed to picking up the phone to complete certain tasks, maybe to send an email, or to reply to a message on one of the many chat systems. When we sit down in front of the computer, our reasonable expectation is to see these recent tasks immediately reflected in the web-based interface that we will then be using.

The ideal design of a Metaverse should leverage the semantic information of its objects and interactions, making it possible to gracefully shift from one interface to another, as needed. The latest advances in generative neural networks make this possible. Future systems will have to be able to support a rich and varied set of interaction modalities, fully exploiting textual, graphical, immersive, and conversational options.
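A minimal sketch of this kind of interface shifting, with an entirely hypothetical scene structure and renderer names: the same semantic world state is rendered once as a text-adventure description and once as the kind of conversational briefing an assistant might read aloud.

```python
# One semantic world state, several renderings.
# The scene dictionary and renderer names are illustrative assumptions.

scene = {
    "location": "market square",
    "objects": [
        {"name": "fountain", "state": "flowing"},
        {"name": "stall", "state": "closed"},
    ],
}

def render_text(scene: dict) -> str:
    """Text-adventure style description of the scene."""
    items = "; ".join(f"{o['name']} ({o['state']})" for o in scene["objects"])
    return f"You are in the {scene['location']}. You see: {items}."

def render_briefing(scene: dict) -> str:
    """Conversational summary of notable changes, for a voice assistant."""
    closed = [o["name"] for o in scene["objects"] if o["state"] == "closed"]
    return f"Overnight in the {scene['location']}: {', '.join(closed)} now closed."

print(render_text(scene))
print(render_briefing(scene))
```

A graphical or immersive client would consume the same `scene` structure; only the renderer changes, which is exactly what makes the shift between interfaces graceful.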

I definitely want my AI assistant to understand what is going on in the Metaverse while I am sleeping, to describe the highlights to me while I am brushing my teeth in the morning, and to take action based on what I tell it to do as I start my day!
