Wayfinding the Future

The journey to create ora was not easy.

After years of contemplating Bret Victor’s 2011 essay “A Brief Rant on the Future of Interaction Design,” I decided to pursue his argument that, in order to create a more natural relationship between humans and computers, we should look at the primary functions of our hands: to feel and to manipulate objects. His main criticism concerns the abundance of what he calls “pictures under glass,” a phenomenon common to many projected visions of our future.

My project serves as a midway point between his ideal future and the one we are presently experiencing. Due to technological constraints, I focused on the manipulation of interfaces rather than the feeling of them. In the end, Victor served as the primary inspiration for my thesis project, and he started me thinking about gestural communication with technology. From there, I decided I wanted to create a gestural language that would serve as a way to interact with virtual and augmented reality technologies.

Movies like Minority Report, with their broad and taxing gestures, instantly came to my peers’ minds when I began talking about this project. However, this was not the kind of future I had in mind. In 2013, Fjord published an article, “Why the Human Body Will Be the Next Computer Interface,” which explored the idea of micro-gestures becoming part of our future. Micro-gestures paired with virtual/augmented reality technology were exactly what I wanted my thesis to explore.

Of course, nothing was as simple as it sounded!

When I first framed my project, I wanted to explore both touch and manipulation in our virtual futures. Unfortunately, tactility is currently impossible to simulate through technology at an affordable scale. Still, I strongly believe our future is tactile. I believe feedback through the stimulation of all of our senses will become the primary mode of interaction with technology. A natural integration, if you will.

A common representation of our future involves broad and elaborate gestures used to manipulate interfaces and complete minimal tasks; again, think Minority Report. I reject the notion that we will have to interact with the technology around us through overexerting, taxing performances. Rather, I see the future utilizing minute “micro-gestures” to run commands, fitting seamlessly within our daily lives. Saving a memory could be as simple as bringing your hand to your heart. Sharing with others could be as simple as tapping their sternum. Our hands can express a complex yet finite array of movements, and technology will be able to recognize those expressions. We will learn the gestures through immersion, much like how we learn to talk or display emotions.

Once I had a general understanding of virtual reality and gestural communication, the question I posed was: How might we visualize a gestural and tactile interface of the future? Within that question, I had to create my own parameters and assumptions to design under: that virtual reality is already real, that it is deviceless and seamlessly integrated into your life, and that haptic holography is capable and easily replicated. From an initial thesis presentation: “I am designing based on the assumption that AR/VR can be projected into your retinas...let's say by a contact lens. I also am assuming that haptic holography is real...holograms that you can see feel and touch to interact.” These parameters created a framing device for my project.

The first portion of the project I investigated was potential gestural interactions. I started by coming up with a set of instructions that one would typically perform on a traditional 2D interface. From that data, I then created what I thought would be good gestures for performing those tasks, which ranged from saving down to simple copy-and-paste commands. Once I had my preferred gestural interactions, I had others perform the same gestures in a multitude of combinations. What became apparent from those exercises was that the gestures I had come up with were not as easy for other people to perform. Afterwards, I worked with quite a few people to perfect some of the interactions, which can be seen throughout the three videos I created for my thesis.

With the gestures set, I needed something with which to begin experimentation in virtual reality. The Oculus DK2 had been decommissioned, leaving me in a predicament over how to build an interactive component for my exhibition. Luckily, late in the fall semester of my thesis year, A-Frame was released to the public, which helped me create the virtual interactive experiences within my exhibition. A-Frame is a web framework that lets you build virtual reality experiences with HTML-like markup, so working in it feels similar to writing HTML and CSS. It runs in the browser, so scenes can be viewed on a phone with Google Cardboard. With A-Frame I was able to create four virtual reality experiences, three of which were displayed in the exhibition.
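To give a sense of what A-Frame markup looks like, here is a minimal generic scene (not one of my exhibition pieces; the library version in the script URL is illustrative). Each `<a-*>` tag is an entity placed in 3D space:

```html
<!-- Minimal A-Frame scene: a box, a sphere, and a sky, viewable in any browser -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- position is "x y z" in meters; negative z is in front of the viewer -->
      <a-box position="-1 0.5 -3" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Opening a page like this on a phone and tapping the goggles icon splits the view for Google Cardboard, which is how visitors could step into the exhibition scenes.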

Along with the gestures and the VR pieces people could experience, I created videos exploring three different scenarios relating to my conceptual prototype. In the first video, I investigated what the shopping experience could be like in early models of mixed reality. In the second, I looked at what working and writing content might be like in the future. In the third, I explored what the future of making might be like, specifically for 3D artists. All of these incorporated the gesture controls developed for working in VR.

In the next phase of my research, beyond the MFA, I will be exploring how virtual and augmented reality worlds could start to function like a mind palace. Instead of digging around on your computer trying to find the exact file in the folder where you saved it, you'll be able to use spatial/external memory instead of working memory.

I've already started to redefine the computer system's archaic office analogy, and I have devised some new vocabulary:

mem – a full memory (i.e. all experiences, knowledge, etc.), similar to a hard drive

(mem) block – a collection of mem fragments, multiple units acting as one piece of memory (i.e. a photo album), similar to folders

(mem) fragment – a distillation of a whole, one unit (i.e. a picture), similar to files
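As a rough sketch, this vocabulary maps onto a simple nested data structure. The class names and the `recall` helper below are hypothetical, chosen only to mirror the hierarchy above:

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    """A mem fragment: one distilled unit of memory (e.g. a picture)."""
    label: str

@dataclass
class Block:
    """A mem block: multiple fragments acting as one piece of memory (e.g. a photo album)."""
    label: str
    fragments: list[Fragment] = field(default_factory=list)

@dataclass
class Mem:
    """A full mem: all of a person's blocks, akin to a hard drive."""
    blocks: list[Block] = field(default_factory=list)

    def recall(self, label: str) -> list[Fragment]:
        """Spatial/external recall: gather matching fragments across all blocks."""
        return [f for b in self.blocks for f in b.fragments if f.label == label]

# A photo album (block) holding two pictures (fragments)
album = Block("summer-album", [Fragment("beach"), Fragment("sunset")])
mem = Mem([album])
print([f.label for f in mem.recall("beach")])  # ['beach']
```

The point of the sketch is that recall happens by association across the whole mem, not by navigating a folder path, which is the shift from working memory to spatial/external memory described above.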

Before I close, I must say this new medium makes the future incredibly exciting for designers. As VR/AR/MR develop, some questions I have moving forward are: Will we return to a skeuomorphic approach to design? What kinds of programs will be created for designers and developers to collaborate in VR/AR/MR? Will virtual reality succeed, or fall to augmented and mixed reality? The list goes on, but that's food for thought!