The Nintendo “Virtual Boy” was released in the United States in 1995 and discontinued in 1996, making it the white whale of gaming tech in our little hometown. My youngest brother spotted one at a garage sale and traded the sweat of his summer labor for a glimpse into the future. Unfortunately, that future inflicted blinding headaches and a disturbing red image ghosting that lasted long after the experience ended.
According to Nintendo Wikia:
The 3D effects are a result of two 1×224 linear arrays, one directed to each eye, presented to the player through oscillating mirrors that cause the Virtual Boy to emit a murmur. The 3D effect can cause trauma in the ocular area (in fact, Nintendo urged parents not to let children under the age of seven play the system, since it had the potential to damage their eyes). Knowing this, Nintendo included an option in every Virtual Boy game released that pauses the game every fifteen or thirty minutes.
The end effect is closer to parallax than to true immersive reality. The result was two-dimensional gaming enhanced (arguably) by a sense of depth: “virtual” in name only.
Twenty years later, VR is back in a big way with Oculus, Samsung Gear VR, Project Morpheus, Google Cardboard, HTC Vive, OSVR, and other smaller or yet-to-be-announced players.
VR technology has taken a big step forward, but the thinking behind VR experiences has not.
Designing for a flat screen and designing for an immersive environment are two fundamentally different challenges. These are my key takeaways from experimenting with VR at Instrument:
Think like a human
A lot has changed in the 200,000 years that Homo sapiens have been walking the earth.
We’ve moved from open grasslands where we could see danger or reward… to urban spaces where we rely on signs to inform us… and ultimately to computers where we rely on GUIs to communicate.
Each of these can be seen as an interaction model in its own right:
The oldest of interaction models: we can see everything, and we are grounded. Content obeys the space. Objects in the present are close at hand; the future is on the horizon before us, the past behind.
Like the savannah, the shop implies a space you can move around in, but with a higher level of density. Content can be locked to the walls or planes inside the space.
The last 40 years have seen the rise of the digital landscape: a two-dimensional plane that abstracts familiar real-world concepts, such as writing, using a calendar, and storing documents in folders, into user interface (UI) elements. This approach allows for a high level of information density and multitasking. The downside is that new interaction models need to be learned, and decision making carries a higher cognitive load.
I suspect that the older, instinctual danger-or-reward cues are easier for us to pick up on. For example, if you’re in a VR landscape and there is a pit on one side of you and a flat road on the other, you will take the road, even though there is no real danger. Now let’s say there is a sign for the pit and a sign for the road. It takes time to read and comprehend a sign (cognitive load), and, as we know, people don’t tend to read much, so about half will head toward the pit while the rest take the road.
Use perspective to your advantage
Designers use size, contrast, and color to denote hierarchy. These tools are still available in VR, but they behave a little differently: apparent size depends on the distance between the user and a piece of content.
Content can be treated like a heads-up display (HUD), locked at a set distance from the viewer.
It can be locked to the environment, so the user’s view of the content changes as they move through it.
It can float free in space, fixed to a position in the world rather than to any surface.
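The difference between these locking modes can be sketched in a few lines. This is a simplified 2D model with hypothetical names (`cam_pos`, `cam_yaw` are assumptions, not any engine’s API): HUD-locked content is positioned in the camera’s own frame, so it follows every head movement, while world-locked content keeps a fixed position and lets the view change around it.

```python
import math

def hud_locked(cam_pos, cam_yaw, offset=(0.0, 2.0)):
    """HUD-locked: the offset is defined in the camera's frame,
    so the content follows the viewer wherever they look."""
    dx, dy = offset
    cos_y, sin_y = math.cos(cam_yaw), math.sin(cam_yaw)
    # rotate the offset by the camera's heading, then translate by its position
    return (cam_pos[0] + dx * cos_y - dy * sin_y,
            cam_pos[1] + dx * sin_y + dy * cos_y)

def world_locked(anchor_pos):
    """World-locked: the content ignores the camera entirely; the user's
    view of it changes as they move, not the content itself."""
    return anchor_pos

# Content 2 units "ahead" of the camera stays ahead as the camera turns...
ahead = hud_locked((0.0, 0.0), 0.0)            # (0.0, 2.0)
turned = hud_locked((0.0, 0.0), math.pi / 2)   # about (-2.0, 0.0)
# ...while a world anchor never moves.
anchor = world_locked((5.0, 5.0))              # (5.0, 5.0)
```

Environment-locked content works the same way as world-locked, except the anchor comes from a surface in the scene rather than free space.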
Designers now have a full field of vision to play with, and humans are used to turning their heads or whole bodies.
Despite this, designers are trying to force 2D solutions into a 3D space, just like the Virtual Boy.
There is a (sort of) understandable reason for this: a small cone of focus.
Here’s what’s actually going on inside a VR headset like Cardboard or Gear VR: a single screen is effectively split in two, dividing the resolution between the eyes. Your eye focuses on the center of each half, a cone of focus that quickly falls off toward blurriness. This leaves a fairly small, fairly low-resolution area to work with.
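Back-of-the-envelope arithmetic makes the problem concrete. The numbers below are illustrative assumptions, not the specs of any particular headset:

```python
def pixels_per_degree(screen_width_px, fov_degrees, eyes=2):
    """A single phone screen is split between two eyes, and each eye's
    half is then stretched across a wide field of view."""
    per_eye_px = screen_width_px / eyes
    return per_eye_px / fov_degrees

# Assume a 2560-px-wide phone screen and a ~90-degree per-eye field of view:
ppd = pixels_per_degree(2560, 90)
print(round(ppd, 1))  # 14.2 -- a coarse canvas compared to a desktop monitor
```

Roughly 14 pixels per degree under these assumptions, and only the center of that is in sharp focus, which is why dense interfaces fall apart in the headset.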
There are several ways to solve for the small cone of focus. Here are a few options, using a common tile menu:
1 Flat: A common solution
The interface is skinned onto the 3D space. Text and images are difficult to read in perspective, and there is no sense of grounding in the space. It’s a wall.
2 Curved: Marginally better
The content curves around the user so the tiles always face them, making text and images much easier to read.
3 Less content: Better
Showing less content at once is better, even if that means giving the user some way to move through it.
4 Surrounded: Best
Hierarchy can be implied by nearness to the cone of focus. Secondary content can be pushed out of immediate view but still remain accessible.
…or separate complex interactions across devices. For example, the pixel density of a phone in Google Cardboard is fairly coarse. Rather than trying to put a complex, dense interface into the VR environment, use the phone itself to get where you need to go, then jump into VR to explore.
Build to scale
Technology is going to improve. Headsets will get lighter, screens will become denser, and we will have more ways to interact with virtual environments. Right now those inputs are fairly limited and can be platform dependent, but their affordances do not need to be.
Affordance is a common term in user experience design that means:
“A situation where an object’s sensory characteristics intuitively imply its functionality and use.” (usabilityfirst.com)
A simple version of this can be found on the web. Roll over a text link and the arrow cursor changes to a hand, signifying that something will happen if you click. Rolling over the link with a mouse, trackpad, or stylus does not change the affordance; it stays a hand. We are conditioned to expect the same behavior regardless of input.
VR will need affordances to indicate what can be interacted with and when that interaction takes place. The sensory display of those affordances should scale with the technology, just as screen affordances have. A highlight that occurs on gaze today should still work for hand tracking, micro-gestures, or a mind event tomorrow.
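One way to keep an affordance stable while inputs evolve is to decouple it from the input method behind an abstract event. This is a sketch with hypothetical names, not any SDK’s API:

```python
class Highlightable:
    """A tile whose affordance (a highlight) responds to an abstract
    'focus' event, regardless of which input produced it."""

    def __init__(self):
        self.highlighted = False
        self.focus_source = None

    def on_focus(self, source):
        # gaze today, hand tracking or micro-gestures tomorrow:
        # the affordance itself never changes
        self.highlighted = True
        self.focus_source = source

    def on_blur(self):
        self.highlighted = False
        self.focus_source = None

tile = Highlightable()
tile.on_focus("gaze")           # highlight appears for gaze...
tile.on_blur()
tile.on_focus("hand-tracking")  # ...and identically for hand tracking
```

The input layer decides what counts as “focus”; the content only knows that it is focused, so new input devices can be added without redesigning every affordance.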
Focus on the experience
Virtual Reality is immersive; design should support and enhance the user’s sense of presence in the virtual environment.
* Avoid rapid movement; it makes people sick.
* If there is a horizon line, keep it steady. A rolling horizon in VR is like a rolling horizon on a ship: not good.
* Avoid rapid or abrupt transitions to the world space; they are very disorienting.
* Do not require the user to move their head or body too much. Not only is this disorienting, but the user may be wearing their headset somewhere they cannot turn around, like on a plane.
* Be careful about mixing 2D GUI and 3D; the change can be jarring.
* Keep the density of information and objects on the screen low, much lower than in standard screen design. Not everything has to be in view.
* Use real-world cues when appropriate.
* Bright scenes are fatiguing.
* When in doubt: test, test, test.
Where do we go from here?
The way we experience the world is changing and will continue to change drastically. When we look back in 20 years, the move to AR/VR will be seen as being as impactful as any of the major paradigm shifts of the twentieth century, including the internet.
It’s especially exciting to work in experience design right now. The problems are all new — we’re not bound by old interaction models. We can and will fail, but the successes will change how we experience the world.