Learnings From VR Research And Testing


Recently I reviewed my first foray into research, design and testing for virtual reality software as part of my role at REA Group. It was the most amazing experience and took me back to the days of 3D modelling and animation that I first learned at university.

 

In this article I detail some top-level insights from usability testing for VR in our first product launch in this space, some general research insights, and what I, as a designer and researcher in VR, would imagine doing next.

 

What has been our biggest learning about UX in VR?

 

Acquired learning in VR happens once the participants experience and interact in a repetition state

 

* There were many, but a key one is that immersion is a real “wow factor” response and users want to “dive in”.

* Usability challenges arise from multiple hardware options, non-standard controls and users’ limited familiarity with the paradigm.

* Acquired learning in VR happens once participants experience and interact in a state of repetition.

* VR can only simulate a limited reality: certain senses such as smell, touch and taste aren’t activated, and users still articulate this absence of tangibility.

* Likewise, when users had a poor experience with space or navigation, most participants in the research wanted to “get out” quickly, which could diminish re-engagement.

* Testing provided key insights into controls and screen interactions (focus versus tap or swipe on the headset).

* Issues with height (a sense of “floating”) and the legibility of text wouldn’t have been recognised if we hadn’t measured them through testing.

* Without testing, we wouldn’t have questioned our hypothesised value proposition or validated how people feel about the experience.

 

What was our biggest challenge?

 

Providing the right flow and experience given the challenge of a tertiary (z) axis and spatial-depth interactions in VR. This requires a different way of thinking for designers.

 

* Achieving an interface with depth and spatial variation, away from the 2D experience, and understanding the impact of this beyond our usual interactions with glass.

* Height is a thing and we need to deal with it (a mismatch between a participant’s height and the environment’s axes can cause discomfort).

* Controls can be confusing and we need to onboard users (perhaps with video, as text is a drag in VR).

* Hardware setups were initially complicated, and sometimes we were simply testing a response to a scene, so we created workarounds or hacks on scenes to cover this.

* Occasionally hardware failed, but we had strong tech and engineering support to get things running again.

* The density of information needed made each test session action-packed and somewhat exhausting! We needed a recovery day after each round, and at later sessions we better calibrated the density of information we were looking for. Perhaps this was because we needed to know so much.

* When we reached saturation point, the information needed to be synthesised by the team to make the next steps clear. In the end we did this via discussion and a design studio to draw out the team’s thinking around problems in VR. This helped suggest the next slice and a future horizon we could work towards for the impending delivery goal.

 

We needed a recovery day after each round of VR testing, and at later sessions we better calibrated the density of information we were looking for. Perhaps this was because we needed to know so much.

 

What could we have done better?

 

* Meshing consumer feedback into the value proposition and testing earlier, before we locked in strong decisions in the interface.

* Being confident that we can move forward and pivot even though this is new. Don’t be scared, be daring!

* We lost a designer mid-cycle, which slowed us down slightly; however, this was late in the build, after we’d done the lion’s share of the testing before the MVP 1 release decisions.

* More low-fidelity treatment testing around value-proposition issues; these could be solved outside the build.

 

Pivot with confidence even though VR is new. Don’t be scared, be daring!

 

What have we done well?

 

* How do we encourage focused and immersive interactions and guide users through spaces more clearly? I’d like to take the design language further, using VR prompts.

* A big list of usability findings and interaction design adjustments.

* An understanding of what we can do now with scene artists and 3D modellers, and a conceptual model of the default view.

* Further investigation of headset models and control differences to deliver a consistent experience.

* More interest in height adjustments and focus to help users.

 

How do we encourage focused and immersive interactions and guide users through spaces more clearly? Perhaps we can take this further using VR UI prompts.

 

What would we do differently if we started again?

 

* How do we encourage focused and immersive interactions and guide users through spaces more clearly? Let’s take it further, using VR prompts (3D objects, active states).

* Use screen/scene hacks earlier to save time for engineering.

* Question whether we used too many 2D metaphors in a 3D environment.

* Test low fidelity earlier with participants to get a health check on information and interaction design.

* Test early and often, low fidelity first, regardless of channel. This proved true with VR as well, given the impact on engineering timeframes and the difficulty of changing things in the scene.

 

Use screen/scene hacks earlier to save time for Unity engineers

 

Are there any successes we can think of that we wouldn’t have had unless we focused on this?

 

* “Wow” was often the response to the VR experience.

* Without this focus we would have had no context of the VR space, nor awareness of the delight users experienced in being able to simulate a reality-based use case in VR.

* It helped us identify bugs with real users; our engineers observed a number of live testing sessions.

* Engineers were engaged with the feedback.

* Shared understanding along the way informed decisions and resulted in sketch sessions with the full team. This helped us move forward with further VR prototypes for evidence and validation.

 

“Wow” was often the response to the VR experience

 

Other thoughts worth sharing

 

* VR is super exciting and a creator’s playground, but without rigour, awareness of usability and clear use cases it could become a place of confusion. It requires clear thinking and careful slicing to make amazing working software that reflects clear product thinking in a new and exciting market channel.

* User testing and research need to be strategised with the team’s involvement, and goals need to be clear, especially when moving quickly.

* Iterative testing worked well. We didn’t have time to create artefacts, but design studios/sketch sessions with the team worked well afterwards to clarify thinking and what was possible across product, engineering and design decisions.

 

VR is super exciting and a creator’s playground. Without rigour and awareness of usability and clear use cases it could become a place of confusion.
