Neither voice nor mixed reality interfaces are new: Bluetooth headsets accepting voice commands and augmented reality apps built around fiducial markers have been around for years. But the user experience of both has ranged from novelty at best to frustration and complexity at worst.
One reason for this has been immature technology; another is that we've been trying to layer entirely new classes of interface on top of interaction models designed for desktop or mobile. The technical constraints are now beginning to evaporate, with public releases of technologies like Amazon Echo, Microsoft HoloLens and Google Tango over the past year. That leaves the interaction models.
When users have a large, high-resolution screen, they can comfortably consume a relatively high volume of content in a non-linear way. They can dip in and out, discover new content, and move seamlessly between text, audio, and video. Navigation happens through hyperlinks, which are intuitive to most users by now and easy to measure through analytics platforms. Mobile interfaces are similar, though with more navigation gestures available, such as swiping and pinching.
At CES this year, we're starting to see an understanding emerge of how different the interaction models are for these new interfaces. Amazon Echo is everywhere, with products either integrating with the Echo through Amazon's Alexa Skills Kit or embedding Echo technology directly. The Lynx from Ubtech Robotics embeds Amazon Echo technology in a robot body, adding facial recognition and movement. The Abilix Oculus robot is another voice-controlled domestic robot, although this one runs its own natural language processing in the cloud rather than integrating with Echo.
Somfy Systems showcased the Somfy One, a connected home security device with embedded Echo technology. This allows the user to control Somfy products throughout the home via voice commands, but also provides all the integrations that the Echo provides by default.
DigitalStrom demoed a smart home platform that converts non-connected products into connected ones via an adapter that plugs into a standard power outlet. Their demo included integrations with both Amazon Echo and Aldebaran Robotics' Pepper robot.
Qorvo provided another example of a network layer designed to underpin a consolidated smart home of connected devices, with direct Echo integration for voice control. One of their use cases is senior lifestyle services, using technology to improve the standard of living for senior users.
Augmented and mixed reality products and services were flourishing, with much of the activity centred on gaming. Design Mill demoed Torch, a mixed reality gaming platform consisting of gaming software running on Intel RealSense-compatible hardware. eyeSight presented singlecue, a device that adds gesture control to users' existing home devices. Whilst this isn't strictly mixed reality, the interaction model is very similar. Another application of their technology was an in-car monitor that detects drowsiness or loss of attention in the driver and fires alerts.
ODG's smart glasses gained a huge amount of attention. The company has produced successful enterprise products for some time, but for the first time we saw the SmartGlasses 8, a lightweight consumer version of its core product. It runs a version of Android and requires no tethering to other devices. Gesture control isn't built into the system, but a small network of partners provides gesture control APIs, so application developers can make their applications gesture-enabled.
As digital marketers, we need to understand the shift in interaction models when working with new interfaces. Until natural language processing improves further, voice control is better suited to utility than to content: issuing commands to make something happen. When content is required, it should be far more transactional than on a desktop or mobile device: simple questions with clear responses. There are currently no real standards for gestures in mixed reality interfaces, so we need to watch for standards as they emerge and, in the meantime, consider the user's context. Flamboyant or odd gestures could put users off, especially in public spaces, so stick to discreet, simple movements.
We should also look at where we can partner and integrate rather than trying to build systems ourselves. The ubiquity of Amazon Echo integrations, and the ease with which they can be built, makes for amazing experiences without a lot of investment in the technology. This is an incredibly exciting time for digital agencies and our clients: new interaction paradigms are emerging, giving us the opportunity to create unique and valuable experiences for users.