Augmented Reality Is Coming To Your Ears

December 25, 2016

Headphones and earpieces that filter out unwanted noise and deliver notifications straight to our ears are the future of augmented reality.

 

Music is a joy - until you start learning how to play it. Then it can be difficult and demoralising. Unless, that is, you practise with Amped, an app that algorithmically transforms bum notes into tuneful ones. Created by Finnish-Swedish startup Zoundio, Amped deconstructs harmony and chord structure to blend users' playing with existing tracks. The first instrument it's designed for is the electric guitar. Plug in, follow the lessons, and even if your real playing is hesitant or jarring, it sounds good through your headphones.
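Zoundio has not published Amped's algorithm, but the core trick - snapping whatever note you actually played to the nearest note that fits the current chord - can be sketched in a few lines of Python. Everything below, from the chord table to the function name, is an illustrative assumption, not Zoundio's code:

```python
# Hypothetical sketch of chord-aware pitch correction in the spirit of
# Amped. A detected note (as a MIDI number) is snapped to the nearest
# tone of the chord currently playing in the backing track.

CHORDS = {
    "E_minor": {4, 7, 11},  # pitch classes E, G, B (C=0 ... B=11)
    "A_major": {9, 1, 4},   # pitch classes A, C#, E
}

def snap_to_chord(midi_note: int, chord: str) -> int:
    """Return the chord tone closest to the note actually played."""
    octave = midi_note // 12
    # Candidate chord tones in this octave and its neighbours.
    candidates = [
        12 * o + pc
        for o in (octave - 1, octave, octave + 1)
        for pc in CHORDS[chord]
    ]
    return min(candidates, key=lambda c: abs(c - midi_note))

# A fluffed F (MIDI 65) over E minor comes out as a clean E (MIDI 64).
print(snap_to_chord(65, "E_minor"))
```

A real system would also have to detect pitch from the raw guitar signal and do all of this with imperceptible latency - which is where the engineering lives.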

 

Amped is an example of aural - not visual - augmented reality (AR). Mistakes on Amped aren't erased completely; when you miss a beat or a chord, you'll hear a discordant note. But because the overall effect is good rather than bad, you hear what you wished you sounded like, so your motivation doesn't fall away. Using Amped feels counterintuitive, even faintly sinful. (After all, isn't learning meant to be hard?) But here's the thing: it works.

 

Augmented reality is most often described as a digital overlay on physical reality. Its true promise, however, is not technical, but sensory. As its root meaning of "increase" or "expand" suggests, augmentation digitises our senses, giving us virtual powers in the physical world. From Google Glass to Microsoft's HoloLens, the focus so far has been on vision. Now augmented aural reality (AAR) is here and, in the near future, ready to use.

 

Take Nura, a pair of "in-ear and over-ear" headphones that, in summer 2016, raised $1.88 million (£1.3m) on Kickstarter. Hearing varies vastly from person to person; by measuring the minute vibrations of the inner ear (an adaptation of the test used to screen babies for deafness), Nura tunes its output to fit each wearer's earprint. Developed at the HAX accelerator in Shenzhen, the $399 devices are scheduled to ship in spring 2017. They are, in effect, a hearing aid for the hearing.
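Nura's measurement technique - reading those inner-ear vibrations - is the clever, proprietary part. The correction step it feeds is conceptually simpler: boost the frequency bands a particular listener hears poorly. A minimal sketch, with made-up band edges and gains standing in for a real earprint:

```python
import numpy as np

# Minimal sketch of "earprint" correction: scale each frequency band of
# the audio by a per-listener gain before playback. The band edges and
# gains below are invented for illustration, not Nura's parameters.

def apply_earprint(audio: np.ndarray, sample_rate: int,
                   band_edges_hz: list, band_gains_db: list) -> np.ndarray:
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    for (lo, hi), gain_db in zip(zip(band_edges_hz, band_edges_hz[1:]),
                                 band_gains_db):
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10 ** (gain_db / 20.0)  # dB -> linear gain
    return np.fft.irfft(spectrum, n=len(audio))

# A listener whose sensitivity dips in the upper mids gets +6 dB there.
edges = [0, 250, 2000, 8000, 20000]
gains = [0.0, 0.0, 6.0, 3.0]
```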


 

Other aural AR devices go further than correction. San Francisco-based Doppler Labs' 20p-sized wireless Here One earbuds can isolate specific sounds, allowing the listener to cancel out the noise of a building site, for instance, or focus on a conversation in a busy restaurant, a process Doppler calls adaptive filtering. The company's previous earbuds, Here Active Listening, were designed for live music and had to be adjusted via smartphone, but even so, the effect could be impressive: they picked up the bass in a badly equipped club far better than natural hearing. Launching in November, Here One costs $299. "They're actually meant for everywhere, not just places where you'd wear headphones," says Doppler Labs CEO and co-founder Noah Kraft.
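Doppler has not detailed Here One's signal chain, but "adaptive filtering" has a textbook meaning: a filter that continuously re-learns what the unwanted sound looks like and subtracts it. The least-mean-squares (LMS) canceller below is the classic classroom version, offered only as a sketch of the principle:

```python
import numpy as np

# Classic LMS adaptive noise cancellation. A reference microphone hears
# mostly noise; the filter learns to predict that noise inside the
# primary signal and subtracts it, leaving the sound you want.

def lms_cancel(primary: np.ndarray, reference: np.ndarray,
               n_taps: int = 32, mu: float = 0.01) -> np.ndarray:
    weights = np.zeros(n_taps)
    cleaned = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]    # latest reference samples
        noise_estimate = weights @ x
        error = primary[n] - noise_estimate  # error = cleaned sample
        weights += mu * error * x            # nudge filter towards noise
        cleaned[n] = error
    return cleaned
```

Isolating one voice in a crowded restaurant is a much harder, multi-microphone variant of the same idea.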

 

This technology's insight is psychological, or, to be more precise, psychoacoustic. "Psychoacoustics helps us detangle what's happening in the physical world from what's in our minds," explains Nura co-founder Kyle Slater. Psychoacoustics allowed MP3 designers to strip out almost all the information in a recording yet leave a sound that's coherent to the human ear. In Slater's case, doctoral work on hearing impairments led to the realisation that headphones could communicate through touch as well as sound. "Nura uses the same technology to enhance the way we connect to the bass and the beat," he explains.
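The MP3 trick Slater alludes to can be caricatured in a few lines: quiet spectral components sitting in the shadow of loud ones are inaudible, so a codec can throw them away. Real codecs use far more detailed perceptual models, per critical band; this is only the crudest version of the idea:

```python
import numpy as np

# Crude caricature of psychoacoustic masking. Spectral components more
# than `threshold_db` below the loudest component are treated as
# inaudible and discarded. Real codecs model masking band by band.

def mask_spectrum(audio: np.ndarray, threshold_db: float = -40.0) -> np.ndarray:
    spectrum = np.fft.rfft(audio)
    magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    audible = magnitude_db > magnitude_db.max() + threshold_db
    return np.fft.irfft(spectrum * audible, n=len(audio))
```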

 

Using psychoacoustics in software is very much a work in progress. "Understanding acoustic signals is still at the beginning," says Michael Breidenbrücker, co-founder of Last.fm, who has been working on augmented aural products since the 1990s. "There are very basic acoustic problems we still don't understand." Computers can be trained to see that a person standing behind a car is distinct from the vehicle, but when the same effect occurs acoustically - a bell ringing during a speech, say - the system is flummoxed. "We are using simple algorithms," says Breidenbrücker. Visual bias has left audio lagging behind.

 

Even with this disadvantage, augmented headsets have a huge benefit when it comes to wearability. AR glasses have not changed fundamentally since Google's "glassholes" were assaulted in public in the summer of 2014. By contrast, if you're reading this at work or in a public place, chances are you'll be surrounded by headphone-wearing people leaking muffled beats. We are less precious about our ears. Consider this: humans actually wore Bluetooth headsets in public.

 

The point of AAR, its proponents say, is mindfulness. "It's about being present," says Kraft. "We think about it as a way in which we can optimise the world."

 

Some experts are even describing AAR as the next platform in computing. Kraft discusses targeted promotions for opted-in mall shoppers, or announcements directed at certain sections of stadia. "The ear is the last mile to the brain," says Breidenbrücker. "Whoever owns that last mile will be in a very powerful position." With AI voice assistants - Siri, Google Assistant, Cortana, Echo, Viv et al - increasing in power and prominence, a world in which we control our devices entirely by talking to them grows closer. Imagine: then we'll be distracted not by our phones, but by our headsets, pushing a stream of notifications directly into our ears.

 

Even if in-ear alerts prove too disruptive, the mere act of acoustic filtering is itself a dystopian prospect. Listening is democratic because it is, in most cases, passive; it takes what it is given without fear or favour. AAR presents a processed version of the world, loud noises sanded down, strident voices washed away. Social media already cuts us off from differing social and political views. Will AR extend this principle into the physical world? Hear no evil, then - once the glasses arrive - see no evil?

 

Perhaps. But AAR brings possibilities as well as worries. One augmented aural device due to arrive in 2017 promises translation in near real time. That's the claim of Waverly Labs, a New York-based startup that raised more than $2 million on Indiegogo for Pilot, its language decoder. The questions surrounding the device - there have been no live tests, and many in the industry doubt it will ever come to market - suggest such assertions should be taken with a pinch of salt, but the prospect is there all the same. "Real-time translation will happen in the next two to three years," says Kraft. "This can be done," adds Breidenbrücker. Once it is, we'll all be able to speak to each other. Hopefully we'll like what we hear.
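Whatever hardware Pilot ships with, the pipeline it implies is well understood: recognise speech, translate the text, synthesise it in the other language. The hard part is doing all three in a few hundred milliseconds on an earpiece. The stubs below are hypothetical placeholders - Waverly Labs has published no API - sketched only to show the shape of the problem:

```python
# Sketch of a speech-to-speech translation pipeline. Every name here is
# a placeholder to be wired to real ASR, MT and TTS engines.

def speech_to_text(audio: bytes, lang: str) -> str:
    raise NotImplementedError("plug in a speech recogniser")

def machine_translate(text: str, source: str, target: str) -> str:
    raise NotImplementedError("plug in a translation engine")

def text_to_speech(text: str, lang: str) -> bytes:
    raise NotImplementedError("plug in a speech synthesiser")

def translate_speech(audio: bytes, source: str, target: str) -> bytes:
    text = speech_to_text(audio, lang=source)             # 1. recognise
    translated = machine_translate(text, source, target)  # 2. translate
    return text_to_speech(translated, lang=target)        # 3. synthesise
```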
