BLIND VR: Generating Emotions Through Sound

May 15, 2019

Blind is a narrative-driven psychological virtual reality thriller in which the player experiences a complete loss of sight and must explore their surroundings using echolocation. In this article, sound designers Paolo Armao and Aram Jean Shahbazians discuss the challenges they faced - and the solutions they adopted - to support the VR experience with sound and music.

Aram and Paolo at the Zero dB Studios, Turin


Blind is a narrative-driven psychological thriller built for virtual reality by developer Tiny Bull Studios, in which the player explores their surroundings using echolocation. We wake up as Jean, a young woman who finds herself in a strange room with a hazy memory and no sight. Led by the voice of the mysterious and unsettling Warden, the player uses echolocation to briefly reveal the outlines of objects, navigate the eerie mansion, solve puzzles and uncover the mystery of Jean’s past. As she approaches the truth, however, Jean is forced to confront her worst enemy – that which she does not, or will not, see.


It should be noted that this was never an exercise in simulating the experience of a visually impaired human being, but rather an exploration of how echolocation could be used as a game mechanic to guide the player through an experience where sight is limited by darkness.


Given these circumstances, we were compelled to look for new solutions, and this shaped our production: sometimes constraints can be turned into opportunities to investigate and reveal new creative approaches.


In this article, we will discuss the challenges we had to face and the solutions we adopted to support the VR experience using sound and music.

Emotional Engine

It is hard not to consider emotions when we talk about immersive experiences. In Blind we wanted to focus on the concept of empathy to better represent Jean’s emotional state during gameplay. Even with the evident restrictions of a small indie team’s budget, we were driven by the curiosity of shaping the emotional experience throughout the whole game. Thus, we started designing an emotional engine to control Jean’s reactions to what was happening around her.


We got in touch with Vincenzo Lombardo, Associate Professor of Computer Science at the University of Turin. Simone Pellegrini, one of his students, was writing a master’s thesis on the role of sound in creating emotions. His thesis analyzed the main theories of emotion, from James-Lange to the Two-Factor Theory, and the emotional models (systems for classifying emotions) derived from them, with a particular focus on theories that represent emotion as a weighted reaction to cognition and interpretation of the world.


Inspired by Russell, Ortony, Clore, Collins and many others, Simone designed a model to guide emotional elicitation through sound during gameplay. To strengthen our means, he researched sound design techniques that could highlight Jean’s emotional state. Keeping in mind that Jean is in a coma for the whole game, he focused his research on auditory disorders and auditory illusions, using Weinel’s work as a starting point for analyzing the aural representation of altered states of consciousness (ASC).
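Russell’s circumplex model, one of the inspirations mentioned above, represents an emotion as a point in a two-dimensional valence/arousal space. As a rough illustration of how such a model can drive sound processing, here is a minimal sketch (all names and numbers are hypothetical, not Blind’s actual engine):

```python
import math

class EmotionalState:
    """A point in a valence/arousal space (Russell's circumplex):
    valence runs from unpleasant (-1) to pleasant (+1),
    arousal from calm (-1) to excited (+1)."""

    def __init__(self):
        self.valence = 0.0
        self.arousal = 0.0

    def react(self, d_valence, d_arousal):
        # Accumulate the appraisal of a game event, clamped to the model's range.
        self.valence = max(-1.0, min(1.0, self.valence + d_valence))
        self.arousal = max(-1.0, min(1.0, self.arousal + d_arousal))

    def intensity(self):
        # Distance from the neutral origin: how strongly the current state
        # should colour the soundscape (e.g. drive a middleware parameter).
        return math.hypot(self.valence, self.arousal)

jean = EmotionalState()
jean.react(-0.6, 0.8)               # a frightening event: unpleasant, highly arousing
print(round(jean.intensity(), 2))   # 1.0
```

The single `intensity` value is what makes this practical in a game mix: it can be sent to the audio middleware as one real-time parameter that scales filters, distortion or layer volumes.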

During QA sessions, we noticed that the emotional state was negatively influencing the player experience, e.g. by distorting sounds that were essential to the gameplay or to the comprehension of the story. There were also moments in which a particular emotional state lasted too long, distracting the player from the main goal of the game. After discussing it with the game designers, we built a “control engine” that reduced unwanted twists in the player’s perception of the world.
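The two problems described above – critical sounds being distorted, and states outstaying their welcome – suggest a simple limiting layer between the emotional model and the mix. A minimal sketch of such a control engine, with hypothetical names and thresholds:

```python
class ControlEngine:
    """Keeps the emotional colouring of the mix within limits so that
    gameplay-critical sounds stay intelligible. Thresholds are
    illustrative, not Blind's actual values."""

    MAX_CRITICAL = 0.4    # cap on processing applied to critical sounds
    DECAY_PER_SEC = 0.15  # drain long-lasting states back toward neutral

    def effective_intensity(self, intensity, sound_is_critical, state_age_s):
        # Let a state fade the longer it has been active, so no single
        # emotional state dominates the experience indefinitely.
        effective = max(0.0, intensity - self.DECAY_PER_SEC * state_age_s)
        # Never distort dialogue or echolocation cues beyond the cap.
        if sound_is_critical:
            effective = min(effective, self.MAX_CRITICAL)
        return effective

ctl = ControlEngine()
# A strong state (0.9) applied to a critical sound, 2 s after onset:
print(ctl.effective_intensity(0.9, sound_is_critical=True, state_age_s=2.0))  # 0.4
```

The key design choice is that the clamp is applied per sound, not globally: atmospheric layers can still be distorted heavily while echolocation cues and dialogue pass through nearly clean.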

Emotional sound spheres: first proposal and final FMOD implementation


The results obtained during our R&D phase gave us the confidence to make sound design choices on a solid theoretical basis and inspired our creativity, allowing us to solve audio-related challenges supported by documented principles and psycho-cognitive theories. In the following example, sounds and voices heard earlier in the game are recomposed into a dynamic soundscape that reflects Jean’s altered mental state.



In Blind, the work behind the foley production was particularly intriguing, as we wanted the player to feel interactions with the world resonating through Jean’s body.


We know today that part of hearing happens (and resonates) through our body: tones between 4 Hz and 16 Hz can be perceived via the body’s sense of touch (though here we are limited by the technology used); 60 Hz, 120 Hz and 240 Hz resonate on our fingertips; and frequencies between 4 Hz and 60 Hz are perceived as whole-body vibration (Howarth and Griffin, 1988).

All the foley for the main character was recorded with Vito Martinelli at the Zero dB sound studios in Turin, using a Sennheiser MKH 416 shotgun microphone supported by a Barcus Berry 4000 PI positioned on the objects or on the surface of the foley pit. Jean can interact with more than 120 objects placed in the environment, so this part of the production required extra care to convey the appropriate feeling to the players.
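The bands cited above overlap, so a given tone can reach the body through more than one channel. A small lookup built from exactly the figures in the text makes this concrete (the function name is ours, for illustration only):

```python
def body_perception(freq_hz):
    """Tactile channels through which a tone is perceived,
    per the figures cited above (Howarth and Griffin, 1988)."""
    channels = set()
    if 4 <= freq_hz <= 16:
        channels.add("sense of touch")
    if 4 <= freq_hz <= 60:
        channels.add("whole-body vibration")
    if freq_hz in (60, 120, 240):
        channels.add("fingertips")
    return channels

print(sorted(body_perception(60)))  # ['fingertips', 'whole-body vibration']
```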

Environmental Modeling and Binaural Rendering

Since early reflections help us determine the direction of, and distance to, an audio source, you can imagine how precise we needed to be in the case of Blind. After several attempts with different tools (more on this below), our choice fell on the Oculus Spatial Reverb, a very precise rendering engine available through the Oculus Developer Center.


We defined each room’s characteristics (height, width, depth and wall reflectivity), automating these parameters through FMOD snapshots. Late reflections (the tail) contribute to our sense of space and, to optimize CPU use, were handled with FMOD’s standard reverb. Our environmental modeling thus splits the processing of early and late reflections: a world-geometry-and-acoustics technique for the early reflections, supported by an artificial (2D) reverb that is computationally less intensive and adds depth to the aural space.
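Why room dimensions matter for early reflections can be seen with a little geometry: the delay of the earliest reflection depends directly on the distance to the nearest surface. A minimal sketch of that relationship (not the Oculus engine’s actual algorithm, and assuming source and listener co-located at the room’s centre):

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def first_reflection_delay_ms(width_m, depth_m, height_m):
    """Delay of the earliest wall reflection, relative to the direct
    sound, for a source/listener at the centre of a shoebox room:
    the sound travels to the nearest surface and back."""
    nearest_m = min(width_m, depth_m, height_m) / 2.0
    round_trip_m = 2.0 * nearest_m
    return round_trip_m / SPEED_OF_SOUND * 1000.0

# A small 4 x 5 x 3 m room: floor/ceiling (3 m apart) are nearest.
print(round(first_reflection_delay_ms(4.0, 5.0, 3.0), 1))  # 8.7
```

Delays of a few milliseconds like this are fused with the direct sound by the ear and read as room size and source distance, which is why getting the room geometry right was critical for echolocation.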


When game development began in 2014, the available engines for binaural rendering were still at an early stage of development. During pre-production we had to decide which tools best suited our technical needs. This choice had a strong influence on our workflow, as echolocation was one of the main mechanics of the game. Furthermore, the binaural rendering plugin had to be compatible with our middleware (FMOD) and our target platforms (Oculus Rift, PSVR), making third-party developer support essential. Luckily, the plugin developers were always available for support, even though some of them were acquired by other companies and, in some cases, development was suspended.
