
Introduction To Audio In VR: Opportunities and Challenges

Spatial Audio / VR Experiences

Among the many new technologies on the market, virtual reality is one of the most in demand, not only by consumers but also by enterprise and government sectors.

By 2020, the VR industry is expected to reach $120 billion, according to a report by Digi Capital, which states that the arrival of high-end VR hardware equipped with powerful CPU and GPU chipsets and premium features will be the initial driver of the consumer VR market. Companies like Samsung, with its Gear VR, are expected to make big changes in the industry. The same report notes that the Korean tech giant is making great strides in addressing user demands such as display quality and wireless audio output. Samsung's focus on VR is evident in the major upgrades to the Galaxy S8: as featured by O2, the handset comes with VR-focused features such as the 'Infinity Display', a 64-bit octa-core processor with 4 GB of RAM, and Bluetooth 5.0, which promises a reliable and robust audio connection (four times the range, twice the speed, and eight times the broadcast message capacity of its predecessor) for a more immersive VR experience.


When it comes to virtual reality, many users tend to judge VR apps almost entirely on the visuals presented to them and far less on the audio. Just how important is 3D audio in virtual reality? The truth is that for VR to be truly immersive, it needs convincing sound to match. Badly implemented audio in VR can be off-putting and can hurt user acceptance.

What 3D audio offers is a little map in your brain, as Engadget author Mona Lalwani explained: it lets you know where things are even when they are outside your field of view.

“The premise of VR is to create an alternate reality, but without the right audio cues to match the visuals, the brain doesn't buy into the illusion. For the trickery to succeed, the immersive graphics need equally immersive 3D audio that replicates the natural listening experience,” Lalwani stated.

However, there are fundamental problems that need to be addressed, one of which is externalising sounds. In television shows and films, you will notice that the narrator sounds different from the people you actually see on screen because of the way the voice-over was recorded (closer to a condenser microphone than 'on set' dialogue). The narrator's voice is perceived as coming from inside the viewer's head, and that is exactly the effect developers need to avoid to achieve a seamless virtual environment.

Developers need to ensure that the audio perspective is convincing. How you give players information about distance is vital if they are to localise sounds accurately in the virtual space. Recreating believable acoustic behaviour in VR means striking a delicate balance between several cues: the sound that travels directly to the ears, the reflections that bounce around the room, the length of the reverberation, the ratio between direct and indirect sound, and the perceived loudness.
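
To make that balance concrete, here is a minimal, engine-agnostic sketch in Python. The attenuation model, room gain, and function name are illustrative assumptions rather than anything from the article: the direct path falls off roughly with distance while the diffuse room reverberation stays more constant, so the direct-to-reverberant ratio itself becomes a distance cue.

```python
# Illustrative sketch: how the direct/reverberant balance can encode distance.
# Numbers and the simple 1/distance law are assumptions for the example.
import math

def distance_cues(distance_m: float,
                  min_distance_m: float = 1.0,
                  room_gain: float = 0.2) -> dict:
    """Return illustrative gains for the direct path and a reverb send."""
    d = max(distance_m, min_distance_m)

    # Inverse-distance attenuation for the direct path (free-field approximation).
    direct_gain = min_distance_m / d

    # The diffuse reverb level is assumed to stay roughly constant within the room.
    reverb_gain = room_gain

    # The direct-to-reverberant ratio is one of the brain's main distance cues.
    drr_db = 20.0 * math.log10(direct_gain / reverb_gain)
    return {"direct_gain": direct_gain,
            "reverb_gain": reverb_gain,
            "direct_to_reverb_db": drr_db}

for d in (1.0, 4.0, 16.0):
    print(d, distance_cues(d))
```

As the source moves away, the direct-to-reverberant ratio drops from strongly positive to negative decibels, which is roughly the "drier up close, wetter far away" impression the paragraph above describes.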

Another way to recreate a natural listening experience is binaural recording, which creates a clear distinction between what the left and right ears hear. It is an important element of successful 3D audio, as it helps the listener's brain pinpoint the source of a sound. However, left/right differences alone do not work for all directions: sounds coming from directly in front and directly behind are more ambiguous. This is where the 'Head-Related Transfer Function' (HRTF) comes in. As sound interacts with the outer ears, neck, shoulders, and head, it is coloured in a direction-dependent way, and that colouring helps the brain resolve the confusion. Overall, binaural audio embodies the core of a personalised immersive experience.
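
As an illustration of the left/right part of that story only, the sketch below applies interaural time and level differences to a mono signal for headphone playback. This is a deliberate simplification, not a measured HRTF (which is what resolves the front/back ambiguity described above), and the head radius, level-difference depth, and function names are assumptions for the example.

```python
# Simplified binaural placement using ITD/ILD only (no HRTF filtering).
import numpy as np

SAMPLE_RATE = 48_000
HEAD_RADIUS_M = 0.0875          # assumed average head radius, metres
SPEED_OF_SOUND = 343.0          # m/s

def pan_binaural(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Return a stereo signal with simple interaural time and level cues."""
    az = np.radians(azimuth_deg)  # 0 = straight ahead, +90 = hard right

    # Interaural time difference (Woodworth approximation).
    itd_s = (HEAD_RADIUS_M / SPEED_OF_SOUND) * (abs(az) + abs(np.sin(az)))
    delay = int(round(itd_s * SAMPLE_RATE))

    # Interaural level difference: attenuate the far ear by up to ~6 dB.
    near_gain = 1.0
    far_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20.0)

    near = mono * near_gain
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain

    # Positive azimuth: source on the right, so the left ear is the far ear.
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)

# Example: a 1 kHz tone placed 60 degrees to the right.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
stereo = pan_binaural(0.2 * np.sin(2 * np.pi * 1000 * t), azimuth_deg=60)
```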

Developers also need to consider the limitations of the human brain. Audio's power is to give users information or to influence their emotional state, but there is a limit to the amount of auditory information people can process. Film editor Walter Murch called this the 'Law of Two and a Half': a listener can easily isolate two concurrent sets of sounds, but a third takes away the brain's ability to distinguish the individual elements. Sound therefore needs to be presented with light and shade. That said, more than two positional cues at a time can still be effective when the intent is to briefly disorient the player on purpose.
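
Purely as a hypothetical illustration of how such a guideline might be enforced in a mix (the class, names, and priority values are invented for the example and are not from the article), the sketch below keeps only the two highest-priority sounds fully spatialised and folds the rest into a less localisable ambient bed.

```python
# Hypothetical sketch: limit the number of simultaneous positional cues.
from dataclasses import dataclass

@dataclass
class Cue:
    name: str
    priority: float          # higher = more important to localise
    spatialized: bool = True

def limit_positional_cues(cues: list[Cue], max_spatialized: int = 2) -> list[Cue]:
    """Keep at most `max_spatialized` cues fully spatialized; demote the rest."""
    ranked = sorted(cues, key=lambda c: c.priority, reverse=True)
    for i, cue in enumerate(ranked):
        cue.spatialized = i < max_spatialized
    return ranked

mix = [Cue("footsteps_behind", 0.9), Cue("npc_dialogue", 0.8),
       Cue("distant_traffic", 0.3), Cue("room_tone", 0.1)]
for cue in limit_positional_cues(mix):
    print(cue.name, "positional" if cue.spatialized else "ambient bed")
```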

Although this article touches on possible solutions, the truth is that audio for VR is still a work in progress. Still, the combination of 3D audio and head-tracking seems to complement the visuals and make virtual reality feel complete.
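
What head-tracking contributes can be sketched in a few lines. The example below is engine-agnostic, simplified to yaw only, and uses invented coordinates and function names: the source's world position is re-expressed in the listener's head frame every frame, so a sound stays anchored in the world while the head turns.

```python
# Minimal sketch: world-anchored sound under head tracking (yaw only).
import math

def world_to_head_azimuth(source_xz: tuple[float, float],
                          listener_xz: tuple[float, float],
                          head_yaw_deg: float) -> float:
    """Azimuth of the source relative to where the head is pointing (degrees)."""
    dx = source_xz[0] - listener_xz[0]
    dz = source_xz[1] - listener_xz[1]
    world_azimuth = math.degrees(math.atan2(dx, dz))   # 0 = straight ahead (+z)
    relative = world_azimuth - head_yaw_deg
    return (relative + 180.0) % 360.0 - 180.0          # wrap to [-180, 180)

# A sound 3 m behind the listener: as the head turns toward it, the cue
# moves around to the front, which is the "turn your head" effect.
for yaw in (0, 90, 180):
    print(yaw, round(world_to_head_azimuth((0.0, -3.0), (0.0, 0.0), yaw), 1))
```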

"Audio, from an evolutionary perspective, is the thing that makes you turn your head quickly when you hear a twig snap behind you," said Joel Susal, director of Dolby’s AR and VR business. "It's very common that people put on the headset and don't even realise they can look around. You need techniques to nudge people to look where you want them to look, and sound is the thing that has nudged us as humans as we've evolved."

TechJVB

Blogger

Freelance

TechJVB is a certified audiophile and gaming expert with expertise in AR, VR, AI, and the like. She has attended several tech conferences in Europe and Asia and has been invited to speak at schools in Manchester to inspire young minds to enter STEM fields and explore the bigger potential and future of technology.

