KID A MNESIA Exhibition: An Interview With the Audio Team

Interactive Audio / Interactive Music

Kid A Mnesia Exhibition is a digital exhibition of music and artwork created for the Radiohead albums Kid A (2000) and Amnesiac (2001). We sat down with the audio team behind the project to chat about its development, their collaboration, their use of Wwise, challenges & unique solutions, spatial audio, and more. We hope you enjoy!

1. How did Kid A Mnesia, the interactive experience, get its start? Were you involved at the beginning of production? Was there a clear vision or vision holder(s) that helped align the creativity of development with the ultimate results?

Matthew Davis, Producer: The idea originated as an ‘IRL’ exhibition with a vaguely similar concept - showcasing the massive amount of artwork, both audio and visual, created during the band’s Kid A / Amnesiac era. Due to various constraints, not least of all the pandemic (!), the idea of doing this virtually gradually gained steam throughout 2020. After a few meetings with the band, Nigel, and Sean [Evans, the project’s director], it became clear that doing this in Unreal (& Wwise) actually unlocked a whole realm of possibilities that would let this concept flourish.

There was absolutely a clear vision from the start - Sean, Nigel, Thom, & Dan are all visionaries and were enormously helpful in keeping us on the rails. To quote Sean, “This was to be an exhibition of the output from that Kid A / Amnesiac era, explored in a forgotten alien ruin. It was to be a museum that combined a labyrinth with the Library of Babel. We wanted to instill a feeling of being lost without feeling hopeless. At times, the player was to feel overwhelmed. The design was to have no one correct path, and contain no dead ends.”

2. What was the process for translating the music of Radiohead to the spatial audio representation in Kid A Mnesia? Who was involved? What materials did you have access to? How were ideas communicated? What was the approval process once ideas materialized in the experience?

Matthew Davis, Producer: The first brief I heard was this notion of Exploded Songs. There was so much material, both in the records and on the floor, that blasting everything open into its component parts and laying them out in some way was not only true to the spirit of the material, but essential to building an experience on. Nigel had a very concrete idea of how this could work - a lot of the audio design was to be more like a gallery, as opposed to hitting you over the head. This let us map out a continuous experience that had peaks, valleys, ebbs, & flows. We went back and forth a lot at the beginning over how to use spatialization and reflections. Where was the line between diegetic & soundtrack, between spatialized and listener-locked? These were some of my favorite conversations and experiments - riding the lines between placing the viewer in an environment with consequential attributes, while maintaining the original integrity and nuance of the source material.

Since this all went down during the pandemic and everyone was remote, we would meet on Zoom to discuss creative direction. Then I would go head-down and mock up different arrangements and mixes inside of Ableton, using videos of our latest build and the original album stems that Nigel had prepped for the exhibition. Nigel did a ton of work remixing, re-arranging, and creating new versions of the material - he really made this special. So we’d make these roughly scored videos to agree on the layout, loop & trigger logic, rough mix, vibe, etc., then we’d implement in Wwise/Unreal.

3. Did you run into any technical limitations that required unique solutions?

Braeger Moore, Senior Sound Designer & Systems Engineer: Most of the “limitations” we came up against simply required using multiple Wwise features to supplement each other: elevation-based and asymmetrical falloff, strategic placement and toggling of emitters to create more controlled diffraction paths, room-progression-based mixing and triggering… There were, though, two “real” limitations that I had to work around.

We used a version of Wwise that pre-dates the mixer plug-in that allows UE media audio to be piped into Wwise, so in the theater room we had to sync hours of audio and video manually. To do this, I exposed the GetSourcePlayPosition callback, grabbed the time information from the video, compared the two results, and adjusted the pitch of the audio just enough to keep the sync tight.
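To make the mechanism concrete, here is a minimal sketch of that drift-correction loop. It assumes only that we can poll a video clock (from the UE media player) and an audio clock (via Wwise's GetSourcePlayPosition) each tick; the gain and clamp constants are illustrative, not the project's actual values.

```python
# Minimal sketch of the audio/video sync idea, not the shipped code.
# Each tick we compare the video time against the audio playback
# position and convert the drift into a small, clamped pitch offset.

def pitch_correction_cents(video_ms: float, audio_ms: float,
                           gain: float = 0.5, max_cents: float = 25.0) -> float:
    """Map audio/video drift (ms) to a small pitch offset (cents).

    Positive drift means the audio is behind the video, so we pitch up
    slightly (which also speeds playback); negative drift pitches down.
    The offset is clamped so the correction stays inaudible.
    """
    drift_ms = video_ms - audio_ms
    return max(-max_cents, min(max_cents, gain * drift_ms))


def playback_rate(cents: float) -> float:
    """Pitch in cents to playback-rate multiplier: 2 ** (cents / 1200)."""
    return 2.0 ** (cents / 1200.0)


if __name__ == "__main__":
    # Simulate an audio source that runs 0.1% slow against the video clock.
    video_ms, audio_ms, cents = 0.0, 0.0, 0.0
    for tick in range(600):                  # ten seconds of 60 Hz ticks
        video_ms += 1000.0 / 60.0
        audio_ms += (1000.0 / 60.0) * 0.999 * playback_rate(cents)
        cents = pitch_correction_cents(video_ms, audio_ms)
    print(f"residual drift after 10 s: {video_ms - audio_ms:.2f} ms "
          f"(pitch offset {cents:.2f} cents)")
```

The real implementation would nudge Wwise's pitch parameter rather than a simulated rate, but the control loop has the same shape: measure drift, correct gently, clamp.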

The other workaround ended up not being used in the release, but I created a system that allowed us to execute actions in UE in sync with the music. I did this by utilizing the existing stinger system - duplicating certain SFX as stingers and using a Python script to parse all of our work units, grabbing all the timing data and file associations and putting them into data tables in UE. The result was that we could do things like have doors open and close with the apex of those sound effects landing on the musical grid wherever we wanted.
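The interview doesn't include the script itself, so the following is a rough sketch of that pipeline under stated assumptions: Wwise work units (.wwu) are XML, but the element and attribute names used here (MusicStinger, Event, CustomCueName) are hypothetical placeholders for whatever the project's schema actually contained. The one solid anchor is the output: UE's data-table importer reads a CSV whose first column is the row name.

```python
# Sketch of the work-unit parsing pipeline, not the shipped tool.
# The tag and attribute names below (MusicStinger, Event, CustomCueName)
# are assumed for illustration; adjust them to the real .wwu schema.
import csv
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

ASSUMED_STINGER_TAG = "MusicStinger"  # hypothetical element name

def collect_stingers(wwu_dir: Path):
    """Walk every .wwu under wwu_dir and yield (name, attributes) pairs."""
    for wwu in sorted(wwu_dir.rglob("*.wwu")):
        root = ET.parse(wwu).getroot()
        for node in root.iter(ASSUMED_STINGER_TAG):
            yield node.get("Name", ""), dict(node.attrib)

def write_datatable_csv(rows, out_path: Path):
    """Emit a CSV that UE can import as a data table (first column = row name)."""
    with out_path.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Name", "Event", "CustomCueName"])
        for name, attrs in rows:
            writer.writerow([name,
                             attrs.get("Event", ""),
                             attrs.get("CustomCueName", "")])

if __name__ == "__main__":
    # Usage: python parse_stingers.py <WwiseProjectDir> <out.csv>
    write_datatable_csv(collect_stingers(Path(sys.argv[1])), Path(sys.argv[2]))
```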

4. In the non-linear possibility space that the player navigates within Kid A Mnesia, what were some of the decisions that were made to embrace things like indeterminate progression, speed-running, and other interactive game-like mechanisms?

Braeger Moore, Senior Sound Designer & Systems Engineer: We didn’t do anything too fancy here. We just put a lot of work into managing our room states and mixing from all approaches. Lots of duct tape…especially in the CRT room!

5. How did you use the different spatial audio techniques available to you, like binaural processing, ambisonics, Audio Objects, and surround formats?

Clay Schmitt, Senior Sound Designer: It was decided that, in order to earn the dramatic shift into these beautiful songs, we would aim to ground the user as solidly in reality as possible in the interstitial spaces, then let that grounded ambience fade away once we’re swept up by the music in our feature rooms. To accomplish this I used plenty of ambisonic and surround recordings (either quad or 5.1) in conjunction with stereo sub-layers that I designed in Ableton. The elevator inside the pyramid is a great example of where ambisonic recordings were utilized, and it produced what I would consider terrific results.

6. Attenuation is often seen as a way to realistically represent volume and other environmental factors. Can you speak about your creative use of distance-based attenuation and other aspects of the player’s proximity to sounds in the experience?

Clay Schmitt, Senior Sound Designer: A lot of care was taken to ensure attenuations were honest and grounded in reality for things like NPC chatter. We utilized a test level for this exploration, and Thom gave good feedback regarding how he’d like to hear things attenuate. Where things needed to be grounded, we used in-engine tools like Wwise Convolution Reverb and Wwise Reflect. Once these more realistic attenuations were set, they gave us an excellent baseline to deviate from when circumstances called for more heightened & dreamy attenuations and reverb tails. For these heightened instances, I was able to get creative with old hardware units like my Roland RE-201 Space Echo, my MXR Pitch Transposer, and my Lexicon M300 reverb, processing stems that were then subtly layered in with the existing Wwise Convolution Reverb and Wwise Reflect. Two great places to hear this effect in action are the entrance/exit hallways to the CRT room and the footsteps of the large Minotaur in the inner pyramid.

7. You mentioned that the use of diffraction played heavily in determining the amount of abstraction conveyed by the music. How did you approach this technique, and what was the thought process behind it?

Braeger Moore, Senior Sound Designer & Systems Engineer: The music diffraction paradigm we settled on really boiled down to two ideas:
1. We wanted to hear the music full-on, as produced, when inside the main rooms.
2. When leaving or entering the rooms, it needed to feel more realistic, but in a creatively controllable way.

Nearly all of the decisions we made around mixing and implementation served this idea and the question of how we could best create a precise, dynamic audio environment - one that would crescendo into a room’s experience and then fall away slowly, giving you space to reflect while taking in the stunning visuals and ambience.

8. Voice often plays a dual role in music, both as a medium for communication and, especially in Kid A Mnesia, as an instrument and an abstraction of sound. Did you do anything to ensure that voice was well represented in the mix?

Clay Schmitt, Senior Sound Designer: The music throughout this experience had been expertly mixed by Nigel 20 years prior, so when we placed stems on emitters, our first task was ensuring that balance was not impacted by our implementations. Nigel’s original mix is then altered by the player’s actions throughout the experience. Having said that, there were a few opportunities to get creative. In the CRT room, for example, Thom’s voice emits from a phone receiver in a phone booth. To heighten this a bit, his vocal was re-recorded through a real phone receiver and then processed further with filtering and distortion to give it a harsh quality.

9. Can you talk more about the creative use of ambisonic layers throughout? How were the stems created? What was the sound design intention?

Clay Schmitt, Senior Sound Designer: Sure! Another instance where ambisonic layers were used is the opening forest setting. I had placed some nice wind and soft rustling sounds, as well as the very occasional snap of a twig in the distance, and Thom’s very rad collection of bird sounds had been placed throughout the scene. I enjoyed where it was, but I kept thinking it needed something more. I decided what it needed was some soft, wooden branch sounds, and knowing that players would be able to use Dolby Atmos output, I wanted to take advantage of the top channels! I always travel with my Tascam DR-100 (a handheld stereo recorder). While I was in Ohio visiting family, I went out in the yard, hit record, and shook some oak branches! I processed the recordings with the Waves B360 Ambisonics Encoder so that players with Dolby Atmos playback would hear the softly rustling branches above them.

10. At the intersection of visual album and interactive experience, the addition of footsteps supports the player’s agency. How did you ensure the footsteps never detracted from the music?

Clay Schmitt, Senior Sound Designer: Early in the project, it was an open question whether we would use footsteps at all, the central concern being that they would distract from the music. However, well-treated footsteps are an excellent way to lift the level of immersion for the player, especially in quieter, interstitial spaces. Braeger had already created a ducking system for the feature rooms, so that things like roomtone, ambience, and my sub-layers would softly fade as we transitioned into those spaces where the music would lead. It was easy to route the output of the footsteps bus to that same ducking bus, and the result was exactly what I’d been wanting to hear!
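Wwise's bus auto-ducking handles this behaviour in the authoring tool, so no custom code was needed for it; purely as an illustration of what the routing described above achieves, here is a tiny sketch of a duck with separate attack and release rates (the duck depth and rates are made-up values, not the project's settings).

```python
# Conceptual sketch of bus ducking with distinct attack/release rates.
# Wwise's built-in auto-ducking does this natively; the numbers here
# are illustrative only.

def ducked_gain_db(current_db: float, music_active: bool,
                   duck_db: float = -9.0,          # assumed duck depth
                   attack_db_per_s: float = 24.0,  # fade-down speed
                   release_db_per_s: float = 6.0,  # recovery speed
                   dt: float = 1.0 / 60.0) -> float:
    """Step the footsteps/ambience bus gain toward the duck target while
    the music bus is active, and back toward 0 dB once it goes quiet."""
    target = duck_db if music_active else 0.0
    rate = attack_db_per_s if music_active else release_db_per_s
    if current_db > target:
        return max(target, current_db - rate * dt)
    return min(target, current_db + rate * dt)

if __name__ == "__main__":
    gain = 0.0
    for frame in range(120):                 # two seconds at 60 Hz
        gain = ducked_gain_db(gain, music_active=(frame < 60))
    print(f"gain after duck and 1 s of recovery: {gain:.1f} dB")
```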

11. How was Wwise Reflect used to enhance the music?

Braeger Moore, Senior Sound Designer & Systems Engineer: Wwise Reflect was, in a way, the cherry on top of our realism cake. We used it very sparingly, but it played a big role in delivering the realism we wanted without sacrificing musical impact and fidelity. I wouldn’t say we did anything revolutionary with Reflect though. It came down to spending the time up front to find our balance, and then doing regular mix passes to ensure it worked nicely with each of our spaces.

12. Can you speak more about Nigel Godrich and his contribution throughout development?

Matthew Davis, Producer: Obviously none of this happens without Nigel - he so clearly and eloquently laid out the audio concept to us from the beginning, and held firm to that vision throughout while tweaking and shifting his approach as we learned more about the possibilities within Wwise & Unreal. He went back through the old album sessions and created several awesome 6-channel surround mixes for the major set pieces, as well as a bunch of mashups that we peppered around various spaces. For the interstitial spaces, hallways, etc. - the places where sound would bleed from one room to another - we had a long back and forth over how much spatialization, reflection, and reverb was too much, putting the integrity of the original audio at risk. That was a big project during development: finding the balance between placing you in an immersive physical space vs. having the soundtrack be clear and present at all times. For me, having Nigel, Thom, & Dan really get in the weeds with us made all the difference between some sort of fan-fiction thing we would have done on our own vs. the genuine article this project really turned out to be. Very cool.

Matthew Davis
Producer

Clay Schmitt
Senior Sound Designer

Braeger Moore
Senior Sound Designer & Systems Engineer 

 

 

KID A MNESIA Exhibition Audio Team
