
Star Wars Jedi: Survivor | Q&A With the Audio Team

Game Audio / Sound Design

Star Wars Jedi: Survivor is a third-person, galaxy-spanning, action-adventure game from Respawn Entertainment, developed in collaboration with Lucasfilm Games. This narratively driven, single-player title picks up 5 years after the events of Star Wars Jedi: Fallen Order.

We sat down with the audio team to talk about the skillfully crafted audio for the game, along with their vision, design, and implementation techniques. Read on to learn exactly what makes the party lightsaber sound like a party, how the team designed creatures for uniqueness while honoring the legacy of their design across the series, what they brought to the implementation table to keep repetition from being noticed, which creatures are among the team's favorites from a sound perspective, and much, much more. Enjoy!

Interviewees

Nick von Kaenel (as NVK)
Audio Director

Alex Barnhart (as AB)
Lead Sound Designer

Oscar Coen (as OC)
Principal Sound Designer

Ashton Faydenko (as AF)
Sound Designer

Lightsabers are at the core of the Star Wars universe and have become one of the most defining sounds in the galaxy. How did you prepare for and execute on the challenge of pairing sound with lightsaber customization, while maintaining the dynamic aspects required for gameplay?

OC: Color is the only part of lightsaber customization that affects the audio (note: this is something that is part of our game but isn’t part of Star Wars lore in particular). All the sound changes for the different colors are done in real time using effects on busses. The only bespoke sounds are the on/off sounds which are switched between in a Switch Container. Since the sounds are all going through the same bussing structure regardless of color, all the dynamic mixing is maintained.
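To make the idea concrete, here is a minimal sketch of how a color change might be fed to Wwise. The Switch Group, Game Parameter, and function names are hypothetical stand-ins, not Respawn's actual project structure: the bespoke on/off assets are selected by a Switch, while everything else is shared media whose bus effects respond to an RTPC in real time.

```cpp
// Hypothetical sketch: feeding a lightsaber color change to Wwise.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void OnLightsaberColorChanged(AkGameObjectID saberObj, int colorIndex, const char* colorName)
{
    // Bespoke ignition/retraction sounds live in a Switch Container keyed by color.
    AK::SoundEngine::SetSwitch("Lightsaber_Color", colorName, saberObj);

    // The rest of the saber shares the same assets; a color RTPC drives the real-time
    // effects on the lightsaber busses, so the dynamic mix is untouched by color choice.
    AK::SoundEngine::SetRTPCValue("Lightsaber_Color_Index",
                                  static_cast<AkRtpcValue>(colorIndex), saberObj);
}
```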

Out of interest, can you total up the possible combinations of sounds for a single saber?

OC: Each lightsaber is made up of around 20 unique swing articulations with about 6 variations each, an idle loop, and on and off sounds with about 4 variations each. So, in total, about 130 individual sound files per typical lightsaber. For Cal specifically, if you count the additional unique swings for the different colors created in Wwise and his crossguard stance (other stances use shared assets), then the total number of unique possible swings for Cal’s lightsaber is about 2,700.
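For reference, the per-saber number works out like this, using only the approximate figures above (a back-of-the-envelope tally, not a count of the actual shipped content):

```cpp
// Approximate per-saber file count implied by the figures above.
constexpr int kSwingArticulations = 20;    // unique swing articulations
constexpr int kVariationsPerSwing = 6;     // variations per articulation
constexpr int kIdleLoops          = 1;     // idle hum loop
constexpr int kOnOffSounds        = 2 * 4; // on + off, ~4 variations each

constexpr int kFilesPerSaber = kSwingArticulations * kVariationsPerSwing
                             + kIdleLoops + kOnOffSounds;  // 129, i.e. "about 130"
```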

Are there shared elements across lightsabers? 

OC: Yes, some of the swing variations are just pitched-up or pitched-down versions of other swings, and Cal’s different lightsaber stances all share the same assets except for the crossguard stance, relying on real-time changes to differentiate them.

Is there a balance between swapping media versus parameter changes when modifying the sound for different types?

OC: We generally tried to reuse lightsaber assets as much as possible while still sounding good, relying mostly on parameter changes, as it generally made mixing easier and led to a higher overall quality, making iteration easier as well.

Any interesting gameplay parameters modifying the sound of lightsabers in real time?

OC: For all lightsabers, the idle changes in pitch and volume depending on the object velocity. The idle is also ducked by the swing sounds. Specifically for Cal, we have a lot of dynamic mixing taking into account several variables like the total number of enemy combatants, the overall volume of all currently playing sounds in the game, combat state, etc.
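A rough sketch of how those inputs might be pushed to Wwise each frame follows; the parameter and state names are hypothetical, and the actual pitch/volume curves and the swing-over-idle ducking are authored in the Wwise project.

```cpp
// Hypothetical per-frame update for the lightsaber's dynamic mixing inputs.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void UpdateLightsaberAudio(AkGameObjectID saberObj, float bladeVelocity,
                           int enemyCombatants, bool inCombat)
{
    // Idle loop pitch and volume follow how fast the blade is moving.
    AK::SoundEngine::SetRTPCValue("Saber_Velocity", bladeVelocity, saberObj);

    // Global inputs used for Cal-specific dynamic mixing.
    AK::SoundEngine::SetRTPCValue("Enemy_Combatant_Count",
                                  static_cast<AkRtpcValue>(enemyCombatants));
    AK::SoundEngine::SetState("Combat_State", inCombat ? "In_Combat" : "Exploration");
}
```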

What makes a party saber sound like a party and why is magenta the best-sounding?

OC: The party saber audio is mostly just a happy accident. The party saber is rapidly switching between the different colors, which also causes the RTPC controlling the audio effects to rapidly change between presets, giving it a chaotic party sound that I decided to leave as is. Magenta is also one of my favorites. For any color that isn’t super common in Star Wars, I took more liberties in having it sound less like a standard lightsaber. For magenta specifically, there are several plugins processing it, but most of its sound is coming from Wwise Tremolo with the rate set to 1000 Hz and the depth set to about 50%, giving it that ringing tone.

How are the player / NPC Foley sounds affected by different character customizations?

AF: Customization was among the most requested features from our players, so we knew we wanted to go above and beyond to support it on the foley side. We selected a variety of materials that we felt covered the gamut of what clothes Cal would be able to wear and recorded full coverage for each. In addition to walk, jog, and run splits, we added another layer of complexity by separately recording the arms, torso, and legs of the foley artist. This gave us plenty of content to mix and match based on what the players decided to wear. If they chose denim pants, a cloth shirt, and a leather jacket, we could be confident that we had the ability to systemically make that sound authentic.

Many of the NPCs that can be found throughout the game use a series of randomized torsos, heads, and legs. We used a similar approach as above to tackle this problem, but added an additional layer for props, so that we could trigger jangling tools on prospectors or blaster rattles on characters holding weapons. This modular approach to both recording and implementation was a lot of work up-front, but ended up saving us a lot of time on the back end!
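That modular approach could be wired up with one material Switch per body region plus an optional prop layer. The Switch Group names below are hypothetical stand-ins for illustration:

```cpp
// Hypothetical per-region foley Switches for a modular outfit or randomized NPC.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void ApplyCharacterFoley(AkGameObjectID characterObj,
                         const char* torsoMaterial,  // e.g. "Leather" jacket
                         const char* legsMaterial,   // e.g. "Denim" pants
                         const char* armsMaterial,   // e.g. "Cloth" shirt
                         const char* propType)       // e.g. "Tools", "Blaster", or "None"
{
    AK::SoundEngine::SetSwitch("Foley_Torso_Material", torsoMaterial, characterObj);
    AK::SoundEngine::SetSwitch("Foley_Legs_Material",  legsMaterial,  characterObj);
    AK::SoundEngine::SetSwitch("Foley_Arms_Material",  armsMaterial,  characterObj);
    AK::SoundEngine::SetSwitch("Foley_Prop",           propType,      characterObj);
}
```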

What special considerations and techniques did you employ to enable the different Output settings (Speakers, Headphones, and Mono) and Dynamic Range modes (Default, Midnight, Studio Reference)? Do you have any guidelines / processes that helped deliver these modes?

NVK: We actually changed a lot about how the Master-Mixer Hierarchy was set up compared to Star Wars Jedi: Fallen Order. In Star Wars Jedi: Survivor, we added support for 3D audio and also made use of the Mastering Suite to handle the different dynamic ranges. Supporting 3D audio meant that we had to take special care when setting the bus configuration for downstream Audio Busses in order to get the best results from a spatial perspective while not creating too many system audio objects.

To address the question specifically, the change made when the player selects Speakers, Headphones, or Mono is either the panning rule (Speaker vs. Headphone panning) or, for Mono, changing the channel output to 1.0. We also adjust the dynamics a bit depending on the output. We found that compressing the mix on headphones didn’t make as much sense, so we have the full dynamic range on Headphones, i.e. Studio Reference. The only effect enabled in the Mastering Suite in Studio Reference is a limiter at -2 dB. For the Default dynamic range on speakers, we add some light multi-band compression to smooth things out a bit. For Midnight mode, we increase the ratio on the multi-band compression a bit and also set the limiter to -6 dB. The other change we make is to reduce the pre-Mastering Suite volume by 3 dB while in Mono. This is because we noticed that the limiter was getting hit harder in Mono due to the summing of the stereo channels.
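As a sketch of the kind of plumbing this implies (the State, RTPC, and function names are hypothetical, and the Mastering Suite presets themselves live in the Wwise project), selecting an output and dynamic range mode might look like this:

```cpp
// Hypothetical handler for the audio output and dynamic range settings.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

enum class OutputMode   { Speakers, Headphones, Mono };
enum class DynamicRange { Default, Midnight, StudioReference };

void ApplyAudioOutputSettings(OutputMode output, DynamicRange range)
{
    // Speakers vs. Headphones is essentially a change of panning rule.
    AK::SoundEngine::SetPanningRule(output == OutputMode::Headphones
                                        ? AkPanningRule_Headphones
                                        : AkPanningRule_Speakers);

    // Mono folds the master bus down to a 1.0 channel configuration.
    AkChannelConfig busConfig;
    busConfig.SetStandard(output == OutputMode::Mono ? AK_SPEAKER_SETUP_MONO
                                                     : AK_SPEAKER_SETUP_STEREO);
    AK::SoundEngine::SetBusConfig("Master Audio Bus", busConfig);

    // The active Mastering Suite preset (limiter/multi-band settings) is keyed off a State.
    const char* rangeState =
        range == DynamicRange::Midnight        ? "Midnight" :
        range == DynamicRange::StudioReference ? "Studio_Reference" : "Default";
    AK::SoundEngine::SetState("Dynamic_Range", rangeState);

    // Mono sums the stereo channels and hits the limiter harder, so trim 3 dB upstream.
    AK::SoundEngine::SetRTPCValue("PreMaster_Trim_dB",
                                  output == OutputMode::Mono ? -3.0f : 0.0f);
}
```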

Vehicles in the Star Wars universe are both iconic and pervasive; from fighters to speeder bikes, preserving the sound of these vehicles while extending them into interactivity is a special challenge.

What were some of the considerations or techniques used to support vehicles that the player can interact with, as well as from a storytelling perspective?

AB: Most of the ambient- and encounter-based ships in the game run a pretty simple loop-based setup, since these aren’t actually player-controlled. Each ship has roughly one or two core loops and additional sweetener loops for the different ways the ship can move. Ships often have a distant layer we crossfade with the core loop as they get further away from the listener. These loops are simplified, filtered, and processed to sit in the background of the mix so they aren’t intrusive. Lastly, inspired by a lot of the TIE Fighter sound design from Andor, I wanted to add some of that visceral energy during pass-bys, so on some ships there’s an extremely distorted “thruster” loop with a tight attenuation so you get a big rush of low end as they pass by you.

All of these loops are then mixed and manipulated with multiple RTPCs, including velocity, angular velocity, percentage through its spline, and other means. This way each ship reacts to its movement in an organic way, and it adds variation and depth as they move around the player. In addition, in scripted sequences we are able to trigger sounds like whoosh-bys and other sounds that blend with the engines to give the player heightened intensities when we need to draw attention to the ships for narrative reasons.
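A minimal sketch of what the per-frame feed for one of those ships could look like is below; the RTPC and event names are hypothetical, and the crossfades, filtering, and attenuations are authored in Wwise.

```cpp
// Hypothetical per-frame RTPC feed for a flying ship emitter.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void UpdateShipAudio(AkGameObjectID shipObj, float speed, float angularSpeed,
                     float splineProgress /* 0..1 along its flight spline */)
{
    AK::SoundEngine::SetRTPCValue("Ship_Velocity",         speed,          shipObj);
    AK::SoundEngine::SetRTPCValue("Ship_Angular_Velocity", angularSpeed,   shipObj);
    AK::SoundEngine::SetRTPCValue("Ship_Spline_Progress",  splineProgress, shipObj);
}

// Scripted sequences can layer extra sweeteners on top of the engine loops.
void TriggerScriptedWhooshBy(AkGameObjectID shipObj)
{
    AK::SoundEngine::PostEvent("Play_Ship_Whoosh_By", shipObj);
}
```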

Was there a vehicle that was exciting or uniquely challenging to bring to life?

AB: The TIE Fighter engine is probably my favorite sound from Star Wars and one of the most recognizable sounds from the franchise. This made it really important to get it sounding right for players. Luckily, I was able to build from what we had in Star Wars Jedi: Fallen Order.

Given that ours is a melee-combat-focused game and ships are not a huge part of it, it was incredibly challenging to make sure that these sounds are appropriately present in the mix while not overpowering. In a few of the sequences with a lot of TIE Fighters flying around, it was a challenge to make sure we had control over the mix from our ship system. The TIE Fighter engine is very tonal, so it quickly became a big wash of noise. Lots of work was done to fine-tune different versions of the engine loops for different situations to make sure we had appropriate attenuations, RTPCs, etc. to tailor each encounter for the best player experience.

Specifically with creature mounts: How did the taming of creatures work into the sound design process? What different considerations were there when defining the set of sounds, vocalizations, and dynamic elements that make up a creature set as a vehicle?

AB: On all creature AI and mounts, I added a breathing system that we use to make sure breaths aren’t overlapping with the rest of the creature's vocals. It uses Wwise marker callbacks to make sure it’s always in sync, and it reacts to the stress level of the AI at any given time. This means that as creatures fight more and mounts run longer, they start to get out of breath and the cadence of their breathing speeds up. One nice thing about this system is that we can track the progression through the breath cycle as a 0 to 1 float value and pass that value to the animation team to add blended animations for chest expansion. On the Spamel, for example, the breath sounds and the chest animation stay in sync using this system.
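A simplified sketch of such a marker-synced breath cycle is below; the event, RTPC, and marker names are hypothetical, but the shape follows what Alex describes: stress drives the cadence, Wwise marker callbacks keep things in sync, and a 0 to 1 phase value is what animation blends against.

```cpp
// Hypothetical marker-driven breathing loop for a creature or mount.
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkCallback.h>
#include <cstring>

struct BreathState
{
    AkGameObjectID creatureObj = AK_INVALID_GAME_OBJECT;
    float          phase = 0.0f;          // 0..1 progression through the breath cycle
    bool           breathPlaying = false;
};

static void BreathCallback(AkCallbackType type, AkCallbackInfo* info)
{
    BreathState* state = static_cast<BreathState*>(info->pCookie);
    if (type == AK_Marker)
    {
        // Markers authored in the breath assets ("inhale_peak", "exhale_start", ...) let us
        // pin the phase to known points so the chest animation never drifts from the audio.
        AkMarkerCallbackInfo* marker = static_cast<AkMarkerCallbackInfo*>(info);
        if (std::strcmp(marker->strLabel, "inhale_peak") == 0)
            state->phase = 0.5f;
    }
    else if (type == AK_EndOfEvent)
    {
        state->breathPlaying = false;      // cycle done; the next breath can be scheduled
        state->phase = 0.0f;
    }
}

void UpdateBreathing(BreathState& state, float stressLevel /* 0..1 */, float deltaSeconds)
{
    // Higher stress -> faster cadence (shorter cycle). The curve here is a placeholder.
    const float cycleSeconds = 4.0f - 2.5f * stressLevel;

    if (!state.breathPlaying)
    {
        AK::SoundEngine::SetRTPCValue("Creature_Stress", stressLevel, state.creatureObj);
        AK::SoundEngine::PostEvent("Play_Creature_Breath", state.creatureObj,
                                   AK_Marker | AK_EndOfEvent, &BreathCallback, &state);
        state.breathPlaying = true;
        state.phase = 0.0f;
    }
    else
    {
        // This 0..1 value is what the animation system blends chest expansion against.
        state.phase += deltaSeconds / cycleSeconds;
        if (state.phase > 1.0f)
            state.phase = 1.0f;
    }
}
```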

From the recognizable Rancor to the creatures that inhabit each of the different planets, creature design is as diverse as it is foundational for gameplay.  How do you design sets of creatures for uniqueness while honoring the legacy of creature design across the series?

NVK: When designing the Rancor, I started by researching how the sounds were made in the first place. In The Sounds of Star Wars, J. W. Rinzler talks about how the growl came from recording a neighbor’s dog and pitching it down. I recorded my own dog Zoey, a tiny Italian greyhound, and pitched her down in order to get some of the growls for the Rancor in the game. The original Rancor growls have quite a bit of flanger on them, but I decided to make it a fairly subtle effect to help it feel more realistic. As far as the roars of the Rancor, I sometimes would layer in the original roars from the film along with new ones I created from various animal recordings. I tried to deduce the animal layers I heard in the original and again process things a bit differently to make it feel powerful and work well in our game. I think it’s important to honor the legacy of the films and study the techniques used in them, while also bringing something new to the universe. Quite a while ago, I did sound design for a Rancor in the mobile game Star Wars Galaxy of Heroes. It was nice to get another chance to do it and see my progress as a sound designer over the years.

AB: I had the pleasure of designing the Mogu, which is inspired by the Wampa and is sort of a distant cousin of the species. With that in mind, I wanted to make sure it felt related but could stand out on its own. I started by looking up what Ben Burtt originally did for the Wampa in the original movies and used that as a starting point. As Nick von Kaenel said, we referenced the book The Sounds of Star Wars, and it mentions both elephants and sea lions were used as sources. I started there but quickly began adding other animals, my own voice, and slime recordings I made to fill in gaps in its voice, as the Mogu had a much wider vocal range than the Wampa, which never had much screen time. After a few revisions, most of the main layers of the Mogu were walruses and baboons, while still retaining the same feeling as its cousin.

What dynamics do you bring to implementation to keep the player from noticing repetition?

AB: Repetition is a tricky thing in games: too little variation and the game feels monotonous, but too much and you can lose a lot of the cohesiveness and risk missing out on iconic sounds. For ambiances and sounds that live with the player it’s hard to have enough variation, so we would employ the standard sort of techniques like large looping random containers for emitters and the like. A lot of care was put into making sure we had a large number of ambient creatures, separating them often into categories like birds, mammals, etc., with sometimes over a dozen of each type in a single level. I tried to sprinkle species in different pockets of the levels so that as you progressed through a level the soundscape would subtly shift, with one species becoming more or less represented as you went.

On top of this, we tried to make our ambiances as dynamic as we could. We have a simple but extremely effective wind system that we use to drive different parts of the soundscape. All the wind sounds in the game are sent to a wind bus on which we have a meter running that controls a wind level RTPC. We can then use this RTPC to drive the level of anything from dust sprinkling across a metal surface in the desert to how loud a rickety building might creak. This gives the soundscape an extra level of variation and is reactive in a way the player shouldn’t notice but should feel.
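The meter-to-RTPC hookup itself lives in the Wwise project (a metering effect on the wind bus writing into a Game Parameter), so game code mostly just consumes the result. If another system wanted to react to the same wind level, it could read the Game Parameter back; a hedged sketch follows, with a hypothetical parameter name (the exact query signature varies slightly across Wwise SDK versions):

```cpp
// Hypothetical read-back of the meter-driven wind Game Parameter.
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkQueryParameters.h>

float GetCurrentWindLevel()
{
    AkRtpcValue windLevel = 0.0f;
    AK::SoundEngine::Query::RTPCValue_type scope = AK::SoundEngine::Query::RTPCValue_Global;

    // "Wind_Level" is written by the meter on the wind bus inside Wwise; any sound
    // (dust, creaks) that should follow it simply maps an RTPC curve to it.
    AK::SoundEngine::Query::GetRTPCValue("Wind_Level", AK_INVALID_GAME_OBJECT,
                                         AK_INVALID_PLAYING_ID, windLevel, scope);
    return windLevel;
}
```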

NVK: I’ll just add to what Alex has already said. The sound team did a great job with adding tons of content into the world so that it changes as you progress throughout the game. I think just the fact that you are playing and listening to the game for years on end makes you fairly sensitive to anything that feels repetitive so it’s likely any repetitive sounds will get called out before you ship the game. I also think it’s important to consider how you design the sounds you know will be heard a lot. We like to use lots of tonality in our sounds to make them memorable and stand out, but the exception is sounds that you hear a lot. For sounds that get heard a million times by the player, we will opt for a more noise-based approach to the design which tends to feel less repetitive. Of course this isn’t always the case, as the lightsaber is quite tonal. But you might notice the traversal sound and Force powers are more noise-based for the sounds that get played a lot, and we add more tonality to sounds that are less common such as powered up abilities.

Do you have a favorite creature, in terms of sound design or execution?

AB: My favorite creature I worked on was the Rawka. Its design and implementation were probably the simplest of all the creatures, but I think that was a large part of why it sounds as good as it does. 95% of its voice is just me bowing a plastic gift card; I just had to add a little extra saliva and beak-clicking sounds to round out the rest of the sound. Bowing the sounds myself rather than using library sounds let me add as much variation as possible, which ended up giving me a ton of different sounds to pick and choose from. In the same container I could have very different-sounding variations as long as they had similar emotional responses. The hardest part of designing the Rawka was simply choosing which sounds went with which animations.

NVK: I love the design Alex did on the Rawka too. As far as a creature I worked on, I think I’d have to say the Nekko is my favorite. I focused on creating a memorable bleat sound that you hear when you call it. It also makes use of the breathing system and ramps up to quicker and more intense breaths/grunts the longer you are riding on it.

Were you able to leverage the design of the Force from other games and media as a starting point for creating powers that already exist interactively?

NVK: There’s a lot of great sounding Star Wars games out there. I’ve always been especially fond of the work done on Star Wars Battlefront (2015). I remember being blown away when I heard that game for the first time, as I couldn’t remember the sound of Star Wars being reimagined so well in a video game before. Their success with translating the sounds from the film to a game is still one of the most impressive game audio achievements to date.

With the Force powers, we had a lot of freedom with how we designed the sounds. A lot of foundational design work for the Force powers was done by Kevin Notar, and we continued to build on and add to the style as we developed the character of Cal and the sound of his abilities.

Force Powers often operate on a large scale of destruction. What special considerations did you have when it came to Physics objects and their sounds?

AB: Physics objects, for me at least, are probably one of the most challenging types of sounds to get right. So much of what makes a physics object sound good is in the implementation and the mix, and it’s almost more about feel than the actual sound itself.

Impacts are organized as a hierarchy of nested switches. Switch one is driven off of the velocity RTPC of the objects, which we use to swap sounds based on intensity. That RTPC is also used to ride the volume of the sound for more granularity. The second switch determines whether or not the player caused the impact. This is to deal with the fact that the player's impact velocity has a multiplier applied to make it easier and more fun to push things around as the player, but that in turn can affect the intensity of the sounds.
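Put together, the runtime side of an impact might look something like this (event, RTPC, and Switch names are hypothetical; the intensity Switch is mapped to the velocity Game Parameter inside Wwise):

```cpp
// Hypothetical physics impact trigger with intensity and instigator switching.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void OnPhysicsImpact(AkGameObjectID propObj, float impactVelocity, bool causedByPlayer)
{
    // Drives the intensity Switch (via its Game Parameter mapping) and rides volume.
    AK::SoundEngine::SetRTPCValue("Impact_Velocity", impactVelocity, propObj);

    // Player-caused impacts carry a velocity multiplier, so they get their own branch
    // to compensate for the exaggerated intensity.
    AK::SoundEngine::SetSwitch("Impact_Instigator",
                               causedByPlayer ? "Player" : "World", propObj);

    AK::SoundEngine::PostEvent("Play_Physics_Impact", propObj);
}
```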

For asset creation of physics sounds, quantity can often be quality, and making each sound as different as possible from the next variation helps more than any type of processing I’ve found. For example, when I started on the hanging chains in the interior section of Dredger Gorge, my first pass had all the more intense impact sounds with similar lengths and cadences in the “bounces” of the chain. This ended up being too similar and boring, so after some feedback, I edited all the impacts to have different-sounding transients and “syllables” by doing some clever editing and re-recording.

For slides and rolls, we had different events and different RTPCs for each, but both were treated more or less the same. The RTPC would change volume and filtering, and we would use switches to swap the loops based on surfaces. Some objects were deemed high enough priority to have these surfaces bespoke and baked into the asset, but often this was achieved additively with a main “object” loop and a “surface” loop playing at the same time, which we can share with other physics objects.

There are various design and puzzle elements across the game that require bespoke sound design and implementation. Are there examples of creating engaging sound in conjunction with these elements that reinforces the gameplay while providing the reward of sound once completed?

AB: In the Cantina, we have a number of music tracks that play jukebox-style while you’re inside it. One of the goals of the team was to make the Cantina feel like it’s getting fixed up as you progress through the game and meet all the different characters, so along with the music team we processed the Cantina music tracks depending on the Cantina’s state.

At the beginning of the game the Cantina is run down, so all the music goes through a bunch of processing to sound tinny and lo-fi, as if played through broken speakers. After recruiting Ashe and DD-EC, however, they fix up the Cantina and its speakers; the music then goes through a much better set of speakers and the processing is much more true to the original mixes. When you interact with the jukebox UI, the processing goes away and you hear the normal mixes unprocessed.
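One straightforward way to model this at runtime (the State Group names below are hypothetical) is to key the speaker processing on the music bus off a Cantina condition State, with a separate State for the jukebox UI that bypasses it:

```cpp
// Hypothetical States controlling the Cantina music's speaker processing.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void OnCantinaSpeakersRepaired()
{
    // The lo-fi "broken speaker" chain is active while the condition is "Run_Down".
    AK::SoundEngine::SetState("Cantina_Condition", "Repaired");
}

void OnJukeboxUIToggled(bool open)
{
    // In the jukebox menu the tracks play unprocessed, true to the original mixes.
    AK::SoundEngine::SetState("Jukebox_UI", open ? "Open" : "Closed");
}
```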

Were there any Jedi Meditation Chambers puzzles that were challenging from a sound design standpoint? Some level of dynamic or otherwise unique in implementation?

AB: One issue I had when working on all the systemic Koboh Tech puzzle elements was getting the bridges to sound right. Getting the ambient idle of the bridges to sit right in the mix was particularly tricky. After trying some techniques like moving the sounds along a spline, I found that positionally it always felt incredibly off when getting near the bridge, and given that you have to walk on it, this became a problem. I ended up going with a multi-positional emitter solution for the loop. When the bridge extends, some math is done to figure out how many emitters are needed, and it positions them equidistant across the length of the bridge. This way, no matter where you stand on the bridge, the sound always feels as if it is emanating from the bridge itself.
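Here is a sketch of that multi-positional setup using Wwise's multi-position API; the event name and spacing are hypothetical. Emitter positions are spaced evenly along the extended bridge and all assigned to one game object playing the idle loop.

```cpp
// Hypothetical multi-positional emitter placement along an extended bridge.
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <cmath>
#include <vector>

void StartBridgeIdle(AkGameObjectID bridgeObj, AkVector start, AkVector end,
                     float spacingMeters = 3.0f)
{
    const float dx = end.X - start.X, dy = end.Y - start.Y, dz = end.Z - start.Z;
    const float length = std::sqrt(dx * dx + dy * dy + dz * dz);
    const int   count  = static_cast<int>(length / spacingMeters) + 2;  // both ends included

    std::vector<AkSoundPosition> positions(count);
    for (int i = 0; i < count; ++i)
    {
        const float t = (count > 1) ? static_cast<float>(i) / (count - 1) : 0.0f;
        positions[i].SetPosition(start.X + dx * t, start.Y + dy * t, start.Z + dz * t);
        positions[i].SetOrientation(0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f);  // front, top
    }

    // MultiSources: every point is an equal source of the same loop, so the sound feels
    // like it emanates from the bridge itself no matter where the player stands on it.
    AK::SoundEngine::SetMultiplePositions(bridgeObj, positions.data(),
                                          static_cast<AkUInt16>(positions.size()),
                                          AK::SoundEngine::MultiPositionType_MultiSources);

    AK::SoundEngine::PostEvent("Play_Bridge_Idle", bridgeObj);
}
```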

The Audio of Star Wars Jedi: Survivor | Wwise Tour Hilversum 2023

We were also lucky enough to be joined by Nick von Kaenel, Alex Barnhart, and Colin Grant, who revealed how they used Wwise to bring this epic game to life during our Wwise Tour in Hilversum in 2023.

 

Respawn Entertainment

Founded in 2010 by the original creators of the Call of Duty Franchise, Respawn was created with the philosophy that when talented people have creative freedom, they’ll make extraordinary games that achieve the unexpected. From our roots as an indie studio to joining the expansive roster of studios at Electronic Arts, this remains our guiding principle. We truly love what we do and want to share our passion with players worldwide.


