
Positioning Tips and Best Practices

Before defining the positioning for your objects in Wwise, you may want to review the following sections, which provide you with a series of examples, tips, and best practices that can help you better manage the positioning of your objects in game.

Positioning - example (Part 2)

Now that the different positioning options available in Wwise have been described in detail, let's see how they can be used to define the positioning for the sounds and motion effects in the first-person game introduced in Positioning - Example (Part 1).

  • Footsteps: Since this is a first-person game, the footstep sounds of the main character will always be attached to the camera. Because there is no movement and no attenuation for these sounds, basic speaker panning using Direct Assignment is appropriate in this case. For the other agents, however, you will need to match the footstep sounds to their movement by attaching the sounds to the “agent” game objects. 3D spatialization with Emitter positioning would be appropriate in this case, but no attenuation is necessary.

  • The torches that light up the enemy's jungle base: These sounds will be attached to the torch game objects. Although they are fixed in one place, the location of the sound emitter and its distance from the microphone will change as the player moves. To simulate this type of sound, you can use 3D spatialization with an Emitter position and attenuation.

  • A group of terrorists talking in a hut: These sounds will be attached to the terrorist game objects, which can move freely within the game environment. To simulate this type of sound, you could use 3D spatialization with an Emitter position and attenuation.

  • A mosquito buzzing overhead: The mosquito can be heard buzzing around, but cannot be seen. Since the sound emitter must move within the 3D space, 3D spatialization using either an Emitter with Automation or a Listener with Automation position would be appropriate in this case. A series of randomly played back sound paths using both spatialization and attenuation can create very realistic insect sounds. With the Listener with Automation option, no actual game object is needed, but the buzzing will follow the player around. With the Emitter with Automation option, a mosquito game object would instead define an area, perhaps around a stagnant pond, where the buzzing would be heard.

  • Updates received from headquarters: The communication received from headquarters is not associated with any particular game object and does not move within the surround environment; therefore, speaker panning with Direct Assignment would be appropriate in this case. Since the updates are crucial to the mission, you may also want to route some or all of these sounds through the center speaker.

  • The whispered communication between special agents on this mission: The teammates' whispered voices will be attached to their respective game objects, so 3D spatialization would be appropriate for these sounds. The agents will be moving around one another requiring some kind of spatialization, but since the agents must work together as a team, the communication between them will not require any attenuation. The communication between teammates is crucial to the mission, so you may also want to route some or all of these sounds through the center speaker.

  • The detonation of explosives used to destroy the base after the mission has been successfully completed: The detonation of the explosives will be heard and felt by the operatives. These sound and motion objects will be attached to the explosives game objects. Although they are fixed in one place, the location of the sound/motion emitter and its distance from the listener will change as the player moves. To simulate this type of effect, you can use 3D spatialization with an Emitter position and attenuation.

  • The constant rumbling of the island's volcano: The rumbling of the volcano is a constant sound and motion effect on this remote island. Both the sound and motion object would most likely be attached to the “island” game object. Some attenuation would make the rumbling appear louder or more intense as the players move closer to the island. Since there is no movement to the sound or motion, spatialization would not be necessary in this case.

  • The final enveloping eruption of the volcano: The explosions trigger a massive eruption of the volcano, so the final scene plays out to the escaping player sitting in the back of a piloted transport helicopter. The eruption makes for a powerful ambient sound, which can be 3D spatialized, along with some attenuation, by using the Listener with Automation position option. One or more paths could be created in the Position Editor (3D Automation) to reflect how the helicopter, and thus the player, weaves up, down, around, and ultimately past the volcano crater, the epicenter of the sound, while dodging flying debris in a maelstrom of smoke. We would enable the Hold Listener Orientation option so that the eruption sounds move through different speakers to reflect the position of the player (the listener). Assuming a multi-speaker setup, one could also select Position + Orientation as the 3D Spatialization option to provide the added realism of the combined shifting helicopter orientation and attenuation spread of the eruption.

  • The interactive music: Since the music is not associated with any particular game object and requires no movement within the surround environment, speaker panning would be appropriate. For our example, we want to use Balance-Fade on some of the Music Tracks so that the music is balanced between the front and rear speakers.

[Tip] Transition from 3D Spatialization to Speaker Panning

In our example, we could imagine that the final volcanic eruption eventually moves to the end of the scene with some celebratory music. To make the transition between the two sounds smooth, we would set the Speaker Panning / 3D Spatialization Mix with an RTPC curve that would gradually take us from 100, full 3D Spatialization, down to 0, full Speaker Panning.
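The RTPC curve described above can be sketched as a simple mapping from a game parameter to the mix value. The function below is a minimal illustration, assuming a hypothetical 0-1 "scene progress" parameter; the name and shape are ours, not part of the Wwise SDK:

```cpp
#include <algorithm>

// Hypothetical helper: maps a 0-1 scene-progress game parameter to the
// Speaker Panning / 3D Spatialization Mix value used by Wwise (0-100).
// 100 = full 3D Spatialization, 0 = full Speaker Panning.
// The linear shape mirrors the RTPC curve described in the tip above.
double MixFromSceneProgress(double progress)
{
    progress = std::clamp(progress, 0.0, 1.0);
    return 100.0 * (1.0 - progress); // 100 -> 0 as the eruption gives way to music
}
```

In game code, the resulting parameter would typically be pushed to Wwise with AK::SoundEngine::SetRTPCValue, and the curve itself drawn in the RTPC tab of the Property Editor.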

Refer to the following table for a complete overview of the positioning options that could be used to create the different sounds in this example.

(Direct Assignment and Balance-Fade are Speaker Panning options; Emitter, Emitter with Automation, and Listener with Automation are 3D Spatialization options.)

Sound               | Direct Assignment | Balance-Fade | Attenuation | Emitter | Emitter with Automation | Listener with Automation
--------------------|-------------------|--------------|-------------|---------|--------------------------|--------------------------
Agent's footsteps   | X                 |              |             | X       |                          |
Torches             |                   |              | X           | X       |                          |
Terrorists talking  |                   |              | X           | X       |                          |
Mosquito buzzing    |                   |              | X           |         | X                        | X
Updates from HQ     | X                 |              |             |         |                          |
Agent communication |                   |              |             | X       |                          |
Explosions          |                   |              | X           | X       |                          |
Rumbling of volcano |                   |              | X           |         |                          |
Eruption of volcano |                   |              | X           |         |                          | X
Interactive music   |                   | X            |             |         |                          |

This example describes one way to create different types of positioning and propagation using the different options available in Wwise. The options you choose will depend on the sounds and motion effects themselves, the game you are creating, and the specific effect you are trying to achieve.

Performance Optimizations

  • Use mono sounds when not using the Spread attenuation curve. If you are not planning to use the Spread curve to widen your audio signal, use mono sounds to optimize performance. When spread is not used, all the input channels of a stereo sound are mapped to the same position and must be rendered dynamically, whereas with mono sounds, the operation is done offline and takes no CPU during gameplay.

  • Reuse or reduce the number of curves in the Attenuation Editor to improve performance. Keep in mind that the more curves you create in the Attenuation Editor, the more processing power and memory is used. To improve performance, you can either reuse the Output Bus Volume curve (for Auxiliary Send Volumes) or not use a curve at all.

  • Use a small number of points and linear curve segments to improve performance. Keep in mind that the more points you add along the curve and the more complex the curve shape, the more processing power and memory is used. In most cases, a curve with two or three points using linear segments will be sufficient to get the attenuation results you need.
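The two- or three-point linear curve recommended above can be sketched as a simple interpolation. This is an illustration of the curve shape only, assuming full volume at the emitter and silence at the curve's Max distance; the function is not the Wwise implementation:

```cpp
// Minimal sketch of a two-point linear attenuation curve: gain 1.0 at the
// emitter, gain 0.0 at maxDistance, with a single straight segment between.
double LinearAttenuationGain(double distance, double maxDistance)
{
    if (distance <= 0.0)         return 1.0; // full volume at the emitter
    if (distance >= maxDistance) return 0.0; // silent at/beyond Max distance
    return 1.0 - distance / maxDistance;     // straight segment between the two points
}
```

Each additional point and each curved segment shape adds evaluation cost, which is why a short linear curve like this one is usually enough.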

  • Share Attenuation property settings using ShareSets. If several of the objects within your game have similar attenuation properties, you can share these property settings using a ShareSet. By sharing the attenuation property settings, you can save on both memory and time to make changes to the attenuation properties.

  • Use the Positioning Type RTPC to reuse sounds for similar purposes. For example, the player's footsteps can be set to 2D and the enemies' footsteps set to 3D, while both use the same sound hierarchy. This can save a lot of memory.

Overview of specific positioning scenarios

Let's take a look at some specific scenarios to give you a better understanding of how the different positioning and attenuation settings work in Wwise.

[Note] Note

By default, no sounds are played through the center speaker. To route any portion of a signal through the center speaker, use the Center % property slider.

Scenario 1

  • Listener Relative Routing: Enabled

  • Attenuation: None

  • 3D Spatialization: None

Result: These settings effectively give you the same positioning as a 2D sound.

Scenario 2

  • Listener Relative Routing: Enabled

  • Attenuation: Simple linear curve

  • 3D Spatialization: None

Result: These settings effectively give you a sound that will grow quieter as the listener moves away from the sound source (attenuation), but the sound will always be placed exactly the same as it appears in the original sound asset without any positioning or rotation applied (spatialization).

Scenario 3

  • Listener Relative Routing: Enabled

  • Attenuation: None

  • 3D Spatialization: Position

Result: These settings effectively give you a sound that originates from a specific location (spatialization), but where the volume never attenuates, no matter how far the listener gets from the sound source (attenuation).

Scenario 4

  • Listener Relative Routing: Enabled

  • Attenuation: Simple linear curve

  • Spread: Simple linear curve

  • 3D Spatialization: Position

  • Sound source: Mono

Result: These settings effectively give you the following:

  • When the listener is far from the sound source, the sound originates from a specific location (spatialization), is at a reduced volume (attenuation), and is played mostly in one speaker (spread).

  • When the listener is close to the sound source, the sound originates from a specific location (spatialization), is nearly at full volume (attenuation), and nearly distributed equally across both speakers (spread).
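The spread behavior in this scenario can be illustrated with a small sketch: as the spread value grows, the signal is represented by several virtual sources fanned over an arc around the listener, each scaled so that the total power stays constant. This shows the constant-power principle only, not Wwise's actual algorithm; all names here are hypothetical:

```cpp
#include <cmath>
#include <vector>

struct VirtualSource { double angleRad; double gain; };

// Illustrative sketch: distribute `count` virtual sources evenly over an arc
// centered on the real source direction. Each source gets gain 1/sqrt(count)
// so the summed power (sum of squared gains) of all sources stays 1.0.
std::vector<VirtualSource> SpreadVirtualSources(double centerAngleRad,
                                                double arcRad, int count)
{
    std::vector<VirtualSource> sources;
    const double gain = 1.0 / std::sqrt(static_cast<double>(count)); // equal power split
    for (int i = 0; i < count; ++i)
    {
        // Evenly space the virtual sources across the arc.
        double t = (count == 1) ? 0.5 : static_cast<double>(i) / (count - 1);
        sources.push_back({ centerAngleRad - arcRad / 2.0 + t * arcRad, gain });
    }
    return sources;
}
```

A wider arc (higher spread) means more of the listener's surroundings carry the sound, while the per-source gain drops to keep the overall level unchanged.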

Scenario 5

  • Listener Relative Routing: Enabled

  • Attenuation: Simple linear curve

  • Spread: Simple linear curve

  • 3D Spatialization: Position

  • Sound source: Stereo

Result: These settings effectively give you the following:

  • When the listener is far from the sound source, the sound originates from a specific location (spatialization) and is at a reduced volume (attenuation). For a stereo source that is spatialized without spread, both channels are folded down to create a mono 'point source'. For this reason, we recommend using mono files when no spread is used, as they are more CPU-efficient.

  • When spread is used, new “virtual sources” are defined that are offset from the original source. For example, for small spread values, a virtual source will be computed to the left and to the right of the real position and their contribution will be added to the speakers, in exactly the same way as the normal no-spread sounds, only in a slightly different position.

    As the spread value increases, there will be more of those virtual sources to cover a larger arc around the listener. Obviously, the power of those sources is lower than the real source to maintain the total power constant.

    [Note] Note

    Note that these sources are used for volume computation only, and no new sounds are actually played.

  • When the listener is close to the sound source, the sound originates from a specific location (spatialization), and is nearly at full volume (attenuation). When used with a high spread value, the sound will come from all directions. The left and right channels of a stereo sound will be spread separately.

  • The case where distance = 0 requires special attention. In Wwise, all spatialization computations (and cone attenuation) are based on angles. When distance = 0, Wwise can't determine whether the listener is facing front, left, right, and so on. You should avoid letting this scenario happen in the context of your game. If it does happen during gameplay, Wwise simply creates a mono version of the stereo sound to avoid computing out-of-range volumes. The same logic applies to the cone attenuation: if the orientation of the listener is unknown, Wwise assumes there is no attenuation at all. The same is true for the Cone LPF.
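One way to avoid the distance = 0 case is a defensive check in game code that keeps an emitter from landing exactly on the listener, so Wwise always has a well-defined angle to work with. The sketch below is illustrative; the vector type and function are ours, not part of the Wwise SDK:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Ensure the emitter is at least minDist away from the listener before the
// position is passed to the sound engine. If the two coincide exactly, the
// direction is undefined, so an arbitrary axis is used.
Vec3 NudgeEmitterAwayFromListener(Vec3 emitter, Vec3 listener, double minDist)
{
    const double dx = emitter.x - listener.x;
    const double dy = emitter.y - listener.y;
    const double dz = emitter.z - listener.z;
    const double dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (dist >= minDist)
        return emitter; // already far enough away
    if (dist == 0.0)
        return { listener.x + minDist, listener.y, listener.z }; // arbitrary direction
    const double scale = minDist / dist; // push out along the existing direction
    return { listener.x + dx * scale, listener.y + dy * scale, listener.z + dz * scale };
}
```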

