Wwise SDK 2019.2.15
Spatial Audio

The Spatial Audio module exposes a number of services related to spatial audio, notably to:

  • compute image sources for Reflect for a given geometry;
  • model sound propagation from Rooms and Portals by controlling 3D busses;
  • model diffraction of obstructed sound sources across geometric edges; and,
  • conveniently access the raw API of Wwise Reflect.

Under the hood, it:

  • controls 3D busses by managing game objects and their properties (positions, auxiliary sends, obstruction, and occlusion);
  • controls (multi-) positions and aux sends of spatial audio game objects;
  • runs geometric sound reflection and diffraction algorithms; and
  • packages data for Wwise Reflect.

It is a game-side SDK component that wraps a part of the Wwise sound engine, as shown in the following flowchart.

[Flowchart: Spatial Audio data flow between the game and the Wwise sound engine]

Spatial Audio Concepts

The following paragraphs provide a quick overview of the fundamental acoustic concepts related to Spatial Audio:

Diffraction

Diffraction occurs when a sound wave strikes a small obstacle, or the edge of a large obstacle or opening, and bends around it. It represents sound that propagates through openings (portals) and towards the sides, meaning that a listener does not need to be directly in front of the opening to hear it. Diffraction is usually very important in games because it gives players a hint about the paths that exist between them and sound emitters.

The following figure is a sound field plot of a plane wave coming from the top right and hitting a finite surface (the black line) that starts in the center of the figure. The perturbation caused by this edge is called diffraction. The region on the left is the View Region, where the plane wave passes through unaltered. The region on the top right is the Reflection Region, where reflection off the surface mixes with the incident wave, resulting in the jagged pattern. The region on the lower right is the Shadow Region, where diffraction plays a significant role. This figure is only a coarse approximation; in real life the field is continuous at the region boundaries, and edge diffraction occurs in the View Region as well, although it is generally negligible compared to the incident wave itself.

We see that the edge can be considered as a point source, with amplitude decreasing with distance. Also, the amplitude of higher frequencies decreases faster than that of lower frequencies, which means that it can be adequately modeled with a low-pass filter. Wwise Spatial Audio models diffraction via two of its APIs. Refer to Rooms and Portals' Diffraction to understand how Rooms and Portals lets you model portal diffraction, and to Using the Geometry API for Simulating Diffraction and Transmission to understand how geometry may be used to model diffraction of emitters and their early reflections.

Transmission

Sound transmission is another relevant acoustic phenomenon that is modeled within Wwise Spatial Audio. Transmission describes sound energy passing through an obstacle, and the term transmission loss describes the proportion of that energy that is dissipated by the obstacle. This is not to be confused with absorption, which describes the proportion of energy dissipated by a reflected sound wave. While the interactions that occur at the interface of two media can be quite complex, the ratio of reflected vs. absorbed energy can be thought of as being defined by the properties of the surface of a material, whereas transmitted energy vs. transmission loss are related to the size, shape and density of an obstacle.

When dealing with obstacles made of a dense material, such as concrete, the proportion of energy that reaches the listener through transmission can be quite small when compared to diffraction, particularly when there are openings nearby. However, if no such openings exist, or if the obstacle is made of a less dense material, such as wood or glass, the contribution of transmission becomes significant and is important to simulate.

Room Coupling

After sufficient time, a sound emitter produces a diffuse field that depends on the acoustic properties of the environment it is in. In games, this is typically implemented using reverb effects with parameters that are tweaked to represent the environment with which they are associated. Diffuse fields also make their way across openings and through walls until they reach the listener, where they excite the listener's environment. Room coupling refers to the transfer of acoustic energy, also known as reverberation, from one environment or room to another. Games typically model this by feeding the output of the reverb of a room into the reverb of another room.

Obstruction and Occlusion

Obstruction represents a broad range of acoustic phenomena, and refers to anything that happens when a sound wave strikes an obstacle. Occlusion is similar but implies that sound cannot find its way around an obstacle. The Wwise sound engine lets games set Obstruction and Occlusion values on game objects, which are mapped to a global set of volume, low-pass filter, and high-pass filter curves. The difference between the two is that Obstruction affects only the dry/direct signal between an Actor-Mixer or bus and its output bus, whereas Occlusion also affects the auxiliary sends. Obstruction, therefore, better emulates obstruction by obstacles when the emitter and listener are in the same room, whereas Occlusion is better suited to modeling transmission through closed walls.

API Overview

The Spatial Audio functions and definitions can be found in SDK/include/AK/SpatialAudio/Common/. Its main functions are exposed in namespace AK::SpatialAudio. There are 4 API categories:

  • Basic Functions
  • Rooms and Portals API
  • Geometry API
  • Helper functions for accessing Wwise Reflect directly ("raw" image sources)

The Rooms and Portals API is a simple, high-level geometry abstraction used for modeling propagation of sound emitters located in other rooms. The Geometry API uses triangles directly to compute image sources for simulating dynamic early reflections with Wwise Reflect, or to compute geometric diffraction. Spatial Audio also exposes helper functions for accessing the raw API of Wwise Reflect directly.

Basic Functions

Initialize Spatial Audio using AK::SpatialAudio::Init().

When using spatial audio, a single game object must be explicitly assigned as the Spatial Audio Listener. To do so, call AK::SpatialAudio::RegisterListener(), passing in the ID of the desired listener. The game object must also be registered and assigned as a listener in the sound engine. For more information on Listeners in the Sound Engine, refer to Integrating Listeners.
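
As a minimal sketch, assuming the sound engine itself is already initialized and using an illustrative game object ID:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include <AK/SpatialAudio/Common/AkSpatialAudio.h>

    // Illustrative ID; Room IDs share this scope, so keep them distinct.
    static const AkGameObjectID LISTENER_ID = 100;

    bool InitSpatialAudioListener()
    {
        // Initialize Spatial Audio with default settings; the sound engine
        // itself must already be initialized at this point.
        AkSpatialAudioInitSettings spatialSettings;
        if (AK::SpatialAudio::Init(spatialSettings) != AK_Success)
            return false;

        // Register the game object and make it the sound engine listener...
        AK::SoundEngine::RegisterGameObj(LISTENER_ID, "Listener");
        AK::SoundEngine::SetDefaultListeners(&LISTENER_ID, 1);

        // ...then assign it as the Spatial Audio Listener as well.
        return AK::SpatialAudio::RegisterListener(LISTENER_ID) == AK_Success;
    }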

A game object becomes a Spatial Audio Emitter when it plays a sound that has one or more settings relating to spatial audio enabled in the authoring tool:

  • To enable room reverb, the sound must have the Use game-defined auxiliary sends check box enabled in the General Settings tab.
  • To enable reflection processing, the sound must have an Early Reflections bus assigned in the General Settings tab.
  • To enable diffraction processing, the sound must have the Enable Diffraction box checked in the Positioning tab.

The position of a game object, be it an emitter or a listener, is passed to the sound engine using AK::SoundEngine::SetPosition. Spatial Audio will retrieve the position information directly from the Sound Engine to determine the source position for reflections and diffraction processing.
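
For example (EMITTER_ID is an illustrative, already-registered game object; the orientation vectors must be normalized and orthogonal):

    // EMITTER_ID is an illustrative, already-registered game object.
    static const AkGameObjectID EMITTER_ID = 101;

    AkSoundPosition soundPos;
    soundPos.SetPosition(10.0f, 0.0f, 5.0f);      // world-space coordinates
    soundPos.SetOrientation(0.0f, 0.0f, 1.0f,     // front vector (normalized)
                            0.0f, 1.0f, 0.0f);    // top vector (normalized)
    AK::SoundEngine::SetPosition(EMITTER_ID, soundPos);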

Warning: At the moment, Spatial Audio only supports one top-level listener.
Warning: In most cases it is not desirable to use multi-positioning with Spatial Audio, since portal interpolation, diffraction and reflection processing require a single source position. If multiple positions are set on a game object, only the first position will be used for Spatial Audio calculations.

Using the Geometry API

The Geometry API allows the game to send a triangle mesh to Wwise Spatial Audio, for two purposes: simulating early reflections with Wwise Reflect (see Using the Geometry API for Simulating Early Reflections), and simulating diffraction and transmission (see Using the Geometry API for Simulating Diffraction and Transmission).

Describing Geometry

The game's geometry is passed to Wwise Spatial Audio via AK::SpatialAudio::SetGeometry(), and is described by the AkGeometryParams structure. Here is a high-level view of what it consists of:

  • Vertices (AkVertex) are defined in an array, AkGeometryParams::Vertices, which is separate from the triangles, and each triangle in the triangle array AkGeometryParams::Triangles references indices in the vertex array.
  • Each triangle also includes an index to an AkAcousticSurface structure, which defines the acoustic texture and description string.
  • The correspondence between acoustic surfaces and triangles is up to the user. For example, the user may choose to have one surface for each triangle, a single surface for all triangles, or something in between.
  • Defining acoustic surfaces is optional. If you do not need to customize the acoustic properties of surfaces, each triangle's surface index may be left as AK_INVALID_SURFACE, NULL may be passed as AkGeometryParams::Surfaces, and AkGeometryParams::NumSurfaces set to zero.
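
A rough sketch of such a geometry set follows; the IDs, coordinates, and the "Brick" Acoustic Texture name are illustrative, not part of the SDK:

    // Four vertices shared by two triangles (welded, world-space coordinates).
    AkVertex verts[4];
    verts[0].X = 0.0f;  verts[0].Y = 0.0f; verts[0].Z = 0.0f;
    verts[1].X = 10.0f; verts[1].Y = 0.0f; verts[1].Z = 0.0f;
    verts[2].X = 10.0f; verts[2].Y = 4.0f; verts[2].Z = 0.0f;
    verts[3].X = 0.0f;  verts[3].Y = 4.0f; verts[3].Z = 0.0f;

    // Two triangles forming a wall; both reference the same acoustic surface.
    AkTriangle tris[2];
    tris[0].point0 = 0; tris[0].point1 = 1; tris[0].point2 = 2; tris[0].surface = 0;
    tris[1].point0 = 0; tris[1].point1 = 2; tris[1].point2 = 3; tris[1].surface = 0;

    // Optional: one acoustic surface for the whole mesh. "Brick" is assumed
    // to be an Acoustic Texture ShareSet defined in the Wwise project.
    AkAcousticSurface surface;
    surface.textureID = AK::SoundEngine::GetIDFromString("Brick");
    surface.strName = "Wall";

    AkGeometryParams params;
    params.Vertices = verts;    params.NumVertices = 4;
    params.Triangles = tris;    params.NumTriangles = 2;
    params.Surfaces = &surface; params.NumSurfaces = 1;

    AkGeometrySetID geomSetID = 1; // illustrative, game-managed ID
    AK::SpatialAudio::SetGeometry(geomSetID, params);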

How to create a triangle mesh for spatial audio

Generally, any triangle mesh can be used with Wwise Spatial Audio; however, there are a number of important considerations.

  • The mesh should be as simple as possible. Acoustic calculations involve a large number of ray tracing operations and can get expensive. The smallest number of triangles that can be used to represent a scene is best.
  • All triangles are double-sided. Acoustic reflections occur off of both sides of the triangle, therefore it can be beneficial to create meshes that have no interior volume. For example, a wall could be a plane rather than a box.
  • Duplicate vertices should be “welded” together in order to create a continuous mesh. In other words, two connected triangles should reference the same two vertices in the vertex array. This is important for diffraction calculations, otherwise it is possible for sound to leak through the mesh.
  • All vertex coordinates are in world space.
  • Meshes that are sent to spatial audio via multiple calls to set geometry cannot reference the same vertices, and therefore are not considered connected or continuous. In this case, diffraction edges will not be generated for edges that span multiple meshes.
  • Meshes must not have any degenerate triangles. All triangles must have an area greater than zero.
  • Additional rules apply if you wish to use Geometric Diffraction with your geometry. Refer to Using the Geometry API for Simulating Diffraction and Transmission for more details.

Using the Geometry API for Simulating Early Reflections

The Geometry API uses emitter and listener positions, and triangles of your game's (typically simplified) geometry, in order to compute image sources for simulating dynamic early reflections, in conjunction with the Wwise Reflect plug-in. Sound designers control how image sources translate into audible reflections directly in Wwise Reflect, by tweaking properties based on distance and materials.

For an introduction to geometry-driven early reflections (ER for short), refer to our blogs Image Source Approach to Dynamic Early Reflections and Creating compelling reverberations for virtual reality.

Note: The reflection order refers to the number of surfaces hit by a wavefront before reaching the listener. For example, in a shoebox-shaped room with six surfaces, simulating first order reflections means that there will be six early reflections, also known as image sources, per emitter. Simulating second order reflections means that there will be the six first order reflections, plus six times five second order reflections (one per remaining wall for each first order reflection), for a total of 36 reflections per emitter. The number of reflections grows exponentially with the order.

Wwise Spatial Audio currently supports simulating up to fourth order reflections. The reflection order is set globally, with the AkSpatialAudioInitSettings::uMaxReflectionOrder init setting. It can also be changed dynamically with AK::SpatialAudio::SetReflectionsOrder.
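
For example, a minimal sketch, assuming the two-argument form (order, update-paths flag) from the 2019.2 headers:

    // Lower the global reflection order to 1 to save CPU; passing true
    // requests that existing propagation paths be recomputed immediately.
    AK::SpatialAudio::SetReflectionsOrder(1, true);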

Wwise project setup

For each sound that should support dynamic early reflections, make sure an early reflections bus is assigned under the General Settings tab in the Wwise Authoring Tool to indicate the Auxiliary Bus that hosts the desired Wwise Reflect plug-in. Spatial Audio will establish a special aux send connection to this bus. You may also set a send volume.

Typically, for environmental reverbs, the Auxiliary Bus instance is created on the listener game object, allowing multiple emitters to share the same bus and Effect instance. While this is still true for the Room busses used by Spatial Audio for late reverb, it is not the case for early reflections, because each emitter has its own set of reflections that depend on its unique position. Instead, the ER bus instance is created on the emitter game object, and different emitters send to different instances of the Auxiliary Bus. This is illustrated in the screenshot of the Voices Graph in 'Wwise project setup', seen below.

You need to understand the following aspects of the bus structure design for handling dynamic environmental effects in your Wwise project effectively.

Attenuation Design

When designing attenuation curves on sounds used with Spatial Audio, there is an important consideration to ensure efficient computation: sounds that are assigned an early reflections Auxiliary Bus, and sounds marked with Enable Diffraction in the Authoring tool, must be assigned an attenuation with a finite radius in order to limit the computation of paths.

Spatial Audio uses the attenuation of a sound to determine the maximum possible propagation distance for both reflection and diffraction path calculations, so it is important to make sure the maximum attenuation distance is a representative value. Furthermore, if the attenuation does not go below the platform's specified volume threshold, then the sound's radius is effectively infinite. In this case, Spatial Audio will attempt to calculate reflections and diffraction regardless of where the listener is placed in the world. Both the Output Bus volume and the Auxiliary send volume curves must have their final point, on the far right side of the curve, below the volume threshold to ensure that Spatial Audio calculations are limited to a finite radius around the emitter Game Object. Note that the volume threshold is defined in the Project Settings dialog, which can be found in the Project menu in the Authoring Tool.

Note: If a Game Object has multiple active sounds playing, each with different assigned attenuations, the largest of all attenuation radii is used to limit path processing. Path processing is only ever performed once per Game Object; the paths are then reused amongst multiple sounds if necessary.

Auxiliary bus design

Typically, different Auxiliary Busses are used to represent different environments, and these busses may host different reverb ShareSets that emulate the reverberating characteristics of these environments. When using dynamic ER, such as those processed by Wwise Reflect under Spatial Audio, late reverberation may still be designed using reverbs on Auxiliary Busses. However, you may want to disable the ER section of these reverbs (if applicable), as this should be taken care of by Wwise Reflect.

On the other hand, Wwise Reflect should run in parallel with the aux busses used for the late reverberation. The figure below shows a typical bus structure, where the three Auxiliary Busses under the EarlyReflections bus each contain a different ShareSet of Wwise Reflect. You will note that in this design, we only use a handful of ShareSets for generating early reflections. This is motivated by the fact that the "spatial aspect" of this Effect is driven by the game geometry at run-time. We only use different ShareSets here because we want different attenuation curves for sounds emitted by the player (listener) than those emitted by other objects.

Bus instances

The ER bus (hosting Wwise Reflect) will exist in as many instances as there are game objects currently playing sounds with an assigned ER bus. This is important because the location of image sources depends on the emitter's position. To correctly set up the routing of the ER bus, you need to enable the Listener Relative Routing check box, as shown in the image below. By doing this, the signal generated by the various instances of the ER bus will be properly mixed into a single instance of the next mixing bus downstream. This single instance corresponds to the game object that is listening to this emitter (set via AK::SoundEngine::SetListeners), which is typically the final listener corresponding to the player (or camera).

If different sounds playing on the same game object are assigned different early reflections aux busses, then multiple instances of the bus will be created on the same emitter game object. The reflection calculations that are performed by Spatial Audio will still only be done once per game object; however, the results will be sent to each unique instance of the Wwise Reflect plug-in. By doing so, users can customize reflection curves per sound by using different ShareSets of the plug-in.

Warning: Although the ER bus's Listener Relative Routing must be enabled in order to ensure that all its emitter-instances merge into the listener's busses, the 3D Spatialization mode must be set to None, to avoid "double 3D spatialization" by Wwise. Likewise, you should not use attenuation, unless you want additional attenuation to be applied on top of the image-source curves in Wwise Reflect.

Early reflections sending to late reverberation

Also, by virtue of the game object (emitter) sending to the Auxiliary Bus used to process the late reverberation, a connection will also be made between the ER bus and the late reverb bus. This is usually desirable because the generated ER are then utilized to color and "densify" the late reverb. In order to enable this, you need to make sure you enable the Use game-defined auxiliary sends check box on the ER bus. You may then use the Volume slider below to balance the amount of early reflections you want to send to the late reverb against the direct sound.

The following figure is a run-time illustration of the previous discussion. Notice the following:

  • Weapon Fire SW is routed to FirstPerson (early reflections) bus because of the Early Reflection send
  • The FirstPerson bus is in the scope of FirstPersonCharacter game object. Another game object would thus send to a different instance of the FirstPerson bus, as desired.
  • There is a send connection from FirstPerson bus to Mezzanine2 aux bus because Use game-defined auxiliary sends is enabled.
  • FirstPerson's output bus Binaural is in the listener's scope, that is, PlayerCameraManager, because the Listener Relative Routing option is enabled on the FirstPerson bus. All early reflection bus instances should have this option enabled, such that they all return to the single Binaural bus instance of the listener. If you fail to do so, a separate instance of Binaural bus would erroneously be instantiated on the emitter game object.
  • There is no attenuation between FirstPerson and Binaural busses due to distance, as ER attenuation is already designed and applied within Wwise Reflect.

Using Acoustic Textures

For each reflecting triangle, the game passes the ID of a material. These materials are edited in the Wwise Project in the form of Acoustic Textures, in the Virtual Acoustics ShareSets. This is where you may define the absorption characteristics of each material.

Using Rooms and Portals

With Wwise Spatial Audio, late reverberation is designed using reverb Effects and auxiliary sends. Wwise Spatial Audio supports this workflow by exposing a simple, high-level geometry abstraction called Rooms and Portals, which allows it to efficiently model sound propagation of emitters located in other rooms. The main features of room-driven sound propagation are diffraction, and the coupling and spatialization of reverbs. It does so by leveraging the tools at the disposal of the sound designer in Wwise, giving them full control of the resulting transformations to audio. Furthermore, it allows you to restrict game engine-driven, raycast-based obstruction, which is highly game engine-specific and typically costly in terms of performance, to emitters that are in the same room as the listener. Note that you can also defer obstruction entirely to Wwise Spatial Audio by using the Geometry API (see Using the Geometry API for Simulating Diffraction and Transmission).

Rooms are dimensionless and are connected with one another by Portals, which together form a network of rooms and apertures by which sound emitted in other rooms may reach the listener. Spatial Audio uses this network to modify the distance traveled by the dry signal, the apparent incident position, and the diffraction angle. The diffraction angle is mapped to obstruction and/or to a built-in game parameter called Diffraction, which designers may bind to properties (such as volume and low-pass filtering) using RTPC. Spatial Audio also positions adjacent rooms' reverberation at their portals, and permits coupling of these reverbs into the listener's room reverb, using 3D busses. Lastly, rooms have an orientation, which means that inside rooms the diffuse field produced by the associated reverb is rotated prior to reaching the listener, tying it to the game's geometry instead of the listener's head.

API Overview

In order to use room reverb on emitter game objects, you need to enable the Use game-defined auxiliary sends check box in the Wwise Authoring tool. The send to the room is applied by Spatial Audio, in addition to any game-defined sends that are requested by the game.

Room busses need to be created with the correct positioning options, that is, Listener Relative Routing enabled, 3D Spatialization set to Position + Orientation, and Use game-defined auxiliary sends enabled. Alternatively, you can create a bus from the Room Auxiliary Bus preset.

You need to create Rooms and Portals, based on the geometry of your map or level, with AK::SpatialAudio::SetRoom and AK::SpatialAudio::SetPortal. Rooms and Portals have settings that you may change at run-time by calling these functions again with the same ID. Then the game calls AK::SpatialAudio::SetGameObjectInRoom for each emitter and the listener to tell Spatial Audio in what room they are. From the point of view of Spatial Audio, Rooms don't have a defined position, shape, or size. They can thus be of any shape, but it is the responsibility of the game engine to perform containment tests to determine in which Room the objects are.

Warning: Beware of Room IDs. They share the same scope as game objects, so make sure that you never use an ID that is already used as a game object.
Warning: Under the hood, Spatial Audio registers a game object to Wwise for each Room. The user can post events on this game object for ambience/room tone sounds, but should not attempt to alter the position or game-defined sends of the object in calls to AK::SoundEngine.

The most important Room setting is AkRoomParams::ReverbAuxBus, which tells Spatial Audio to which auxiliary bus emitters should send when they are in that Room. Other settings will be discussed in sections below (see Using 3D Reverbs and Transmission).

Portals represent openings between two Rooms. Contrary to Rooms, Portals have position and size, so Spatial Audio can perform containment tests itself. Portal size is given by the Portal setting AkPortalParams::Extent. Width and height (X and Y) are used by Spatial Audio to compute diffraction and spread, while depth (Z) defines a region in which Spatial Audio performs a smooth transition between the two connected Rooms by carefully manipulating the auxiliary send levels, Room object placement, and Spread (used by 3D Spatialization). Refer to sections Using 3D Reverbs and About Game Objects below for more details. Additionally, Portals may be enabled (open) or disabled (closed) using the AkPortalParams::bEnabled Portal setting.
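
The following sketch creates two Rooms connected by one Portal, then assigns an emitter and the listener to them. It assumes that AkPortalParams::FrontRoom and AkPortalParams::BackRoom identify the connected Rooms, as in the 2019.2 headers; all IDs, bus names, positions, and extents are illustrative:

    // Room IDs share the game object ID scope; keep them distinct.
    AkRoomID kitchenID = 200;
    AkRoomID hallwayID = 201;

    AkRoomParams roomParams;
    roomParams.Front = AkVector{ 0.0f, 0.0f, 1.0f }; // the Room's fixed orientation
    roomParams.Up = AkVector{ 0.0f, 1.0f, 0.0f };
    roomParams.ReverbAuxBus = AK::SoundEngine::GetIDFromString("KitchenVerb");
    roomParams.WallOcclusion = 0.9f; // transmission loss of this Room's walls
    AK::SpatialAudio::SetRoom(kitchenID, roomParams);

    roomParams.ReverbAuxBus = AK::SoundEngine::GetIDFromString("HallwayVerb");
    AK::SpatialAudio::SetRoom(hallwayID, roomParams);

    AkPortalID portalID = 300;
    AkPortalParams portalParams;
    portalParams.Transform.SetPosition(5.0f, 0.0f, 10.0f);
    portalParams.Transform.SetOrientation(0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f);
    portalParams.Extent = AkVector{ 1.0f, 2.0f, 0.5f }; // X/Y: diffraction and spread; Z: transition depth
    portalParams.FrontRoom = kitchenID; // rooms on either side of the portal
    portalParams.BackRoom = hallwayID;
    portalParams.bEnabled = true;       // open
    AK::SpatialAudio::SetPortal(portalID, portalParams);

    // Containment tests for Rooms are the game's responsibility.
    AK::SpatialAudio::SetGameObjectInRoom(EMITTER_ID, kitchenID);
    AK::SpatialAudio::SetGameObjectInRoom(LISTENER_ID, hallwayID);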

Take caution when using AK::SoundEngine::SetMultiplePositions with any Spatial Audio emitter, because Spatial Audio takes only the first sound position for its various calculations, including reflections, diffraction, and portal transitions. It is, however, possible to use AK::SoundEngine::SetMultiplePositions on game objects that use room sends, but note that if the game object is transitioning through a portal, the first position will be used for cross-fading between the two rooms. It is also possible for the game to use AK::SoundEngine::SetGameObjectAuxSendValues in conjunction with Spatial Audio rooms. The game's sends are added on top of the sends to the room object and auxiliary busses. See section Implementing Complex Room Reverberation for more details.

Additionally, it is possible to use AK::SoundEngine::SetObjectObstructionAndOcclusion or AK::SoundEngine::SetMultipleObstructionAndOcclusion with Spatial Audio emitters, even though Spatial Audio also uses obstruction and occlusion for modeling diffraction and transmission, respectively. In any case where an API-driven obstruction/occlusion value competes with a Spatial Audio-driven value, the maximum of the two is used in the sound engine. Refer to sections Modeling Sound Propagation from Other Rooms and Modeling Sound Propagation from the Same Room on the Game Side for more details on obstruction and occlusion and how to use them with Spatial Audio Rooms and Portals.

The Integration Demo sample (in SDK/samples/IntegrationDemo) has a demo page which shows how to use the API. Look for Demo Positioning > Spatial Audio: Portals.

Obstruction and Occlusion versus Portals' Transmission and Diffraction

In the context of Wwise Spatial Audio, sound propagation from other rooms is entirely managed by the Rooms and Portals abstraction. Rooms with at least one propagation path to the listener via open Portals will simulate diffraction using either Obstruction or the Diffraction built-in game parameter. Additionally, rooms will utilize Wwise Occlusion to model transmission of sound through walls.

If you wish to implement your own solution for obstruction (for example, one driven by game-side raycasting), in conjunction with obstruction set from Spatial Audio, the game may use the sound engine API AK::SoundEngine::SetObjectObstructionAndOcclusion. For the occlusion of portal openings, the game may use the Spatial Audio API AK::SpatialAudio::SetPortalObstructionAndOcclusion. Under no circumstances should the game call AK::SoundEngine::SetObjectObstructionAndOcclusion with a Room ID as the game object parameter, as the results are undefined. Refer to section Modeling Sound Propagation from the Same Room on the Game Side for more detail on how to use same-room obstruction with Spatial Audio.

Refer to Spatial Audio Concepts for a review of these acoustic concepts.

Summary of Sound Propagation Features

The list below summarizes the features of Spatial Audio Rooms and Portals by grouping them in terms of acoustic phenomena, describing what Spatial Audio does for each, and how sound designers can incorporate them in their project.

  • Diffraction of direct path — Spatial Audio maps the diffraction angle to Obstruction and/or the built-in Diffraction game parameter, and computes the apparent (virtual) emitter position. Sound design in Wwise: volume, filtering, or any property on the Actor-Mixer; 3D panning and distance attenuation.
  • Diffuse field (reverb) — Spatial Audio sends to the Room's Auxiliary Bus and applies constant power transitions at Portals. Sound design in Wwise: reverb, bus volume, and game-defined send offset on the Actor-Mixer.
  • Room coupling (reverb spatialization and diffraction of the adjacent room's diffuse field) — Spatial Audio positions the adjacent Room's reverb at its Portals and chains Room busses. Sound design in Wwise: volume, filtering, or any property on the Bus; 3D panning of busses; reverb, bus volume, and game-defined send offset of the Auxiliary Bus to other busses.
  • Transmission — Spatial Audio sets Occlusion. Sound design in Wwise: volume or filtering on the Actor-Mixer.

Using 3D Reverbs

The Auxiliary Bus design used with Spatial Audio Rooms and Portals is not fundamentally different than in the traditional modeling of environments. It requires that an Auxiliary Bus be assigned for each Room, mounted with the reverb Effect of the designer's choice, and it is the same bus that is used whether the listener is inside or outside the Room. The only difference is that it should be made 3D by enabling Listener Relative Routing (in the Positioning tab) and setting the 3D Spatialization to either Position + Orientation or Position, as shown in the figure below. This way, Spatial Audio may spatialize the reverberation of adjacent Rooms at the location of their Portal by acting on the Rooms' underlying game object position and Spread.

The Room's reference orientation is defined in the room settings (AkRoomParams::Up and AkRoomParams::Front), and never changes. The corresponding game object's orientation is made equal to the Room's orientation. When the listener is in a Room, the bus's Spread is set to 100 (360 degrees) by Spatial Audio. Thanks to 3D positioning, the output of the reverb is rotated and panned into the parent bus based on the relative orientation of the listener and the Room. This happens because the Auxiliary Bus is tied to the Room's game object, while its parent bus is tied to the listener. The screenshot below shows an emitter, the radio, sending to the Auxiliary Bus Mezzanine2. You can see that a separate game object has been created for this room, Ak_RV_Mezzanine, that is neither the Radio nor the listener (PlayerCameraManager_0).

If, for example, there is a spatialized early reflection pattern "baked" into the reverb (such patterns exist explicitly in the Wwise RoomVerb's ER section, and implicitly in multichannel IR recordings used in the Wwise Convolution Reverb), then they will be tied to the Room instead of following the listener as it turns around. This is desirable for proper immersion. On the other hand, it is preferable to favor configurations that "rotate well". Ambisonic configurations are invariant to rotation, so they are favorable. Standard configurations (4.0, 5.1, and so on), less so. When using standard configurations, it is better to opt for those without a center channel, to use identical configurations for aux busses and their parent, and to set a Focus to 100. In these conditions, with a 4.0 reverb oriented with the Room towards the north, a listener looking at the north would hear the reverb exactly as if it were assigned to speakers directly. A listener looking straight east, west, or south would hear the original reverb but with channels swapped. Finally, a listener looking anywhere in between would hear each channel of the original reverb being mixed into a pair of output channels.

When the listener is away from a room's Portal, Spatial Audio reduces the spread according to the Portal's extent, which seamlessly contracts the reverb's output to a point source as it gets farther away. The Spread is set to 50 (180 degrees) when the listener is midway into the Portal. And, as the listener penetrates into the room, the Spread is increased even more, with the "opening" being carefully kept towards the direction of the nearest Portal.

When mixing sounds into portals, Spatial Audio calculates the distance of the entire path length between the sound source and the listener. This distance is applied to the attenuation curve of each sound before mixing the sound into the room bus so that the relative volumes of sounds with different attenuations are preserved. An additional attenuation applied directly to the room bus is not necessary. If a Room Auxiliary Bus has an attenuation, it will be applied on top of each sound's attenuation, post-mix, and will serve to further reduce the volume of the reverb or apply additional filtering to the signal that passes through a portal.

The Room game object position is maintained by Spatial Audio and, when the listener is inside a room, the game object is placed at the same location as the listener. In this case, the attenuation curves, if assigned, are all evaluated at distance 0.

Warning: In the case that a Room Auxiliary Bus has an assigned attenuation, note that in order to let Spatial Audio modify the Spread according to the Portal's geometry, you must not have a Spread curve in your Auxiliary Busses' attenuation settings. Using a Spread curve there would override the value computed by Spatial Audio.

Coupling Rooms by Chaining Room Busses

Room coupling (see Room Coupling) is achieved by 'chaining' room busses. To allow this, make sure that the Use game-defined auxiliary sends check box is enabled on Room Auxiliary Busses. This allows a Room to also send to the reverb of the listener's Room (or the next room in the path). A sound's game-defined auxiliary send volume and attenuation curve control how much of each sound is mixed into the first bus of a room reverb chain, and the game-defined auxiliary send volume of each Room determines how much is sent to the next room in the chain. This affects how much acoustic energy is transferred to adjacent rooms via portals.

About Game Objects

Spatial Audio Rooms and Portals work by manipulating the position of the game objects known to the Wwise sound engine (emitters registered by the game and Room game objects registered by Spatial Audio), and some of their inherent properties, like game-defined sends, obstruction, and occlusion.

Emitters

When Spatial Audio's initialization setting DiffractionFlags_CalcEmitterVirtualPosition is set, the position of emitters located in Rooms adjacent to the listener is modified such that they seem to appear from the diffracted angle, if applicable. In the screenshot of the 3D Game Object Profiler, below, the listener (Listener L) is on the right of the Portal, and the "real" emitter is on the lower left (Emitter E, without orientation vectors). Spatial Audio thus repositions the emitter to the upper left, so that the apparent position seems to come from the corner, all while respecting the traveled distance. The listener is about 45 degrees into the shadow zone of the Portal edge, resulting in a diffraction factor of 27%, as written at the junction between the two line segments.

When there are multiple Portals connecting two Rooms, Spatial Audio may assign multiple positions to an emitter (one per Portal). The MultiPosition_MultiDirection mode is used, so that enabling or disabling a Portal does not affect the perceived volume of the other Portals.

Warning: Take caution when using AK::SoundEngine::SetMultiplePositions with any Spatial Audio emitter, because Spatial Audio takes only the first sound position for its various calculations including reflections, diffraction and portal transitions.

Rooms and Portals

Spatial Audio registers one game object to Wwise per Room, under the hood.

Warning: This game object's position and aux send values should not be manipulated directly.

When the listener is in a Room, the Room's game object is moved such that it follows the listener. Thus, the distance between the Room and the Listener object is approximately 0. However, its orientation is maintained to that which is specified in the Room settings (AkRoomParams). See Using 3D Reverbs for a discussion on the orientation of 3D busses.

When the listener is outside of a Room, that Room's game object adopts the position(s) of its Portal(s). More precisely, it is placed in the back of the Portals, at the location of the projection of the listener to the Portals' tangent, clamped to the Portals' extent. This can be verified by looking at the Room's game object, as seen in the screenshot of the 3D Game Object Profiler, above, in section Emitters.

For multiple Portals, a Room's game object is assigned multiple positions, in MultiPosition_MultiDirection mode, for the same reason as with emitters.

When transitioning inside a Portal, the "in-Room" and "Portal" behaviors are interpolated smoothly.

Modeling Sound Propagation from Other Rooms

With Spatial Audio Rooms and Portals, sound propagation in Rooms other than that of the listener is managed by the Rooms and Portals abstraction. An emitter in another Room reaches the listener via Portals, with their associated diffraction, and via transmission through the rooms' "walls". Ensure that the Enable Diffraction box is checked in the Positioning tab of each sound that needs to be propagated.

Diffraction

For each emitter in adjacent Rooms, Spatial Audio computes a diffraction angle from the Shadow Boundary, at the closest edge of the connecting Portal (see Diffraction, above). This diffraction angle, which may go up to 180 degrees, is then mapped to a coefficient between 0 and 100% and given to the Wwise user for driving corresponding audio transformations, by one of two means: Spatial Audio can set the Obstruction value on the emitter game object, or set the value of a built-in game parameter, Diffraction. Whether Spatial Audio does one or the other, or both, depends on the choice of AkDiffractionFlags with which you initialize it.

To use the Diffraction built-in parameter, you need to create a game parameter and set its Bind to Built-in Parameter drop-down menu to Diffraction. Values pushed to this game parameter are scoped by game object, so they are unique for each emitter. You may then use it with an RTPC to control any property of your Actor-Mixer. The most sensible choices are Output Bus Volume and Output Bus LPF, to emulate the frequency-dependent behavior of diffraction. Output Bus Volume and LPF are privileged over the base Volume and LPF because the effect of diffraction should apply to the direct signal path only, and not to the auxiliary send to the Room's reverb.

Rooms' diffuse energy is also included in the sound propagation model of Spatial Audio as the output of Rooms' Auxiliary Busses. Spatial Audio computes diffraction for this too ("wet diffraction"). Spatial Audio assumes that the diffuse energy leaks out of a Room perpendicular to its Portals. Thus, it computes a diffraction angle relative to the Portal's normal vector. This diffraction value can be used in Wwise exactly like with emitters' dry path. When using the built-in game parameter, it should be used with an RTPC on the room's auxiliary bus, typically on the bus's Output Bus Volume and Output Bus LPF. The bus's Output Bus Volume property should be favored over the Bus Volume property for the same reason as with Actor-Mixers: it should not affect the auxiliary send path that is used for coupling this reverb to the reverb of the listener's room.

Alternatively, users can use built-in, project-wide obstruction for modifying audio from Spatial Audio's diffraction. When doing so, Spatial Audio uses the computed Diffraction value to drive obstruction. Compared to the Diffraction built-in game parameter, project-wide obstruction is mapped to curves that are global to a Wwise Project. They can be authored in the Project Settings. Obstruction Volume, LPF, and HPF are effectively applied on Output Bus Volume, LPF, and HPF, as discussed above. Because the obstruction curves are global, project-wide obstruction is less flexible than the Diffraction built-in game parameter. On the other hand, they require less manipulation and editing (using RTPCs). Also, different Obstruction values apply to each position of a game object, whereas built-in game parameters may only apply a single value for all positions of a game object. (When multiple values are set, the smallest is taken.) Recall that multiple game object positions are used when a Room has more than one Portal.

Transmission

When an emitter is in a different room, Spatial Audio also uses transmission to model sound going through walls, and does it using the emitter game object's Occlusion. The Occlusion value is taken from the Room settings' AkRoomParams::WallOcclusion; the maximum occlusion value is taken between the listener's Room and the emitter's Room. Occlusion maps to volume, LPF, and HPF via global, project-wide curves, defined in the Obstruction/Occlusion tab of the Project Settings. As opposed to Obstruction, Occlusion also affects the signal sent to auxiliary busses, so the contribution of an occluded emitter to its Room's reverb, and to any coupled reverb, will be scaled and filtered according to the Occlusion curves. Occlusion thus properly models transmission loss.

Coupling

The diffuse energy of adjacent rooms penetrating into the listener's room through Portals can be seen as sources located at these Portals and, as such, they should also contribute to exciting the listener's room. In other words, they should send to the Auxiliary Bus of the listener's Room. As written earlier, you can do this by checking the Use game-defined auxiliary sends check box on the adjacent Room's Auxiliary Bus. You may tweak the amount that is sent to other rooms' reverb with the Game-Defined Send Offset.

Modeling Sound Propagation from the Same Room on the Game Side

Obstruction of sound emitted from the same room as that of the listener can be handled by using Geometric Diffraction (see Using the Geometry API for Simulating Diffraction and Transmission and Combining the Geometric APIs with Rooms and Portals), but it is not covered by Spatial Audio Rooms and Portals alone. If one does not wish to send geometry to Spatial Audio for the purpose of obstruction calculation, obstruction in the same room must be handled on the game side. The representation of geometry, the methods, and the desired level of detail for computing in-room obstruction are highly game-engine specific. Games typically employ ray-casting, with various degrees of sophistication, to carry out this task. This section provides some tips on how to implement obstruction on the game side in conjunction with Rooms and Portals.

With Spatial Audio Rooms and Portals though, you don't need to do this for all emitters, but only those that are in the same Room as the listener. This is beneficial because ray-casting is usually much more expensive than the algorithm used by Spatial Audio for computing propagation paths. Since in-room obstruction between an emitter and the listener happens in the same room, by definition, we assume that the obstacle will not cover the listener or emitter completely, and that the sound will reach the listener through its reflections in the room. This can be properly modeled by affecting the dry/direct signal path only, and not the auxiliary send, which means that Obstruction is the proper mechanism. For this purpose, the game should call AK::SoundEngine::SetObjectObstructionAndOcclusion.

Furthermore, Portals to adjacent rooms should be considered like sound emitters in the listener's Room. Therefore, games should also run their obstruction algorithms between the listener and the Portals of the Room it is in. They then need to call AK::SpatialAudio::SetPortalObstructionAndOcclusion for each of these portals in order to declare in-room obstruction between the Portal and the listener, as sketched below.
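
A sketch of both calls, reusing the illustrative IDs from the earlier sketches and hypothetical ray-cast results:

    // In-room obstruction of an emitter, computed by the game (for example,
    // from ray casts): affects only the direct path, not the room sends.
    AkReal32 fEmitterObstruction = 0.4f; // illustrative game-computed value, 0..1
    AK::SoundEngine::SetObjectObstructionAndOcclusion(EMITTER_ID, LISTENER_ID,
                                                      fEmitterObstruction, 0.0f);

    // Portals in the listener's room are treated like emitters: declare
    // their in-room obstruction through the Spatial Audio API.
    AkReal32 fPortalObstruction = 0.25f; // illustrative game-computed value, 0..1
    AK::SpatialAudio::SetPortalObstructionAndOcclusion(portalID,
                                                       fPortalObstruction, 0.0f);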

Multiple Room Traversal

Sound propagation also works across multiple rooms. The Room tree is searched within Spatial Audio when looking for propagation paths. Circular connections are avoided by stopping the search when a Room has already been visited. The search depth may be limited by Spatial Audio's initialization setting AkSpatialAudioInitSettings::uMaxSoundPropagationDepth (default is 8).

Implementing Complex Room Reverberation

The sound engine API AK::SoundEngine::SetGameObjectAuxSendValues may be used to add additional Auxiliary Sends to the ones that are set by Spatial Audio. This may be useful when designing complex reverberation within the same room, for example, if there are objects or terrain that call for different environmental effects. A Room's AkRoomParams::ReverbAuxBus can also be left to "none" (AK_INVALID_AUX_ID), so that its send busses are only managed by the game via AK::SoundEngine::SetGameObjectAuxSendValues.
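
For example, a hypothetical additional send (the bus name and control value are illustrative):

    AkAuxSendValue extraSend;
    extraSend.listenerID = AK_INVALID_GAME_OBJECT; // apply for all listeners
    extraSend.auxBusID = AK::SoundEngine::GetIDFromString("TunnelVerb");
    extraSend.fControlValue = 0.5f; // linear send level
    // Added on top of the sends that Spatial Audio manages for the room.
    AK::SoundEngine::SetGameObjectAuxSendValues(EMITTER_ID, &extraSend, 1);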

Using the Geometry API for Simulating Diffraction and Transmission

The geometry passed to Wwise Spatial Audio may be used to simulate diffraction and transmission of sound. As such, it can completely replace your game engine's raycasting methods for computing obstruction.

When an emitter is hidden from a listener by an object, Spatial Audio computes paths along its edges and, if some are found, computes the diffraction coefficient resulting from the sound bending around these edges. The apparent angle of incidence of the emitter is modified accordingly, and the diffraction value is sent to Wwise where you may control how it ultimately affects the sound. Typically, diffraction results in low-pass filtering.

Additionally, Spatial Audio computes sound paths going through geometry. Sound transmitting through an obstacle has a transmission loss coefficient applied to it, resulting from the surface properties assigned to the geometry via the API. Typically, transmission loss is modeled with a low-pass filter and a volume attenuation.

The image below is a screenshot of the 3D Game Object Viewer in Wwise and shows a sound diffracting around the edges of a thin wall.

Warning: Geometric diffraction and transmission can be used to entirely replace your game engine's raycasting method for computing obstruction, however the performance cost grows with the complexity of the geometry. Geometry passed to Spatial Audio should be kept as simple as possible. Also, it is good to use the efficient Rooms and Portals abstraction (see Using Rooms and Portals) in conjunction with Geometric Diffraction in order to reduce the computational complexity of the latter.

Geometric diffraction can be used to affect the direct sound propagation path between the emitter and listener, but also the path of its early reflections, when used in conjunction with Wwise Reflect.

Setting Up Geometry for Diffraction

Each geometry set that is passed to Spatial Audio needs to say explicitly whether it should be considered for calculating diffraction paths. This is done via the AkGeometryParams::EnableDiffraction flag. This flag enables generation of edge data that is necessary for diffraction calculations, and is used for both geometric diffraction on the direct path and for diffraction of reflections.

Also, consider whether or not you want the boundary edges of a mesh to be able to diffract sound. For a given mesh, a boundary edge is defined as an edge that is connected to only one triangle and, therefore, exists on the boundary of the manifold. The complexity of the diffraction calculation increases with the number of edges, so this option (AkGeometryParams::EnableDiffractionOnBoundaryEdges) should be disabled if your mesh contains boundary edges that should not diffract sound.
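
Continuing the sketch from Describing Geometry (params and geomSetID as defined there):

    params.EnableDiffraction = true;                 // generate diffraction edge data
    params.EnableDiffractionOnBoundaryEdges = false; // skip edges connected to a single triangle
    AK::SpatialAudio::SetGeometry(geomSetID, params);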

Finally, note that acoustic textures assigned to acoustic surfaces do not have any effect on diffraction because edge materials don't absorb energy. Edges simply bend sound.

Setting Up Geometry for Transmission

First, ensure that the flag AkSpatialAudioInitSettings::bEnableTransmission is set to true to simulate transmission.

On geometry, it may be desirable to adjust the transmission loss coefficient for various geometry types. For example, a concrete structure is likely to block almost all sound transmission, whereas geometry composed of plywood may block significantly less sound.

Each AkTriangle in the AkGeometryParams::Triangles array contains AkTriangle::surface, an index into the AkGeometryParams::Surfaces array. The AkAcousticSurface::occlusion field describes how much transmission loss to apply to a sound transmitting through a triangle that references it. It is expressed as a value between 0 and 1. The transmission loss is converted to a percentage and then used to evaluate the occlusion curve. The final volume attenuation and filter value applied to a sound with a given transmission loss depend on the occlusion curves defined in the Wwise Project Settings.
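
For example, a sketch of two surfaces with different transmission loss (the names and values are illustrative):

    // occlusion is the transmission loss, expressed between 0 and 1.
    AkAcousticSurface surfaces[2];
    surfaces[0].strName = "Concrete";
    surfaces[0].occlusion = 0.95f; // blocks nearly all transmitted sound
    surfaces[1].strName = "Plywood";
    surfaces[1].occlusion = 0.3f;  // lets much more sound through
    // Each AkTriangle::surface then indexes into this array.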

Geometric Diffraction of the Direct Path

You are invited to review the Geometric Diffraction demo in the Integration Demo sample (in SDK/samples/IntegrationDemo) for an example of using geometry for the purpose of geometric diffraction of the direct path. Look for Demo Positioning > Spatial Audio: Geometry.

Setting up a Sound for Diffraction and Transmission

In the Positioning tab in the Wwise Authoring tool, ensure that Enable Diffraction is checked. This box enables Spatial Audio features related to diffraction and transmission, including:

  • Computation of the diffracted path of the sound through geometry and/or portals, if applicable. The path calculation is performed by Spatial Audio on each game object that is currently playing a sound with diffraction and transmission enabled. If multiple diffraction-enabled sounds are playing on the same game object, the path calculation is only performed once.
  • Computation of the transmission path of the sound, through geometry and/or between rooms. The final transmission loss coefficient is always taken as the largest transmission loss value encountered along the transmission path, whether it comes from a room's AkRoomParams::WallOcclusion or a triangle's associated AkAcousticSurface::occlusion.
  • Generation of virtual positions for diffraction paths, which are sent to the Sound Engine for rendering the sound, assuming Spatial Audio's initialization setting AkSpatialAudioInitSettings::bCalcEmitterVirtualPosition is set.
  • Application of the obstruction curve according to the diffraction coefficient, assuming Spatial Audio's initialization setting AkSpatialAudioInitSettings::bUseObstruction is set. In the case that the game has also set an obstruction value via AK::SoundEngine::SetObjectObstructionAndOcclusion, the maximum of the two values is used.
  • Application of the occlusion curve according to the transmission loss coefficient, assuming Spatial Audio's initialization setting AkSpatialAudioInitSettings::bUseOcclusion is set. In the case that the game has also set an occlusion value via AK::SoundEngine::SetObjectObstructionAndOcclusion, the maximum of the two values is used.

Direct Path Diffraction in Wwise

Diffraction may be observed in the 3D Game Object Viewer, provided the proper profiling settings and view options are set (see images below). The calculated diffraction factor on a path from emitter to listener is displayed for each diffracting edge. This diffraction factor is conveyed to Wwise via either the Built-In Game Parameter called Diffraction or via the emitter's Obstruction value, or both, according to the AkSpatialAudioInitSettings::uDiffractionFlags that were passed when initializing Spatial Audio. Built-in Game Parameter values may be profiled directly in the RTPC curves where they are used, and Obstruction may be profiled in the Profiler's Obs/Occ tab.

Like with Portals, the Diffraction value is 0 when the emitter is in direct sight of the listener, and it begins to increase as the emitter penetrates the shadow zone (see Diffraction). Also, please refer to Rooms and Portals' Diffraction for more details on shadow zone diffraction and for a discussion about using the Built-in Diffraction Game Parameter versus Obstruction.

Direct Path Diffraction Interaction with Spatial Audio Rooms and Portals

With Spatial Audio Rooms and Portals (Using Rooms and Portals), Portals also model diffraction of direct sounds in adjacent rooms. The two systems complement each other in that no geometry-driven diffraction paths are searched for emitters that are not in the same room as the listener. Because Rooms and Portals are much more efficient to calculate than geometry, it is beneficial to use both systems together to reduce computational complexity.

Geometric Diffraction of Early Reflections

As was noted above, early reflections may diffract off of edges, and Spatial Audio supports modeling this phenomenon when emitters are routed to Wwise Reflect.

Prior to explaining how to do it, we need to define view zone diffraction.

Consider the figure below. The emitter is in direct sight of the listener, but the listener is not hit by specular reflections. It is thus in the view zone. As was said in Diffraction, diffraction occurs in the view zone as well. However, in Wwise Spatial Audio, neither the Rooms and Portals nor the Geometric Diffraction of the direct path model considers diffraction in the view zone, as it is negligible compared to the actual direct path. For reflections, however, view zone diffraction has a dramatic impact. Without diffraction, early reflections are heard in the reflection zone only, where they are purely specular. As soon as the listener enters the view zone, reflections become silent. With diffraction enabled, the edge contributes to diffracting the reflected wave. Thus, the listener perceives the reflection, albeit with additional filtering and attenuation as they move around and away from the reflection zone.

In the reflection zone, no diffracted path and, therefore, no diffraction value is calculated, because the specular reflection is assumed to take over. The calculated view zone diffraction of a given edge is 0% at the boundary between the reflection zone and the view zone, and 100% at the boundary between the view zone and the shadow zone.

With higher orders of early reflections, both view and shadow zone diffraction come into play.

Enable Reflections on Applicable Sounds

In the Wwise Authoring Tool, set the desired early reflections send to an aux bus containing Wwise Reflect for all sounds requiring reflections. Refer to Wwise project setup for more details. There is no specific setting needed to enable diffraction for the purpose of reflections' diffraction, apart from enabling diffraction on the geometry.

Settings in Wwise Reflect

Reflections that undergo diffraction effects will appear as image sources in Wwise Reflect. You may design the effect of diffraction on reflections with the three curves that depend on diffraction: Diffraction Attenuation, Diffraction LPF, and Diffraction HPF. See Wwise Reflect's documentation for more details.

Combining the Geometric APIs with Rooms and Portals

Rooms and Portals in Wwise Spatial Audio work in conjunction with the geometric APIs for reflection and diffraction. The Rooms and Portals network can be thought of as a high-level abstraction (or a low level-of-detail version) of the surrounding geometry. With care, the combination of Rooms and Portals with level geometry can result in an acoustic simulation that is both detailed and efficient.

Geometric Diffraction Through Portals

In the case where an emitter (assuming it is playing a sound that has been set up correctly for geometric diffraction, see Setting up a Sound for Diffraction and Transmission) is not in the same room as the listener, the geometric path is calculated as follows:

  • The sound propagation paths from the emitter to the listener are calculated using the rooms and portals network.
  • For each path, the segment of the path between the emitter and the portal closest to the emitter is calculated using the geometric diffraction algorithm, as if the portal were the listener. Unless the emitter is directly behind a single portal from the perspective of the listener, only one geometric path is calculated (the shortest one found). Calculating additional paths between the emitter and the portal would not result in unique virtual positions, and is therefore unnecessary.
  • Path segments that are between two portals are also calculated using geometric diffraction if there is no direct line of sight between the portals. These calculations are done each time geometry and/or portals are added to or removed from the scene, and reused when required. In most cases, only the shortest path between the two portals is used. The exception being when the listener is directly behind one of the portals, in which case multiple paths are used to avoid discontinuities should the listener transition through the portal.
  • For each path, the segment of the path between the listener and the portal closest to the listener is calculated using the geometric diffraction algorithm, as if the portal were the emitter.
  • The resultant paths are taken as the combination of the above paths, branching and appending paths together where necessary.

Reflections Through Portals

Reflections are able to pass through portals, even when there are up to two planes intersecting the opening of the portal. Since the portal itself describes an acoustic opening, it is not necessary to also "cut holes" in the triangle geometry to allow a sound to pass through, which would greatly increase the number of triangles. One example is a room where the geometry is described by a box, with two triangles for each of the six sides. If users would like sound to be able to propagate outside the box, they simply add a portal intersecting one of the walls along the portal's Z axis. As usual, the game is responsible for distinguishing which game objects are inside the room and which are outside, using AK::SpatialAudio::SetGameObjectInRoom (see API Overview).

In the case where an emitter (which is playing a sound that has been set up correctly for reflections, see Wwise project setup) is not in the same room as the listener, and as long as the sound is reachable on the sound propagation network, reflection simulation is performed. The reflections are calculated as follows:

  • A reflection calculation is performed between the emitter and each portal connected to the emitter's room, as if the portal were the listener.
  • The diffraction path between the portal and the listener is calculated as described in Geometric Diffraction Through Portals and appended to the reflection path between the portal and the emitter.
  • The reflection calculation is not performed if the diffraction path exceeds 100% diffraction, and it is abandoned if the path exceeds 100% at any point during the calculation.
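
As a reminder of the game-side responsibility mentioned above, here is a minimal sketch of keeping Spatial Audio informed of room containment; the containment test itself is up to the game engine, and the function name is hypothetical.

    #include <AK/SpatialAudio/Common/AkSpatialAudio.h>

    // Call whenever the game's own containment test detects that a game
    // object (emitter or listener) has moved to a different room.
    void OnRoomChanged(AkGameObjectID gameObjectID, AkRoomID newRoomID)
    {
        AK::SpatialAudio::SetGameObjectInRoom(gameObjectID, newRoomID);
    }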

Tagging Geometry for Specific Rooms

As an optimization to limit the search space for ray-triangle intersection tests and for surfaces that may generate reflections, it is possible to manually assign geometry to specific rooms. To do so, set AkGeometryParams::RoomID to the ID of a particular room. This indicates to Spatial Audio that the geometry in that room is only visible from other rooms through portals, not directly. Since a single geometry set can only be associated with one room ID, a room cannot have geometry that should be visible in multiple rooms unless AkGeometryParams::RoomID is left invalid. Also note that once any geometry set is associated with a particular room ID, that room can no longer "see" geometry that has not been explicitly associated with it: after assigning a geometry set to a room, Spatial Audio only looks for geometry specifically associated with that room ID when simulating reflection and diffraction in that room.
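
A minimal sketch of tagging a geometry set with a room, assuming the triangle, vertex, and surface arrays have already been prepared; the field names follow the 2019.x AkGeometryParams and should be verified against your headers.

    #include <AK/SpatialAudio/Common/AkSpatialAudio.h>

    // Associate a geometry set with a single room so that simulation in that
    // room only tests rays against explicitly associated geometry.
    void SendRoomGeometry(AkGeometrySetID geometrySetID, AkRoomID roomID,
                          AkTriangle* triangles, AkUInt32 numTriangles,
                          AkVertex* vertices, AkUInt32 numVertices,
                          AkAcousticSurface* surfaces, AkUInt32 numSurfaces)
    {
        AkGeometryParams params;
        params.Triangles = triangles;
        params.NumTriangles = numTriangles;
        params.Vertices = vertices;
        params.NumVertices = numVertices;
        params.Surfaces = surfaces;
        params.NumSurfaces = numSurfaces;
        params.RoomID = roomID; // leave invalid (default) to keep the set visible from all rooms
        AK::SpatialAudio::SetGeometry(geometrySetID, params);
    }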

Stochastic Ray Casting Geometry Guide

Introduction

Stochastic Ray Casting is a technique for efficiently evaluating nth-order reflection and diffraction. The basic idea is to randomly cast rays from the listener and follow their paths through a series of reflections and diffractions. The approach is inspired by rendering techniques used in computer graphics. The current implementation supports up to 4th-order reflection and diffraction on both the listener and emitter sides.

Concepts

  • Primary rays: the rays directly cast from the listener
  • Reflection: bouncing of sound on surfaces
  • Diffraction: bending of sound around objects
  • Paths: series of reflections/diffractions from the listener to an emitter
  • Emitter receptor: bounding box or bounding sphere centered around an emitter

Settings

  • Number of Primary Rays (uNumberOfPrimaryRays): The number of rays cast from the listener. Increasing the number of primary rays gives better results but costs more CPU time. The default value is usually good for most applications.
  • Maximum Reflection Order (uMaxReflectionOrder): The maximum number of times a ray will successively bounce off surfaces. Increasing the maximum order of reflection leads to a more detailed acoustic simulation but can have a high impact on CPU performance.
  • Direct Diffraction Path (bEnableDirectPathDiffraction): A direct diffraction path between the listener and an emitter is a path composed exclusively of diffraction segments. Enabling direct diffraction path computation significantly increases the CPU usage.
  • Diffraction on Reflections (bEnableDiffractionOnReflection): Enables diffraction at the beginning and the end of a reflection path (a path composed only of reflection segments). Enabling diffraction on reflections prevents the simulation from unexpectedly dropping reflections when the emitter or listener moves behind an obstacle. As with a direct diffraction path, it significantly increases the CPU usage.
  • Maximum Path Length (fMaxPathLength): Maximum length of a path segment. Higher values allow longer paths to be computed but increase the CPU usage. (A configuration sketch follows this list.)
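
A configuration sketch showing how these settings might be set at initialization. The values are illustrative, not recommendations; see AkSpatialAudioInitSettings for the defaults.

    #include <AK/SpatialAudio/Common/AkSpatialAudio.h>

    bool InitSpatialAudio()
    {
        AkSpatialAudioInitSettings settings; // the constructor fills in default values
        settings.uNumberOfPrimaryRays = 100;           // more rays: better coverage, more CPU
        settings.uMaxReflectionOrder = 2;              // up to 4 is supported
        settings.bEnableDirectPathDiffraction = true;  // paths made only of diffraction segments
        settings.bEnableDiffractionOnReflection = true;
        settings.fMaxPathLength = 100.f;               // in game units
        return AK::SpatialAudio::Init(settings) == AK_Success;
    }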

Limitations

There are a few limitations to keep in mind when defining geometries for the stochastic ray casting engine. These limitations concern both the performance and the quality of the results.

Geometry visible angle

When a triangle is small relative to the ray sampling density, the ray casting engine is less likely to find it.

The geometry's visible angle alpha is the angle at which the geometry is seen from the point of view of the listener. Depending on the number of primary rays, the average angle (gamma) between two rays varies. The relation between alpha and gamma influences the probability of finding an intersection (a reflection or a diffraction) with the object. If gamma is smaller than alpha, the probability of finding an intersection is high. If gamma is bigger than alpha, the probability of finding an intersection is low.

In the first example, gamma is smaller than alpha, so the probability of finding an intersection with the object is high. In the second example, alpha is smaller than gamma, so the probability of finding an intersection with the object is low.
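
As a rough back-of-the-envelope estimate (an assumption for intuition, not a figure from the SDK): if the N primary rays were distributed uniformly over the sphere, each ray would cover a solid angle of 4π/N steradians, which gives an average angular spacing on the order of

    \gamma \approx \sqrt{\frac{4\pi}{N}}

For example, N = 1000 rays gives gamma ≈ 0.11 rad (about 6.4 degrees); objects whose visible angle alpha is much smaller than that are likely to be missed unless uNumberOfPrimaryRays is increased or the geometry is simplified.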

Number of triangles

The number of triangles contained in the geometry is directly related to the CPU usage of the engine: the more triangles, the higher the CPU usage, because more intersection tests must be performed against the object. Usually, sound propagation does not require highly detailed geometry, so reducing the number of triangles can help increase performance without sacrificing quality.

In the illustrated example, the plane is composed of 4 triangles: the rays have to be tested against each of them.

Geometry shape

Some geometry shapes are more difficult to process than others. Geometries such as planes and boxes are simple to process and give the best results in terms of sound propagation. Spheres and cylinders are more prone to errors because of their curvature: some diffraction edges may not be found, causing some diffraction paths to be missed. The algorithm implements several heuristics to overcome this issue in most cases; increasing the number of primary rays or simplifying the geometry can also solve it.

In the situation illustrated, we expect to find the diffraction path from L to E that passes through the edges E2, E3, and E4. Unfortunately, the surface between E1 and E2 is small, so it is difficult to find the intersection that would expose the diffraction edge E2; an intersection with E1 is more likely. Since L is not in the shadow zone of E1, no diffraction occurs there, and the algorithm fails to find the path through E2.

Using "Raw" Image Sources

While Wwise Reflect may be used and controlled by the game directly using AK::SoundEngine::SendPluginCustomGameData, Spatial Audio makes its use easier by providing convenient per-emitter bookkeeping as well as packaging of image sources. It also lets you mix and match "raw" image sources with surface reflectors (potentially on the same target bus/plug-in).

Game side setup

Call AK::SpatialAudio::SetImageSource for each image source, targeting the desired bus ID and, optionally, a game object ID (note that the game object may also be a listener or the main listener). Refer to AkReflectImageSource for more details on how to describe an image source.

Image sources may be provided to Reflect by game engines that already implement this functionality, via ray-casting or their own image-source algorithm, for example.
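
A minimal sketch of sending one raw image source computed by the game (for example, the emitter position mirrored across a wall). The exact parameter list of AK::SpatialAudio::SetImageSource has varied between SDK versions, so treat the call below as indicative and verify it against AkSpatialAudio.h.

    #include <AK/SpatialAudio/Common/AkSpatialAudio.h>

    void SendMirrorImageSource(AkImageSourceID srcID, AkGameObjectID emitterID, AkUniqueID reflectAuxBusID)
    {
        AkImageSourceSettings is;
        is.params.sourcePosition = { 10.f, 2.f, -5.f }; // emitter mirrored across a wall, computed by the game
        is.params.fLevel = 0.7f;                        // linear gain applied to this image source
        is.params.fDistanceScalingFactor = 1.f;         // scales the perceived emitter-image distance
        // NOTE: some SDK versions take additional arguments (e.g., a name or room ID).
        AK::SpatialAudio::SetImageSource(srcID, is, reflectAuxBusID, emitterID);
    }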

Wwise project setup

The Wwise project setup is the same as described above for Using the Geometry API for Simulating Early Reflections. You can also refer to the Wwise Reflect documentation in the Wwise Help for an example design using Reflect on FPS sounds.

Note: Early reflection send level and bus in the authoring tool do not apply to image sources set using AK::SpatialAudio::SetImageSource(). When using this function, the Reflect bus and send level may only be set programmatically.