Wwise SDK 2022.1.8
Events are created by the Wwise user, specifying actions to be performed on Wwise objects. For example, an event might contain a Play action on sound Bird1, and a Stop action on sound Bird2. A second event could contain a Set Volume action on sound CarEngine, setting a relative volume offset of -2, and a Set Switch action that changes switch group Ground_Material to Tiles.
Events are then packaged into SoundBanks which can be loaded in your game, after which those events can be triggered by your game's code. For example, when the player enters the kitchen, you would trigger the event that sets the Ground_Material switch to Tiles.
Once events are integrated in the game, the Wwise user can continue working on them, modifying the actions they contain or the objects those actions refer to. Since your game is still triggering the same event, the changes made by the Wwise user will take effect in the game without requiring extra work from the programmer, and without recompiling the code.
The AK::SoundEngine::PostEvent() function queues events to be processed, and can identify events either by their ID or by their name. It should be called by your game's code whenever an event needs to be triggered.
However, no event is processed until the AK::SoundEngine::RenderAudio() function has been called. Calling AK::SoundEngine::RenderAudio() once per game frame is a good practice.
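As a sketch of this per-frame pattern (the event name "Play_Bird1" and the game object ID are hypothetical, and the code assumes the Wwise SDK headers, so it is not compilable standalone):

```cpp
// Illustrative sketch; assumes the Wwise SDK headers and an initialized
// sound engine. "Play_Bird1" and myGameObjectID are hypothetical.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void GameFrame(AkGameObjectID myGameObjectID)
{
    // Queue the event; it is not processed yet.
    AK::SoundEngine::PostEvent("Play_Bird1", myGameObjectID);

    // ... rest of the game frame's update logic ...

    // Process everything posted since the last call, once per frame.
    AK::SoundEngine::RenderAudio();
}
```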
When AK::SoundEngine::RenderAudio() is called:
Events will be processed in the exact order in which they were posted by the game. Actions within an event will be processed in the order in which they appear in Wwise.
As much as possible, all events posted in the same frame will be processed at the same time, with the exception of actions that contain delays, which will of course be executed later.
If, for any reason, AK::SoundEngine::RenderAudio() is not called for a long period (such as several frames), streaming sounds will continue playing normally, but no new events will be launched and no new positioning will be applied until the next call to AK::SoundEngine::RenderAudio().
To work with IDs, the banks must be generated with the "Generate header file" option in the Generate SoundBanks dialog box in Wwise. The definition file, named Wwise_IDs.h, contains all the required IDs. It is updated at each bank generation.
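IDs can also be resolved from names at run time with AK::SoundEngine::GetIDFromString(), which hashes the name; to the best of our knowledge Wwise derives IDs as a 32-bit FNV-1 hash of the lowercase name. A minimal self-contained sketch of that hashing scheme follows; WwiseStyleID is our own illustrative helper, not an SDK function:

```cpp
#include <cstdint>
#include <cctype>
#include <string>

// Sketch of the ID-from-name scheme: 32-bit FNV-1 over the lowercased name.
// This mirrors what AK::SoundEngine::GetIDFromString() is understood to do;
// WwiseStyleID is a hypothetical helper, not part of the SDK.
uint32_t WwiseStyleID(const std::string& name)
{
    uint32_t hash = 2166136261u;   // FNV-1 32-bit offset basis
    for (unsigned char c : name)
    {
        hash *= 16777619u;         // FNV-1 32-bit prime
        hash ^= static_cast<uint32_t>(std::tolower(c));
    }
    return hash;
}
```

Because the name is lowercased before hashing, "Play_Bird1" and "PLAY_BIRD1" resolve to the same ID, which matches Wwise's case-insensitive naming.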
The AK::SoundEngine::PostEvent() function accepts a pointer to an AkEventCallbackFunc function that will be called when a marker is reached, or when the event is terminated. This may be useful for synchronizing events and sound playback.
An event is considered terminated when all of its actions have been executed and all the sounds triggered by this event have finished playing. Note that some events might never end by themselves. For example, if an event contains a sound that loops infinitely, the callback will only be triggered once this sound is stopped by another event.
Markers have to be created with an external wave file editor such as SoundForge® or Adobe® Audition®. The sound engine will recognize these cue points and will notify your callback function, if you specified one when calling AK::SoundEngine::PostEvent().
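A sketch of registering for both notifications (assumes the Wwise SDK headers; the event name "Play_Dialog" and the game object ID are hypothetical):

```cpp
// Illustrative sketch; assumes the Wwise SDK headers.
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkCallback.h>

// Called on the audio thread when a marker is reached or the event ends.
static void MyEventCallback(AkCallbackType in_eType, AkCallbackInfo* in_pInfo)
{
    if (in_eType == AK_Marker)
    {
        AkMarkerCallbackInfo* pMarker = static_cast<AkMarkerCallbackInfo*>(in_pInfo);
        // React to the cue point; pMarker->strLabel holds the marker's label.
    }
    else if (in_eType == AK_EndOfEvent)
    {
        // All actions executed and all triggered sounds finished playing.
    }
}

void PostWithCallback(AkGameObjectID gameObjectID)
{
    // Request marker and end-of-event notifications when posting the event.
    AK::SoundEngine::PostEvent("Play_Dialog", gameObjectID,
                               AK_Marker | AK_EndOfEvent,
                               MyEventCallback, nullptr);
}
```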
We can estimate the latency between the time an event is posted and the time audio data starts being played back by the platform.
From the game thread, call the SDK function PostEvent(). Posting an event posts a request to play something; this request will not be processed until the game calls RenderAudio(). Despite its name, RenderAudio() does not actually render the audio: it simply sets a notification to process the requests that were posted since the last call to RenderAudio(). Normally, RenderAudio() is called by the game once per game frame, but this is not a requirement; it can be called at any time to force execution of recently posted events as soon as possible.
Once RenderAudio() is called, the audio (EventManager) thread is granted permission to consume the events/commands that were posted previously. However, note that this thread is synchronized with the consumption rate of the platform's audio. An "audio refill" pass is only executed when the audio output module has consumed a buffer and, therefore, made a section of its ring buffer available for writing.
Finally, there is the output module's ring buffer size to consider. When initializing the sound engine on Windows, you may specify the ring buffer's size in the platform-specific parameters using AkPlatformInitSettings::uNumRefillsInVoice. This gives the number of "refill buffers" in the voice buffer, where two is double-buffered and the default is four. Picking this number is a balance between reducing the latency (smaller buffer) and making the system more resistant to memory starvation (larger buffer).
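For instance, a sketch of lowering both the refill count and the frame size at initialization time (other settings elided; exact defaults vary by platform and SDK version, so treat the values as illustrative):

```cpp
// Sketch: trading starvation resistance for latency at init time (Windows).
AkInitSettings initSettings;
AkPlatformInitSettings platformInitSettings;
AK::SoundEngine::GetDefaultInitSettings(initSettings);
AK::SoundEngine::GetDefaultPlatformInitSettings(platformInitSettings);

platformInitSettings.uNumRefillsInVoice = 2; // double-buffered: lower latency
initSettings.uNumSamplesPerFrame = 512;      // smaller audio frame: lower latency

AK::SoundEngine::Init(&initSettings, &platformInitSettings);
```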
The "refill buffer" or "audio frame", as it is generally called, has a duration determined by the number of samples per frame divided by the sample rate. So, on Windows in high-quality mode, this typically gives:
(1,024 samples) / (48,000 Hz) = 21.3 ms. If we set the number of samples per frame to 512, using AkInitSettings::uNumSamplesPerFrame, then the audio frame would be:
(512 samples) / (48,000 Hz) = 10.6 ms.
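The arithmetic above can be checked directly; AudioFrameMs is our own illustrative helper, not an SDK function:

```cpp
#include <cassert>

// Duration of one audio frame in milliseconds:
// (samples per frame) / (sample rate in Hz) * 1000.
double AudioFrameMs(unsigned samplesPerFrame, double sampleRateHz)
{
    return samplesPerFrame / sampleRateHz * 1000.0;
}

// AudioFrameMs(1024, 48000.0) ≈ 21.33 ms (high-quality default)
// AudioFrameMs(512,  48000.0) ≈ 10.67 ms
```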
So, in summary, the sound engine's total latency is determined by:
0 to 21 ms, depending on whether RenderAudio() was called at the end or at the beginning of the current audio frame.
With two refill buffers (double-buffered) at 1,024 samples per frame: 2 * 21 ms = 42 ms of latency.
With 512 samples per frame, the same double-buffering gives: 2 * 11 ms = 22 ms of latency.
Total, in a 60 frames per second update system: up to one 16.7 ms game frame before RenderAudio() is called, plus 21 ms + 42 ms, or roughly 80 ms in the worst case at 1,024 samples per frame (about 16.7 + 11 + 22 ≈ 50 ms at 512 samples per frame).
On top of the sound engine latency calculations described above, if a sound is 100% streamed (nothing loaded into memory), you also need to add the inherent I/O latency. To avoid it, the Zero Latency option can be enabled so that the beginning of the sound is loaded into memory. The size of this in-memory buffer is determined by the Prefetch length, which defaults to a normally safe 100 ms.
Here are a few useful examples (pseudo-code):
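The following sketches illustrate common posting patterns, assuming a registered game object gameObjectID; the event names, and the AK::EVENTS::PLAY_BIRD1 constant from the generated Wwise_IDs.h, are hypothetical and depend on your project:

```cpp
// Post by name (the string is resolved to an ID internally):
AK::SoundEngine::PostEvent("Play_Bird1", gameObjectID);

// Post by ID, using the header generated with the banks:
// #include "Wwise_IDs.h"
AK::SoundEngine::PostEvent(AK::EVENTS::PLAY_BIRD1, gameObjectID);

// Make the queued events take effect (typically once per game frame):
AK::SoundEngine::RenderAudio();
```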
For examples of integrating events, refer to Quick Start Sample Integration - Events.