Before jumping into the code and using the Wwise SDK, you should understand the unique approach Wwise uses for building and integrating audio into your game. There are also a few concepts that you should be familiar with in order to work efficiently and get the most out of Wwise.
The Wwise approach to building and integrating audio in a game includes five main components:

- Audio objects
- Game objects
- Listeners
- Events
- Game syncs
Each of these components is discussed in further detail in the following sections, but before moving on, you should understand where each component fits in and how they relate to one another.
One of the goals of Wwise was to create a clear distinction between the tasks of the programmer and those of the sound designer. For example, the audio objects, which represent the individual sounds in your game, are created and managed exclusively within the Wwise application by the sound designer. Game objects and listeners, on the other hand, which represent specific game elements that emit or receive audio, are created and managed within the game by the programmer. The final two components, Events and game syncs, are used to drive the audio in your game. These two components create the bridge between the audio assets and the game components and are therefore integral to both Wwise and the game.
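To make the division of responsibilities concrete, the following is a minimal sketch of the programmer's side of that bridge, using the Wwise SDK's game object and Event APIs. It assumes the sound engine has already been initialized elsewhere during startup; the game object ID and the "Play_Footstep" Event name are hypothetical examples, since real Event names come from the sound designer's Wwise project.

```cpp
// Sketch only: assumes AK::SoundEngine::Init() has already been called
// during the game's startup sequence.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Arbitrary, game-chosen identifier for this emitter (hypothetical value).
static const AkGameObjectID PLAYER_OBJECT_ID = 100;

void PlayFootstepOnPlayer()
{
    // The programmer registers a game object to represent a game element
    // that emits audio...
    AK::SoundEngine::RegisterGameObj(PLAYER_OBJECT_ID, "Player");

    // ...and posts an Event authored by the sound designer in Wwise.
    // "Play_Footstep" is a placeholder for a real Event name.
    AK::SoundEngine::PostEvent("Play_Footstep", PLAYER_OBJECT_ID);

    // When the game element no longer emits or receives audio,
    // the game object should be unregistered.
    AK::SoundEngine::UnregisterGameObj(PLAYER_OBJECT_ID);
}
```

Note how the code never references an individual sound file: the Event name is the entire contract between the programmer and the sound designer, which is what lets the designer change the underlying audio in Wwise without requiring code changes.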
The following illustration shows where each of these components is created and managed.