Switches represent the different alternatives that exist for a particular game object within the game. Sound, music, and motion objects are organized and assigned to switches so that the appropriate object will play when a change is made from one alternative to another in-game. The Wwise objects assigned to a switch are grouped into a switch container. When an event is triggered, the switch container checks the current switch value and plays the corresponding sound, music, or motion object.
To take this a step further, we are going to build a cascading set of switch groups that work in concert to play the correct footstep sound based on:
To unleash the full power of switches, a programmer will need to define the corresponding switch groups and states in the game engine so that gameplay code can drive the system.
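As a rough sketch of what that engine-side hook can look like: in the Wwise SDK the runtime call is `AK::SoundEngine::SetSwitch` (C++), but here it is stubbed in Python, and all switch group, state, and function names are illustrative rather than taken from any real project.

```python
# Minimal sketch: before posting the footstep event, the engine tells the
# sound engine which alternative is active for a given game object.
# set_switch() is a hypothetical stand-in for AK::SoundEngine::SetSwitch.

active_switches: dict[tuple[int, str], str] = {}

def set_switch(switch_group: str, switch_state: str, game_object_id: int) -> None:
    """Stand-in for the sound engine's SetSwitch call: records the
    current state of a switch group on a specific game object."""
    active_switches[(game_object_id, switch_group)] = switch_state

def on_footstep(game_object_id: int, surface: str) -> None:
    # The game's collision query reports the surface under the character;
    # the engine mirrors that into a (hypothetical) Footstep_Surface group.
    set_switch("Footstep_Surface", surface, game_object_id)
    # A post_event("Play_Footstep", game_object_id) call would follow here,
    # and the switch container would resolve the correct variation.

on_footstep(game_object_id=101, surface="Gravel")
```

The key design point is that the sound designer's switch container logic stays in Wwise; the engine's only job is to keep each game object's switch state current.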
It’s common for every asset in a game to include metadata about what the asset is, be it a rock texture or a character model, along with the properties that define it. By reusing these existing definitions, or authoring new data to control the switching system, we move closer to a data-driven pipeline, which makes the system easier to manage and scale throughout development.
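A minimal sketch of that data-driven idea, assuming each asset carries a physical-material tag in its metadata; the asset names, tags, and switch state names below are all hypothetical:

```python
# Sketch: resolve an asset's existing metadata tag to a Wwise switch state,
# so new assets pick up the correct footstep variation automatically instead
# of being wired up by hand. All identifiers here are illustrative.

asset_metadata = {
    "rock_floor_01": {"physical_material": "Stone"},
    "wood_bridge_02": {"physical_material": "Wood"},
}

# One table maps the game's material vocabulary to switch states.
material_to_switch = {
    "Stone": "Footstep_Stone",
    "Wood": "Footstep_Wood",
}

def switch_for_asset(asset_name: str,
                     default: str = "Footstep_Generic") -> str:
    """Look up the asset's material tag and return the matching switch
    state, falling back to a generic state for untagged assets."""
    material = asset_metadata.get(asset_name, {}).get("physical_material")
    return material_to_switch.get(material, default)
```

With this in place, adding a new surface type is a data change (one metadata tag plus one table entry) rather than a code change, which is what makes the pipeline scale.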
Using information from the game to drive the switching system in Wwise is something that should be discussed with an audio programmer as early as possible in the development cycle. There are great opportunities to automate much of the challenge that comes with this aspect of implementation.
While seemingly complex compared to a simple footstep implementation, driving the system with information from the game allows for greater control over the resulting sound and puts creative control in the hands of the sound designer.