The Integration Demo application contains a series of demonstrations that show how to integrate various features of the sound engine in your game.
||Note: All code presented in this section is available in a sample project in the "samples\IntegrationDemo\$(Platform)" directory.|
The Integration Demo binaries are available in the "$(Platform)\[Debug|Profile|Release]\bin" directory. If you would like to rebuild the application yourself, follow these steps:
- Confirm that the version of the DirectX SDK installed on your machine matches the one mentioned in Platform Requirements.
- Generate the Integration Demo SoundBanks for Windows in their default paths.
- Open the solution found in "samples\IntegrationDemo\Windows" and build using the desired configuration.
To run the Integration Demo, simply launch the executable found in the directory mentioned above.
- Confirm that the version of the XDK installed on your machine matches the one mentioned in Platform Requirements and that the XEDK environment variable points to it.
- Generate the Integration Demo SoundBanks for the Xbox 360™ in their default paths.
- Build all three configurations of the XDK's ATGFramework library in their default target paths. The VCPROJ file for the framework can be found in the XDK's "Source\Samples\Common\" directory.
- Open the solution found in "samples\IntegrationDemo\Xbox360" and build using the desired configuration.
To run the Integration Demo on Xbox 360™:
- Open the Xbox 360 Neighborhood and navigate to the hard drive of the Xbox 360™ that will be running the demo.
- Create a new folder on the hard drive and name it "IntegrationDemo".
- Go to the directory containing the Integration Demo binaries (e.g., SDK\Xbox360_vc90\[Debug|Profile|Release]\bin in the Wwise SDK).
- Copy the "Media" folder and the IntegrationDemo.xex into the folder you created on the Xbox 360™ hard drive in step 2.
- Go into the "Media" folder on the Xbox 360™ that you just copied over.
- Create a new folder called "Audio".
- Copy all the SoundBanks generated for the Xbox 360™ into the "Audio" folder.
- You should now be able to run the demo from the Xbox 360™. On the title menu, scroll down to "IntegrationDemo.xex" and press the A button.
To run the Integration Demo on Xbox 360™ using Visual Studio:
- Open the Xbox 360™ solution of the Integration Demo and run the application (the necessary files will be copied over).
- Confirm that the version of the PlayStation®3 SDK installed on your machine is the same as the one mentioned in Platform Requirements and that all related SDK environment variables point to the correct locations.
- Generate the Integration Demo SoundBanks for the PS3 in their default paths.
- Build the sample framework included with the PS3 SDK in the "PS3 Release" configuration. The VCPROJ file for the framework can be found in the "cell\samples\fw\" directory.
- Open the solution found in "samples\IntegrationDemo\PS3" and build using the desired configuration.
To run the Integration Demo on PlayStation®3 using the Windows Command Line:
- Create a directory to use as the application's home directory.
- Copy the SoundBanks generated for the PS3 to this directory.
- Open a Windows Command Line shell.
- Go to the directory where the Integration Demo binary (IntegrationDemo.ppu.self) is located.
- Use the following command to run the application (replace <PATH> with the path of the folder you created in step 1): ps3run -f <PATH> IntegrationDemo.ppu.self
To run the Integration Demo on PlayStation®3 using Visual Studio:
- Copy the SoundBanks generated for the PS3 to the directory designated as the PS3's app_home.
- Open the PS3 solution of the Integration Demo.
- Run the application from Visual Studio (the PS3 Target Manager and ProDG Debugger should start automatically).
Note: If the command is not recognized in the Windows Command Line, make sure that your PATH variable is set to include the paths to where the PS3 SDK tools are installed.
(Using the supplied Visual Studio solution)
- Confirm that the versions of the Revolution SDK and GNU Make match the ones mentioned in Platform Requirements.
- You must have the NMIC SDK patch installed to compile (it is used in the Microphone Demo).
- Ensure that the environment variable CYGWIN_PATH points to the location where Cygwin was installed.
- Open the solution found in "samples\IntegrationDemo\Wii" and build using the desired configuration.
To run the Integration Demo on the Wii™ using either the Windows Command Line or the Revolution SDK Shell:
- Create a directory to use as the Wii™'s DVD root.
- Copy the SoundBanks generated for the Wii™ to this directory.
- Open either a Windows Command Line shell or the Revolution SDK shell.
- Run the following command to set the Wii™ DVD root (replace <PATH> with the path of the folder you created in step 1): setndenv DvdRoot <PATH>
- Go to the directory where the Integration Demo binary (IntegrationDemo.elf) file is located.
- Run the following command to launch the application: ndrun IntegrationDemo.elf
Note: If the commands are not recognized in the Windows Command Line, make sure that your PATH variable is set to include the paths to where the Revolution SDK tools are installed.
- Confirm that the version of Xcode installed on your machine matches the one mentioned in Platform Requirements.
- Open the Xcode project found in "samples\IntegrationDemo\Mac" and build using the desired configuration.
- To run the Integration Demo, simply launch the executable found in the directory "Mac\[Debug|Profile|Release]\bin".
Note: The banks are not included in the installer and have to be generated using the authoring tool.
- Confirm that the version of the iOS/tvOS SDK installed on your machine matches the one mentioned in Platform Requirements.
- Open the Xcode project found in "samples\IntegrationDemo\iOS" or "samples\IntegrationDemo\tvOS" and build using the desired configuration.
Note: The banks are not included in the installer and have to be generated using the authoring tool.
- Confirm that the version of the Cafe SDK installed on your machine matches the one mentioned in Platform Requirements.
- Open the Wwise project in "samples\IntegrationDemo\WwiseProject" and generate the SoundBanks for WiiU in their default paths.
- Open the Visual Studio solution in the "samples\IntegrationDemo\WiiU" folder and build the desired configuration (Debug/Profile/Release).
- Copy the contents of the folder "samples\IntegrationDemo\WwiseProject\GeneratedSoundBanks\WiiU" into the cafe_sdk/data/disc/content folder.
- Start a Cafe command prompt with Cafe.bat (in cafe_sdk).
- Change directory to "[WwiseSDK]\WiiU\Debug\bin". IMPORTANT: The CAFE tools can't boot an application from "Program Files". If your SDK is in "Program Files" you need to copy the executable elsewhere.
- Run the following command to launch the application: caferun IntegrationDemo.rpx
- Confirm that the version of the VITA SDK installed on your machine matches the one mentioned in Platform Requirements.
- Open the Wwise project in "samples\IntegrationDemo\WwiseProject" and generate the SoundBanks for Vita in their default paths.
- Open the Visual Studio solution in the "samples\IntegrationDemo\Vita" folder and build the desired configuration (Debug/Profile/Release).
- Make sure the Working Directory property for the Vita is set to "<Installation Path>\SDK\samples\IntegrationDemo\WwiseProject\GeneratedSoundBanks\Vita". To set it in Visual Studio, select ProDG VSI Project Properties from the Project menu.
- Launch the application from Visual Studio (F5).
- Confirm that the versions of the Android SDK and tools installed on your machine match the ones mentioned in Platform Requirements.
- Open the Wwise project in "samples\IntegrationDemo\WwiseProject" and generate the SoundBanks for Android in their default paths.
- Normally, banks should be packaged inside your apk executable file. However, for simplicity's sake, the banks are not packaged inside the IntegrationDemo apk. You will need to copy the Android banks to the /sdcard/IntegrationDemo folder on your Android device. If you don't have an SD card, you can modify the path found in the file SDK/samples/IntegrationDemo/Android/platform.h to suit your needs.
- Load the Eclipse project into your Eclipse workspace (File > Import > Existing Project).
- Select "SDK/samples/IntegrationDemo/Android" as the root of the project.
- Right-click on the integration demo project and select "Run as Android Application".
||Note: You will need to use the software keyboard or hardware keyboard to interact with the integration demo. The software keyboard can be opened by holding the menu button for 2 seconds.|
You can navigate through the Integration Demo on Windows using the keyboard, a connected controller, or any DirectInput-compatible device.
- To navigate between controls on a page, use the UP and DOWN arrow keys or the UP and DOWN buttons on a gamepad's directional pad.
- To activate the selected control, hit the Enter key or the A/X button on a gamepad.
- To go back a page in the menu, hit the Escape key or the B/O button on a gamepad.
Certain controls (e.g., Toggle Controls and Numeric Sliders) allow you to change values. To change their values, hit the LEFT and RIGHT arrow keys or the LEFT and RIGHT buttons on a gamepad's directional pad.
||Tip: The application has an online help feature! To access the Help page, press F1 on the keyboard or the START button on a gamepad.|
The code behind each demonstration can be found in the "samples\IntegrationDemo\DemoPages" directory. For example, the code for the Localization demo will be in the DemoLocalization.h and DemoLocalization.cpp files in that directory.
||Tip: Pertinent information about each demo can also be found in the Integration Demo application's online help.|
This demo shows how to implement localized audio. Localized sound objects are found in language-specific SoundBanks in subdirectories of the SoundBank generation directory. We achieve the localization effect by unloading the current SoundBank and reloading the desired language-specific SoundBank.
Use the "Language" Toggle control to switch the current language and then press the "Say Hello" button to hear a greeting in the selected language.
For more information about languages and localization, see Integration Details - Languages and Voices
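The bank-swap logic described above can be sketched as follows. This is a self-contained stand-in using stdlib containers, not the actual AK::SoundEngine::LoadBank/UnloadBank API, and the bank and language names are assumptions for illustration:

```cpp
#include <set>
#include <string>

// Hypothetical stand-ins for the Wwise bank calls; the real API
// (AK::SoundEngine::LoadBank / UnloadBank) also returns an AKRESULT.
std::set<std::string> g_loadedBanks;

void LoadBank(const std::string& name)   { g_loadedBanks.insert(name); }
void UnloadBank(const std::string& name) { g_loadedBanks.erase(name); }

// Switch languages by unloading the old language's copy of the bank and
// loading the same bank name resolved against the new language's
// subdirectory. Returns the path of the newly loaded bank.
std::string SwitchLanguage(const std::string& bankName,
                           const std::string& oldLang,
                           const std::string& newLang)
{
    UnloadBank(oldLang + "/" + bankName);
    const std::string newBank = newLang + "/" + bankName;
    LoadBank(newBank);
    return newBank;
}
```

In the real integration the language subdirectory is typically resolved by the streaming I/O layer rather than concatenated by hand; the sketch only shows the unload-then-reload ordering.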
The Dynamic Dialogue demo runs through a series of tests that use Wwise's Dynamic Dialogue features. Each of these tests demonstrates a different control flow so that you can hear the effect it produces:
- Test 1: Shows how to play a simple dynamic sequence using Wwise IDs.
- Test 2: Like Test 1, but uses strings instead of IDs.
- Test 3: Shows how to add an item to a dynamic playlist during playback.
- Test 4: Shows how to insert an item into the dynamic playlist during playback.
- Test 5: Shows what happens when an item is added to an empty playlist.
- Test 6: Shows how to use the "Stop" call on a dynamic sequence.
- Test 7: Shows how to use the "Break" call on a dynamic sequence.
- Test 8: Shows how to use the "Pause" and "Resume" calls on a dynamic sequence.
- Test 9: Shows how to use the "Delay" call when enqueuing an item to a dynamic sequence.
- Test 10: Shows how to clear a playlist during playback.
- Test 11: Shows what happens when the playlist is stopped and cleared.
- Test 12: Shows what happens when "Break" is called on a playlist and it is cleared.
- Test 13: Shows what happens when a playlist is paused and cleared.
- Test 14: Shows how to use a callback function with custom parameters when working with dynamic dialogue.
- Test 15: Shows how to use a callback to perform tasks (in this case, to cancel playback after 3 items have played).
- Test 16: Shows how to use a callback to perform tasks (in this case, play a second sequence after the first sequence ends).
- Test 17: Shows how to use Wwise events in conjunction with dynamic dialogue.
For more information about Dynamic Dialogue, see Integration Details - Dynamic Dialogue
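The "Stop" versus "Break" distinction exercised by Tests 6 and 7 can be sketched with a toy playlist. This is a stand-in for the control flow only, not the AK::SoundEngine::DynamicSequence API; the interpretation (Stop halts immediately and flushes the queue, Break lets the current item finish and then stops) follows the tests above:

```cpp
#include <deque>
#include <string>

// Toy dynamic-sequence playlist (illustrative only).
struct DynamicSequence {
    std::deque<std::string> playlist;
    bool playing = false;
    bool breakRequested = false;

    void Enqueue(const std::string& item) { playlist.push_back(item); }

    // Called to start playback or when the current item ends.
    // Returns the next item to play, or "" if the sequence stops.
    std::string PlayNext() {
        if (breakRequested || playlist.empty()) { playing = false; return ""; }
        std::string item = playlist.front();
        playlist.pop_front();
        playing = true;
        return item;
    }

    void Stop()  { playing = false; playlist.clear(); } // halt immediately
    void Break() { breakRequested = true; }             // finish current item, then stop
};
```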
This demo shows how to use RTPCs. The RPM numeric slider is linked with an RTPC value (RPM) associated with the engine. Press the "Start Engine" button to start/stop car engine audio and use the RPM slider to change the RTPC value and hear the effect.
For more information about RTPCs, see Integration Details - RTPCs
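The slider-to-RTPC link can be sketched as below. This is an illustrative stand-in for AK::SoundEngine::SetRTPCValue (which takes the parameter name or ID, the value, and a game object); the RPM range of 1000-10000 is an assumption for the sketch, standing in for the range authored in the Wwise project:

```cpp
#include <algorithm>
#include <map>
#include <string>

// Stores a named game parameter after clamping it to its authored range,
// mimicking what the sound engine does with out-of-range RTPC values.
std::map<std::string, float> g_rtpc;

float SetRTPCValue(const std::string& name, float value,
                   float rangeMin = 1000.0f, float rangeMax = 10000.0f)
{
    const float clamped = std::min(std::max(value, rangeMin), rangeMax);
    g_rtpc[name] = clamped;
    return clamped;
}
```

In the demo, the slider's current value would be pushed through such a call every time it changes, and the engine interpolates the audio properties bound to the RTPC curve.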
This demo shows various ways to implement footsteps in a game. It also shows surface-driven bank management to minimize both media and metadata memory when a surface isn't in use. Finally, this demo also shows a very simple case of environmental effects.
In this example, the footstep sounds are modified by three variables: surface, walking speed, and walker weight.
- Surface (Surface switch): The surface changes the actual properties of the sounds, so it can't be simulated. However, because footsteps are impact sounds, we chose to use SoundSeed Impact for most of the surface types. Each surface has very specific resonance characteristics that SoundSeed Impact can modulate. SoundSeed Impact generates a large variety of sounds out of a small subset of real sounds, thus saving space.
- Walking Speed (Footstep_Speed RTPC): This project supports a smooth transition from walking to running in almost all cases. For this variable, we assume the following: the faster you walk, the shorter the footstep and the harder you hit the ground. This translates into Pitch and Volume changes respectively. Look for RTPCs on these parameters in the project. In this demo, the Speed RTPC is driven directly by the joystick displacement.
- Walker Weight (Footstep_Weight RTPC): The footstep structure supports various walker weights. We assume that, in real life, a heavier walker takes a longer footstep and sounds more muffled. This translates into Pitch and LPF changes respectively. Look for RTPCs on these parameters in the project.
With each surface, we show a different way of dealing with the sound samples and variables. These are only suggestions and ideas that you can use in your own structure.
- Gravel: Our gravel samples are very noisy, so they don't go well with SoundSeed Impact. They are also very similar to one another, so having many samples of this surface would gain us little. More variation is obtained with a bit of Volume, LPF, and Pitch randomization. The Weight influence is done through the EQ effect, with its gain parameters driven by the Weight RTPC: the higher frequencies are boosted for light footsteps, and the reverse for heavy footsteps. Note the effect of the RTPCs on pitch and volume.
- Metal: The metal surface is a textbook example of SoundSeed Impact usage; there is lots of resonance. In our samples, we could easily identify a heel impact followed by a toe impact. In order to get more variation, we split each sample in two, which allows us to independently randomize the pitch of each section. We recombine both using a sample-accurate transition sequence. This gives us 25 basic combinations out of our 5 original samples. Add some pitch randomization and the natural variation of SoundSeed Impact, and we get a good variety of sounds. The Weight and Speed RTPCs drive the SoundSeed Impact parameters as well as the basic Pitch and LPF.
- Wood: For the wood surface, the walking and running samples were very different, as were the heavy and light footsteps, so this surface was organized in a more traditional switch hierarchy. Both switch containers are driven by an RTPC-driven switch (look in the GameSync tab at Footstep_Gait and Footstep_Weight). The wood surface works well with SoundSeed Impact too.
- Dirt: Samples for walking and running on this surface were somewhat similar, so we decided to do the transition with a Blend container. RTPCs on Pitch and Volume were used to take the Weight into account.
In this demo, the banks were divided into four media banks (one per surface). We divided the screen into four zones, with a buffer zone between each surface where both banks are loaded; this avoids a gap in the footsteps due to bank loading. In the bank manager, look at the GameSync tab. Note that each surface bank includes only the corresponding surface switch, which puts only the hierarchy related to that switch in the bank, and nothing else. In a large game, this setup has the advantage of limiting the amount of unused samples in a particular scenario, thus limiting the memory used. For level-based or section-based games, it is easy to identify the surfaces used, as they are known from the design stage. For open games, this is trickier and depends a lot on the organization of your game, but it can still be achieved. For example, it is useless to keep the "snow and ice" surface sounds in memory if your player is currently in a warm city and won't be moving toward colder settings for a long time.
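The buffer-zone idea can be sketched as a pure function that decides which surface banks should be loaded for a given position. The quadrant layout matches the demo (four surfaces on a split screen), but the surface-to-quadrant assignment and the buffer width are assumptions for the sketch:

```cpp
#include <set>
#include <string>

// Returns the set of surface banks to keep loaded for a position (x, y) on a
// screen split into four quadrants at (halfW, halfH). Inside the buffer zone
// around a boundary, the banks on both sides stay loaded so a footstep never
// waits on a bank load.
std::set<std::string> BanksToLoad(float x, float y, float halfW, float halfH,
                                  float buffer)
{
    const char* surf[2][2] = { { "Dirt", "Gravel" }, { "Wood", "Metal" } };
    std::set<std::string> banks;
    for (int row = 0; row < 2; ++row)
        for (int col = 0; col < 2; ++col) {
            // Quadrant bounds, padded outward by the buffer zone.
            float x0 = (col == 0) ? -1e9f : halfW - buffer;
            float x1 = (col == 0) ? halfW + buffer : 1e9f;
            float y0 = (row == 0) ? -1e9f : halfH - buffer;
            float y1 = (row == 0) ? halfH + buffer : 1e9f;
            if (x >= x0 && x <= x1 && y >= y0 && y <= y1)
                banks.insert(surf[row][col]);
        }
    return banks;
}
```

The game would diff this set against the currently loaded banks each frame and issue the corresponding load/unload requests.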
This demo shows how you can set up a callback function to receive notification when markers inside a sound file are hit. For this demonstration, we are using the markers to synchronize subtitles with the audio track.
For more information on markers, see Integrating Markers
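The callback-with-cookie pattern used for subtitles can be sketched as below. The types are simplified stand-ins for the Wwise ones (the real callback receives an AkCallbackType and an AkMarkerCallbackInfo carrying the marker label); treating the marker label as the subtitle text is an assumption for the sketch:

```cpp
#include <string>
#include <vector>

// Simplified stand-ins for AkMarkerCallbackInfo and AkCallbackFunc.
struct MarkerInfo { const char* strLabel; void* pCookie; };
typedef void (*MarkerCallback)(const MarkerInfo& info);

// The demo's usage: collect each marker label as a subtitle line to display.
std::vector<std::string> g_shownSubtitles;

void SubtitleCallback(const MarkerInfo& info)
{
    g_shownSubtitles.push_back(info.strLabel);
}

// Simulate the engine firing the registered callback as playback reaches
// each marker embedded in the wave file.
void FireMarkers(MarkerCallback cb, const std::vector<std::string>& labels)
{
    for (const std::string& label : labels) {
        MarkerInfo info{ label.c_str(), nullptr };
        cb(info);
    }
}
```

In the real integration, the callback is registered by posting the event with a marker notification flag and passing the callback function and cookie to PostEvent.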
- Music Sync Callback Demo: Shows how to use music callbacks in general. Beat and bar notifications are generated from the music's tempo and time signature information.
- Music Playlist Callback Demo: Shows how to use a callback to force a random playlist to select its next item sequentially. The playlist item may be stopped via the callback as well.
- MIDI Callback Demo: Shows the MIDI messages the game can receive using callbacks. MIDI messages include MIDI notes, CC values, Pitch Bend, After Touch, and Program Changes.
For more information on music callbacks, refer to Integration Details - Music Callbacks
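The arithmetic behind beat and bar notifications is simple: at `bpm` beats per minute, a beat notification fires every 60/bpm seconds, and a bar notification fires every `beatsPerBar` beats. A small sketch of that scheduling, not the Wwise callback API itself:

```cpp
#include <vector>

// One scheduled notification: when it fires, and whether it is a bar
// (AK_MusicSyncBar-style) rather than just a beat (AK_MusicSyncBeat-style).
struct MusicNotification { double timeSec; bool isBar; };

// Derive notification times from tempo and time signature.
std::vector<MusicNotification> ScheduleNotifications(double bpm,
                                                     int beatsPerBar,
                                                     int totalBeats)
{
    std::vector<MusicNotification> out;
    const double beatDur = 60.0 / bpm;
    for (int beat = 0; beat < totalBeats; ++beat)
        out.push_back({ beat * beatDur, beat % beatsPerBar == 0 });
    return out;
}
```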
This example uses a Music Switch container. Try switching the states by triggering the event listed in the demo page. Switching state might produce a result that is immediate or occur at the time specified in the rules of the music container.
This is a multiplayer demonstration which shows how to integrate Wwise's motion engine into your game.
In this demonstration, each player has the option to either close a door in the environment or to shoot a gun that they are holding. A listener is set for each player which is active on the door game object as well as the player's own gun. This way, if any player closes the door in the environment, all players receive force feedback reactions. However, only the player who fired their weapon receives force feedback for that event.
||Note: A player using a keyboard should plug in a gamepad to participate in this demo.|
For more information on the Wwise Motion Engine, see Integrating Wwise Motion
This demo shows how to record audio from a microphone and feed it into the Wwise sound engine. In the Integration Demo, select the "Microphone Demo" and speak into the microphone to hear your voice played back by the Wwise sound engine. Toggle "Enable Delay" to hear an example of how audio data fed to the Audio Input plug-in can be processed like any other sound created in Wwise.
Each platform has a very different core API for accessing the microphone. Check the SoundInput and SoundInputMgr classes in the Integration Demo code to see how they interact with the Audio Input plug-in.
The microphone sample was tested using "Logitech USB Microphones" on all platforms.
See also Audio Input Source Plug-in
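The "Enable Delay" toggle illustrates that microphone samples fed through the Audio Input plug-in are ordinary audio data that can be processed downstream. A minimal sketch of such a processing step, mixing the input with a delayed copy of itself; the delay length and gain are assumptions, and in the demo the effect would actually be applied by the Wwise effect chain, not by hand:

```cpp
#include <cstddef>
#include <vector>

// Mix each input sample with a copy from `delaySamples` earlier, scaled by
// `gain`. Samples before the delay line fills pass through unchanged.
std::vector<float> ApplyDelay(const std::vector<float>& in,
                              std::size_t delaySamples, float gain)
{
    std::vector<float> out(in.size(), 0.0f);
    for (std::size_t i = 0; i < in.size(); ++i) {
        out[i] = in[i];
        if (i >= delaySamples)
            out[i] += gain * in[i - delaySamples];
    }
    return out;
}
```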
This demo shows how to use external sources. Both buttons play the same sound structure, but it is set up at run time with either sources "1", "2", and "3" or sources "4", "5", and "6".
For more information on the external sources feature, see Integrating External Sources.
Additionally, the external sources are packaged into a file package and loaded when opening the demo page. Refer to the Wwise Help for more information on the File Packager, and to the Streaming / Stream Manager chapter for more details on the run-time aspect of file packages.
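The run-time selection between the two source sets can be sketched as a function that resolves each external source slot to a media file. The real API passes an array of AkExternalSourceInfo structures to PostEvent; the file names below are assumptions for the sketch:

```cpp
#include <string>
#include <vector>

// Resolve the three external source slots of the played structure to one of
// two media sets: files "1".."3" for the first button, "4".."6" for the
// second. (Illustrative; stands in for filling AkExternalSourceInfo entries.)
std::vector<std::string> ResolveExternalSources(bool useFirstSet)
{
    std::vector<std::string> sources;
    const int base = useFirstSet ? 1 : 4;
    for (int i = 0; i < 3; ++i)
        sources.push_back(std::to_string(base + i) + ".wem");
    return sources;
}
```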
The Wwise project for this program is also available in "samples\IntegrationDemo\WwiseProject".
Note: The Wwise project for this program uses various audio file conversion formats, some of which may not be available depending on which platforms are supported by your Wwise installation. After opening the project in Wwise, you may see warnings such as:
'\Actor-Mixer Hierarchy\Dialogues\Captain_A\UNA-BM-AL_01\UNA-BM-AL_01' uses the conversion plugin 'XMA', which is not installed.
You can make these messages disappear by changing the conversion format for all unavailable platforms to PCM. Refer to the following topic in the Wwise User Guide for more information: Finishing Your Project > Managing Platform and Language Versions > Authoring Across Platforms > Converting Audio Files.
SoundBanks for this project are also installed with the SDK in the "samples\IntegrationDemo\WwiseProject\GeneratedSoundBanks" folder.
To regenerate the SoundBanks, make sure to do the following in the SoundBank Manager:
- Check all banks in the SoundBanks list
- Check all platforms being tested
- Check all languages in the Languages list
Once these settings are correct, you can click on Generate in the SoundBank Manager to generate the banks.
The Integration Demo as well as its Wwise Project are kept very simple in order to demonstrate the basics of sound engine integration. For a more realistic integration project, refer to the AkCube Sound Engine Integration Sample Project.