On many consoles, players can replace a game's music with music from their own personal library. In Wwise, this feature is almost automatic; however, implementation and behavior differ slightly between platforms.
On all platforms, the initial setup is the same: the sound designer must tag the busses that will be muted when the user starts their own music. This is done with the "Mute for Background Music" check box. On the code side, some additional details are needed, depending on the platform.
For the background music option to work on the PlayStation 3, you must also enable a switch in the sound engine initialization settings (AkPlatformInitSettings::bBGMEnable).
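As a rough sketch, the flag can be set during the usual sound engine initialization sequence. This is an illustrative configuration fragment, not a complete initialization routine; it assumes the standard Wwise initialization flow and omits memory and streaming manager setup.

```cpp
// Minimal sketch: enabling background music support on PlayStation 3.
// Assumes the sound engine headers are included and that the rest of
// the Wwise initialization (memory manager, stream manager) is done
// elsewhere as usual.
AkInitSettings initSettings;
AkPlatformInitSettings platformInitSettings;
AK::SoundEngine::GetDefaultInitSettings(initSettings);
AK::SoundEngine::GetDefaultPlatformInitSettings(platformInitSettings);

// Allow the user's background music to mute the tagged busses.
platformInitSettings.bBGMEnable = true;

AK::SoundEngine::Init(&initSettings, &platformInitSettings);
```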
Nothing special is needed on the code side for this platform.
Make sure to fill in the jNativeActivity member of AkPlatformInitSettings. The Mute/Unmute action occurs only when the user switches between the music player app and the game. This means there is no "Unmute" if the user's music ends on its own.
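The following is a hedged sketch of how the activity might be passed in on Android. It assumes a NativeActivity-based game; the function name and the way the activity object is obtained (here, the `clazz` member of `ANativeActivity`) are illustrative and may differ in your engine integration.

```cpp
// Sketch: passing the native activity to the Wwise platform init
// settings on Android, so background music muting can work.
// InitWwiseForAndroid and in_pActivity are illustrative names.
void InitWwiseForAndroid(ANativeActivity* in_pActivity)
{
    AkInitSettings initSettings;
    AkPlatformInitSettings platformInitSettings;
    AK::SoundEngine::GetDefaultInitSettings(initSettings);
    AK::SoundEngine::GetDefaultPlatformInitSettings(platformInitSettings);

    // Required for the background music option to work on Android.
    platformInitSettings.jNativeActivity = in_pActivity->clazz;

    AK::SoundEngine::Init(&initSettings, &platformInitSettings);
}
```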
If the AudioSession "MixOther" flag is set in the sound engine initialization settings, the Mute/Unmute action occurs only when the user switches between the music player app and the game. This means there is no "Unmute" if the user's music ends on its own. On iOS 8 and later, if the AVAudioSessionCategoryAmbient category is used, the game music is muted and unmuted for all application audio interruptions.
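As a loosely sketched example of the iOS 8+ behavior described above: the audio session category is selected through the platform initialization settings. The exact member and enumerator names used below (`audioSession`, `eCategory`, `AkAudioSessionCategoryAmbient`) are assumptions and should be checked against your SDK version's AkPlatformInitSettings definition.

```cpp
// Sketch (iOS): selecting the Ambient audio session category so the
// game music is muted/unmuted on all audio interruptions.
// Member/enum names are assumptions; verify against your SDK headers.
AkInitSettings initSettings;
AkPlatformInitSettings platformInitSettings;
AK::SoundEngine::GetDefaultInitSettings(initSettings);
AK::SoundEngine::GetDefaultPlatformInitSettings(platformInitSettings);

platformInitSettings.audioSession.eCategory = AkAudioSessionCategoryAmbient;

AK::SoundEngine::Init(&initSettings, &platformInitSettings);
```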
Some platforms have a DVR function that allows gamers to record their gameplay and publish it. This raises legal issues regarding copyrighted music that might be part of the game audio, including user-replaceable music. While the game studio has the rights to use the music in its game, the end user does not have the rights to redistribute it in any form. Platform requirements therefore usually state that user background music must not be recorded.
The most CPU-efficient solution to this problem is to mix the music separately from the rest of the game audio. This is done using the Secondary Output feature.
In the Authoring tool, the only thing needed is to route the music objects to the Master Secondary Bus, or any bus under it. If your project also runs on an older platform that doesn't have a separate BGM output, you will need to Unlink the Output Bus property for that platform and route the music to a different bus in the main hierarchy.
If your game also plays sounds through the game controllers, your project uses this same bus hierarchy for the controller sounds. Do not worry about the music being mixed with the controller sounds, or vice versa; it won't be. Final routing decisions are made in an additional step through the Listener/Game Object pairings (see below), which are set up by the game programmer. So, just as each player doesn't hear the same sounds mixed in their own controller, the music is sent to a different mixing structure internally.
The BGM output, if used in the game, must be added manually. This is done with AK::SoundEngine::AddSecondaryOutput. The regular Listener/Game Object concept is used to route sounds exclusively to this device. The game must use a different listener than the one used for TV output (usually listener 0). See Concept: Listeners for more information. The programmer must also set up the listener mask of the Game Objects that should be heard by this listener, using AK::SoundEngine::SetActiveListeners or AK::SoundEngine::RegisterGameObj.
// Add a BGM output, associated with listener #8
// (why 8? because it is not 0, which is usually associated with the player).
AddSecondaryOutput(0 /*Unused for BGM*/, AkSink_BGM, 0x80 /*Listener 8 (8th bit)*/);

// Set up a game object to emit sound to listener 8.
RegisterGameObj(MY_MUSIC_OBJECT, 0x80);

// Play the music. This sound must be routed to the Master Secondary Bus (or any sub-bus).
PostEvent("Play_Cool_Music", MY_MUSIC_OBJECT);