Table of Contents
Wwise SDK 2018.1.11
Creating Wwise sound engine plug-ins is empowering, but it can be complex. We strongly recommend that you read through the following sections in tandem with the samples we provide in the Sample Plug-ins section. Our samples offer a reliable style and structure for the basis of your own creations.
- Wwise Sound Engine Plug-ins Overview
- Plug-in Static Registration
- Dynamic Libraries
- Allocating/De-allocating Memory in Audio Plug-ins
- Parameter Node Interface Implementation
- Audio Plug-in Interface Implementation
- Accessing Data Using AkAudioBuffer Structure
- Using Global Sound Engine Callbacks From Plug-ins
Plug-ins allow you to insert custom DSP routines into the overall signal processing chain performed by the sound engine. The plug-in parameters can be controlled either from the authoring tool or in-game through RTPCs (refer to Concept: Real-Time Parameter Controls (RTPCs)).
Each plug-in consists of two components:
- A runtime component integrated into the sound engine that can be developed cross-platform or optimized for a specific platform. The game will need to register this plug-in (see Plug-in Static Registration) and link against its implementation.
- A user interface component integrated into the authoring tool. This component is found in a DLL loaded by Wwise (see How to Create a Wwise Plug-in DLL). All plug-in parameters determined in the UI are stored in banks so this component is not used in game. If a parameter is to be controlled in game through RTPCs, the necessary information will also be stored in banks.
|Tip: Create a common static library for the runtime component. This way it can be linked by both the game and the plug-in user interface, a DLL which is loaded by Wwise.|
A parameter node interface needs to be implemented to react to changes coming from either the sound engine's RTPC manager or the Wwise authoring tool and retrieve the current plug-in instance parameters during execution. (See Parameter Node Interface Implementation.)
There are three main categories of plug-ins that can be integrated into the sound engine:
- Source plug-ins, which produce audio content using synthesis methods such as physical modeling, modulation synthesis, sampling synthesis, and so on. Refer to How to Create Wwise Sound Engine Source Plug-ins for more information.
- Effect plug-ins, which apply DSP algorithms to existing sounds processed as input audio data. Refer to How to Create Wwise Sound Engine Effect Plug-ins for more information.
- Audio devices plug-ins, which receive audio data at the end of the pipeline and can pass it to other sound systems. Refer to How to Create Wwise Audio Device Plug-ins (Sink Plug-ins) for more information.
Plug-ins and their associated parameter interfaces are created by the sound engine through a plug-in mechanism that requires exposing static creation functions, which return new parameter node instances and new plug-in instances when required. The following code demonstrates how this can be done. The creation functions must be packaged into AK::PluginRegistration static instances that are visible to users of the library. You need one AK::PluginRegistration instance per plug-in class/type.
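A sketch of this mechanism follows. The class names and IDs are illustrative (a real CompanyID is reserved through Audiokinetic, and the PluginID must match your XML description file); the AK::PluginRegistration constructor arguments should be verified against IAkPlugin.h for your SDK version.

```cpp
// Hypothetical effect plug-in creation functions.
AK::IAkPlugin* CreateMyPlugin( AK::IAkPluginMemAlloc * in_pAllocator )
{
    return AK_PLUGIN_NEW( in_pAllocator, CAkMyPlugin() );
}

AK::IAkPluginParam* CreateMyPluginParams( AK::IAkPluginMemAlloc * in_pAllocator )
{
    return AK_PLUGIN_NEW( in_pAllocator, CAkMyPluginParams() );
}

// One AK::PluginRegistration instance per plug-in class/type.
// The object name must be the plug-in name followed by "Registration".
AK::PluginRegistration MyPluginRegistration(
    AkPluginTypeEffect,     // Plug-in type (see AkPluginType).
    64,                     // CompanyID (illustrative; obtain a reserved ID).
    123,                    // PluginID (illustrative; must match the XML file).
    CreateMyPlugin,         // Plug-in instance creation function.
    CreateMyPluginParams ); // Parameter node creation function.
```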
|Note: Set the AkPluginType argument of the AK::PluginRegistration function appropriately depending on the type of plug-in you make. For example, AkPluginTypeSource for source plug-ins and AkPluginTypeEffect for effect plug-ins. See AkPluginType|
|Note: The naming of the Registration object is important: it must match the AK_STATIC_LINK_PLUGIN(pluginname) macro, which appends "Registration" to pluginname. See Plug-in Static Registration below.|
If you are a plug-in provider and wish to ensure that your plug-ins will not have the same ID as plug-ins from other vendors, please contact firstname.lastname@example.org to obtain a reserved company ID.
Various instances of audio plug-ins are all handled by the Plug-in Manager, which recognizes different plug-in classes using both CompanyIDs and PluginIDs. Plug-ins must be registered to the Plug-in Manager before they can be used in game. The registration process binds a PluginID to the creation function callbacks provided as arguments.
The sample code below demonstrates how plug-ins can be registered by the game. Plug-ins provided by Audiokinetic must also be registered if they are used by a game.
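For example, a game could register an Audiokinetic-provided effect alongside a custom plug-in by including their Factory headers in a single translation unit (the custom header name here is hypothetical):

```cpp
// In exactly one CPP file of the game's linking unit.
#include <AK/Plugin/AkRoomVerbFXFactory.h>  // Audiokinetic-provided plug-in.
#include "MyPluginFactory.h"                // Your own plug-in (hypothetical name).
```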
For ease of use, all plug-ins also have a Factory header file, which contains only the AK_STATIC_LINK_PLUGIN macro. To make plug-in management easier, name your Factory header after your library with "Factory" appended. For example, the code below would be the content of MyPluginFactory.h, associated with MyPlugin.lib.
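A sketch of such a Factory header, using the hypothetical MyPlugin name (the header providing AK_STATIC_LINK_PLUGIN is IAkPlugin.h in recent SDK versions; verify for yours):

```cpp
// MyPluginFactory.h
#pragma once
#include <AK/SoundEngine/Common/IAkPlugin.h>

// References the MyPluginRegistration object exposed by MyPlugin.lib.
AK_STATIC_LINK_PLUGIN( MyPlugin )
```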
A game using MyPlugin would simply include the MyPluginFactory file and link with MyPlugin.lib.
|Note: If you get a link error about symbols ending with _linkonceonly multiply defined, this means you included the Factory include in multiple CPP files. It needs to be included once only per linking unit (such as a DLL, SO, DYLIB, or EXE file).|
There are two ways to use plug-ins in a game: through static libraries and dynamic libraries. Distributing a static library is mandatory. Distributing dynamic libraries is optional but highly recommended because the Wwise integration for Unity uses dynamic libraries. The Wwise Unreal integration, however, needs to use static libraries.
Creating a dynamic library from the static library is quite easy. Audiokinetic did this for all the Effect plug-ins, so you can find many examples in the folder \SDK\samples\DynamicLibraries. You must:
- Ensure the project links the static library you built.
- Include one reference to AK_STATIC_LINK_PLUGIN either directly or through a header file.
- Make sure your dynamic library exports the symbol "g_pAKPluginList". Most platforms export it automatically because it is explicitly declared as exportable; some need an explicit DEF file. Use the macro DEFINE_PLUGIN_REGISTER_HOOK to define the symbol.
- Name your dynamic library according to your XML declaration. If you did not specify the EngineDllName attribute in the XML, use the same name as the XML file. Note that you can group multiple plug-ins in a single DLL.
In short, the code you need to transform your static library into a dynamic library is only:
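A minimal sketch, assuming the hypothetical MyPlugin names used above:

```cpp
// MyPluginDLL.cpp -- the only source file the dynamic library needs.
#include "MyPluginFactory.h"   // Pulls in AK_STATIC_LINK_PLUGIN( MyPlugin ).
DEFINE_PLUGIN_REGISTER_HOOK    // Defines and exports the g_pAKPluginList symbol.
```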
To deploy your plug-in in Unity, put a copy of the dynamic library in the Wwise\Deployment\Plugins\[Platform]\DSP folder. All DSP plug-ins in that folder should be in an optimized configuration, ready for game release.
|Note: On iOS, the build system prevents the use of dynamic libraries. Therefore, in Unity you must deploy both the .a file and the corresponding Factory.h file. The dynamic library used on other platforms is matched to the iOS static library through its name: make sure the DLL name is the same as the lib name (or a substring of it), without the "lib" prefix. For example, MyPlugin.dll corresponds to libMyPlugin.a on iOS.|
|Note: On Mac, Wwise will load only DYLIB files. However, Unity doesn't recognize DYLIB as a valid extension, so it doesn't copy and deploy those files when building the game. To work around this problem, rename the extension to BUNDLE, even if the file itself is not a bundle. For example, rename libMyPlugin.dylib to libMyPlugin.bundle.|
|Note: On Android you need to prefix the dynamic library with "lib", such as libMyPlugin.so.|
You should perform all dynamic memory allocation or deallocation inside audio plug-ins through the provided memory allocator interface. This ensures that all memory consumed and released by plug-ins can be tracked by the Memory Manager, allocated from a specific memory pool, and displayed in the Wwise profiler.
Macros are provided to overload the new/delete operators and call malloc()/free() using the provided memory allocator. These macros are provided in IAkPluginMemAlloc.h.
To allocate objects, use the AK_PLUGIN_NEW() macro and pass the pointer to the memory allocator interface and the desired object type. The macro returns a pointer to the newly-allocated object. The corresponding AK_PLUGIN_DELETE() macro should be used to release memory.
To allocate arrays, use the AK_PLUGIN_ALLOC() macro, which gets the requested size in bytes from the Memory Manager and returns a void pointer to the allocated memory. Use the corresponding AK_PLUGIN_FREE() macro to release the allocated memory. The sample code below demonstrates how to use the macros.
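A sketch of both allocation styles, assuming a hypothetical CMyFilter class, a uDelayFrames count, and the in_pAllocator interface passed to the plug-in:

```cpp
// Object allocation and deallocation through the plug-in allocator.
CMyFilter * pFilter = AK_PLUGIN_NEW( in_pAllocator, CMyFilter() );
if ( pFilter == NULL )
    return AK_InsufficientMemory;
// ... use pFilter ...
AK_PLUGIN_DELETE( in_pAllocator, pFilter );

// Raw (array) allocation and deallocation.
AkReal32 * pDelayLine = (AkReal32 *) AK_PLUGIN_ALLOC(
    in_pAllocator, uDelayFrames * sizeof(AkReal32) );
if ( pDelayLine == NULL )
    return AK_InsufficientMemory;
// ... use pDelayLine ...
AK_PLUGIN_FREE( in_pAllocator, pDelayLine );
```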
The parameter node essentially centralizes read and write access to parameters. The plug-in parameter interface consists of the following methods:
This method should create a duplicate of the parameter instance and adjust necessary internal state variables to be ready to be used with a new plug-in instance. This situation occurs, for example, when an event creates a new playback instance. The function must return a new parameter node instance using the AK_PLUGIN_NEW() macro. In many cases, a copy constructor call is sufficient (as in the code example below). In cases where memory is allocated within the parameter node, a deep copy should be implemented.
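For a parameter node whose members are plain values, the copy constructor is enough. A sketch, with an illustrative class name:

```cpp
AK::IAkPluginParam * CAkMyPluginParams::Clone( AK::IAkPluginMemAlloc * in_pAllocator )
{
    // Shallow copy is sufficient here; implement a deep copy instead if the
    // parameter node owns dynamically allocated memory.
    return AK_PLUGIN_NEW( in_pAllocator, CAkMyPluginParams( *this ) );
}
```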
This function initializes the parameters with the provided parameter block. When the provided parameter block size is zero (i.e. when the plug-in is used within the authoring tool), AK::IAkPluginParam::Init() should initialize the parameter structure using default values.
|Tip: Call AK::IAkPluginParam::SetParamsBlock() to initialize the parameter block when it is valid.|
This method sets all plug-in parameters at once using a parameter block that was stored through AK::Wwise::IAudioPlugin::GetBankParameters() during bank creation in Wwise. The parameters are read in the same format they were written into the bank by the Wwise counterpart of the plug-in. Note that the data is in packed format, so variables may not be aligned as required by the data type on some target platforms. Use the READBANKDATA helper macro provided in AkBankReadHelpers.h to avoid these platform-specific concerns. There is no need to worry about endianness for the plug-in parameters, as the data is properly byte-swapped by the application.
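A sketch of SetParamsBlock() using the READBANKDATA helper; the parameter members are illustrative, and the reads must occur in the same order the Wwise-side plug-in wrote them:

```cpp
AKRESULT CAkMyPluginParams::SetParamsBlock( const void * in_pParamsBlock,
                                            AkUInt32 in_ulBlockSize )
{
    AKRESULT eResult = AK_Success;
    AkUInt8 * pParamsBlock = (AkUInt8 *)in_pParamsBlock;

    // READBANKDATA copes with the packed (possibly unaligned) bank layout.
    m_Params.fGain     = READBANKDATA( AkReal32, pParamsBlock, in_ulBlockSize );
    m_Params.fFeedback = READBANKDATA( AkReal32, pParamsBlock, in_ulBlockSize );
    CHECKBANKDATASIZE( in_ulBlockSize, eResult ); // Whole block consumed?
    return eResult;
}
```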
This method updates a single parameter at a time. It is called whenever a parameter value changes, whether from the plug-in UI, RTPCs, and so on. The parameter to update is specified by an argument of type AkPluginParamID and corresponds to the AudioEnginePropertyID defined in the Wwise XML plug-in description file. (Refer to Wwise Plug-in XML Description Files for more information.)
|Tip: We recommend binding each AudioEnginePropertyID defined in the XML file to a constant of type AkPluginParamID.|
|Note: Parameters that support RTPCs are assigned the type AkReal32 regardless of the property type specified in the XML file.|
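A sketch of SetParam(), with illustrative IDs that mirror the AudioEnginePropertyIDs of a hypothetical XML file:

```cpp
// Constants bound to the AudioEnginePropertyIDs declared in the XML file.
static const AkPluginParamID AK_MYPLUGIN_GAIN_ID     = 0;
static const AkPluginParamID AK_MYPLUGIN_FEEDBACK_ID = 1;

AKRESULT CAkMyPluginParams::SetParam( AkPluginParamID in_ParamID,
                                      const void * in_pValue,
                                      AkUInt32 in_ulParamSize )
{
    switch ( in_ParamID )
    {
    case AK_MYPLUGIN_GAIN_ID:
        // RTPC-enabled parameters always arrive as AkReal32.
        m_Params.fGain = *(AkReal32*)in_pValue;
        break;
    case AK_MYPLUGIN_FEEDBACK_ID:
        m_Params.fFeedback = *(AkReal32*)in_pValue;
        break;
    default:
        return AK_InvalidParameter;
    }
    return AK_Success;
}
```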
This method is called by the sound engine when a parameter node is terminated. Any memory resources used must be released and the parameter node instance is responsible for self-destruction.
Each plug-in has an associated parameter node from which it can retrieve parameter values to update its DSP accordingly. The associated parameter node interface is passed in at plug-in initialization and remains valid for the lifespan of the plug-in. The plug-in can then query information from the parameter node as often as the DSP process requires. Because of the unidirectional relationship between a plug-in and its associated parameter node, how the parameter node responds to parameter value queries is left to the discretion of the implementer (e.g., using accessor methods).
To develop an audio plug-in, you must implement certain functions so that the plug-in can operate properly within the engine's audio data flow. For source plug-ins, derive from the AK::IAkSourcePlugin interface. For effects that can replace the input buffer with the output (i.e., no need for unordered access or a change in data rate), derive from AK::IAkInPlaceEffectPlugin. For other effects that require an out-of-place implementation, derive from the AK::IAkOutOfPlaceEffectPlugin interface.
A plug-in life cycle always begins with a call to the AK::IAkPlugin::Init() function, immediately followed by a call to AK::IAkPlugin::Reset(). As long as the plug-in needs to output more data, AK::IAkPlugin::Execute() is called with new buffers. When the plug-in is no longer needed, AK::IAkPlugin::Term() is called.
This method is called when the plug-in is terminated. AK::IAkPlugin::Term() must release all memory resources used by the plug-in and self-destruct the plug-in instance.
The reset method must reinitialize the state of the plug-in to prepare it to accommodate new unrelated audio content. The sound engine pipeline will call AK::IAkPlugin::Reset() immediately after initialization and at any other time the state of the object needs to be reset. Typically all memory allocations should be performed at initialization but the status of delay lines and sample counts for example should be cleared on AK::IAkPlugin::Reset().
This plug-in information query mechanism is used when the sound engine requires information about the plug-in. Fill in the correct information in the AkPluginInfo structure to describe the type of plug-in implemented (e.g., source or effect), its buffer usage scheme (e.g., in place), and its processing mode (e.g., synchronous).
|Note: Effect plug-ins should be synchronous on all platforms.|
// Effect info query from sound engine (body of GetPluginInfo() for a
// hypothetical in-place effect).
out_rPluginInfo.eType = AkPluginTypeEffect;     // Effect plug-in.
out_rPluginInfo.bIsInPlace = true;              // In-place plug-in.
out_rPluginInfo.bIsAsynchronous = false;        // Synchronous plug-in.
Audio data buffers are passed to plug-ins at execution time through a pointer to an AkAudioBuffer structure. All audio buffers passed to plug-ins use a fixed format. For platforms supporting software effects, audio buffer channels are not interleaved, and all samples are normalized 32-bit floating-point values in the (-1.f, 1.f) range at a 48 kHz sampling rate.
The AkAudioBuffer structure provides means for accessing both interleaved and deinterleaved data. It contains a field to specify the number of sample frames valid in each channel buffer (AkAudioBuffer::uValidFrames) along with the maximum number of sample frames that these buffers can contain (returned by AkAudioBuffer::MaxFrames()).
The AkAudioBuffer structure also contains the buffer's channel mask, which defines the channels that are present in the data. If you just need the number of channels, use AkAudioBuffer::NumChannels().
A plug-in can access a buffer of interleaved data through AkAudioBuffer::GetInterleavedData(). Only source plug-ins should access and output interleaved data. To do so, they must properly prepare the sound engine during initialization (see AK::IAkSourcePlugin::Init()). The sound engine will instantiate the DSP processes required to convert the data into the native pipeline format.
|Tip: You can get better performance if the source plug-in outputs data that is already in the sound engine's native format.|
A plug-in accesses individual deinterleaved channels through AkAudioBuffer::GetChannel(). The data type always conforms to the sound engine's native format (AkSampleType). The example code below shows how all deinterleaved channels can be retrieved for processing.
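A sketch of a per-channel processing loop inside an in-place effect's Execute() method; the class name and the gain member are illustrative:

```cpp
void CAkMyPluginFX::Execute( AkAudioBuffer * io_pBuffer )
{
    const AkUInt32 uNumChannels = io_pBuffer->NumChannels();
    for ( AkUInt32 uChan = 0; uChan < uNumChannels; ++uChan )
    {
        // Channels are deinterleaved; fetch each one individually.
        AkSampleType * pChannel = io_pBuffer->GetChannel( uChan );
        for ( AkUInt16 uFrame = 0; uFrame < io_pBuffer->uValidFrames; ++uFrame )
            pChannel[uFrame] *= m_fGain; // Hypothetical per-sample processing.
    }
}
```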
|Caution: A plug-in must never take for granted that buffers of each channel are contiguous in memory.|
The channels for processing audio are ordered as follows: Front Left, Front Right, Center, Rear Left, Rear Right, and LFE. The low-frequency effects (LFE) channel is always placed at the end (except for source plug-ins) so that it can be handled separately, as many DSP processes require. A plug-in can query whether the LFE channel is present in the audio buffer with AkAudioBuffer::HasLFE(), and can access the LFE channel directly by calling AkAudioBuffer::GetLFE(). The following code demonstrates two different ways for handling the LFE channel separately.
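Sketches of both approaches, with illustrative processing bodies:

```cpp
// Approach 1: iterate over full-band channels only, relying on the LFE
// always being the last channel when present.
const AkUInt32 uNumFullBand = io_pBuffer->HasLFE() ? io_pBuffer->NumChannels() - 1
                                                   : io_pBuffer->NumChannels();
for ( AkUInt32 uChan = 0; uChan < uNumFullBand; ++uChan )
{
    AkSampleType * pChannel = io_pBuffer->GetChannel( uChan );
    // ... full-band processing ...
}

// Approach 2: fetch the LFE channel explicitly through its accessor.
AkSampleType * pLFE = io_pBuffer->GetLFE(); // NULL when there is no LFE channel.
if ( pLFE != NULL )
{
    // ... LFE-specific processing ...
}
```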
If you want to apply specific processing to a channel that is not the LFE, then you need to use the channel index defines of AkCommonDefs.h. For example, if you want to process only the center channel of a 5.x configuration, you would do the following:
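A sketch of this, assuming the buffer is known to carry a 5.x configuration (the define follows the AK_IDX_SETUP_* pattern of AkCommonDefs.h; verify the exact name in your SDK version):

```cpp
// Assuming a 5.x configuration, index the center channel directly.
AkSampleType * pCenter = io_pBuffer->GetChannel( AK_IDX_SETUP_5_CENTER );
// ... center-channel-specific processing ...
```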
|Tip: Notice that the channel index defines are independent of whether the configuration is N.0 or N.1. This is because the LFE channel is always the last one, except for Source plug-ins (AK_IDX_SETUP_N_LFE should not be used if the channel configuration has no LFE).|
|Note: In the case of 7.1, the channel ordering for interleaved data of source plug-ins is L-R-C-LFE-BL-BR-SL-SR.|
Plug-ins may use AK::IAkGlobalPluginContext::RegisterGlobalCallback() to register for various global sound engine callbacks. For example, a plug-in class may need to be called once per audio frame via a singleton that its plug-in instances are aware of. You may also use global hooks to initialize data structures that persist for the whole lifetime of the sound engine.
To do so, replace the default AK_IMPLEMENT_PLUGIN_FACTORY macro with your own implementation that supplies a registration callback to the AK::PluginRegistration object. In the code snippet below, a static callback called MyRegisterCallback is defined for this purpose and passed to the AK::PluginRegistration object MyPluginRegistration.
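A sketch of this pattern; the IDs, creation function names, and exact constructor overload are illustrative and should be checked against IAkPlugin.h for your SDK version:

```cpp
// Called by the sound engine at global hook points.
static void MyRegisterCallback(
    AK::IAkGlobalPluginContext * in_pContext,
    AkGlobalCallbackLocation in_eLocation,
    void * in_pCookie )
{
    if ( in_eLocation == AkGlobalCallbackLocation_Register )
    {
        // Initialize singletons or other long-lived data structures here.
    }
    else if ( in_eLocation == AkGlobalCallbackLocation_Term )
    {
        // Release them here.
    }
}

// Pass the callback when constructing the registration object.
AK::PluginRegistration MyPluginRegistration(
    AkPluginTypeEffect, 64, 123,
    CreateMyPlugin, CreateMyPluginParams,
    MyRegisterCallback, NULL );
```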
For more information, refer to the following sections: