To avoid any clashes with the music playing at the time, I placed the main musical content on its own mix bus, put the stingers on a second, isolated mix bus, and applied a channel ducking function. Any time a musical sting was triggered, the mixer would immediately drop the volume of the main music, allow the sting to play, and then slowly, over 2 or 3 seconds, raise the main music back ...
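A minimal sketch of that ducking behavior, assuming a helper that sets a bus volume directly; the bus name, duck depth, and release time are illustrative, not values from the project:

    import time

    DUCK_DB = -12.0   # assumed duck depth for the main music
    RELEASE_S = 2.5   # raise the music back over 2-3 seconds

    def on_sting_triggered(set_bus_volume_db, sting_length_s):
        # Drop the main music bus immediately.
        set_bus_volume_db("Main_Music", DUCK_DB)
        time.sleep(sting_length_s)  # allow the sting to play
        # Slowly ramp the main music back up to full volume.
        steps = 25
        for i in range(1, steps + 1):
            set_bus_volume_db("Main_Music", DUCK_DB * (1 - i / steps))
            time.sleep(RELEASE_S / steps)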
This is an obvious argument for a clear and well-mixed game where audio cues can always be heard along with their visuals.

Audio produces a stronger physiological response than visuals alone

This is mostly common knowledge at this point. We all know that music and sound along with a visual are significantly more arousing or stressful than a visual by itself. This is significant to performance, however ...
atmoky Ears is the one-stop solution for rendering hyper-realistic spatial audio experiences to headphones. It provides an unparalleled combination of perceptual quality and efficiency, whilst getting the best out of every spatial audio mix. atmoky Ears puts the listener first and offers a patented perceptual optimization. For those who want to squeeze out the very last drop of performance from their ...
It consists of a few small stateless helpers that accept a WaapiClient as an argument, so they can be mixed freely with vanilla waapi-client code. All functions follow the same convention for getting properties: if a property doesn't exist, the returned value is None, plain and simple. I won't go into details here, as the examples ahead will do a better job of demonstrating what it looks like. Examples ...
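As a sketch of what such a stateless helper might look like (assuming the Python waapi-client package; the helper name, object path, and property are illustrative, not the article's actual code):

    from waapi import WaapiClient

    def get_property(client: WaapiClient, object_path: str, property_name: str):
        """Stateless helper: return a property's value, or None if it doesn't exist."""
        result = client.call(
            "ak.wwise.core.object.get",
            {"from": {"path": [object_path]}},
            options={"return": [property_name]},
        )
        objects = (result or {}).get("return", [])
        return objects[0].get(property_name) if objects else None

    # Mixes freely with vanilla waapi-client code:
    with WaapiClient() as client:
        print(get_property(client, "\\Actor-Mixer Hierarchy\\Default Work Unit", "Volume"))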
Hoffman (Sound Designer, Insomniac Games)

What is haptic feedback? How are haptics created? Is it possible to author, manage, and mix haptics within Wwise? Haptic feedback is an important feature in video games, and new technology is changing the way players feel and connect with gaming experiences. Sound designers Rodrigo Robinet and Tyler Hoffman will provide a high-level overview of haptic feedback ...
In our previous blog, Simulating dynamic and geometry-informed early reflections with Wwise Reflect in Unreal, we saw how to mix sound with the new Wwise Reflect plug-in using the Unreal integration and the Wwise Audio Lab sample game. In this blog, we will dive deeper into the implementation of the plug-in, how to use it with the spatial audio wrapper, and how it interacts with the 3D-bus architecture.
Like the first method, it relies on pre-composed material, but allows for control of the mix. For example, a user could choose between one of several bass lines, or elect to have a horn section provide an accompaniment. Again, a navigable tree structure in the background could control groups of tracks and lead to logical musical choices. To gain more interactivity, the third level calls for asynchronous ...
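A toy sketch of that navigable tree idea, where each node holds mutually exclusive variants and children contribute additional layers (all class and track names are hypothetical):

    class TrackGroup:
        """A node in the tree: one selected variant plus child groups."""
        def __init__(self, name, variants, children=()):
            self.name = name
            self.variants = variants      # mutually exclusive stems
            self.selected = variants[0]   # default choice
            self.children = list(children)

        def choose(self, variant):
            if variant in self.variants:
                self.selected = variant

        def active_stems(self):
            stems = [self.selected]
            for child in self.children:
                stems.extend(child.active_stems())
            return stems

    # A user picks a bass line and elects to add a horn accompaniment:
    song = TrackGroup("rhythm", ["bass_line_A", "bass_line_B"],
                      children=[TrackGroup("accompaniment", ["silence", "horn_section"])])
    song.choose("bass_line_B")
    song.children[0].choose("horn_section")
    print(song.active_stems())  # ['bass_line_B', 'horn_section']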
For each player, voice chat mainly involves two audio stream linkages: the upstream linkage, where the local mic captures the player's own voice and distributes it to remote teammates through the server, and the downstream linkage, where the voices of all teammates are received from the server, mixed, and played back on the local device.

Upstream linkage: The player's local chat voice stream will be sent ...
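A minimal sketch of the downstream mixing step, assuming equal-length frames of 16-bit PCM samples, one per teammate (an illustration of the idea, not GME's actual code):

    def mix_downstream(teammate_frames):
        """Sum one PCM frame per teammate and clamp to the 16-bit range."""
        mixed = []
        for samples in zip(*teammate_frames):
            total = sum(samples)
            mixed.append(max(-32768, min(32767, total)))
        return mixed

    # Two teammates speaking at once:
    print(mix_downstream([[1000, -2000, 30000], [500, -500, 10000]]))
    # -> [1500, -2500, 32767]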
Setup
  • Wwise Project Settings
  • Configuring Unreal Integration
  • Unreal Settings (Optional)
    • Sound Attenuation Settings
    • Sound Submix Settings
AudioLink - Playing the sound
  • When specifying Sound Attenuation (Blueprint node)
  • When specifying Sound Attenuation (Audio Component)
  • When specifying Wwise AudioLink Settings
  • Sound Submix
Conclusion

What is AudioLink? AudioLink is an Unreal Engine feature ...
States then it can choose to move through them by using a custom sequencer unique to that system, or by using a generic sequencer that switches between them randomly for however long the system decides to run (a minimal sketch of the latter follows this excerpt). Beyond this top-level mechanic, what the systems actually look like in Wwise can be broken down as follows.

ACTOR-MIXER SYSTEMS

The relatively simple systems are the ones ...
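The generic sequencer mentioned above, as a bare-bones sketch; states are plain labels and set_state is whatever applies them to the game (all names and timings are hypothetical):

    import random
    import time

    def run_generic_sequencer(states, set_state, run_time_s=30.0, step_s=5.0):
        """Switch between the given states at random until run_time_s elapses."""
        deadline = time.monotonic() + run_time_s
        while time.monotonic() < deadline:
            set_state(random.choice(states))
            time.sleep(step_s)

    run_generic_sequencer(["Calm", "Tense", "Combat"], print, run_time_s=15.0)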
Sleeping Dogs is a complex game that needed many different mix states depending on what was going on. We found that the recent improvements to mix states accommodated us very well, and we were able to apply many different mix states to our game. The system performed solidly, and it was intuitive and reliable. We also used the Wwise meter as a sidechain, which was very helpful when it came down to the ...
MIDI, as a game scoring delivery format, still makes many composers and developers cringe. But in our rush to abandon MIDI game scores in favor of fully rendered PCM/wave mixes, we've thrown the baby out with the bathwater. Twenty years later, a convergence of circumstances allowed the audio team at PopCap to revisit the concept of real-time MIDI in games. Audiokinetic announced the addition of MIDI functionality ...
Busses can be copied/pasted in the Master Mixer Hierarchy. Bus Presets can also be saved and loaded. Effects can now be copied from one object to another. Users can now double-click an Effect within the Advanced Profiler view's Voices Graph tab to directly open the Effect Editor. The "New Child" contextual menu now exposes all source plug-ins, which speeds up the creation of new hierarchies. Sound ...
It also explains the added functionality of Positioning:
  • Smooth 2D/3D Transitions: Speaker Panning / 3D Spatialization Mix Slider
  • 3D User-defined around the emitter: Emitter with Automation

R.I.P. 2D/3D

While improving the terminology used in the Positioning tab, we came to the conclusion that 2D was not the best term to define the behavior of these sounds. The 2D/3D sound concept was introduced ...
This gave us fast and easy control over which elements should be most present at any given moment and allowed us to immediately get new sounds fitting nicely into the mix as they came online. We also used Wwise meters to set up logic for ducking certain groups so that important sounds and key dialogue could cut through. The player moves through a number of different environments throughout the experience ...
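A rough sketch of that meter-driven ducking logic, assuming we can read a meter level in dB and write a gain onto each ducked group (the group names, threshold, and ratio are made up for illustration):

    DUCK_THRESHOLD_DB = -20.0  # assumed level where dialogue starts ducking others
    MAX_DUCK_DB = -9.0         # assumed maximum gain reduction

    def update_ducking(get_meter_db, set_group_gain_db, ducked_groups):
        """Duck the given groups in proportion to the dialogue meter level."""
        level = get_meter_db("Key_Dialogue")
        over = max(0.0, level - DUCK_THRESHOLD_DB)
        gain = max(MAX_DUCK_DB, -0.5 * over)  # 0.5 dB of ducking per dB over threshold
        for group in ducked_groups:
            set_group_gain_db(group, gain)

    update_ducking(lambda name: -10.0, print, ["Ambience", "Music"])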
Things Dynamic

As a general rule, I try to ensure that every single possible camera position in a level has a bespoke ambience mix. Thinking like this helps me ensure I'm adding enough detail to my ambiences, and it helps me achieve a certain stylistic audio direction which I personally quite like: ambiences that sound like they would in a film. In films and TV, typically, whenever there is a camera ...
The mixing of an open-source lib with AK-licensed code like this -- especially in light of the comment in our "Open Source Components in Wwise" documentation -- is definitely an oversight in our code structure. For the near term, I can note two or three options for you.
1) Proceed with inclusion of the rpmalloc compiled code anyway, since its license is so remarkably permissive.
2) Since you have ...
Now, let's look at what makes K-verb DSP so special.

K-verb DSP Diagram

On the left, you see the input, which is the dry component, and AUX, the mix for the wet component. The AUX is 5 channels for the listener, and it is remixed into 8 channels, one for each of the eight angles on the absolute horizontal plane around the listener. The loop in the middle is the reverb itself, and the delay duration, ...
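The loop at the heart of such a reverb is a feedback delay; a bare-bones single-channel sketch (the delay length and feedback gain are placeholders, not K-verb's actual parameters):

    def feedback_delay(wet_in, delay_samples=4800, feedback=0.6):
        """Feed each output sample back through a delay line: the core reverb loop."""
        out = []
        buf = [0.0] * delay_samples  # circular delay line
        pos = 0
        for x in wet_in:
            y = x + feedback * buf[pos]  # input plus delayed feedback
            buf[pos] = y
            pos = (pos + 1) % delay_samples
            out.append(y)
        return out

    # An impulse decays into a train of echoes spaced delay_samples apart:
    tail = feedback_delay([1.0] + [0.0] * 20000)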
This educational video contains supportive content for lesson 5 of the Wwise-101 Certification course. To follow along and access complete course content, please visit: https://www.audiokinetic.com/courses/wwise101/

Topics:
00:49 Understanding Property Offsets
02:08 Understanding the Actor-Mixer / Master-Mixer Relationship
04:51 Using Schematic View
05:47 Using the Voice Profiler
The first is a direct path, while the second path contains a reflection off the wall. The top-right graph shows the resulting spectrum of the mixed waves, while the bottom-right graph shows the time of arrival (ToA) of each wave:

Simulation of the Phasing Fountain

As the listener moves closer to the wall, the difference between the ToA of each wave becomes smaller and the spectrum begins to have a ...
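The comb filtering described here is easy to reproduce numerically: summing a wave with a copy delayed by the ToA difference Δt gives the magnitude response |1 + e^(-j·2πf·Δt)|, with notches at f = (2k+1)/(2Δt). A small sketch, with Δt chosen arbitrarily:

    import cmath
    import math

    def comb_magnitude(freq_hz, delta_t_s):
        """Magnitude of a direct wave summed with a copy delayed by delta_t_s."""
        return abs(1 + cmath.exp(-2j * math.pi * freq_hz * delta_t_s))

    dt = 0.001  # 1 ms ToA difference (example value)
    for f in (250, 500, 1000, 1500, 2000):
        print(f, round(comb_magnitude(f, dt), 3))
    # Notches (magnitude ~ 0) fall at 500 Hz, 1500 Hz, ... = (2k+1) / (2*dt)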