AkUInt32 target_bus_id = 0x123456;

void speaker_volume_matrix_callback( AkSpeakerVolumeMatrixCallbackInfo* info )
{
    // Only touch the mix on the bus we care about.
    AkUniqueID bus_id = info->pMixerContext->GetBusID();
    if ( bus_id != target_bus_id )
    {
        // or some other way of making sure you only do this when you need it
        return;
    }
    AKASSERT( info->outputConfig.uNumChannels == 6 );
    AKPLATFORM::AkMemSet( info->pVolumes ...
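For context, a minimal sketch of how a helper like the one above is typically hooked up: the speaker-volume-matrix data is delivered through the generic event callback when the event is posted with the AK_SpeakerVolumeMatrix flag. The event name and game object below are placeholders, not from the original snippet.

    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include <AK/SoundEngine/Common/AkCallback.h>

    // Dispatch only the speaker-volume-matrix notifications to the helper above.
    static void mix_callback( AkCallbackType type, AkCallbackInfo* cb_info )
    {
        if ( type == AK_SpeakerVolumeMatrix )
            speaker_volume_matrix_callback( static_cast<AkSpeakerVolumeMatrixCallbackInfo*>( cb_info ) );
    }

    // Post the event requesting the speaker-volume-matrix callback.
    // "Play_Music" and game_object are placeholders for your own event and emitter.
    void post_with_mix_callback( AkGameObjectID game_object )
    {
        AK::SoundEngine::PostEvent( "Play_Music", game_object, AK_SpeakerVolumeMatrix, mix_callback, nullptr );
    }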
/qa/741/nt-make- er-plug-ins-work-since-build-5158-plug-registred/
Since the last build (5158) I can't make Astound RTI, Auro3D, or Two Big Ears 3Dception work in Unity; I get the error:
"Wwise: Plug-in not registered: 72159350
UnityEngine.Debug:LogError(Object)
AkInitializer:CopyMonitoringInConsole(ErrorCode, ErrorLevel, UInt32, IntPtr, String) (at Assets/Wwise/Deployment/Components/AkInitializer.cs:218)
AkCallbackManager:PostCallbacks() (at Assets/Wwise/Deploymen ...
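Not the original poster's fix, but for reference: in a native C++ integration, a "Plug-in not registered" error usually means the plug-in's factory was never pulled in. Registration is done by linking the plug-in library and including its factory header in one source file; the header below is a stock-plug-in example, and a third-party plug-in ships its own equivalent. In the Unity integration the deployed plug-in binaries and the integration's own registration step play the same role.

    // Including a plug-in's factory header registers it with the sound engine
    // at static-initialization time; the matching .lib/.a must also be linked.
    #include <AK/Plugin/AkRoomVerbFXFactory.h>   // stock Wwise RoomVerb, as an example only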
Can anyone recommend a plug-in that compresses the quiet parts of the mix, not the loud parts? No plug-in that comes with Wwise can do that, neither the compressor nor the expander. I also gave HDR a shot; it's a great tool, but it comes with some unwanted side effects, too. So, what I really need is a compressor that increases the overall volume when the mix is quiet, but leaves the mix untouched when it is loud.
How can I set up a reverb to fold down and sound smaller with distance? I have a giant room, but the wet signal still hits the rear speakers, making it sound like the source is right behind the player. Also, any reason why we can't control an aux send's high- and low-pass values in the attenuation editor on an audio object? It seems we can control the aux send's volume. Thanks!
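One workaround (not an attenuation-editor feature) is to drive the send level from game code and scale it with distance. A rough sketch, where the aux bus name, the distance mapping, and the helper itself are assumptions, not part of the original question:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Scale the wet send for one emitter based on its distance to the listener.
    // "Env_BigRoom" is a hypothetical aux bus; replace with your own.
    void UpdateRoomSend( AkGameObjectID emitter, float distance, float maxDistance )
    {
        float wet = 1.0f - ( distance / maxDistance );   // less reverb the farther away
        if ( wet < 0.0f ) wet = 0.0f;

        AkAuxSendValue send;
        send.listenerID    = AK_INVALID_GAME_OBJECT;     // all listeners (field exists in recent SDKs)
        send.auxBusID      = AK::SoundEngine::GetIDFromString( "Env_BigRoom" );
        send.fControlValue = wet;
        AK::SoundEngine::SetGameObjectAuxSendValues( emitter, &send, 1 );
    }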
Our game is a mix of sandbox and enclosed experiences. As such, we use state mixing to make shared sounds usable across a variety of content, since characters cross over between the multiple types of areas. Being able to change volumes and high/low-pass filters based on states has been an invaluable addition! However, the inability to change falloff, range, and priority is a huge hindrance to offering ...
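For reference, the state-based mixing described here is driven from game code with AK::SoundEngine::SetState; a minimal sketch, where the state group and state names are made up for illustration:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // "AreaType", "Sandbox" and "Enclosed" are hypothetical state group / state names
    // defined in the Wwise project; the RTPC/volume/filter changes bound to the state
    // are applied by the authoring-side state mixing.
    void OnAreaChanged( bool enclosed )
    {
        AK::SoundEngine::SetState( "AreaType", enclosed ? "Enclosed" : "Sandbox" );
    }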
Wwise? We hope that after the game is recorded, we can get the video plus 3 independent audio tracks and still keep the bus mix dynamics information, so that if necessary we can import them into a DAW again later, analyze the mix layer by layer, and fine-tune the overall mix or specific samples to optimize the game. Currently it is only possible to solo different buses separately, but then the ...
Hi, is there a way of exporting elements between projects? Let's say I have a great Sequence Container or interactive music setup I'd like to import into another project. Can I do it, and if so, how? Thanks
You can copy the Work Unit manually into the new project's folder structure. Step by step: - In Wwise Project A: create a new Work Unit in the Actor-Mixer Hierarchy. Name the WU something like "Transfer".
So I have a simple game where zombies spawn at random points in the Unity scene. However, I would like to use Google Resonance within Wwise to create more in-depth spatialization - this means routing the Random Container of zombie sounds through an audio bus with the Resonance plug-in attached to that bus. Now, even though there are multiple zombies in the scene, only one of them is emitting the ...
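Not from the original thread, but the usual cause of "only one zombie is heard" is that every instance posts its event on the same game object. A sketch at the native SDK level, where the object ID and event name are placeholders; in the Unity integration this corresponds to giving each zombie its own AkGameObj and posting the event on that specific GameObject:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Each zombie gets its own registered game object so it is positioned
    // and spatialized independently of the others.
    void SpawnZombieAudio( AkGameObjectID zombie_id, const AkSoundPosition& where )
    {
        AK::SoundEngine::RegisterGameObj( zombie_id, "Zombie" );
        AK::SoundEngine::SetPosition( zombie_id, where );
        AK::SoundEngine::PostEvent( "Play_Zombie_Loop", zombie_id );   // placeholder event name
    }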
Hello, I have a question about metadata: is it possible to include information like BPM, artist name, year of production ... ? Or even the SoundBank to which a sound belongs? The goal is to retrieve this information and use it directly in Unity. Where are these "Additional Metadata plug-ins can be installed via the Wwise Launcher"? Is it only for internal Wwise mix info? Does anyone have any info ...
Hi, I want to play audio through a secondary output (HTC Vive headphones) while at the same time playing audio through the main output (TV speakers) for a single-player VR experience. I am using Unity3D 2019.3 and Wwise 2019.2.1. I am able to play audio out of the two audio devices in Wwise by following this guide by Ed Kashinsky, creating a separate Audio Device ShareSet "System_VR" and a new ...
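For reference, adding a second output from game code looks roughly like the sketch below. The ShareSet name "System_VR" comes from the question, while the listener game object and helper are placeholders, and the exact AkOutputSettings constructor can vary slightly between SDK versions:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Route a dedicated listener to the "System_VR" Audio Device ShareSet
    // so sounds sent to that listener come out of the VR headphones.
    AkOutputDeviceID AddVrOutput( AkGameObjectID vr_listener )
    {
        AkOutputSettings settings( "System_VR" );       // ShareSet defined in the Wwise project
        AkOutputDeviceID output_id = 0;
        AK::SoundEngine::AddOutput( settings, &output_id, &vr_listener, 1 );
        return output_id;
    }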
There could be individual mix changes on assets, or other properties that need to be set when mixing. If we want to reuse this content in a different location in the hierarchy, we would have to duplicate all of it, and then any change we want to make to the original content would have to be duplicated in all of the other places we used it. We might have a character using the same footstep ...
- Enabled
- 3D Spatialization: Position + Orientation
- Speaker Panning / 3D Spatialization Mix: 100
- Attenuation: Enabled
- 3D Position: Emitter
This is then routed to a bus that is explicitly set to stereo. If I set this sound source 1 m away and rotate it along the horizon clockwise, I notice that there is a "deadzone" ...
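A small sketch of the kind of test described here (not from the original post): moving a placeholder emitter around a 1 m circle on the horizontal plane to probe for panning dead zones, with the listener assumed at the origin.

    #include <cmath>
    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // angle is in radians; the emitter orbits the listener at a 1 m radius.
    void PlaceEmitterOnCircle( AkGameObjectID emitter, float angle )
    {
        AkSoundPosition pos;
        pos.SetPosition( std::sin( angle ), 0.0f, std::cos( angle ) );   // horizontal circle
        pos.SetOrientation( 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f );        // facing +Z, up +Y
        AK::SoundEngine::SetPosition( emitter, pos );
    }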
I tested this myself and found a solution: you can create a Music Segment containing a MIDI Clip with a single MIDI note as a Stinger (to play your stinger content from the Actor-Mixer Hierarchy), so that its playback time can be scheduled by the interactive music system. Then move the music content you want to play from the Interactive Music Hierarchy to the Actor-Mixer Hierarchy. There are two advantages to this: ...
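For completeness, a stinger set up this way is still fired from game code by posting a Trigger; a minimal sketch, where the trigger name and game object are placeholders:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // "Stinger_Hit" is a hypothetical Trigger defined in the Wwise project; the music
    // system quantizes it to the playing segment according to the stinger's settings.
    void FireStinger( AkGameObjectID music_object )
    {
        AK::SoundEngine::PostTrigger( "Stinger_Hit", music_object );
    }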
We've been experiencing this issue, and today I found the culprit and fixed our case. I'll explain what I did and, hopefully, it will help anyone who has encountered it. I believe this to be a bug. We recently enabled spatial audio on our project and have been seeing this message excessively: "3D audio object limit exceeded; object xxxxx instigated by voice/bus will be mixed." Upon thorough ...
A music Event Cue feature was also added in version 2019.1, giving you the ability to influence game-mix parameters by posting Events from the music timeline. Hope this helps.
If so, is there any Ak API that could help us convert the raw data to stereo channels according to positions? It seems channels could be mixed on the application side using the following utilities, but we don't have access to AkDevice (at least):
CAkSpeakerPan::ComputePositioning
AkMixer::MixNinNChannels
- Is there any configuration flag we could toggle so the capture callback will provide us with stereo ...
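Those mixers are internal, but as a rough fallback the capture callback's AkAudioBuffer can be downmixed by hand. A sketch assuming a plain average of all source channels into left and right (no pan law, no positional weighting, no LFE handling), with the consumer of the stereo buffer left out:

    #include <vector>
    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include <AK/SoundEngine/Common/AkCommonDefs.h>   // AkAudioBuffer

    // Naive downmix of a captured (deinterleaved) buffer to interleaved stereo.
    static void CaptureToStereo( AkAudioBuffer& buffer, AkOutputDeviceID /*output*/, void* /*cookie*/ )
    {
        const AkUInt32 numChannels = buffer.NumChannels();
        const AkUInt32 numFrames   = buffer.uValidFrames;
        std::vector<float> stereo( numFrames * 2, 0.0f );

        for ( AkUInt32 ch = 0; ch < numChannels; ++ch )
        {
            const AkSampleType* src = buffer.GetChannel( ch );
            for ( AkUInt32 frame = 0; frame < numFrames; ++frame )
            {
                const float sample = src[frame] / static_cast<float>( numChannels );
                stereo[frame * 2]     += sample;   // left
                stereo[frame * 2 + 1] += sample;   // right
            }
        }
        // ... hand 'stereo' off to whatever consumes the capture.
    }

    // Registration against the main output:
    // AK::SoundEngine::RegisterCaptureCallback( CaptureToStereo );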
IMHO, this should return a list of paths, each corresponding to a valid language of the project. Similarly, if a SoundBank mixes localized and non-localized content, one would expect the query to return the paths to all existing BNK files. Hope this helps. Thank you Beinan for reporting the issue. WG-43852 was created.
Did you ever find a solution? Currently I'm experimenting with a mix of stingers, Event Cues, and MIDI triggers, but I haven't found the perfect solution yet.
You are not allowed to use the existing sounds. You can, however, re-use the existing actor-mixer structures, but this is not something I would personally recommend. It is probably easier to start from scratch, using only the events and game parameters from the project. Great stuff, thanks very much for the reply!
/qa/9942/n-not-assign-auxiliary-actor- er-lesson-course-wwise101/
Cannot assign an Auxiliary Bus for the "actor-mixer" (main character) (in lesson 5 of the Wwise 101 course). I am having this same problem. Did you ever find the answer? I cannot for the life of me figure this one out... I had the same problem. I deleted the newly created Aux Bus (env_corridor), then created a new Aux Bus. Before assigning an effect to that Aux Bus, I went and routed the audio from the main character ...