Custom scheduling of audio rendering

By default, the Wwise Sound Engine does all its command processing and audio rendering in a dedicated thread named AK::EventManager, controlled by the AkPlatformInitSettings::threadLEngine parameters. Calling AK::SoundEngine::RenderAudio signals the end of a game frame and allows the thread to consume all API commands since the previous call to RenderAudio.
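In the default configuration, the game simply calls RenderAudio once per game frame from its own loop, and the mixing itself happens on the dedicated thread. The following is a minimal sketch, assuming the Memory Manager, Stream Manager, and other required modules have already been initialized; the game-side functions are hypothetical placeholders.

#include <AK/SoundEngine/Common/AkSoundEngine.h>

void GameMainLoop()
{
    while ( GameIsRunning() )              // hypothetical game-side condition
    {
        UpdateGameAndPostAudioCommands();  // hypothetical: PostEvent, SetPosition, SetRTPCValue, ...
        AK::SoundEngine::RenderAudio();    // marks the frame boundary; the dedicated thread consumes all commands queued since the previous call
    }
}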

Setting AkInitSettings::bUseLEngineThread to false disables this thread and causes RenderAudio to perform command processing synchronously, along with audio rendering when needed. The actual rate of audio output remains controlled by the audio endpoint. If the interval between RenderAudio calls is shorter than the buffer period determined by AkInitSettings::uNumSamplesPerFrame and the output sample rate, some calls to RenderAudio skip the audio rendering portion. Conversely, if the interval is longer than the output buffer period, RenderAudio may process more than one buffer at a time, causing a CPU usage spike and, if this persists, audible stuttering.
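The sketch below, for illustration only, disables the dedicated thread at initialization and drives the sound engine from the game's own scheduler. It assumes the other required modules (Memory Manager, Stream Manager, streaming device) are initialized separately; the game-frame contents are placeholders.

#include <AK/SoundEngine/Common/AkSoundEngine.h>

bool InitSoundEngineWithoutLEngineThread()
{
    AkInitSettings initSettings;
    AkPlatformInitSettings platformInitSettings;
    AK::SoundEngine::GetDefaultInitSettings( initSettings );
    AK::SoundEngine::GetDefaultPlatformInitSettings( platformInitSettings );

    // Disable the dedicated audio thread: from now on, RenderAudio performs
    // command processing synchronously, and audio rendering when a buffer is due.
    initSettings.bUseLEngineThread = false;

    return AK::SoundEngine::Init( &initSettings, &platformInitSettings ) == AK_Success;
}

void GameFrame()
{
    // ... game logic, PostEvent calls, positioning updates ...

    // Processes every queued command; renders audio only if enough time has
    // elapsed since the last rendered buffer (uNumSamplesPerFrame / sample rate).
    AK::SoundEngine::RenderAudio();
}

As noted above, calling RenderAudio at an interval close to the output buffer period keeps the rendering cost spread evenly across game frames.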

When disabling the audio rendering thread, synchronous AK::SoundEngine::LoadBank and AK::SoundEngine::UnloadBank API calls must not be done from the same thread as the caller of RenderAudio: those calls may block until an audio buffer is rendered to complete Stop operations and free SoundBank media, which won't happen without a concurrent call to RenderAudio.
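One way to respect this constraint is to use the asynchronous, callback-based overloads of LoadBank and UnloadBank, which do not block the calling thread; the bank operation then completes across subsequent RenderAudio calls. The sketch below assumes a recent SDK version of the AkBankCallbackFunc signature, and the bank name and callback are hypothetical.

#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Hypothetical callback: invoked once the bank load completes during a later
// RenderAudio pass, instead of blocking the thread that requested the load.
static void OnBankLoaded( AkUInt32 in_bankID, const void* in_pInMemoryBankPtr,
                          AKRESULT in_eLoadResult, void* in_pCookie )
{
    // e.g. mark the bank as ready so gameplay code can start posting its Events
}

void RequestBankLoad()
{
    AkBankID bankID;
    // Non-blocking overload: safe to call from the thread that calls RenderAudio.
    AK::SoundEngine::LoadBank( "Weapons.bnk", &OnBankLoaded, nullptr, bankID );
}

Alternatively, the synchronous overloads can still be used, provided they are issued from a thread other than the one calling RenderAudio.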

