Community Q&A

Welcome to the Audiokinetic Community Q&A forum, where Wwise and Strata users help each other. If you need direct assistance from our team, please go to the Support Tickets page. To report a bug, use the "Report a Bug" option in the Audiokinetic Launcher (note that the Q&A forum does not accept bug reports). We have a dedicated internal bug-tracking system, and every report is reviewed and investigated.

To get a helpful answer quickly, please keep the following in mind when asking a question:

  • Be specific: for example, describe what you are trying to achieve or exactly where the problem occurs.
  • Include key details: for example, your Wwise and game engine versions and the operating system you are using.
  • Explain what you have tried: describe the troubleshooting steps you have already taken.
  • Stay focused on the problem: stick to the technical details relevant to the issue so others can find a solution quickly.

+1 vote
Hello,

I'm interested in using a high-accuracy sequencer in Unity to trigger sounds in the Actor-Mixer Hierarchy. Currently, if I do the naive thing and just call "AkSoundEngine.PostEvent", the timing is off. I'm wondering what the correct approach is here (aside from just using the interactive music system within Wwise itself). Anyone have experience doing this? The accurate sequencers I've found for Unity use AudioSettings.dspTime and AudioSource.PlayScheduled. I'm assuming those can't be used if the Unity audio system is turned off, and even if they were available, I'm not sure they would help. Any suggestions welcome!

Thanks,

Kent
Category: General Discussion | asked by Kjolly (110 points)
I want to do exactly the same thing, i.e. schedule events to avoid latency issues when posting events and improve accuracy for audio events that are very time sensitive.
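For context, here is a minimal sketch of the kind of main-thread scheduler the question describes, assuming the standard Wwise Unity integration (AkSoundEngine.PostEvent) and a recent Unity version (Time.realtimeSinceStartupAsDouble); the Event name is hypothetical. It illustrates the limitation rather than solving it: the posted Event is still only rendered at the next Wwise audio buffer, so jitter is bounded by frame time plus buffer size.

```csharp
using UnityEngine;

// Main-thread beat sequencer sketch. It accumulates beat times on a
// double-precision clock and posts the Wwise Event once per beat from Update().
// This is not sample-accurate: PostEvent is rendered at the next audio buffer,
// which is the timing problem discussed in this thread.
public class WwiseBeatSequencer : MonoBehaviour
{
    public string eventName = "Play_Click"; // hypothetical Event name
    public double bpm = 120.0;

    private double nextBeatTime;

    private void Start()
    {
        // Time.realtimeSinceStartupAsDouble avoids float drift over long sessions
        // (available in recent Unity versions; otherwise accumulate unscaled delta time).
        nextBeatTime = Time.realtimeSinceStartupAsDouble;
    }

    private void Update()
    {
        double now = Time.realtimeSinceStartupAsDouble;
        double beatInterval = 60.0 / bpm;

        // Catch up if more than one beat elapsed during a long frame.
        while (now >= nextBeatTime)
        {
            AkSoundEngine.PostEvent(eventName, gameObject);
            nextBeatTime += beatInterval;
        }
    }
}
```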

1 Answer

0 votes
As far as I know, the interactive music system itself already does something similar.
It depends on what kind of design you want to implement with this type of approach.
answered by Hou Chenzhong (Audiokinetic) (6.0k points)
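One way to lean on the interactive music engine, as the answer suggests, is to post a music Event with a beat-sync callback and trigger other sounds from that callback. The sketch below is not the answerer's code; it assumes the standard Wwise Unity integration callback API (AkCallbackType, AkCallbackManager.EventCallback), a music segment with tempo information, and hypothetical Event names "Play_Music" and "Play_Click".

```csharp
using UnityEngine;

// Sketch: drive gameplay sounds from the interactive music engine's beat grid.
// The music engine keeps the music itself sample-accurate; the C# callback is
// dispatched on the main thread, so sounds posted from it are still frame-quantized.
public class MusicSyncTrigger : MonoBehaviour
{
    private void Start()
    {
        AkSoundEngine.PostEvent(
            "Play_Music",                           // hypothetical music Event
            gameObject,
            (uint)AkCallbackType.AK_MusicSyncBeat,  // request beat-sync callbacks
            OnMusicCallback,
            null);
    }

    private void OnMusicCallback(object cookie, AkCallbackType type, AkCallbackInfo info)
    {
        if (type == AkCallbackType.AK_MusicSyncBeat)
        {
            // Fired once per beat of the playing music segment.
            AkSoundEngine.PostEvent("Play_Click", gameObject); // hypothetical SFX Event
        }
    }
}
```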
...