Community Q&A

Welcome to the Audiokinetic Community Q&A forum, where Wwise and Strata users help each other out. For direct assistance from our team, please go to the Support Tickets page. To report a bug, please use the Report a Bug option in the Audiokinetic Launcher (note that the Q&A forum does not accept bug reports). We have a dedicated internal bug-reporting system, and reports submitted there are reviewed and addressed by our staff.

To get a satisfactory answer as quickly as possible, please keep the following in mind when posting a question:

  • Be specific: for example, what are you trying to achieve, or what exactly is going wrong?
  • Include key details: for example, your Wwise and game engine versions and the operating system you are using.
  • Explain what you have tried: describe the troubleshooting steps you have already taken.
  • Stay focused: stick to the technical details relevant to the problem itself so that others can find a solution quickly.

0 votes

I am looking for some insight into the best way to create a distance crossfade for third-person guns. I'm in the process of remixing my game using the new HDR system, and in doing so I wanted to clean up my messy hack for distance crossfades and improve on the system.

My current setup for third-person guns goes as follows: the parent is a random looping container with ten blend containers nested inside, and each blend container plays three sounds: near, mid, and far. I then give each of the three distance sounds a different attenuation to mimic a distance crossfade.

This system is not working as well as it did pre-HDR, because I am now switching to sharper, more realistic attenuation curves. Ideally I would be using a blend track, but I was told that an RTPC reporting how far each gun is from the player would eat up a lot of math doing the traces to find each object's distance from the listener. It seems like Wwise is already receiving the distance information to control attenuation, though, so is there a way to tap into that?

Is there a way to easily find the max attenuation of a sound, to create and set an RTPC for use in a blend track? Is there a way to make a blend track using the max distance from the blend container's attenuation? It seems like there should be an elegant solution to this, but I cannot find one.

Engine: Unreal 3 | Platforms: PC and PS4

TL;DR: Has anyone had success creating an RTPC to control distance crossfades with blend tracks? Is there a way to tap into the distance information Wwise is receiving and use it to control a blend track for distance crossfading?

Any help would be much appreciated.

Category: General Discussion | asked by Morgan G. (220 points)

1 Answer

0 votes
Uhm, did you try using separate positioning ShareSets for each distance sound (near, mid, far)?

You can create a smooth volume change for each layer and hear only the desired sound (or a mix of two sounds) at the desired distance.
answered by Robert (390 points)
Thanks for the reply. That is how my current setup works (described in the second paragraph).

It can work, but it is messy and time-consuming when you have to come up with many different ShareSets to cover the 50 different weapons in the game, then fine-tune each one to work with the individual weapons and have them translate properly with the HDR window and the other elements in the weapon sounds: metal, tails, brass.

Maybe there isn't a cleaner solution.
Oops, sorry, didn't notice that.

Well, yeah, it can be messy if you have a lot of object structures in Wwise. Its current interface system is a little bit clumsy.

You can, at least during the development stage, decrease the number of ShareSets, for example down to six: close, middle, and far for usual weapons, and another copy of each for very loud models like sniper rifles.
...