Community Q&A

Welcome to Audiokinetic's community-driven Q&A forum, a place where Wwise and Strata users help each other out. If you would like direct support from our team, please use the Support Tickets page. To report a bug, use the Bug Report option in the Audiokinetic Launcher. (Please note that bug reports are not accepted on the Q&A forum. Using the dedicated bug-reporting system ensures your report reaches the right team and improves the chances that the bug gets fixed.)

To get the best answer as quickly as possible, please keep the following tips in mind when posting a question.

  • Be specific: state exactly what you are trying to achieve or what problem you are running into.
  • Include key details: list your Wwise and game engine versions, your operating system, and other relevant information.
  • Explain what you have tried: describe the troubleshooting steps you have already taken.
  • Focus on the facts: describe the technical facts of the problem. Staying focused on the issue helps other users find a solution quickly.

0 votes

I am looking for some insight into the best way to create a distance crossfade for 3rd-person guns. I'm in the process of remixing my game using the new HDR system, and in doing so I wanted to try to clean up my messy hack for distance crossfades and improve on the system.

My current setup for 3rd-person guns goes as follows: the parent is a looping Random Container with ten Blend Containers nested inside, each Blend Container playing three sounds: near, mid, and far. I then give each of the three distance sounds a different attenuation to mimic a distance crossfade.

This system is not working as well as it did pre-HDR, now that I am switching to sharper, more realistic attenuation curves. Ideally I would use a blend track, but I was told that creating an RTPC that reports how far away each gun is from the player would eat up a lot of math doing the traces to find how far each object is from the listener. It seems like Wwise is already receiving the distance information to control attenuation, though, so is there a way to tap into that?
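For reference, here is a minimal sketch of what the game-side RTPC feed could look like. The Gun_Distance game parameter and the UpdateGunDistanceRtpc helper are hypothetical names used only for illustration, and it assumes the gun's game object is already registered with the sound engine; the distance itself is just the length of the gun-to-listener vector, with no trace involved:

#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <cmath>

// Hypothetical helper, called once per frame (or on a throttled tick) for each
// audible gun. "Gun_Distance" is a placeholder Game Parameter that would be
// created in the Wwise project and mapped to the blend track's crossfade.
void UpdateGunDistanceRtpc(AkGameObjectID gunObjectId,
                           const AkVector& gunPos,
                           const AkVector& listenerPos)
{
    // Plain straight-line distance from the gun to the listener; no traces needed.
    const float dx = gunPos.X - listenerPos.X;
    const float dy = gunPos.Y - listenerPos.Y;
    const float dz = gunPos.Z - listenerPos.Z;
    const float distance = std::sqrt(dx * dx + dy * dy + dz * dz);

    // Drive the placeholder Game Parameter on the gun's registered game object.
    AK::SoundEngine::SetRTPCValue("Gun_Distance", distance, gunObjectId);
}

On the authoring side, the blend track's crossfade curves would then be laid out over the same zero-to-max-attenuation range that the existing attenuation ShareSets use.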

Is there a way to easily find the max attenuation distance of a sound in order to create and set an RTPC for use in a blend track? Is there a way to make a blend track using the max distance from the blend container's attenuation? It seems like there should be an elegant solution to this, but I cannot find one.

Engine: Unreal 3; Platforms: PC and PS4

TL;DR: Has anyone had success creating an RTPC to control distance crossfades from blend tracks? Is there a way to tap into the distance information Wwise is receiving and use it to control a blend track for distance crossfading?

Any help would be much appreciated.

Morgan G. (220 points) General Discussion

1 Answer

0 votes
Uhm, did you try using separate positioning ShareSets for each distance sound (near, mid, far)?

You can create a smooth volume change for each layer so that you hear only the desired sound (or a mix of two sounds) at the desired distance.
Robert (390 points)
Thanks for the reply. That is how my current setup is done (described in the second paragraph).

It can work, but it is messy and time-consuming when you have to come up with many different ShareSets to cover the 50 different weapons in the game, then fine-tune each of them to work with the individual weapons and translate properly through the HDR window alongside the other elements in the weapon sounds: Metal, Tails, Brass.

Maybe there isn't a cleaner solution.
Oops, sorry, didn't notice that.

Well, yeah, it can be messy if you have a lot of object structures in Wwise. Its current interface system is a little bit clumsy.

You can, at least during development, reduce the number of ShareSets, for example down to six: close, middle, and far for regular weapons, plus another copy of those three for very loud models like sniper rifles.
...