Community Q&A

Welcome to Audiokinetic’s community-driven Q&A forum. This is the place where Wwise and Strata users help each other out. For direct help from our team, please use the Support Tickets page. To report a bug, use the Bug Report option in the Audiokinetic Launcher. (Note that Bug Reports submitted to the Q&A forum will be rejected. Using our dedicated Bug Report system ensures your report is seen by the right people and has the best chance of being fixed.)

To get the best answers quickly, follow these tips when posting a question:

  • Be Specific: What are you trying to achieve, or what specific issue are you running into?
  • Include Key Details: Include details like your Wwise and game engine versions, operating system, etc.
  • Explain What You've Tried: Let others know what troubleshooting steps you've already taken.
  • Focus on the Facts: Describe the technical facts of your issue. Focusing on the problem helps others find a solution quickly.

I am wondering if there is any benefit to using the object-based audio pipeline in Wwise when the endpoint is, for example, a Quest 2 headset. Audiokinetic's documentation on understanding object-based audio mentions that "the endpoint can use the most appropriate rendering method to deliver the final mix over headphones or speakers." Unlike working with a 5.1 or 7.1 endpoint, or on Windows using Windows Sonic for Headphones, which provide their own spatialization, would the same benefits apply if you routed certain audio objects through the object-based audio pipeline?

As far as I know, the Quest 2 headset doesn't have its own spatialization renderer built in, so one must use a plugin in Wwise to achieve that (Auro-3D, Resonance, OSP, etc.). So, especially when working on audio for a VR title, is there even a benefit to using this awesome new pipeline?
in General Discussion by Cesar S. (100 points)


...