
Using Wwise events to drive animations

+1 vote
Hi there,

I'm looking for a bit of advice regarding lip sync options using Wwise. We're looking to integrate a degree of lip sync into the project I'm currently working on (more general mouth movements that match the speech pattern, rather than proper sync), and I was wondering if anyone has experience using dialogue or audio from Wwise to drive animation in that way. For example, measuring the volume output or transients of an audio event to control how and when the mouth opens (a rough sketch of what I mean is included below). I know markers can be used to trigger other events, but since the lip sync isn't currently using bespoke animations per line, I'm not entirely sure that would work here. Similarly, setting up markers beyond the "Insert Filename Marker" option in the Wwise conversion settings is currently a no-go, given the sheer volume of work it would require and time constraints with regard to other audio tasks.

Also, we're using Unity if anyone has any info specific to that.

Cheers!
asked Aug 17, 2017 in General Discussion by Jaime C. (110 points)
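
One common way to implement the volume-metering idea described above is to put a Wwise Meter effect on the dialogue bus, have it write its measured level to a Game Parameter, and poll that value from the game each frame to drive a mouth-open blend amount. The C++ sketch below is only an illustration, not code from this thread: the "Dialogue_Level" Game Parameter name, its -48 to 0 dB range, and the mapping are all assumptions, and the Unity integration exposes the equivalent query through AkSoundEngine.GetRTPCValue.

#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkQueryParameters.h>

// Returns a 0..1 mouth-open amount for the current frame.
// Assumes a Wwise Meter effect on the dialogue bus writes its level to a
// Game Parameter named "Dialogue_Level" (hypothetical) with a -48..0 dB range.
float GetMouthOpenAmount()
{
    AkRtpcValue levelDb = 0.0f;
    AK::SoundEngine::Query::RTPCValue_type scope =
        AK::SoundEngine::Query::RTPCValue_Global; // the meter sits on a bus, so query the global value

    AKRESULT res = AK::SoundEngine::Query::GetRTPCValue(
        "Dialogue_Level",        // Game Parameter fed by the Wwise Meter effect
        AK_INVALID_GAME_OBJECT,  // no specific game object for a global value
        AK_INVALID_PLAYING_ID,   // not tied to one playing instance
        levelDb,
        scope);

    if (res != AK_Success ||
        scope == AK::SoundEngine::Query::RTPCValue_Unavailable)
        return 0.0f;

    // Remap roughly -48..0 dB to 0..1 and clamp; tune the range and add
    // smoothing so the jaw doesn't flutter on every transient.
    float open = (levelDb + 48.0f) / 48.0f;
    if (open < 0.0f) open = 0.0f;
    if (open > 1.0f) open = 1.0f;
    return open;
}

The returned value can then be fed into a jaw bone rotation or blend shape weight on the animation side.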

1 Answer

0 votes
You can use Sound Forge (or another editor that writes cue points) to add markers that are saved in the audio file's metadata; Wwise picks these up when the files are imported. Then use PostEvent with the marker callback flag and react to each marker in the callback.
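
As a rough illustration of that marker-callback approach (a sketch, not code from this answer), the C++ below posts an event with the AK_Marker and AK_EndOfEvent flags and reacts to each marker label. The event name, game object ID, and the mouth-shape helpers are hypothetical, and the Unity integration exposes the same flags and callback info through its AkSoundEngine.PostEvent overloads.

#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkCallback.h>
#include <cstdio>

// Hypothetical speaker object; assumed to be registered with RegisterGameObj elsewhere.
static const AkGameObjectID kSpeakerObjectID = 100;

// Hypothetical game-side hooks. Wwise invokes callbacks outside the game loop,
// so in practice queue this work and apply it on the main thread.
static void QueueMouthShape(const char* label) { std::printf("marker: %s\n", label); }
static void CloseMouth()                       { std::printf("line finished\n"); }

static void OnDialogueCallback(AkCallbackType in_eType, AkCallbackInfo* in_pInfo)
{
    if (in_eType == AK_Marker)
    {
        // Each marker embedded in the wav file arrives here with its label and
        // sample position as playback reaches it.
        AkMarkerCallbackInfo* marker = static_cast<AkMarkerCallbackInfo*>(in_pInfo);
        QueueMouthShape(marker->strLabel);
    }
    else if (in_eType == AK_EndOfEvent)
    {
        CloseMouth(); // return to the rest pose when the line ends
    }
}

void PlayDialogueLine()
{
    // Request marker and end-of-event callbacks when posting the event.
    AK::SoundEngine::PostEvent(
        "Play_Dialogue_Line",       // hypothetical event name
        kSpeakerObjectID,
        AK_Marker | AK_EndOfEvent,  // callback flags
        OnDialogueCallback,
        nullptr);                   // cookie (unused here)
}

Marker labels can be as simple as "open" and "closed" if you only need rough mouth movement rather than per-line animation data.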

Hope this helps.
answered Aug 20, 2017 by 岳豪 (140 points)
...