Community Q&A

Welcome to Audiokinetic’s community-driven Q&A forum. This is the place where Wwise and Strata users help each other out. For direct help from our team, please use the Support Tickets page. To report a bug, use the Bug Report option in the Audiokinetic Launcher. (Note that Bug Reports submitted to the Q&A forum will be rejected. Using our dedicated Bug Report system ensures your report is seen by the right people and has the best chance of being fixed.)

To get the best answers quickly, follow these tips when posting a question:

  • Be Specific: What are you trying to achieve, or what specific issue are you running into?
  • Include Key Details: Include details like your Wwise and game engine versions, operating system, etc.
  • Explain What You've Tried: Let others know what troubleshooting steps you've already taken.
  • Focus on the Facts: Describe the technical facts of your issue. Focusing on the problem helps others find a solution quickly.

0 votes
Lip sync between mouth shapes and voice, similar to what the SALSA plugin achieves
in General Discussion by (130 points)

1 Answer

0 votes
 
Best answer
Question translation:

How can I achieve lip sync with Unity + Wwise, for example in the way the SALSA plugin does?

A simple approach: use the Wwise Meter effect to capture the volume of the voice content, and use that data to drive the animation directly.
The usual method: the animator uses the audio asset to author an animation clip with lip sync built in, then makes sure the corresponding audio plays at the same time as the animation clip.

You can also refer to the documentation: https://www.audiokinetic.com/zh/library/edge/?source=SDK&id=soundengine_markers.html
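The Wwise Meter approach described above amounts to envelope following: track the loudness of the voice signal over time and map it to a mouth-open parameter. The sketch below illustrates only that principle, in plain Python; every function name in it is hypothetical. In an actual Unity + Wwise project you would not compute this yourself: you would route the Meter's output to a Game Parameter in the Wwise project and read it from the game (e.g. via `AkSoundEngine.GetRTPCValue` in the Unity integration), then apply the value to a blend shape.

```python
# Illustrative sketch only: mimics what the Wwise Meter effect measures
# (the loudness envelope of a voice signal) so the result can drive a
# mouth-open blend-shape weight. Names and thresholds are hypothetical.
import math

def rms_envelope(samples, frame_size=256):
    """Return one RMS loudness value per frame of the input signal."""
    env = []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        env.append(math.sqrt(sum(s * s for s in frame) / len(frame)))
    return env

def smooth(env, attack=0.5, release=0.1):
    """One-pole smoothing: fast rise on attack, slow fall on release,
    so the mouth does not flutter between frames."""
    out, prev = [], 0.0
    for v in env:
        coeff = attack if v > prev else release
        prev = prev + coeff * (v - prev)
        out.append(prev)
    return out

def mouth_open(value, ceiling=0.8):
    """Map a smoothed envelope value to a 0..1 blend-shape weight."""
    return max(0.0, min(1.0, value / ceiling))

# A short burst of 440 Hz "speech" followed by the same duration of silence.
sr = 8000
signal = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 4)]
signal += [0.0] * (sr // 4)

weights = [mouth_open(v) for v in smooth(rms_envelope(signal))]
# While the tone plays the weight rises toward 1; during silence it decays.
```

The same structure applies in-engine: each frame, read the Meter-driven Game Parameter instead of computing the envelope, then feed the 0..1 value to `SkinnedMeshRenderer.SetBlendShapeWeight` (scaled to 0..100) on the character's mouth blend shape.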
by Hou Chenzhong (Audiokinetic) (6.0k points)
selected by Hou Chenzhong (Audiokinetic)