Dehumaniser Live: An innovative technology focused on voice processing


In the world of games, many characters are completely fictitious. Enormous dragons, gigantic soldiers, werewolves, zombies, and cyborgs each come with their own personality, battling the player or, at times, befriending them as the story progresses. It is the character's voice that gives the virtual being its personality and brings it to life, so the voice plays a critical role in delivering the best gaming experience. However, it isn't easy to create voices for characters that don't actually exist, and in a multilingual game the workload multiplies. KROTOS Dehumaniser Live is an innovative technology specializing in vocal effects that works as a runtime plug-in for Wwise. Its voice processing functions enable high-quality, creative voices, making it possible, for example, to morph a human voice into a werewolf's in real time.

dehumaniser2-compressed-588x312.jpg

Krotos Ltd is one of Audiokinetic's community plug-in partners offering advanced audio technology. KROTOS Dehumaniser Live comprises four components:

  • Dehumaniser Simple Mode
  • Dehumaniser Advanced Mode
  • Vocoder
  • Mic Input

The main Dehumaniser component offers two modes:

Dehumaniser Simple Mode

  • Age
  • Aggressiveness
  • Size
  • Character
  • Wildness

Figure_01.jpg

Dehumaniser Advanced Mode

  • Granular
  • Delay Pitch Shifting (x 2)
  • Flanger/Chorus
  • Ring Modulator

Figure_02.jpg

In Simple Mode, the possibilities are concentrated into five parameters, so morphing a voice becomes a simple operation, and CPU usage stays low. Advanced Mode, meanwhile, offers more than 20 parameters to tweak, at the cost of a higher CPU workload. The two modes produce very distinct effects, so you can decide which one suits the situation at hand. All parameters can be controlled in real time using RTPCs: configure an RTPC for the parameter that changes a human character into a monster, and the character can morph smoothly in-game as well. Because you don't need separate assets for the original voice and the processed voice, you also save memory.
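
To make the RTPC idea concrete, here is a minimal runtime sketch using the Wwise sound engine API. The Game Parameter name "Monster_Morph", the Event name "Play_Werewolf_Dialogue", and the game object ID are assumptions for this example; in the authoring tool, you would map the Game Parameter to the desired Dehumaniser Live parameters with RTPC curves.

    // Minimal sketch: driving Dehumaniser Live parameters from the game at runtime.
    // Assumes a Game Parameter named "Monster_Morph" is mapped (via RTPC curves in the
    // Wwise authoring tool) to the relevant Dehumaniser Live parameters, and that an
    // Event named "Play_Werewolf_Dialogue" plays the Sound Voice carrying the effect.
    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    static const AkGameObjectID kWerewolfObj = 100; // example game object ID

    void PlayMorphingLine()
    {
        AK::SoundEngine::RegisterGameObj(kWerewolfObj);
        // Start the dialogue line; Dehumaniser Live processes it inside Wwise at runtime.
        AK::SoundEngine::PostEvent("Play_Werewolf_Dialogue", kWerewolfObj);
    }

    void UpdateMorph(float morphAmount) // 0.0 = fully human, 100.0 = fully monster
    {
        // Update the value every frame or on gameplay triggers; the voice morphs
        // smoothly without needing separate assets for human and monster versions.
        AK::SoundEngine::SetRTPCValue("Monster_Morph", morphAmount, kWerewolfObj);
    }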

Figure_03.jpg

In Dehumaniser Simple Mode, the Character and Size parameters are crucial, and their effects are obvious. It is a good idea to use these two parameters to lay the character's foundation in terms of size and texture, then adjust the other parameters until you get the result you want.

In Dehumaniser Advanced Mode, the two Delay Pitch Shift effects are the most important: they set the base pitch, and you can configure the Granular effect to add layers of creepy texture. You can then use the other effects included in the package to fine-tune the final voice. The Vocoder produces a mechanical voice, like a robot's. Pitch Tracker Mode follows the pitch variation of the input signal, so I recommend trying it first, while Set Pitch Mode gives you a classic robotic voice effect. You can use the Carrier parameter's ten waveforms and the 8-band EQ to enhance the robot voice texture.

Figure_04.jpg

Finally, Mic Input is a component that connects the plug-in to your computer's microphone input, allowing you to play back your own voice through Wwise in real time. That means you can use your own voice to test the Dehumaniser Live functions. And if you pair Dehumaniser Live with your game's voice chat system, it can act as a runtime component that processes the user's voice in real time.

Figure_05.jpg 

Audiokinetic held a demonstration at CEDEC 2017 in Japan this past August, showcasing how Dehumaniser Live can be combined with the dialogue localization features in Wwise to create an innovative workflow. It attracted interest both at the Audiokinetic booth and in our sponsored lecture. The Audiokinetic sponsored session at CEDEC 2017 focused on how to use the Wwise dialogue workflow. We invited Adam Levenson and Matthew Collings from Krotos, the developer of Dehumaniser Live, as well as dialogue recording specialist Tom Hays from RocketSound, to talk about the new possibilities for dialogue production with Wwise. Masato Ushijima, Audiokinetic's in-house Product Expert, also introduced various dialogue features in Wwise.

Picture1.png

In recent years, dialogue has become one of the most labor-intensive areas of game audio development. Reducing the number of steps involved with dialogue means more time to be creative with effects and music. Many game developers rely on spreadsheets to manage dialogue, and Wwise lets you import voice data directly from that database. You can perform batch imports that include Event and property settings, so dialogue integration is quick and reliable.
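
As a rough sketch of what such a batch import can look like, the tab-delimited sheet below defines two Sound Voice objects and their Events in a single import. The file paths, object names, Event names, and the <Sound Voice> type tag are made up for the example, and the exact column headers your Wwise version accepts (including the Event and property columns mentioned above) should be verified against the Audio File Importer documentation.

    Audio File	Object Path	Event
    C:\VO\EN\NPC_Greeting_01.wav	\Actor-Mixer Hierarchy\Dialogue\<Sound Voice>NPC_Greeting_01	Play_NPC_Greeting_01
    C:\VO\EN\NPC_Farewell_01.wav	\Actor-Mixer Hierarchy\Dialogue\<Sound Voice>NPC_Farewell_01	Play_NPC_Farewell_01

Generating a sheet like this from the dialogue spreadsheet means every line, its container, and its Event arrive in Wwise in one operation instead of being created by hand.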

Picture2.png

 Picture3.png

After you've integrated your dialogue assets into Wwise, it's time for Dehumaniser Live. Matthew Collings presented the basic features of Dehumaniser Live and demonstrated some presets. He gave us monster voices and robot voices, and just watching him run through a string of presets sparked an endless stream of ideas. In Wwise, you can monitor performance, tweak parameters, and even link them to RTPCs to enable interactivity at runtime. Because Dehumaniser Live works as a plug-in within Wwise, there is no need to process the recorded voice data in a DAW to add effects and then render it out again.

Wwise supports multilingual games. If you import voice data as Sound Voice objects, you can switch them according to the Language setting. If you import the same dialogue line in different languages using an identical filename for each, Wwise automatically matches the filenames and stores the files in the same object. As long as the filenames match, all you need to do is drag and drop them into Wwise for implementation, leaving no room for manual error. If you create a monster voice with Dehumaniser Live and then switch to another language, the exact same settings apply to that language as well, so you no longer need to go through each language and repeat the effect settings.
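
On the runtime side, switching the dialogue language is a single call made before the localized SoundBanks are loaded. Here is a minimal sketch, assuming your dialogue lives in localized banks and that the language names match those defined in your Wwise project; "English(US)" is just an example.

    // Minimal sketch: selecting the dialogue language at runtime.
    // The language name must match one defined in the Wwise project's language settings.
    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include <AK/SoundEngine/Common/AkStreamMgrModule.h>

    bool SetDialogueLanguage(const AkOSChar* languageName)
    {
        // Tells the streaming manager which language-specific folder to use when
        // resolving localized SoundBanks and streamed files.
        return AK::StreamMgr::SetCurrentLanguage(languageName) == AK_Success;
    }

    // Usage: SetDialogueLanguage(AKTEXT("English(US)")); then (re)load your localized
    // banks. The Dehumaniser Live settings authored on the Sound Voice objects apply
    // identically to every language.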

Picture4.png

Tom Hays talked about some of the issues with the conventional approach to dialogue production from an outsourcing provider's point of view. The main point he raised was that he often has no idea how the recorded voice files will ultimately be used in-game. With Wwise, he can deliver implementation-ready data as Work Units through version control systems such as Perforce, so, as an outsourcing vendor, he can control the quality of what he delivers. If the developer provides him with the project's game build, he can play back the dialogue in-game and test the results. He stressed that the Wwise dialogue features allow him to significantly improve both efficiency and quality.

The final part of the presentation focused on Vorbis-related improvements in Wwise. Even social networking games can now carry up to 10,000 voice files, and although game consoles perform drastically better than before, it is still important to keep sound data compressed to a minimum. Audiokinetic has made proprietary enhancements to Wwise Vorbis: depending on the waveform data and settings, it is more than 20% more optimized than previous, already-optimized versions of Wwise. Combined with the improved dialogue workflow, performance can be maximized.

Picture5.png

Masato Ushijima

Product Expert

Masato Ushijima studied acoustics and music theory in the Music Synthesis department at Berklee College of Music. He then returned to Japan and started his career in audio, working for three years on the WWE games in sound design, direction, requirement specifications, and interpreting. In 2015, Masato founded Sonologic-Design (www.sonologic-design.com), which specializes in overall audio support, including sound design, direction, and requirement specifications, mainly for the gaming industry. He works in game audio as well as on gaming machines, animation, commercials, promotional videos, and other applications, offering a wide range of expertise in multi-audio, music production, and voice direction. In March 2017, he was appointed Product Expert at Audiokinetic K.K.

Comments

Félix Tiévant

November 23, 2017 at 03:38 am

Really impressive! Can this plugin be used at bus level? Also, small typo on the advanced tab: FreqUency!

Joshua Hank

January 20, 2025 at 12:00 pm

Just found this now - I sadly guess this cooperation wasn't continued?
