NVIDIA Omniverse

Audio2Gesture (Sequencer, anim graph)

Intro:


In the Animation tab (top left), tick Audio2Gesture → +Base Skeleton (a skeleton will be created in the scene)
Then tick Animation Retargeting → A2G skeleton
image.png
(Detailed retargeting is covered in the next tutorial)

Delete the base skeleton
In the Audio2Gesture tab, + A2G offline pipeline
It may need to build the TensorRT engine:
image.png

Then choose the audio track → click Run A2G to analyze


Offline Pipeline and Creating a Scene with the Sequencer



Drag a Sol female character into the stage
In the Audio2Gesture tab, choose the target skeleton of the Sol female
When the warning icon shows, click the button next to it (to open the retargeting tab)
image.png
Or click on the warning icon → choose Auto Retarget
→ Run A2G
Delete the base skeleton
Explore the further Advanced Settings, plus the style/animation mode options, to see the different algorithm results
In the Animation Graph setup, click Record (click the icon on the right side of the destination path to choose and name the recording export)

Create a new scene:
Open a blank scene and drag a character into it
Drag the recorded A2G USDA files from the saved folder location into the stage
Open the Sequencer timeline, right-click on it → Create Asset Track
image.png
Right-click, Add new asset clip
image.png
In the Property window, in the Asset Clip column, Asset → Add Target
Choose the Skeleton as the target asset
image.png
Choose Animation in the Add Target tab too
image.png
The scene should show the character switch to the recorded animation

Repeat the steps for different characters on different timeline assets
image.png
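For reference, what the Add Target steps set up is, at the USD level, a skeleton/animation binding. Below is a minimal Python sketch of that binding using the standard pxr USD API; the file names and prim paths are hypothetical placeholders for the ones in your own stage, and the Sequencer may manage this differently internally.

```python
from pxr import Usd, UsdSkel, Sdf

# Open the scene and reference the recorded A2G layer into it.
stage = Usd.Stage.Open("my_scene.usda")               # hypothetical scene file
anim_prim = stage.DefinePrim("/World/A2G_Recording")  # hypothetical prim path
anim_prim.GetReferences().AddReference("a2g_recording.usda")  # hypothetical recording

# Bind the recorded SkelAnimation as the animation source of the character's
# Skeleton -- the same Skeleton/Animation pairing chosen via Add Target above.
skeleton = UsdSkel.Skeleton.Get(stage, "/World/Character/Skeleton")  # hypothetical path
binding = UsdSkel.BindingAPI.Apply(skeleton.GetPrim())
binding.CreateAnimationSourceRel().SetTargets(
    [Sdf.Path("/World/A2G_Recording/SkelAnimation")]  # hypothetical anim prim
)
stage.Save()
```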


Streaming Pipeline, Integration with Animation Graph


In the Audio2Gesture tab, click A2G Streaming pipeline

Import a streaming audio file into the player:
Right-click on the streaming track below, click Send example track:
image.png

P.S. Advanced (optional):
The streaming audio player (mainly used in Audio2Face) plays audio that is not known in advance (e.g. generated on the fly); it allows audio to be streamed from an external source/application via the gRPC protocol.
e.g. using gRPC to send audio: a Python-based example can be found in the Audio2Face code, as follows:
In Window (top menu) → Extensions,
image.png
type audio2face, then open the folder icon (next to Autoload)
→ Choose the test_client Python file at the following path:
image.png
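For illustration, here is a minimal one-shot push sketch in the spirit of that test_client. It assumes the audio2face_pb2 / audio2face_pb2_grpc modules generated from the extension's proto are importable, that the gRPC server runs on the default localhost:50051, and that the streaming player prim path matches your stage; voice.wav is a placeholder file.

```python
import grpc
import numpy as np
import soundfile

import audio2face_pb2
import audio2face_pb2_grpc

url = "localhost:50051"                              # assumed default gRPC port
instance_name = "/World/audio2face/PlayerStreaming"  # hypothetical player prim path

# Load a wav file as mono float32 PCM (the format the player expects).
audio_data, samplerate = soundfile.read("voice.wav", dtype="float32")
if audio_data.ndim > 1:
    audio_data = np.average(audio_data, axis=1)      # downmix to mono

with grpc.insecure_channel(url) as channel:
    stub = audio2face_pb2_grpc.Audio2FaceStub(channel)
    request = audio2face_pb2.PushAudioRequest(
        audio_data=audio_data.tobytes(),
        samplerate=samplerate,
        instance_name=instance_name,
        block_until_playback_is_finished=True,  # return only after playback ends
    )
    print(stub.PushAudio(request))
```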

After getting the sample streaming audio, play around with the parameters, which are the same as in the offline pipeline,
except for one parameter only available in streaming:
Load to Start: the time before the gesture starts after audio playback begins. The longer the delay, the more accurate the synchronization.
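To see that trade-off concretely, below is a hedged sketch of the chunked streaming variant (PushAudioStream, following the same test_client shape): buffering more audio before playback starts is the same trade as a longer Load to Start, more delay but better synchronization. The chunk size and sleep time are illustrative values.

```python
import time
import grpc
import numpy as np

import audio2face_pb2
import audio2face_pb2_grpc

def push_audio_stream(url, audio_data, samplerate, instance_name):
    """Stream mono float32 PCM audio in chunks to the streaming player."""
    chunk_size = samplerate // 10  # 100 ms chunks (arbitrary choice)
    with grpc.insecure_channel(url) as channel:
        stub = audio2face_pb2_grpc.Audio2FaceStub(channel)

        def request_generator():
            # The first message carries only the stream metadata.
            start = audio2face_pb2.PushAudioRequestStart(
                samplerate=samplerate,
                instance_name=instance_name,
                block_until_playback_is_finished=True,
            )
            yield audio2face_pb2.PushAudioStreamRequest(start_marker=start)
            # Each following message carries one chunk of float32 PCM bytes.
            for i in range(0, len(audio_data), chunk_size):
                chunk = audio_data[i : i + chunk_size]
                yield audio2face_pb2.PushAudioStreamRequest(
                    audio_data=chunk.astype(np.float32).tobytes()
                )
                time.sleep(0.04)  # simulate audio arriving in real time

        print(stub.PushAudioStream(request_generator()))
```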

Combine Audio2Gesture with Animation Graph:

Open Animation Graph from Animation (top menu)
Create the following graph, and manually add 4 variables (position, rotations, root position displacement, root rotation displacement)
image.png
(Problem: cannot connect those 4 variables to the Pose Provider node)
Select the correct node in the Animation Graph setup
image.png

Animation recording
Choose the saved destination path,
Press the Record button, play the audio, then hit Stop Record
image.png
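If you prefer to trigger playback from script rather than the UI while recording, here is a small sketch using Kit's timeline interface (this assumes it runs in the Script Editor inside Omniverse; Record/Stop Record themselves are still pressed in the UI):

```python
import omni.timeline

timeline = omni.timeline.get_timeline_interface()
timeline.play()   # start audio/animation playback for the take
# ... let the take run ...
timeline.stop()   # then hit Stop Record in the UI
```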

The recorded animation can be used in the offline Audio2Gesture pipeline