Character setup to Audio2Face

Audio2Face to MetaHuman UE


Create a blend shape from the Omniverse Server sample:

With the Omniverse server installed, find the sample on the localhost drive:
Drag the USD sample file onto the Stage
image.png

Drag the sample mesh (the blue one) away from the grey one, then go to A2F Data Conversion (top-right corner)
Select the Input Anim Mesh and Blendshape Mesh as follows:
image.png
Click Set Up Blendshape Solve
Adjust the Weight Regularization and Temporal Smoothing values if needed
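A2F's solver is internal to the app, but conceptually Temporal Smoothing damps frame-to-frame jitter in the solved blendshape weights, much like an exponential moving average. A minimal illustrative sketch (not the actual A2F implementation; the `alpha` parameter and weight layout are assumptions):

```python
def smooth_weights(frames, alpha=0.3):
    """Exponentially smooth per-frame blendshape weight vectors.

    frames: list of weight vectors, one per animation frame.
    alpha: smoothing strength in [0, 1); higher = smoother but laggier.
    """
    smoothed = [list(frames[0])]
    for frame in frames[1:]:
        prev = smoothed[-1]
        # Blend the previous smoothed frame with the current raw frame
        smoothed.append([alpha * p + (1 - alpha) * w
                         for p, w in zip(prev, frame)])
    return smoothed

# A jittery single-shape weight track over four frames
raw = [[0.0], [1.0], [0.2], [0.9]]
print(smooth_weights(raw, alpha=0.5))  # → [[0.0], [0.5], [0.35], [0.625]]
```

Higher smoothing reduces lip-sync "pops" but can soften fast consonants, which is why the slider is worth tuning per clip.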

Export Blend Shape:

Go back to the Audio Player page and check that the speech is in sync on both heads:
image.png

Go to A2F Data Conversion again and set up the blend shape export for UE:
Change the FPS as needed, then click Export as USD Skel/Animation
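Changing the export FPS resamples the weight tracks in time without changing the clip's duration. A rough sketch of what that resampling amounts to, using simple linear interpolation (illustrative only; A2F's exact resampling method is an assumption here):

```python
def resample_curve(values, src_fps, dst_fps):
    """Linearly resample a per-frame curve (e.g. one blendshape weight
    track) from src_fps to dst_fps, preserving the clip duration."""
    if len(values) < 2:
        return list(values)
    n_out = round((len(values) - 1) * dst_fps / src_fps) + 1
    out = []
    for i in range(n_out):
        t = i * src_fps / dst_fps            # fractional source-frame index
        lo = min(int(t), len(values) - 2)    # left neighbour frame
        frac = t - lo
        out.append(values[lo] + frac * (values[lo + 1] - values[lo]))
    return out

# A 60 fps jaw-open track exported at 30 fps: half the frames, same duration
print(resample_curve([0.0, 0.5, 1.0, 0.5, 0.0], 60, 30))  # → [0.0, 1.0, 0.0]
```

Matching the export FPS to your UE sequence's frame rate avoids a second resampling step on import.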
image.png


Import in UE:

In UE, right-click the MetaHuman BP → Import Facial Animation, then select the USD file you just exported
image.png

Choose the Face Archetype Skeleton:
image.png

Create the animation video:
Add the imported facial animation in Sequencer → delete the Face Control Rig track in the timeline (otherwise the animation won't display)
image.png
To edit the animation by keyframe: select the face animation, right-click, and choose Bake To Control Rig
image.png
The animation's keyframes will be shown; add keyframes as needed to edit the animation
image.png
To layer edits on top of the baked animation, right-click the CtrlRig track and add an Additive section (two CtrlRig sections will be formed):
image.png
Open the second CtrlRig and add keyframes on the part you want to change (here, the jaw)
image.png
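The additive section doesn't overwrite the baked A2F animation; it adds an offset on top of the base pose per control. Conceptually, the blend works like this (a minimal sketch; the control names such as `jaw_open` are hypothetical, not actual MetaHuman rig control names):

```python
def apply_additive(base, additive):
    """Combine a baked base control-rig pose with an additive layer.

    Both are dicts of control name -> value; the additive layer only
    needs to contain the controls you keyed (here, the jaw).
    """
    result = dict(base)
    for control, delta in additive.items():
        # Additive sections offset the base value rather than replace it
        result[control] = result.get(control, 0.0) + delta
    return result

# Baked A2F pose, plus a manual additive key opening the jaw further
base_pose = {"jaw_open": 0.5, "mouth_smile": 0.1}
layer = {"jaw_open": 0.25}
print(apply_additive(base_pose, layer))  # → {'jaw_open': 0.75, 'mouth_smile': 0.1}
```

This is why only the controls you keyframe in the second CtrlRig change, while the rest of the face keeps the original A2F performance.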

Option 2 tutorial for Bake To Control Rig: choose the simple face control rig instead:


image.png
