AdaFace-Animate: Zero-Shot Human Subject-Driven Video Generation

Official demo for our working paper AdaFace: A Versatile Text-space Face Encoder for Face Synthesis and Processing.

❗️NOTE❗️

  • Supports switching between three model styles: Realistic, Photorealistic, and Anime. Realistic is less photorealistic than Photorealistic but produces better motion.
  • If you change the model style, please wait 20~30 seconds for the new model weights to load before generation of images/videos begins.

❗️Tips❗️

  • You can upload one or more subject images to generate an ID-specific video.
  • If the face loses focus, try enabling "Highlight face".
  • If the motion looks wrong (e.g., for prompts like "... running"), try increasing the number of sampling steps.
  • Usage explanations and demos: Readme.
  • AdaFace Text-to-Image: AdaFace
Prompt

Try something like 'walking on the beach'.

Enhance the facial features by prepending 'face portrait' to the prompt.
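The prompt tips above can be sketched as a small helper that conditionally prepends the face-enhancing phrase. The function name and the comma-separated joining are assumptions for illustration:

```python
def build_prompt(prompt, highlight_face=False):
    # Per the tip above: prepending 'face portrait' steers generation
    # toward sharper facial features when the face loses focus.
    # The comma separator is an illustrative choice.
    if highlight_face:
        return f"face portrait, {prompt}"
    return prompt
```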

Base Model Style Type

Switching the base model type takes 10~20 seconds to reload the model.


Uncheck for reproducible results
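Unchecking seed randomization pins the random seed, so repeated runs with identical settings produce identical outputs. A minimal sketch of the idea using Python's standard `random` module (the function and parameter names are hypothetical, not the demo's code):

```python
import random

def sample_noise(seed=None, n=4):
    # Fixed seed -> identical draws every run (reproducible results);
    # seed=None -> a fresh seed each call (randomized results).
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]
```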


Enable AdaFace for better face details. If unchecked, it falls back to ID-Animator (https://huggingface.co/spaces/ID-Animator/ID-Animator).
