AdaFace-Animate: Zero-Shot Human Subject-Driven Video Generation

Official demo for our working paper AdaFace: A Versatile Text-space Face Encoder for Face Synthesis and Processing.

❗️NOTE❗️

  • Supports switching between three model styles: Photorealistic, Realistic, and Anime.
  • After changing the model style, please allow 20~30 seconds for the new model weights to load before the model begins to generate images/videos.

❗️Tips❗️

  • You can upload one or more images of the subject to generate an ID-specific video.
  • "Highlight face" makes the face more prominent in the generated video.
  • "Enhance Composition" improves the overall composition of the generated video.
  • "Highlight face" and "Enhance Composition" can be used together.
  • Usage explanations and demos: Readme.
  • AdaFace Text-to-Image: AdaFace
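Per the option descriptions below, "Highlight face" works by prepending 'face portrait' to the user prompt. A minimal sketch of that transformation (the function name and the ', ' separator are illustrative assumptions, not taken from the actual demo code):

```python
def highlight_face(prompt: str, enabled: bool = True) -> str:
    """Prepend 'face portrait' to the prompt when the option is enabled.

    Hypothetical helper: the ', ' separator is an assumption about how
    the demo joins the prefix to the user prompt.
    """
    return f"face portrait, {prompt}" if enabled else prompt

print(highlight_face("walking on the beach"))
# → face portrait, walking on the beach
```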
Prompt

Try something like 'walking on the beach'.

"Highlight face": enhance the facial features by prepending 'face portrait' to the prompt.

"Enhance Composition": enhance the overall composition of the generated video.

Base Model Style Type

Switching the base model type will take 10~20 seconds to reload the model.

Uncheck seed randomization for reproducible results.

Enable AdaFace for better face details. If unchecked, the demo falls back to ID-Animator (https://huggingface.co/spaces/ID-Animator/ID-Animator).