Hallo4: High-Fidelity Dynamic Portrait Animation via Direct Preference Optimization

Jiahao Cui<sup>1*</sup>, Baoyou Chen<sup>1*</sup>, Mingwang Xu<sup>1*</sup>, Hanlin Shang<sup>1</sup>, Yuxuan Chen<sup>1</sup>,
Yun Zhan<sup>1</sup>, Zilong Dong<sup>5</sup>, Yao Yao<sup>4</sup>, Jingdong Wang<sup>2</sup>, Siyu Zhu<sup>1,3</sup>✉️

<sup>1</sup>Fudan University  <sup>2</sup>Baidu Inc  <sup>3</sup>Shanghai Innovative Institute
<sup>4</sup>Nanjing University  <sup>5</sup>Alibaba Group

SIGGRAPH Asia 2025

## 📸 Showcase
## ⚙️ Installation

- System requirement: Ubuntu 20.04 / Ubuntu 22.04, CUDA 12.1
- Tested GPUs: H100

Download the code:

```bash
git clone https://github.com/fudan-generative-vision/hallo4
cd hallo4
```

Create a conda environment:

```bash
conda create -n hallo python=3.10
conda activate hallo
```

Install the packages with `pip`:

```bash
pip install -r requirements.txt
```

In addition, ffmpeg is required:

```bash
apt-get install ffmpeg
```

### 📥 Download Pretrained Models

All pretrained models required for inference are available in our [HuggingFace repo](https://huggingface.co/fudan-generative-ai/hallo4).

Use `huggingface-cli` to download the models:

```shell
cd $ProjectRootDir
pip install "huggingface_hub[cli]"
huggingface-cli download fudan-generative-ai/hallo4 --local-dir ./pretrained_models
```

Finally, the pretrained models should be organized as follows:

```text
./pretrained_models/
|-- hallo4
|   `-- model_weight.pth
|-- Wan2.1_Encoders
|   |-- Wan2.1_VAE.pth
|   `-- models_t5_umt5-xxl-enc-bf16.pth
|-- audio_separator/
|   |-- download_checks.json
|   |-- mdx_model_data.json
|   |-- vr_model_data.json
|   `-- Kim_Vocal_2.onnx
`-- wav2vec/
    `-- wav2vec2-base-960h/
        |-- config.json
        |-- feature_extractor_config.json
        |-- model.safetensors
        |-- preprocessor_config.json
        |-- special_tokens_map.json
        |-- tokenizer_config.json
        `-- vocab.json
```

### 🛠️ Prepare Inference Data

Hallo4 has some special requirements on inference data due to limitations of our training:

1. The reference image should have an aspect ratio between 1:1 and 480:832.
2. The driving audio must be in WAV format.
3. The audio must be in English, since our training datasets contain only this language.
4. Ensure the vocals in the audio are clear; background music is acceptable.
### 🎮 Run Inference

To run a simple demo, use the provided shell script:

```bash
bash inf.sh
```

## ⚠️ Social Risks and Mitigations

The development of portrait image animation technologies driven by audio inputs poses social risks, such as the ethical implications of creating realistic portraits that could be misused for deepfakes. To mitigate these risks, it is crucial to establish ethical guidelines and responsible use practices. Privacy and consent concerns also arise from using individuals' images and voices. Addressing these concerns requires transparent data usage policies, informed consent, and safeguards for privacy rights. By implementing these mitigations, the research aims to ensure the responsible and ethical development of this technology.

## 🤗 Acknowledgements

This model is a fine-tuned derivative of the **WAN2.1-1.3B** model. WAN is an open-source video generation model developed by the WAN team. Its original code and model parameters are governed by the [WAN LICENSE](https://github.com/Wan-Video/Wan2.1/blob/main/LICENSE.txt). As a derivative work of WAN, the use, distribution, and modification of this model must comply with WAN's license terms.