---
license: apache-2.0
library_name: videox_fun
---
# Qwen-Image-2512-Fun-Controlnet-Union

## Model Card
| Name | Description |
|---|---|
| Qwen-Image-2512-Fun-Controlnet-Union-2602.safetensors | Adds Gray as a control condition on top of the previous version and was trained for longer. |
| Qwen-Image-2512-Fun-Controlnet-Union.safetensors | ControlNet weights for Qwen-Image-2512. The model supports multiple control conditions such as Canny, HED, Depth, Pose, MLSD and Scribble. |
## Model Features
- This ControlNet is attached to 5 transformer blocks. It supports multiple control conditions, including Canny, HED, Depth, Pose, MLSD, Scribble and Gray, and can be used like a standard ControlNet.
- Inpainting mode is also supported.
- Control images acquired at multiple resolutions generalize better than those acquired at a single resolution.
- You can raise `control_context_scale` for stronger control and better detail preservation; for better stability, we highly recommend a detailed prompt. The optimal range for `control_context_scale` is 0.70 to 0.95.
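As a sketch of the multi-resolution point above (assuming Pillow is available; the specific resolutions and the helper name are illustrative, not part of VideoX-Fun), a control map can be resized to several sizes before use:

```python
from PIL import Image

def make_multires_controls(image, sizes=((1024, 1024), (768, 768), (512, 512))):
    """Resize a control image (Canny, Depth, Pose, ...) to several resolutions.

    The sizes are illustrative; pick ones that match your pipeline.
    """
    return [image.resize(size, Image.LANCZOS) for size in sizes]

# A blank placeholder stands in for a real control map.
control = Image.new("RGB", (2048, 2048))
variants = make_multires_controls(control)
print([v.size for v in variants])  # [(1024, 1024), (768, 768), (512, 512)]
```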
## Results
(Example grids: control inputs and corresponding outputs for Pose + Inpaint, Pose, Scribble, Canny, HED, Depth, and Gray.)
## Inference
See the VideoX-Fun repository for more details. Clone it and create the required model directories:
```shell
# Clone the code
git clone https://github.com/aigc-apps/VideoX-Fun.git

# Enter VideoX-Fun's directory
cd VideoX-Fun

# Create model directories
mkdir -p models/Diffusion_Transformer
mkdir -p models/Personalized_Model
```
Then download the weights into models/Diffusion_Transformer and models/Personalized_Model.
```
📦 models/
├── 📂 Diffusion_Transformer/
│   └── 📂 Qwen-Image-2512/
└── 📂 Personalized_Model/
    └── 📦 Qwen-Image-2512-Fun-Controlnet-Union.safetensors
```
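A small stdlib helper (hypothetical, not part of VideoX-Fun) can sanity-check that the downloaded weights match the layout above before running the example scripts:

```python
from pathlib import Path

def check_model_layout(root="models"):
    """Return the expected paths that are missing under `root`.

    The paths mirror the directory tree shown above.
    """
    expected = [
        Path(root) / "Diffusion_Transformer" / "Qwen-Image-2512",
        Path(root) / "Personalized_Model" / "Qwen-Image-2512-Fun-Controlnet-Union.safetensors",
    ]
    return [str(p) for p in expected if not p.exists()]

missing = check_model_layout()
if missing:
    print("Missing:", *missing, sep="\n  ")
else:
    print("Model layout looks complete.")
```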
Then run `examples/qwenimage_fun/predict_t2i_control.py` for controlled text-to-image generation, or `examples/qwenimage_fun/predict_i2i_inpaint.py` for inpainting.