
Wan-AI/Wan2.1-I2V-14B-480P-Diffusers

Tags: Image-to-Video · Diffusers · Safetensors · English · Chinese · WanImageToVideoPipeline · video · video-generation

Instructions for using Wan-AI/Wan2.1-I2V-14B-480P-Diffusers with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Diffusers

    How to use Wan-AI/Wan2.1-I2V-14B-480P-Diffusers with Diffusers:

    pip install -U diffusers transformers accelerate

    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import load_image, export_to_video

    # Load the pipeline in bfloat16 to halve memory use;
    # switch "cuda" to "mps" for Apple devices.
    pipe = DiffusionPipeline.from_pretrained(
        "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
    )
    pipe.to("cuda")

    prompt = "A man with short gray hair plays a red electric guitar."
    image = load_image(
        "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
    )

    # Generate video frames conditioned on the input image and
    # prompt, then write them out as an MP4 file.
    output = pipe(image=image, prompt=prompt).frames[0]
    export_to_video(output, "output.mp4")
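    If the full 14B pipeline does not fit in GPU memory, a common Diffusers pattern is model-level CPU offload, which keeps each sub-model (text encoder, image encoder, transformer, VAE) on the CPU and moves it to the GPU only while it runs. A minimal sketch, assuming a CUDA GPU and the `accelerate` package installed (the hook used by `enable_model_cpu_offload`):

    ```python
    import torch
    from diffusers import DiffusionPipeline

    # Low-VRAM variant: load in bfloat16 as before, but instead of
    # pipe.to("cuda"), enable model-level CPU offload so sub-models
    # are shuttled to the GPU on demand. Slower per step, but the
    # peak GPU memory footprint drops substantially.
    pipe = DiffusionPipeline.from_pretrained(
        "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # replaces pipe.to("cuda")
    ```

    The rest of the generation call is unchanged; the trade-off is extra host-to-device transfer time per denoising step in exchange for fitting on smaller GPUs.
    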
  • Notebooks
  • Google Colab
  • Kaggle
Wan2.1-I2V-14B-480P-Diffusers / image_encoder
  • 6 contributors
History: 2 commits
Latest commit: ba97433 (verified) by StevenZhang, "update demo (#6)", about 1 year ago
  • config.json
    582 Bytes
    update demo (#6) about 1 year ago
  • model.safetensors
    1.26 GB
    Diffusers-format weights (#1) about 1 year ago