arxiv:2603.24458

OmniWeaving: Towards Unified Video Generation with Free-form Composition and Reasoning

Published on Mar 25 · Submitted by taesiri on Mar 26

Abstract

AI-generated summary: OmniWeaving is an open-source video generation model that unifies multimodal inputs and complex reasoning capabilities through large-scale pretraining and intelligent agent inference.

While proprietary systems such as Seedance-2.0 have achieved remarkable success in omni-capable video generation, open-source alternatives significantly lag behind. Most academic models remain heavily fragmented, and the few existing efforts toward unified video generation still struggle to seamlessly integrate diverse tasks within a single framework. To bridge this gap, we propose OmniWeaving, an omni-level video generation model featuring powerful multimodal composition and reasoning-informed capabilities. By leveraging a massive-scale pretraining dataset that encompasses diverse compositional and reasoning-augmented scenarios, OmniWeaving learns to temporally bind interleaved text, multi-image, and video inputs while acting as an intelligent agent to infer complex user intentions for sophisticated video creation. Furthermore, we introduce IntelligentVBench, the first comprehensive benchmark designed to rigorously assess next-level intelligent unified video generation. Extensive experiments demonstrate that OmniWeaving achieves SoTA performance among open-source unified models. The code and model will be made publicly available soon. Project Page: https://omniweaving.github.io.

Community

Paper submitter

OmniWeaving enables unified, reasoning-informed video generation via multimodal binding and agent-driven composition, trained on diverse data, with IntelligentVBench to evaluate open-source progress.
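
Since the code and weights are not yet released, the snippet below is only a minimal sketch of what inference could look like once they are, assuming the checkpoint ships as a diffusers-compatible pipeline on the Hub. The repository id "omniweaving/OmniWeaving", the file names, and the call arguments are hypothetical placeholders, not a published API.

# Hypothetical sketch: assumes OmniWeaving is released as a diffusers-compatible
# pipeline under a placeholder repo id; nothing below is confirmed by the paper.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the (hypothetical) checkpoint; from_pretrained and torch_dtype are standard
# diffusers arguments, but the repo id is a placeholder.
pipe = DiffusionPipeline.from_pretrained(
    "omniweaving/OmniWeaving", torch_dtype=torch.bfloat16
).to("cuda")

# Free-form multimodal composition as described in the abstract: a text prompt
# plus a reference image. The argument names here are assumptions.
reference = load_image("character.png")
frames = pipe(
    prompt="The character from the reference image walks through a rainy market",
    image=reference,
    num_frames=49,
).frames[0]

export_to_video(frames, "omniweaving_sample.mp4", fps=16)

Until the release, the project page (https://omniweaving.github.io) is the place to check for the actual usage instructions.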

Get this paper in your agent:

hf papers read 2603.24458
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
