Workflow, or at least some instructional info.

#6
by leomaxwell973 - opened

So, I'm not getting great outputs, and I'm wondering if I'm just doing something wrong or if these GGUFs are meant to use a more generic workflow as opposed to the LTX repo workflows, namely the manual sigmas. In short: do we keep the sigmas or use steps instead? There's no guidance one way or the other, and the only how-to on the page points to almost unrelated info... just a GGUF image guide.

If there are different settings, a workflow would be pretty swagger of you.

If not, I understand the lack of info, since there isn't any, but it would be nice to know whether it's 1:1 with the original workflows, or at least meant to be treated that way.

Unsloth AI org
edited Jan 20

It's not a 1:1 replacement to go from the LTX2 workflows to the GGUFs. LTX2 packaged a few tensors inside the diffusion model that can't be loaded via the existing custom GGUF loaders, so those need to be separated out and loaded separately when using a GGUF workflow. We just uploaded those components to this repo to make that easier.

To download the models, first `cd /path/to/ComfyUI/models`.
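Note: the symlink commands assume the standard model subfolders already exist under `models/`. If any are missing, you can create them first (a minimal sketch, run from inside the models directory):

```shell
# Create the model subfolders that the symlink targets expect
mkdir -p unet vae text_encoders loras latent_upscale_models
```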

Run the following commands to download the relevant model weights:

```shell
# Can try any quant type
ln -s "$(hf download unsloth/LTX-2-GGUF ltx-2-19b-dev-UD-Q2_K_XL.gguf --quiet)" unet/ltx-2-19b-dev-UD-Q2_K_XL.gguf
ln -s "$(hf download unsloth/LTX-2-GGUF vae/ltx-2-19b-dev_audio_vae.safetensors --quiet)" vae/ltx-2-19b-dev_audio_vae.safetensors
ln -s "$(hf download unsloth/LTX-2-GGUF vae/ltx-2-19b-dev_video_vae.safetensors --quiet)" vae/ltx-2-19b-dev_video_vae.safetensors

# Can try any quant type
ln -s "$(hf download unsloth/gemma-3-12b-it-qat-GGUF gemma-3-12b-it-qat-UD-Q4_K_XL.gguf --quiet)" text_encoders/gemma-3-12b-it-qat-UD-Q4_K_XL.gguf
ln -s "$(hf download unsloth/gemma-3-12b-it-qat-GGUF mmproj-BF16.gguf --quiet)" text_encoders/gemma-3-12b-it-qat-mmproj-BF16.gguf
ln -s "$(hf download unsloth/LTX-2-GGUF text_encoders/ltx-2-19b-dev_embeddings_connectors.safetensors --quiet)" text_encoders/ltx-2-19b-dev_embeddings_connectors.safetensors
ln -s "$(hf download Lightricks/LTX-2 ltx-2-19b-distilled-lora-384.safetensors --quiet)" loras/ltx-2-19b-distilled-lora-384.safetensors
ln -s "$(hf download Lightricks/LTX-2 ltx-2-spatial-upscaler-x2-1.0.safetensors --quiet)" latent_upscale_models/ltx-2-spatial-upscaler-x2-1.0.safetensors

# Optional
ln -s "$(hf download Lightricks/LTX-2-19b-LoRA-Camera-Control-Dolly-Left ltx-2-19b-lora-camera-control-dolly-left.safetensors --quiet)" loras/ltx-2-19b-lora-camera-control-dolly-left.safetensors
```

This will download the model weights to the HF cache and link them directly into your ComfyUI model directories.

Make sure you have the following custom nodes installed:
https://github.com/city96/ComfyUI-GGUF
https://github.com/kijai/ComfyUI-KJNodes

Then you can drag the video we uploaded to the repo, unsloth_best.mp4, into ComfyUI to load the exact workflow we used (ComfyUI reads the workflow embedded in the file's metadata). The manual sigmas come from the template workflows and are the recommended settings if you're going to refine the upscaled video. They're not strictly required, but they are recommended.
