Qwen3.5-9B
What is this HuggingFace repository about?
This repository provides GGUF-quantized tensors for the Qwen3.5-9B model (official repo: https://huggingface.co/Qwen/Qwen3.5-9B). These GGUF shards are designed to be used with Thireus' GGUF Tool Suite (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the GGUF Tool Suite, you can produce your own Dynamic 3.0 Quants recipes and achieve optimum accuracy and SOTA quantization performance.
- Read more: https://github.com/Thireus/GGUF-Tool-Suite
- Examples of GGUF recipes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- Cook your own recipe files: https://gguf.thireus.com/quant_assign.html
- Download GGUF models from recipe files: https://gguf.thireus.com/quant_downloader.html
- Browse available models: https://gguf.thireus.com
tl;dr: Follow the quick-start commands below.
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows/macOS/Linux builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file - you can also try the web version: https://gguf.thireus.com/quant_downloader.html
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_downloader.sh
cp -f models/Qwen3.5-9B/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/ik_llama.cpp_recipes/Qwen3.5-9B.ROOT-3.5993bpw-11.3565ppl.1GB-GGUF_0GB-GPU_0GB-CPU.9888e4b_831ff04.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-server:
ulimit -n 9999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-server \
-m Qwen3.5-9B-THIREUS-BF16-SPECIAL_TENSOR-00001-of-00399.gguf \
-fa auto -amb 1024 -ctk q8_0 -c 32768 -ngl 99 \
-b 4096 -ub 4096 --warmup-batch --no-mmap --threads 1 \
--main-gpu 0
Why does this Tool Suite exist?
- Compatibility & Speed – unsloth's dynamic quants may not always work optimally with ik_llama.cpp.
- Custom Rig Fit – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
- Automated PPL-Optimal Quantization – To my knowledge, there was no open-source, flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target, so I created one with excellent results!
How does it compare to other GGUFs?
Here's how Qwen3.5-9B quantized with Thireus' GGUF Tool Suite stacks up against other quantizers (lower perplexity = better at equal or lower bpw):
Note: The recipe_examples files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you – just specify your target RAM, VRAM, and quant types, and quant_assign.py finds the best mix.
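For example, each example recipe records the exact quant_assign.py command used to generate it, so you can inspect one before cooking your own. The one-liner below is a convenience sketch run from a local clone of GGUF-Tool-Suite, and it assumes the embedded command mentions quant_assign.py:
# Print the quant_assign.py command embedded in an example recipe.
# Adjust the path/pattern if the recipe layout differs in your clone.
grep -h 'quant_assign' recipe_examples/ik_llama.cpp_recipes/Qwen3.5-9B.*.recipe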
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
All PPL benchmarks are computed with the parameters -ctk f16 -c 512 -b 4096 -ub 4096. Changing any of these parameters will alter the PPL. In particular, reducing -b 4096 -ub 4096 increases the PPL, while increasing them decreases the PPL.
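For reference, a comparable measurement can be reproduced with llama-perplexity using the same settings. This is only a sketch: the evaluation text file below is a placeholder, not something shipped in this repository – use the corpus referenced by the published graphs:
# Sketch: compute PPL with the same context/batch settings as the published benchmarks.
# wiki.test.raw is a placeholder evaluation file; substitute the actual corpus used for the graphs.
~/ik_llama.cpp/build/bin/llama-perplexity \
  -m Qwen3.5-9B-THIREUS-BF16-SPECIAL_TENSOR-00001-of-00399.gguf \
  -f wiki.test.raw \
  -ctk f16 -c 512 -b 4096 -ub 4096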
How do I get started?
Check out the GGUF Tool Suite README – focus on these sections:
- Requirements – Which ik_llama.cpp (or llama.cpp) version to use and how to compile. Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
- Download Model Shards – Use quant_downloader.sh or quant_downloader.html to fetch GGUF shards from any recipe. Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- Run a Downloaded Model – Sample usage with llama-cli.
- Generate a Custom Recipe – Produce recipes tailored to your VRAM/RAM target usage for optimum perplexity.
Supported Models
Supported models are listed under models/ in the Tool Suite GitHub repo. The presence of a ppl_results.csv file indicates official support and compatibility with quant_assign.py.
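As a quick check from a local clone of the Tool Suite, you can list which models ship a ppl_results.csv (this is just a convenience one-liner; GNU find is assumed, as on the Linux setup above):
# List the models/ subdirectories that contain ppl_results.csv, i.e. officially supported models.
cd ~/GGUF-Tool-Suite
find models -maxdepth 2 -name ppl_results.csv -printf '%h\n' | sort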
Will I release baked dynamic quant GGUFs?
No, because I believe in tailored quantization for each user's hardware. If you prefer ready-made shards, you are welcome to merge them via llama-gguf-split --merge, or request someone to publish them, or rely on generic GGUF dynamic quants such as unsloth's.
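As a sketch of the merge step, assuming your build produced the llama-gguf-split binary and using the shard name from the quick-start example above:
# Merge sharded GGUF files into a single standalone GGUF.
# Point --merge at the first shard; the remaining shards are discovered automatically.
~/ik_llama.cpp/build/bin/llama-gguf-split --merge \
  Qwen3.5-9B-THIREUS-BF16-SPECIAL_TENSOR-00001-of-00399.gguf \
  Qwen3.5-9B-THIREUS-merged.gguf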
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (the command is included inside these recipe files) and tweak them for their own rigs. The quant_downloader.sh script or quant_downloader.html (web port of this script) handles automatic fetching and verification of each shard. Note that recipes provided by Ubergarm on his model cards are also compatible with quant_downloader.sh and quant_downloader.html, provided a "SPECIAL_SPLIT" version of these models exists (see https://gguf.thireus.com/).
Users who don't trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to llama-quantize --custom-q (see example). Run llama-quantize --help to list compatible quants for quant_assign.py. This approach is especially useful if you prefer llama.cpp over ik_llama.cpp.
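A minimal sketch of that approach – the regex/qtype pairs and the imatrix file name below are placeholders, so take the real lines from your recipe file:
# Sketch: pass recipe-style "regex=qtype" rules to llama-quantize via --custom-q.
# The regexes, qtypes and imatrix.dat are illustrative, taken from no particular recipe.
~/ik_llama.cpp/build/bin/llama-quantize \
  --imatrix imatrix.dat \
  --custom-q "blk\.[0-9]+\.ffn_down\.weight=iq4_ks,blk\.[0-9]+\.attn_.*=q8_0" \
  Qwen3.5-9B-BF16.gguf Qwen3.5-9B-custom.gguf iq4_ks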
What's in this repository?
- 00001 GGUF header shard – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- Tensor shards – Each shard holds one tensor; see tensors.map for names, quant types, sizes, SHA-256 hashes, shard IDs, etc.
- GPG-signed files – tensors.map and the header shard are signed with the key in trusted-keys.asc for tamper detection.
- Security note – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors, or alternatively self-quantize, to avoid potential exploits.
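A minimal verification sketch, assuming detached .sig files are published alongside the signed files (check the repository file list for the exact signature names):
# Import the repository's signing key, then verify the detached signatures.
# The .sig file names are assumptions; adjust them to the actual files in this repo.
gpg --import trusted-keys.asc
gpg --verify tensors.map.sig tensors.map
# Optionally cross-check a downloaded shard against the SHA-256 recorded in tensors.map.
sha256sum Qwen3.5-9B-THIREUS-BF16-SPECIAL_TENSOR-00001-of-00399.gguf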
Pro Tips
You can easily download the BF16 model version to quantize your own shards:
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe --qtype BF16
You can also quantize individual BF16 tensors without the need to download every BF16 .gguf shard:
BF16 model shards can also be individually quantized using a special version of ik_llama.cpp's llama-quantize utility which comes with the --individual-tensors option.
- Source code: https://github.com/Thireus/ik_llama.cpp/tree/th/quantize_individual_tensors
- Builds (macOS, Windows and Linux): https://github.com/Thireus/ik_llama.cpp/releases/tag/th-quantize_individual_tensors-b4210-7a44805
Usage example:
./llama-quantize --keep-split --imatrix imatrix_ubergarm.dat --individual-tensors 2,3,1094 Kimi-K2-Thinking-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01097.gguf my_new_shards.gguf iq3_s 12
For more information about how to use it: https://github.com/Thireus/GGUF-Tool-Suite/issues/45
Enjoy optimized quantization!