---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: MiniMaxAI/MiniMax-M2.5
base_model_relation: quantized
license_name: modified-mit
license_link: https://github.com/MiniMax-AI/MiniMax-M2.5/blob/main/LICENSE
tags:
- imatrix
- conversational
- minimax_m2
- ik_llama.cpp
---
## `ik_llama.cpp` imatrix Quantizations of MiniMaxAI/MiniMax-M2.5
*NOTE*: `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCPP, which offers Windows builds for CUDA 12.9. Also check out the [Windows builds by Thireus](https://github.com/Thireus/ik_llama.cpp/releases), which have been built against CUDA 12.8.
These quants provide best-in-class perplexity for the given memory footprint.
## Big Thanks
Shout out to Wendell and the **Level1Techs** crew, the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), and [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!
Finally, I *really* appreciate the support from [aifoundry.org](https://aifoundry.org), so check out their open-source RISC-V based solutions!
## Quant Collection
Perplexity computed against *wiki.test.raw* (lower is "better").

These two are just test quants for baseline perplexity comparison and are not available for download here:
* `BF16` 426.060 GiB (16.003 BPW)
  - PPL over 552 chunks for n_ctx=512 = 8.3386 +/- 0.06651
* `Q8_0` 226.431 GiB (8.505 BPW)
  - PPL over 552 chunks for n_ctx=512 = 8.3590 +/- 0.06673

*NOTE*: The first split file of each download is much smaller on purpose as it only contains metadata; it's fine!
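For reference, the PPL numbers here and below come from running `llama-perplexity` over *wiki.test.raw*; a minimal sketch of such a run (the model path and thread count are placeholders, and my exact invocation may have differed):

```bash
# measure perplexity at n_ctx=512 to match the numbers in this card
./build/bin/llama-perplexity \
    -m /path/to/MiniMax-M2.5-IQ5_K.gguf \
    -f wiki.test.raw \
    -c 512 \
    --threads 32
```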
## IQ5_K 157.771 GiB (5.926 BPW)
PPL over 552 chunks for n_ctx=512 = 8.4860 +/- 0.06815
👈 Secret Recipe
```bash
custom="
# 62 Repeating Layers [0-61]
# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k
# Non-Repeating Layers
token_embd\.weight=q8_0
output\.weight=q8_0
"
# strip the comment lines and join the rest into the single comma-separated
# "regex=type" list that llama-quantize's --custom-q flag expects
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-IQ5_K.gguf \
IQ5_K \
128
```
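For reference, the grep/sed pipeline above collapses the recipe into the single `--custom-q` string, which ends up roughly like:

```
blk\..*\.attn_q.*=q8_0,blk\..*\.attn_k.*=q8_0,blk\..*\.attn_v.*=q8_0,blk\..*\.attn_output.*=q8_0,blk\..*\.ffn_down_exps\.weight=iq6_k,blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k,token_embd\.weight=q8_0,output\.weight=q8_0
```

The same pattern applies to all the recipes below.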
## IQ4_NL 121.386 GiB (4.559 BPW)
PPL over 552 chunks for n_ctx=512 = 8.4419 +/- 0.06757
This one is *not* compatible with mainline llama.cpp because it uses:
* token_embd@iq4_k (instead of mainline q4_K)
* output@iq6_k (instead of mainline q6_K)
It gives a nice little perplexity improvement at basically the same size, so I opted to use the newer types. It is technically a `smol-IQ4_NL` but it's fine.
👈 Secret Recipe
```bash
#!/usr/bin/env bash
custom="
# 62 Repeating Layers [0-61]
# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq4_nl
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_nl
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-IQ4_NL.gguf \
IQ4_NL \
128
```
## mainline-IQ4_NL 121.234 GiB (4.554 BPW)
PPL over 552 chunks for n_ctx=512 = 8.4528 +/- 0.06759
This one is compatible with mainline llama.cpp because it uses:
* token_embd@q4_K
* output@q4_K
This is the one to use for Vulkan, and probably Mac, but the ~121 GiB of weights plus KV cache and buffers may need more than 128GB of RAM.
👈 Secret Recipe
```bash
#!/usr/bin/env bash
custom="
# 62 Repeating Layers [0-61]
# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq4_nl
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_nl
# Non-Repeating Layers
token_embd\.weight=q4_K
output\.weight=q4_K
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-mainline-IQ4_NL.gguf \
IQ4_NL \
128
```
## IQ4_XS 114.842 GiB (4.314 BPW)
PPL over 552 chunks for n_ctx=512 = 8.5702 +/- 0.06901
Like the mainline-IQ4_NL above, this quant is compatible with mainline llama.cpp (ik_llama.cpp can run all of them); see the mainline sketch after the recipe below. It's technically a `smol-IQ4_XS` but it's fine.
👈 Secret Recipe
```bash
#!/usr/bin/env bash
custom="
# 62 Repeating Layers [0-61]
# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq4_xs
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_xs
# Non-Repeating Layers
token_embd\.weight=q4_K
output\.weight=q6_K
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-IQ4_XS.gguf \
IQ4_XS \
128
```
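As a minimal sketch of running one of the mainline-compatible quants on stock llama.cpp (assuming a standard mainline build; the model path is a placeholder and `--n-cpu-moe` should be tuned to your VRAM):

```bash
# hybrid CPU+GPU on mainline llama.cpp: all 62 expert layers on CPU,
# attention and other tensors on GPU; lower --n-cpu-moe to use more VRAM
./build/bin/llama-server \
    -m /path/to/MiniMax-M2.5-IQ4_XS-00001-of-XXXXX.gguf \
    -ngl 99 \
    --n-cpu-moe 62 \
    -c 32768 \
    --jinja
```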
## smol-IQ4_KSS 108.671 GiB (4.082 BPW)
PPL over 552 chunks for n_ctx=512 = 8.5815 +/- 0.06888
👈 Secret Recipe
```bash
#!/usr/bin/env bash
custom="
# 62 Repeating Layers [0-61]
# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq4_kss
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-smol-IQ4_KSS.gguf \
IQ4_KSS \
128
```
## smol-IQ3_KS 87.237 GiB (3.277 BPW)
PPL over 552 chunks for n_ctx=512 = 8.7539 +/- 0.07075
👈 Secret Recipe
```bash
#!/usr/bin/env bash
custom="
# 62 Repeating Layers [0-61]
# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-smol-IQ3_KS.gguf \
IQ3_KS \
128
```
## IQ2_KS 69.800 GiB (2.622 BPW)
PPL over 552 chunks for n_ctx=512 = 9.6827 +/- 0.07972
👈 Secret Recipe
```bash
#!/usr/bin/env bash
custom="
# 62 Repeating Layers [0-61]
# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
/mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-IQ2_KS.gguf \
IQ2_KS \
128
```
## Quick Start
```bash
# Clone and checkout
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp
# Build for hybrid CPU+CUDA
cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
cmake --build build --config Release -j $(nproc)
# Download Desired Quant
pip install huggingface_hub
hf download --local-dir ./MiniMax-M2.5-GGUF/ --include="smol-IQ3_KS/*.gguf" ubergarm/MiniMax-M2.5-GGUF
# Hybrid CPU and Single GPU
# TODO: for now see my Step-3.5-Flash model card for a rough hybrid example
# using --cpu-moe or --n-cpu-moe XX etc.
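# A rough, untested sketch of a hybrid command (my assumption, not a
# verified invocation): everything on GPU via -ngl, routed experts on CPU
# via --cpu-moe (or --n-cpu-moe XX to offload only the first XX layers)
model=/path/to/MiniMax-M2.5-smol-IQ3_KS-00001-of-XXXXX.gguf  # placeholder path
./build/bin/llama-server \
    --model "$model" \
    -ngl 99 \
    --cpu-moe \
    -c 32768 \
    --threads 16 \
    --host 127.0.0.1 \
    --port 8080 \
    --jinja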
# Multi GPU Full Offload 128k context 96GB VRAM!!!
model=MiniMax-M2.5-IQ2_KS-00001-of-00003.gguf
_GLIBCXX_REGEX_STATE_LIMIT=1000000 \
CUDA_VISIBLE_DEVICES="0,1" \
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/MiniMax-M2.5 \
-khad -ctk q6_0 -ctv q8_0 \
-c 131072 \
-ger \
-sm graph \
-ngl 99 \
-ub 4096 -b 4096 \
-ts 47,48 \
--threads 1 \
--host 127.0.0.1 \
--port 8080 \
--no-mmap \
--jinja
# CPU-Only
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/MiniMax-M2.5 \
--ctx-size 65536 \
-ger \
--merge-qkv \
-ctk q8_0 -ctv q8_0 \
-ub 4096 -b 4096 \
--parallel 1 \
--threads 96 \
--threads-batch 128 \
--numa numactl \
--host 127.0.0.1 \
--port 8080 \
--no-mmap \
--jinja
```
My own early testing with `opencode` suggests that even the `smol-IQ3_KS` handles tool calling etc. okay!
For tool use you can always bring your own template with `--chat-template-file myTemplate.jinja`, and you might need `--special` etc.
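Once the server is up, a quick sanity check is an OpenAI-style chat completion request against llama-server's built-in endpoint (host/port as in the commands above):

```bash
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ubergarm/MiniMax-M2.5",
        "messages": [{"role": "user", "content": "Write a haiku about quantization."}],
        "max_tokens": 128
      }'
```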
## References
* [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
* [Getting Started Guide (already out of date lol)](https://github.com/ikawrakow/ik_llama.cpp/discussions/258)
* [ubergarm-imatrix-calibration-corpus-v02.txt](https://gist.github.com/ubergarm/edfeb3ff9c6ec8b49e88cdf627b0711a?permalink_comment_id=5682584#gistcomment-5682584)
* [for more great mainline quant recipes check out AesSedai/MiniMax-M2.5-GGUF](https://huggingface.co/AesSedai/MiniMax-M2.5-GGUF)