Instructions to use ubergarm/GLM-5-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use ubergarm/GLM-5-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ubergarm/GLM-5-GGUF",
    filename="IQ2_KL/GLM-5-IQ2_KL-00001-of-00007.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use ubergarm/GLM-5-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ubergarm/GLM-5-GGUF:Q2_K

# Run inference directly in the terminal:
llama-cli -hf ubergarm/GLM-5-GGUF:Q2_K
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ubergarm/GLM-5-GGUF:Q2_K

# Run inference directly in the terminal:
llama-cli -hf ubergarm/GLM-5-GGUF:Q2_K
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf ubergarm/GLM-5-GGUF:Q2_K

# Run inference directly in the terminal:
./llama-cli -hf ubergarm/GLM-5-GGUF:Q2_K
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf ubergarm/GLM-5-GGUF:Q2_K

# Run inference directly in the terminal:
./build/bin/llama-cli -hf ubergarm/GLM-5-GGUF:Q2_K
Use Docker
docker model run hf.co/ubergarm/GLM-5-GGUF:Q2_K
- LM Studio
- Jan
- vLLM
How to use ubergarm/GLM-5-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ubergarm/GLM-5-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
      "model": "ubergarm/GLM-5-GGUF",
      "messages": [
        { "role": "user", "content": "What is the capital of France?" }
      ]
    }'
Use Docker
docker model run hf.co/ubergarm/GLM-5-GGUF:Q2_K
- Ollama
How to use ubergarm/GLM-5-GGUF with Ollama:
ollama run hf.co/ubergarm/GLM-5-GGUF:Q2_K
- Unsloth Studio
How to use ubergarm/GLM-5-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ubergarm/GLM-5-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ubergarm/GLM-5-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for ubergarm/GLM-5-GGUF to start chatting
- Pi
How to use ubergarm/GLM-5-GGUF with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf ubergarm/GLM-5-GGUF:Q2_K
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "ubergarm/GLM-5-GGUF:Q2_K" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use ubergarm/GLM-5-GGUF with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf ubergarm/GLM-5-GGUF:Q2_K
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default ubergarm/GLM-5-GGUF:Q2_K
Run Hermes
hermes
- Docker Model Runner
How to use ubergarm/GLM-5-GGUF with Docker Model Runner:
docker model run hf.co/ubergarm/GLM-5-GGUF:Q2_K
- Lemonade
How to use ubergarm/GLM-5-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull ubergarm/GLM-5-GGUF:Q2_K
Run and chat with the model
lemonade run user.GLM-5-GGUF-Q2_K
List all available models
lemonade list
ik_llama.cpp imatrix Quantizations of zai-org/GLM-5
NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which offers Windows builds. Also check for ik_llama.cpp Windows builds by Thireus here.
These quants provide best-in-class perplexity for the given memory footprint.
Big Thanks
Shout out to Wendell and the Level1Techs crew, their community forums, and YouTube channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!
Finally, I really appreciate the support from aifoundry.org so check out their open source RISC-V based solutions!
Quant Collection
Perplexity computed against wiki.test.raw. (lower is "better")
These two are just test quants for baseline perplexity comparison and not available for download here:
BF16 1404.406 GiB (16.003 BPW) - PPL over 565 chunks for n_ctx=512 = 2.6298 +/- 0.01396
Q8_0 746.302 GiB (8.504 BPW) - PPL over 565 chunks for n_ctx=512 = 2.6303 +/- 0.01398
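For reference, a minimal sketch of how such a perplexity run can be reproduced with the llama-perplexity tool; the model path, thread count, and flag values here are placeholders, not the exact command used for the numbers in this card:

# Hypothetical perplexity run against wiki.test.raw (path and threads are placeholders)
./build/bin/llama-perplexity \
--model /path/to/first-split-of-quant.gguf \
-f wiki.test.raw \
--ctx-size 512 \
--threads 64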
NOTE: The first split file is much smaller on purpose, as it only contains metadata; it's fine!
IQ3_KS 320.216 GiB (3.649 BPW)
PPL over 565 chunks for n_ctx=512 = 2.7839 +/- 0.01508
NOTE: Actual RAM/VRAM use will be about 314.07 GiB, despite the larger reported model size, due to unused blk.78/indexer/nextn tensors.
Secret Recipe
#!/usr/bin/env bash
custom="
# 79 Repeating Layers [0-78]
## Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq4_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks
# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k
# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-IQ3_KS.gguf \
IQ3_KS \
128
IQ2_KL 261.988 GiB (2.985 BPW)
PPL over 565 chunks for n_ctx=512 = 3.0217 +/- 0.01651
NOTE: Actual RAM/VRAM use will be about 255.84 GiB, despite the larger reported model size, due to unused blk.78/indexer/nextn tensors.
Secret Recipe
#!/usr/bin/env bash
custom="
# 79 Repeating Layers [0-78]
## Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl
# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k
# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-IQ2_KL.gguf \
IQ2_KL \
128
smol-IQ2_KS 205.738 GiB (2.344 BPW)
PPL over 565 chunks for n_ctx=512 = 3.7792 +/- 0.02183
NOTE: Actual RAM/VRAM use will be about 200 GiB, despite the larger reported model size, due to unused blk.78/indexer/nextn tensors.
Secret Recipe
#!/usr/bin/env bash
custom="
# 79 Repeating Layers [0-78]
## Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq2_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks
# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k
# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-smol-IQ2_KS.gguf \
IQ2_KS \
128
smol-IQ1_KT 169.190 GiB (1.928 BPW)
PPL over 565 chunks for n_ctx=512 = 4.6032 +/- 0.02768
NOTE: Actual RAM/VRAM use will be about 163.046 GiB, despite the larger reported model size, due to unused blk.78/indexer/nextn tensors.
Secret Recipe
#!/usr/bin/env bash
custom="
# 79 Repeating Layers [0-78]
## Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq1_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt
# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k
# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-smol-IQ1_KT.gguf \
IQ1_KT \
128
Quick Start
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp
# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
# Download Quants
$ pip install huggingface_hub
$ hf download --local-dir ./GLM-5-GGUF/ --include="smol-IQ2_KS/*.gguf" ubergarm/GLM-5-GGUF
# Hybrid CPU and Single GPU
# TODO: see the hedged sketch below, or the ubergarm/Kimi-K2.5-GGUF model card quick start (it is also an MLA arch)
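Until then, here is a rough, untested sketch adapted from the CPU-only command below: keep attention, dense, and shared-expert tensors on the GPU and override the routed expert tensors to stay on CPU. The -ngl, -ot, context, and thread values are assumptions, not a verified configuration:

# Hypothetical hybrid CPU + single GPU launch (values are placeholders)
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/GLM-5 \
--ctx-size 32768 \
-ctk q8_0 \
-mla 3 \
-ngl 99 \
-ot exps=CPU \
--threads 24 \
--host 127.0.0.1 \
--port 8080 \
--jinja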
# Multi GPU Full Offload
# TODO: see the hedged sketch below, or the ubergarm/Kimi-K2.5-GGUF model card quick start (it is also an MLA arch)
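Likewise, a rough sketch for full offload across multiple GPUs, splitting the weights with --tensor-split; the split ratio and context size are assumptions and should be tuned to your VRAM:

# Hypothetical multi-GPU full offload (split ratio is a placeholder)
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/GLM-5 \
--ctx-size 32768 \
-ctk q8_0 \
-mla 3 \
-ngl 99 \
--tensor-split 1,1,1,1 \
--host 127.0.0.1 \
--port 8080 \
--jinja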
# CPU-Only
# Set $model to the first split file of the quant you downloaded, and $SOCKET to your NUMA node.
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/GLM-5 \
-ger \
--merge-qkv \
--ctx-size 131072 \
-ctk q8_0 \
-mla 3 \
--parallel 1 \
--threads 96 \
--threads-batch 128 \
--numa numactl \
--host 127.0.0.1 \
--port 8080 \
--no-mmap \
--jinja
I tested that even the smol-IQ1_KT works with opencode! You can also bring your own template with --chat-template-file myTemplate.jinja.
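Once llama-server is running, any OpenAI-compatible client can talk to it; for example, a quick curl against the chat completions endpoint (the model name matches the --alias set above):

# Query the local OpenAI-compatible endpoint started above
curl -X POST "http://127.0.0.1:8080/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
  "model": "ubergarm/GLM-5",
  "messages": [
    { "role": "user", "content": "What is the capital of France?" }
  ]
}'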