REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression (paper: arXiv 2510.13999)
NVFP4 quantization of cerebras/MiniMax-M2.5-REAP-172B-A10B for NVIDIA DGX Spark (GB10).
The base model is a Cerebras REAP (Router-weighted Expert Activation Pruning) variant of MiniMaxAI/MiniMax-M2.5. REAP uniformly prunes experts from 256 → 192, reducing total parameters from 230B to 172B while maintaining near-identical performance.
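The pruning arithmetic can be sanity-checked directly: 25% of the experts are removed per MoE layer, and because expert weights dominate the parameter count, total parameters drop by roughly the same fraction. A quick check (the 230B and 172B figures are from the model card; only the comparison is computed here):

```python
total_experts, kept_experts = 256, 192
orig_params_b, pruned_params_b = 230, 172  # total parameters, billions

expert_frac_removed = 1 - kept_experts / total_experts   # 0.25
param_frac_removed = 1 - pruned_params_b / orig_params_b # ~0.252

print(f"{expert_frac_removed:.1%} of experts removed")
print(f"{param_frac_removed:.1%} of parameters removed")
```

The two fractions nearly coincide, which is consistent with expert weights making up almost all of the model's parameters.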
| Field | Value |
|---|---|
| Base Model | cerebras/MiniMax-M2.5-REAP-172B-A10B |
| Original Model | MiniMaxAI/MiniMax-M2.5 (230B) |
| Architecture | MiniMaxM2ForCausalLM (MoE, 192 experts, 8 active per token) |
| Total Parameters | 172B |
| Active Parameters | 10B per token |
| Quantization | NVFP4 (4-bit floating point), all layers including self_attn |
| Format | compressed-tensors (safetensors), 20 shards |
| Size on Disk | 99 GB |
| Context Length | 196,608 tokens (~192K) |
| License | Modified MIT (inherited from Cerebras REAP) |
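The 99 GB on-disk size follows from the quantization scheme. A rough accounting sketch, assuming the standard NVFP4 layout of one FP8 scale per 16-value block (the remaining few GB come from layers kept in higher precision, such as embeddings, `lm_head`, and the router gates):

```python
params = 172e9

fp4_weights_gb = params * 0.5 / 1e9        # 4 bits per parameter -> 86.0 GB
# One FP8 (1-byte) scale per 16-value block, assumed standard NVFP4 layout
scales_gb = params / 16 / 1e9              # ~10.75 GB
quantized_gb = fp4_weights_gb + scales_gb  # ~96.75 GB

print(f"~{quantized_gb:.1f} GB for quantized weights alone")
```

That leaves roughly 2 GB for the unquantized layers, landing near the reported 99 GB.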
Benchmarked with llama-benchy.
| Metric | Value |
|---|---|
| Decode throughput | 27–29 tok/s |
| Prefill (512 tokens) | 920 tok/s |
| Prefill (4096 tokens) | 1,916 tok/s |
| TTFT (512 tokens) | 490 ms |
| Max context (gpu_mem_util=0.93) | 65,536 tokens |
| KV cache capacity | ~127K tokens |
Effective throughput with a large system prompt (~23K tokens): ~21 tok/s.
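The ~21 tok/s effective figure falls out of the prefill and decode rates above. A sketch of the arithmetic, assuming a 1,000-token response (the output length is an assumption; the rates are from the benchmark table):

```python
def effective_tok_s(prompt_tokens, output_tokens, prefill_rate, decode_rate):
    """End-to-end generation rate: output tokens over total (prefill + decode) time."""
    total_s = prompt_tokens / prefill_rate + output_tokens / decode_rate
    return output_tokens / total_s

# ~23K-token system prompt, assumed 1,000-token response,
# long-prompt prefill rate and mid-range decode rate from the table above
eff = effective_tok_s(23_000, 1_000, 1_916, 28)
print(f"~{eff:.0f} tok/s effective")
```

Prefill of the large prompt costs ~12 s up front, which is what drags the effective rate below the raw 27–29 tok/s decode speed.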
Quantization excluded `lm_head`, `model.embed_tokens`, and the router gates (`re:.*block_sparse_moe\.gate$`), and calibration was run with `LLMCOMPRESSOR_MOE_CALIBRATE_ALL_EXPERTS=1` so every expert receives calibration data. This is exactly how we run and benchmark this model: one DGX Spark, nothing else.
Docker image: avarok/dgx-vllm-nvfp4-kernel:v23 (vLLM 0.16.0-rc2, CUDA 13.0, SM 12.1)
Download the model:
```bash
huggingface-cli download saricles/MiniMax-M2.5-REAP-172B-A10B-NVFP4-GB10 \
  --local-dir /opt/huggingface/models/MiniMax-M2.5-REAP-172B-NVFP4
```
Launch:
```bash
docker run -d --name minimax --gpus all --ipc=host \
  -v /opt/huggingface/models/MiniMax-M2.5-REAP-172B-NVFP4:/models/MiniMax-M2.5-REAP-172B-NVFP4 \
  -p 8000:8000 \
  -e VLLM_NVFP4_GEMM_BACKEND=marlin \
  -e VLLM_TEST_FORCE_FP8_MARLIN=1 \
  -e VLLM_USE_FLASHINFER_MOE_FP4=0 \
  -e VLLM_MARLIN_USE_ATOMIC_ADD=1 \
  -e MODEL=/models/MiniMax-M2.5-REAP-172B-NVFP4 \
  -e PORT=8000 \
  -e MAX_MODEL_LEN=65536 \
  -e GPU_MEMORY_UTIL=0.93 \
  -e "VLLM_EXTRA_ARGS=--trust-remote-code --kv-cache-dtype fp8 --attention-backend flashinfer --enable-auto-tool-choice --tool-call-parser minimax_m2 --reasoning-parser minimax_m2_append_think" \
  avarok/dgx-vllm-nvfp4-kernel:v23
```
Model takes ~3–4 minutes to load. Verify it's ready:
```bash
curl http://localhost:8000/v1/models
```
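Since loading takes a few minutes, it can be convenient to poll the endpoint instead of checking manually. A minimal sketch (the `fetch` parameter is injectable purely for testing; by default it does a plain HTTP GET against vLLM's `/v1/models` endpoint):

```python
import json
import time
import urllib.request

def wait_until_ready(url="http://localhost:8000/v1/models",
                     timeout=300, interval=5, fetch=None):
    """Poll the vLLM /v1/models endpoint until it lists a served model.

    Returns the served model id, or raises TimeoutError.
    """
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u, timeout=5).read()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            data = json.loads(fetch(url))
            if data.get("data"):
                return data["data"][0]["id"]  # served model name
        except Exception:
            pass  # server not up yet; retry after the interval
        time.sleep(interval)
    raise TimeoutError(f"server at {url} not ready after {timeout}s")
```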
Test it:
```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MiniMax-M2.5-REAP-172B-NVFP4",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
    "min_p": 0.01,
    "max_tokens": 512
  }'
```
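The same request can be built from Python. A small helper that bakes in the recommended sampling parameters (the function name is ours; POST the returned dict to `/v1/chat/completions` with any HTTP client):

```python
def chat_payload(user_message, max_tokens=512,
                 model="MiniMax-M2.5-REAP-172B-NVFP4"):
    """Build a /v1/chat/completions request body with the
    recommended sampling parameters for this model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 1.0,
        "top_p": 0.95,
        "top_k": 40,
        "min_p": 0.01,
        "max_tokens": max_tokens,
    }
```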
| Variable | Why |
|---|---|
| `VLLM_NVFP4_GEMM_BACKEND=marlin` | Use Marlin kernels for FP4 GEMM (FlashInfer JIT fails on Spark) |
| `VLLM_TEST_FORCE_FP8_MARLIN=1` | Required for Marlin backend activation |
| `VLLM_USE_FLASHINFER_MOE_FP4=0` | Disable FlashInfer for MoE FP4 (crashes with JIT ninja build) |
| `VLLM_MARLIN_USE_ATOMIC_ADD=1` | Atomic adds for Marlin (stability on GB10) |
| `GPU_MEMORY_UTIL=0.93` | 0.95 OOMs on Spark; 0.93 is the safe max |
| `--kv-cache-dtype fp8` | FP8 KV cache saves memory, enables ~127K-token capacity |
| `--attention-backend flashinfer` | FlashInfer for attention (not MoE) works fine |
`gpu_memory_utilization=0.95` will OOM. Use 0.93.

Tool calling is enabled via `--enable-auto-tool-choice --tool-call-parser minimax_m2`.

Recommended sampling parameters:

```json
{
  "temperature": 1.0,
  "top_p": 0.95,
  "top_k": 40,
  "min_p": 0.01
}
```