AbominationScience-12B-v4 GGUF Quantizations
When the choice is not random.
This model was converted to GGUF format using llama.cpp.
For more information about the model, see the original model card: Khetterman/AbominationScience-12B-v4.
Available Quantizations
| Type | Quantized GGUF Model | Size |
|---|---|---|
| Q4_0 | Khetterman/AbominationScience-12B-v4-Q4_0.gguf | 6.58 GiB |
| Q6_K | Khetterman/AbominationScience-12B-v4-Q6_K.gguf | 9.36 GiB |
| Q8_0 | Khetterman/AbominationScience-12B-v4-Q8_0.gguf | 12.1 GiB |
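To try one of the quants above, a typical llama.cpp workflow looks like the sketch below. The exact `.gguf` file name inside the repo is an assumption inferred from the table; check the repo's file listing before downloading.

```shell
# Fetch the Q4_0 quant from the Hub (file name assumed from the table above)
huggingface-cli download Khetterman/AbominationScience-12B-v4-GGUF \
  AbominationScience-12B-v4-Q4_0.gguf --local-dir .

# Run a quick generation with llama.cpp's CLI.
# -ngl 99 offloads all layers to the GPU if one is available; -c sets the context size.
llama-cli -m AbominationScience-12B-v4-Q4_0.gguf -ngl 99 -c 4096 \
  -p "Hello, who are you?" -n 128
```

Q4_0 is the smallest and fastest of the three; Q8_0 is closest to the original weights at roughly twice the size.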
My thanks to the authors of the original models; your work is incredible. Have a good time!
Base model: Khetterman/AbominationScience-12B-v4