# Abliterated GGUF Model for MaziyarPanahi/gemma-3-1b-it-GGUF

Quantized GGUF export of MaziyarPanahi/gemma-3-1b-it-GGUF with refusal behavior removed via directional ablation using Apostate.
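Directional ablation removes a model's refusal behavior by projecting an estimated "refusal direction" out of its hidden states. The sketch below is a hypothetical, minimal NumPy illustration of that projection step, not Apostate's actual implementation; the vectors and the `ablate` helper are invented for illustration.

```python
import numpy as np

def ablate(h: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove from hidden state h its component along direction r."""
    r_hat = r / np.linalg.norm(r)          # unit refusal direction
    return h - np.dot(h, r_hat) * r_hat   # orthogonal projection

# Toy example: the second coordinate plays the role of the refusal direction.
h = np.array([3.0, 4.0])
r = np.array([0.0, 1.0])
print(ablate(h, r))  # component along r is zeroed out
```

In practice the direction is estimated from activation differences between prompts the model refuses and prompts it answers, and the projection is applied at inference time across layers.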
## Details
| Parameter | Value |
|---|---|
| Base model | MaziyarPanahi/gemma-3-1b-it-GGUF |
| Format | GGUF |
| Quantization | Q4_K_M |
| File size | 768.5 MB |
## Usage

```python
from apostate import GGUFInference

inference = GGUFInference("g-ntovas/gemma-3-1b-it-gguf-q4_k_m-apostate/gemma-3-1b-it-gguf-q4_k_m-apostate.gguf")
for token in inference.chat_stream([{"role": "user", "content": "Hello!"}]):
    print(token, end="", flush=True)
```
## Citation

```bibtex
@software{apostate,
  title = {Apostate: Inference-Time Refusal Ablation},
  url   = {https://github.com/g-ntovas/apostate},
}
```
## Model tree for g-ntovas/gemma-3-1b-it-gguf-q4_k_m-apostate

- Base model: google/gemma-3-1b-pt
- Finetuned: google/gemma-3-1b-it
- Quantized: MaziyarPanahi/gemma-3-1b-it-GGUF