# Abliterated GGUF Model for MaziyarPanahi/gemma-3-1b-it-GGUF

Quantized GGUF export of MaziyarPanahi/gemma-3-1b-it-GGUF with refusal behavior removed via directional ablation using Apostate.
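The directional-ablation ("abliteration") idea can be sketched in a few lines: given a precomputed refusal direction in activation space, the component of each hidden state along that direction is projected out. This is a minimal NumPy illustration of the projection step only, not Apostate's actual implementation; the `ablate_direction` helper and its inputs are hypothetical.

```python
import numpy as np

def ablate_direction(hidden, direction):
    """Remove the component of each hidden state along `direction`.

    hidden:    (n_tokens, d_model) activation matrix
    direction: (d_model,) vector, e.g. an estimated refusal direction
    """
    d = direction / np.linalg.norm(direction)  # normalize to a unit vector
    # Subtract the projection of every row onto d: h - (h . d) d
    return hidden - np.outer(hidden @ d, d)

# Toy check: after ablation, activations are orthogonal to the direction.
h = np.random.randn(4, 8)
r = np.random.randn(8)
out = ablate_direction(h, r)
print(np.allclose(out @ (r / np.linalg.norm(r)), 0))  # True
```

In practice the refusal direction is typically estimated as a difference of mean activations between harmful and harmless prompts, and the projection is applied at inference time inside the transformer layers.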

## Details

| Parameter    | Value                            |
| ------------ | -------------------------------- |
| Base model   | MaziyarPanahi/gemma-3-1b-it-GGUF |
| Format       | GGUF                             |
| Quantization | Q4_K_M                           |
| Ablation     | Apostate                         |
| File size    | 768.5 MB                         |

## Usage

```python
from apostate import GGUFInference

inference = GGUFInference("g-ntovas/gemma-3-1b-it-gguf-q4_k_m-apostate/gemma-3-1b-it-gguf-q4_k_m-apostate.gguf")
for token in inference.chat_stream([{"role": "user", "content": "Hello!"}]):
    print(token, end="", flush=True)
```

## Citation

```bibtex
@software{apostate,
  title = {Apostate: Inference-Time Refusal Ablation},
  url = {https://github.com/g-ntovas/apostate},
}
```