This is an abliterated version of Llama-3.3-8B-Instruct, created using Heretic v1.1.0.

This is the 128k-context variant of the model.
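When running the GGUF with llama.cpp, the context size must be requested explicitly to use the full 128k window. A minimal sketch (the quantization filename below is illustrative; substitute whichever quant you downloaded):

```bash
# Request the full 128k context (128 * 1024 = 131072 tokens).
# The filename is an example, not necessarily one of this repo's files.
llama-cli \
  -m Llama-3.3-8B-Instruct-128k-abliterated.Q4_K_M.gguf \
  -c 131072 \
  -p "Summarize the following document:"
```

Note that a full 131072-token KV cache needs substantial memory; smaller `-c` values work fine for shorter prompts.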

The quantizations were created using an importance matrix (imatrix) generated from combined_en_medium merged with harmful.txt, so that the calibration data also exercises the model's abliterated, non-refusing behavior.
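For anyone reproducing the quants, the usual llama.cpp imatrix workflow looks roughly like the sketch below. The file names are assumptions for illustration, not the exact files used here:

```bash
# Concatenate the general-purpose calibration text with harmful.txt,
# so the calibration set covers the model's non-refusing behavior too.
cat combined_en_medium.txt harmful.txt > calibration.txt

# Compute the importance matrix from a full-precision GGUF.
llama-imatrix \
  -m Llama-3.3-8B-Instruct-128k-abliterated.F16.gguf \
  -f calibration.txt \
  -o imatrix.dat

# Quantize with the imatrix applied (Q4_K_M as an example target).
llama-quantize --imatrix imatrix.dat \
  Llama-3.3-8B-Instruct-128k-abliterated.F16.gguf \
  Llama-3.3-8B-Instruct-128k-abliterated.Q4_K_M.gguf \
  Q4_K_M
```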

## Performance

| Metric        | This model | Original model |
|---------------|------------|----------------|
| Refusals      | 5/100      | 95/100         |
| KL divergence | 0.0919     | 0              |
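Refusals are counted over 100 test prompts. The KL divergence compares a model's output distribution against the original model's (hence 0 for the original itself), so a value near 0.09 suggests the abliteration changed overall behavior only modestly. As a reminder, for distributions $P$ and $Q$,

$$D_{\mathrm{KL}}(P \parallel Q) = \sum_i P(i) \log \frac{P(i)}{Q(i)},$$

where which model supplies $P$ and which $Q$ follows Heretic's measurement convention.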

## BibTeX entry and citation info

```bibtex
@misc{heretic,
  author = {Weidmann, Philipp Emanuel},
  title = {Heretic: Fully automatic censorship removal for language models},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/p-e-w/heretic}}
}
```
## Model details

- Format: GGUF
- Model size: 8B params
- Architecture: llama
- Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
