This is an abliterated version of Llama-3.3-8B-Instruct, created using Heretic v1.1.0. It is the 128k-context variant of the model.

The quantizations were created using an imatrix merged from combined_en_medium and harmful.txt, so that the calibration data also covers the kinds of prompts the abliteration is meant to unlock.
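For reference, a typical imatrix quantization workflow with llama.cpp looks like the sketch below. The file names, the pre-merged calibration file, and the Q4_K_M quantization type are illustrative assumptions, not the exact settings used for this release:

```shell
# Compute the importance matrix from the merged calibration data.
# calibration.txt is a hypothetical file assumed to contain
# combined_en_medium and harmful.txt concatenated together.
./llama-imatrix -m Llama-3.3-8B-Instruct-128k-abliterated-f16.gguf \
    -f calibration.txt -o imatrix.dat

# Quantize the full-precision GGUF using that importance matrix
# (Q4_K_M chosen here purely as an example type).
./llama-quantize --imatrix imatrix.dat \
    Llama-3.3-8B-Instruct-128k-abliterated-f16.gguf \
    Llama-3.3-8B-Instruct-128k-abliterated-Q4_K_M.gguf Q4_K_M
```

The imatrix step weights each tensor's quantization error by how strongly its activations fire on the calibration text, which is why merging a "harmful" set into the calibration data matters for an abliterated model.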
Performance
| Metric | This model | Original model |
|---|---|---|
| Refusals (out of 100 test prompts) | 5/100 | 95/100 |
| KL divergence from original | 0.0919 | 0 |
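The KL divergence row measures how far the modified model's next-token distribution drifts from the original's (it is 0 for the original model against itself, by definition). A minimal sketch of the underlying computation for a single pair of next-token distributions, assuming the two models' softmax outputs are already available as probability lists:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) = sum_i p_i * log(p_i / q_i), in nats.

    p: next-token probabilities from the original model
    q: next-token probabilities from the modified model
    eps guards against log(0) for zero-probability tokens.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Identical distributions diverge by exactly zero, matching the
# 0 reported for the original model compared against itself.
p = [0.7, 0.2, 0.1]
print(kl_divergence(p, p))   # 0.0

# A slightly shifted distribution yields a small positive value,
# analogous to the 0.0919 reported for the abliterated model
# (illustrative numbers, not the actual measurement).
q = [0.6, 0.25, 0.15]
print(kl_divergence(p, q))
```

In practice this is averaged over many prompts and token positions; a small value like 0.0919 indicates the abliteration changed the model's output distribution only mildly.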
BibTeX entry and citation info
```bibtex
@misc{heretic,
  author       = {Weidmann, Philipp Emanuel},
  title        = {Heretic: Fully automatic censorship removal for language models},
  year         = {2025},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/p-e-w/heretic}}
}
```
Model tree for noctrex/Llama-3.3-8B-Instruct-128k-abliterated-GGUF

Base model: allura-forge/Llama-3.3-8B-Instruct