Qwen3-4b-Z-Image-Turbo-AbliteratedV1 πŸš€

"I'm sorry, I can't generate that image..." SAID NO ONE EVER (well, almost).

Welcome to the ablation station! πŸš‚πŸ’¨

This is the abliterated version of the Z-Image-Turbo text encoder. I took p-e-w's Heretic method and ran it through 1,000 trials, specifically targeting BOTH image-generation refusals and those pesky general refusals.

The result?

  • KL Divergence: A tiny 0.0004 (basically no lobotomy! 🧠✨)
  • Refusal Rate: Only 4/100 in my torture tests.

It's ready to generate what you want, when you want it.

Available GGUF Formats

| Quantization | Size | Description |
|---|---|---|
| F16 | 8.05 GB | Full Precision - Original Quality |
| Q8_0 | 4.28 GB | High Precision - Best |
| Q6_K | 3.31 GB | Good Balance - Faster |
| Q5_K_M | 2.89 GB | Medium Precision - Recommended |
| Q4_K_M | 2.50 GB | Standard Low - Fast |
| Q4_K_S | 2.38 GB | Smaller Low - Faster |
| Q3_K_M | 2.08 GB | Very Low - Fastest |
| Q2_K | 1.67 GB | Minimum Size - Extreme |

Origins

Brought to you by the same chaotic good energy behind:

Disclaimer

I am not responsible for what you create with this model. This is a model weights file, not a moral compass. You are responsible for your own outputs and for following local laws. Use this power wisely.
