# Huihui-Qwen3.5-0.8B-abliterated – fp16 MLX
MLX-converted version of huihui-ai/Huihui-Qwen3.5-0.8B-abliterated for Apple Silicon.
## Model Details
| Property | Value |
|---|---|
| Base model | Qwen3.5-0.8B |
| Type | Vision-Language Model (VLM) |
| Format | MLX fp16 (bfloat16) |
| Size | ~1.75 GB |
| Abliterated | Yes (censorship/refusal behavior removed) |
## Variants
| Variant | Size | Quality | Link |
|---|---|---|---|
| fp16 | ~1.75 GB | Highest | This repo |
| MXFP8 | ~0.98 GB | Near-native | mxfp8 |
| MXFP4 | ~0.6 GB | Good | mxfp4 |
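As a rough sanity check, the sizes above follow from bytes per parameter: fp16/bf16 stores 2 bytes per weight, MXFP8 about 1, and MXFP4 about 0.5 (plus block-scale overhead). A minimal sketch, assuming a ~0.9 B parameter count:

```python
def approx_size_gb(params_billions: float, bits_per_param: float) -> float:
    """Estimate on-disk weight size in GB from parameter count and precision."""
    # params (in billions of weights) * bits per weight / 8 bits per byte
    return params_billions * bits_per_param / 8

# fp16: 16 bits/param -> ~1.8 GB, close to the ~1.75 GB listed above
print(approx_size_gb(0.9, 16))
# MXFP8: ~0.9 GB; MXFP4: ~0.45 GB (real files are slightly larger due to scales/metadata)
print(approx_size_gb(0.9, 8))
print(approx_size_gb(0.9, 4))
```

The small gap between these estimates and the listed sizes comes from non-quantized tensors (embeddings, norms) and quantization scale metadata.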
## Usage

```shell
pip install mlx-vlm
```

Text generation:

```shell
python -m mlx_vlm.generate \
  --model AITRADER/Huihui-Qwen3.5-0.8B-abliterated-fp16-MLX \
  --prompt "Describe this image in detail" \
  --image <path-or-url>
```

Chat UI:

```shell
python -m mlx_vlm.chat_ui \
  --model AITRADER/Huihui-Qwen3.5-0.8B-abliterated-fp16-MLX
```
## Credits

- Original model: [huihui-ai/Huihui-Qwen3.5-0.8B-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3.5-0.8B-abliterated)