mlx-community/Gemma-SEA-LION-v4-27B-IT-bf16

This model, mlx-community/Gemma-SEA-LION-v4-27B-IT-bf16, was converted to MLX format from aisingapore/Gemma-SEA-LION-v4-27B-IT using mlx-lm version 0.30.7.
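Since the checkpoint was produced with mlx-lm, it can be loaded through that library's standard Python API. The sketch below is a minimal, hedged example (it assumes an Apple-silicon Mac with `mlx-lm` installed, and the translation prompt is illustrative only):

```python
# Minimal usage sketch with the mlx-lm Python API.
# Assumes: Apple-silicon Mac, `pip install mlx-lm` already done.
from mlx_lm import load, generate

# Download (if needed) and load the bf16 weights and tokenizer.
model, tokenizer = load("mlx-community/Gemma-SEA-LION-v4-27B-IT-bf16")

# Illustrative translation prompt; wrap it in the model's chat template.
messages = [{"role": "user", "content": "Translate to Thai: Good morning."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a response (verbose=True streams tokens to stdout).
text = generate(model, tokenizer, prompt=prompt, verbose=True)
print(text)
```

The same model can also be served from the command line with `mlx_lm.generate --model mlx-community/Gemma-SEA-LION-v4-27B-IT-bf16 --prompt "..."`.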

Other translation-focused MLX quantizations for Apple silicon can be found at https://huggingface.co/bibproj

  • 13 languages: Burmese, English, Indonesian, Javanese, Khmer, Lao, Malay, Mandarin, Sundanese, Tagalog, Tamil, Thai, and Vietnamese
  • Last updated: 2025-08-25
  • Model size: 27B parameters
  • Tensor type: BF16
  • Format: MLX (Safetensors)

