mlx-community/gemma-3n-E2B-8bit
How to use mlx-community/gemma-3n-E2B-8bit with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("image-text-to-text", model="mlx-community/gemma-3n-E2B-8bit")

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText
processor = AutoProcessor.from_pretrained("mlx-community/gemma-3n-E2B-8bit")
model = AutoModelForImageTextToText.from_pretrained("mlx-community/gemma-3n-E2B-8bit")
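A minimal sketch of actually querying the loaded pipeline, assuming the chat-style message format that Transformers image-text-to-text pipelines accept; the image URL and max_new_tokens value here are only examples:

# Sketch: query the pipeline with an image and a question (message format assumed)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
result = pipe(text=messages, max_new_tokens=100)
print(result[0]["generated_text"])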
How to use mlx-community/gemma-3n-E2B-8bit with MLX:

# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config
# Load the model
model, processor = load("mlx-community/gemma-3n-E2B-8bit")
config = load_config("mlx-community/gemma-3n-E2B-8bit")
# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."
# Apply chat template
formatted_prompt = apply_chat_template(
processor, config, prompt, num_images=1
)
# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
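generate also accepts local file paths in place of URLs, and apply_chat_template can be told how many images to expect. A small variation on the snippet above; the file names are placeholders, and prompting with more than one image assumes the model supports it:

# Variation: two local images instead of one URL (paths are placeholders)
images = ["photo_1.jpg", "photo_2.jpg"]
prompt = "Compare these two images."
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(images)
)
output = generate(model, processor, formatted_prompt, images)
print(output)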
How to use mlx-community/gemma-3n-E2B-8bit with vLLM:

# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "mlx-community/gemma-3n-E2B-8bit"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mlx-community/gemma-3n-E2B-8bit",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
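Because the server exposes an OpenAI-compatible API, you can also call it from Python with the openai client. A sketch assuming the server above is running locally on port 8000; the api_key value is a placeholder, since vLLM does not check it unless configured to:

from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # placeholder key

completion = client.completions.create(
    model="mlx-community/gemma-3n-E2B-8bit",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)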
How to use mlx-community/gemma-3n-E2B-8bit with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "mlx-community/gemma-3n-E2B-8bit" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mlx-community/gemma-3n-E2B-8bit",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Or launch the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "mlx-community/gemma-3n-E2B-8bit" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API) as shown above.

How to use mlx-community/gemma-3n-E2B-8bit with Docker Model Runner:
docker model run hf.co/mlx-community/gemma-3n-E2B-8bit
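Model Runner also exposes an OpenAI-compatible API. A hedged sketch assuming TCP access is enabled on the default port 12434; the host, port, and path depend on how Model Runner is set up on your machine:

# Assumes TCP access is enabled, e.g. via Docker Desktop settings (setup varies by install)
curl -X POST "http://localhost:12434/engines/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hf.co/mlx-community/gemma-3n-E2B-8bit",
    "messages": [{"role": "user", "content": "Once upon a time,"}]
  }'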
This model was converted to MLX format from google/gemma-3n-E2B using mlx-vlm version 0.3.1.
Refer to the original model card for more details on the model.
Quick start with the mlx-vlm CLI:

pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/gemma-3n-E2B-8bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>