Model Overview

This model is a fine-tuned version of Gemma 3 270M adapted for text-to-emoji translation. Fine-tuning was performed with Quantized Low-Rank Adaptation (QLoRA), a parameter-efficient technique that reduces memory usage and speeds up training by quantizing the frozen base weights to 4 bits and training only small low-rank adapter matrices. The Hugging Face TRL (Transformer Reinforcement Learning) library was used to run the QLoRA fine-tuning.

This model is designed to take a text input and generate corresponding emoji outputs, specializing Gemma 3 270M for this specific task.
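The QLoRA setup described above can be sketched with TRL's `SFTTrainer`. This is a minimal illustration, not the exact training script: the dataset id, the column names (`text`, `emoji`), the hyperparameters, and the output directory are all assumptions. The heavy dependencies are imported inside `train()` so the prompt formatter can be inspected without a GPU.

```python
# Sketch of QLoRA fine-tuning for text-to-emoji translation with Hugging Face TRL.
# Dataset id, column names, and hyperparameters below are illustrative assumptions.

def to_chat_example(row):
    """Convert one dataset row into the conversational format SFTTrainer accepts."""
    return {
        "messages": [
            {"role": "user", "content": f"Translate to emoji: {row['text']}"},
            {"role": "assistant", "content": row["emoji"]},
        ]
    }

def train():
    # Imported here so the module loads without trl/peft/bitsandbytes or a GPU.
    import torch
    from datasets import load_dataset
    from peft import LoraConfig
    from transformers import BitsAndBytesConfig
    from trl import SFTConfig, SFTTrainer

    # 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    # Small trainable low-rank adapters: the "LoRA" in QLoRA.
    lora = LoraConfig(r=8, lora_alpha=16, target_modules="all-linear", task_type="CAUSAL_LM")

    dataset = load_dataset("your-username/text-to-emoji", split="train")  # hypothetical repo
    dataset = dataset.map(to_chat_example)

    trainer = SFTTrainer(
        model="google/gemma-3-270m-it",
        train_dataset=dataset,
        peft_config=lora,
        args=SFTConfig(
            output_dir="gemma-3-270m-emoji-qlora",
            model_init_kwargs={"quantization_config": bnb, "device_map": "auto"},
            per_device_train_batch_size=4,
            num_train_epochs=1,
        ),
    )
    trainer.train()
```

Calling `train()` requires a GPU and the `trl`, `peft`, and `bitsandbytes` packages; the fine-tuning notebook linked below is the authoritative recipe.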

Emoji generator web app

This demo runs a Gemma 3 270M IT model fine-tuned for text-to-emoji translation directly in the browser. Gemma 3 is supported by web AI frameworks that make deployment easy. Run the app using either the MediaPipe LLM Inference API or Transformers.js with ONNX Runtime.

If you don't have a fine-tuned model, view the resources below.


Preview the app on Hugging Face.

Resources

You can use these notebooks in Google Colab for fine-tuning and optimizing Gemma 3 270M for web. To fine-tune the model for the emoji translation task, you can either create your own dataset or use our premade dataset.
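If you create your own dataset, each example only needs a text phrase and its emoji rendering. A minimal sketch of writing such pairs as JSONL, one common format `datasets.load_dataset` accepts (the pairs and file name here are illustrative, not taken from the premade dataset):

```python
import json

# Illustrative text-to-emoji pairs; a real dataset would contain many more.
pairs = [
    {"text": "I love pizza", "emoji": "🍕❤️"},
    {"text": "Going to the beach this weekend", "emoji": "🏖️🌊😎"},
    {"text": "Happy birthday!", "emoji": "🎂🎉🎈"},
]

def write_jsonl(rows, path):
    """Write one JSON object per line; loadable via load_dataset("json", data_files=path)."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

write_jsonl(pairs, "emoji_dataset.jsonl")
```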

| Notebook | Description |
| --- | --- |
| Fine-tune Gemma 3 270M | Fine-tune Gemma for emoji translation using Quantized Low-Rank Adaptation (QLoRA) |
| Convert to MediaPipe | Quantize and convert your fine-tuned Gemma 3 270M model to `.litert`, then bundle it into a `.task` file for use with the LLM Inference API |
| Convert to ONNX | Quantize and convert your fine-tuned Gemma 3 270M model to `.onnx` for use with Transformers.js via ONNX Runtime |
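Both conversion paths start from a plain Transformers checkpoint, so the QLoRA adapter is typically merged back into the base weights before export. A minimal sketch using PEFT's `merge_and_unload`, assuming the adapter was saved to a local directory (both paths are illustrative):

```python
def merge_adapter(adapter_dir="gemma-3-270m-emoji-qlora", out_dir="gemma-3-270m-emoji-merged"):
    """Fold a trained LoRA adapter into the base model and save a standalone checkpoint."""
    # Imported here so the module loads without transformers/peft installed.
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained(adapter_dir)  # base model + adapter
    merged = model.merge_and_unload()  # bake adapter weights into the base layers
    merged.save_pretrained(out_dir)
    AutoTokenizer.from_pretrained(adapter_dir).save_pretrained(out_dir)
    return out_dir
```

The merged checkpoint in `out_dir` is what the MediaPipe and ONNX conversion notebooks then quantize and package.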
Model size: 0.4B params · Tensor type: BF16 (Safetensors)
Model repository: huggingworld/myemoji-gemma-3-270m-it (fine-tuned from Gemma 3 270M IT)