How to use with Unsloth Studio
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for LiquidAI/LFM2-1.2B-GGUF to start chatting
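Once the server is running, you can sanity-check it from a second terminal before opening the browser (a minimal sketch; it only confirms that something is listening on port 8888):

```shell
# Verify the Studio server is reachable (run from another terminal)
curl -sf http://localhost:8888 >/dev/null && echo "Studio is up"
```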
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for LiquidAI/LFM2-1.2B-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for LiquidAI/LFM2-1.2B-GGUF to start chatting
Quick Links
Liquid AI
Try LFM β€’ Docs β€’ LEAP β€’ Discord

LFM2-1.2B-GGUF

LFM2 is a new generation of hybrid models developed by Liquid AI, designed specifically for edge AI and on-device deployment. It sets a new standard for quality, speed, and memory efficiency.

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-1.2B

πŸƒ How to run LFM2

Example usage with llama.cpp:

llama-cli -hf LiquidAI/LFM2-1.2B-GGUF
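llama.cpp can also serve the model behind an OpenAI-compatible HTTP API via `llama-server`; a minimal sketch, assuming a recent llama.cpp build with `-hf` download support:

```shell
# Fetch the GGUF from Hugging Face and serve it on port 8080
llama-server -hf LiquidAI/LFM2-1.2B-GGUF --port 8080

# From another terminal, query the OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is LFM2?"}]}'
```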
Model details
Format: GGUF β€’ Model size: 1B params β€’ Architecture: lfm2 β€’ Downloads last month: 8,756

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
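To run a specific quantization rather than the default, recent llama.cpp builds accept a `:<quant>` suffix on the `-hf` repo argument; the tag below (`Q4_K_M`) is an assumption, so check the repo's file list for the exact names published there:

```shell
# Q4_K_M is a guessed quant tag; verify it against the repo's files
llama-cli -hf LiquidAI/LFM2-1.2B-GGUF:Q4_K_M
```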
