## How to use from Pi

### Start the llama.cpp server

```bash
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf LiquidAI/LFM2-2.6B-Exp-GGUF
```
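Once the server is up, a quick way to confirm it is serving the OpenAI-compatible API is a plain HTTP request. A minimal sketch, assuming llama-server's defaults (port 8080, no API key):

```bash
# Liveness check against llama-server's /health endpoint:
curl http://localhost:8080/health

# One-shot chat completion via the OpenAI-compatible endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Say hello in one sentence."}]}'
```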
### Configure the model in Pi

```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the following to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "LiquidAI/LFM2-2.6B-Exp-GGUF:"
        }
      ]
    }
  }
}
```
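A malformed `models.json` will break the provider configuration, so it is worth confirming the file still parses after editing. One generic way to check, assuming Python 3 is available:

```bash
# Pretty-prints the file if it is valid JSON, or reports the
# position of the syntax error if it is not:
python3 -m json.tool ~/.pi/agent/models.json
```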
### Run Pi

```bash
# Start Pi in your project directory:
pi
```
# LFM2-2.6B-Exp-GGUF

Liquid AI quick links: Try LFM β€’ Docs β€’ LEAP β€’ Discord

LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-2.6B-Exp

πŸƒ How to run LFM2

Example usage with llama.cpp:

```bash
llama-cli -hf LiquidAI/LFM2-2.6B-Exp-GGUF
```
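The `-hf` flag also accepts a `repo:quant` suffix to download a specific quantization rather than the default. The tag below is illustrative, not confirmed against this repo's file list; substitute one of the quantizations it actually ships:

```bash
# Hypothetical example: explicitly request a 4-bit K-quant
# (check the repo's file list for the exact tags available):
llama-cli -hf LiquidAI/LFM2-2.6B-Exp-GGUF:Q4_K_M
```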
The repository provides GGUF weights for the lfm2 architecture (about 3B parameters) in 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit quantizations.