Dataset used to train: Salesforce/xlam-function-calling-60k
How to use Manojb/Qwen3-4B-toolcalling-gguf-codex with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Manojb/Qwen3-4B-toolcalling-gguf-codex",
    filename="Qwen3-4B-Function-Calling-Pro.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
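Since the model is tuned for function calling, the more representative usage is to pass OpenAI-style tool schemas via llama-cpp-python's tools parameter. A minimal sketch; the get_weather schema is an illustrative placeholder (not part of the repo), and how reliably calls land in tool_calls depends on the chat template bundled in the GGUF:

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
# OpenAI-style: tool calls, if any, appear on the returned message
print(response["choices"][0]["message"].get("tool_calls"))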
How to use Manojb/Qwen3-4B-toolcalling-gguf-codex with llama.cpp:
# macOS/Linux (Homebrew):
brew install llama.cpp

# Windows (winget):
winget install llama.cpp

# Or download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Or build from source (binaries land in ./build/bin/):
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Manojb/Qwen3-4B-toolcalling-gguf-codex

# Run inference directly in the terminal:
llama-cli -hf Manojb/Qwen3-4B-toolcalling-gguf-codex
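Once llama-server is up, it speaks the OpenAI chat API (default port 8080), so any OpenAI-compatible client works against it. A minimal sketch using the openai Python package; the api_key is a required placeholder and is not checked locally:

# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

reply = client.chat.completions.create(
    model="Manojb/Qwen3-4B-toolcalling-gguf-codex",  # informational for llama-server
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(reply.choices[0].message.content)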
How to use Manojb/Qwen3-4B-toolcalling-gguf-codex with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Manojb/Qwen3-4B-toolcalling-gguf-codex"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Manojb/Qwen3-4B-toolcalling-gguf-codex",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
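To exercise function calling through vLLM's OpenAI-compatible endpoint, include tool schemas in the request; depending on your vLLM version, the server may need tool-parsing flags (e.g. --enable-auto-tool-choice) to return structured tool_calls. A sketch with the openai package; get_weather is an illustrative placeholder:

# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Manojb/Qwen3-4B-toolcalling-gguf-codex",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)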
How to use Manojb/Qwen3-4B-toolcalling-gguf-codex with Ollama:
ollama run hf.co/Manojb/Qwen3-4B-toolcalling-gguf-codex
How to use Manojb/Qwen3-4B-toolcalling-gguf-codex with Unsloth Studio:
# Linux/macOS:
curl -fsSL https://unsloth.ai/install.sh | sh

# Windows (PowerShell):
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for Manojb/Qwen3-4B-toolcalling-gguf-codex to start chatting

# Or, with no setup, open https://huggingface.co/spaces/unsloth/studio
# in your browser and search for Manojb/Qwen3-4B-toolcalling-gguf-codex
How to use Manojb/Qwen3-4B-toolcalling-gguf-codex with Docker Model Runner:
docker model run hf.co/Manojb/Qwen3-4B-toolcalling-gguf-codex
How to use Manojb/Qwen3-4B-toolcalling-gguf-codex with Lemonade:
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Manojb/Qwen3-4B-toolcalling-gguf-codex
lemonade run user.Qwen3-4B-toolcalling-gguf-codex-{{QUANT_TAG}}

# List installed models:
lemonade list
# Download and run instantly
ollama create qwen3:toolcall -f ModelFile
ollama run qwen3:toolcall
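The ollama create step above expects a ModelFile in the working directory. A minimal sketch; the FROM path assumes you downloaded the GGUF from this repo, and the parameters are illustrative defaults, not values from the model card:

# ModelFile (sketch)
FROM ./Qwen3-4B-Function-Calling-Pro.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 8192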
# Ask: "Get weather data for New York and format it as JSON"
# Model automatically calls weather API with proper parameters
# Ask: "Analyze this CSV file and create a visualization"
# Model selects appropriate tools: pandas, matplotlib, etc.
# Ask: "Fetch stock data, calculate moving averages, and email me the results"
# Model orchestrates multiple function calls seamlessly
# Query the model through the Ollama REST API
import requests
response = requests.post('http://localhost:11434/api/generate', json={
'model': 'qwen3:toolcall',
'prompt': 'Get the current weather in San Francisco and convert to Celsius',
'stream': False
})
print(response.json()['response'])
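For structured tool calls, recent Ollama releases also accept OpenAI-style tool schemas on the /api/chat endpoint; a minimal sketch, with get_weather again an illustrative placeholder:

import requests

response = requests.post("http://localhost:11434/api/chat", json={
    "model": "qwen3:toolcall",
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": False,
})
# Tool calls, if any, are returned under message.tool_calls
print(response.json()["message"].get("tool_calls"))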
# The model understands complex tool orchestration
prompt = """
I need to:
1. Fetch data from the GitHub API
2. Process the JSON response
3. Create a visualization
4. Save it as a PNG file
What tools should I use and how?
"""
| Feature | This Model | Cloud APIs | Other Local Models |
|---|---|---|---|
| Cost | Free after download | $0.01-0.10 per call | Often larger/heavier |
| Privacy | 100% local | Data sent to servers | Varies |
| Speed | Local inference, no network latency | Network dependent | Often slower |
| Reliability | Always available | Service dependent | Depends on setup |
| Customization | Full control | Limited | Varies |
PERFECT for developers who want local, private, cost-free function calling with full control over their stack.
Citation:
@misc{Qwen3-4B-toolcalling-gguf-codex,
  title={Qwen3-4B-toolcalling-gguf-codex: Local Function Calling},
  author={Manojb},
  year={2025},
  url={https://huggingface.co/Manojb/Qwen3-4B-toolcalling-gguf-codex}
}
Apache 2.0 - Use freely for personal and commercial projects
Built with ❤️ for the developer community
Base model: Qwen/Qwen3-4B-Instruct-2507