How to use with llama.cpp
Install via Homebrew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
# Run inference directly in the terminal:
llama-cli -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
Install via WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
# Run inference directly in the terminal:
llama-cli -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
# Run inference directly in the terminal:
./llama-cli -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
Build from source
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
# Run inference directly in the terminal:
./build/bin/llama-cli -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
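
Whichever install method you used, llama-server exposes an OpenAI-compatible API (default port 8080) alongside its web UI, so any OpenAI client can talk to it. A minimal sketch using the openai Python package; the base_url assumes llama-server's default port, and the api_key can be any placeholder since the local server does not check it:

from openai import OpenAI

# Point the client at the local llama-server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local",  # llama-server serves the loaded model regardless of this field
    messages=[
        {"role": "user", "content": "Write a Python function to flatten a nested list."}
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)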
Use Docker
docker model run hf.co/Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16

🐍 Phi-3.5-mini-instruct Python Coding Assistant (GGUF, 16-bit)

Python code generation specialist. 171+ downloads. Fully local.



🎯 Python-First Design

Fine-tuned exclusively for Python code generation on:

  • 50,000+ Python scripts from GitHub
  • 200,000 Stack Overflow Q&A pairs
  • 15,000 Jupyter notebooks

The target output style is PEP 8 compliant, with type hints and docstrings (see the sketch below).
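
For illustration only, this hypothetical snippet (not actual model output) shows that target style: PEP 8 layout, type hints, and a docstring:

import json
from typing import Any


def load_config(path: str) -> dict[str, Any]:
    """Load a JSON configuration file and return it as a dictionary."""
    with open(path, encoding="utf-8") as handle:
        return json.load(handle)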

🚀 Quick Start

Via Ollama

ollama run Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit
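
Ollama also serves a local REST API (default port 11434), which is handy for scripting. A minimal sketch with the requests library, assuming the model tag above has already been pulled:

import requests

# /api/generate is Ollama's single-turn completion endpoint.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit",
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=300,
)
print(resp.json()["response"])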

Via Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

# Note: if the repo contains only GGUF weights, from_pretrained also needs
# gguf_file="<name of the .gguf file in the repo>".
model = AutoModelForCausalLM.from_pretrained(
    "Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit",
    torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit")

prompt = "Write a Python function to parse JSON and validate a schema"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to have any effect.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
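
Phi-3.5 is an instruct-tuned model, so wrapping the prompt with the tokenizer's chat template generally produces better results than raw text. A sketch reusing the model and tokenizer loaded above:

messages = [
    {"role": "user", "content": "Write a Python function to parse JSON and validate a schema."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))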

🔗 P2PCLAW Ecosystem

  • CAJAL-9B: scientific papers (HF Model)
  • CAJAL-4B: lightweight scientific papers (HF Model)
  • BenchClaw: code evaluation (HF Space)
  • P2PCLAW: research network (Website)

👤 Author

Francisco Angulo de Lafuente (Agnuxo1) · ORCID: 0009-0001-1634-7063


Built with 🔥 by the P2PCLAW Collective

Model details

  • Format: GGUF (16-bit / F16)
  • Model size: 4B params
  • Architecture: llama
  • Downloads last month: 171