EnigmAgent + OpenCLAW + P2PCLAW ecosystem
Collection • 16 items
Local-first AI tooling: encrypted MCP vault, eval tribunal, decentralized research network, neuromorphic GPU primitives.
Install llama.cpp with winget (Windows):

```sh
winget install llama.cpp
```

Or with brew (macOS and Linux):

```sh
brew install llama.cpp
```

Start a local OpenAI-compatible server with a web UI:

```sh
llama-server -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
```

Run inference directly in the terminal:

```sh
llama-cli -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
```
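Because llama-server speaks the OpenAI chat-completions protocol, any OpenAI-compatible client can talk to it. A minimal sketch with the openai Python package, assuming the server is running on its default port 8080; the api_key is a placeholder, since the local server does not check it:

```python
# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

response = client.chat.completions.create(
    # The model name is informational here; llama-server serves whatever it loaded.
    model="Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit",
    messages=[{"role": "user", "content": "Write a Python function to flatten a nested list."}],
    temperature=0.2,
    max_tokens=256,
)
print(response.choices[0].message.content)
```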
Alternatively, download a pre-built binary from https://github.com/ggerganov/llama.cpp/releases:

```sh
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16

# Run inference directly in the terminal:
./llama-cli -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
```

Or build llama.cpp from source:

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
```

With Docker Model Runner:

```sh
docker model run hf.co/Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit:F16
```

Python code generation specialist. 171+ downloads. Fully local.
Fine-tuned exclusively for Python code generation. Run it with Ollama:

```sh
ollama run Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit
```
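Ollama also exposes a local REST API on port 11434, so the same model can be called programmatically. A minimal sketch using the requests library against Ollama's standard /api/generate endpoint:

```python
# pip install requests
import requests

# Ollama serves a local REST API on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit",
        "prompt": "Write a Python function to parse JSON and validate schema",
        "stream": False,  # return one JSON object instead of a token stream
        "options": {"temperature": 0.2},
    },
    timeout=300,
)
print(resp.json()["response"])
```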
Or load it with Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit"
)

prompt = "Write a Python function to parse JSON and validate schema"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# do_sample=True is required for temperature to take effect;
# without it, generate() falls back to greedy decoding.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
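Since this is an instruct model, wrapping the prompt in the tokenizer's chat template usually yields better completions than feeding raw text. A minimal sketch reusing the model and tokenizer from above, assuming the repo ships Phi-3.5's chat template; the system message is an illustrative placeholder, not part of the model card:

```python
# Reuses `model` and `tokenizer` from the snippet above.
messages = [
    {"role": "system", "content": "You are a Python coding assistant."},  # illustrative
    {"role": "user", "content": "Write a Python function to parse JSON and validate schema"},
]

# apply_chat_template inserts the model's special chat tokens and
# appends the assistant header so generation starts in the right place.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.2)
# Strip the prompt tokens so only the generated answer is printed.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```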
| Component | Purpose | Link |
|---|---|---|
| CAJAL-9B | Scientific papers | HF Model |
| CAJAL-4B | Lightweight papers | HF Model |
| BenchClaw | Code evaluation | HF Space |
| P2PCLAW | Research network | Website |
Francisco Angulo de Lafuente (Agnuxo1) · ORCID: 0009-0001-1634-7063
Built with 🔥 by the P2PCLAW Collective