Instructions for using concedo/CrabSoup-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use concedo/CrabSoup-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="concedo/CrabSoup-GGUF",
    filename="CrabSoup30_Q4_K_S/CrabSoup30_Q4_K_S.gguf-00001-of-00002.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello, who are you?"},
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use concedo/CrabSoup-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf concedo/CrabSoup-GGUF:Q4_K_S

# Run inference directly in the terminal:
llama-cli -hf concedo/CrabSoup-GGUF:Q4_K_S
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf concedo/CrabSoup-GGUF:Q4_K_S

# Run inference directly in the terminal:
llama-cli -hf concedo/CrabSoup-GGUF:Q4_K_S
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf concedo/CrabSoup-GGUF:Q4_K_S

# Run inference directly in the terminal:
./llama-cli -hf concedo/CrabSoup-GGUF:Q4_K_S
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf concedo/CrabSoup-GGUF:Q4_K_S

# Run inference directly in the terminal:
./build/bin/llama-cli -hf concedo/CrabSoup-GGUF:Q4_K_S
Use Docker
docker model run hf.co/concedo/CrabSoup-GGUF:Q4_K_S
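Any of the llama-server install paths above expose an OpenAI-compatible HTTP API, so the model can be queried programmatically without extra dependencies. The sketch below assumes the server's default port 8080; the `model` field is largely informational for a single-model llama-server instance.

```python
# Minimal sketch: query llama-server's OpenAI-compatible chat endpoint
# using only the Python standard library. Assumes llama-server is
# running locally on the default port 8080.
import json
import urllib.request


def chat_request(prompt: str,
                 url: str = "http://localhost:8080/v1/chat/completions"):
    """Build an HTTP request for the local chat-completions endpoint."""
    payload = {
        "model": "concedo/CrabSoup-GGUF:Q4_K_S",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def chat(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(chat_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example (requires a running server):
# print(chat("Hello, who are you?"))
```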
- LM Studio
- Jan
- Ollama
How to use concedo/CrabSoup-GGUF with Ollama:
ollama run hf.co/concedo/CrabSoup-GGUF:Q4_K_S
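Besides the interactive `ollama run` command, the Ollama daemon serves a REST API on port 11434, which is handy for scripting. A stdlib-only sketch of a non-streaming call to its `/api/chat` endpoint (the port is Ollama's default; the model name matches the pull command above):

```python
# Minimal sketch: call the local Ollama REST API (/api/chat) with only
# the Python standard library. Assumes `ollama serve` is running and
# the model has been pulled via the command above.
import json
import urllib.request


def ollama_chat_request(prompt: str,
                        url: str = "http://localhost:11434/api/chat"):
    """Build a non-streaming chat request for the Ollama REST API."""
    payload = {
        "model": "hf.co/concedo/CrabSoup-GGUF:Q4_K_S",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one JSON object instead of a stream
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


# Example (requires a running Ollama daemon):
# with urllib.request.urlopen(ollama_chat_request("Hello")) as resp:
#     print(json.load(resp)["message"]["content"])
```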
- Unsloth Studio
How to use concedo/CrabSoup-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for concedo/CrabSoup-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for concedo/CrabSoup-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for concedo/CrabSoup-GGUF to start chatting
- Pi
How to use concedo/CrabSoup-GGUF with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf concedo/CrabSoup-GGUF:Q4_K_S
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "concedo/CrabSoup-GGUF:Q4_K_S" }
      ]
    }
  }
}

Run Pi

# Start Pi in your project directory:
pi
- Hermes Agent
How to use concedo/CrabSoup-GGUF with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf concedo/CrabSoup-GGUF:Q4_K_S
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default concedo/CrabSoup-GGUF:Q4_K_S
Run Hermes
hermes
- Docker Model Runner
How to use concedo/CrabSoup-GGUF with Docker Model Runner:
docker model run hf.co/concedo/CrabSoup-GGUF:Q4_K_S
- Lemonade
How to use concedo/CrabSoup-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull concedo/CrabSoup-GGUF:Q4_K_S
Run and chat with the model
lemonade run user.CrabSoup-GGUF-Q4_K_S
List all available models
lemonade list
These models were made by merging https://huggingface.co/huihui-ai/Huihui-GLM-4.5-Air-abliterated-GGUF with https://huggingface.co/unsloth/GLM-4.5-Air-GGUF in various ratios.
The goal is to preserve as much of the model's capabilities as possible while remaining uncensored, since abliteration damages model intelligence.
GLM-4.5-Air: 0% Abliterated
This is the base censored model. It has the highest intelligence and can recall obscure facts, but it is extremely censored. Jailbreaking via system prompts is extremely difficult and often unsuccessful; only a strong postfill can jailbreak the model.
CrabSoup-30: 30% Abliterated, 70% Normal
This model is still heavily censored, but jailbreaks succeed somewhat more easily. General intelligence is slightly reduced compared to the unmodified model.
CrabSoup-55: 55% Abliterated, 45% Normal (RECOMMENDED)
This model is mostly uncensored by default. It still respects alignment requests added to the system prompt, making it steerable. Model intelligence is moderately affected: it retains obscure knowledge but often makes mistakes.
CrabSoup-76: 76% Abliterated, 24% Normal
This model is almost always uncensored, and sometimes responds in an uncensored way even when asked not to. Model intelligence is substantially degraded but still usable.
huihui-ai/Huihui-GLM-4.5-Air-abliterated-GGUF: 100% Abliterated
This is the abliterated model used in the above merges. Model intelligence is also strongly degraded, about the same level as CrabSoup-76. However, this model is incapable of refusal and will fulfill "harmful" requests even if instructed explicitly not to do so in a system prompt.