Instructions to use acon96/Home-3B-v1-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use acon96/Home-3B-v1-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="acon96/Home-3B-v1-GGUF",
    filename="home-3b-v1.q2_k.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use acon96/Home-3B-v1-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf acon96/Home-3B-v1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf acon96/Home-3B-v1-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf acon96/Home-3B-v1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf acon96/Home-3B-v1-GGUF:Q4_K_M
```
Use pre-built binary
```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf acon96/Home-3B-v1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf acon96/Home-3B-v1-GGUF:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf acon96/Home-3B-v1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf acon96/Home-3B-v1-GGUF:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/acon96/Home-3B-v1-GGUF:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use acon96/Home-3B-v1-GGUF with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "acon96/Home-3B-v1-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "acon96/Home-3B-v1-GGUF",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```sh
docker model run hf.co/acon96/Home-3B-v1-GGUF:Q4_K_M
```
- Ollama
How to use acon96/Home-3B-v1-GGUF with Ollama:
```sh
ollama run hf.co/acon96/Home-3B-v1-GGUF:Q4_K_M
```
- Unsloth Studio
How to use acon96/Home-3B-v1-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for acon96/Home-3B-v1-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for acon96/Home-3B-v1-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for acon96/Home-3B-v1-GGUF to start chatting
```
- Docker Model Runner
How to use acon96/Home-3B-v1-GGUF with Docker Model Runner:
```sh
docker model run hf.co/acon96/Home-3B-v1-GGUF:Q4_K_M
```
- Lemonade
How to use acon96/Home-3B-v1-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull acon96/Home-3B-v1-GGUF:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.Home-3B-v1-GGUF-Q4_K_M
```
List all available models
```sh
lemonade list
```
Home 3B
The "Home" model is a fine tuning of the Phi-2 model from Microsoft. The model is able to control devices in the user's house as well as perform basic question and answering. The fine tuning dataset is a combination of the Cleaned Stanford Alpaca Dataset as well as a custom curated dataset designed to teach the model function calling.
The model is quantized using llama.cpp in order to enable running it in the very low-resource environments that are common with Home Assistant installations, such as Raspberry Pis.
The model can be used as an "instruct" type model with the ChatML prompt format. The system prompt provides information about the state of the Home Assistant installation, including available devices and callable services.
Example "system" prompt:
```
You are 'Al', a helpful AI Assistant that controls the devices in a house. Complete the following task as instructed with the information provided only.
Services: light.turn_off, light.turn_on, fan.turn_on, fan.turn_off
Devices:
light.office 'Office Light' = on
fan.office 'Office fan' = off
light.kitchen 'Kitchen Light' = on
```
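As a rough sketch of how such a prompt might be assembled programmatically (the helper below and its device-dictionary shape are hypothetical, not part of the model or the Home Assistant API):

```python
def build_system_prompt(services, devices):
    """Assemble the system prompt from callable services and a snapshot
    of entity states (hypothetical helper, for illustration only)."""
    lines = [
        "You are 'Al', a helpful AI Assistant that controls the devices "
        "in a house. Complete the following task as instructed with the "
        "information provided only.",
        "Services: " + ", ".join(services),
        "Devices:",
    ]
    for entity_id, (name, state) in devices.items():
        lines.append(f"{entity_id} '{name}' = {state}")
    return "\n".join(lines)

system_prompt = build_system_prompt(
    services=["light.turn_off", "light.turn_on", "fan.turn_on", "fan.turn_off"],
    devices={
        "light.office": ("Office Light", "on"),
        "fan.office": ("Office fan", "off"),
        "light.kitchen": ("Kitchen Light", "on"),
    },
)
```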
Output from the model consists of a response that should be relayed back to the user, along with an optional code block that invokes different Home Assistant "services". The model's output format for function calling is as follows:
turning on the kitchen lights for you now
```homeassistant
light.turn_on(light.kitchen)
```
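A consumer of the model therefore needs to split each completion into the spoken reply and any service calls. Below is a minimal sketch, assuming the calls always arrive inside a homeassistant-fenced block; the function and regex are illustrative, not the parser used by the repo's Home Assistant integration:

```python
import re

# Matches the optional ```homeassistant fenced block in a completion.
CALL_BLOCK = re.compile(r"```homeassistant\n(.*?)```", re.DOTALL)

def parse_response(text):
    """Split a completion into the user-facing reply and a list of
    Home Assistant service calls (sketch, not the official parser)."""
    match = CALL_BLOCK.search(text)
    calls = match.group(1).strip().splitlines() if match else []
    speech = CALL_BLOCK.sub("", text).strip()
    return speech, calls

speech, calls = parse_response(
    "turning on the kitchen lights for you now\n"
    "```homeassistant\nlight.turn_on(light.kitchen)\n```"
)
print(speech)  # turning on the kitchen lights for you now
print(calls)   # ['light.turn_on(light.kitchen)']
```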
Due to the mix of data used during fine-tuning, the model is also capable of basic instruct and QA tasks. For example, it can perform simple logic tasks such as the following:
```
user: if mary is 7 years old, and I am 3 years older than her. how old am I?
assistant: If Mary is 7 years old, then you are 10 years old (7 + 3 = 10).
```
Training
The model was trained as a LoRA on an RTX 3090 (24 GB) using a custom training script to enable gradient checkpointing. The LoRA has rank 32 and alpha 64, targets the fc1, fc2, Wqkv, and out_proj modules, and "saves" the wte and lm_head.linear modules. The embedding weights were "saved" and trained normally along with the rank matrices in order to train the newly added tokens into the embeddings. The full model is merged together at the end.
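For reference, that configuration roughly corresponds to the following PEFT setup; this is a reconstruction from the description above, not the actual custom training script:

```python
from peft import LoraConfig

# Approximation of the described LoRA configuration. The real run used
# a custom training script with gradient checkpointing on an RTX 3090.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["fc1", "fc2", "Wqkv", "out_proj"],
    # Fully train ("save") the embeddings and output head so the newly
    # added tokens receive trained weights; everything is merged at the end.
    modules_to_save=["wte", "lm_head.linear"],
    task_type="CAUSAL_LM",
)
```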
Datasets
Synthetic Dataset for SFT - https://github.com/acon96/home-llm
Stanford Alpaca Cleaned - https://huggingface.co/datasets/yahma/alpaca-cleaned
License
This model is a fine-tuning of the Microsoft Phi model series (MIT License) and utilizes datasets that are licensed under CC BY-NC-4.0. As such, this model is released under the same non-commercial Creative Commons license. The fine-tuned model is shared FOR RESEARCH PURPOSES ONLY. It is not to be used in any sort of commercial capacity.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 8-bit