How to use timpal0l/Mistral-7B-v0.1-flashback-v2-instruct with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="timpal0l/Mistral-7B-v0.1-flashback-v2-instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("timpal0l/Mistral-7B-v0.1-flashback-v2-instruct")
model = AutoModelForCausalLM.from_pretrained("timpal0l/Mistral-7B-v0.1-flashback-v2-instruct")
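The direct-load snippet stops after loading; a minimal generation sketch that continues from it (assuming the repository ships a chat template, as the pipeline messages example above suggests) could look like:

messages = [{"role": "user", "content": "Who are you?"}]
# Render the chat prompt and tokenize (assumes the repo defines a chat template).
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))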
How to use timpal0l/Mistral-7B-v0.1-flashback-v2-instruct with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "timpal0l/Mistral-7B-v0.1-flashback-v2-instruct"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "timpal0l/Mistral-7B-v0.1-flashback-v2-instruct",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
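The same endpoint can be called from Python via the OpenAI client; this is a sketch assuming vLLM's default port 8000 and the openai package installed (it works just as well against the SGLang server below, with the port changed to 30000):

from openai import OpenAI

# vLLM does not check the API key, but the client requires one to be set.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="timpal0l/Mistral-7B-v0.1-flashback-v2-instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)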
How to use timpal0l/Mistral-7B-v0.1-flashback-v2-instruct with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "timpal0l/Mistral-7B-v0.1-flashback-v2-instruct" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "timpal0l/Mistral-7B-v0.1-flashback-v2-instruct",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Alternatively, launch the same SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "timpal0l/Mistral-7B-v0.1-flashback-v2-instruct" \
--host 0.0.0.0 \
--port 30000
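Either way the server is launched, it answers the OpenAI-compatible curl call shown above. SGLang also exposes a native /generate endpoint; this is a hedged sketch of calling it with the requests package (payload shape per SGLang's native API, worth verifying against the installed version):

import requests

# The prompt format matches the USER/ASSISTANT convention this model was tuned on.
response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "USER: What is the capital of France? ASSISTANT:",
        "sampling_params": {"max_new_tokens": 64, "temperature": 0.6},
    },
)
print(response.json()["text"])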
How to use timpal0l/Mistral-7B-v0.1-flashback-v2-instruct with Docker Model Runner:
docker model run hf.co/timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
Mistral-7B-v0.1-flashback-v2-instruct is an instruct-tuned version of the base model timpal0l/Mistral-7B-v0.1-flashback-v2. It has been fine-tuned on a machine-translated version of the instruct dataset OpenHermes-2.5.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    "timpal0l/Mistral-7B-v0.1-flashback-v2-instruct",
    device_map="auto",
)

# Swedish: "How many eggs do I have? I had 10 eggs, then I gave away 5 eggs.
# Then I got 3 eggs from a friend."
text = """
Hur många ägg har jag? Jag hade 10 ägg, sen gav jag bort 5 ägg.
Sen fick jag 3 ägg av en kompis.
"""

# The model expects a plain "USER: ... ASSISTANT:" prompt.
# do_sample=True is required for temperature to have any effect.
generated = pipe(
    f"USER:{text}ASSISTANT:",
    max_length=512,
    do_sample=True,
    temperature=0.6,
)
# Keep only the text after the "ASSISTANT:" marker.
print(generated[0]["generated_text"].split("ASSISTANT:")[-1].strip())
Output:
Du har 8 ägg. Här är resonemanget:
1. Du börjar med 10 ägg
2. Du ger bort 5 ägg, vilket lämnar dig med 10 - 5 = 5 ägg
3. Sedan får du 3 ägg av en kompis, vilket gör att du har 5 + 3 = 8 ägg.
(English: "You have 8 eggs. Here is the reasoning: 1. You start with 10 eggs. 2. You give away 5 eggs, which leaves you with 10 - 5 = 5 eggs. 3. Then you get 3 eggs from a friend, which means you have 5 + 3 = 8 eggs.")
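The same Swedish question can also be routed through the chat-style pipeline call shown at the top of this page; a sketch assuming the repository's chat template renders a prompt compatible with the model's USER/ASSISTANT tuning:

from transformers import pipeline

pipe = pipeline(
    "text-generation",
    "timpal0l/Mistral-7B-v0.1-flashback-v2-instruct",
    device_map="auto",
)
messages = [
    # Swedish: "How many eggs do I have? I had 10, gave away 5, then got 3 from a friend."
    {"role": "user", "content": "Hur många ägg har jag? Jag hade 10 ägg, sen gav jag bort 5 ägg. Sen fick jag 3 ägg av en kompis."},
]
result = pipe(messages, max_new_tokens=128)
# With chat input, recent transformers versions return the full message list.
print(result[0]["generated_text"][-1]["content"])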