How to use alignment-handbook/zephyr-7b-sft-qlora with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the Mistral-7B base model, then attach the QLoRA adapter weights on top.
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base_model, "alignment-handbook/zephyr-7b-sft-qlora")
```
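Once loaded, the adapter-wrapped model can be queried like any causal LM. A minimal sketch, continuing from the snippet above; the prompt and generation settings are illustrative, and reusing the base model's tokenizer is an assumption (the adapter repo may ship its own):

```python
from transformers import AutoTokenizer

# Assumption: the base model's tokenizer is compatible with the adapter.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Explain QLoRA in one sentence."  # placeholder prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For deployment, the adapter can also be folded into the base weights with `model.merge_and_unload()`, which removes the PEFT indirection at inference time.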
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set:
Loss: 0.9502
Model description: More information needed
Intended uses & limitations: More information needed
Training and evaluation data: More information needed
The following results were recorded during training (the exact hyperparameters are not listed in this card; a hedged QLoRA configuration sketch follows the table):
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.9427 | 1.0 | 2179 | 0.9502 |
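Since the actual training settings are not preserved here, the following is a minimal sketch of how a QLoRA adapter of this kind is typically set up with peft and bitsandbytes. All values (rank, alpha, dropout, target modules, dtype) are illustrative assumptions, not the settings actually used for this model:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# Low-rank adapters on the attention projections; r/alpha/dropout are guesses.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

With this setup, only the small LoRA matrices receive gradients while the quantized base model stays frozen, which is what makes fine-tuning a 7B model feasible on modest hardware.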