Dataset used to train: tarnava/kant_qa
How to use modular-ai/qwen with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-1.5B")
model = PeftModel.from_pretrained(base_model, "modular-ai/qwen")
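If the adapter is a LoRA adapter, it can also be merged into the base weights for adapter-free deployment. A minimal sketch using PEFT's merge_and_unload (the save path is illustrative):

merged = model.merge_and_unload()  # fold the adapter weights into the base model
merged.save_pretrained("qwen-kant-merged")  # hypothetical output directory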
How to use modular-ai/qwen with Unsloth Studio:

# Linux / macOS: install unsloth
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for modular-ai/qwen to start chatting

# Windows (PowerShell): install unsloth
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for modular-ai/qwen to start chatting

# Hosted option: no setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for modular-ai/qwen to start chatting
How to use modular-ai/qwen with Unsloth in Python:

pip install unsloth
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
model_name="modular-ai/qwen",
max_seq_length=2048,
)
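If GPU memory is limited, the same Unsloth loader can quantize at load time. A minimal sketch, assuming Unsloth's documented load_in_4bit flag for FastModel.from_pretrained:

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="modular-ai/qwen",
    max_seq_length=2048,
    load_in_4bit=True,  # assumption: 4-bit quantized loading to reduce VRAM
)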
Qwen2.5-1.5B fine-tuned on the tarnava/kant_qa dataset to answer questions in the voice of Immanuel Kant.

Full inference example with transformers and PEFT:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B", device_map="auto")
model = PeftModel.from_pretrained(model, "modular-ai/qwen")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
def ask_kant(q):
    prompt = f"### Instruction: You are Immanuel Kant.\n\n### Input: {q}\n\n### Response:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=300)
    # skip_special_tokens strips the trailing end-of-text token from the reply
    return tokenizer.decode(output[0], skip_special_tokens=True).split("### Response:")[-1].strip()
print(ask_kant("What is freedom?"))
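The generate call above decodes greedily; for more varied phrasing you can pass standard transformers sampling arguments instead (the values below are illustrative, not tuned). For example, swap the generate line inside ask_kant for:

output = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.7,  # illustrative value
    top_p=0.9,        # illustrative value
)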
Base model: Qwen/Qwen2.5-1.5B