🏥 GPT-Neo 125M — Medical QA LoRA Fine-tune

Model Description

GPT-Neo 125M fine-tuned with LoRA on a medical question-answering dataset (USMLE-style questions).

  • Developed by: Hazem Galal
  • Supervised by: Eng. Mahmoud Khorshed
  • License: MIT
  • Base Model: EleutherAI/gpt-neo-125M

Specifications

| Specification | Value |
| --- | --- |
| Base Model | EleutherAI/gpt-neo-125M |
| Dataset | medalpaca/medical_meadow_medqa |
| LoRA Rank (r) | 8 |
| LoRA Alpha | 16 |
| Block Size | 256 |
| Epochs | 3 |
| Learning Rate | 2e-4 |

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the merged checkpoint (base weights with the LoRA adapter already folded in)
model = AutoModelForCausalLM.from_pretrained("hazemgalal1/gptneo125m-medical-qa-merged")
tokenizer = AutoTokenizer.from_pretrained("hazemgalal1/gptneo125m-medical-qa-merged")

# The model expects the instruction-style prompt format used during fine-tuning
prompt = (
    "### Instruction:\nAnswer the following medical question.\n\n"
    "### Question:\nWhat is the first-line treatment for hypertension?\n\n"
    "### Answer:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
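Since the model was trained on this exact instruction template, keeping the formatting consistent matters. The template can be factored into a small helper (a hypothetical convenience function, not part of the model repo):

```python
def build_prompt(question: str) -> str:
    """Format a question using the instruction template this model was fine-tuned on."""
    return (
        "### Instruction:\nAnswer the following medical question.\n\n"
        f"### Question:\n{question}\n\n"
        "### Answer:\n"
    )

prompt = build_prompt("What is the first-line treatment for hypertension?")
```

The generated text will echo the prompt; splitting on `"### Answer:\n"` isolates the model's answer.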

Disclaimer

This model is intended for educational purposes only and is not a substitute for professional medical advice.
