EXAONE Easy Contract LoRA Adapter (v1.1)
A LoRA adapter trained to explain the clauses of housing lease contracts in an easy, friendly explanatory style that non-experts can understand.
Base Model
- LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct
What This Adapter Does
- Converts the meaning of contract clauses into plain-language explanations
- Trained not to add information that is absent from the source text
- Maintains the soft Korean explanatory endings ("~해야 돼요 / ~없어요 / ~로 되어 있어요")
- Generates one explanatory paragraph per clause
Intended Use
- Generating plain-language explanations of residential real-estate lease contracts
- Contract-based Q&A or plain-language contract reports
- Integration into FastAPI / RAG based services
How to Use
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct"

# Load the base model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    device_map="auto",
    trust_remote_code=True,
)

# Attach the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base,
    "temdy/exaone-3.5-2.4b-easycontract-qlora-v1.1",
)
Training Overview
Training method: Supervised Fine-Tuning (SFT)
Adaptation: LoRA / QLoRA
Data: clause-level explanation data for housing lease contracts
Frameworks: PEFT, TRL, Transformers
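The LoRA/QLoRA setup above can be sketched as a PEFT/TRL configuration. This is a minimal illustrative sketch only: the rank, alpha, target modules, and all SFT hyperparameters below are assumptions, since the card does not publish the actual training configuration.

```python
from peft import LoraConfig
from trl import SFTConfig

# Hypothetical LoRA settings; the released rank/alpha/targets are not published
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Hypothetical SFT settings for a QLoRA-style run
sft_config = SFTConfig(
    output_dir="exaone-easycontract-qlora",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
)
```

Configs like these would typically be passed to TRL's `SFTTrainer` together with the clause-explanation dataset.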
Limitations
Does not replace professional legal advice
Real contract decisions require the original contract text and expert review
License
Apache-2.0