qwen3-4b-structured-output-lora5
This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit, Unsloth). It is optimized for high-precision structured data generation and was trained on an NVIDIA L4 GPU.
This repository contains LoRA adapter weights only. The base model must be loaded separately.
Training Objective
This adapter is trained to improve accuracy on structured output formats (JSON, YAML, XML, TOML, CSV).
Loss is applied only to the final assistant output; intermediate reasoning (Chain-of-Thought) tokens are masked out of the loss.
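The masking scheme above can be sketched in plain Python (this is an illustration of the idea, not the actual training script; the token ids and span boundary below are made up):

```python
IGNORE_INDEX = -100  # label value ignored by PyTorch's CrossEntropyLoss

def mask_labels(input_ids, answer_start):
    """Copy input_ids into labels, masking every position before answer_start.

    Positions covering the prompt and the Chain-of-Thought span get
    IGNORE_INDEX, so the loss is computed only on the final structured output.
    """
    return [
        IGNORE_INDEX if i < answer_start else tok
        for i, tok in enumerate(input_ids)
    ]

# Toy sequence: [prompt + reasoning tokens][final JSON output tokens]
tokens = [101, 7592, 2088, 3007, 9999, 42, 43, 44]
labels = mask_labels(tokens, answer_start=5)
# → [-100, -100, -100, -100, -100, 42, 43, 44]
```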
Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit)
- Max sequence length: 2048
- Epochs: 2
- Learning rate: 1e-05
- LoRA: r=16, alpha=32
- Hardware: NVIDIA L4
- Precision: bfloat16
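For reference, a peft adapter configuration matching these hyperparameters would look roughly like the sketch below. The target modules and dropout are assumptions (typical choices for Qwen-family models); the card does not state them.

```python
from peft import LoraConfig

# Sketch of a LoRA configuration consistent with the card's hyperparameters.
lora_config = LoraConfig(
    r=16,            # LoRA rank, per the card
    lora_alpha=32,   # scaling alpha, per the card
    lora_dropout=0.0,  # assumed; not stated on the card
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed target modules (common Qwen attention projections);
    # not specified on the card.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```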
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "MasaKoma/qwen3-4b-structured-output-lora5"

# Load the base model first, then attach the LoRA adapter weights on top.
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)

# Example: ask for JSON output via the chat template (illustrative prompt).
messages = [{"role": "user", "content": "Return the capital of France as JSON with keys city and country."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
Sources & Terms (IMPORTANT)
Training data: u-10bei/structured_data_with_cot_dataset_512_v5
Dataset license: MIT. The dataset is used and distributed under the terms of the MIT License. Users must comply with the MIT License (including preservation of the copyright notice) and with the base model's original terms of use.