# Hunmin-VLM 0.12

A Vision-Language Model built on GLM-4V-9B with a custom projector and LoRA fine-tuning using ORPO (Odds Ratio Preference Optimization).
## Model Details

- Base Model: `THUDM/glm-4v-9b` (GLM-4.7-Flash)
- Training Method: 4-stage progressive training (projector warmup → projector training → LoRA SFT → LoRA ORPO)
- Image Tokens: 288 per image
- Hardware: 1 node × 8 NVIDIA B200 GPUs
- Framework: PyTorch + HuggingFace Transformers ≥ 5.0.0rc0
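The custom projector's architecture is not documented in this card. As a rough illustration only, a resampler-style module that pools vision-encoder patch features into a fixed 288 image tokens could look like the sketch below; the class name, dimensions, and attention design are all assumptions, not the actual implementation.

```python
import torch
import torch.nn as nn

class VisionLanguageProjector(nn.Module):
    """Hypothetical sketch: pools a variable number of vision-encoder
    patches into a fixed number of LLM image tokens (288 here).
    All dimensions are illustrative, not the model's real values."""

    def __init__(self, vision_dim=1792, llm_dim=4096, num_image_tokens=288):
        super().__init__()
        # Learned queries, one per output image token
        self.query = nn.Parameter(torch.randn(num_image_tokens, vision_dim) * 0.02)
        self.attn = nn.MultiheadAttention(vision_dim, num_heads=8, batch_first=True)
        # Project pooled features into the LLM embedding space
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_feats):
        # vision_feats: (batch, n_patches, vision_dim)
        b = vision_feats.size(0)
        q = self.query.unsqueeze(0).expand(b, -1, -1)
        pooled, _ = self.attn(q, vision_feats, vision_feats)  # (b, 288, vision_dim)
        return self.proj(pooled)                              # (b, 288, llm_dim)
```

In stages 0–1 only this module would receive gradients, which is why its checkpoint (`projector_state.pt`) is so small relative to the full model.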
## Training Stages

This model was trained in 4 progressive stages. Each stage is available as a separate branch.
| Branch | Stage | Description | GPUs | Steps | LR |
|---|---|---|---|---|---|
| `stage0` | 0 | Projector warmup (random init) | 1 | 200 | 3e-4 |
| `stage1` | 1 | Projector training (init from stage0) | 1 | 1,000 | 1e-4 |
| `stage2` | 2 | LoRA SFT, projector frozen | 8 | 500 | 1e-5 |
| `main` | 3 | LoRA ORPO, projector frozen, early stopping | 8 | 500 | 5e-6 |
### Stage Details

- Stage 0–1 (Projector): Only the vision-language projector is trained; the LLM backbone and vision encoder are frozen. Output: `projector_state.pt` (~17 MB).
- Stage 2 (LoRA SFT): The projector is frozen. LoRA adapters are applied to the LLM for supervised fine-tuning on VLM conversation data. The full merged model is saved (~76 GB).
- Stage 3 (LoRA ORPO): Same as stage 2, but trained with ORPO preference optimization. Text ORPO pairs are mixed in at a 10% ratio. Evaluation runs every 100 steps with early stopping (patience = 5).
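ORPO augments the standard SFT loss on the chosen response with an odds-ratio penalty that pushes the policy's odds toward the chosen response and away from the rejected one. A minimal sketch of that penalty term follows; the `beta` weighting and the use of average per-token log-probabilities are assumptions, and the actual training script may implement this differently.

```python
import math

def orpo_odds_ratio_loss(logp_chosen, logp_rejected, beta=0.1):
    """Odds-ratio penalty of ORPO (added to the NLL on the chosen response).

    logp_chosen / logp_rejected: average per-token log-probabilities of the
    chosen and rejected responses under the policy (both strictly < 0).
    """
    def log_odds(logp):
        # log(p / (1 - p)) computed stably from log p
        return logp - math.log1p(-math.exp(logp))

    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    # -log sigmoid(ratio), written stably as log(1 + e^{-ratio})
    return beta * math.log1p(math.exp(-ratio))
```

The penalty shrinks toward zero as the chosen response becomes much more likely than the rejected one, and grows when the model prefers the rejected response.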
## Training Data

- VLM pairs: `mncai/orpo-vlm-pairs-full` — multimodal preference pairs
- Text pairs: `mncai/orpo-text-pairs-full` — text-only preference pairs (10% mix in stage 3)
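The stage-3 text mix can be pictured as a stochastic interleave that draws a text-only pair with probability 0.1 and a multimodal pair otherwise. This is a hypothetical sketch; the real data pipeline may shard, batch, or weight examples differently.

```python
import random

def mixed_stream(vlm_pairs, text_pairs, text_ratio=0.1, seed=0):
    """Yield training examples, drawing a text-only preference pair with
    probability `text_ratio` and a multimodal pair otherwise.
    Stops when the chosen source runs out of examples."""
    rng = random.Random(seed)
    vlm_iter, text_iter = iter(vlm_pairs), iter(text_pairs)
    while True:
        source = text_iter if rng.random() < text_ratio else vlm_iter
        example = next(source, None)
        if example is None:  # chosen source exhausted
            return
        yield example
```

Over a long run, roughly 10% of yielded examples come from the text-only dataset.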
## Training Logs (W&B)

Full training metrics are publicly available:

- Project: `ttagu99/hunmin-vlm-0.12`
| Run | Stage |
|---|---|
| `glm47-mm-projector-1node-20260201-083651-stage0` | Projector warmup |
| `glm47-mm-projector-1node-20260201-083651-stage1` | Projector training |
| `glm47-mm-projector-1node-20260201-083651-stage2` | LoRA SFT |
| `glm47-mm-projector-1node-20260201-083651-stage3` | LoRA ORPO (final) |
## Branch Usage

```bash
# Final model (stage3 ORPO)
git clone https://huggingface.co/mncai/hunmin-vlm-0.12

# Specific stage
git clone -b stage2 https://huggingface.co/mncai/hunmin-vlm-0.12
```
Or with Python:

```python
from huggingface_hub import snapshot_download

# Final model
snapshot_download("mncai/hunmin-vlm-0.12")

# Specific stage
snapshot_download("mncai/hunmin-vlm-0.12", revision="stage1")
```
## Included Training Files

| File | Description |
|---|---|
| `training/train_glm47_mm.py` | Main training script (all 4 stages) |
| `training/trainjob-glm47-mm-projector-1node.yaml` | Kubeflow TrainJob YAML |
| `training/smoke_glm47_mm.py` | Smoke test script |
## License

Please contact mncai for licensing information.