---
library_name: transformers
datasets:
- schneewolflabs/Denker-SFT
base_model:
- nbeerbower/Schreiber-mistral-nemo-12B
---

# Denker-test2-12B

Testing Merlina with no tokenizer changes.

## Training Configuration

| Parameter | Value |
|-----------|-------|
| Training Mode | SFT |
| Base Model | `nbeerbower/Schreiber-mistral-nemo-12B` |
| Learning Rate | 9e-05 |
| Epochs | 1 |
| Batch Size | 1 |
| Gradient Accumulation | 16 |
| Effective Batch Size | 16 |
| Max Sequence Length | 4096 |
| Optimizer | paged_adamw_8bit |
| LR Scheduler | cosine |
| Warmup Ratio | 0.05 |
| Weight Decay | 0.01 |
| Max Grad Norm | 0.25 |
| Seed | 42 |
| LoRA Rank (r) | 128 |
| LoRA Alpha | 256 |
| LoRA Dropout | 0.05 |
| Target Modules | up_proj, down_proj, gate_proj, k_proj, q_proj, v_proj, o_proj |
| Quantization | 4-bit (NF4) |
| GPU | NVIDIA RTX A6000 |

---

![Trained with Merlina](https://raw.githubusercontent.com/Schneewolf-Labs/Merlina/refs/heads/main/frontend/madewithmerlina_smol.png)

[Merlina on GitHub](https://github.com/Schneewolf-Labs/Merlina)
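For readers who want to reproduce a similar run, the hyperparameters in the table above map onto the standard Hugging Face `peft` / `transformers` / `bitsandbytes` configuration objects roughly as follows. This is an illustrative sketch, not Merlina's actual training code; the compute dtype is an assumption, since the card does not state it.

```python
# Sketch of the table's hyperparameters using the Hugging Face APIs.
# This is NOT Merlina's training code, only the same values expressed
# with peft / transformers / bitsandbytes configuration objects.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# Quantization row: 4-bit (NF4).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: dtype not stated in the card
)

# LoRA rows: r=128, alpha=256, dropout=0.05, seven target projections.
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["up_proj", "down_proj", "gate_proj",
                    "k_proj", "q_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Optimizer / schedule rows. Batch size 1 with 16 gradient-accumulation
# steps gives the effective batch size of 16 listed in the table.
training_args = TrainingArguments(
    output_dir="denker-test2-12b",  # hypothetical path, for illustration
    learning_rate=9e-5,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    weight_decay=0.01,
    max_grad_norm=0.25,
    seed=42,
)
```

Note that LoRA alpha is set to twice the rank (256 vs. 128), a common heuristic that scales adapter updates by `alpha / r = 2`.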