---
license: apache-2.0
language:
- en
base_model:
- unsloth/Ministral-3-3B-Instruct-2512
base_model_relation: adapter
library_name: peft
tags:
- canis-teach
- ministral
- education
- lora
- transformers
- tutoring
- math
pipeline_tag: text-generation
datasets:
- CanisAI/teach-math-v1
---

# Canis.teach - Ministral-3B Instruct (Math)

LoRA adapters for the Math tutor in the Canis.teach suite.

- **Base Model**: unsloth/Ministral-3-3B-Instruct-2512
- **Release**: CanisAI/teach-math-ministral-3b-r2
- **Project**: Canis.teach - Learning that fits.
- **Subject**: Math

## What is this?

This repository provides LoRA adapters fine-tuned on Math tutoring dialogues. Apply these adapters to the base model to enable subject-aware, didactic behavior without downloading a full merged checkpoint. The model is designed to **teach, not just answer** - providing step-by-step explanations, hints, and pedagogically structured responses.

For ready-to-run merged models or Ollama-friendly GGUF quantizations, see the "Related Models" section.

## Quick Start

### Installation

```bash
pip install transformers peft torch
```

### Usage (LoRA)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = "unsloth/Ministral-3-3B-Instruct-2512"
adapter = "CanisAI/teach-math-ministral-3b-r2"

tokenizer = AutoTokenizer.from_pretrained(base, use_fast=True)

# Load the base model, then attach the LoRA adapter on top.
model = AutoModelForCausalLM.from_pretrained(
    base,
    device_map="auto",
    torch_dtype="auto"
)
model = PeftModel.from_pretrained(model, adapter)

# Example prompt
prompt = "Explain how to solve 2x + 1 = 5 step by step."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Sample with the settings recommended below.
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.8,
    top_k=40,
    do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

- **Base Model**: unsloth/Ministral-3-3B-Instruct-2512
- **Training Method**: Supervised Fine-Tuning (SFT) with LoRA
- **Framework**: Unsloth + TRL/PEFT
- **Data**: Canis.lab-curated Math tutoring dialogues
- **Target Modules**: Query, Key, Value, and Output projections, plus the MLP projections (gate, up, down)
- **Rank**: 32
- **Alpha**: 32
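For reference, the settings above correspond to a PEFT `LoraConfig` along these lines. This is a minimal sketch rather than the exact training script, and it assumes the standard Mistral-style module names (`q_proj`, `k_proj`, etc.):

```python
from peft import LoraConfig

# Sketch of a LoraConfig matching the hyperparameters listed above.
# Module names are assumed to follow the standard Mistral naming scheme.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type="CAUSAL_LM",
)
```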
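### Merging the Adapter (Optional)

Until the merged checkpoint listed under "Related Models" is published, you can produce a standalone model locally by folding the adapter into the base weights with PEFT's `merge_and_unload`. A minimal sketch (the output directory name is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "unsloth/Ministral-3-3B-Instruct-2512"
adapter = "CanisAI/teach-math-ministral-3b-r2"

# Load the base model, attach the adapter, then merge the LoRA
# weights into the base weights to get a standalone checkpoint.
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
model = PeftModel.from_pretrained(model, adapter)
merged = model.merge_and_unload()

# Output path is just an example.
merged.save_pretrained("teach-math-ministral-3b-merged")
AutoTokenizer.from_pretrained(base).save_pretrained("teach-math-ministral-3b-merged")
```

The merged checkpoint can then be quantized for llama.cpp/Ollama if needed.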
## Intended Use

- **Primary**: Subject-aware tutoring for Math education
- **Applications**: Educational prototypes, tutoring systems, research
- **Approach**: Stepwise explanations, pedagogical hints, rubric-aligned responses
- **Target Audience**: Students, educators, researchers

## Model Behavior

The model is optimized for:

- Clear, step-by-step explanations
- Appropriate difficulty progression
- Encouraging learning through hints rather than direct answers
- Subject-specific pedagogical approaches
- Maintaining educational standards and accuracy

## Recommended Settings

For optimal tutoring behavior:

- **Temperature**: 0.6-0.8
- **Top-p**: 0.8-0.9
- **Top-k**: 20-40
- **Max tokens**: 512-1024

## Safety and Limitations

**Important Considerations**:

- Human oversight required for educational use
- May occasionally hallucinate or oversimplify complex topics
- For fact-critical applications, consider RAG with verified curriculum sources
- Follow your institution's data privacy and AI usage policies
- Not a replacement for qualified human instruction

## Related Models

| Type | Repository | Description |
|------|------------|-------------|
| **LoRA Adapters** | `CanisAI/teach-math-ministral-3b-r2` | This repository (lightweight) |
| **Merged Model** | (Coming Soon) | Ready-to-use full model |
| **GGUF Quantized** | (Coming Soon) | Ollama/llama.cpp compatible |
| **Dataset** | `CanisAI/teach-math-v1` | Training data |

## License

This model inherits the license from the base model (unsloth/Ministral-3-3B-Instruct-2512). Please review the base model's license terms before use.

## Citation

```bibtex
@misc{canis-teach-teach-math,
  title={Canis.teach Math Tutor},
  author={CanisAI},
  year={2026},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/CanisAI/teach-math-ministral-3b-r2}}
}
```

## Acknowledgments

- **MistralAI/Ministral Team** for the excellent base model
- **Unsloth** for efficient training tools
- **Hugging Face** ecosystem (Transformers, PEFT, TRL)
- Educators and contributors supporting the Canis.teach project

---

**Canis.teach** - Learning that fits.