Canis.teach - Ministral-3B Instruct (Humanities)

LoRA adapters for the Humanities tutor in the Canis.teach suite.

  • Base Model: unsloth/Ministral-3-3B-Instruct-2512
  • Release: CanisAI/teach-humanities-ministral-3b-r2
  • Project: Canis.teach - Learning that fits.
  • Subject: Humanities

What is this?

This repository provides LoRA adapters fine-tuned on Humanities tutoring dialogues. Apply these adapters to the base model to enable subject-aware, didactic behavior without downloading a full merged checkpoint.

The model is designed to teach, not just answer - providing step-by-step explanations, hints, and pedagogically structured responses.

For ready-to-run merged models or Ollama-friendly GGUF quantizations, see the "Related Models" section.

Quick Start

Installation

pip install transformers peft torch

Usage (LoRA)

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = "unsloth/Ministral-3-3B-Instruct-2512"
adapter = "CanisAI/teach-humanities-ministral-3b-r2"

tokenizer = AutoTokenizer.from_pretrained(base, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    base, 
    device_map="auto",
    torch_dtype="auto"
)
model = PeftModel.from_pretrained(model, adapter)

# Example prompt
prompt = "What were the main causes of the French Revolution?"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.8,
    top_k=40,
    do_sample=True
)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
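If you prefer a single standalone checkpoint for deployment (e.g. for serving without a runtime PEFT dependency), the adapters can be folded into the base weights. A minimal sketch using PEFT's `merge_and_unload`, with the same repo IDs as above; the output directory name is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "unsloth/Ministral-3-3B-Instruct-2512"
adapter = "CanisAI/teach-humanities-ministral-3b-r2"

model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
model = PeftModel.from_pretrained(model, adapter)

# Fold the LoRA deltas into the base weights; the result is a plain
# transformers model that loads with AutoModelForCausalLM alone.
merged = model.merge_and_unload()
merged.save_pretrained("teach-humanities-merged")
AutoTokenizer.from_pretrained(base).save_pretrained("teach-humanities-merged")
```

Note that the merged checkpoint is the full model size, which is exactly what the lightweight adapter distribution avoids.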

Training Details

  • Base Model: unsloth/Ministral-3-3B-Instruct-2512
  • Training Method: Supervised Fine-Tuning (SFT) with LoRA
  • Framework: Unsloth + TRL/PEFT
  • Data: Canis.lab-curated Humanities tutoring dialogues
  • Target Modules: attention projections (query, key, value, output) and MLP projections (gate, up, down)
  • Rank: 32
  • Alpha: 32
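The hyperparameters above correspond to a PEFT configuration roughly like the following sketch. The `target_modules` names assume the Mistral-style module naming used by this architecture; verify against the checkpoint before reuse:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,            # LoRA rank
    lora_alpha=32,   # scaling factor (effective scale alpha/r = 1.0)
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type="CAUSAL_LM",
)
```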

Intended Use

  • Primary: Subject-aware tutoring for Humanities education
  • Applications: Educational prototypes, tutoring systems, research
  • Approach: Stepwise explanations, pedagogical hints, rubric-aligned responses
  • Target Audience: Students, educators, researchers

Model Behavior

The model is optimized for:

  • Clear, step-by-step explanations
  • Appropriate difficulty progression
  • Encouraging learning through hints rather than direct answers
  • Subject-specific pedagogical approaches
  • Maintaining educational standards and accuracy

Recommended Settings

For optimal tutoring behavior:

  • Temperature: 0.6-0.8
  • Top-p: 0.8-0.9
  • Top-k: 20-40
  • Max tokens: 512-1024
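As a starting point, the midpoints of these ranges can be collected into a kwargs dict and passed straight to `generate` (values here are one reasonable pick, not tuned defaults):

```python
# Midpoints of the recommended ranges above; tune per task.
tutor_generation_kwargs = {
    "do_sample": True,
    "temperature": 0.7,     # range 0.6-0.8
    "top_p": 0.85,          # range 0.8-0.9
    "top_k": 30,            # range 20-40
    "max_new_tokens": 768,  # range 512-1024
}

# Usage with the model and inputs from the Quick Start:
# outputs = model.generate(inputs, **tutor_generation_kwargs)
```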

Safety and Limitations

Important Considerations:

  • Human oversight required for educational use
  • May occasionally hallucinate or oversimplify complex topics
  • For fact-critical applications, consider RAG with verified curriculum sources
  • Follow your institution's data privacy and AI usage policies
  • Not a replacement for qualified human instruction

Related Models

| Type | Repository | Description |
|---|---|---|
| LoRA Adapters | CanisAI/teach-humanities-ministral-3b-r2 | This repository (lightweight) |
| Merged Model | (Coming Soon) | Ready-to-use full model |
| GGUF Quantized | (Coming Soon) | Ollama/llama.cpp compatible |
| Dataset | CanisAI/teach-humanities-v1 | Training data |

License

This model inherits the license from the base model (unsloth/Ministral-3-3B-Instruct-2512). Please review the base model's license terms before use.

Citation

@misc{canis-teach-teach-humanities,
  title={Canis.teach Humanities Tutor},
  author={CanisAI},
  year={2026},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/CanisAI/teach-humanities-ministral-3b-r2}}
}

Acknowledgments

  • MistralAI/Ministral Team for the excellent base model
  • Unsloth for efficient training tools
  • Hugging Face ecosystem (Transformers, PEFT, TRL)
  • Educators and contributors supporting the Canis.teach project

Canis.teach - Learning that fits.
