Llama-3.2-3B-Instruct-Intuitor-MATH-1EPOCH

This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct trained on the MATH dataset for one epoch using Intuitor.

Description

Intuitor is a reinforcement learning method introduced in the paper Learning to Reason without External Rewards. It fine-tunes large language models (LLMs) using self-certainty, the model's own internal confidence, as the sole reward signal.

This approach is part of a novel framework called Reinforcement Learning from Internal Feedback (RLIF), which enables models to learn from intrinsic signals without the need for external rewards, gold labels, or test-case verifiers. Intuitor replaces external rewards in Group Relative Policy Optimization (GRPO) with self-certainty scores, enabling fully unsupervised learning that matches or exceeds standard RL performance on mathematical and coding benchmarks.
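The reward substitution described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes self-certainty is the average per-token KL divergence from the uniform distribution to the model's next-token distribution, and that scores are standardized group-relative as in GRPO; all function names are hypothetical.

```python
import math
import statistics

def self_certainty(token_dists):
    """Intrinsic reward for one sampled response: the mean KL divergence
    KL(U || p_i) from the uniform distribution U over the vocabulary to
    the model's next-token distribution p_i, averaged over output tokens.
    It is 0 when the model is maximally uncertain and grows as p_i peaks."""
    total = 0.0
    for p in token_dists:
        u = 1.0 / len(p)                    # uniform probability 1/|V|
        total += sum(u * math.log(u / pj) for pj in p)
    return total / len(token_dists)

def group_relative_advantages(rewards):
    """GRPO-style advantages: standardize self-certainty scores within a
    group of responses sampled for the same prompt (no external reward)."""
    mu = statistics.mean(rewards)
    sd = statistics.pstdev(rewards) + 1e-8  # avoid division by zero
    return [(r - mu) / sd for r in rewards]

# Two toy responses over a 4-token vocabulary: one confident, one not.
confident = [[0.97, 0.01, 0.01, 0.01], [0.90, 0.05, 0.03, 0.02]]
uncertain = [[0.25, 0.25, 0.25, 0.25], [0.30, 0.25, 0.25, 0.20]]
rewards = [self_certainty(confident), self_certainty(uncertain)]
advs = group_relative_advantages(rewards)
print(advs[0] > advs[1])  # the more confident response gets the higher advantage: True
```

In actual training these per-token distributions come from the policy model itself, which is what makes the procedure fully unsupervised.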


Citation

@article{zhao2025learning,
  title={Learning to Reason without External Rewards},
  author={Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn},
  journal={arXiv preprint arXiv:2505.19590},
  year={2025}
}
Model details

Format: Safetensors
Model size: 4B params
Tensor type: BF16
