OLMo-2-7B-SFT-Intuitor-MATH-1EPOCH-SYSP

Description:

This model is an Intuitor-fine-tuned version of allenai/OLMo-2-1124-7B-SFT, trained for one epoch on the MATH dataset with a system prompt.

Intuitor is a reinforcement learning method introduced in the paper Learning to Reason without External Rewards. It fine-tunes large language models (LLMs) using self-certainty—the model’s own internal confidence—as the sole reward signal. This is part of a novel paradigm called Reinforcement Learning from Internal Feedback (RLIF), which enables LLMs to learn from intrinsic signals without requiring external rewards, gold labels, or verifiers.
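To make the self-certainty signal concrete, here is a minimal sketch. It assumes the common formulation of self-certainty as the average KL divergence between a uniform distribution over the vocabulary and the model's next-token distribution at each position; the exact normalization used in training is an assumption here, and `self_certainty` is an illustrative name, not an API from the released code.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_certainty(logits):
    """Average KL(U || p_i) over token positions, where U is uniform over
    the vocabulary and p_i is the model's next-token distribution.

    KL(U || p) = -log(V) - (1/V) * sum_j log p_j, so a uniform (maximally
    uncertain) distribution scores 0 and peaked (confident) distributions
    score higher. logits has shape (seq_len, vocab_size).
    """
    probs = softmax(logits)
    vocab = probs.shape[-1]
    kl_per_position = -np.log(vocab) - np.log(probs).mean(axis=-1)
    return kl_per_position.mean()
```

In RLIF-style training, a score like this one, computed over the model's own generated tokens, would stand in for the external reward, so confident generations are reinforced without any gold labels or verifier.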

Resources


Citation

@article{zhao2025learning,
  title={Learning to Reason without External Rewards},
  author={Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn},
  journal={arXiv preprint arXiv:2505.19590},
  year={2025}
}
Model details

Format: Safetensors
Model size: 7B params
Tensor type: BF16
