Intuitor
Description:
This model is a version of allenai/OLMo-2-1124-7B-SFT fine-tuned with Intuitor on the MATH dataset for one epoch, using a system prompt.
Intuitor is a reinforcement learning method introduced in the paper Learning to Reason without External Rewards. It fine-tunes large language models (LLMs) using self-certainty—the model’s own internal confidence—as the sole reward signal. This is part of a novel paradigm called Reinforcement Learning from Internal Feedback (RLIF), which enables LLMs to learn from intrinsic signals without requiring external rewards, gold labels, or verifiers.
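The paper operationalizes self-certainty as the average KL divergence from a uniform distribution over the vocabulary to the model's next-token distribution, so more peaked (confident) predictions score higher. Below is a minimal PyTorch sketch of that score; the function name, tensor shapes, and lack of batching are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def self_certainty(logits: torch.Tensor) -> torch.Tensor:
    """Illustrative self-certainty score for one generated response.

    Computes the average KL(U || p) over token positions, where U is the
    uniform distribution over the vocabulary and p is the model's
    next-token distribution. Higher values mean more peaked predictions.

    logits: (seq_len, vocab_size) next-token logits for the generated tokens.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    vocab_size = logits.size(-1)
    # KL(U || p) per position = -log|V| - (1/|V|) * sum_j log p_j
    kl_from_uniform = -torch.log(torch.tensor(float(vocab_size))) - log_probs.mean(dim=-1)
    # Average over the response; Intuitor uses this as the sole reward signal.
    return kl_from_uniform.mean()
```

In an RLIF setup, this scalar stands in for the external reward inside a standard policy-optimization loop (the paper builds on GRPO), so no gold labels or verifiers are needed.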
Citation:
@article{zhao2025learning,
  title={Learning to Reason without External Rewards},
  author={Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn},
  journal={arXiv preprint arXiv:2505.19590},
  year={2025}
}