The Scaling Properties of Implicit Deductive Reasoning in Transformers
Abstract
Deep Transformers with a bidirectional prefix mask exhibit implicit deductive reasoning capabilities comparable to explicit chain-of-thought methods across various graph topologies and problem widths.
We investigate the scaling properties of implicit deductive reasoning over Horn clauses in depth-bounded Transformers. By systematically decorrelating provability from spurious features and enforcing algorithmic alignment, we find that in sufficiently deep models with a bidirectional prefix mask, implicit reasoning approaches explicit CoT performance across graph topologies and problem widths, though CoT remains necessary for depth extrapolation.
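To make the setting concrete: a Horn-clause instance consists of atomic facts plus rules whose bodies are conjunctions of atoms, and the task is to decide whether a query atom is derivable. The sketch below is a minimal forward-chaining reference checker; `provable` and its data layout are illustrative assumptions, not the paper's implementation.

```python
def provable(facts: set, rules: dict, query: str) -> bool:
    """Naive forward chaining over propositional Horn clauses.
    `rules` maps each head atom to a list of premise sets."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, bodies in rules.items():
            if head not in derived and any(body <= derived for body in bodies):
                derived.add(head)
                changed = True
    return query in derived

# Example: a |- b and {a, b} |- c, so c is provable at depth 2.
assert provable({"a"}, {"b": [{"a"}], "c": [{"a", "b"}]}, "c")
```

The bidirectional prefix mask mentioned in the abstract lets the input (clauses and query) attend to itself in both directions while any generated tokens remain causal. A minimal PyTorch sketch, assuming the boolean convention where True means "may attend" (as accepted by `torch.nn.functional.scaled_dot_product_attention`); `prefix_lm_mask` is a hypothetical helper, not the authors' code:

```python
import torch

def prefix_lm_mask(seq_len: int, prefix_len: int) -> torch.Tensor:
    # Causal base: position i may attend to positions j <= i.
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    # Make the prefix (clauses + query) fully bidirectional.
    mask[:prefix_len, :prefix_len] = True
    return mask
```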
Community
Code, datasets, and models, although reproducible from the paper, will be made public upon publication. For joint research, contact {enrico.vompa}@gmail.com; I'm open to collaboration.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Thinking Deeper, Not Longer: Depth-Recurrent Transformers for Compositional Generalization (2026)
- Loop, Think,&Generalize: Implicit Reasoning in Recurrent-Depth Transformers (2026)
- To See the Unseen: on the Generalization Ability of Transformers in Symbolic Reasoning (2026)
- Temporal Reasoning Is Not the Bottleneck: A Probabilistic Inconsistency Framework for Neuro-Symbolic QA (2026)
- How Transformers Learn to Plan via Multi-Token Prediction (2026)
- Quantifying Cross-Query Contradictions in Multi-Query LLM Reasoning (2026)
- Dual Path Attribution: Efficient Attribution for SwiGLU-Transformers through Layer-Wise Target Propagation (2026)
Get this paper in your agent: `hf papers read 2605.04330`
Don't have the latest CLI? Install it with `curl -LsSf https://hf.co/cli/install.sh | bash`