---
base_model: Qwen/Qwen3-14B
library_name: transformers
model_name: paxhistoria-reward-model-v2-14b
tags:
- generated_from_trainer
- trl
- hf_jobs
- reward-trainer
licence: license
---
# Model Card for paxhistoria-reward-model-v2-14b
This model is a fine-tuned version of [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# Reward models trained with TRL carry a sequence-classification head,
# so the text-classification pipeline returns one scalar score per input.
rewarder = pipeline(
    "text-classification",
    model="shreyaskaps/paxhistoria-reward-model-v2-14b",
    device="cuda",
)

text = "The capital of France is Paris."
output = rewarder(text)[0]
print(output["score"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ashr/paxhistoria-reward-model-v2/runs/t3ddxd26)
This model was trained with TRL's [`RewardTrainer`](https://huggingface.co/docs/trl/reward_trainer).
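Reward training optimizes a pairwise preference objective: for each (chosen, rejected) response pair, the trainer pushes the reward score of the chosen response above that of the rejected one via `-log(sigmoid(r_chosen - r_rejected))`. A minimal sketch of that loss in plain Python (the function name is illustrative, not the TRL API):

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    # Bradley-Terry style objective used for reward modeling:
    #   loss = -log(sigmoid(r_chosen - r_rejected))
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Equal scores give loss = ln(2) ~= 0.6931; a large positive margin
# (chosen scored far above rejected) drives the loss toward zero.
print(round(pairwise_reward_loss(0.0, 0.0), 4))  # 0.6931
print(round(pairwise_reward_loss(5.0, 0.0), 4))  # 0.0067
```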
### Framework versions
- TRL: 0.29.0
- Transformers: 5.2.0
- Pytorch: 2.10.0
- Datasets: 4.6.0
- Tokenizers: 0.22.2
## Citations
Cite TRL as:
```bibtex
@software{vonwerra2020trl,
    title = {{TRL: Transformers Reinforcement Learning}},
    author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
    license = {Apache-2.0},
    url = {https://github.com/huggingface/trl},
    year = {2020}
}
```