Instructions for using jaygala24/Qwen3-4B-RLOO-math-reasoning with libraries, inference providers, and local apps.
- Libraries
- Transformers
How to use jaygala24/Qwen3-4B-RLOO-math-reasoning with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="jaygala24/Qwen3-4B-RLOO-math-reasoning")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jaygala24/Qwen3-4B-RLOO-math-reasoning")
model = AutoModelForCausalLM.from_pretrained("jaygala24/Qwen3-4B-RLOO-math-reasoning")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Local Apps
- vLLM
How to use jaygala24/Qwen3-4B-RLOO-math-reasoning with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "jaygala24/Qwen3-4B-RLOO-math-reasoning"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jaygala24/Qwen3-4B-RLOO-math-reasoning",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- SGLang
How to use jaygala24/Qwen3-4B-RLOO-math-reasoning with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "jaygala24/Qwen3-4B-RLOO-math-reasoning" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jaygala24/Qwen3-4B-RLOO-math-reasoning",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "jaygala24/Qwen3-4B-RLOO-math-reasoning" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jaygala24/Qwen3-4B-RLOO-math-reasoning",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- Docker Model Runner
How to use jaygala24/Qwen3-4B-RLOO-math-reasoning with Docker Model Runner:
```bash
docker model run hf.co/jaygala24/Qwen3-4B-RLOO-math-reasoning
```
Qwen3-4B-RLOO-math-reasoning
This model is a fine-tuned version of Qwen3-4B, trained for mathematical reasoning with RLOO (REINFORCE Leave-One-Out) and no KL penalty.
Trained with PipelineRL.
Training Details
Datasets
| Split | Datasets |
|---|---|
| Train | gsm8k_train, math_train |
| Test | gsm8k_test, math_500 |
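For reference, the test splits can be loaded with the `datasets` library. The repository IDs below (`openai/gsm8k`, `HuggingFaceH4/MATH-500`) are assumptions based on the commonly used public versions of these benchmarks, not names taken from the training configuration.

```python
# Sketch of loading the test sets; the dataset IDs are assumptions, not the
# exact names used in the PipelineRL config.
from datasets import load_dataset

gsm8k_test = load_dataset("openai/gsm8k", "main", split="test")   # 1319 problems
math_500 = load_dataset("HuggingFaceH4/MATH-500", split="test")   # 500 problems

print(gsm8k_test[0]["question"])
print(math_500[0]["problem"])
```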
RL Algorithm
| Parameter | Value |
|---|---|
| Algorithm | RLOO (REINFORCE Leave-One-Out) |
| Advantage Baseline | Leave-one-out mean reward over the group |
| Extra Inference | None |
| Group Structure | Required |
| Policy Loss | reinforce |
| KL Coefficient | 0.0 |
| Epsilon (clip) | 0.02 |
| Discount Factor (gamma) | 1.0 |
| Divide Advantage by Std | False |
| Filter Zero Advantage Groups | False |
| Rollouts per Problem | 16 |
For each response, RLOO uses the mean reward of the other responses in the same group (leave-one-out) as the baseline and trains with a REINFORCE-style policy loss; a minimal sketch of the advantage computation is shown below.
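As an illustration only (this is not the PipelineRL implementation), here is a minimal sketch of the leave-one-out advantage for one group of rollouts, matching the settings in the table above (16 rollouts per problem, no KL term, advantages not divided by the reward std):

```python
import torch

def rloo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Leave-one-out advantages for one group of k rollouts of the same problem.

    rewards: shape (k,), one scalar reward per sampled response.
    Each response is baselined by the mean reward of the other k - 1 responses.
    """
    k = rewards.shape[0]
    baseline = (rewards.sum() - rewards) / (k - 1)  # leave-one-out mean
    return rewards - baseline  # no division by the reward std (see table above)

# Example: 16 rollouts of one problem, reward 1.0 for a correct final answer.
rewards = torch.tensor([1.0, 0.0, 1.0, 1.0] + [0.0] * 12)
advantages = rloo_advantages(rewards)
# A REINFORCE-style loss then weights each response's log-probability by its advantage:
#   loss = -(advantages.detach() * per_response_logprob_sums).mean()
```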
Training Hyperparameters
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3-4B |
| Learning Rate | 1e-06 |
| LR Scheduler | cosine |
| Warmup Steps | 25 |
| Max Training Steps | 1500 |
| Micro Batch Size | 2 |
| Gradient Accumulation | 128 |
| Effective Batch Size | 256 |
| Sequence Length | 8192 |
| Gradient Clipping | 0.3 |
| Weight Decay | 0.01 |
| Optimizer | adamw_torch |
| Precision | bf16 |
| DeepSpeed | ZeRO Stage 3 |
Evaluation Results
Pass@k on math reasoning benchmarks (N=32 samples per problem, temperature=1.0):
| Dataset | pass@1 | pass@2 | pass@4 | pass@8 | pass@16 | pass@32 |
|---|---|---|---|---|---|---|
| GSM8K (test) | 90.08 | 93.31 | 95.18 | 96.30 | 97.05 | 97.73 |
| MATH-500 | 79.19 | 85.54 | 89.91 | 92.77 | 94.66 | 96.00 |
| Overall | 87.09 | 91.17 | 93.73 | 95.33 | 96.39 | 97.25 |
GSM8K test: 1319 problems · MATH-500: 500 problems · Overall: 1819 problems (overall weighted by problem count).
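Pass@k is estimated from the N=32 samples per problem. The exact grading script is not part of this card, but a common choice is the unbiased estimator from Chen et al. (2021), sketched below assuming `c` counts the samples whose final answer is correct:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: samples drawn per problem (here 32)
    c: samples with a correct final answer
    k: evaluation budget (1, 2, 4, ..., 32)
    """
    if n - c < k:
        return 1.0
    # 1 - C(n - c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Dataset-level pass@k is the mean over problems:
# np.mean([pass_at_k(32, c_i, k) for c_i in correct_counts_per_problem])
```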
Training Curves
W&B Run
Full training logs: https://wandb.ai/jaygala24-team/rl-post-training/runs/qwen3_4b_rloo_no_kl_3a1f_4xh100_236660_finetune_84c874cf
Usage
Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# revision is optional and selects an intermediate checkpoint branch, e.g. "step-0400"
model = AutoModelForCausalLM.from_pretrained("jaygala24/Qwen3-4B-RLOO-math-reasoning", revision="step-0200")
tokenizer = AutoTokenizer.from_pretrained("jaygala24/Qwen3-4B-RLOO-math-reasoning", revision="step-0200")

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=4096, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
vLLM
```python
from vllm import LLM, SamplingParams

# revision is optional and selects an intermediate checkpoint branch, e.g. "step-0400"
llm = LLM(model="jaygala24/Qwen3-4B-RLOO-math-reasoning", revision="step-0200")
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)

prompt = "Please reason step by step, and put your final answer within \\boxed{}.\n\nWhat is the sum of 123 and 456?"
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```
Framework
- PipelineRL
- Transformers
- DeepSpeed (ZeRO Stage 3)
