How to use from SGLang

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "IndexTeam/Index-1.9B-Constant-LR" \
    --host 0.0.0.0 \
    --port 30000
```

Call the server using curl (OpenAI-compatible API):
```shell
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "IndexTeam/Index-1.9B-Constant-LR",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
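The same completion request can also be issued from Python. A minimal sketch using only the standard library; the base URL and port assume the server launched above, and `complete` is a hypothetical helper name, not part of SGLang:

```python
import json
from urllib import request

# Same OpenAI-compatible completion request body as the curl example above.
PAYLOAD = {
    "model": "IndexTeam/Index-1.9B-Constant-LR",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def complete(base_url="http://localhost:30000"):
    """POST the payload to /v1/completions and return the generated text."""
    req = request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(PAYLOAD).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

Any HTTP client works here, since the server exposes a standard OpenAI-compatible endpoint.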
Index-1.9B-Constant-LR
Model Introduction
This repository, Index-1.9B-Constant-LR, contains the checkpoint of the Index-1.9B base model taken before the learning-rate decay phase of training. It is provided so that everyone can conduct research on downstream tasks.
For more details, see our GitHub and the Index-1.9B Technical Report.
Evaluation Results
Here we report an evaluation of the general understanding ability of the Index-1.9B-Constant-LR model:
| Model | Average score | Average English score | MMLU | CEVAL | CMMLU | HellaSwag | Arc-C | Arc-E |
|---|---|---|---|---|---|---|---|---|
| Index-1.9B-Constant-LR | 41.47 | 44.24 | 35.30 | 38.58 | 33.26 | 59.94 | 32.96 | 48.75 |
| Index-1.9B-Pure | 49.55 | 52.83 | 43.75 | 42.35 | 43.61 | 63.21 | 42.75 | 61.61 |
| Index-1.9B | 64.92 | 69.93 | 52.53 | 57.01 | 52.79 | 80.69 | 65.15 | 81.35 |
Evaluation code is based on OpenCompass with compatibility modifications. See the evaluate folder for details.
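As a quick sanity check, the reported averages for Index-1.9B-Constant-LR can be reproduced from the per-benchmark scores, assuming "Average score" is the plain mean over all six benchmarks and "Average English score" the mean over the English ones (treating CEVAL and CMMLU as the Chinese benchmarks is an assumption, not stated in the table):

```python
# Per-benchmark scores for Index-1.9B-Constant-LR, copied from the table above.
scores = {
    "MMLU": 35.30, "CEVAL": 38.58, "CMMLU": 33.26,
    "HellaSwag": 59.94, "Arc-C": 32.96, "Arc-E": 48.75,
}
# Assumption: CEVAL and CMMLU are Chinese benchmarks; the remaining four are English.
english = ["MMLU", "HellaSwag", "Arc-C", "Arc-E"]

average = sum(scores.values()) / len(scores)                      # ~41.47
average_english = sum(scores[k] for k in english) / len(english)  # ~44.24
```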
Install from pip and serve model

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "IndexTeam/Index-1.9B-Constant-LR" \
  --host 0.0.0.0 \
  --port 30000
```

Once the server is running, call it with the same curl request shown in the Docker section above.