How to use

Use with Docker:

```shell
docker model run hf.co/rohhaiil/SysMLv2-Repair-DeepSeek-Coder-6.7B-Instruct-Code-LoRA
```
This model is a fine-tuned version of deepseek-ai/deepseek-coder-6.7b-instruct. It has been trained using TRL on this dataset.
Framework versions
- PEFT: 0.18.0
- TRL: 0.26.2
- Transformers: 4.57.3
- PyTorch: 2.2.2
- Datasets: 4.4.2
- Tokenizers: 0.22.2
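Given the framework versions above, loading the adapter with Transformers + PEFT can be sketched as follows. The repo IDs come from this card; everything else (function name, device placement) is illustrative, not a documented API of this model.

```python
# Sketch: attach the LoRA adapter from this repo to its base model with PEFT.
# Assumes the adapter repo contains a standard PEFT adapter_config.json.
BASE_ID = "deepseek-ai/deepseek-coder-6.7b-instruct"
ADAPTER_ID = "rohhaiil/SysMLv2-Repair-DeepSeek-Coder-6.7B-Instruct-Code-LoRA"


def load_repair_model():
    """Download the base model and apply the LoRA adapter weights on top."""
    # Imports are local so the module can be inspected without the heavy deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    model = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
    model = PeftModel.from_pretrained(model, ADAPTER_ID)  # LoRA weights
    return tokenizer, model
```

Calling `load_repair_model()` downloads the full 6.7B base model plus the adapter, so it needs a GPU (or substantial RAM) and a Hugging Face cache with enough disk space.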
Citation
GitHub Repository: SysMLv2 Repair with KG-SLMs
@inproceedings{alshami2026sysml,
  title={Automated Semantic Fault Localization in SysML v2: A Human-in-the-Loop Framework Using Knowledge-Graph Augmented LLMs},
  author={Al-Shami, Haitham and Malik, Rohail and Ala-Laurinaho, Riku and Veps{\"a}l{\"a}inen, Jari and Viitala, Raine},
  booktitle={Proceedings of the 36th INCOSE International Symposium},
  year={2026},
  month={June},
  date={16},
  address={Yokohama, Japan}
}
Install from pip and serve the model (vLLM)

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "rohhaiil/SysMLv2-Repair-DeepSeek-Coder-6.7B-Instruct-Code-LoRA"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rohhaiil/SysMLv2-Repair-DeepSeek-Coder-6.7B-Instruct-Code-LoRA",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
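The same OpenAI-compatible endpoint can be called from Python with only the standard library. This is a minimal sketch assuming the server from the commands above is running on localhost:8000; the helper names and the prompt are illustrative.

```python
# Sketch: query the vLLM OpenAI-compatible chat endpoint with stdlib only.
import json
import urllib.request

MODEL = "rohhaiil/SysMLv2-Repair-DeepSeek-Coder-6.7B-Instruct-Code-LoRA"


def build_request(prompt: str) -> bytes:
    """Build the JSON body for POST /v1/chat/completions."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload).encode("utf-8")


def chat(prompt: str, url: str = "http://localhost:8000/v1/chat/completions") -> str:
    """Send one chat turn to the server and return the assistant reply."""
    req = urllib.request.Request(
        url,
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server up, `chat("Repair this SysML v2 fragment: ...")` returns the model's completion as a string.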