---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
base_model:
- Qwen/Qwen2.5-14B
---

## Model Sources

- **Paper:** [RKEFino1: A Regulation Knowledge-Enhanced Large Language Model](https://arxiv.org/abs/2506.05700)

## Uses

To use `RKEFino1-14B` with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "YanAdjeNole/RKEFino1-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_text = "What is the result of 3-5?"
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## Citation

If you use this model, please cite our paper:

```
@misc{wang2025rkefino1regulationknowledgeenhancedlarge,
      title={RKEFino1: A Regulation Knowledge-Enhanced Large Language Model},
      author={Yan Wang and Yueru He and Ruoyu Xiang and Jeff Zhao},
      year={2025},
      eprint={2506.05700},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.05700},
}
```