Instructions for using prithivMLmods/Elita-0.1-Distilled-R1-abliterated with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use prithivMLmods/Elita-0.1-Distilled-R1-abliterated with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Elita-0.1-Distilled-R1-abliterated")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Elita-0.1-Distilled-R1-abliterated")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Elita-0.1-Distilled-R1-abliterated")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
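To see tokens as they are produced rather than waiting for the full completion, a TextStreamer can be attached to generate. A minimal sketch, reusing the tokenizer, model, and inputs from the snippet above:

from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated,
# skipping the prompt and any special tokens
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=256, streamer=streamer)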
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use prithivMLmods/Elita-0.1-Distilled-R1-abliterated with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/Elita-0.1-Distilled-R1-abliterated"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "prithivMLmods/Elita-0.1-Distilled-R1-abliterated",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
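Because vLLM exposes an OpenAI-compatible API, the server can also be called from Python with the openai client. A minimal sketch, assuming the server above is running on localhost:8000 (vLLM does not require a real API key by default, so a placeholder is used):

from openai import OpenAI

# Point the client at the local vLLM server; the api_key is a placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="prithivMLmods/Elita-0.1-Distilled-R1-abliterated",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)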
Use Docker
docker model run hf.co/prithivMLmods/Elita-0.1-Distilled-R1-abliterated
- SGLang
How to use prithivMLmods/Elita-0.1-Distilled-R1-abliterated with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "prithivMLmods/Elita-0.1-Distilled-R1-abliterated" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "prithivMLmods/Elita-0.1-Distilled-R1-abliterated",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
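The same OpenAI-compatible endpoint can be called from Python with the requests library. A minimal sketch, assuming the SGLang server above is running on localhost:30000:

import requests

payload = {
    "model": "prithivMLmods/Elita-0.1-Distilled-R1-abliterated",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
# POST to the chat completions endpoint and print the reply
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])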
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "prithivMLmods/Elita-0.1-Distilled-R1-abliterated" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "prithivMLmods/Elita-0.1-Distilled-R1-abliterated",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
- Docker Model Runner
How to use prithivMLmods/Elita-0.1-Distilled-R1-abliterated with Docker Model Runner:
docker model run hf.co/prithivMLmods/Elita-0.1-Distilled-R1-abliterated
Elita-0.1-Distilled-R1-Abliterated
Elita-0.1-Distilled-R1-Abliterated is based on DeepSeek-AI/DeepSeek-R1-Distill-Qwen-7B, a Qwen model distilled from DeepSeek-R1. It has been fine-tuned on long chain-of-thought reasoning traces and specialized datasets, focusing on chain-of-thought (CoT) reasoning for problem-solving. The model is optimized for tasks that require logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to applications such as instruction following, text generation, and complex reasoning.
Quickstart with Transformers
The following code snippet shows how to load the tokenizer and model with apply_chat_template and how to generate content.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Elita-0.1-Distilled-R1-Abliterated"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Elita, created by DeepSeek-AI. You are a powerful reasoning assistant."},
    {"role": "user", "content": prompt}
]

# Apply the chat template and tokenize the prompt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response, then strip the prompt tokens from the output
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
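Because the base model is distilled from DeepSeek-R1, the decoded response usually opens with a chain-of-thought trace wrapped in <think>...</think> tags. A minimal sketch for splitting that trace from the final answer, assuming this tag format and continuing from the response variable above:

# Separate the reasoning trace from the final answer.
# Assumes the DeepSeek-R1-style <think>...</think> tag format.
if "</think>" in response:
    reasoning, answer = response.split("</think>", 1)
    reasoning = reasoning.replace("<think>", "").strip()
else:
    reasoning, answer = "", response
print("Reasoning trace:", reasoning)
print("Final answer:", answer.strip())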
Intended Use:
- Instruction-Following: The model excels in understanding and executing detailed instructions, making it ideal for automation systems, virtual assistants, and educational tools.
- Text Generation: It can produce coherent, logically structured, and contextually relevant text for use in content creation, summarization, and report writing.
- Complex Reasoning Tasks: With its fine-tuning for chain-of-thought reasoning, the model is well-suited for multi-step problem-solving, logical deduction, and question-answering tasks.
- Research and Development: It can support researchers and developers in exploring advancements in logical reasoning and fine-tuning methodologies.
- Educational Applications: The model can assist in teaching logical reasoning and problem-solving by generating step-by-step solutions.
Limitations:
- Domain-Specific Knowledge: While fine-tuned on reasoning datasets, the model may lack deep expertise in highly specialized or technical domains.
- Hallucination: Like many large language models, it can generate incorrect or fabricated information, especially when reasoning beyond its training data.
- Bias in Training Data: The model's outputs may reflect biases present in the datasets it was fine-tuned on, which could limit its objectivity in certain contexts.
- Performance on Non-Reasoning Tasks: The model is optimized for chain-of-thought reasoning and may underperform on tasks that require simpler, less structured responses.
- Resource-Intensive: Running the model efficiently requires significant computational resources, which may limit accessibility for smaller-scale deployments.
- Dependence on Input Quality: The model’s performance heavily depends on the clarity and quality of the input provided. Ambiguous or poorly structured prompts may yield suboptimal results.
Model tree for prithivMLmods/Elita-0.1-Distilled-R1-abliterated
Base model
deepseek-ai/DeepSeek-R1-Distill-Qwen-7B