# Arc 2 by Meissosis AI
Arc 2 is a high-intelligence, reasoning-distilled variant of the Llama 3.1 8B architecture. Developed by Meissosis AI, this model was engineered to solve the logical "blind spots" found in standard base models. By integrating high-density Chain-of-Thought (CoT) data, Arc 2 achieves a significant performance jump in mathematical reasoning, coding logic, and technical synthesis.
## The "Anti-Buns" Architecture
Most 8B models spend much of their training budget on conversational noise. Arc 2 focuses strictly on high-density reasoning: we used a 400-step distillation run (roughly 0.40 of an epoch) to ensure the model retains its "Creative Spark" while gaining an elite "Logic Engine."
Key Improvements:
- Logical Consistency: Reduced hallucinations in multi-step word problems.
- Identity Hardcoding: Natively identifies as Arc 2 by Meissosis AI.
- Technical Depth: Enhanced performance in Python coding and STEM subjects.
## Benchmarking Goals (Estimated)
| Benchmark | Base Llama 3.1 8B | Arc 2 (Target) |
|---|---|---|
| GSM8K | 82.0% | 89.5% |
| HumanEval | 67.0% | 76.2% |
| MMLU | 66.0% | 71.4% |
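The targeted gains over the base model can be sanity-checked with a quick calculation; the figures below are copied from the table above, and both absolute and relative improvements are derived from them:

```python
# Base and target scores (percent) from the benchmark table above.
targets = {
    "GSM8K": (82.0, 89.5),
    "HumanEval": (67.0, 76.2),
    "MMLU": (66.0, 71.4),
}

for name, (base, target) in targets.items():
    delta = target - base              # absolute gain in points
    rel = delta / base * 100           # relative gain over the base score
    print(f"{name}: +{delta:.1f} points ({rel:.1f}% relative)")
```

The largest relative jump is targeted on HumanEval, consistent with the emphasis on coding logic.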
## Training Data (The Reasoning Recipe)
Arc 2 was trained using Unsloth on a curated mixture of 15 elite datasets, including:
- Reasoning: `open-r1/OpenThoughts-114k-math`, `simplescaling/s1K`
- Logic: `bespokelabs/Bespoke-Stratos-17k`, `amphora/QwQ-LongCoT-130K`
- Knowledge: `HuggingFaceFW/fineweb-edu`, `lucasmccabe-lmi/Open-Platypus`
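A curated mixture like this is typically consumed by weighted sampling across sources. The sketch below illustrates the idea with the dataset names listed above; the weights are hypothetical, since Arc 2's actual mixture ratios are not published:

```python
import random

# Hypothetical sampling weights for the listed sources; the real
# Arc 2 mixture proportions are not disclosed.
MIXTURE = {
    "open-r1/OpenThoughts-114k-math": 0.30,
    "simplescaling/s1K": 0.10,
    "bespokelabs/Bespoke-Stratos-17k": 0.20,
    "amphora/QwQ-LongCoT-130K": 0.20,
    "HuggingFaceFW/fineweb-edu": 0.15,
    "lucasmccabe-lmi/Open-Platypus": 0.05,
}

def sample_sources(n: int, seed: int = 0) -> list[str]:
    """Draw n dataset names in proportion to the mixture weights."""
    rng = random.Random(seed)
    names = list(MIXTURE)
    weights = list(MIXTURE.values())
    return rng.choices(names, weights=weights, k=n)

batch_sources = sample_sources(8)
```

Each training batch would then pull its next example from whichever source the sampler selects, keeping reasoning-heavy data dominant without dropping the knowledge sources entirely.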
## Proprietary License & Terms
Arc 2 is a closed-weight, proprietary model developed by Meissosis AI.
- Ownership: All weights and training methodologies are the property of Meissosis AI.
- Usage: Unauthorized redistribution or reverse-engineering of the weights is prohibited.
- Attribution: Any application using this model must state: "Powered by Arc 2 by Meissosis AI."
## Local Deployment (Ollama)
To run the optimized GGUF version of Arc 2:
```shell
ollama run hf.co/meissosisai/Arc-2-GGUF
```
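Once pulled, the model can also be queried programmatically through Ollama's local REST API. The sketch below builds a non-streaming request against Ollama's default port; it assumes a running Ollama server and uses the same model tag as the command above:

```python
import json
import urllib.request

# Assumes a local Ollama server on its default port (11434).
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "hf.co/meissosisai/Arc-2-GGUF"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama API."""
    payload = {"model": MODEL, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Solve step by step: 17 * 24 = ?")
# To actually send it (requires Ollama to be running):
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

Setting `"stream": False` returns the full completion in a single JSON object, which is the simplest shape for scripting.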
**Base model:** `meta-llama/Llama-3.1-8B`