🦅 Supra Mini v3 0.5M

Supra Mini v3 0.5M is a very tiny base model, the third release in our Supra Mini series, trained for 2 epochs on 1 billion tokens of FineWeb-Edu.

Model Config

  • Parameters: 467,648 (0.5M)
  • Architecture: Llama
  • Vocab size with custom BPE tokenizer: 4096
  • Hidden Size: 64
  • Intermediate Size: 128
  • Hidden Layers: 5
  • Attention Heads: 8
  • Max Position Embeddings: 512
  • Learning rate: 5e-4
  • Weight Decay: 0.01
  • Trained in bfloat16
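
For reference, the settings above map directly onto a Hugging Face LlamaConfig. The sketch below is not the shipped configuration file, but with tied input/output embeddings and no grouped-query attention (both assumed here) it reproduces the stated 467,648 parameter count:

from transformers import LlamaConfig, LlamaForCausalLM

# Hypothetical reconstruction of the architecture; the config.json shipped with the model is authoritative.
config = LlamaConfig(
    vocab_size=4096,
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=5,
    num_attention_heads=8,
    num_key_value_heads=8,        # assumption: plain multi-head attention, no GQA
    max_position_embeddings=512,
    tie_word_embeddings=True,     # assumption: tied embeddings are needed to land on ~467k params
)
model = LlamaForCausalLM(config)
print(sum(p.numel() for p in model.parameters()))  # 467,648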

Final Loss

After 2 epochs, the model reached a final training loss of 4.872.
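
As a rough intuition, if this value is the mean cross-entropy in nats per token, it corresponds to a training perplexity of about e^4.872 ≈ 130 over the 4096-token vocabulary:

import math

# Convert mean cross-entropy (nats per token) to perplexity.
train_loss = 4.872
print(math.exp(train_loss))  # ≈ 130.6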

Benchmarks

All benchmarks were run with lm-eval (the LM Evaluation Harness); a reproduction sketch follows the table.

| Task     | Value  | Random baseline |
|----------|--------|-----------------|
| ARC-Easy | 0.2727 | 0.25 (25%)      |
| WikiText | 4.4881 | –               |
| BLiMP    | 0.5526 | 0.5 (50%)       |
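
The snippet below shows one way to reproduce these numbers with lm-eval's Python API; the task names and batch size here are assumptions, and exact scores may vary slightly across harness versions.

import lm_eval

# Evaluate the model on the three reported tasks via the LM Evaluation Harness.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=SupraLabs/Supra-Mini-v3-0.5M",
    tasks=["arc_easy", "wikitext", "blimp"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)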


Examples

Prompt: "Artificial intelligence is "
Output:: "Artificial intelligence is ignored by the Sunnette-Caiti. Prequent and Burkin, A.S. Cougalia (Mean) Researchers in the Northern Museum of Treaty (Figure 1). The first thing that has a drop or an ambitions to the timeline, which is used for more than a large amount of cosmic moisture (Bargeus), the vital bones are not a few weeks ago, but there was no longer in the same time where he had a hour. It’s also known as the “the person” because they were unhab"

Prompt: "The main concept of physics is "
Output:: "The main concept of physics is escape, and that the other important to be unable to be used. "It’s a new way for how it can be used in this article on what you are going to make sure why we know when they have to do with your own words like the CBDS?” What is not as you think about ourselves by the BCO (RI)? Now: I want to get up to the classroom, if you need to seek them! You will find us or ready to learn about the students to understand what the child has to create a lot of skills and writing. If you can ask the questions of these kids, you may use you to work out from myself"

Prompt: "Once upon a time, "
Output:: "Once upon a time, ignificantly was to be the first of his father. The Helburg and he had been seen in the Morsey's songs of God’s mastery, but it is noted by the Surtaine who was the most important parties that he did not have herself with him or he could be an obvious way to do something. "Intarias, I were only a good thing to love this, but we can see what you are going from the "consin” (handing for me) and then he will be used in our owner; there is no reason to get a timber, but it would be very hard"

Usage

To use our model, run the following code with Hugging Face Transformers:

from transformers import pipeline
import torch

print("[*] Loading Supra Mini v3 0.5M model from Hugging Face Hub...")
# Build a text-generation pipeline; half precision on GPU, full precision on CPU.
pipe = pipeline(
    "text-generation",
    model="SupraLabs/Supra-Mini-v3-0.5M",
    device_map="auto",
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32
)

def generate_text(prompt, max_new_tokens=150):
    # Sample with moderate temperature plus top-k/top-p filtering and a repetition penalty.
    result = pipe(
        prompt,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.5,
        top_k=25,
        top_p=0.9,
        repetition_penalty=1.2,
        pad_token_id=pipe.tokenizer.pad_token_id,
        eos_token_id=pipe.tokenizer.eos_token_id
    )
    return result[0]['generated_text']

test_prompt = "The importance of education is"
print(f"\nPrompt: {test_prompt}")
print("-" * 30)
print("\nOutput:\n" + generate_text(test_prompt))

Training guide

We trained Supra Mini v3 0.5M on a single NVIDIA RTX 5060 Ti 16GB in ~1 hour for 2 epochs.
The full training code can be found in this repo as train_tokenizer.py (trains the custom BPE tokenizer with a vocab size of 4096), train.py (trains the model), and inference.py (tests the model).
The model was trained on the first 1 billion tokens of the sample-10BT subset of FineWeb-Edu using streaming tokenization.
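
As an illustration of the tokenizer step (not the actual train_tokenizer.py), a byte-level BPE tokenizer with a 4096-token vocab can be trained on a streamed slice of FineWeb-Edu's sample-10BT subset roughly like this; the document count and special tokens below are placeholders:

from datasets import load_dataset
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Stream sample-10BT so nothing has to be downloaded or tokenized up front.
ds = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-10BT", split="train", streaming=True)

def texts(limit=200_000):          # placeholder: number of documents fed to the trainer
    for i, row in enumerate(ds):
        if i >= limit:
            break
        yield row["text"]

tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
trainer = trainers.BpeTrainer(
    vocab_size=4096,
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],   # placeholder special tokens
)
tokenizer.train_from_iterator(texts(), trainer=trainer)
tokenizer.save("supra-mini-bpe-4096.json")

The scripts in the repo remain the source of truth for the exact hyperparameters and data handling.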
