This is a Hearthfire-24B fine-tune, produced with p-e-w's Heretic (v1.1.0) abliteration engine merged with the Magnitude-Preserving Orthogonal Ablation PR.
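
As a rough illustration of what that means (not Heretic's actual implementation, whose details live in the linked PR): orthogonal ablation projects a learned refusal direction out of selected weight matrices, and a magnitude-preserving variant then rescales the weights so their norms are unchanged. A hedged PyTorch sketch with hypothetical names:

```python
import torch

def magnitude_preserving_ablation(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Illustrative sketch: remove the refusal direction from a weight
    matrix's output space, then restore each row's original norm.
    Heretic and the MPOA PR may differ in the details."""
    r = refusal_dir / refusal_dir.norm()          # unit vector, shape (d_out,)
    row_norms = W.norm(dim=1, keepdim=True)       # original per-row magnitudes
    W_ablated = W - torch.outer(r, r @ W)         # (I - r r^T) W: orthogonal ablation
    new_norms = W_ablated.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return W_ablated * (row_norms / new_norms)    # rescale to preserve magnitudes
```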

Note: The tokenizer is a bit wild on this model. You should add both </s> (Mistral; Token ID 2) and <|im_end|> (ChatML; Token ID 999) EOS tokens to your Stop Sequences in your favourite frontend.
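
With an OpenAI-compatible frontend or server, that amounts to passing both strings as stop sequences; a minimal sketch (the base URL below is a hypothetical local endpoint):

```python
from openai import OpenAI

# Hypothetical local OpenAI-compatible server hosting the model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

response = client.chat.completions.create(
    model="MuXodious/Hearthfire-24B-absolute-heresy",
    messages=[{"role": "user", "content": "> You peer into the darkness."}],
    stop=["</s>", "<|im_end|>"],  # cover both the Mistral and ChatML EOS tokens
)
print(response.choices[0].message.content)
```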

Heretication Results

| Score Metric | Value | Parameter | Value |
|---|---|---|---|
| Refusals | 8/100 | direction_index | per layer |
| KL Divergence | 0.0670 | attn.o_proj.max_weight | 1.32 |
| Initial Refusals | 97/100 | attn.o_proj.max_weight_position | 23.63 |
| | | attn.o_proj.min_weight | 1.20 |
| | | attn.o_proj.min_weight_distance | 20.03 |
| | | mlp.down_proj.max_weight | 1.25 |
| | | mlp.down_proj.max_weight_position | 23.98 |
| | | mlp.down_proj.min_weight | 0.68 |
| | | mlp.down_proj.min_weight_distance | 16.11 |
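
For context, the KL Divergence score quantifies how far the hereticated model's next-token distribution drifts from the original model's on benign prompts. A minimal sketch of that kind of measurement in PyTorch (Heretic's actual evaluation protocol may differ):

```python
import torch.nn.functional as F
from torch import Tensor

def next_token_kl(logits_orig: Tensor, logits_ablated: Tensor) -> float:
    """KL(P_orig || P_ablated) over one prompt's next-token distribution.
    Sketch only; Heretic's exact measurement may differ."""
    logp_orig = F.log_softmax(logits_orig, dim=-1)
    logp_abl = F.log_softmax(logits_ablated, dim=-1)
    # F.kl_div(input, target) computes KL(target || input) when both are log-probs.
    return F.kl_div(logp_abl, logp_orig, log_target=True, reduction="sum").item()
```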

Degree of Heretication

The Heresy Index weighs the resulting model's corruption by the process (KL Divergence) against its abolition of doctrine (Refusals) to reach a final classification verdict.

| Index Entry | Classification | Analysis |
|---|---|---|
| Absolute | Absolute Heresy | Less than 10/100 Refusals and up to 0.10 KL Divergence |
| Tainted | Tainted Heresy | Around 11-25/100 Refusals and/or 0.11-0.20 KL Divergence |
| Impotent | Impotent Heresy | Anything above 25/100 Refusals and 0.21 KL Divergence |

Note: This is an arbitrary classification inspired by Warhammer 40K; it has no tangible bearing on the model's performance.
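
Expressed as code, the classification reads roughly as follows; the boundary handling is my reading of the ranges above, and the author's exact cutoffs may differ.

```python
def heresy_index(refusals: int, kl_divergence: float) -> str:
    """Classify a hereticated model per the table above.
    Boundary handling is an interpretation; the original cutoffs may differ."""
    if refusals < 10 and kl_divergence <= 0.10:
        return "Absolute Heresy"
    if refusals <= 25 or kl_divergence <= 0.20:
        return "Tainted Heresy"
    return "Impotent Heresy"

# This model: 8/100 refusals, 0.0670 KL divergence.
print(heresy_index(8, 0.0670))  # -> Absolute Heresy
```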



Hearthfire-24B

Hearthfire is a narrative longform writing model designed to embrace the quiet moments between the chaos. While most roleplay models are trained to relentlessly drive the plot forward with high-stakes action and constant external pressure, Hearthfire is tuned to appreciate atmosphere, introspection, and the slow burn of a scene.

It prioritizes vibes over velocity. It is comfortable with silence. It will not force a goblin attack just because the conversation lulled.

If you want to easily try this model, you can do so for free at https://aidungeon.com.

We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Hearthfire was created.

Quantized GGUF weights can be downloaded here.

Model details

Hearthfire 24B was trained with SFT (Supervised Fine-Tuning) on top of Mistral Small 3.2 Instruct, using a single dataset of several thousand longform writing examples (8-16K context) in second-person present tense, supplemented with ~10% third-person data in the same style.

The training data utilizes a 'continue-heavy' structure, consisting of extended narrative blocks interspersed with sparse player actions, allowing the model to drive the plot all by itself if that’s what the user desires.
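
No training code or data ships with this card, but a minimal sketch of the described SFT setup using TRL might look like the following; the dataset file and every hyperparameter shown are assumptions, not the published recipe.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical file of longform examples (one {"text": ...} record per line);
# the actual dataset is not public.
dataset = load_dataset("json", data_files="longform_writing.jsonl", split="train")

training_args = SFTConfig(
    output_dir="hearthfire-24b-sft",
    num_train_epochs=1,              # assumed; real hyperparameters were not published
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-Small-3.2-24B-Instruct-2506",
    train_dataset=dataset,
    args=training_args,
)
trainer.train()
```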

Inference

Mistral Small 3.2 is sensitive to higher temperatures, so the following settings are recommended as a baseline. Nothing stops you from experimenting with these, of course.

"temperature": 0.8,
"repetition_penalty": 1.05,
"min_p": 0.025

Limitations

Pacing: This model is deliberately slower-paced. It may not react to player actions with immediate dramatic consequences, preferring to expand on the current state.

Tone: Unlike the Wayfarer line of models, which emphasizes grit and consequence, Hearthfire retains more of the base model's inherent warmth. It is inclined to be cooperative and atmospheric rather than hostile or punishing.

Agency: This model will happily write in your stead, acting and speaking for you to maintain the narrative flow. This is intended behavior; the alternative would be for the model to describe the shadows of flickering flames for three paragraphs to avoid touching your character, which disrupts the natural interaction of the scene.

Prompt Format

This model was trained using ChatML.

```
<|im_start|>system
Write immersive narratives in second-person present tense.<|im_end|>
<|im_start|>user
> You peer into the darkness.<|im_end|>
<|im_start|>assistant
It takes a moment for your eyes to adjust to the gloom, revealing the source of the noise: a small, startled squirrel frozen mid-step.<|im_end|>
```
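
If the repository's chat template is configured for ChatML, tokenizer.apply_chat_template should render exactly this layout; given the tokenizer quirks noted at the top, it is worth printing the result once to verify:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MuXodious/Hearthfire-24B-absolute-heresy")

messages = [
    {"role": "system", "content": "Write immersive narratives in second-person present tense."},
    {"role": "user", "content": "> You peer into the darkness."},
]

# tokenize=False returns the raw prompt string; add_generation_prompt=True
# appends the assistant header so the model continues the narrative.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```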

Credits

Thanks to Gryphe Padar for collaborating on this finetune with us!
