Gemma3-27B-it-vl-Polaris-HI16-Heretic-Uncensored-INSTRUCT

Gemma 3 27B-it, tuned via Unsloth on the "Polaris Alpha" dataset to improve the model's general intelligence.

This model is also fully uncensored via Heretic, THEN tuned via Unsloth on local hardware.

This model does what you want. Exactly what you want, no fuss - no nanny.

Excels at creative writing as well as general use cases.

The Polaris dataset holds the strongest record for increasing a model's core intelligence as measured by ARC-Challenge ("arc-c"), with scores exceeding 0.600 on average.

This tuning also carries over to images: image processing is intact and fully functional.

Tuning affects:

  • Image "intelligence"
  • Output generation

Model Features:

  • 128k context
  • Temperature range: 0.1 to 2.5

Enjoy the freedom!


SPECIAL THANKS TO:

  • Team "TeichAI" for the excellent dataset.
  • Team "Nightmedia" for the benchmarking and colab'ing.
  • Team "Unsloth" for making the training painless.

BENCHMARKS:

Benchmark       This model   Heretic uncensored base
arc_challenge   0.661        0.557
arc_easy        0.816        0.711
boolq           0.878        0.868
hellaswag       0.763        0.533
openbookqa      0.464        0.452
piqa            0.808        0.706
winogrande      0.762        0.695
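For quick reference, the per-benchmark gains over the Heretic uncensored base can be computed directly from the scores above (numbers copied from this card):

```python
# Benchmark scores copied from the table above.
tuned = {"arc_challenge": 0.661, "arc_easy": 0.816, "boolq": 0.878,
         "hellaswag": 0.763, "openbookqa": 0.464, "piqa": 0.808,
         "winogrande": 0.762}
base = {"arc_challenge": 0.557, "arc_easy": 0.711, "boolq": 0.868,
        "hellaswag": 0.533, "openbookqa": 0.452, "piqa": 0.706,
        "winogrande": 0.695}

# Absolute gain per benchmark, and the average gain across all seven.
deltas = {k: round(tuned[k] - base[k], 3) for k in tuned}
avg_gain = round(sum(deltas.values()) / len(deltas), 3)
print(deltas)    # hellaswag shows the largest jump: +0.23
print(avg_gain)  # average gain: 0.09
```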

OPTIONAL: System prompts

This will enhance output generation.

In most cases you do not need to use these.

One is "all business"; the other is for "fun".

"All business":

Think deeply and carefully about the user's request. Compose your thoughts about the user's prompt between <think> and </think> tags, then output the final answer based on your thoughts.

"Fun":

You are the JOKER from Batman. You think (put your thoughts between <think> and </think> tags), act and talk like the joker. Be Evil.
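If you are building prompts by hand rather than through a chat template, note that Gemma-family models use `<start_of_turn>` / `<end_of_turn>` turn markers and have no dedicated system role, so a system prompt is typically prepended to the first user turn. A minimal sketch (the helper name is mine, not part of this model's tooling):

```python
def build_gemma_prompt(system_prompt: str, user_message: str) -> str:
    """Build a Gemma-style chat prompt. Gemma templates have no separate
    system role, so the system prompt is prepended to the first user turn."""
    user_turn = f"{system_prompt}\n\n{user_message}" if system_prompt else user_message
    return (
        f"<start_of_turn>user\n{user_turn}<end_of_turn>\n"
        f"<start_of_turn>model\n"
    )

# Using the "all business" system prompt from above:
prompt = build_gemma_prompt(
    "Think deeply and carefully about the user's request. Compose your "
    "thoughts about the user's prompt between <think> and </think> tags, "
    "then output the final answer based on your thoughts.",
    "Explain quicksort in two sentences.",
)
```

For most front ends (KoboldCpp, Silly Tavern, etc.) the built-in Gemma template handles this for you; the sketch is only for raw-completion use.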

Settings for CHAT / ROLEPLAY and/or SMOOTHER operation of this model:

In "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern":

Set the "Smoothing_factor" to 1.5.

  • In KoboldCpp: Settings -> Samplers -> Advanced -> "Smooth_F"
  • In text-generation-webui: Parameters -> lower right.
  • In Silly Tavern this is called "Smoothing".

NOTE: For "text-generation-webui", if you are using GGUFs you need the "llama_HF" loader (which involves downloading some config files from the SOURCE version of this model).

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

  • Increase repetition penalty ("rep pen") to 1.1 to 1.15 (not needed if you use "smoothing_factor").

  • If the interface/program you use to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This is a "Class 1" model:

For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide that addresses common model issues and covers methods to improve performance for all use cases, including chat and roleplay, please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

