ERNIE-21B-A3B-GLM-4.7-Flash-Thinking
This is an uncensored, full deep-thinking Ernie 21B-A3B (MoE, 64 experts) fine-tune, trained on a GLM 4.7 Flash reasoning dataset via Unsloth on local hardware running Linux (for Windows).
Note that this model is mostly uncensored right from the "factory", so to speak.
The model excels at creative work (brainstorming, creative prose) as well as general usage.
Reasoning is compact yet very detailed, and gets right to the "point", so to speak.
CRITICAL SETTINGS:
- For creative use, a repetition penalty of 1.01 to 1.1 is suggested.
- For general work, use a repetition penalty of 1 (off), 1.05, or 1.1 (see the sketch below).
- Lower quants MAY LOOP in some cases.
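As a rough sketch of applying these settings with llama-cpp-python (the GGUF filename below is a placeholder, not an actual release artifact; adjust paths and values to your quant):

```python
# Minimal sketch, assuming llama-cpp-python and a local GGUF quant of this model.
from llama_cpp import Llama

llm = Llama(
    model_path="ERNIE-21B-A3B-GLM-4.7-Flash-Thinking-Q6_K.gguf",  # placeholder filename
    n_ctx=16384,  # raise toward 128k if you have the RAM/VRAM
)

# General work: rep pen 1.0 (off) to 1.1; creative: 1.01 to 1.1.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Brainstorm five story hooks about a lighthouse."}],
    temperature=0.8,
    repeat_penalty=1.05,
    max_tokens=1024,
)
print(out["choices"][0]["message"]["content"])
```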
Reasoning affects:
- General model operation.
- Output generation.
- Benchmarks.
Model Features:
- 128k context
- Temperature range: 0.1 to 2.5.
- Reasoning is temperature-stable.
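If you run the model behind an OpenAI-compatible local server (llama.cpp server, KoboldCpp, LM Studio, etc.), the same settings apply; a minimal sketch, assuming a server at localhost:8080 and a placeholder model name:

```python
# Sketch only: assumes the `openai` Python package and a local OpenAI-compatible
# endpoint; the URL, port, and model id are placeholders, not fixed values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="ernie-21b-a3b-glm-4.7-flash-thinking",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize the plot of Moby-Dick in three sentences."}],
    temperature=0.7,  # the card's suggested range is 0.1 to 2.5; reasoning is temperature-stable
    max_tokens=512,
)
print(resp.choices[0].message.content)
```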
You may want to visit Baidu's repo for the base model for root/core benchmarks and settings:
https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking
Enjoy the freedom!
BENCHMARKS (this fine-tune vs. the regular model):
- arc_challenge: 0.372 vs 0.331
- arc_easy: 0.431 vs 0.440
- boolq: 0.622 vs 0.628
- hellaswag: 0.680 vs 0.663
- openbookqa: 0.366 vs 0.338
- piqa: 0.751 vs 0.725
- winogrande: 0.634 vs 0.567
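The task names above match lm-evaluation-harness tasks; a sketch of how such numbers are typically produced (the repo id is a placeholder, and exact arguments depend on your lm-eval version):

```python
# Sketch only: assumes EleutherAI's lm-evaluation-harness (`pip install lm-eval`)
# and a Transformers-loadable copy of the model; the repo id is a placeholder.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=your-username/ERNIE-21B-A3B-GLM-4.7-Flash-Thinking",  # placeholder
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag",
           "openbookqa", "piqa", "winogrande"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```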
SPECIAL THANKS TO:
- Team "TeichAI" for the excellent dataset.
- Team "Unsloth" for making the training painless.
- Team "Nightmedia" for Benchmarks and co-labing.
Settings for CHAT / ROLEPLAY and/or SMOOTHER operation of this model:
In "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern":
Set the "Smoothing_factor" to 1.5.
- In KoboldCpp: Settings -> Samplers -> Advanced -> "Smooth_F"
- In text-generation-webui: Parameters -> lower right.
- In Silly Tavern this is called "Smoothing".
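The same smoothing setting can also be passed through KoboldCpp's local HTTP API; a sketch, assuming the default port and that your KoboldCpp build exposes a "smoothing_factor" field (field names are my assumption, so verify against your version's API docs):

```python
# Sketch only: payload fields follow KoboldCpp's /api/v1/generate endpoint as I
# understand it; check your KoboldCpp version before relying on them.
import requests

payload = {
    "prompt": "Write the opening paragraph of a cozy mystery.",
    "max_length": 400,
    "temperature": 0.9,
    "rep_pen": 1.02,          # low rep pen for creative use
    "smoothing_factor": 1.5,  # quadratic sampling / "Smooth_F"
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=600)
print(r.json()["results"][0]["text"])
```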
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
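A sketch of pulling just the tokenizer/config files needed for the "llama_HF" loader from such a source repo (the repo id below is a placeholder):

```python
# Sketch only: assumes huggingface_hub is installed; the repo id is a placeholder.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="your-username/ERNIE-21B-A3B-GLM-4.7-Flash-Thinking",  # placeholder
    allow_patterns=["*.json", "tokenizer*"],  # config + tokenizer files only, no weights
    local_dir="models/ERNIE-21B-A3B-GLM-4.7-Flash-Thinking",
)
```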
OTHER OPTIONS:
- Increase the repetition penalty to 1.1 to 1.15 (you do not need to do this if you use "smoothing_factor").
- If the interface/program you use to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted above.
Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here: