noctrex
AI & ML interests
None yet
Recent Activity
new activity 7 minutes ago in noctrex/Mistral-Small-4-119B-2603-MXFP4_MOE-GGUF: Poor performance and pretty lobotomized
updated a model 1 day ago: noctrex/Mistral-Small-4-119B-2603-MXFP4_MOE-GGUF
published a model 1 day ago: noctrex/Mistral-Small-4-119B-2603-MXFP4_MOE-GGUF
Organizations
None yet
Poor performance and pretty lobotomized · 1 · #1 opened about 1 hour ago by mancub
MXFP4 vs other 4-bit quant algos? · 2 · #3 opened 3 days ago by dinerburger
New activity in noctrex/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-MXFP4_MOE-GGUF 4 days ago
can you please do an f16 version · 1 · #1 opened 4 days ago by Shuasimodo
It would be neat to see a Heretical version of this. · 1 · #1 opened about 1 month ago by SabinStargem
AI Model Evaluation Report: MiroThinker-1.7-Mini (GGUF/Ollama) · 1 · #1 opened 6 days ago by phanthai12
Would it make sense to get Qwen3-VL MXFP4 quants? · 20 · #2 opened about 2 months ago by ampersandru
command to create GGUF MXFP4 mixed with BF16 · 1 · #5 opened 13 days ago by ghit72
It's really good. · 👍 1 · 26 · #3 opened 21 days ago by Shuasimodo
Model performance · 2 · #1 opened 19 days ago by spanspek
Kind request for Qwen3.5-397B-A17B MXFP4 BF16 · 7 · #2 opened 20 days ago by dehnhaide
Increasing the precision of some of the weights when quantizing · 👍 4 · 57 · #2 opened 30 days ago by Shuasimodo
Is there some helpful regex to offload all MoE layers to the CPU? · 4 · #7 opened 20 days ago by hdnh2006
BF16 version? · 1 · #1 opened 20 days ago by Kackliqur
"Use this model" wrong tag by default. · 👍 2 · 1 · #2 opened 21 days ago by jorj2
Qwen3.5-27B? · 1 · #4 opened 20 days ago by wzgrx
Try a different model and/or config. · 2 · #1 opened 21 days ago by E7Reine
Embedded images? · 2 · #3 opened 24 days ago by coder543
BF16 has looping issues · 28 · #4 opened 28 days ago by jmander11
Tried to use with llama-cpp-python without success · 3 · #1 opened about 2 months ago by rriscoc
Infinite loop · 3 · #2 opened about 2 months ago by kielCAC