---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mistral
- unsloth
- transformers
---

# I quanted this from the Unsloth upload of Mistral Nemo Instruct.
|
|
[The Unsloth upload can be found here](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407)

[This is the base Mistral Nemo Instruct model](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct)
|
|
EXL2 quantization seemed to work. I ran a few tests and saw zero issues generating text up to 32k context size. I didn't try higher than that, but I'm uploading these so folks can start testing. I was pleasantly surprised by its roleplay capability; it seemed to latch onto character traits very well.
|
|
[6BPW](https://huggingface.co/Statuo/Mistral_Nemo_Instruct_EXL2_6bpw)

[4BPW](https://huggingface.co/Statuo/Mistral_Nemo_Instruct_EXL2_4bpw)
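
If you want to try one of these quants yourself, a minimal loading sketch with the exllamav2 Python API might look like the following. This is only a sketch under assumptions: the local model path, context length, sampler values, and prompt are placeholders I chose for illustration, not something from this card — adjust them for your setup and VRAM.

```python
# Sketch: load an EXL2 quant with the exllamav2 library and generate text.
# The model directory, max_seq_len, and sampler settings below are
# assumptions for illustration; tune them for your hardware.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Mistral_Nemo_Instruct_EXL2_6bpw"  # local download of the quant
config.prepare()
config.max_seq_len = 32768  # the card reports clean generation up to 32k context

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs as needed

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.95

prompt = "[INST] Write a short greeting. [/INST]"  # Mistral-style instruct prompt
print(generator.generate_simple(prompt, settings, 128))
```

Running this requires the quant downloaded locally and enough VRAM for the weights plus the KV cache at your chosen context length.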