morriszms committed on
Commit f94ed92 · verified · 1 Parent(s): 18f7c20

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TwinLlama-3.1-8B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,84 @@
+ ---
+ base_model: mlabonne/TwinLlama-3.1-8B
+ datasets:
+ - mlabonne/llmtwin
+ language:
+ - en
+ library_name: transformers
+ license: apache-2.0
+ tags:
+ - unsloth
+ - trl
+ - sft
+ - TensorBlock
+ - GGUF
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## mlabonne/TwinLlama-3.1-8B - GGUF
+
+ This repo contains GGUF format model files for [mlabonne/TwinLlama-3.1-8B](https://huggingface.co/mlabonne/TwinLlama-3.1-8B).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [TwinLlama-3.1-8B-Q2_K.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q2_K.gguf) | Q2_K | 2.961 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [TwinLlama-3.1-8B-Q3_K_S.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_S.gguf) | Q3_K_S | 3.413 GB | very small, high quality loss |
+ | [TwinLlama-3.1-8B-Q3_K_M.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_M.gguf) | Q3_K_M | 3.743 GB | very small, high quality loss |
+ | [TwinLlama-3.1-8B-Q3_K_L.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_L.gguf) | Q3_K_L | 4.025 GB | small, substantial quality loss |
+ | [TwinLlama-3.1-8B-Q4_0.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q4_0.gguf) | Q4_0 | 4.341 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [TwinLlama-3.1-8B-Q4_K_S.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q4_K_S.gguf) | Q4_K_S | 4.370 GB | small, greater quality loss |
+ | [TwinLlama-3.1-8B-Q4_K_M.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q4_K_M.gguf) | Q4_K_M | 4.583 GB | medium, balanced quality - recommended |
+ | [TwinLlama-3.1-8B-Q5_0.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q5_0.gguf) | Q5_0 | 5.215 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [TwinLlama-3.1-8B-Q5_K_S.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q5_K_S.gguf) | Q5_K_S | 5.215 GB | large, low quality loss - recommended |
+ | [TwinLlama-3.1-8B-Q5_K_M.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q5_K_M.gguf) | Q5_K_M | 5.339 GB | large, very low quality loss - recommended |
+ | [TwinLlama-3.1-8B-Q6_K.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss |
+ | [TwinLlama-3.1-8B-Q8_0.gguf](https://huggingface.co/tensorblock/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q8_0.gguf) | Q8_0 | 7.954 GB | very large, extremely low quality loss - not recommended |
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/TwinLlama-3.1-8B-GGUF --include "TwinLlama-3.1-8B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/TwinLlama-3.1-8B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
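Each model file's Git LFS pointer (shown further down in this commit) records its SHA-256 in an `oid` line, so a download can be verified against it. Assuming a POSIX shell with `sha256sum` and `awk` available, a small check might look like this sketch:

```shell
# verify_gguf FILE EXPECTED_SHA256
# Compare a downloaded file's SHA-256 against its Git LFS oid.
verify_gguf() {
    actual=$(sha256sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "checksum OK"
    else
        echo "checksum MISMATCH"
        return 1
    fi
}

# Example using the Q2_K file's oid from this repo's LFS pointer:
# verify_gguf MY_LOCAL_DIR/TwinLlama-3.1-8B-Q2_K.gguf \
#     28475637cb11fe68315439edbf0f718413ad3b5978d9d83cd4bc38226986ae69
```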
TwinLlama-3.1-8B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28475637cb11fe68315439edbf0f718413ad3b5978d9d83cd4bc38226986ae69
+ size 3179131712

TwinLlama-3.1-8B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db0bf3f9393170258384415ed9d88b954ecba59d4cfb68ab26a976288ca720fd
+ size 4321956672

TwinLlama-3.1-8B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:200a46bac16837098eb8b4e748e7c672c4df3ffe2821b0d3225a5ab8b276ac53
+ size 4018918208

TwinLlama-3.1-8B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13414bcd16b626bc560b076f1505c66ee7aabaf5087722e83adb66109e20d7d7
+ size 3664499520

TwinLlama-3.1-8B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4692a0cdd98fde723e7b23e41db783e5df3951e677d043e8918b00c1e88c8ff
+ size 4661211968

TwinLlama-3.1-8B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:785475247913b5407691cb01d58ea50e5521c8356ecf68110160ee714f86ee74
+ size 4920734528

TwinLlama-3.1-8B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba36e7336e86ed068e11d2861ea648aa66e8cc487a08d3f623366fad1946a3c9
+ size 4692669248

TwinLlama-3.1-8B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ef68f6b89a0dd40b050a943246f1c7d18b221491896a276d709c1bcbaaf1624
+ size 5599294272

TwinLlama-3.1-8B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0b8061705a5fddf5f6c79dc998b76f43dd2ccfa4601574f4f1f9f60969dba89
+ size 5732987712

TwinLlama-3.1-8B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb17203bc73ac67f2020358124b51798b6dfb379e98db0cb88e0bc40d841cfaa
+ size 5599294272

TwinLlama-3.1-8B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:399c9033792412c6a51107e32dfd317cfd8f713515fc0dfba36ef25daac47bcf
+ size 6596006720

TwinLlama-3.1-8B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c38e4f36d9f24053427f6e800c1b4b15e11da61e0ebd1d1e5739591c7af60c47
+ size 8540771136