Update README.md
README.md CHANGED
@@ -10,6 +10,16 @@ pipeline_tag: text-generation
## <span style="color: red">NOTICE:</span> This model does, in fact, run on Inference Endpoints. Just click Deploy, unlike with regular GGUF models. The model is no longer stored here, merely linked. Enjoy <span style="color: red"><3</span>
<label>Code Sample (One-Shot)</label>
```json
{
  "inputs": "A plain old prompt with nothing else"
}
```
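A payload like this is just an HTTP POST to the deployed endpoint. Here is a minimal sketch in Python; the endpoint URL and token below are placeholders, not values from this repo, so substitute your own deployment's details:

```python
import requests

# Placeholder values: substitute your own deployment's URL and an HF token
# that has access to the endpoint.
ENDPOINT_URL = "https://your-endpoint.endpoints.huggingface.cloud"
HF_TOKEN = "hf_..."

response = requests.post(
    ENDPOINT_URL,
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    },
    # The one-shot payload from above: a plain prompt and nothing else.
    json={"inputs": "A plain old prompt with nothing else"},
)
print(response.json())
```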
## Multi-turn coming soon...
Hello! I wrote a simple container that makes it easy to run llama-cpp-python with GGUF models. My goal was a cheap way to play with Gemma, but then I thought I'd share it in case it's helpful. I'll probably make a bunch of these, so if you have any requests for GGUF or otherwise quantized Llama.cpp models to become Inference Endpoints, please feel free to reach out!
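The container code itself isn't reproduced here, but the core of serving a GGUF model this way boils down to a single llama-cpp-python call. A minimal sketch, assuming a local GGUF file at a placeholder path:

```python
from llama_cpp import Llama

# Illustrative sketch only, not this repo's actual container code.
# The model path is a placeholder for wherever the GGUF file lives on disk.
llm = Llama(model_path="/path/to/model.gguf")

# One-shot completion, mirroring the request payload shown above.
output = llm("A plain old prompt with nothing else", max_tokens=128)
print(output["choices"][0]["text"])
```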
# Files