Instructions to use nvidia/dragon-multiturn-query-encoder with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
- Inference
- Notebooks
  - Google Colab
  - Kaggle

How to use nvidia/dragon-multiturn-query-encoder with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="nvidia/dragon-multiturn-query-encoder")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nvidia/dragon-multiturn-query-encoder")
model = AutoModel.from_pretrained("nvidia/dragon-multiturn-query-encoder")
```
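Dragon-multiturn is a dual encoder: the query encoder loaded above embeds the multi-turn conversation, while a companion context encoder embeds candidate passages, and passages are ranked by dot-product similarity (the README diff below references the `ranked_results = torch.argsort(...)` line from that snippet). The sketch below assumes the companion model is published as `nvidia/dragon-multiturn-context-encoder` and that dialogue turns are concatenated into a single query string; the example query and contexts are illustrative, so verify the exact names and input format against the model card.

```python
# Minimal retrieval sketch with the Dragon-multiturn encoders (assumed companion
# context encoder name: nvidia/dragon-multiturn-context-encoder).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nvidia/dragon-multiturn-query-encoder")
query_encoder = AutoModel.from_pretrained("nvidia/dragon-multiturn-query-encoder")
context_encoder = AutoModel.from_pretrained("nvidia/dragon-multiturn-context-encoder")

# Multi-turn query: concatenate the dialogue history into one string (illustrative format).
query = (
    "user: What is Dragon-multiturn?\n"
    "agent: It is a conversational dense retriever.\n"
    "user: What is it built on top of?"
)
contexts = [
    "Dragon-multiturn is built on top of the Dragon retriever.",
    "The sky appears blue because of Rayleigh scattering.",
]

query_inputs = tokenizer(query, return_tensors="pt")
ctx_inputs = tokenizer(contexts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    # Use the [CLS] token embedding (position 0) as the dense representation.
    query_emb = query_encoder(**query_inputs).last_hidden_state[:, 0, :]  # (1, hidden)
    ctx_emb = context_encoder(**ctx_inputs).last_hidden_state[:, 0, :]    # (num_ctx, hidden)

# Dot-product similarities, then rank contexts for the query.
similarities = query_emb @ ctx_emb.transpose(0, 1)                        # (1, num_ctx)
ranked_results = torch.argsort(similarities, dim=-1, descending=True)     # (1, num_ctx)
print(ranked_results)
```

Taking the position-0 ([CLS]) embedding as the dense vector is the usual convention for Dragon-style bi-encoders; check the model card if you need the exact scoring setup used for the reported results.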
Update README.md
README.md CHANGED

````diff
@@ -126,7 +126,7 @@ ranked_results = torch.argsort(similarities, dim=-1, descending=True) # (1, num_
 ```
 
 ## License
-Dragon-multiturn is built on top of [Dragon](https://arxiv.org/abs/2302.07452). We refer users to the original license of the Dragon model.
+Dragon-multiturn is built on top of [Dragon](https://arxiv.org/abs/2302.07452). We refer users to the original license of the Dragon model. Dragon-multiturn is also subject to the [Terms of Use](https://openai.com/policies/terms-of-use).
 
 
 ## Correspondence to
````