Tags: Translation, Transformers, PyTorch, TensorFlow, t5, text-generation, summarization, text-generation-inference
Instructions to use google-t5/t5-11b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google-t5/t5-11b with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "translation" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("translation", model="google-t5/t5-11b")
```

```python
# Load model directly
# AutoModelWithLMHead is deprecated; AutoModelForSeq2SeqLM is the equivalent
# class for an encoder-decoder model such as T5.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-11b")
```

A short generation example follows the notebook links below.

- Notebooks
- Google Colab
- Kaggle
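The snippets above only load the tokenizer and model. As a minimal generation sketch, assuming the 11B checkpoint fits in available memory (T5 selects its task through a text prefix such as "translate English to German: "):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-11b")

# T5 is a text-to-text model: the task is encoded in the input prefix.
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```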
Split T5-11b model into shards (#7), opened by iarroyof:
Dear all, I have recently been facing a problem loading this model onto multiple GPUs. I realized that its weights are stored in a single 45 GB file, which I think prevents the use of device_map='auto'. Please let me know whether it is possible to download smaller shards, or what the correct way to solve this problem is.
Thank you in advance.
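One possible approach, sketched below under the assumption that the machine has enough CPU RAM to load the full checkpoint once: re-save the weights with save_pretrained and its max_shard_size argument, then reload the sharded copy with device_map="auto" (this requires the accelerate package; the local path "t5-11b-sharded" is illustrative, not part of the original post):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load once on CPU; this needs roughly as much RAM as the 45 GB checkpoint.
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-11b")

# Re-save the weights split into shards of at most 5 GB each.
# "t5-11b-sharded" is a hypothetical local directory.
model.save_pretrained("t5-11b-sharded", max_shard_size="5GB")
tokenizer.save_pretrained("t5-11b-sharded")

# The sharded copy can then be dispatched across the available GPUs
# (requires `accelerate` to be installed).
model = AutoModelForSeq2SeqLM.from_pretrained("t5-11b-sharded", device_map="auto")
```

Sharding mainly avoids having to hold the entire 45 GB file in memory at once during loading; with split weights, each shard can be read and placed on a device one at a time.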