Instructions to use MCG-NJU/videomae-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use MCG-NJU/videomae-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("video-classification", model="MCG-NJU/videomae-base")

# Load model directly
from transformers import AutoImageProcessor, AutoModelForPreTraining

processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = AutoModelForPreTraining.from_pretrained("MCG-NJU/videomae-base")
```
- Notebooks
- Google Colab
- Kaggle
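Before feeding a clip to the model, it helps to know the input shape the processor produces. The sketch below builds a dummy clip and checks the token count, assuming VideoMAE-base's commonly documented defaults (16 frames at 224×224, 16×16 spatial patches, tubelet size 2); it runs without downloading the checkpoint.

```python
import numpy as np

# Assumed VideoMAE-base defaults: 16 frames, 224x224 input,
# 16x16 spatial patches, temporal tubelet size 2.
num_frames, height, width = 16, 224, 224
patch_size, tubelet_size = 16, 2

# A dummy clip: a list of RGB frames, the format the image processor accepts.
video = [
    np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
    for _ in range(num_frames)
]

# After preprocessing, pixel_values is shaped
# (batch, num_frames, channels, height, width) = (1, 16, 3, 224, 224).
pixel_values_shape = (1, num_frames, 3, height, width)

# The encoder tokenizes the clip into spatio-temporal tubes:
# (16 / 2) temporal slices x (224 / 16)^2 spatial patches per slice.
num_patches = (
    (num_frames // tubelet_size)
    * (height // patch_size)
    * (width // patch_size)
)
print(num_patches)  # 1568 tokens per clip under these assumptions
```

With these defaults, each clip becomes 1568 tokens, which is the sequence length the pretraining objective masks over.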