Transformers documentation
Efficient Inference on a Single GPU
This document will be completed soon with information on how to run inference on a single GPU. In the meantime, you can check out the guide for training on a single GPU and the guide for inference on CPUs.
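Until the full guide is written, a minimal single-GPU inference workflow can be sketched as follows: load a model in half precision (to save GPU memory), move it to the GPU, and generate under `torch.no_grad()`. This is a general sketch, not the recipe this guide will eventually recommend; the checkpoint name is illustrative (any causal LM works), and the code falls back to CPU in full precision when no CUDA device is available.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; substitute any causal language model.
model_name = "sshleifer/tiny-gpt2"

# Use fp16 on GPU to roughly halve weight memory; fall back to fp32 on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=dtype).to(device)
model.eval()  # disable dropout etc. for inference

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(device)
with torch.no_grad():  # no gradients needed, saves memory and time
    output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)

text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```

Loading with `torch_dtype=torch.float16` keeps the weights in half precision end to end, which is usually the first and easiest memory saving on a single GPU.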