BERTology
There is a growing field of study concerned with investigating the inner workings of large-scale transformers like BERT (which some call “BERTology”). Some good examples of this field are:
- BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick: https://arxiv.org/abs/1905.05950
- Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650
- What Does BERT Look At? An Analysis of BERT’s Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: https://arxiv.org/abs/1906.04341
- CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure: https://arxiv.org/abs/2210.04633
To help this new field develop, we have added a few additional features to the BERT/GPT/GPT-2 models that give access to their inner representations, mainly adapted from the great work of Paul Michel (https://arxiv.org/abs/1905.10650):
- accessing all the hidden-states of BERT/GPT/GPT-2,
- accessing all the attention weights for each head of BERT/GPT/GPT-2,
- retrieving the heads' output values and gradients in order to compute head importance scores and prune heads as explained in https://arxiv.org/abs/1905.10650 (a short usage sketch follows this list).
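In recent versions of the library, the first two features are typically exposed through the `output_hidden_states` and `output_attentions` arguments, and heads can be removed with the `prune_heads()` method. The following is a minimal, illustrative sketch assuming those APIs; the model name and input sentence are only examples, and the same arguments work for the GPT/GPT-2 model classes:

```python
from transformers import BertModel, BertTokenizer

# Minimal sketch: load BERT so that it returns all hidden states and attention weights.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained(
    "bert-base-uncased", output_hidden_states=True, output_attentions=True
)

inputs = tokenizer("Hello, BERTology!", return_tensors="pt")
outputs = model(**inputs)

# Tuple of (embedding output + one tensor per layer), each of shape (batch, seq_len, hidden_size).
hidden_states = outputs.hidden_states
# Tuple with one tensor per layer, each of shape (batch, num_heads, seq_len, seq_len).
attentions = outputs.attentions

# Heads can also be pruned in place, e.g. heads 0 and 2 of layer 0 and head 1 of layer 1.
model.prune_heads({0: [0, 2], 1: [1]})
```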
To help you understand and use these features, we have added a specific example script, bertology.py, which extracts information from and prunes a model pre-trained on GLUE.
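As an illustration of the pruning idea the script builds on, the sketch below scores heads by the gradient of the loss with respect to a head mask, in the spirit of Michel et al. (https://arxiv.org/abs/1905.10650). The model name, input sentence, and label are placeholders, and the real script is more complete (data loading, averaging over a dataset, and an iterative masking schedule):

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Sketch of gradient-based head importance scoring (not the full bertology.py script).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

n_layers = model.config.num_hidden_layers
n_heads = model.config.num_attention_heads
# One mask entry per attention head; gradients w.r.t. these entries act as importance scores.
head_mask = torch.ones(n_layers, n_heads, requires_grad=True)

inputs = tokenizer("This movie was great!", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]), head_mask=head_mask)
outputs.loss.backward()

# Heads whose mask gradient is small contribute little to the loss and are pruning candidates.
head_importance = head_mask.grad.abs()
print(head_importance)
```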