[Community Event] Doc Tests Sprint
This issue is part of our Doc Test Sprint. If you're interested in helping out, come join us on Discord and talk with other contributors!
Docstring examples are often the first point of contact when trying out a new library! So far we haven't done a very good job of ensuring that all docstring examples work correctly in 🤗 Transformers, but we're now committed to making sure every documentation example works, by testing each one daily via Python's doctest (https://docs.python.org/3/library/doctest.html).
In short, we should do the following for all models, for both PyTorch and TensorFlow:

- Check that the current doc examples run without failure
- Check whether the current doc example of the forward method is a sensible example for understanding the model, or whether it can be improved. E.g. is the example at https://huggingface.co/docs/transformers/v4.17.0/en/model_doc/bert#transformers.BertForQuestionAnswering.forward a good example of the model? Could it be improved?
- Add an expected output to the doc example and test it via Python's doctest (see Guide to contributing below)
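To illustrate what "expected output" means here, below is a minimal, self-contained sketch of how Python's doctest works. The function and its docstring are hypothetical (not taken from Transformers): doctest executes the `>>>` lines in the docstring and compares what they print against the line written directly below each one.

```python
import doctest


def token_count(text):
    """Count whitespace-separated tokens in a string.

    Example (doctest runs the `>>>` lines and compares the printed
    output against the line that follows each one):

    >>> token_count("hello world")
    2
    >>> token_count("")
    0
    """
    return len(text.split())


if __name__ == "__main__":
    # Collect and run every docstring example in this module.
    results = doctest.testmod(verbose=False)
    print(f"failed={results.failed}, attempted={results.attempted}")
```

If the expected line under a `>>>` example doesn't match the actual output, doctest reports it as a failure, which is exactly what the daily CI run catches.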
Adding a documentation test for a model is a great way to better understand how it works, a simple (possibly first) contribution to Transformers, and most importantly a very valuable contribution to the Transformers community 🔥
If you're interested in adding a documentation test, please read through the Guide to contributing below.
This issue is a call for contributors, to make sure docstring examples of existing model architectures work correctly. If you wish to contribute, reply in this thread with the architectures you'd like to take :)
Guide to contributing:

- Ensure you've read our contributing guidelines 📜
- Claim your architecture(s) in this thread (confirm no one is working on it) 🎯
- Implement the changes as in https://github.com/huggingface/transformers/pull/15987 (see the diff on the model architectures for a few examples) 💪
  - The file you want to look at is `src/transformers/models/[model_name]/modeling_[model_name].py`, `src/transformers/models/[model_name]/modeling_tf_[model_name].py`, `src/transformers/doc_utils.py` or `src/transformers/file_utils.py`
  - Make sure to run the doc example test locally as described in https://github.com/huggingface/transformers/tree/master/docs#for-python-files
  - Optionally, change the example docstring to a more sensible example that gives a better-suited result
  - Make the test pass
  - Add the file name to https://github.com/huggingface/transformers/blob/master/utils/documentation_tests.txt (making sure the file stays in alphabetical order)
  - Run the doc example test again locally
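For the actual repository files, follow the doc-testing instructions linked above. As a quick standalone illustration of the local workflow, the throw-away module below (a hypothetical file, not a Transformers one) shows how Python's built-in doctest runner can be pointed at a file from the command line:

```shell
# Write a throw-away module with one docstring example
# (hypothetical file, only to demonstrate the doctest workflow).
cat > /tmp/doctest_demo.py <<'EOF'
def double(x):
    """Double a number.

    >>> double(21)
    42
    """
    return 2 * x
EOF

# Run Python's built-in doctest runner on the file; it is silent and
# exits 0 on success, and prints a diff and exits non-zero on mismatch.
python -m doctest /tmp/doctest_demo.py && echo "doctests passed"
```

The repository's own setup runs the same checks through pytest, so a file that passes there will also pass this kind of direct doctest invocation.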
In addition, there are a few things we can also improve, for example:

- Fix some style issues: for example, change ``decoder_input_ids``` to `decoder_input_ids`.
- Use a small model checkpoint instead of a large one: for example, change "facebook/bart-large" to "facebook/bart-base" (and adjust the expected outputs, if any)

Then:

- Open the PR and tag me @patrickvonplaten @ydshieh or @patil-suraj (don't forget to run `make fixup` before your final commit) 🎊
- Note that some code is copied across our codebase. If you see a line like `# Copied from transformers.models.bert...`, it means the code is copied from that source, and our scripts will automatically keep it in sync. In that case, you should not edit the copied method! Instead, edit the original method it's copied from and run `make fixup` to synchronize the change across all the copies. Be sure you installed the development dependencies with `pip install -e ".[dev]"`, as described in the contributor guidelines above, so that the code quality tools in `make fixup` can run.
PyTorch Model Examples added to tests:
- [ ] ALBERT (@vumichien)
- [x] BART (@abdouaziz)
- [x] BEiT
- [ ] BERT (@vumichien)
- [ ] BigBird (@vumichien)
- [x] BigBirdPegasus
- [x] Blenderbot
- [x] BlenderbotSmall
- [ ] CamemBERT (@abdouaziz)
- [ ] Canine (@NielsRogge)
- [ ] CLIP (@Aanisha)
- [ ] ConvBERT (@simonzli)
- [x] ConvNext
- [ ] CTRL (@jeremyadamsfisher)
- [x] Data2VecAudio
- [ ] Data2VecText
- [ ] DeBERTa (@Tegzes)
- [ ] DeBERTa-v2 (@Tegzes)
- [x] DeiT
- [ ] DETR
- [ ] DistilBERT (@jmwoloso)
- [ ] DPR
- [ ] ELECTRA (@bhadreshpsavani)
- [ ] Encoder
- [ ] FairSeq
- [ ] FlauBERT (@abdouaziz)
- [ ] FNet
- [ ] Funnel
- [ ] GPT2 (@ArEnSc)
- [ ] GPT-J (@ArEnSc)
- [x] Hubert
- [ ] I-BERT (@abdouaziz)
- [ ] ImageGPT
- [ ] LayoutLM (chiefchiefling @ discord)
- [ ] LayoutLMv2
- [ ] LED
- [x] Longformer (@KMFODA)
- [ ] LUKE (@Tegzes)
- [ ] LXMERT
- [ ] M2M100
- [x] Marian
- [x] MaskFormer (@reichenbch)
- [x] mBART
- [ ] MegatronBert
- [ ] MobileBERT (@vumichien)
- [ ] MPNet
- [ ] mT5
- [ ] Nystromformer
- [ ] OpenAI
- [x] Pegasus
- [ ] Perceiver
- [x] PLBart
- [x] PoolFormer
- [ ] ProphetNet
- [ ] QDQBert
- [ ] RAG
- [ ] Realm
- [ ] Reformer
- [x] ResNet
- [ ] RemBERT
- [ ] RetriBERT
- [ ] RoBERTa (@patrickvonplaten )
- [ ] RoFormer
- [x] SegFormer
- [x] SEW
- [x] SEW-D
- [x] SpeechEncoderDecoder
- [x] Speech2Text
- [x] Speech2Text2
- [ ] Splinter
- [ ] SqueezeBERT
- [x] Swin
- [ ] T5 (@MarkusSagen)
- [ ] TAPAS (@NielsRogge)
- [ ] Transformer-XL (@simonzli)
- [ ] TrOCR (@arnaudstiegler)
- [x] UniSpeech
- [x] UniSpeechSat
- [x] Van
- [x] ViLT
- [x] VisionEncoderDecoder
- [ ] VisionTextDualEncoder
- [ ] VisualBert
- [x] ViT
- [x] ViTMAE
- [x] Wav2Vec2
- [x] WavLM
- [ ] XGLM
- [ ] XLM
- [ ] XLM-RoBERTa (@AbinayaM02)
- [ ] XLM-RoBERTa-XL
- [ ] XLMProphetNet
- [ ] XLNet
- [ ] YOSO
TensorFlow Model Examples added to tests:
- [ ] ALBERT (@vumichien)
- [ ] BART
- [ ] BEiT
- [ ] BERT (@vumichien)
- [ ] BigBird (@vumichien)
- [ ] BigBirdPegasus
- [ ] Blenderbot
- [ ] BlenderbotSmall
- [ ] CamemBERT
- [ ] Canine
- [ ] CLIP (@Aanisha)
- [ ] ConvBERT (@simonzli)
- [ ] ConvNext
- [ ] CTRL
- [ ] Data2VecAudio
- [ ] Data2VecText
- [ ] DeBERTa
- [ ] DeBERTa-v2
- [ ] DeiT
- [ ] DETR
- [ ] DistilBERT (@jmwoloso)
- [ ] DPR
- [ ] ELECTRA (@bhadreshpsavani)
- [ ] Encoder
- [ ] FairSeq
- [ ] FlauBERT
- [ ] FNet
- [ ] Funnel
- [ ] GPT2 (@cakiki)
- [ ] GPT-J (@cakiki)
- [ ] Hubert
- [ ] I-BERT
- [ ] ImageGPT
- [ ] LayoutLM
- [ ] LayoutLMv2
- [ ] LED
- [x] Longformer (@KMFODA)
- [ ] LUKE
- [ ] LXMERT
- [ ] M2M100
- [ ] Marian
- [x] MaskFormer (@reichenbch)
- [ ] mBART
- [ ] MegatronBert
- [ ] MobileBERT (@vumichien)
- [ ] MPNet
- [ ] mT5
- [ ] Nystromformer
- [ ] OpenAI
- [ ] Pegasus
- [ ] Perceiver
- [ ] PLBart
- [ ] PoolFormer
- [ ] ProphetNet
- [ ] QDQBert
- [ ] RAG
- [ ] Realm
- [ ] Reformer
- [ ] ResNet
- [ ] RemBERT
- [ ] RetriBERT
- [ ] RoBERTa (@patrickvonplaten)
- [ ] RoFormer
- [ ] SegFormer
- [ ] SEW
- [ ] SEW-D
- [ ] SpeechEncoderDecoder
- [ ] Speech2Text
- [ ] Speech2Text2
- [ ] Splinter
- [ ] SqueezeBERT
- [ ] Swin (@johko)
- [ ] T5 (@MarkusSagen)
- [ ] TAPAS
- [ ] Transformer-XL (@simonzli)
- [ ] TrOCR (@arnaudstiegler)
- [ ] UniSpeech
- [ ] UniSpeechSat
- [ ] Van
- [ ] ViLT
- [ ] VisionEncoderDecoder
- [ ] VisionTextDualEncoder
- [ ] VisualBert
- [ ] ViT (@johko)
- [ ] ViTMAE
- [ ] Wav2Vec2
- [ ] WavLM
- [ ] XGLM
- [ ] XLM
- [ ] XLM-RoBERTa (@AbinayaM02)
- [ ] XLM-RoBERTa-XL
- [ ] XLMProphetNet
- [ ] XLNet
- [ ] YOSO
@patrickvonplaten I would like to start with MaskFormer for TensorFlow/PyTorch, and catch up on how the event goes.
Awesome! Let me know if you have any questions :-)
Hello! I'd like to take on Longformer for Tensorflow/Pytorch please.
@patrickvonplaten I would like to start with T5 for pytorch and tensorflow
Sounds great!
LayoutLM is also taken as mentioned by a contributor on Discord!
@patrickvonplaten I would take GPT and GPT-J (TensorFlow editions) if those are still available.
I'm guessing GPT is GPT2?
I will take Bert, Albert, and Bigbird for both Tensorflow/Pytorch
I'll take Swin and ViT for Tensorflow
I'd like DistilBERT for both TF and PT please
> @patrickvonplaten I would take GPT and GPT-J (TensorFlow editions) if those are still available.
> I'm guessing GPT is GPT2?

@cakiki You can go for GPT2 (I updated the name in the list)
Can I try GPT2 and GPT-J for PyTorch, if you are not doing so @ydshieh?
I would like to try CLIP for Tensorflow and PyTorch.
I'll take CANINE and TAPAS.
> Can I try GPT2 and GPTJ for Pytorch? if @ydshieh you are not doing so?

@ArEnSc No, you can work on these 2 models :-) Thank you!
@ydshieh Since MobileBertForSequenceClassification is a copy of BertForSequenceClassification, I think I will check the doc test of MobileBert as well, to avoid the error from `make fixup`.
I'll take FlauBERT and CamemBERT.
@abdouaziz Awesome! Do you plan to work on both PyTorch and TensorFlow versions, or only one of them?
I would like to work on LUKE model for both TF and PT
@Tegzes you're lucky because there's no LUKE in TF ;) the list above actually just duplicates all models, but many models aren't available yet in TF.
In this case, I will also take DeBERTa and DeBERTa-v2 for PyTorch
@ydshieh
I plan to work only with PyTorch
> @Tegzes you're lucky because there's no LUKE in TF ;) the list above actually just duplicates all models, but many models aren't available yet in TF.

True - sorry, I was lazy when creating this list!
Happy to work on TrOCR (pytorch and TF)
I take RoBERTa in PT and TF
I would like to pick up XLM-RoBERTa in PT and TF.
I can work on ELECTRA for PT and TF
Hey guys,
We've just merged the first template for Roberta-like model doc tests: https://github.com/huggingface/transformers/pull/16363 :-)
Lots of models like ELECTRA, XLM-RoBERTa, DeBERTa and BERT are very similar in spirit, so it would be great if you could try to rebase your PR onto the change done in https://github.com/huggingface/transformers/pull/16363 . Usually all you need to do is add the correct {expected_outputs}, {expected_loss} and {checkpoint} to the docstring of each model (ideally giving sensible results :-)) until it passes locally, and then the file can be added to the tester :-)
Also if you have open PRs and need help, feel free to ping me or @ydshieh and link the PR here so that we can nicely gather everything :-)
One of the most difficult tasks here might be to actually find a well-working model. As a tip, here is what you can do:

- Find all models of your architecture, as it's always stated in the modeling files here: https://github.com/huggingface/transformers/blob/77c5a805366af9f6e8b7a9d4006a3d97b6d139a2/src/transformers/models/roberta/modeling_roberta.py#L67 e.g. for ELECTRA: https://huggingface.co/models?filter=electra
- Now click on the task (in the left sidebar) you're working on, e.g. if you work on `ForSequenceClassification` of a text model, go under this task filter: https://huggingface.co/models?other=electra&pipeline_tag=text-classification&sort=downloads
- Finally, click on the framework filter (in the left sidebar) you're working with, e.g. for TF: https://huggingface.co/models?library=tf&other=electra&pipeline_tag=text-classification&sort=downloads . If you see too few models, or too many poorly performing ones, in TF, you might also want to think about converting a good PT model to TF under your Hub name and using that one instead :-)