Muhammad Fahid

7 comments by Muhammad Fahid

> I want to pre-train multilingual BERT using the existing mBERT weights.
> I have tried to find it but I could not find any mention of how mBERT was...

> I have a similar question as above, @peregilk: how do I add a domain-specific vocab.txt in any language other than English? In their official repo it says "This repository does not...
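
For reference, one common pattern for extending an existing checkpoint with domain-specific tokens in the transformers library is to add the tokens to the tokenizer and then resize the model's embedding matrix. This is only a sketch; the checkpoint name and token strings below are placeholders, not something from the official repo.

```python
from transformers import BertTokenizer, BertModel

# Example multilingual checkpoint; swap in whichever model you are adapting.
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertModel.from_pretrained("bert-base-multilingual-cased")

# Hypothetical domain-specific terms; replace with your own vocabulary list.
new_tokens = ["troponin", "myocarditis"]
num_added = tokenizer.add_tokens(new_tokens)

# The embedding matrix must grow to cover the newly added token ids
# (the new rows are randomly initialized and learned during further training).
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens")
```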

> I did a few more tests on this (as I mentioned in another post). I am no longer convinced about my own results. The challenge is that fine-tuning has...

> @muhammadfahid51 If I understand things correctly, BERT works at the token level. In addition it learns multi-token embeddings.
>
> Let's say we have the word "goodness". Let's say this...
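
To make the token-level point concrete, here is a small sketch with a pretrained WordPiece tokenizer; the checkpoint name is only an example, and the exact subword split depends on the vocabulary that was learned during pretraining.

```python
from transformers import BertTokenizer

# Any BERT checkpoint works here; bert-base-uncased is only an example.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Words missing from the WordPiece vocabulary are split into pieces,
# typically a stem plus "##"-prefixed continuations.
for word in ["goodness", "unbelievably"]:
    print(word, "->", tokenizer.tokenize(word))
```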

> ## BERT Usage
>
> Salaam team,
> Great work first of all 🥇
> Can we consider adding the BERT model to the list of models? @blackvitriol...

Hi, I have a similar issue. I trained a BPE tokenizer, and now when I load it using the transformers **AutoTokenizer** module, it gives me an error.

Use the model-specific classes: if you are pretraining RoBERTa, use RobertaModel.from_pretrained to load your custom model. Similarly, use RobertaTokenizer to load your custom tokenizer.
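
A minimal sketch of that, assuming the BPE tokenizer files (vocab.json and merges.txt) and the model checkpoint were saved to local directories; the paths below are placeholders.

```python
from transformers import RobertaTokenizer, RobertaModel

# Hypothetical local paths; point these at wherever your tokenizer and
# checkpoint were actually saved. The tokenizer directory is assumed to
# contain the BPE files produced during training (vocab.json and merges.txt).
tokenizer = RobertaTokenizer.from_pretrained("./my_bpe_tokenizer")
model = RobertaModel.from_pretrained("./my_roberta_checkpoint")

print(tokenizer.tokenize("Loading a custom BPE tokenizer"))
```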