Relation extraction with cnSchema fails with: Model name 'bert-base-chinese' was not found in tokenizers model name list
Describe the question
File "C:\Users\DuYH\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\tokenization_utils_base.py", line 1584, in from_pretrained
    raise EnvironmentError(
OSError: Model name 'bert-base-chinese' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'bert-base-chinese' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
I changed the parameter fp in predict.yaml to the path of the downloaded file, set num_relations in embedding.yaml to 51 (the number of relations), and set the parameter model in config.yaml to lm.
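As a sketch, the three edits described above would look like this (key names are taken from the description; the exact file layout in your DeepKE checkout may differ):

```yaml
# predict.yaml -- point fp at the downloaded checkpoint file
fp: 'path/to/re_bert.pth'

# embedding.yaml -- 51 relation classes in this dataset
num_relations: 51

# config.yaml -- select the language-model (BERT-style) encoder
model: lm
```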
All of these have been changed; fp is set to fp: 'C:/知识图谱/DeepKE-main/example/re/standard/re_bert.pth'
I am using DeepKE(RE), BERT-wwm, Chinese (the cloud-drive extraction code for DeepKE(RE), RoBERTa-wwm-ext, Chinese is invalid).
Environment (please complete the following information):
- OS: [e.g. mac / windows] Windows
- Python Version [e.g. 3.6] 3.8
Additional context
The lm parameter needs to be changed to match the model you downloaded; for example, if you downloaded the BERT model, lm should be changed to chinese-bert-wwm.
Should I modify lm_file or model_name in the DeepKE-main\example\re\standard\conf\model\lm.yaml file?
Change lm_file to hfl/chinese-bert-wwm. If the error persists, download the model from Hugging Face and set lm_file to the local path of the model.
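A minimal sketch of the relevant line in conf/model/lm.yaml, following the advice above (the local directory path in the commented-out line is a hypothetical example, not a path from this issue):

```yaml
# conf/model/lm.yaml
lm_file: 'hfl/chinese-bert-wwm'        # Hugging Face Hub model identifier
# lm_file: 'C:/models/chinese-bert-wwm'  # fallback: a local directory containing config.json, vocab.txt, and the model weights
```

The fallback works because from_pretrained accepts a local directory as well as a Hub identifier, so a manually downloaded model bypasses the name-resolution step that failed here.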