abigial

6 comments by abigial

Hello! I'd like to follow up on your answer with a few questions, and would appreciate your guidance.

1. When specifying the bpe tokenizer, the code seems to report an error; bpe is not among the available choices: `preprocess.py: error: argument --tokenizer: invalid choice: 'bpe' (choose from 'bert', 'char', 'space', 'xlmroberta')`. Do I need to specify a separate tokenizer as the wiki describes, i.e. "specify the sentencepiece model path via --spm_model_path, then import the sentencepiece module, load the sentencepiece model, and segment the sentences"? If so, how would I import a tokenizer from Hugging Face Transformers instead?

2. The --merges_path argument does not seem to exist in either preprocess or pretrain. What should I do?
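For context, the error above is standard argparse behavior: when an argument is declared with a fixed `choices` list, any other value is rejected before the script runs. A minimal sketch reproducing this (the choice list is copied from the error message; the parser setup is an illustration, not the project's actual code):

```python
import argparse

# Reproduce the restriction behind the error message:
# --tokenizer only accepts values from a fixed choices list,
# so 'bpe' is rejected by argparse itself.
parser = argparse.ArgumentParser(prog="preprocess.py")
parser.add_argument(
    "--tokenizer",
    choices=["bert", "char", "space", "xlmroberta"],
    default="bert",
)

# A valid choice parses fine.
args = parser.parse_args(["--tokenizer", "bert"])

# An invalid choice makes argparse print
# "invalid choice: 'bpe' ..." and exit.
rejected = False
try:
    parser.parse_args(["--tokenizer", "bpe"])
except SystemExit:
    rejected = True
```

So any tokenizer not in that list (such as one loaded from a sentencepiece model or Hugging Face Transformers) has to be wired in through whatever separate mechanism the project provides, such as the `--spm_model_path` option quoted from the wiki.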

I ran into the same problem: after a word is bolded, the browser's automatic page translation fails to recognize the word and cannot translate it.

It works, that solved it. Thanks!

I encountered the same problem. Has it been resolved?

I found a solution using the 'logprobs' parameter in SamplingParams, which returns the log-probabilities of the top-n tokens. We can then select the values corresponding to the 'A', 'B', 'C', 'D' tokens and apply softmax over them.
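The selection-plus-softmax step can be sketched with the standard library alone. Note this simplifies the real vLLM return shape (which maps token ids to Logprob objects rather than plain strings to floats); the example logprob values are made up for illustration:

```python
import math

def choice_probs(top_logprobs, choices=("A", "B", "C", "D")):
    """Softmax over the log-probabilities of the answer-choice tokens only.

    top_logprobs: mapping of token string -> log-probability, a simplified
    stand-in for the top-n entries you get with SamplingParams(logprobs=n).
    """
    logits = [top_logprobs[c] for c in choices]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return {c: e / total for c, e in zip(choices, exps)}

# Hypothetical top-5 logprobs for the first generated token.
example = {"A": -0.4, "B": -1.6, "C": -2.3, "D": -3.1, "The": -0.2}
probs = choice_probs(example)
best = max(probs, key=probs.get)
```

Restricting the softmax to the four choice tokens renormalizes their probabilities so they sum to 1, which is why this works even when non-answer tokens (like "The" above) have higher raw log-probabilities.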

@Juanhui28 The error is due to a model limitation. Try setting logprobs to a value less than 20.