Hap-Zhang
Hi all, I want to load an HCLr.fst with the code `hcl_fst = fst::StdFst::Read(hcl_fst_rxfilename)`, but I get the error "ERROR: GenericRegister::GetEntry: lookup failed in shared object: olabel_lookahead-fst.so". Would you like to...
Yes, I configured openfst-1.6.1 with the option --enable-lookahead-fsts and built and installed it recently. I want to try lookahead-graph decoding for online decoding, and unluckily I hit this error.
OK, I'll try the OpenFST forums first. Thank you.
@Malkovsky Yeah, I see. I did export the path, like: `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$KALDI_ROOT/tools/openfst/lib/fst`. However, the error is "lookup failed in shared object: olabel_lookahead-fst.so", not "olabel_lookahead-fst.so: cannot open shared object file: No such file or...
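For what it's worth, these are two distinct dynamic-loading failures, which a small sketch can tell apart. "cannot open shared object file" means dlopen() itself failed (the .so is not on LD_LIBRARY_PATH), while "lookup failed in shared object" means the .so loaded fine but the expected entry was not found inside it; for OpenFst that usually suggests a mismatch between the extension .so and the OpenFst library the binary was linked against. A minimal illustration using the already-loaded C runtime (not OpenFst itself):

```python
import ctypes

# Open a handle to the C runtime that is already loaded in this process;
# this corresponds to dlopen() succeeding (the "file found" case).
libc = ctypes.CDLL(None)

# Symbol lookup succeeds: "printf" exists in the loaded library.
print(hasattr(libc, "printf"))        # True

# Symbol lookup fails: the library loaded, but this name is not in it --
# the same class of failure as "lookup failed in shared object".
print(hasattr(libc, "no_such_sym_x"))  # False
```

So in this case the .so is being found and loaded; the missing piece is the registration entry inside it, not the search path.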
@feifeibear OK, thanks. I see the code handles the pooler layer separately. Is there a particular reason for that? I fine-tuned a huggingface pretrained model and ran the resulting model with turbo as the backend, and got the error below. After debugging I found that the model actually has no pooler layer, so why does the TurboTransformers code specifically add a pooler?
The pretrained model I'm using here is bert-base-chinese.
Yes, indeed. I'm using BertForTokenClassification, which doesn't use the pooler. Does that mean that to use it I need to modify the underlying turbotransformers code?
Roughly how many Chinese characters does it take for the speedup to show? I can run another test. onnxrt does give some speedup, though I don't know why the Torch time got longer in the onnxrt comparison...
I've dumped the timeline here; could you help take a look at which part looks most suspicious?
OK. Is the default number of OMP threads the number of CPUs on the machine?
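As far as I know, with most OpenMP runtimes the default team size is indeed the number of logical CPUs, and the `OMP_NUM_THREADS` environment variable overrides it. A minimal sketch of how the effective value is typically resolved (the helper name and fallback logic here are an illustration, not TurboTransformers' actual code):

```python
import os

def effective_omp_threads() -> int:
    """Mirror the usual OpenMP default: OMP_NUM_THREADS if set,
    otherwise the number of logical CPUs on the machine."""
    env = os.environ.get("OMP_NUM_THREADS")
    return int(env) if env else (os.cpu_count() or 1)

print(effective_omp_threads())
```

So on an otherwise unconfigured machine this would print the logical CPU count; exporting e.g. `OMP_NUM_THREADS=4` before launching the process caps it at 4.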