Wu Kun

Results 13 comments of Wu Kun

The data framework for the text2sql task has been open-sourced~

ERNIE's maximum text input length must be between 1 and 512. I think you need to drop some of the lower-confidence value candidates so that the ERNIE input stays under 512 before prediction can proceed.
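As a rough sketch of this truncation idea (the function name and the `(value, confidence)` candidate representation below are hypothetical, not the project's actual data structures), one could drop the lowest-confidence candidates until the combined input fits the length limit:

```python
def trim_candidates(question, candidates, max_len=512):
    """Drop the lowest-confidence value candidates until the total
    input length (question + candidate texts) fits within the model's
    512-character limit.

    `candidates` is a list of (value_text, confidence) pairs -- a
    hypothetical representation for illustration only.
    """
    # Keep the most confident candidates first.
    kept = sorted(candidates, key=lambda c: c[1], reverse=True)
    # Pop the current lowest-confidence candidate until the input fits.
    while kept and len(question) + sum(len(v) for v, _ in kept) > max_len:
        kept.pop()
    return kept
```

In practice the length check would count tokens produced by ERNIE's tokenizer (including special tokens) rather than raw characters, but the pruning logic is the same.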

First question: I looked into it; the problem you found does exist, and what you said is correct. Second question: as I understand it, the 2-gram-list of "abc" is ['a', 'b'], and the concatenation joins ['a', 'b'] into "ab". Third question: I will investigate and then discuss it with you.

> Hello, I'd like to ask what metrics you reached when training the NER model. The F1 score I get on the test set is only 0.22, and I'm not sure whether I made a mistake somewhere.

I think the most likely cause is an error in the data preprocessing results.

> In Question Classification process:
>
> 1. where is the train data for the model "Question_classification/BERT_LSTM_word"?
> 2. how to use the code in "PreScreen" directory?

I updated the...

Could you point me to which file it is?

Hello, here is our related paper (in Chinese): http://jcip.cipsc.org.cn/CN/abstract/abstract3196.shtml

I have not run the code, but I want to know how many usable examples (where the subject mention can be found in the question) remain in the train (75910) and test (21678) sets after preprocessing. Would you mind...

Well, it is a dependency parsing API. I tried to use dependency parsing to make a difference, but failed. Actually, you can just ignore the bert-bilstm-word model and DO NOT SET...

> maybe you can simplify main() function in "Question_classification/BERT_LSTM_word/main.py", it contains too much unused code for a simple classification model.

Yes, you are right. I updated the details of the readme...