Jimmy Liu

7 comments of Jimmy Liu

Hello, may I ask how you eventually solved the mixed Chinese-English problem?

> For classifying mixed Chinese-English text (mostly Chinese, with a few English keywords mixed in, e.g. 我喜欢用TensorFlow框架进行机器学习任务。 "I like using the TensorFlow framework for machine learning tasks."), I have the following questions about encoding the text:
>
> 1. For this task, is the multilingual model or the Chinese model recommended?
> 2. Is tokenization of mixed Chinese-English text char-level or word-level, and what exactly is the mechanism? My own test produced:
>    [['[CLS]', 'hell', '##o', 'world', '!', '[SEP]'],
>    ['[CLS]', '我', '在', '吃', '饭', '[SEP]']]
> 3. Some English words are important keywords; they currently map to UNK and must not be dropped. Does this project support adding custom vocabulary?
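For context, here is a minimal sketch of how BERT's tokenizer treats mixed Chinese-English text and how extra vocabulary can be registered. It assumes the HuggingFace `transformers` library and the `bert-base-chinese` checkpoint; the project discussed in this thread may use a different implementation.

```python
# Minimal sketch, assuming HuggingFace `transformers`; the project discussed
# in this thread may expose a different API.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

# CJK characters are split at char level; English words go through WordPiece,
# so an out-of-vocabulary word becomes subwords or [UNK].
print(tokenizer.tokenize("我喜欢用TensorFlow框架进行机器学习任务。"))

# Register domain keywords as whole tokens so they no longer map to [UNK].
# Their embeddings are newly initialized and must be learned by fine-tuning.
tokenizer.add_tokens(["TensorFlow"])
# model.resize_token_embeddings(len(tokenizer))  # also resize the model side
```

The char-level output for Chinese in the question (`['我', '在', '吃', '饭']`) comes from BERT's BasicTokenizer, which surrounds every CJK character with whitespace before WordPiece runs, while English like `hello` is split into WordPiece subwords such as `hell` and `##o`.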

Hi, below is my opinion; it may not be entirely correct.

```python
attention = torch.mul(attention, adj.sum(1).repeat(M, 1).t())
```

I think after the above softmax operation, the attention weights have been scaled...
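To make the effect of that line concrete, here is a self-contained toy example; the tensor shapes and the meaning of `M` (number of nodes) are my assumptions, not taken from the original repository.

```python
# Toy illustration of rescaling softmax-normalized attention by node degree.
# Shapes and the meaning of `M` are assumptions, not from the original repo.
import torch

M = 3                                        # number of nodes
adj = torch.tensor([[0., 1., 1.],
                    [1., 0., 0.],
                    [1., 0., 0.]])
attention = torch.softmax(torch.rand(M, M), dim=1)  # each row sums to 1

# adj.sum(1) is each node's degree; repeat(M, 1).t() broadcasts it so that
# row i of `attention` is multiplied by node i's degree.
attention = torch.mul(attention, adj.sum(1).repeat(M, 1).t())
print(attention.sum(1))                      # each row now sums to its degree
```

After the softmax each row sums to 1; after the multiplication each row sums to the node's degree, which matches the interpretation given in a later comment below.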

It may correspond to the part of equation (7) that sits inside the sigma (summation). It's been a long time since I read the code above. 囧rz Sorry, I am busy with work right now. If...

> Okay, no worries. I might write an explanation of how equations (5) and (6) are related to the code, so it'll be easier for others too. Thanks again. Good...

I think these two lines mean the following:
1. The attention has already been normalized by softmax, so it is multiplied back by the node's original total number of connections (its degree), scaling the weights back to their original magnitude.
2. The attention computed in the previous step is then fused into the original adjacency matrix.
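Read this way, step 2 might look like the short sketch below. Whether the repository fuses by elementwise masking (as shown) or by addition is my guess, not something confirmed by the code above.

```python
# My assumption for the "fusion" in step 2: keep the degree-rescaled attention
# only on edges that exist in the original adjacency matrix.
import torch

M = 3
adj = torch.tensor([[0., 1., 1.],
                    [1., 0., 0.],
                    [1., 0., 0.]])
attention = torch.softmax(torch.rand(M, M), dim=1)
attention = torch.mul(attention, adj.sum(1).repeat(M, 1).t())  # step 1

fused = torch.mul(adj, attention)  # step 2 (an additive variant: adj + attention)
```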

Yes, it's so weird. How did your attempt go? @abhinab303

@1eclipse Hello. Did you run the MR dataset directly on a single GPU, or did you rewrite it for distributed training? My GPU is a GeForce RTX 2080 Ti, and it immediately runs out of memory.