Lukangkang123
The DAMO Academy document-segmentation model has a bug: by default it treats "." as a delimiter and splits on it regardless of the surrounding context. Whether other characters are also treated as delimiters is unknown. Suggested fix: replace every "." with another character before segmentation and restore it afterwards, or add an alternative segmentation model. Below are my test results on Alibaba's ModelScope site. [Official test link](https://www.modelscope.cn/models/damo/nlp_bert_document-segmentation_chinese-base/summary)
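A minimal sketch of the suggested workaround. The placeholder character and the `stub_segment` function are my own assumptions: the stub merely stands in for the DAMO pipeline (which would normally be called through ModelScope), and any placeholder character that cannot occur in your input would work.

```python
# Shield "." from the segmenter by swapping it for a placeholder
# before segmentation, then restoring it in each returned piece.
# PLACEHOLDER is a hypothetical choice; use any character absent
# from your input text.
PLACEHOLDER = "\u2024"  # ONE DOT LEADER, visually similar to "."

def protect_dots(text: str) -> str:
    return text.replace(".", PLACEHOLDER)

def restore_dots(text: str) -> str:
    return text.replace(PLACEHOLDER, ".")

def segment_safely(text: str, segment) -> list:
    """Run `segment` (any sentence-segmentation callable) on text whose
    "." characters have been masked, then restore them afterwards."""
    pieces = segment(protect_dots(text))
    return [restore_dots(p) for p in pieces]

# Stub segmenter standing in for the DAMO model: splits on "。" only,
# but a buggy segmenter splitting on "." would no longer see any ".".
def stub_segment(text: str) -> list:
    return [s for s in text.split("。") if s]
```

With this wrapper, a version string such as `v1.2.3` survives segmentation intact instead of being split at each dot.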
Modified from evaluate.py to accept requests from multiple threads. A request pool is added: once it accumulates a certain number of requests (MAX_BATCH_SIZE) or has waited a certain amount of time (MAX_WAIT_TIME), it runs batched inference, which greatly speeds things up. The two hyperparameters MAX_BATCH_SIZE and MAX_WAIT_TIME can be set as needed.
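A minimal sketch of such a request pool, assuming the batching logic described above. `BatchPool` and `infer_fn` are hypothetical names, not from the original evaluate.py; `infer_fn` stands in for whatever batched model call the script actually uses.

```python
import queue
import threading
import time

MAX_BATCH_SIZE = 8    # flush as soon as this many requests are queued
MAX_WAIT_TIME = 0.05  # ...or after this many seconds, whichever is first

class BatchPool:
    """Collects requests from many threads and runs them through
    `infer_fn` (a callable taking a list of inputs) in batches."""

    def __init__(self, infer_fn, max_batch=MAX_BATCH_SIZE, max_wait=MAX_WAIT_TIME):
        self.infer_fn = infer_fn
        self.max_batch = max_batch
        self.max_wait = max_wait
        self.q = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, item):
        """Called from request threads; blocks until the batched result is ready."""
        slot = {"input": item, "event": threading.Event()}
        self.q.put(slot)
        slot["event"].wait()
        return slot["output"]

    def _worker(self):
        while True:
            batch = [self.q.get()]  # block until the first request arrives
            deadline = time.monotonic() + self.max_wait
            # Keep collecting until the batch is full or the deadline passes.
            while len(batch) < self.max_batch:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.q.get(timeout=remaining))
                except queue.Empty:
                    break
            outputs = self.infer_fn([s["input"] for s in batch])
            for slot, out in zip(batch, outputs):
                slot["output"] = out
                slot["event"].set()
```

The worker blocks on the first request, then drains the queue until either bound is hit, so a lone request waits at most MAX_WAIT_TIME while bursts are served in full batches.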
Does the model support Chinese input?
The symbols and characters are confusing and misleading. Also, the expression appears to sum first and then square, rather than square first and then sum. Please check carefully and complete the proof.
I noticed that BasisNet is described in the methods section, but why does it not appear in the experiments or in the code?
A new paper on heterophily: "Addressing Heterogeneity and Heterophily in Graphs: A Heterogeneous Heterophilic Spectral Graph Neural Network". Paper: [https://arxiv.org/abs/2410.13373](https://arxiv.org/abs/2410.13373). Code: [code](https://github.com/Lukangkang123/H2SGNN/)