sheirving
Before you open an issue, please make sure you have tried the following steps: 1. Make sure your **environment** matches the requirements at (https://mace.readthedocs.io/en/latest/installation/env_requirement.html). 2. Have you read the...
In the project, edit distances are converted to a probability distribution using a softmax of (1 / dist + 0.1), [link](https://github.com/VinAIResearch/dict-guided/blob/f2b6f2ddb9d3fe38e562e2c2199658a7a6b15c1b/adet/modeling/text_head.py#L311). In the paper, this is a softmax of (dist / T),...
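To make the discrepancy concrete, here is a minimal numeric sketch of the two formulations as quoted. The parenthesization of the code version, the temperature T = 1.0, and the negated sign in the paper version (so that smaller distances score higher) are all assumptions for illustration, not taken from the repo or the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy edit distances (all > 0); values are placeholders.
dists = np.array([1.0, 2.0, 4.0])

# Code formulation as quoted: softmax of (1 / dist + 0.1).
p_code = softmax(1.0 / dists + 0.1)

# Paper-style formulation: softmax of (dist / T), negated here so
# smaller distances get higher probability; T = 1.0 is hypothetical.
p_paper = softmax(-dists / 1.0)

# Both assign the highest probability to the smallest distance,
# but the two mappings are not equivalent in general.
```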
With the model, data, and other parameters held identical, a comparison of Warp-CTC and TF-CTC shows no noticeable speed difference between the two:
I trained my own handwritten Chinese dataset using the SVTR configuration (character count per sample ranges from 2 to 15; input images adapted to 64*512, other settings unchanged). The training loss plateaus around 10 and will not drop further, and the exported model's accuracy is very low, mainly due to missing characters. I then replaced every mixer = Local in SVTR with mixer = Global, and inference became largely normal, with no more severe character dropping. Does this phenomenon indicate that the Local mixer's receptive field causes too much information to be lost? Should the Local kernel setting (7, 11) also be adjusted according to the length of the training samples?
Hello! After features are extracted by the shared convolutional layers, a BiLSTM serializes them, and the horizontal and vertical features are each reversed again to obtain features in different directions. My question: since a BiLSTM is inherently bidirectional (the backward pass is the reverse of the forward pass), isn't this extra reverse somewhat redundant? Thanks!
bazel build --config android_arm -c opt --verbose_failures //neuron/java:tensorflow-lite-neuron gets the error: "neuron/lib/libneuron_performance.a(neuron_performance.o): incompatible target". Could you support the lib for armeabi-v7a? Thank you very much!
## error log ## context ncnn version 20240102, macOS ## how to reproduce 1. The original Float32 model runs inference correctly (the model was sent to the official email). 2. Followed the int8 quantization guide: https://github.com/Tencent/ncnn/blob/master/docs/how-to-use-and-FAQ/quantized-int8-inference.md 3....
Hi, first of all, thank you very much for sharing your work. However, I read the original paper, which mentions some pre-processing methods, such as skew correction and finding the baseline...
Thank you very much for sharing the code, but when I train a model using the CTC MWER loss, it is sometimes negative. What could be the reason?
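One common reason an MWER-style loss can go negative: many formulations subtract a baseline (e.g. the mean word error rate over the n-best list) from each hypothesis's risk for variance reduction, so the expected value is negative whenever better-than-average hypotheses carry most of the probability mass. Whether this repo's CTC MWER loss uses such a baseline is an assumption; the sketch below only illustrates the mechanism with placeholder numbers.

```python
import numpy as np

# Renormalized posteriors over an n-best list (placeholders).
probs = np.array([0.7, 0.2, 0.1])
# Per-hypothesis word error rates (placeholders).
wers = np.array([0.0, 0.5, 1.0])

# Expected risk with a mean baseline: sum_i p_i * (W_i - W_bar).
baseline = wers.mean()
loss = np.sum(probs * (wers - baseline))
# When the best hypothesis dominates the posterior, the result is
# negative: 0.7*(-0.5) + 0.2*0.0 + 0.1*0.5 = -0.30.
```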
https://github.com/lsabrinax/VideoTextSCM/blob/d87ad1bbb6ada7573a02a82045ee1b9ead5861ad/train_embedding.py#L47 Thank you for sharing the code! But what does 'W_scale' mean in spatical_triplet_loss, and in loss_pos = pos_dist * (W_scale + W_pos_dis) - alpha1, why do we need to add 'W_scale'?
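Restating the quoted line as a standalone sketch may help the discussion. The values below are placeholders, not from the repo, and the comment offers only one possible reading of W_scale (a constant floor on the weight), which is an assumption, not a confirmed explanation.

```python
# Hypothetical restatement of the quoted line from train_embedding.py;
# all values are placeholders chosen for illustration.
pos_dist = 0.8    # distance to the positive sample
W_pos_dis = 0.3   # distance-dependent weight
W_scale = 1.0     # constant term being asked about
alpha1 = 0.2      # margin

loss_pos = pos_dist * (W_scale + W_pos_dis) - alpha1

# One possible reading (an assumption): without W_scale the weight
# (W_scale + W_pos_dis) would shrink toward zero as W_pos_dis -> 0,
# suppressing the gradient from pos_dist; the constant keeps it nonzero.
```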