Hi! I find that setting the batch size to 16 does not occupy too much memory on each GPU, so why don't you use 4 Tesla V100s with a batch size of 32? Are there any differences...
Hi! You said that "In the encoder-decoder attention module, the target query can attend to all positions on the template and the search region features, thus learning robust representations for...
Hi, I use 8 32GB Tesla V100s and want to train your model, but the FPS is lower than yours. Do you know how many cores/threads you use per task/PID/...
Hi! I have one question. When producing result plots like "Comparisons on LaSOT test set" alongside other papers' models, we need their raw results. So how to get...
Thank you for your great work. But I wonder: could you share the code for visualizing the attention maps in your paper? I'm looking forward to hearing from you.
Hi, thanks for your work! Could you please provide the Hz value on GOT-10k when evaluating STARK?
Have you ever encountered this situation? Does it affect the training results?
Hi, thanks for your amazing work! Could you please provide me with the raw results on the LaSOT benchmark?
Hi! Could you please provide the raw results on LaSOT for the 280 sequences?
I'd like to ask: the model is saved with tensors on the GPU, so why do you pass map_location='cpu' here when loading it? After I removed this parameter, GPU memory overflowed when resuming training from a checkpoint.
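The behavior described in the question above can be illustrated with a minimal sketch (not the repo's actual training code; the filename and state dict are hypothetical). torch.load without map_location restores each tensor onto the device recorded in the checkpoint, which can spike GPU memory when resuming; mapping to CPU first avoids that, and the optimizer/model can move tensors back to the GPU afterwards:

```python
import torch

# Hypothetical checkpoint: simulating a model saved from a GPU process.
# CPU tensors are used here so the sketch runs on any machine.
state = {"model": {"weight": torch.randn(4, 4)}}
torch.save(state, "checkpoint.pth")

# map_location='cpu' materializes every tensor in host RAM. Without it,
# torch.load would place each tensor on the device stored in the
# checkpoint (e.g. cuda:0), allocating extra GPU memory on top of the
# already-initialized model and risking an out-of-memory error.
ckpt = torch.load("checkpoint.pth", map_location="cpu")

# Typical resume pattern: load into the model on CPU, then move to GPU.
# model.load_state_dict(ckpt["model"]); model.cuda()
print(ckpt["model"]["weight"].device)  # cpu
```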