Xia Li

13 comments by Xia Li

And the kernel size here should be 7 instead of 3.

By the way, this also happens in two of your other repos: https://github.com/AliaksandrSiarohin/first-order-model/blob/ca49071ce60051200f58f03a0e4e65e675a54e27/modules/model.py#L105 https://github.com/AliaksandrSiarohin/motion-cosegmentation/blob/571e26f04b8c40c5454a158b4b570e4ba034c856/modules/model.py#L96

Hi, we no longer provide the COCO-format JSONs. Instead, we provide scripts to convert the BDD100K format to the COCO format. You may refer to this part: https://github.com/SysCV/qdtrack/blob/master/docs/GET_STARTED.md#convert-annotations
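As a quick sanity check after running the conversion script, you can verify that the resulting file has the expected COCO-style sections. A minimal sketch; the file name `bdd100k_box_track_coco.json` is a placeholder, use whatever path the conversion script wrote:

```python
import json

# Placeholder path; point this at the output of the conversion script.
with open("bdd100k_box_track_coco.json") as f:
    coco = json.load(f)

# A COCO-style annotation file is expected to contain these top-level sections.
for key in ("images", "annotations", "categories"):
    assert key in coco, f"missing '{key}' section"

print(f"{len(coco['images'])} images, "
      f"{len(coco['annotations'])} annotations, "
      f"{len(coco['categories'])} categories")
```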

Hi, we have updated our code. Please clone the latest version. Hope this helps~

Currently, you can follow this process:

1. Convert the predictions into the BDD100K format: https://github.com/SysCV/qdtrack/blob/master/docs/GET_STARTED.md#conversion-to-the-scalabelbdd100k-format
2. Visualize the predictions with the BDD100K tools: https://doc.bdd100k.com/usage.html#understanding-the-data
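If you only need a quick look before setting up the official tools, a few lines of matplotlib can overlay the boxes from a Scalabel/BDD100K-format frame. A minimal sketch, assuming the usual `name`/`labels`/`box2d` fields and placeholder paths; see the format documentation linked above for the authoritative schema:

```python
import json
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image

# Placeholder paths; point these at one converted prediction file and its image folder.
frames = json.load(open("predictions_bdd100k.json"))
frame = frames[0]

img = Image.open(f"images/{frame['name']}")  # 'name' is the image file name in Scalabel format
fig, ax = plt.subplots()
ax.imshow(img)

for label in frame.get("labels", []):
    box = label.get("box2d")
    if box is None:
        continue
    # box2d stores corner coordinates x1, y1, x2, y2
    rect = patches.Rectangle((box["x1"], box["y1"]),
                             box["x2"] - box["x1"],
                             box["y2"] - box["y1"],
                             fill=False, linewidth=1)
    ax.add_patch(rect)
    ax.text(box["x1"], box["y1"], label.get("category", ""), fontsize=8)

plt.savefig("vis_frame0.png", dpi=150)
```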

@zhanghang1989 Can you also add the pretrained model on COCO? I see that it has been published in GluonCV.

> I add two lines of print, but when I run the code, I can just see "start", I can't see the loss.
>
> `print("start")`
> `loss = sess.train_batch(image, label)`...

With PyTorch's new API (DDP included), you can also write it inside the EMAU class using the `dist.reduce` operation.
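For reference, a minimal sketch of what such an in-module synchronization could look like. Names and shapes here are illustrative, not the repository's actual code: the EMA bases are assumed to live in a buffer called `mu`, and `dist.all_reduce` is used for brevity (a `dist.reduce` to rank 0 followed by a broadcast would work equivalently):

```python
import torch
import torch.distributed as dist
import torch.nn as nn

class EMAU(nn.Module):
    """Illustrative skeleton only; the real module contains more parts."""
    def __init__(self, channels, num_bases, momentum=0.9):
        super().__init__()
        self.momentum = momentum
        # EMA bases, kept as a buffer so they are saved with the model but not trained by SGD.
        self.register_buffer("mu", torch.randn(1, channels, num_bases))

    def _update_bases(self, mu_batch):
        # Average the batch statistics over all DDP processes before the
        # moving-average update, so every replica keeps identical bases.
        if dist.is_available() and dist.is_initialized():
            dist.all_reduce(mu_batch, op=dist.ReduceOp.SUM)
            mu_batch = mu_batch / dist.get_world_size()
        self.mu.mul_(self.momentum).add_(mu_batch.mean(dim=0, keepdim=True),
                                         alpha=1 - self.momentum)
```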

Sorry for the late response. This class is just used to compute the SSIM value during inference.
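For anyone who just needs the number, the same metric can also be computed with scikit-image. This is only an illustration of what the class produces, not the repository's implementation:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Two grayscale images in [0, 1]; replace the random arrays with a prediction and its ground truth.
pred = np.random.rand(256, 256)
target = np.random.rand(256, 256)

score = ssim(pred, target, data_range=1.0)
print(f"SSIM: {score:.4f}")
```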

> Without visualization results, it is difficult for us to understand the paper. If you are free, you can do it!

It is not difficult. You just have to save the `Z`...
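As a rough illustration of that suggestion, the map `Z` can be dumped to heatmap images once it is returned from the forward pass. A minimal sketch, assuming `Z` has shape `(H*W, K)` for a feature map of size `H x W` with `K` bases; the exact shapes and how `Z` is exposed depend on the code, so treat this as a template:

```python
import numpy as np
import matplotlib.pyplot as plt

def save_z_heatmaps(z, h, w, prefix="z_base"):
    """Save one heatmap per base from a responsibility map z of assumed shape (H*W, K)."""
    # Accept either a torch tensor or a numpy array.
    z = z.detach().cpu().numpy() if hasattr(z, "detach") else np.asarray(z)
    for i in range(z.shape[1]):
        heat = z[:, i].reshape(h, w)
        plt.imshow(heat, cmap="viridis")
        plt.colorbar()
        plt.axis("off")
        plt.savefig(f"{prefix}_{i}.png", bbox_inches="tight", dpi=150)
        plt.close()
```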