JeyFun
Sorry to bother you, I still have this error when I use a smaller size for training. I also tested the [3405283](https://github.com/roytseng-tw/Detectron.pytorch/commit/3405283698c8abb29c4f585689588229598d58a0) commit and just modified the (scale, maxsize) to (480,...
Yeah, this error happened at several iterations (for me, it happened randomly)
I have re-run this command `python tools/train_net_step.py --dataset coco2017 --cfg configs/e2e_mask_rcnn_R-50-FPN_1x.yaml --set TRAIN.SCALES "(480,)" TRAIN.MAX_SIZE 540`, and after 2981 steps, a segmentation fault occurred.  **And the loss of (480, 540)...
Thank you for your prompt reply, I will try it later. But it's strange that I have trained successfully with 800x1333. Is there a difference between large and small sizes?...
I'm not sure about this because this issue occurred randomly, and I observed that `loss_rpn_bbox_fpn6=0` all the time, so I changed `FPN.RPN_MAX_LEVEL=5`. Now I can run the whole training process...
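In case it helps, this is the kind of override I mean. It is just a sketch: it mirrors the command above and assumes `FPN.RPN_MAX_LEVEL` can be overridden via `--set` like the other config keys; whether level 5 is appropriate for your model is something you should verify yourself.

```bash
# Same training command as above, but with the RPN pyramid capped at level 5
# (assumption: FPN.RPN_MAX_LEVEL is a valid key for the --set override).
python tools/train_net_step.py \
    --dataset coco2017 \
    --cfg configs/e2e_mask_rcnn_R-50-FPN_1x.yaml \
    --set TRAIN.SCALES "(480,)" TRAIN.MAX_SIZE 540 FPN.RPN_MAX_LEVEL 5
```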
Hi, @ignacio-rocco, I have tested these two models and can't get the right result. I just tested the 35861858 model again, and also got the wrong semantic-mask result, but...
Hi, @yjl9122, I am also looking for training code as a reference. Can you share your training code?
Hi, @yuxng, as you said, "`If it is disabled, the code would use the ground truth camera poses to compute the data association`", do you mean that kinect-fusion...
Hi, @yuxng, is main() in your kinect_fusion.cpp used in your DA-RNN system? I see that lib/fcn/test.py doesn't need this main() function and directly uses the feed_data API in test.py as...
Yes, this issue can be closed.