Did you change the layer name? If not, you should try it.
@sanghoon Thank you. I have corrected rpn_pre_nms_top_k from 200 to 2000 in the prototxt, but the result is still not good enough, as described in #27 . The attachment is...
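For reference, in the py-faster-rcnn code base that pva-faster-rcnn builds on, the same proposal-count knob is usually exposed through the Python config as well; a minimal sketch, assuming the stock lib/fast_rcnn/config.py keys and that the repo's lib directory is on PYTHONPATH (the prototxt-side parameter name in your net may differ):

```python
# Sketch only: stock py-faster-rcnn config keys; defaults in this fork
# (or in a prototxt-side proposal layer) may differ.
from fast_rcnn.config import cfg

cfg.TEST.RPN_PRE_NMS_TOP_N = 2000   # keep the top-scoring 2000 RPN boxes before NMS
cfg.TEST.RPN_POST_NMS_TOP_N = 300   # keep 300 boxes after NMS
```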
@songjmcn There is no Python layer in my new net, and I use the C++ interface of Caffe.
@sanghoon I fine-tuned pvanet/full/test.model using the train.prototxt in pvanet/example_finetune on KITTI, but the result is not good and is even worse than the original pvanet/full/test.model. Should I use imagenet/full/test.model as the pre-train...
@AITTSMD Use parse_log.py in caffe-fast-rcnn/tools/extra to parse the log file, then plot it.
@AITTSMD Thank you. I will give it a try.
@beihangzxm123 My script file follows the format of py-faster-rcnn/scripts/*.sh, and parse_log.py is from caffe-fast-rcnn/tools/extra. The attachments are the files you may need:
- [kitti_pva.txt](https://github.com/sanghoon/pva-faster-rcnn/files/641313/kitti_pva.txt) -- kitti_pva.sh
- [parse_log.txt](https://github.com/sanghoon/pva-faster-rcnn/files/641314/parse_log.txt) -- parse_log.py
- [plot_loss.txt](https://github.com/sanghoon/pva-faster-rcnn/files/641315/plot_loss.txt) -- plot_loss.py
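For reference, a minimal sketch of what a plot_loss.py-style script can look like, assuming parse_log.py has already produced a CSV such as kitti_pva.log.train with a NumIters column; the loss column names depend on the loss layers in your net, so adjust LOSS_COLUMN accordingly:

```python
# Sketch only: plot one loss column from the .train CSV that
# caffe-fast-rcnn/tools/extra/parse_log.py writes.
import sys
import pandas as pd
import matplotlib.pyplot as plt

LOSS_COLUMN = 'loss'  # assumption: rename to your net's loss column, e.g. 'loss_cls'

def main(train_csv):
    df = pd.read_csv(train_csv)                 # columns include NumIters, Seconds, ...
    plt.plot(df['NumIters'], df[LOSS_COLUMN])
    plt.xlabel('iteration')
    plt.ylabel(LOSS_COLUMN)
    plt.title('training loss')
    plt.show()

if __name__ == '__main__':
    main(sys.argv[1])   # e.g. python plot_loss.py kitti_pva.log.train
```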
I think you should use the model in data/pvanet as the pretrained model if you use the prototxt in example_finetune.
```
layer {
  name: "conv1/7x7_s2"
  type: "Convolution"
  bottom: "data"
  top: "conv1/7x7_s2"
  param { lr_mult: 1 decay_mult: 1 }
```
In fast-rcnn/models/VGG16 (and CaffeNet) train.prototxt, the parameters lr_mult and decay_mult are 0, so why...
@jond55 I think the input image doesn't need to be resized; the roi_pooling layer is where the magic happens. The input image can be any aspect ratio, but I think the minimum...
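For context, a minimal NumPy sketch (not the Caffe C++ roi_pooling layer itself) of why RoI pooling accepts regions of any size: each ROI is split into a fixed pooled_h x pooled_w grid and max-pooled per cell, so the output shape is constant regardless of the region's size:

```python
import numpy as np

def roi_max_pool(feature_map, roi, pooled_h=6, pooled_w=6):
    """feature_map: (H, W) array; roi: (x1, y1, x2, y2) in feature-map coordinates."""
    x1, y1, x2, y2 = roi
    region = feature_map[y1:y2 + 1, x1:x2 + 1]
    h, w = region.shape
    out = np.zeros((pooled_h, pooled_w), dtype=feature_map.dtype)
    for i in range(pooled_h):
        for j in range(pooled_w):
            # integer bin edges; tiny regions still get at least one pixel per bin
            r0 = (i * h) // pooled_h
            r1 = max(((i + 1) * h) // pooled_h, r0 + 1)
            c0 = (j * w) // pooled_w
            c1 = max(((j + 1) * w) // pooled_w, c0 + 1)
            out[i, j] = region[r0:r1, c0:c1].max()
    return out

# Regions of different sizes produce the same fixed-size output.
fmap = np.random.rand(40, 60).astype(np.float32)
print(roi_max_pool(fmap, (0, 0, 59, 39)).shape)   # (6, 6)
print(roi_max_pool(fmap, (10, 5, 20, 30)).shape)  # (6, 6)
```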