xiaowenhe

Results: 16 issues by xiaowenhe

When I change the Resize scale from (1600, 900) to (1200, 675) in `train_pipeline = [ dict(type='LoadImageFromFileMono3D'), dict( type='LoadAnnotations3D', with_bbox=True, with_label=True, with_attr_label=True, with_bbox_3d=True, with_label_3d=True, with_bbox_depth=True), #dict(type='Resize', img_scale=(1600, 900), keep_ratio=True), dict(type='Resize', img_scale=(1200, 675), keep_ratio=True), dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5),`...
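For readability, here is the same truncated pipeline snippet laid out as an mmdetection3d-style config (reconstructed from the text above; the transforms after RandomFlip3D are elided in the original):

```python
train_pipeline = [
    dict(type='LoadImageFromFileMono3D'),
    dict(
        type='LoadAnnotations3D',
        with_bbox=True,
        with_label=True,
        with_attr_label=True,
        with_bbox_3d=True,
        with_label_3d=True,
        with_bbox_depth=True),
    # dict(type='Resize', img_scale=(1600, 900), keep_ratio=True),
    dict(type='Resize', img_scale=(1200, 675), keep_ratio=True),
    dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5),
    # ... remaining transforms truncated in the original snippet
]
```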

Does this project only support SSD_mobilenet and ssd_inception_v2 for detection? How can I use faster-rcnn? Thank you!

Hi @sergiomsilva, if one car has two license plates, how can I train the WPOD-NET? For example, in the following picture: ![image](https://user-images.githubusercontent.com/28335784/79950038-0e270880-84a9-11ea-8928-8033caa8c535.png)

Hello @sergiomsilva, could you release the OCR training code now? I followed your WPOD-NET and got good results on our datasets, but that is only LP detection. And the OCR result...

Hello, thank you very much for your work. Last week I used your code to train detection and keypoint-localization models for another task, but during training I keep running into the problem that "the thread waits forever while reading images and training cannot continue". The attached image may not display; the details are as follows. First run: 2244: 2.664922, 2.490456 avg loss, 0.001000 rate, 0.701170 seconds, 71808 images, 62.321252 hours left. Second run: 2256: 2.414187, 2.562203 avg loss, 0.001000 rate, 0.712135 seconds, 72192 images,...

Hello, thank you for open-sourcing this. While reading your code I have a question about how anchors are matched to ground truth; could you help explain it? At line 231 of anchor.py, what is the purpose of `sorted_ious = sorted_ious[np.logical_not(anchor_matched_already)]`? It does not seem to have any relation to box_ious.
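For context, here is a minimal sketch of the kind of greedy anchor-to-GT matching in which such a masking line typically appears (hypothetical names and logic, not the repository's actual anchor.py code):

```python
import numpy as np

def greedy_match(box_ious):
    """Toy greedy matcher: each GT takes its best-IoU anchor that is still free."""
    num_gt, num_anchors = box_ious.shape
    anchor_matched_already = np.zeros(num_anchors, dtype=bool)
    assignments = {}
    for gt_idx in range(num_gt):
        order = np.argsort(-box_ious[gt_idx])                          # anchors sorted by IoU, best first
        order = order[np.logical_not(anchor_matched_already[order])]   # drop anchors already claimed
        if order.size == 0:
            continue
        assignments[gt_idx] = int(order[0])
        anchor_matched_already[order[0]] = True
    return assignments

print(greedy_match(np.array([[0.9, 0.8, 0.1],
                             [0.85, 0.7, 0.2]])))  # -> {0: 0, 1: 1}
```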

In dataset.py, at line 234 where the IoU is computed with `iou_scale = self.bbox_iou(bbox_xywh_scaled[i][np.newaxis, :], anchors_xywh)`, the anchors used are sizes in actual pixel coordinates, e.g. [10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326], whereas bbox_xywh_scaled has already been divided by the strides. Isn't that a contradiction? For example, suppose a GT box has width and height 608, the maximum training image size, i.e. the GT fills the whole image. Then 608/8 = 76, which means the width and height in bbox_xywh_scaled are at most 76, so what use are the anchors larger than that? Therefore the IoU computation here seems problematic. This is just my personal understanding; please excuse me if anything is wrong.
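To illustrate the unit mismatch described above, a small hypothetical sketch (not the repository's dataset.py code) comparing the width/height IoU of a stride-scaled GT box against pixel-space anchors versus stride-scaled anchors:

```python
import numpy as np

def wh_iou(wh1, wh2):
    """IoU of boxes compared only by width/height, as if they shared the same center."""
    inter = np.minimum(wh1, wh2).prod(axis=-1)
    return inter / (wh1.prod(axis=-1) + wh2.prod(axis=-1) - inter)

stride = 8
gt_wh = np.array([608.0, 608.0]) / stride                         # GT in feature-map units -> (76, 76)
anchors_px = np.array([[116., 90.], [156., 198.], [373., 326.]])  # anchors in pixel units

print(wh_iou(gt_wh, anchors_px))           # mixed units: largest anchor vs full-image GT -> ~0.05
print(wh_iou(gt_wh, anchors_px / stride))  # consistent units: the same anchor -> ~0.33
```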

Hi @authors, thanks for your work. Now I want to train on my own dataset, so first I probably need to format my dataset like ILSVRC2015. But I cannot download...

Hi, I have a question: what is the difference between the following? `#implemented by py_func #value = tf.identity(xw) #subtract the margin and scale it value = coco_func(xw_norm,y,alpha) * scale #implemented by tf`...
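For reference, a minimal sketch (with a hypothetical helper name, not the repository's coco_func or py_func version) of how a margin can be subtracted at the ground-truth class and then scaled using plain TensorFlow ops:

```python
import tensorflow as tf

def subtract_margin_tf(cos_theta, labels, alpha, scale, num_classes):
    """Subtract margin alpha only at the ground-truth class, then scale, with native TF ops.

    cos_theta: (batch, num_classes) normalized logits; labels: (batch,) integer class ids.
    """
    one_hot = tf.one_hot(labels, depth=num_classes, dtype=cos_theta.dtype)
    return scale * (cos_theta - alpha * one_hot)

# e.g. value = subtract_margin_tf(xw_norm, y, alpha, scale, num_classes)
```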