Huixian Cheng

Results: 29 comments by Huixian Cheng

Maybe it's caused by the depthwise convolution.

No. In the end, I gave up.

I tried something similar to your method; it did not work well on my task, and training was very slow.

For input images whose height and width differ, does img_size need to be set to two different values? For example, with a 256x512 input, should the patch become 16x32, or can it stay the same?
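For context, in a ViT-style patch embedding the patch does not have to change shape for a non-square input; it only has to divide both dimensions. A minimal sketch (pure Python, hypothetical helper name) of the two options from the question:

```python
def patch_grid(img_hw, patch_hw):
    """Return (rows, cols, num_tokens) for a ViT-style patch embedding,
    assuming each image dimension must be divisible by the patch size."""
    (h, w), (ph, pw) = img_hw, patch_hw
    assert h % ph == 0 and w % pw == 0, "patch must divide the image"
    rows, cols = h // ph, w // pw
    return rows, cols, rows * cols

# 256x512 input with the unchanged square 16x16 patch: valid, 16x32 grid.
print(patch_grid((256, 512), (16, 16)))   # -> (16, 32, 512)

# A 16x32 patch is also valid, but yields fewer, wider tokens.
print(patch_grid((256, 512), (16, 32)))   # -> (16, 16, 256)
```

Either choice works dimensionally; they just produce different token counts, so positional embeddings sized for one grid will not match the other.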

This may be caused by running out of memory. A simple way to solve it is to slice the point cloud here: https://github.com/tsunghan-mama/RandLA-Net-pytorch/blob/913837e846176e4247a7e21783bf8f2f38576257/dataset/semkitti_testset.py#L26 For example, frame 4071 in seq 08. Just run inference twice. Rough...
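The "infer twice" idea above can be sketched as splitting the point cloud into chunks, running the model on each, and stitching the per-point predictions back together. This is only an illustration: `infer_fn` is a stand-in for the repo's actual inference call, not its real API.

```python
import numpy as np

def infer_in_chunks(points, infer_fn, n_chunks=2):
    """Run inference on a large point cloud in chunks to avoid OOM,
    then reassemble the per-point predictions in original order.
    `infer_fn` is a hypothetical stand-in for the model call."""
    preds = np.empty(len(points), dtype=np.int64)
    for idx in np.array_split(np.arange(len(points)), n_chunks):
        preds[idx] = infer_fn(points[idx])
    return preds

# Toy "model": label a point by the sign of its x coordinate.
pts = np.random.randn(100001, 3)
labels = infer_in_chunks(pts, lambda p: (p[:, 0] > 0).astype(np.int64))
```

Note this naive split ignores spatial context at chunk boundaries, which is why the original comment calls it rough.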

I haven't used the original code, so I can't give advice. Also, you should pay attention to the error log given by CodaLab. Maybe you can...

Just this repo, with inference in "all" mode. I did not submit to the test server; I think if there is no problem verifying the validation set with this [api](https://github.com/PRBonn/semantic-kitti-api), the test...

Hi, I have not run into this problem. Maybe you should check the number of classes and the class weights. Here are the weights I calculated and used. > class_weights = torch.tensor([[17.1775,...
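The quoted tensor is truncated, and the repo's exact weighting scheme is not shown here; a common inverse-frequency sketch (numpy, illustrative only, with made-up per-class counts) of how such weights are typically derived:

```python
import numpy as np

def class_weights_from_counts(counts):
    """Inverse-frequency class weights: rarer classes get larger
    weights. One common variant; the repo may use a different scheme."""
    counts = np.asarray(counts, dtype=np.float64)
    freq = counts / counts.sum()
    w = 1.0 / (freq + 1e-6)           # guard against empty classes
    return w / w.sum() * len(counts)  # normalize so the mean weight is 1

counts = [500000, 120000, 4000]       # hypothetical per-class point counts
print(class_weights_from_counts(counts))
```

The resulting vector can then be passed to a weighted cross-entropy loss, which is why a class-count mismatch between the weights and the model output raises errors.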

No. I don't think it will have an effect.