kalai2033
@TavRotenberg It depends on the input image size. Other processes may be using the GPU memory as well. Please clear it and just use the preprocess flag...
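A minimal sketch of what I mean, assuming a PyTorch setup (the model and image below are only placeholders, not the repo's actual code):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model and input just to make the snippet runnable;
# substitute your actual network and image tensor.
model = nn.Conv2d(3, 16, 3).to(device).eval()
image = torch.randn(1, 3, 512, 512, device=device)

if device == "cuda":
    print(f"{torch.cuda.memory_allocated() / 1024**2:.1f} MiB allocated")
    # Release cached blocks that are no longer referenced
    # so other processes can use that memory.
    torch.cuda.empty_cache()

# Running inference without autograd bookkeeping keeps memory usage low.
with torch.no_grad():
    out = model(image)
print(out.shape)
```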
Hi @tphankr, I have the same issue. Have you resolved it? Or did you try any other visualization tool? If so, please let me know.
Hi @tumusudheer, did you find any option for running the test on a set of images using the pretrained model?
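Something like the loop below is what I had in mind (the folder path and the torchvision classifier are just stand-ins for the actual pretrained model and data):

```python
import glob
import torch
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# A torchvision classifier stands in for the actual pretrained model here.
model = models.resnet18(pretrained=True).to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

with torch.no_grad():
    for path in sorted(glob.glob("test_images/*.png")):  # hypothetical input folder
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        pred = model(img).argmax(dim=1).item()
        print(path, pred)
```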
Hi @sangwoomo and @bsahil29, I face the same issue... I have printed the seg_paths... Every time it breaks at a random point, but I have corresponding mask images in both...
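For debugging, the sanity check I ran looks roughly like this (the folder names are only illustrative of how I pair images with masks, not the repo's loader):

```python
import os

# Hypothetical folders; in my case the paths come from the dataset loader.
img_dir, mask_dir = "images", "masks"
img_paths = sorted(os.listdir(img_dir))

# Report any image whose mask file is missing, which is where the run breaks.
missing = [p for p in img_paths
           if not os.path.exists(os.path.join(mask_dir, os.path.splitext(p)[0] + ".png"))]
print(len(img_paths), "images;", len(missing), "without a matching mask")
print("first few missing:", missing[:10])
```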
@baronpalacios Did you figure out how to apply OCR?
Your work seems interesting... looking forward to your code release :)
It seems the model used in the link below is faster... I think the model used here is different. It would be great if the provided pretrained model craft_mlt_25k.pth...
There is still no change in the time taken for the forward pass. It still takes a long time even after downgrading to PyTorch 0.4.1.
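For reference, this is roughly how I measure the forward pass (the network and input size are placeholders; torch.cuda.synchronize() is there so asynchronous CUDA calls don't distort the timing):

```python
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Conv2d(3, 16, 3).to(device).eval()      # placeholder network
x = torch.randn(1, 3, 736, 1280, device=device)    # placeholder input size

with torch.no_grad():
    # A few warm-up passes so lazy CUDA initialization is not counted.
    for _ in range(3):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    print(f"forward pass: {(time.time() - start) * 1000:.1f} ms")
```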
> In CPU mode, it takes a long time, but I have no idea why the inference speed is slow in GPU mode.

I think there are so many factors...
I am using a custom dataset, but I follow the same structure as the Cityscapes dataset. I don't have any images in the test directory, but I ran the experiment three times,...
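For context, my data follows the usual Cityscapes layout, roughly as in the check below (the dataset root is a hypothetical path; my test split is simply empty):

```python
import os

root = "datasets/custom"  # hypothetical dataset root
for split in ("train", "val", "test"):
    img_dir = os.path.join(root, "leftImg8bit", split)
    ann_dir = os.path.join(root, "gtFine", split)
    # Count files under each split; os.walk yields nothing for an empty/missing dir.
    n_img = sum(len(files) for _, _, files in os.walk(img_dir))
    n_ann = sum(len(files) for _, _, files in os.walk(ann_dir))
    print(f"{split}: {n_img} images, {n_ann} annotation files")
```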