Does CUDA run out of memory because of SimOTA?
I am getting a CUDA out of memory error, and it always happens after several epochs of training. Is it possible that SimOTA is the reason? To my understanding, after several epochs of training the model gets better at detecting, so dynamic_k gets larger too, which means more positive samples get loaded into memory and hence the out of memory error. Am I right?
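For reference, here is a minimal sketch of how dynamic_k is computed, based on dynamic_k_matching in the public YOLOX yolo_head.py (the helper name and dummy data here are mine). Since dynamic_k is a sum of at most 10 IoUs, it is capped at roughly 10 positives per ground truth, so its growth as the model improves is bounded:

```python
import torch

def estimate_dynamic_ks(pair_wise_ious: torch.Tensor) -> torch.Tensor:
    """For each ground-truth box, sum the IoUs of its top-10 candidate
    predictions; the result (clamped to at least 1) is the number of
    positive samples SimOTA assigns to that ground truth."""
    # pair_wise_ious: [num_gt, num_candidates]
    n_candidate_k = min(10, pair_wise_ious.size(1))
    topk_ious, _ = torch.topk(pair_wise_ious, n_candidate_k, dim=1)
    return torch.clamp(topk_ious.sum(1).int(), min=1)

# Dummy data: 3 ground truths x 100 candidate anchors.
ious = torch.rand(3, 100)
print(estimate_dynamic_ks(ious))  # each value is at most n_candidate_k = 10
```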
@leetesua, are you using a bigger dataset than COCO?
In the YOLOX training logic, the image size is changed every 10 iterations, which causes GPU memory usage to rise and fall.
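To illustrate, here is a minimal sketch of that behavior (not the exact trainer code; the base size of 640 and multiscale_range of 5 are the YOLOX defaults). Every 10 iterations the input resolution is re-sampled in steps of 32 px around the base size, and the larger draws produce the memory peaks:

```python
import random

def pick_train_size(base_size: int = 640, multiscale_range: int = 5) -> int:
    """Re-sample the training resolution in steps of 32 px around the
    base input size, as YOLOX does every 10 iterations."""
    base = base_size // 32
    size = random.randint(base - multiscale_range, base + multiscale_range)
    return 32 * size

# With the assumed defaults the size swings between 480 and 800 px.
print(sorted({pick_train_size() for _ in range(1000)}))
```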
@leetesua, are you using a bigger dataset than COCO?

My dataset is much smaller than COCO.
In the YOLOX training logic, the image size is changed every 10 iterations, which causes GPU memory usage to rise and fall.

Are you referring to dynamic_scale in the train dataset config file?
In the YOLOX training logic, the image size is changed every 10 iterations, which causes GPU memory usage to rise and fall.

Where can I change this setting?
Your exp file.
For anyone concerned: change random_size or multiscale_range in your exp file.
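To make that concrete, here is a minimal exp-file sketch; the attribute names match the YOLOX Exp base class (yolox_base.py), but check your version for the exact semantics:

```python
# my_exp.py -- hypothetical exp file; only the multi-scale knobs are shown.
from yolox.exp import Exp as BaseExp

class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        # Option 1: shrink (or zero out) the +/- range around input_size,
        # in units of 32 px; 0 keeps the resolution fixed.
        self.multiscale_range = 0
        # Option 2: pin the sampled range explicitly (also units of 32 px).
        # If set, this takes precedence over multiscale_range.
        # self.random_size = (14, 18)  # i.e. 448-576 px
```

Smaller ranges cap the peak activation size and should stop the rising/falling memory pattern described above.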