lang tang
When I use a Tesla V100 GPU, batch_size=256 with input size 3x224x224 works fine. If you have less than 32 GB of GPU memory, consider reducing the batch size or the input...
I implemented this in my neural network; you can add the cutout op at the beginning of the forward pass...
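A minimal NumPy sketch of the Cutout idea mentioned above (zeroing a random square patch of the input); the function name, patch size, and array layout here are illustrative assumptions, not the author's actual implementation:

```python
import numpy as np

def cutout(img, size=8, rng=None):
    """Illustrative Cutout: zero a random square patch of a (C, H, W) array."""
    rng = rng if rng is not None else np.random.default_rng()
    c, h, w = img.shape
    # Pick a random center; the patch may be clipped at the image border.
    cy = int(rng.integers(0, h))
    cx = int(rng.integers(0, w))
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.copy()
    out[:, y0:y1, x0:x1] = 0.0
    return out

x = np.ones((3, 32, 32), dtype=np.float32)
y = cutout(x, size=8, rng=np.random.default_rng(0))
```

In a framework like PyTorch, the same masking would typically be applied to the input batch at the top of `forward` (or as a data-augmentation transform) before any convolutions run.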
This is to make the data contiguous in memory; DALI requires contiguous storage, which improves locality and therefore access speed.
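A small NumPy illustration of the contiguity point (DALI itself is C++/CUDA; this just shows what "contiguous storage" means and how to force a contiguous copy):

```python
import numpy as np

a = np.ones((4, 3), dtype=np.uint8).T  # a transpose is a non-contiguous view
print(a.flags['C_CONTIGUOUS'])          # False: elements are strided in memory

b = np.ascontiguousarray(a)             # copy into one contiguous buffer
print(b.flags['C_CONTIGUOUS'])          # True: sequential access is cache-friendly
```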
Hi, your iteration time seems unstable; is your GPU running other applications?