OFA
How to split the whole dataset across GPUs in multi-GPU training?
How can the whole dataset be split across GPUs? In multi-GPU training I find that each GPU iterates over the entire dataset, while other repos usually split the dataset across the GPUs.
My concern is that if every GPU has to go over the entire dataset, the total training time stays the same no matter how many GPUs I use!
Could someone please tell me how to split the dataset across the GPUs? Thanks so much!
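For context, the usual way to do this in PyTorch is to wrap the dataset in `torch.utils.data.distributed.DistributedSampler` and pass it as the `sampler` argument to the `DataLoader` (calling `sampler.set_epoch(epoch)` each epoch so the shuffling differs across epochs). The sketch below is not OFA's actual code; it is a minimal, hypothetical illustration of the sharding idea behind `DistributedSampler`: each rank keeps only every `world_size`-th sample, so together the ranks cover the dataset exactly once per epoch.

```python
# Hypothetical sketch of per-rank dataset sharding (the idea behind
# torch.utils.data.distributed.DistributedSampler), not OFA's code.

def shard_indices(dataset_len, rank, world_size):
    """Return the sample indices assigned to this rank:
    rank, rank + world_size, rank + 2 * world_size, ..."""
    return list(range(rank, dataset_len, world_size))

dataset_len = 10                   # toy dataset of 10 samples
world_size = 4                     # number of GPUs / processes

shards = [shard_indices(dataset_len, r, world_size) for r in range(world_size)]
for r, shard in enumerate(shards):
    print(f"rank {r} -> {shard}")
# rank 0 -> [0, 4, 8]
# rank 1 -> [1, 5, 9]
# rank 2 -> [2, 6]
# rank 3 -> [3, 7]

# Every sample lands on exactly one rank, so each GPU only
# processes 1/world_size of the data per epoch.
covered = sorted(i for shard in shards for i in shard)
assert covered == list(range(dataset_len))
```

With this kind of sharding, each GPU touches only `1/world_size` of the samples per epoch, which is what makes multi-GPU training actually reduce wall-clock time per epoch.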