
question about expected speedup when using parallelization via joblib

Open nguyentr17 opened this issue 1 year ago • 3 comments

Hi,

I came across your repository while searching for ways to train multiple NN models simultaneously on a single GPU. My model is pretty small (just a 1-layer MLP) and the VRAM used by each model is only about 260 MB. However, when I use joblib to train multiple models at the same time, even though they do start at the same time (according to the log), the total training time is still the same as training the models sequentially. Do you happen to have any tips / quick insights / things to look at for this? I know this is not directly an issue with your package, but I would really appreciate any help.

My code is like this:

    from joblib import Parallel, delayed, parallel_backend

    # process_latent_pair trains 1 NN model on DEVICE
    with parallel_backend('loky', n_jobs=-1):
        parallel = Parallel(n_jobs=-1)
        parallel(
            delayed(process_latent_pair)(mi_estimator, iid, tid, cfg, exp_name, args, DEVICE)
            for iid in range(13)
            for tid in range(13)
        )
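
As a sanity check, a minimal sketch of timing the sequential and parallel runs under the same conditions (reusing the `process_latent_pair`, `mi_estimator`, `cfg`, `exp_name`, `args`, and `DEVICE` names from the snippet above) could look like this:

    import time
    from joblib import Parallel, delayed, parallel_backend

    pairs = [(iid, tid) for iid in range(13) for tid in range(13)]

    # Sequential baseline: train every model one after another.
    start = time.perf_counter()
    for iid, tid in pairs:
        process_latent_pair(mi_estimator, iid, tid, cfg, exp_name, args, DEVICE)
    sequential_time = time.perf_counter() - start

    # Parallel run with the process-based loky backend, as in the snippet above.
    start = time.perf_counter()
    with parallel_backend('loky', n_jobs=-1):
        Parallel(n_jobs=-1)(
            delayed(process_latent_pair)(mi_estimator, iid, tid, cfg, exp_name, args, DEVICE)
            for iid, tid in pairs
        )
    parallel_time = time.perf_counter() - start

    print(f"sequential: {sequential_time:.1f}s  parallel: {parallel_time:.1f}s")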

My environment:

python 3.9.19
torch==2.4.1
joblib==1.4.2

nguyentr17 avatar Dec 04 '24 14:12 nguyentr17

Which model are you training? The parallel part is already implemented in torchensemble; you only need to pass the `n_jobs` parameter.
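
For example (a minimal sketch; `MLP` and `train_loader` are placeholders for your own module class and data loader, not names from your code):

    from torchensemble import VotingRegressor  # or VotingClassifier, etc.

    # `MLP` is a placeholder for your own nn.Module class and `train_loader`
    # for your own DataLoader; both are assumptions for illustration.
    ensemble = VotingRegressor(
        estimator=MLP,      # base estimator class (instantiated internally)
        n_estimators=10,    # number of models in the ensemble
        cuda=True,
        n_jobs=4,           # joblib workers used for the parallel part
    )
    ensemble.set_optimizer("Adam", lr=1e-3)
    ensemble.fit(train_loader, epochs=50)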

xuyxu avatar Dec 05 '24 12:12 xuyxu

Hi @xuyxu, I am not using torchensemble, just joblib directly. I haven't had any luck debugging this, and since you probably have a lot of experience with this, I would like to ask for advice on what might have caused the lack of speedup.

nguyentr17 avatar Dec 06 '24 01:12 nguyentr17

You can check the use of joblib in torchensemble here, which differs slightly from your usage. Maybe this helps.
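
Roughly speaking (this is only an illustration with made-up names such as `train_one_epoch`, not the actual torchensemble code), the idea is to keep one `Parallel` context alive and dispatch shorter per-epoch tasks to the workers:

    from joblib import Parallel, delayed

    # `train_one_epoch`, `estimators`, `optimizers`, `train_loader`, `epochs`,
    # and `n_jobs` are hypothetical names used only to illustrate the pattern.
    with Parallel(n_jobs=n_jobs) as parallel:
        for epoch in range(epochs):
            results = parallel(
                delayed(train_one_epoch)(estimator, optimizer, train_loader, epoch)
                for estimator, optimizer in zip(estimators, optimizers)
            )
            # the trained copies come back from the worker processes
            estimators, optimizers = zip(*results)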

xuyxu avatar Dec 06 '24 09:12 xuyxu