ROZBEH
@HesNobi @nglehuy Any suggestions on how to fix this? I am facing the same issue: test.py is extremely slow. It takes 1 hour to run inference on 8 data...
I see, thanks @nithinraok. Yeah, knowing the actual config would be really beneficial.
I am facing the same issue on a multi-node, multi-GPU setup without Docker. I am using Slurm to run the job.
Hi there, thanks for putting together this repo. It's a great resource. Any progress on this? Thanks.
Hi there, just checking in and wondering whether this has been resolved? I am facing the same issue. Thank you.
Thanks @haihua. Yes, I'm indeed running 5 nodes with 5 GPUs each. Is that what you mean?
I see, but the above issue persists in the multi-node case, and I'd like to get it working.
Any updates here?