Håkan Ardö
Sure, they'll have to be extracted from a bigger experimental tangle, but I'll get back to you with a standalone micro-benchmark.
Hi, here is the benchmark, including results as comments at the bottom.
[hub_bench.zip](https://github.com/activeloopai/Hub/files/9547665/hub_bench.zip)
I use a "Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz" system with an "GeForce RTX 2080 Ti" GPU. Thanx for the pointer, I will have a look! On Thu, Sep 15,...
I've benchmarked the deeplake loader using the attached benchmark. It performs slightly better than my branch with shuffle=True, but significantly worse than the Python loader with shuffle=False. Am I doing something wrong?
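(The actual measurements are in the attached hub_bench.zip; purely as an illustration, a minimal sketch of this kind of loader comparison might look like the following. The dataset path, batch size, and the `ds.pytorch()` arguments are assumptions and may differ between Hub/deeplake versions.)

```python
import time

import deeplake
import torch
from torch.utils.data import DataLoader, TensorDataset


def time_loader(loader, n_batches=100):
    """Pull a fixed number of batches and return throughput in batches/second."""
    it = iter(loader)
    start = time.time()
    for _ in range(n_batches):
        next(it)
    return n_batches / (time.time() - start)


# Plain PyTorch loader over synthetic data, as a baseline.
data = TensorDataset(torch.randn(10_000, 3, 32, 32), torch.randint(0, 10, (10_000,)))
python_loader = DataLoader(data, batch_size=64, shuffle=False, num_workers=4)

# deeplake's built-in PyTorch loader; dataset path and argument names are placeholders.
ds = deeplake.load("hub://activeloop/mnist-train")
deeplake_loader = ds.pytorch(batch_size=64, shuffle=True, num_workers=0)

for name, loader in [("python", python_loader), ("deeplake", deeplake_loader)]:
    print(f"{name}: {time_loader(loader):.1f} batches/s")
```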
Thanks for the tip, num_workers=0 improves things, but I still only get some 80% GPU utilization in my training runs, as opposed to about 98% with the shuffle_thread branch. I also...
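(Not from the thread, but for completeness: one hedged way to obtain utilization figures like these is to sample NVML while the training runs, e.g. with the `pynvml` package, assuming the training GPU is device 0.)

```python
import time

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the training GPU is device 0

samples = []
for _ in range(60):
    # Busy percentage of the GPU since the previous query.
    samples.append(pynvml.nvmlDeviceGetUtilizationRates(handle).gpu)
    time.sleep(1.0)

print(f"mean GPU utilization: {sum(samples) / len(samples):.0f}%")
pynvml.nvmlShutdown()
```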
See also https://github.com/activeloopai/Hub/issues/1888
That worked, but it requires the dataloader to be picklable, which is a bit annoying. It seems to be enough to wrap it in a separate thread, though:

```python
class DONE:
    ...
```
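(The snippet above is cut off; here is a minimal sketch of that kind of background-thread wrapper. Only the DONE sentinel name comes from the truncated snippet; the class name, queue size, and the rest are my own placeholders.)

```python
import threading
from queue import Queue


class DONE:
    """Sentinel placed on the queue when the underlying loader is exhausted."""


class ThreadedLoader:
    """Iterate a dataloader on a background thread so CPU-side loading
    overlaps with GPU training, without requiring the loader to be picklable."""

    def __init__(self, loader, prefetch=4):
        self.loader = loader
        self.queue = Queue(maxsize=prefetch)

    def _worker(self):
        for batch in self.loader:
            self.queue.put(batch)
        self.queue.put(DONE)

    def __iter__(self):
        thread = threading.Thread(target=self._worker, daemon=True)
        thread.start()
        while True:
            batch = self.queue.get()
            if batch is DONE:
                break
            yield batch


# Usage: wrap any existing loader.
# for x, y in ThreadedLoader(python_loader):
#     train_step(x, y)
```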
I think the main effect comes from doing the data loading on the CPU in parallel with the training on the GPU instead of in series with it. Thereby the GPU does not have to sit idle waiting for the next batch.
Yes, exactly.