Divyansh Khanna

Results: 26 comments by Divyansh Khanna

looks like the support for 3.13t isn't really there at the moment

Thanks @keunwoochoi for bringing this up! It is a very valid point of discussion. We had some discussions and this is one way we can go about it: let's keep...

Happy to discuss here which nodes we can add!

I wonder how different this would be from doing the transfer within a `Mapper`, similar to a `collate_fn` doing `tensor.to(device)`.
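A minimal sketch of that idea, assuming a map function we write ourselves (`to_device_map_fn` is hypothetical, not a torchdata API) that could be plugged into a `Mapper` the same way a `collate_fn` would call `tensor.to(device)`:

```python
import torch

def to_device_map_fn(batch, device=None):
    """Move every tensor in a (possibly nested) batch to `device`."""
    # Falls back to CPU when CUDA is unavailable so the sketch runs anywhere.
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    if isinstance(batch, torch.Tensor):
        return batch.to(device, non_blocking=True)
    if isinstance(batch, (list, tuple)):
        return type(batch)(to_device_map_fn(x, device) for x in batch)
    if isinstance(batch, dict):
        return {k: to_device_map_fn(v, device) for k, v in batch.items()}
    return batch

# Example batch shaped like a typical collate_fn output.
batch = {"x": torch.zeros(2, 3), "y": [torch.ones(4)]}
moved = to_device_map_fn(batch, device="cpu")
```

The nesting logic mirrors what collate utilities already do, so the device transfer itself is just one more mapped step.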

For cases where we have multiple threads reading data, we might be able to create thread-local CUDA streams to transfer data onto the GPU. WDYT @andrewkho?
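A rough sketch of the thread-local-stream idea (the helper names here are assumptions, not an existing API; the CUDA path is skipped entirely when no GPU is present):

```python
import threading
import torch

_tls = threading.local()

def _get_stream():
    # One CUDA stream per reader thread: each thread lazily creates and
    # caches its own stream in thread-local storage.
    if not torch.cuda.is_available():
        return None
    if getattr(_tls, "stream", None) is None:
        _tls.stream = torch.cuda.Stream()
    return _tls.stream

def transfer(batch):
    stream = _get_stream()
    if stream is None:
        return batch  # CPU-only fallback: nothing to transfer
    with torch.cuda.stream(stream):
        out = batch.pin_memory().to("cuda", non_blocking=True)
    # Make the async copy visible to the default stream before use.
    torch.cuda.current_stream().wait_stream(stream)
    return out

demo = transfer(torch.arange(3.0))
```

Each reader thread issuing copies on its own stream lets host-to-device transfers overlap instead of serializing on the default stream.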

Sharing some points that we discussed over the call. 1. The core work here is in the data access layer, not particularly in the data loader. I imagine we can...

@isarandi Have you tried creating a `Dataset` class and using a `DistributedSampler` for multi-GPU workloads?
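For reference, a minimal sketch of that setup (`ToyDataset` is a stand-in; `num_replicas`/`rank` are passed explicitly here only so the example runs standalone — in a real multi-GPU job they are inferred from the initialized process group):

```python
import torch
from torch.utils.data import DataLoader, Dataset, DistributedSampler

class ToyDataset(Dataset):
    # Minimal map-style Dataset; real loading/decoding goes in __getitem__.
    def __init__(self, n=8):
        self.data = torch.arange(n, dtype=torch.float32)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

ds = ToyDataset()
# Rank 0 of 2 replicas sees every other sample: indices 0, 2, 4, 6.
sampler = DistributedSampler(ds, num_replicas=2, rank=0, shuffle=False)
loader = DataLoader(ds, batch_size=2, sampler=sampler)
batches = list(loader)
```

Remember to call `sampler.set_epoch(epoch)` each epoch when `shuffle=True` so shards reshuffle consistently across ranks.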

Thanks @tazr for the issue! You already answered my first question - it works with a `Mapper`. The reason it fails with `ParallelMapper` is because `ParallelMapper` uses background threads (for...

@tazr You are overall right. Each node operates on the outputs of the previous nodes. `ParallelMapper` uses background threads, which seems to be causing the error in your case. In typical...
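The thread-affinity difference can be shown with plain Python (a `ThreadPoolExecutor` standing in for `ParallelMapper`'s background threads; this is an illustration of the mechanism, not torchdata code):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def map_fn(x):
    # Record which thread ran the map. Inline mapping stays on the calling
    # thread; a pool runs map_fn on background threads, so anything tied to
    # the calling thread (e.g. per-thread context or state) is absent there.
    return x * 2, threading.get_ident()

inline = [map_fn(i) for i in range(3)]            # Mapper-style: same thread
with ThreadPoolExecutor(max_workers=2) as pool:   # ParallelMapper-style
    parallel = list(pool.map(map_fn, range(3)))
```

Both paths produce identical values; only the executing thread differs, which is exactly why code that works under `Mapper` can fail under `ParallelMapper`.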