Ramanish Singh
@rsdefever I agree with you; that level of precision is not even meaningful. The bond length tolerance in our group's code is 1e-6 Angstrom. So, a precision of 1e-7 or...
Hi @keunwoochoi, thanks for checking out `nodes`. `Loader` works pretty well, and you can check out some examples here: https://github.com/pytorch/data/tree/main/examples/nodes You can also check the migration guide here: https://pytorch.org/data/beta/migrate_to_nodes_from_utils.html...
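For a quick feel of the API, here is a minimal sketch along the lines of the linked examples. Class names and arguments are based on how I recall the `torchdata.nodes` docs and may differ slightly across versions, so please treat the examples and migration guide above as authoritative:

```python
# Minimal torchdata.nodes sketch (assumes a recent torchdata that ships
# torchdata.nodes; see the linked examples for the exact API).
from torchdata.nodes import IterableWrapper, Batcher, Loader

# Wrap any Python iterable into a node, batch it, and drive it with Loader.
node = IterableWrapper(range(10))
node = Batcher(node, batch_size=4, drop_last=False)
loader = Loader(node)

for batch in loader:
    print(batch)  # expected: [0, 1, 2, 3], [4, 5, 6, 7], [8, 9]
```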
@busbyjrj You can check out the discussion here: #1472. We are currently working on developing some for multi-GPU training, and we plan to publish them in the coming weeks.
Hi @AshwinSankar17 I wasn't able to reproduce the issue you are facing; I was actually able to get batches from the StatefulDataLoader. I ran into a different issue,...
@yuvalatzmon I have added a custom enumerate method in #1505. LMK what you think.
Hi @vadimkantorov, we are currently working on upstreaming it: https://github.com/ramanishsingh/pytorch/tree/upstream_sdl
Hi @howitry Yes, prefetched data is discarded on resume, because we only store the number of yielded samples; the prefetched batches are not saved in the checkpoint. However, those batches are...
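A minimal sketch of what that checkpoint/resume flow looks like with `StatefulDataLoader` (the dataset, sizes, and worker counts here are illustrative): only the iteration progress is recorded, not whatever the workers had prefetched but not yet yielded.

```python
# Sketch: checkpoint/resume with StatefulDataLoader. state_dict() captures how
# far the caller has advanced; prefetched-but-unyielded batches are refetched
# on resume rather than being saved.
from torchdata.stateful_dataloader import StatefulDataLoader

ds = list(range(16))  # simple map-style dataset
dl = StatefulDataLoader(ds, batch_size=4, num_workers=2, prefetch_factor=2)

it = iter(dl)
print(next(it))          # expected: tensor([0, 1, 2, 3])
state = dl.state_dict()  # records that one batch (4 samples) has been yielded

# Resume in a fresh loader: iteration continues from sample 4 onward.
dl2 = StatefulDataLoader(ds, batch_size=4, num_workers=2, prefetch_factor=2)
dl2.load_state_dict(state)
for batch in dl2:
    print(batch)  # expected: tensor([4, 5, 6, 7]), tensor([ 8,  9, 10, 11]), ...
```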
> hi @ramanishsingh, can we try the CI with the latest change?
>
> also, i will have access to a windows laptop in a few days, in case the...
Hi @keunwoochoi, this repo has recently moved from pytorch to meta-pytorch, and somehow the CIs are not running. We haven't been able to devote cycles to solving this yet.
Hi @howitry Trying to understand with the help of some examples.

1. In the case of a non-stateful iterable dataset:

```
import torch
from torch.utils.data import IterableDataset, get_worker_info
from torchdata.stateful_dataloader import...
```
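The snippet above is truncated in this view; below is a rough, illustrative sketch of that kind of setup (a non-stateful `IterableDataset` sharded via `get_worker_info`, driven by `StatefulDataLoader`), not the original code. Names and sizes are placeholders.

```python
# Sketch: a non-stateful IterableDataset (no state_dict/load_state_dict of its
# own) used with StatefulDataLoader. Since the dataset keeps no state, the
# loader can only record how many samples were yielded and skip that many on
# resume.
import torch
from torch.utils.data import IterableDataset, get_worker_info
from torchdata.stateful_dataloader import StatefulDataLoader


class MyIterableDataset(IterableDataset):  # hypothetical example dataset
    def __init__(self, size):
        self.size = size

    def __iter__(self):
        # Shard the range across workers so each worker yields distinct samples.
        info = get_worker_info()
        worker_id = info.id if info is not None else 0
        num_workers = info.num_workers if info is not None else 1
        for i in range(worker_id, self.size, num_workers):
            yield i


dl = StatefulDataLoader(MyIterableDataset(8), batch_size=2, num_workers=2)

it = iter(dl)
print(next(it))          # first batch
state = dl.state_dict()  # only the yielded-sample progress is recorded

dl2 = StatefulDataLoader(MyIterableDataset(8), batch_size=2, num_workers=2)
dl2.load_state_dict(state)
for batch in dl2:
    print(batch)  # iteration resumes after the already-yielded samples
```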