Ahmed Khaled
Oh, I think it's a very nice remark, if `model.train()` is actually taking time. Assuming it does, you're right that avoiding this in the Higher Level API is important,...
Will work on this, may take some time thinking about how to test it properly.
@vfdev-5 yeah I see, once we sort something out, I'm interested in working on this, will look for something else for now, thanks!
I will work on this. `torch.no_grad` exists in: - Examples - Docs - Metrics - Engine. I will start working on them in this same order, what do you think?
Cool, will do that on CIFAR and share the results; a more complex use case like CIFAR might give us clearer benchmarks.
Ran the CIFAR10 example for 5 epochs each (`no_grad`, `inference_mode`) and averaged the time for evaluating test metrics (on a Colab GPU). - `torch.no_grad`: 3.7 sec - `torch.inference_mode`: 3.688 sec NB: https://colab.research.google.com/drive/1zfWxg8H0XOutqXfh8BORXaz2sJ4hFE_-?usp=sharing
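For reference, a minimal sketch of the two context managers being compared (the tiny model and input shapes here are made up for illustration, not the CIFAR10 setup): both disable gradient tracking during evaluation, and `torch.inference_mode` additionally marks its outputs as inference tensors, which is what allows the extra speedup.

```python
import torch

# Hypothetical stand-in model, just to demonstrate the two contexts.
model = torch.nn.Linear(10, 2)
model.eval()
x = torch.randn(4, 10)

# Classic evaluation context: no gradient tracking.
with torch.no_grad():
    out_no_grad = model(x)

# Newer, stricter context: no gradient tracking AND no version
# counting / view tracking, so outputs are "inference tensors".
with torch.inference_mode():
    out_inference = model(x)

# Neither output tracks gradients; only the second is an inference tensor.
print(out_no_grad.requires_grad, out_inference.requires_grad)
print(out_inference.is_inference())
```

Since the model and input are identical, the two outputs match numerically; the difference is only in the autograd bookkeeping that is skipped.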
I'm really excited to work on this!
Can I participate with any language that I want?
Okay, thank you, I will work on the Java repo.
Thanks! But I think if we did that, we couldn't get it back to `_RawTextIterableDataset`, right?