mrshenli
@mrshenli has imported this pull request. If you are a Facebook employee, you can view this diff [on Phabricator](https://www.internalfb.com/diff/D38579873).
The two inserted nodes (i.e., wrap comm result + work, and wait on work) are now correctly linked in the graph:

```
opcode         name                target                                         args                                      kwargs
-------------  ------------------  ---------------------------------------------  ----------------------------------------...
```
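For context, a minimal, self-contained `torch.fx` sketch (not this PR's actual pass) of what "correctly linked" means here: the two inserted nodes take the comm node's output as their args, so they show up with the right edges in the graph dump. `fake_allreduce` and `fake_wait` are hypothetical stand-ins for the real collective and wait ops.

```python
import operator

import torch
import torch.fx as fx

def fake_allreduce(t):
    # Hypothetical stand-in for an async collective:
    # returns (result_tensor, work_handle).
    return t * 2, "work-handle"

def fake_wait(work):
    # Hypothetical stand-in for waiting on the work handle.
    return work

g = fx.Graph()
x = g.placeholder("x")
comm = g.call_function(fake_allreduce, (x,))         # comm op returns (result, work)
res = g.call_function(operator.getitem, (comm, 0))   # inserted: unwrap the comm result
work = g.call_function(operator.getitem, (comm, 1))  # inserted: unwrap the work handle
g.call_function(fake_wait, (work,))                  # inserted: wait on the work
g.output(res)

gm = fx.GraphModule(torch.nn.Module(), g)
print(gm.graph)            # or gm.graph.print_tabular() for a dump like the one above (needs `tabulate`)
print(gm(torch.ones(2)))   # tensor([2., 2.])
```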
> Also, I remembered we are switching to modes instead of tensor subclasses; is that something we will do later?

The current version is already compatible with the `DispatcherMode` implementation...
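To illustrate why the two approaches can coexist: a mode intercepts ops at the dispatcher level while it is active, rather than via wrapper tensors. Below is a minimal sketch assuming the `TorchDispatchMode` API from `torch.utils._python_dispatch` (recent PyTorch releases); it is illustrative only, not this PR's code.

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class CommLoggingMode(TorchDispatchMode):
    # While the mode is active, every dispatched op passes through here,
    # with no need to wrap tensors in a subclass.
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        print(f"intercepted: {func}")  # e.g., record or rewrite collectives here
        return func(*args, **kwargs)

with CommLoggingMode():
    x = torch.ones(2)  # factory ops are intercepted too
    y = x + 1          # aten.add goes through __torch_dispatch__
```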
@pytorchbot merge -g
@pytorchbot merge
merged by #84126
Since this is shared memory, would I be correct to assume that `torchstore` aims to support multiple processes on the same machine, rather than across machines?