Mempool memory resource - IPC
This is ready for at least a first look.
Add a `Mempool` class as the first public `MemoryResource` implementation. It supports IPC.
Add an IPC buffer: an internal buffer implementation that represents a buffer descriptor used to export and import buffers across process boundaries. It implements `__reduce__`, which is expected by common multiprocessing libraries. Its use is shown in the tests added in this PR.
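Below is a minimal, hedged sketch (not the actual cuda.core API; the class names here are hypothetical stand-ins) of the `__reduce__` pattern described above, i.e. how a picklable descriptor lets multiprocessing ship a buffer to a child process and rebuild it there:

```python
# Illustrative only: shows the __reduce__ hook that multiprocessing relies on
# to pickle a buffer in the exporting process and reconstruct it in the
# importing process from an opaque descriptor.
import multiprocessing as mp


class IPCBufferDescriptor:
    """Hypothetical stand-in for the exported, picklable buffer descriptor."""

    def __init__(self, handle_bytes, size):
        self.handle_bytes = handle_bytes  # opaque OS/driver handle payload
        self.size = size


class IPCBuffer:
    """Hypothetical buffer that re-imports itself in the receiving process."""

    def __init__(self, descriptor):
        self.descriptor = descriptor
        # A real implementation would map the descriptor back into device
        # memory here (e.g. via the driver's mempool import call).

    def __reduce__(self):
        # multiprocessing pickles (callable, args); the child process calls
        # IPCBuffer(descriptor) to reconstruct the buffer on its side.
        return (IPCBuffer, (self.descriptor,))


def child(buf):
    print("child received buffer of size", buf.descriptor.size)


if __name__ == "__main__":
    buf = IPCBuffer(IPCBufferDescriptor(b"\x00" * 64, size=1 << 20))
    p = mp.Process(target=child, args=(buf,))
    p.start()
    p.join()
```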
This pull request requires additional validation before any workflows can run on NVIDIA's runners.
/ok to test
| Doc Preview CI |
| :---: |
| :rocket: View preview at https://nvidia.github.io/cuda-python/pr-preview/pr-446/ <br> https://nvidia.github.io/cuda-python/pr-preview/pr-446/cuda-core/ <br> https://nvidia.github.io/cuda-python/pr-preview/pr-446/cuda-bindings/ <br><br> Preview will be ready when the GitHub Pages deployment is complete. |
/ok to test
I am tasked with adding the IPC mempool support for Windows at the driver level (5151668), so I think we should roll with this implementation and have it pick up Windows once that change is integrated. Users who need IPC on Windows for older CTKs can implement their own MR or use the cuMemCreate & VMM API bindings (sketched below). WDYT?
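A hedged, minimal sketch of that fallback, not part of this PR: it assumes the `cuda.bindings.driver` module, elides all error checking, and only covers the exporting side (the importing process would call `cuMemImportFromShareableHandle` followed by its own `cuMemAddressReserve`/`cuMemMap`/`cuMemSetAccess`):

```python
# Sketch of the cuMemCreate/VMM path: create physical memory with a shareable
# handle type, export the handle, and map the allocation in this process.
from cuda.bindings import driver

(err,) = driver.cuInit(0)
err, dev = driver.cuDeviceGet(0)
err, ctx = driver.cuDevicePrimaryCtxRetain(dev)
(err,) = driver.cuCtxSetCurrent(ctx)

# On Linux the shareable handle is a POSIX file descriptor; on Windows this
# would be CU_MEM_HANDLE_TYPE_WIN32 once the driver-level support lands.
handle_type = driver.CUmemAllocationHandleType.CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR

prop = driver.CUmemAllocationProp()
prop.type = driver.CUmemAllocationType.CU_MEM_ALLOCATION_TYPE_PINNED
prop.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
prop.location.id = 0
prop.requestedHandleTypes = handle_type

# Allocation sizes must be a multiple of the granularity.
err, gran = driver.cuMemGetAllocationGranularity(
    prop, driver.CUmemAllocationGranularity_flags.CU_MEM_ALLOC_GRANULARITY_MINIMUM
)
size = ((1 << 20) + gran - 1) // gran * gran

err, phys = driver.cuMemCreate(size, prop, 0)
err, shareable = driver.cuMemExportToShareableHandle(phys, handle_type, 0)
# `shareable` (an fd on Linux) is what gets sent to the peer process.

# Map the allocation into this process's address space and grant access.
err, ptr = driver.cuMemAddressReserve(size, 0, 0, 0)
(err,) = driver.cuMemMap(ptr, size, 0, phys, 0)
desc = driver.CUmemAccessDesc()
desc.location.type = driver.CUmemLocationType.CU_MEM_LOCATION_TYPE_DEVICE
desc.location.id = 0
desc.flags = driver.CUmemAccess_flags.CU_MEM_ACCESS_FLAGS_PROT_READWRITE
(err,) = driver.cuMemSetAccess(ptr, size, [desc], 1)
```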
IPC support for Linux landed (https://github.com/NVIDIA/cuda-python/pull/930) 🎉 We'll revisit (and close) this PR when a viable path emerges for the Windows counterpart (and preferably not through VMM, #968).