Integrate DNN runtime for running inference in C++
Enable 3DML processing in C++
This makes an increasing number of 3D processing methods available through the Open3D C++ API.
Options:
- [ ] libtorch [preferred]
- [ ] onnx
Docs:
- https://pytorch.org/cppdocs/
- https://pytorch.org/docs/main/torch.compiler_aot_inductor.html#inference-in-c
- DLPack: https://github.com/dmlc/dlpack
- https://www.open3d.org/docs/release/cpp_api/classopen3d_1_1core_1_1_tensor.html#a7c402059e20f6d7d40504159ad92f911
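For context on the DLPack link above: DLPack lets tensors be exchanged between frameworks without copying. A minimal Python sketch of the mechanism, with NumPy standing in for Open3D tensors (`open3d.core.Tensor` exposes an analogous to/from-DLPack interface, per the Tensor docs linked above):

```python
import numpy as np
import torch

# A NumPy array stands in for an Open3D tensor here; any object
# implementing the DLPack protocol (__dlpack__) can be consumed directly.
x = np.ones((2, 3), dtype=np.float32)

# Zero-copy wrap as a PyTorch tensor via DLPack.
t = torch.from_dlpack(x)
t[0, 0] = 5.0  # writes through to the original buffer

# And back again, also zero-copy.
y = np.from_dlpack(t)
```

The same pattern is what the `forward` pass would use on both sides: wrap the Open3D input as a torch tensor, run the model, and wrap the torch output back.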
High level plan:
- The workflow is that you can train a model in PyTorch, then `torch.export()` it to a `.pt` file on disk. This file can then be loaded from a C++ program for inference. See the AOTInductor example above.
- Add an `open3d::ml::model` class with methods `load_model` and `forward` to load a model from disk and run the forward pass for inference.
- The `load_model` function should:
  - dlopen libtorch, so that libtorch functions can be called from Open3D. This ensures that libtorch remains an optional dependency.
  - Follow the AOTInductor example. Note that the inputs will actually be Open3D tensors (on CPU or GPU). We will use DLPack to wrap these as PyTorch tensors and pass them to the `run()` function. The outputs will similarly be converted from PyTorch back to Open3D with DLPack.
- Test the integration with a very simple model, say a small linear layer initialized with known weights (e.g. all ones). Check that the output for a known input tensor (say all ones) matches between PyTorch and Open3D. Add this as a C++ unit test.
- Next, add a real-world model. GeDI is a good candidate: it is the SoTA point cloud registration feature point descriptor and uses Open3D for processing. See demo.py. Port this example to C++. You will have to check that this model can be `torch.export()`ed. If not, we will have to pick a different model.
I've worked on similar stuff in one of my projects. Can definitely help! Can you point me to relevant material so I can get started?
Hi @adityamwagh thanks for volunteering! I've added more details and a high level plan in the issue description. Let me know if you have any questions.
Thanks for the heads up! I'll have a look into it.
Hey! @ssheorey Apologies for the delay. I have started working on this issue, should have a PR ready in a few days.
I see that this issue is mentioned in the v0.20.0 milestone. When do you plan to release v0.20.0?