
[Feature Request] Add support for pooling blocks

Open tortoiseknightma opened this issue 2 years ago • 4 comments

E.g., the implementation in MinkowskiEngine: https://nvidia.github.io/MinkowskiEngine/pooling.html#minkowskimaxpooling. I could only find global_max_pool().

tortoiseknightma avatar Nov 13 '23 10:11 tortoiseknightma
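For reference, the semantics of a strided sparse max-pooling block (as in MinkowskiMaxPooling) can be sketched with plain numpy. This is a hypothetical illustration of the operation, not torchsparse's API: coordinates are floor-divided by the kernel size, and features that land in the same coarse cell are reduced with max.

```python
import numpy as np

def sparse_max_pool(coords, feats, kernel_size):
    """Max-pool a sparse tensor (hypothetical sketch, stride == kernel_size):
    active-voxel coordinates are floor-divided by kernel_size, and features
    sharing a coarse coordinate are reduced with max."""
    coarse = coords // kernel_size
    out_coords, inverse = np.unique(coarse, axis=0, return_inverse=True)
    out_feats = np.full(len(out_coords), -np.inf)
    # scatter-max each feature into its coarse cell
    np.maximum.at(out_feats, inverse.ravel(), feats)
    return out_coords, out_feats

coords = np.array([[0, 0], [1, 1], [2, 3]])  # active voxel coordinates
feats = np.array([1.0, 5.0, 2.0])
c, f = sparse_max_pool(coords, feats, 2)
# voxels (0,0) and (1,1) fall in the same 2x2 cell, so their max (5.0) survives
```

A real kernel would of course operate on GPU hashmaps rather than `np.unique`, but the coordinate downsampling and per-cell reduction are the core of the block.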

Hi. We haven't implemented those pooling kernels yet. We will consider implementing them. Thank you for reaching out!

ys-2020 avatar Nov 14 '23 21:11 ys-2020

I also need average pooling for my use case and would appreciate it if you could implement it. Alternatively, I would be happy if you could suggest a way to implement average pooling with convolutions. I thought of using a convolution with all kernel elements set to 1/N, but N needs to be the number of active voxels inside the receptive field, and I don't know how to obtain that number.

YilmazKadir avatar Apr 07 '24 17:04 YilmazKadir
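The idea from the comment above (sum-pool divided by the per-window count of active voxels, where N comes from pooling the occupancy mask with the same all-ones kernel) can be sketched in plain numpy. This is a hypothetical 1D illustration of the math, not torchsparse API; `masked_avg_pool_1d` is a name made up for this sketch.

```python
import numpy as np

def masked_avg_pool_1d(features, mask, kernel_size):
    """Average-pool `features` over non-overlapping windows of `kernel_size`
    (stride == kernel_size), dividing by N = number of active positions in
    each window rather than by the full window size."""
    n = len(features) // kernel_size
    out = np.zeros(n, dtype=float)
    for i in range(n):
        w = slice(i * kernel_size, (i + 1) * kernel_size)
        count = mask[w].sum()  # N: active voxels in this receptive field
        if count > 0:
            out[i] = features[w][mask[w]].sum() / count
    return out

feats = np.array([2.0, 4.0, 0.0, 6.0])  # 0.0 sits at an inactive position
mask = np.array([True, True, False, True])
print(masked_avg_pool_1d(feats, mask, 2))  # [3.0, 6.0]
```

With convolutions, the same result would come from dividing a sum-pooling convolution of the features by a sum-pooling convolution of the 0/1 occupancy mask, which yields N per output voxel.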

Yes, that would be extremely useful. I was in the process of migrating my code from MinkowskiEngine, but sadly the lack of pooling layers makes this impossible for now.

@zhijian-liu @kentang-mit

kabouzeid avatar Apr 08 '24 14:04 kabouzeid