BostonLobster
The point clouds provided by [ScanNet](http://www.scan-net.org/) are very noisy, so how can I run `learn_image_filter.py` on a ScanNet scene? Any toy example or suggestion?
I saw that the descriptors are stored in the PointTexture class as `nn.Parameter`s and are rasterized to an image through `index_select`. I am wondering how these indices are obtained?
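For context, a minimal sketch of the mechanism described above, with hypothetical names and shapes (`pixel_to_point` stands in for the per-pixel point indices the rasterizer would produce; nothing here is taken from the actual repo):

```python
import torch
import torch.nn as nn

# Per-point descriptors stored as a learnable nn.Parameter, one row per point.
num_points, descriptor_dim = 1000, 8
height, width = 4, 5
descriptors = nn.Parameter(torch.randn(num_points, descriptor_dim))

# Assumed to come from projecting the point cloud with the camera pose:
# for every pixel, the index of the point that lands on it.
pixel_to_point = torch.randint(0, num_points, (height * width,))

# Gather one descriptor per pixel, then reshape to an image: (H*W, C) -> (C, H, W).
raster = torch.index_select(descriptors, 0, pixel_to_point)
image = raster.t().reshape(descriptor_dim, height, width)
```

The gradient of any loss on `image` flows back into `descriptors` through `index_select`, which is what makes the per-point descriptors trainable.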
First of all, nice project! In your CIFAR demo, the models are hard-coded into your library. So how do I define my own models and make them work well with...
Hi~ In `generate_celeba_synthesize` from `data_loading.py` ([here](https://github.com/bhushan23/SfSNet-PyTorch/blob/68c09fd4224bd08398e2e5721a810296ae38316c/data_loading.py#L213)), you only apply `denorm` to `predicted_normal`, while the others are left as-is. Could you explain a little why? I know the input face is in...
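For reference, `denorm` helpers in PyTorch codebases commonly map network outputs from [-1, 1] back to [0, 1] for saving or display. A sketch under that assumption (not necessarily the exact definition used in this repo):

```python
import torch

def denorm(x: torch.Tensor) -> torch.Tensor:
    """Map a tensor from [-1, 1] back to [0, 1], clamping any overshoot."""
    return ((x + 1.0) / 2.0).clamp(0.0, 1.0)

# Surface normals have components in [-1, 1], so they need this remapping
# before being written out as an image.
normals = torch.tensor([-1.0, 0.0, 1.0])
print(denorm(normals))  # tensor([0.0000, 0.5000, 1.0000])
```

If that assumption holds, outputs that are already in [0, 1] (e.g. albedo or shading maps) would not need `denorm`, which may explain why only `predicted_normal` gets it.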
Hi! Thanks very much for releasing your code. I have one more question: could you release your whole CelebA training dataset? I saw it in `generate_dataset_csv.py`, in which...
Hi, after going through all the examples, I'm wondering how to render a large point cloud, such as a scanned indoor scene. Is there a toy example for rendering a point cloud? Thanks...