Saurabh Nair

Results: 16 comments by Saurabh Nair

Hi @Vincentqyw, after capturing the depth, I see that the values are between 0 and 255. How do I get the maximum depth that I can use to normalize the...
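For illustration, a minimal sketch of the inversion I'm after, assuming the capture linearly mapped `[0, depth_max]` onto `[0, 255]` (`depth_max` is exactly the unknown I'm asking about; the value below is a placeholder):

```python
import numpy as np

# Hypothetical 8-bit depth image, as captured.
depth_u8 = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Placeholder: the true maximum depth in metres, which is what I need.
depth_max = 10.0

# Undo the linear 8-bit quantization to recover metric depth.
depth_metric = depth_u8.astype(np.float32) / 255.0 * depth_max
```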

I got it to work by changing the line to: `jnp.broadcast_to(jnp.array([last_sample_z]), z_vals[..., :1].shape)`
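In isolation, the change just broadcasts the scalar far-sample depth so it can be concatenated onto the per-ray samples; a minimal sketch with hypothetical shapes:

```python
import jax.numpy as jnp

# Hypothetical shapes for illustration: 4 rays, 64 depth samples each.
z_vals = jnp.zeros((4, 64))
last_sample_z = 1e10  # placeholder far-plane depth

# Wrapping the scalar in jnp.array([...]) yields a (1,)-shaped array,
# which broadcast_to can then expand to the (4, 1) target shape.
last = jnp.broadcast_to(jnp.array([last_sample_z]), z_vals[..., :1].shape)
print(last.shape)  # (4, 1)
```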

Hey @corlangerak, did you remove the `pip install hypernerf` line in the Colab notebook? Changing the line locally wouldn't reflect the change (you'd be loading the pip-installed hypernerf instead)....
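If you'd rather keep a pip install, here's a sketch of a Colab cell that installs from a clone instead, so local edits are actually used (commands illustrative; adjust the URL to your fork, and this assumes the repo has a `setup.py`):

```python
# Colab cell: install hypernerf from a local clone in editable mode
# instead of the published package, so source edits take effect.
!git clone https://github.com/saunair/hypernerf.git
%cd hypernerf
!pip install -e .
```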

@hsauod check these two changes: https://github.com/google/hypernerf/commit/ae29d1dc5824daaa59a7008df96442873017346e#diff-433be35a4beb7eeee9224dcbe28ec97d53330cd175060905cd5217863674003cR114 and check the second cell in my notebook here: https://github.com/saunair/hypernerf/blob/main/notebooks/HyperNeRF_Training.ipynb

@corlangerak here you go. Sorry about the delay.

@bishengderen were you able to source it by any chance? Or @keunhong, is there a way to download it via Google Colab? (Apologies in advance if this is a dumb...

The definitions are here: https://github.com/eric-yyjau/pytorch-superpoint/blob/4ff74df8fa3c10ce9eb9fdc561f787d9e8bc9691/models/unet_parts.py#L38. They do seem like operations from another architecture, namely the descriptor network. The embedding network seems to be the same.
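For anyone skimming, `unet_parts.py` in that repo follows the familiar UNet building-block pattern; a rough sketch of that kind of double-conv block (illustrative only, names hypothetical, not the repo's exact code):

```python
import torch.nn as nn

# Sketch of a UNet-style double-conv block, the kind of building block
# a unet_parts.py typically defines: two 3x3 convs, each followed by
# batch norm and ReLU, preserving spatial resolution via padding=1.
class DoubleConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```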

MagicPoint and SuperPoint need different directories. I trained a MagicPoint from scratch, but the quality wasn't as good as expected.

Happy to help with certain aspects of the package. I was able to get the repo working with some code changes, but not up to the mark of the original...

(Sorry for the delay) I tried the above-mentioned solution by @EmmCo, but I still see the same issue.

@aganal could you provide a concrete example? In my case, I haven't changed anything in the config.