
Fine-tuning stage process and filtering samples adaptively

Open · dogyoonlee opened this issue on Jun 07, 2023 · 0 comments

Hello,

I'm re-implementing your method in pure PyTorch, and everything before the fine-tuning stage works, including the sample-importance learning.

However, I have some additional questions about adaptive sampling and the fine-tuning stage.

Could you let me know where exactly adaptive sampling comes into play during the fine-tuning stage?

I implemented adaptive sampling based on the learned sample importance with a top-k algorithm, applied after masking so that only importance values exceeding the adaptive threshold are kept.

Because of the batch-wise data format, the algorithm I designed sets the remaining importance values to zero in the cases you mention in the paper. [screenshot of the relevant paper excerpt]
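For reference, this is roughly how my selection step works (a simplified sketch, not my exact code; the function name and tensor shapes are just for illustration):

```python
import torch

def select_adaptive_samples(importance, threshold, max_samples):
    """Select up to `max_samples` samples per ray from predicted importance.

    importance:  [batch, n_samples] per-sample importance from the sampling network
    threshold:   scalar adaptive threshold
    max_samples: maximum sample count k for this configuration
    Returns a boolean mask [batch, n_samples] of the selected samples.
    """
    # Keep only samples whose importance exceeds the adaptive threshold;
    # everything below it is zeroed so top-k cannot pick it.
    masked = torch.where(importance > threshold,
                         importance,
                         torch.zeros_like(importance))

    # Because of the batch-wise data format, every ray must return the same
    # number of candidates, so take the top-k of the masked importance.
    topk_vals, topk_idx = torch.topk(masked, k=max_samples, dim=-1)

    # Build the per-ray selection mask; entries whose top-k value is zero
    # were below the threshold and are effectively discarded.
    mask = torch.zeros_like(importance, dtype=torch.bool)
    mask.scatter_(-1, topk_idx, topk_vals > 0)
    return mask

# Example usage: 4 rays, 128 candidate samples, at most 8 kept per ray.
imp = torch.rand(4, 128)
sel = select_adaptive_samples(imp, threshold=0.9, max_samples=8)
```

Does this match the behavior you intend during fine-tuning?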

In addition, I'm confused about the actual meaning of this sentence in the paper (Section 3.2, Fine-tuning): "Note that this phase results in separate shading networks for each maximum sample count, while all rely on the same sampling network."
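My current interpretation of that sentence, sketched in code, is below (a rough sketch only; the class names, layer sizes, and sample counts are placeholders I made up, not from your repo):

```python
import torch.nn as nn

class SamplingNetwork(nn.Module):
    """Stand-in for the shared sampling network (placeholder layers)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Linear(64, 128)

class ShadingNetwork(nn.Module):
    """Stand-in for one shading network (placeholder layers)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Linear(64, 4)

# One sampling network shared across all configurations...
sampling_net = SamplingNetwork()

# ...but a separate shading network fine-tuned per maximum sample count.
shading_nets = {k: ShadingNetwork() for k in (2, 4, 8, 16)}
```

Is that the intended setup, i.e., each shading network is fine-tuned independently for its own maximum sample count while all of them consume the output of the same sampling network?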

However, my implementation does not work, and I'm having a hard time fixing it. Could you explain this point in more detail?

(I'm attaching my implementation code so you can see what I did.) [screenshot of my implementation]
