Adversarial training seems to have no effect; is the released code wrong?
Hello, really nice job! But I have some concerns about the released code.
According to your code, AdvNeRFNet has three components: AdvNeRFNet.nerf_adv, AdvNeRFNet.nerf, and AdvNeRFNet.nerf_fine.
During training, loss_adv and adv_perturb only affect the update of AdvNeRFNet.nerf_adv, while loss_clean only affects the updates of AdvNeRFNet.nerf and AdvNeRFNet.nerf_fine.
During inference, only AdvNeRFNet.nerf and AdvNeRFNet.nerf_fine are used.
After carefully checking your code, I find that AdvNeRFNet.nerf_adv, AdvNeRFNet.nerf, and AdvNeRFNet.nerf_fine are independent of one another, with no shared components before or after them. So the AdvNeRFNet.nerf_adv branch appears to have no effect on either the training or the inference of the conventional NeRF.
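To make the concern concrete, here is a minimal PyTorch sketch. This is a toy stand-in, not your actual architecture (the real modules are full NeRF MLPs, and the losses differ), but it shows the gradient-flow problem: when the three modules share no parameters, a loss computed through nerf_adv leaves the modules used at inference completely untouched.

```python
import torch
import torch.nn as nn

class AdvNeRFNet(nn.Module):
    """Toy stand-in: three modules with no shared layers, as in the released code."""
    def __init__(self):
        super().__init__()
        self.nerf_adv = nn.Linear(3, 1)
        self.nerf = nn.Linear(3, 1)
        self.nerf_fine = nn.Linear(3, 1)

model = AdvNeRFNet()
x = torch.randn(4, 3)

# loss_adv flows only through nerf_adv ...
loss_adv = model.nerf_adv(x).pow(2).mean()
loss_adv.backward()

# ... so the modules used at inference receive no gradient from it.
assert model.nerf_adv.weight.grad is not None
assert model.nerf.weight.grad is None
assert model.nerf_fine.weight.grad is None
```

Whatever adversarial perturbation is found, it can only ever update nerf_adv, whose weights are then discarded at test time.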
What do you think? Are there any details I have overlooked?
I am a huge fan of adversarial training, and I have also checked your previous work CV_A-FAN. In the classification task of CV_A-FAN, there seems to be no additional "adversarial" branch within the ResNet, so the ResNet weights can be adversarially trained on both benign and adversarial examples. Maybe Aug-NeRF should likewise have no additional branch?
Hi,
Thanks for raising this issue. I have checked the code, and the current version may indeed have a problem. In our original implementation, as I recall, the adversarial perturbation should be injected through one of the branches of the NeRF network itself (recall that the NeRF MLP has two branches, one for density prediction and another for color prediction). One quick fix might be to replace all occurrences of nerf_adv with nerf_fine/nerf. I'll work on correcting it. Thanks for your patience.
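For concreteness, here is a hedged sketch of the intended behavior (toy modules and made-up names, not our actual implementation): once the perturbation is injected into an intermediate feature of the shared MLP, loss_adv back-propagates into the same weights that are used at inference.

```python
import torch
import torch.nn as nn

class NeRFToy(nn.Module):
    """Toy NeRF-like MLP: a shared trunk with density and color heads."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Linear(3, 8)   # shared trunk
        self.sigma = nn.Linear(8, 1)   # density branch
        self.rgb = nn.Linear(8, 3)     # color branch

    def forward(self, x, perturb=None):
        h = self.trunk(x)
        if perturb is not None:        # inject the adversarial perturbation mid-network
            h = h + perturb
        return self.sigma(h), self.rgb(h)

model = NeRFToy()
x = torch.randn(4, 3)
delta = 0.01 * torch.randn(4, 8)       # stand-in for adv_perturb

sigma_adv, rgb_adv = model(x, perturb=delta)
loss_adv = sigma_adv.pow(2).mean() + rgb_adv.pow(2).mean()
loss_adv.backward()

# The adversarial loss now reaches the weights used at inference.
assert model.trunk.weight.grad is not None
assert model.sigma.weight.grad is not None
assert model.rgb.weight.grad is not None
```

This is the rough shape the corrected code should take: no separate nerf_adv module, just the shared nerf/nerf_fine networks receiving both clean and perturbed passes.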
Yes, I agree that nerf_adv should be replaced with nerf_fine/nerf.