robust_overfitting
Confusion about L2-norm normalization in PGD attack
elif norm == "l_2":
    # Normalize the gradient to unit L2 norm per sample (1e-10 avoids division by zero)
    g_norm = torch.norm(g.view(g.shape[0], -1), dim=1).view(-1, 1, 1, 1)
    scaled_g = g / (g_norm + 1e-10)
    # Take a step, then project each perturbation back onto the L2 ball of radius epsilon
    d = (d + scaled_g * alpha).view(d.size(0), -1).renorm(p=2, dim=0, maxnorm=epsilon).view_as(d)
Why does the renorm operation in the L2-norm PGD attack, i.e. d = (d + scaled_g*alpha).view(d.size(0), -1).renorm(p=2, dim=0, maxnorm=epsilon).view_as(d), pass dim=0? Doesn't that perform column-wise normalization?
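For anyone with the same confusion, note that torch.renorm(p, dim, maxnorm) renormalizes each sub-tensor *along* dim, so dim=0 on a (batch, features) tensor clips each row, i.e. each sample's flattened perturbation, to L2 norm at most epsilon; it is per-sample, not column-wise. A small standalone sketch (not from the repo; the tensor shapes and eps value are made up for illustration):

```python
import torch

torch.manual_seed(0)
eps = 1.0

# A batch of 4 large random "perturbations", shape (4, 3, 8, 8)
d = torch.randn(4, 3, 8, 8) * 5.0

# Flatten per sample, clip each row's L2 norm to eps, restore the shape
flat = d.view(d.size(0), -1)                          # shape (4, 192)
proj = flat.renorm(p=2, dim=0, maxnorm=eps).view_as(d)

# Each sample's perturbation now lies in the L2 ball of radius eps
per_sample = proj.view(proj.size(0), -1).norm(p=2, dim=1)
print(per_sample)  # every entry <= eps (up to float rounding)
```

So renorm(p=2, dim=0, maxnorm=epsilon) here is exactly the projection step of L2 PGD, applied independently to each example in the batch.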