
Algorithm problem

Open wangzeyu135798 opened this issue 5 years ago • 2 comments

Hi, after reading your paper and code, I have a question. In ogbn_protiens/attacks.py, the function flag appears to be the core of the algorithm. In this function, you first compute a loss from the data and the perturbation, then in the next args.m - 1 iterations you compute the loss args.m - 1 more times and accumulate the gradients of perturb, and only at the end is the total loss backpropagated. So in your code, the gradients of perturb are accumulated several times while the model parameters are updated only once. This does not seem to match your paper!
In Algorithm 1 of your paper, lines 6 to 8, each iteration of the adversarial loop computes the gradient for the perturbation and for the parameters simultaneously.

wangzeyu135798 avatar Nov 10 '20 13:11 wangzeyu135798

Hello, thanks for your interest in our paper!

In our algorithm, the gradients are accumulated M times for both the model parameters and the perturbations. The way to realize this is to call loss.backward() M times, but inside the loop we only perform gradient ascent on the perturbations, without optimizing the model parameters. After we exit the loop, we optimize the model parameters once. This matches both Algorithm 1 in the paper and our code.
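
For concreteness, a minimal sketch of one such training step is shown below (function and argument names such as `flag_step`, `forward`, and `args.step_size` are illustrative, not necessarily the exact identifiers in attacks.py):

```python
import torch

def flag_step(model, forward, criterion, optimizer, y, perturb_shape, args, device):
    # Simplified sketch of one FLAG training step, following the description above.
    model.train()
    optimizer.zero_grad()

    # Initialize the perturbation and track gradients with respect to it.
    perturb = torch.empty(*perturb_shape, device=device).uniform_(
        -args.step_size, args.step_size).requires_grad_()
    loss = criterion(forward(perturb), y) / args.m

    for _ in range(args.m - 1):
        # Each backward() accumulates gradients for BOTH the model parameters
        # and the perturbation ...
        loss.backward()

        # ... but inside the loop only the perturbation is updated
        # (one step of gradient ascent); the model parameters are untouched.
        perturb.data = perturb.detach() + args.step_size * torch.sign(perturb.grad.detach())
        perturb.grad.zero_()

        loss = criterion(forward(perturb), y) / args.m

    # The final backward() adds the last 1/M contribution, and only then are the
    # model parameters optimized, once, using the gradients accumulated over all M passes.
    loss.backward()
    optimizer.step()
    return loss
```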

Hope this makes sense!

devnkong avatar Nov 16 '20 00:11 devnkong

Hi, may I ask which conference or journal this paper is published in?


wangzeyu135798 avatar Dec 07 '20 12:12 wangzeyu135798