
Projection back to the range step

Open 1094442522 opened this issue 4 years ago • 4 comments

Hi, I read the paper and found there is a "projection back to the range" step in Algorithm 1. Is this step implemented in ilo_stylegan.py?

From my understanding of the code, there is a projection onto the l1-ball neighbourhood of prev_gen_out, but I don't find the step z_p ← G1(z_k) (step 6 in Algorithm 1) in the code. I am wondering if there is something wrong with my understanding.

Thanks for your help! Algorithm 1 is attached below. [image]

1094442522 avatar Sep 19 '21 17:09 1094442522

@1094442522 Did you figure this out? @giannisdaras It would be really helpful to get your clarification on this.

akashsharma02 avatar Apr 11 '22 22:04 akashsharma02

I've got the same question as you did... I suspect the author didn't implement lines 5-6 of Algorithm 1 in ilo_stylegan.py @1094442522 @akashsharma02

ffhibnese avatar Sep 26 '22 03:09 ffhibnese

The following code projects back to the l1-ball from the solution of the previous layer: https://github.com/giannisdaras/ilo/blob/08a88f2ae0f6530211be93a0deed502d38a871bd/ilo_stylegan.py#L248

If the solution of the previous layer lies within the range of the layer (which is definitely the case when you optimize in the first intermediate layer), you are guaranteed to stay within an l1-deviation from the range.

Is this answering what you guys are asking? Thanks for your interest!

giannisdaras avatar Sep 26 '22 07:09 giannisdaras
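For readers following the discussion: projecting a vector onto an l1-ball centered at a reference point (here, the previous layer's solution, prev_gen_out) is typically done with the sorting-based algorithm of Duchi et al. The sketch below is a generic NumPy illustration of that operation, not the actual code in ilo_stylegan.py:

```python
import numpy as np

def project_l1_ball(v, center, radius):
    """Project v onto the l1-ball of the given radius around center.

    Generic sketch of the sorting-based simplex/l1-ball projection
    (Duchi et al., 2008); illustrative only, not the repo's code.
    """
    d = v - center
    abs_d = np.abs(d)
    if abs_d.sum() <= radius:
        return v.copy()  # already inside the ball: projection is identity
    # Find the soft-threshold theta from the sorted absolute deviations.
    u = np.sort(abs_d)[::-1]
    cumsum = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * ks > (cumsum - radius))[0][-1]
    theta = (cumsum[rho] - radius) / (rho + 1.0)
    # Soft-threshold the deviation and shift back to the center.
    return center + np.sign(d) * np.maximum(abs_d - theta, 0.0)
```

After projection, the deviation from the center has l1-norm at most the given radius, which is the "stay in an l1-deviation from the range" guarantee described above.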

> The following code projects back to the l1-ball from the solution of the previous layer:
>
> https://github.com/giannisdaras/ilo/blob/08a88f2ae0f6530211be93a0deed502d38a871bd/ilo_stylegan.py#L248
>
> If the solution of the prev. layer lies within the range of the layer (which is definitely the case when you optimize in the first intermediate layer), you are guaranteed to stay in an l1-deviation from the range.
>
> Is this answering what you guys are asking? Thanks for your interest!

You wrote "This problem is solved by initializing a latent vector $z^p$ to $\hat{z}^p$ and then minimizing using gradient descent the loss $||G_1(z^k) − \tilde{z}_p||$" in the paper, namely the 5th line of Algorithm 1. But I don't find any implementation of this. In my opinion, the code just simply projects the current vector into an l1-deviation ball.

Thanks for answering my question! I would be grateful if you could help me figure this out.

ffhibnese avatar Sep 26 '22 11:09 ffhibnese
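For context on what line 5 of Algorithm 1 asks for: minimize $||G_1(z^k) − \tilde{z}_p||$ over $z$ by gradient descent, then take $z_p \leftarrow G_1(z_k)$ as the point in the range. The toy sketch below illustrates that "projection back to the range" step with a hypothetical linear generator standing in for the StyleGAN layer; everything here (the linear G1, the hand-written gradient) is illustrative and is not the repository's implementation:

```python
import numpy as np

def project_back_to_range(G1, G1_jac, z_init, z_target, lr=0.1, steps=500):
    """Minimize 0.5 * ||G1(z) - z_target||^2 by plain gradient descent.

    Toy illustration of Algorithm 1, line 5; the real code would use
    StyleGAN layers and an autograd optimizer, not this manual loop.
    """
    z = z_init.copy()
    for _ in range(steps):
        residual = G1(z) - z_target
        # Gradient of 0.5*||G1(z) - z_target||^2 is J^T residual.
        z -= lr * G1_jac(z).T @ residual
    # Return both the optimized latent and its image under G1
    # (the projection of z_target back onto the range of G1).
    return z, G1(z)

# Hypothetical linear "generator" G1(z) = A z, for illustration only.
A = np.array([[2.0, 0.0],
              [0.0, 0.5]])
G1 = lambda z: A @ z
G1_jac = lambda z: A  # Jacobian of a linear map is the matrix itself

z_star, z_proj = project_back_to_range(G1, G1_jac,
                                       np.zeros(2), np.array([1.0, 1.0]))
```

Since this toy G1 is invertible, gradient descent recovers the target exactly; for a real generator the minimizer $G_1(z^k)$ is the closest reachable point in the range, which is precisely what the questioner is pointing out the l1-ball projection alone does not compute.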