Results: 13 comments by Shiyu Duan

Thanks a lot for answering! I agree that a sequence of linear convolution layers can be collapsed into a single one, but is there a simple way to see that...
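To make the collapse claim concrete, here is a small NumPy check (illustrative, not from the repo) showing that two linear convolutions compose into a single convolution with the combined kernel, since convolution is associative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # input signal
k1 = rng.standard_normal(5)   # kernel of "layer 1"
k2 = rng.standard_normal(3)   # kernel of "layer 2"

# Applying the two linear conv layers in sequence...
two_step = np.convolve(np.convolve(x, k1), k2)

# ...equals a single convolution with the composed kernel k = k1 * k2.
k = np.convolve(k1, k2)
one_step = np.convolve(x, k)

assert np.allclose(two_step, one_step)
```

The identity holds only because there are no nonlinearities between the layers, which is exactly the condition under discussion.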

@jnoylinc ```post_process_k``` does two things. It first zeros out negligible values in the extracted kernel, which, if anything, only results in the kernel further deviating from the generator. Second, it...
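To illustrate the first of those two operations, here is a rough sketch of the zeroing step, assuming a threshold relative to the kernel's largest entry; the repo's exact criterion and function structure may differ:

```python
import numpy as np

def zero_negligible(kernel, rel_thresh=0.05):
    """Illustrative sketch, not the repo's actual post_process_k:
    zero entries below rel_thresh * max|k|, then renormalize to sum 1."""
    k = kernel.copy()
    k[np.abs(k) < rel_thresh * np.abs(k).max()] = 0.0
    return k / k.sum()

k = np.array([[0.001, 0.1], [0.2, 0.7]])
print(zero_negligible(k))  # the 0.001 entry is zeroed, the rest renormalized
```

The point stands either way: zeroing entries changes the extracted kernel, so it can only move it away from whatever the generator actually learned.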

Hi @1214635079, I haven't. I might be wrong here but I do not think the extracted kernel is the same as the learned generator. Hi @sefibk, have you got a...

@1214635079 I agree that a sequence of conv layers can be collapsed into a single conv layer if there are no nonlinearities involved. My concern is that this single conv...

@liuweiyy nope. Still waiting for the author to address this issue.

@sefibk Thanks for the reply. It'd be great if you could elaborate on the motivation part. Pointing us to a full proof somewhere on this would also be much appreciated....

BTW, the current mnist_tutorial in #1028 is only a few edits away from a cifar10 tutorial. Just change torchvision.datasets.MNIST to torchvision.datasets.CIFAR10 and change the in_channel of the model from 1...
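For concreteness, the second of those two edits amounts to something like the following; the tutorial's actual model and parameter names may differ, and the layer here is just a stand-in for its first conv:

```python
import torch
from torch import nn

# Dataset edit (sketch):
#   torchvision.datasets.MNIST(...)  ->  torchvision.datasets.CIFAR10(...)
# Model edit: the first conv layer goes from 1 input channel (grayscale)
# to 3 input channels (RGB).
first_layer = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)

x = torch.randn(8, 3, 32, 32)   # a CIFAR-10-sized batch
print(first_layer(x).shape)     # torch.Size([8, 32, 30, 30])
```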

Yes, I agree that the two tutorials would look similar, and I can't see how a cifar10 tutorial could demonstrate any functionality missing from the existing mnist...

@iamgroot42 I might be misunderstanding this but if by "normalize" you mean the projection step, I think it is in ```clip_eta```.
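For reference, the projection I mean looks roughly like this; this is a sketch of the idea, not CleverHans' actual ```clip_eta``` code:

```python
import numpy as np

def clip_eta(eta, norm, eps):
    """Sketch of the projection step; not CleverHans' implementation."""
    if norm == np.inf:
        # Project the perturbation onto the inf-norm ball of radius eps.
        return np.clip(eta, -eps, eps)
    if norm == 2:
        # Scale the perturbation back onto the L2 ball if it lies outside.
        factor = min(1.0, eps / max(np.linalg.norm(eta), 1e-12))
        return eta * factor
    raise ValueError("norm must be np.inf or 2")

eta = np.array([0.5, -0.9, 0.2])
print(clip_eta(eta, np.inf, 0.3))  # [ 0.3 -0.3  0.2]
```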

@iamgroot42 I don't think this normalization is mentioned in either of the papers referenced by CleverHans' PGD implementation. Although, I think both papers only discussed the inf-norm case unless I've...
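For context, the inf-norm case those papers do discuss uses the gradient's sign followed by a projection, roughly as below; this is a textbook-style sketch, not CleverHans' code:

```python
import numpy as np

def pgd_step_inf(x, grad, x0, alpha, eps):
    """One inf-norm PGD step (sketch): signed-gradient ascent,
    then projection back onto the eps-ball around the clean input x0."""
    x_new = x + alpha * np.sign(grad)
    return x0 + np.clip(x_new - x0, -eps, eps)
```

In the inf-norm case the sign step already makes every coordinate the same magnitude, so no separate normalization of the gradient is needed, which may be why the papers don't mention one.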