Igor Shilov

Results: 32 issues authored by Igor Shilov

We want to warn users of an unexpected behaviour of the `set_to_none` flag. Normally, both `nn.Module` and `Optimizer` let clients choose whether they want to remove the `.grad` attribute altogether or... (the sketch below shows both standard behaviours).

CLA Signed
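For reference, the two behaviours in question, shown with plain PyTorch (no Opacus involved); this is the standard `zero_grad` semantics the issue contrasts against:

```python
import torch

model = torch.nn.Linear(4, 2)
model(torch.randn(8, 4)).sum().backward()

# set_to_none=False: gradients are zeroed in place; the .grad tensors survive
model.zero_grad(set_to_none=False)
print(model.weight.grad)  # tensor of zeros

# set_to_none=True (the default since PyTorch 2.0): .grad is removed entirely
model(torch.randn(8, 4)).sum().backward()
model.zero_grad(set_to_none=True)
print(model.weight.grad)  # None
```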

How Opacus helps protect from privacy attacks

Differential Revision: D38076558

CLA Signed
fb-exported

(possibly with autocast and loss scaling, but in our experience with Alex this may result in training instabilities). On the other hand, it reduces memory usage and increases speed (roughly... A mixed-precision sketch follows below.

enhancement
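For context, a minimal sketch of the autocast-plus-loss-scaling recipe mentioned above, using the standard `torch.cuda.amp` API; the model and shapes here are illustrative, and whether this interacts safely with per-sample gradient computation is exactly the open question:

```python
import torch
import torch.nn.functional as F

# Requires a CUDA device; CPU autocast uses bfloat16 and needs no scaler.
model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(32, 128, device="cuda")
target = torch.randint(0, 10, (32,), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():      # forward pass runs in fp16 where safe
    loss = F.cross_entropy(model(data), target)
scaler.scale(loss).backward()        # scale loss so fp16 grads don't underflow
scaler.step(optimizer)               # unscales grads; skips step on inf/nan
scaler.update()                      # adjusts the scale factor for next step
```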

tl;dr: the [dcgan.py](https://github.com/pytorch/opacus/blob/main/examples/dcgan.py) example adds noise to the fake-data gradients, which it does not need to do. We should change the training pipeline (because it's more correct) and measure the impact (because... A sketch of the idea follows below.

good first issue
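One possible shape of that change, sketched under loud assumptions: DP guarantees protect the real training data, so gradients from generated samples need no clipping or noise. The sketch below uses `GradSampleModule.disable_hooks()`/`enable_hooks()` to skip per-sample capture on the fake batch, with toy models and a plain SGD optimizer standing in for the DP optimizer so it stays runnable; it illustrates the idea, not the measured change from this issue:

```python
import torch
import torch.nn as nn
from opacus import GradSampleModule

# Toy stand-ins for the dcgan.py models (shapes are illustrative).
generator = nn.Linear(16, 32)
discriminator = GradSampleModule(nn.Linear(32, 1))
criterion = nn.BCEWithLogitsLoss()
# dcgan.py uses a DP optimizer from PrivacyEngine; plain SGD stands in here.
d_optimizer = torch.optim.SGD(discriminator.parameters(), lr=0.01)

real_batch = torch.randn(8, 32)
real_labels = torch.ones(8, 1)
fake_labels = torch.zeros(8, 1)

d_optimizer.zero_grad()

# Real batch: per-sample hooks stay on, so DP clipping/noise is
# calibrated to the (private) real data.
criterion(discriminator(real_batch), real_labels).backward()

# Fake batch: generated data is not private, so skip per-sample capture.
discriminator.disable_hooks()
fake = generator(torch.randn(8, 16)).detach()
criterion(discriminator(fake), fake_labels).backward()
discriminator.enable_hooks()

# NB: with a real DPOptimizer, merging the noisy real-data gradients with the
# plain fake-data gradients is the delicate part to change and measure.
d_optimizer.step()
```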

A lot of our grad sample computations are based on einsum. There is an easy (one-liner) way to optimize the expression: https://optimized-einsum.readthedocs.io/en/stable/. We want to try this and evaluate the impact (see the sketch below).

enhancement
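The one-liner presumably amounts to swapping `torch.einsum` for `opt_einsum.contract`, which searches for a cheaper contraction order. A minimal sketch; the equation and shapes are illustrative, not taken from the actual grad samplers:

```python
import torch
import opt_einsum as oe

a = torch.randn(32, 50, 128)  # illustrative shapes
b = torch.randn(32, 50, 64)
c = torch.randn(32, 64, 16)

# torch.einsum contracts operands left to right:
out1 = torch.einsum("nti,ntj,njk->nik", a, b, c)

# opt_einsum picks an optimized contraction path (drop-in replacement):
out2 = oe.contract("nti,ntj,njk->nik", a, b, c)

# For a hot loop with fixed shapes, precompute the path once:
expr = oe.contract_expression("nti,ntj,njk->nik", a.shape, b.shape, c.shape)
out3 = expr(a, b, c)

assert torch.allclose(out1, out2, atol=1e-4) and torch.allclose(out1, out3, atol=1e-4)
```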

There are many possible tensor modifications that break DP guarantees. We make a weak attempt to verify that the model does not break any conventions (e.g. checking for BatchNorm layers), but... (see the example below).

enhancement
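In current Opacus this check lives in `ModuleValidator` (assuming that is the mechanism the issue refers to); a minimal example of the existing behaviour:

```python
import torch.nn as nn
from opacus.validators import ModuleValidator

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.BatchNorm2d(16),  # mixes statistics across samples, breaking per-sample accounting
    nn.ReLU(),
)

# strict=False returns the list of violations instead of raising
print(ModuleValidator.validate(model, strict=False))

# fix() swaps known-bad modules for DP-friendly ones (BatchNorm -> GroupNorm)
model = ModuleValidator.fix(model)
assert not ModuleValidator.validate(model, strict=False)
```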

Now that functorch is publicly released (https://pytorch.org/blog/pytorch-1.11-released/), we want to evaluate what it means for our per-sample gradient computation. How to compute per-sample gradients with functorch: https://pytorch.org/functorch/0.1.0/notebooks/per_sample_grads.html... (condensed below).

enhancement
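The recipe from the linked notebook, condensed; the toy model and shapes are illustrative (functorch 0.1.x API):

```python
import torch
import torch.nn.functional as F
from functorch import make_functional, vmap, grad

model = torch.nn.Linear(10, 3)
fmodel, params = make_functional(model)

def compute_loss(params, sample, target):
    # give the single sample a batch dimension of 1
    prediction = fmodel(params, sample.unsqueeze(0))
    return F.cross_entropy(prediction, target.unsqueeze(0))

data = torch.randn(64, 10)
targets = torch.randint(0, 3, (64,))

# grad() differentiates w.r.t. params; vmap() maps over the batch dimension
# of data/targets while keeping params shared (in_dims=None).
per_sample_grads = vmap(grad(compute_loss), in_dims=(None, 0, 0))(params, data, targets)
print([g.shape for g in per_sample_grads])  # each is (64, *param.shape)
```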

PyTorch is slowly introducing native support for per-sample gradients (https://github.com/pytorch/pytorch/pull/70141). This is a good reason to get rid of our custom grad sampler code; the example below shows the machinery that would be replaced.

enhancement
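For contrast, the custom machinery in question: today Opacus wraps the model in `GradSampleModule`, which populates a `.grad_sample` attribute on every parameter during backward (a minimal example with a toy model):

```python
import torch
import torch.nn.functional as F
from opacus import GradSampleModule

model = GradSampleModule(torch.nn.Linear(10, 3))
data = torch.randn(64, 10)
targets = torch.randint(0, 3, (64,))

loss = F.cross_entropy(model(data), targets)
loss.backward()

for p in model.parameters():
    # one gradient per sample, stacked along a leading batch dimension
    print(p.grad_sample.shape)  # (64, *p.shape)
```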