galenmandrew

Results: 15 comments by galenmandrew

Hello. Yes, the basic algorithm of Abadi et al. is supported. You can get that by leaving num_microbatches unspecified, so it will default to the size of the minibatch....

Please see the comment [here](https://github.com/tensorflow/privacy/blob/f0daaf085fb235f48bcb7d15561060af5b127ec7/tensorflow_privacy/privacy/optimizers/dp_optimizer.py#L59). Per-example gradient clipping is enabled by leaving num_microbatches=None.
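To make the num_microbatches=None behavior concrete, here is a minimal sketch in plain Python (not the library's actual implementation): with one example per microbatch, every individual gradient is clipped to l2_norm_clip before aggregation. Noise is omitted here for determinism; the real optimizer adds Gaussian noise after the clipped sum.

```python
import math

def clip_and_average(per_example_grads, l2_norm_clip):
    """Toy version of per-example clipping as in DP-SGD:
    rescale each example's gradient so its L2 norm is at most
    l2_norm_clip, then average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, l2_norm_clip / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n = len(clipped)
    dim = len(clipped[0])
    return [sum(g[i] for g in clipped) / n for i in range(dim)]

grads = [[3.0, 4.0], [0.5, 0.0]]   # per-example norms 5.0 and 0.5
avg = clip_and_average(grads, l2_norm_clip=1.0)
# the first gradient is scaled down to norm 1.0; the second is untouched
```

With num_microbatches set to some k > 1, the clipping would instead apply to the mean gradient of each group of batch_size/k examples.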

A few points. First, the generator never sees user data, so it shouldn't be trained with noise. (Not sure if you were doing that.) The discriminator sees both: fake data...

Apologies for this and thanks for reporting. The Google differential privacy library should be updated to include the new function in the next few days.

We are now using the new Google DP library and this should be fixed.

Yes, fixing TFP to work with Keras optimizers in TF 2.0 is a high-priority feature for us now, although I don't have a specific date I can promise you...

The relation of the clip to the actual gradients' magnitudes is an important one, but there are two slight problems with the algorithm you are describing. First, we can't use...

The first thing to establish is whether your l2_norm_clip and noise_multiplier are appropriately set. As a sanity check, try setting l2_norm_clip very large (1e3) and noise_multiplier very small (1e-8)....
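The point of that sanity check can be sketched with a toy model of the DP-SGD aggregation step (this is an illustration, not the library's code): a huge clip leaves every gradient untouched, and a tiny noise_multiplier makes the added noise negligible, so the result should match plain minibatch SGD. If training still fails in that regime, the problem isn't the privacy parameters.

```python
import random

def dp_sgd_grad(per_example_grads, l2_norm_clip, noise_multiplier, seed=0):
    """Toy DP-SGD aggregation: clip each example's gradient to
    l2_norm_clip, sum, add Gaussian noise with stddev
    l2_norm_clip * noise_multiplier, then divide by batch size."""
    rng = random.Random(seed)
    dim = len(per_example_grads[0])
    total = [0.0] * dim
    for g in per_example_grads:
        norm = sum(x * x for x in g) ** 0.5
        scale = min(1.0, l2_norm_clip / norm) if norm > 0 else 1.0
        for i in range(dim):
            total[i] += g[i] * scale
    stddev = l2_norm_clip * noise_multiplier
    noised = [t + rng.gauss(0.0, stddev) for t in total]
    return [x / len(per_example_grads) for x in noised]

grads = [[3.0, 4.0], [1.0, 2.0]]
plain = [(3.0 + 1.0) / 2, (4.0 + 2.0) / 2]   # ordinary averaged gradient
sane = dp_sgd_grad(grads, l2_norm_clip=1e3, noise_multiplier=1e-8)
# with a huge clip and tiny noise, the DP gradient is ~ the plain one
```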

l2_norm_clip is completely model-specific. If it is too low, your gradients will be clipped heavily, incurring bias. If it is too high, a huge amount of noise will be...
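Both failure modes above can be probed numerically. A quick sketch (plain Python, hypothetical gradients): with a clip far below the typical per-example norms, the averaged clipped gradient shrinks drastically relative to the true average (bias), while the noise stddev is l2_norm_clip * noise_multiplier, so an oversized clip inflates the noise instead.

```python
import math

def avg_clipped_norm(per_example_grads, l2_norm_clip):
    """Norm of the batch-averaged gradient after per-example
    clipping — a rough probe of how much bias a given clip causes."""
    clipped = []
    for g in per_example_grads:
        norm = math.hypot(*g)
        scale = min(1.0, l2_norm_clip / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n = len(clipped)
    dim = len(clipped[0])
    mean = [sum(g[i] for g in clipped) / n for i in range(dim)]
    return math.hypot(*mean)

grads = [[3.0, 4.0], [6.0, 8.0]]               # per-example norms 5 and 10
unbiased = avg_clipped_norm(grads, l2_norm_clip=1e9)  # effectively unclipped
biased = avg_clipped_norm(grads, l2_norm_clip=1.0)    # severely shrunk
# meanwhile noise stddev = l2_norm_clip * noise_multiplier, so a very
# large clip makes the added noise overwhelm the clipped signal instead
```

Comparing these two numbers against the distribution of your model's actual gradient norms is one practical way to pick a clip.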

There are several papers cited in the [accounting code](https://github.com/tensorflow/privacy/blob/master/tensorflow_privacy/privacy/analysis/rdp_accountant.py) that are refinements of the works you cited. What you are observing probably comes from those refined bounds.