
Poor performance while using DPAdam in WGAN-GP

Open Raymond-Xue opened this issue 5 years ago • 5 comments

I am trying to add privacy to my WGAN. With the Adam optimizer and no DP, my WGAN trains well. But after I change it to

```python
optimizer = DPAdamGaussianOptimizer(
    l2_norm_clip=3,
    noise_multiplier=0.5,
    num_microbatches=1,
    learning_rate=0.001)
```

the D_loss never goes below 1, and my output is not convincing either. Does anybody know how to fix this?

Raymond-Xue avatar Jun 30 '20 13:06 Raymond-Xue

The first thing to establish is whether your l2_norm_clip and noise_multiplier are appropriately set. As a sanity check, try setting l2_norm_clip very large (1e3) and noise_multiplier very small (1e-8). Then you should be able to replicate non-private training performance. Next, reduce l2_norm_clip until you see a decrease in performance. Finally, increase noise_multiplier.
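To make the sanity check concrete, here is a minimal numpy sketch of the clip-then-noise step that DP optimizers apply to each gradient (clip to l2_norm_clip, then add Gaussian noise with stddev noise_multiplier * l2_norm_clip). The function name and values are illustrative, not the library's internals:

```python
import numpy as np

def clip_and_noise(grad, l2_norm_clip, noise_multiplier, rng):
    """Clip a gradient to l2_norm_clip, then add Gaussian noise with
    stddev = noise_multiplier * l2_norm_clip (the DP-SGD recipe)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, l2_norm_clip / norm)
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip, size=grad.shape)
    return clipped + noise

rng = np.random.default_rng(0)
grad = np.array([3.0, 4.0])  # l2 norm 5

# Sanity-check regime: huge clip, tiny noise -> the gradient passes
# through almost unchanged, so training should match the non-private run.
passthrough = clip_and_noise(grad, l2_norm_clip=1e3, noise_multiplier=1e-8, rng=rng)

# Aggressive clipping: the output norm can never exceed l2_norm_clip.
clipped = clip_and_noise(grad, l2_norm_clip=1.0, noise_multiplier=0.0, rng=rng)
```

If the sanity-check regime does not recover non-private performance, the problem is elsewhere in the training setup, not in the privacy parameters.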

galenmandrew avatar Jun 30 '20 17:06 galenmandrew

Thanks, I will try this!

Raymond-Xue avatar Jun 30 '20 17:06 Raymond-Xue

> The first thing to establish is whether your l2_norm_clip and noise_multiplier are appropriately set. As a sanity check, try setting l2_norm_clip very large (1e3) and noise_multiplier very small (1e-8). Then you should be able to replicate non-private training performance. Next, reduce l2_norm_clip until you see a decrease in performance. Finally, increase noise_multiplier.

Usually what is the reasonable value for l2_norm_clip and noise_multiplier? Or is there a range for them?

Raymond-Xue avatar Jun 30 '20 18:06 Raymond-Xue

l2_norm_clip is completely model specific. If it is too low, your gradients will be clipped heavily, incurring bias. If it is too high, a huge amount of noise will be added to achieve privacy, which destroys model accuracy. A good rule of thumb is to compute some gradients with your model and use the median norm. There are also methods for doing such adaptation automatically, which are included in tensorflow_privacy as QuantileAdaptiveClipAverageQuery. To use this, you would change the dp_query created here.
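The median-norm rule of thumb takes only a few lines of numpy. The gradients below are random stand-ins; in practice you would gather real per-example gradients from a few warm-up batches of your (non-private) model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for per-example gradients collected over a few warm-up batches;
# replace with real gradients from your model.
example_grads = rng.normal(size=(256, 1000))

# Per-example l2 norms; the median is a reasonable starting l2_norm_clip.
norms = np.linalg.norm(example_grads, axis=1)
suggested_clip = float(np.median(norms))
```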

noise_multiplier controls the amount of noise, hence the privacy guarantee you will get. As a rule of thumb, it should be about 1.0 for good privacy. If you find model performance degrades with noise_multiplier less than one (hint: it probably will), you may be able to increase the number of microbatches and proportionally increase the noise_multiplier. For a good description of this methodology I recommend this paper.
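One way to see why more microbatches buys noise headroom: the Gaussian noise is added once per minibatch with stddev noise_multiplier * l2_norm_clip, while the clipped microbatch gradients are summed, so the signal grows with the number of microbatches. A back-of-the-envelope signal-to-noise calculation, under the simplifying assumption that every microbatch gradient sits at the clip norm:

```python
def signal_to_noise(num_microbatches, noise_multiplier, l2_norm_clip=1.0):
    """Ratio of the summed (clipped) gradient magnitude to the noise stddev,
    assuming each microbatch gradient is at the clip norm."""
    signal = num_microbatches * l2_norm_clip
    noise = noise_multiplier * l2_norm_clip
    return signal / noise

# Going from 1 microbatch at noise_multiplier 0.5 to 16 microbatches at 8.0
# leaves the signal-to-noise ratio unchanged, while the larger
# noise_multiplier yields a much stronger privacy guarantee.
snr_small = signal_to_noise(1, 0.5)
snr_large = signal_to_noise(16, 8.0)
```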

galenmandrew avatar Jun 30 '20 21:06 galenmandrew

> l2_norm_clip is completely model specific. If it is too low, your gradients will be clipped heavily, incurring bias. If it is too high, a huge amount of noise will be added to achieve privacy, which destroys model accuracy. A good rule of thumb is to compute some gradients with your model and use the median norm. There are also methods for doing such adaptation automatically, which are included in tensorflow_privacy as QuantileAdaptiveClipAverageQuery. To use this, you would change the dp_query created here.
>
> noise_multiplier controls the amount of noise, hence the privacy guarantee you will get. As a rule of thumb, it should be about 1.0 for good privacy. If you find model performance degrades with noise_multiplier less than one (hint: it probably will), you may be able to increase the number of microbatches and proportionally increase the noise_multiplier. For a good description of this methodology I recommend this paper.

Thanks a lot. If I increase the noise_multiplier to 1e-2, my performance gets worse. So I tried to increase the number of microbatches, but I got the same error as https://github.com/tensorflow/privacy/issues/102.

I currently use a minibatch size of 32 and num_microbatches=16.

The error is below:

```
ValueError: Dimension size must be evenly divisible by 16 but is 1 for 'training_7/TFOptimizer/Reshape' (op: 'Reshape') with input shapes: [], [2] and with input tensors computed as partial shapes: input[1] = [16,?].
```
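For reference, this Reshape error usually means the loss handed to the optimizer has already been reduced to a scalar, so it cannot be split into num_microbatches pieces; the DP optimizer expects a vector of per-example losses whose length is divisible by num_microbatches (in Keras this typically means building the loss with `reduction=tf.keras.losses.Reduction.NONE`). A numpy sketch of the shape requirement:

```python
import numpy as np

num_microbatches = 16
batch_size = 32

scalar_loss = np.array(0.7)        # a mean-reduced loss: shape ()
vector_loss = np.ones(batch_size)  # per-example losses: shape (32,)

# A scalar has no batch dimension to split into microbatches -- this is
# the shape mismatch behind "Dimension size must be evenly divisible
# by 16 but is 1".
try:
    np.reshape(scalar_loss, (num_microbatches, -1))
    scalar_reshapes = True
except ValueError:
    scalar_reshapes = False

# A length-32 vector splits cleanly into 16 microbatches of 2 examples.
per_microbatch = np.reshape(vector_loss, (num_microbatches, -1))
```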

Do you know how to fix it?

Thanks!!

Raymond-Xue avatar Jul 01 '20 12:07 Raymond-Xue