A technical deep dive into differential privacy: preventing models from memorising private data. Theory plus a Colab notebook using TensorFlow Privacy!
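As a taste of what the notebook covers, the core DP-SGD aggregation step (per-example gradient clipping followed by Gaussian noise) can be sketched in plain NumPy. This is an illustrative sketch, not the TensorFlow Privacy API; the function name and parameters below are assumptions chosen to mirror the paper's hyperparameters:

```python
import numpy as np

def dp_sgd_step(per_example_grads, l2_norm_clip=1.0, noise_multiplier=1.1,
                rng=np.random.default_rng(0)):
    """One DP-SGD aggregation step: clip each example's gradient to an
    L2 norm of l2_norm_clip, sum the clipped gradients, add Gaussian
    noise scaled by noise_multiplier, then average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clip bound,
        # so no single example can dominate the update.
        clipped.append(g * min(1.0, l2_norm_clip / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise standard deviation is noise_multiplier * l2_norm_clip.
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Toy batch of two per-example gradients.
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]
update = dp_sgd_step(grads)  # noisy averaged gradient for the SGD update
```

In the real library the same idea is wrapped in an optimizer (e.g. TensorFlow Privacy's DP Keras optimizers), with the moments accountant tracking the cumulative privacy loss across steps.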
Social:
Twitter: / mukulrathi_
My website (+ blog): mukulrathi.com/
My email newsletter: newsletter.mukulrathi.com
-----------------
Links:
DP-SGD paper: arxiv.org/pdf/1607.00133.pdf
TensorFlow Privacy Tutorials: github.com/tensorflow/privacy...
TensorFlow Privacy: github.com/tensorflow/privacy
PyTorch Opacus: github.com/pytorch/opacus
Moments accountant implementation: github.com/marcotcr/tf-models...
GPT-2 memorises private data: ai.googleblog.com/2020/12/pri...
Netflix dataset deanonymised: www.wired.com/2007/12/why-ano...
Netflix deanonymisation paper: www.cs.utexas.edu/~shmat/shma...
Strava heatmap leaks: www.zdnet.com/article/strava-...
------------------------------------------------------------------------------------
Timestamps:
00:00 Introduction
00:32 Overview
01:26 Why Anonymisation Isn't Enough
02:38 Intuition for Differential Privacy
03:12 Example: Predict whether Bob has Cancer
04:11 Privacy Intuition
04:51 Privacy Loss Definition
05:26 Definition of Differential Privacy
06:40 Role of Noise in DP
07:08 Privacy Amplification Theorem
07:26 Fundamental Law of Information Recovery
07:51 Composition in DP
08:19 DP-SGD
09:20 Moments Accountant
12:55 Google Colab Notebook
14:39 Limitations of DP-SGD
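For reference, the definition of (ε, δ)-differential privacy covered at 05:26: a randomised mechanism M is (ε, δ)-differentially private if, for all pairs of datasets D and D' differing in one record and all output sets S,

```latex
\Pr[M(D) \in S] \le e^{\varepsilon} \Pr[M(D') \in S] + \delta
```

Smaller ε means the output distributions on neighbouring datasets are closer, so less can be learned about any individual record; δ bounds the probability of the guarantee failing outright.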
---------------------------------------------------------------------------
Music: Coffee Break by Pyrosion is licensed under a Creative Commons License.
creativecommons.org/licenses/...
/ pyrosion
Support by RFM - NCM: bit.ly/2xGHypM