RStudio AI Blog: Differential Privacy with TensorFlow

What could possibly be treacherous about summary statistics?

The well-known cat overweight study (X. et al., 2019) established that as of May 1st, 2019, 32 of 101 house cats kept in Y., a cozy Bavarian village, were overweight. Even though I’d be curious to know whether my aunt G.’s cat (a cheerful resident of that village) has been fed too many treats and has accumulated some extra kilos, the study results don’t tell us.

Then, six months later, out comes a brand new study, eager to earn scientific fame. The authors report that of 100 cats living in Y., 50 are striped, 31 are black, and the remaining ones are white; the 31 black ones are all overweight. Now, I happen to know that, with one exception, no new cats joined the community, and no cats left. But, my aunt moved away to a retirement home, chosen in fact for the possibility of bringing one’s cat.

What have I just learned? My aunt’s cat is overweight. (Or was, at least, before they moved to the retirement home.)

Even though neither of the studies reported anything but summary statistics, I was able to infer individual-level facts by connecting both studies and adding in another piece of information I had access to.

In reality, mechanisms like the above – technically called linkage – have been shown to lead to privacy breaches many times, thus defeating the purpose of database anonymization, seen as a panacea in many organizations. A more promising alternative is offered by the concept of differential privacy.

Differential Privacy

In differential privacy (DP) (Dwork et al. 2006), privacy is not a property of what’s in the database; it’s a property of how query results are delivered.

Intuitively paraphrasing results from a field where results are communicated as theorems and proofs (Dwork 2006; Dwork and Roth 2014), the only achievable (in a lossy but quantifiable way) objective is that from queries to a database, nothing more should be learned about an individual in that database than if they hadn’t been in there at all (Wood et al. 2018).

What this statement does is caution against overly high expectations: Even if query results are reported in a DP way (we’ll see how that goes in a second), they allow some probabilistic inferences about individuals in the respective population. (Otherwise, why conduct studies at all.)

So how is DP being achieved? The main ingredient is noise added to the results of a query. In the above cat example, instead of exact numbers we’d report approximate ones: “Of ~ 100 cats living in Y., about 30 are overweight….” If this is done for both of the above studies, no inference will be possible about aunt G.’s cat.

Even with random noise added to query results though, answers to repeated queries will leak information. So in reality, there is a privacy budget that can be tracked, and may be used up in the course of consecutive queries.

This is reflected in the formal definition of DP. The idea is that queries to two databases differing in at most one element should give basically the same result. Put formally (Dwork 2006):

A randomized function \(\mathcal{K}\) gives \(\epsilon\)-differential privacy if for all data sets D1 and D2 differing on at most one element, and all \(S \subseteq \mathrm{Range}(\mathcal{K})\),

\[ Pr[\mathcal{K}(D1) \in S] \leq \exp(\epsilon) \times Pr[\mathcal{K}(D2) \in S] \]

This \(\epsilon\)-differential privacy is additive: If one query is \(\epsilon\)-DP at a value of 0.01, and another one at 0.03, together they will be \(\epsilon\)-differentially private at a value of 0.04.

If \(\epsilon\)-DP is to be achieved via adding noise, how exactly should this be done? Here, several mechanisms exist; the basic, intuitively plausible principle though is that the amount of noise should be calibrated to the target function’s sensitivity, defined as the maximum \(\ell_1\) norm of the difference of function values computed on all pairs of datasets differing in a single example (Dwork 2006):

\[ \Delta f = \max_{D1, D2} \| f(D1) - f(D2) \|_1 \]
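As a toy illustration of that calibration (not related to the heart rate example below; the function name and all numbers are made up), the classic Laplace mechanism adds noise with scale \(\Delta f / \epsilon\) to a query result:

# toy sketch: Laplace mechanism for a counting query (sensitivity 1)
laplace_mechanism <- function(true_value, sensitivity = 1, epsilon = 0.1) {
  scale <- sensitivity / epsilon
  # a Laplace sample is the difference of two i.i.d. exponential samples
  noise <- rexp(1, rate = 1 / scale) - rexp(1, rate = 1 / scale)
  true_value + noise
}

# "how many cats in Y. are overweight?", answered with DP noise
laplace_mechanism(32)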

So far, we’ve been talking about databases and datasets. How does this apply to machine and/or deep learning?

TensorFlow Privacy

Applying DP to deep learning, we want a model’s parameters to wind up “essentially the same” whether trained on a dataset including that cute little kitty or not. TensorFlow (TF) Privacy (Abadi et al. 2016), a library built on top of TF, makes it easy for users to add privacy guarantees to their models – easy, that is, from a technical point of view. (As with life overall, the hard decisions on how much of an asset we should be striving for, and how to trade off one asset (here: privacy) against another (here: model performance), remain to be made by each of us ourselves.)

Concretely, about all we have to do is exchange the optimizer we were using for one provided by TF Privacy. TF Privacy optimizers wrap the original TF ones, adding two actions:

  1. To honor the principle that each individual training example should have just moderate influence on optimization, gradients are clipped (to a degree specifiable by the user). In contrast to the familiar gradient clipping sometimes used to prevent exploding gradients, what is clipped here is the gradient contribution per user.

  2. Before updating the parameters, noise is added to the gradients, thus implementing the main idea of \(\epsilon\)-DP algorithms.

In addition to \(\epsilon\)-DP optimization, TF Privacy provides privacy accounting. We’ll see all this applied after an introduction to our example dataset.
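Before we get there, here is a tiny, self-contained toy sketch of those two actions in isolation – plain R on a one-parameter regression problem, no TF Privacy involved, with all names and numbers chosen purely for illustration:

# toy DP-SGD: per-example clipping plus Gaussian noise on a linear model
set.seed(42)
x <- runif(256)
y <- 3 * x + rnorm(256, sd = 0.1)

l2_norm_clip     <- 1
noise_multiplier <- 1.1
learning_rate    <- 0.1
batch_size       <- 32
theta            <- 0

for (step in 1:300) {
  idx <- sample(length(x), batch_size)              # draw a minibatch
  g   <- 2 * (theta * x[idx] - y[idx]) * x[idx]     # per-example gradients of the squared error
  g   <- g * pmin(1, l2_norm_clip / abs(g))         # clip each gradient to norm <= l2_norm_clip
  g_noisy <- mean(g) +
    rnorm(1, sd = noise_multiplier * l2_norm_clip / batch_size)  # add calibrated Gaussian noise
  theta <- theta - learning_rate * g_noisy          # ordinary gradient step on the noised average
}
theta  # should end up close to the true slope of 3, despite clipping and noise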

Dataset

The dataset we’ll be working with (Reiss et al. 2019), downloadable from the UCI Machine Learning Repository, is dedicated to heart rate estimation via photoplethysmography.
Photoplethysmography (PPG) is an optical method of measuring blood volume changes in the microvascular bed of tissue, which are indicative of cardiovascular activity. More precisely,

The PPG waveform comprises a pulsatile (‘AC’) physiological waveform attributed to cardiac synchronous changes in the blood volume with each heart beat, and is superimposed on a slowly varying (‘DC’) baseline with various lower frequency components attributed to respiration, sympathetic nervous system activity and thermoregulation. (Allen 2007)

In this dataset, heart rate determined from EKG provides the ground truth; predictors were obtained from two commercial devices, comprising PPG, electrodermal activity, and body temperature as well as accelerometer data. Additionally, a wealth of contextual data is available, ranging from age, height, and weight to fitness level and type of activity performed.

With this data, it’s easy to imagine a host of interesting data-analysis questions; here, however, our focus is on differential privacy, so we’ll keep the setup simple. We will try to predict heart rate given the physiological measurements from one of the two devices, the Empatica E4. Also, we’ll zoom in on a single subject, S1, who will provide us with 4603 instances of two-second heart rate values.

As usual, we start with the required libraries; unusually though, as of this writing we need to disable version 2 behavior in TensorFlow, as TensorFlow Privacy does not yet fully work with TF 2. (Hopefully, for many future readers, this won’t be the case anymore.)
Note how TF Privacy – a Python library – is imported via reticulate.
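A minimal setup could look like the following sketch (assuming keras, tfdatasets and reticulate are installed, and the tensorflow-privacy Python package is available in the Python environment reticulate uses):

library(tensorflow)
# TF Privacy does not yet fully support TF 2, so fall back to 1.x behavior
tf$compat$v1$disable_v2_behavior()

library(keras)
library(tfdatasets)
library(reticulate)

# TF Privacy is a Python library; import it via reticulate
priv <- import("tensorflow_privacy")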

From the downloaded archive, we just need S1.pkl, saved in a native Python serialization format, but nicely loadable using reticulate:
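One way to do this (a sketch assuming S1.pkl has been extracted to the working directory) is via reticulate’s py_load_object():

# load the pickled Python object and convert it to an R object
s1 <- py_load_object("S1.pkl")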

s1 points to an R list comprising elements of different length – the various physical/physiological signals were sampled at different frequencies:

### predictors ###

# accelerometer data - sampling freq. 32 Hz
# also note that these are 3 "columns", one each for the x, y, and z axes
s1$signal$wrist$ACC %>% nrow() # 294784
# PPG data - sampling freq. 64 Hz
s1$signal$wrist$BVP %>% nrow() # 589568
# electrodermal activity data - sampling freq. 4 Hz
s1$signal$wrist$EDA %>% nrow() # 36848
# body temperature data - sampling freq. 4 Hz
s1$signal$wrist$TEMP %>% nrow() # 36848

### target ###

# EKG data - provided in already averaged form, at frequency 0.5 Hz
s1$label %>% nrow() # 4603

In light of the different sampling frequencies, our tfdatasets pipeline will have to do some moving averaging, paralleling that applied to construct the ground truth data.

Preprocessing pipeline

As every “column” is of different length and resolution, we build up the final dataset piece by piece.
The following function serves two purposes:

  1. compute running averages over differently sized windows, thus downsampling to 0.5 Hz for every modality
  2. transform the data to the (num_timesteps, num_features) format that will be required by the 1d-convnet we’re going to use soon

average_and_make_sequences <-
  function(data, window_size_avg, num_timesteps) {
    data %>% k_cast("float32") %>%
      # create an initial tf.data dataset to work with
      tensor_slices_dataset() %>%
      # use dataset_window to compute the running average of size window_size_avg
      dataset_window(window_size_avg) %>%
      dataset_flat_map(function (x)
        x$batch(as.integer(window_size_avg), drop_remainder = TRUE)) %>%
      dataset_map(function(x)
        tf$reduce_mean(x, axis = 0L)) %>%
      # use dataset_window to create a "timesteps" dimension of length num_timesteps
      dataset_window(num_timesteps, shift = 1) %>%
      dataset_flat_map(function(x)
        x$batch(as.integer(num_timesteps), drop_remainder = TRUE))
  }

We’ll call this function for every column separately. Not all columns are exactly the same length (in terms of time), thus it is safest to cut off individual observations that surpass a common length (dictated by the target variable):

label <- s1$label %>% matrix() # 4603 observations, each spanning 2 secs
n_total <- 4603 # keep track of this

# keep matching numbers of observations of the predictors
acc <- s1$signal$wrist$ACC[1:(n_total * 64), ] # 32 Hz, 3 columns
bvp <- s1$signal$wrist$BVP[1:(n_total * 128)] %>% matrix() # 64 Hz
eda <- s1$signal$wrist$EDA[1:(n_total * 8)] %>% matrix() # 4 Hz
temp <- s1$signal$wrist$TEMP[1:(n_total * 8)] %>% matrix() # 4 Hz

Some more housekeeping. Both the training and the test set need to have a timesteps dimension, as usual with architectures that work on sequential data (1-d convnets and RNNs). To make sure there is no overlap between respective timesteps, we split the data “up front” and construct both sets separately. We’ll use the first 4000 observations for training.

Housekeeping-wise, we also keep track of the actual training and test set cardinalities.
The target variable will be matched to the last of any twelve timesteps, so we end up throwing away the first eleven ground truth measurements for each of the training and test datasets.
(We don’t have complete sequences building up to them.)

# number of timesteps used in the second dimension
num_timesteps <- 12

# number of observations to be used for the training set
# a round number for easier checking!
train_max <- 4000

# also keep track of the actual number of training and test observations
n_train <- train_max - num_timesteps + 1
n_test <- n_total - train_max - num_timesteps + 1

Here, then, are the basic building blocks that will go into the final training and test datasets.

acc_train <-
  average_and_make_sequences(acc[1:(train_max * 64), ], 64, num_timesteps)
bvp_train <-
  average_and_make_sequences(bvp[1:(train_max * 128), , drop = FALSE], 128, num_timesteps)
eda_train <-
  average_and_make_sequences(eda[1:(train_max * 8), , drop = FALSE], 8, num_timesteps)
temp_train <-
  average_and_make_sequences(temp[1:(train_max * 8), , drop = FALSE], 8, num_timesteps)


acc_test <-
  average_and_make_sequences(acc[(train_max * 64 + 1):nrow(acc), ], 64, num_timesteps)
bvp_test <-
  average_and_make_sequences(bvp[(train_max * 128 + 1):nrow(bvp), , drop = FALSE], 128, num_timesteps)
eda_test <-
  average_and_make_sequences(eda[(train_max * 8 + 1):nrow(eda), , drop = FALSE], 8, num_timesteps)
temp_test <-
  average_and_make_sequences(temp[(train_max * 8 + 1):nrow(temp), , drop = FALSE], 8, num_timesteps)

Now put all predictors together:

# all predictors
x_train <- zip_datasets(acc_train, bvp_train, eda_train, temp_train) %>%
  dataset_map(function(...)
    tf$concat(list(...), axis = 1L))

x_test <- zip_datasets(acc_test, bvp_test, eda_test, temp_test) %>%
  dataset_map(function(...)
    tf$concat(list(...), axis = 1L))

On the ground truth side, as alluded to before, we omit the first eleven values in each case:

y_train <- tensor_slices_dataset(label[num_timesteps:train_max] %>% k_cast("float32"))

y_test <- tensor_slices_dataset(label[(train_max + num_timesteps):nrow(label)] %>% k_cast("float32"))

Zip predictors and targets together, configure shuffling/batching, and the datasets are complete:

ds_train <- zip_datasets(x_train, y_train)
ds_test <- zip_datasets(x_test, y_test)

batch_size <- 32

ds_train <- ds_train %>% 
  dataset_shuffle(n_train) %>%
  # dataset_repeat is needed because of the pre-TF 2 (non-eager) style
  # hopefully at a later time, the code can run eagerly and this is no longer needed
  dataset_repeat() %>%
  dataset_batch(batch_size, drop_remainder = TRUE)

ds_test <- ds_test %>%
  # see above regarding dataset_repeat
  dataset_repeat() %>%
  dataset_batch(batch_size)

With data manipulations as complicated as the above, it’s always worthwhile to check some pipeline outputs. We can do that using the usual reticulate::as_iterator magic, provided that for this test run, we don’t disable V2 behavior. (Just restart the R session between a “pipeline checking” and the later modeling runs.)

Here, in any case, is the relevant code:

# this piece needs TF 2 behavior enabled
# run after restarting R and commenting the tf$compat$v1$disable_v2_behavior() line
# then, to fit the DP model, undo the comment, restart R, and rerun
iter <- as_iterator(ds_test) # or any other dataset you want to check
while (TRUE) {
 item <- iter_next(iter)
 if (is.null(item)) break
 print(item)
}

With that, we’re ready to create the model.

Model

The model will be a rather simple convnet. The main difference between standard and DP training lies in the optimization procedure; thus, it’s straightforward to first establish a non-DP baseline. Later, when switching to DP, we’ll be able to reuse almost everything.

Here, then, is the model definition valid for both cases:

model <- keras_model_sequential() %>%
  layer_conv_1d(
      filters = 32,
      kernel_size = 3,
      activation = "relu"
    ) %>%
  layer_batch_normalization() %>%
  layer_conv_1d(
      filters = 64,
      kernel_size = 5,
      activation = "relu"
    ) %>%
  layer_batch_normalization() %>%
  layer_conv_1d(
      filters = 128,
      kernel_size = 5,
      activation = "relu"
    ) %>%
  layer_batch_normalization() %>%
  layer_global_average_pooling_1d() %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 1)

We train the model with mean squared error loss.

optimizer <- optimizer_adam()
model %>% compile(loss = "mse", optimizer = optimizer, metrics = metric_mean_absolute_error)

num_epochs <- 20
history <- model %>% fit(
  ds_train, 
  steps_per_epoch = n_train/batch_size,
  validation_data = ds_test,
  epochs = num_epochs,
  validation_steps = n_test/batch_size)

Baseline results

After 20 epochs, mean absolute error is around 6 bpm:


Figure 1: Training history without differential privacy.

Just to put this in context: the MAE reported for subject S1 in the paper (Reiss et al. 2019) – based on a higher-capacity network, extensive hyperparameter tuning, and of course, training on the complete dataset – amounts to 8.45 bpm on average; so our setup seems to be sound.

Now we’ll make this differentially private.

DP training

Instead of the plain Adam optimizer, we use the corresponding TF Privacy wrapper, DPAdamGaussianOptimizer.

We need to tell it how aggressively gradients should be clipped (l2_norm_clip) and how much noise to add (noise_multiplier). Furthermore, we define the learning rate (there is no default), going for 10 times the default 0.001 based on preliminary experiments.

There is an additional parameter, num_microbatches, that could be used to speed up training (McMahan and Andrew 2018), but, as training duration is not an issue here, we just set it equal to batch_size.

The values for l2_norm_clip and noise_multiplier chosen here follow those used in the tutorials in the TF Privacy repo.

Nicely, TF Privacy comes with a script that allows one to compute the attained \(\epsilon\) beforehand, based on the number of training examples, batch_size, noise_multiplier, and the number of training epochs.

Calling that script, and assuming we train for 20 epochs here as well,

python compute_dp_sgd_privacy.py --N=3989 --batch_size=32 --noise_multiplier=1.1 --epochs=20

this is what we get back:

DP-SGD with sampling rate = 0.802% and noise_multiplier = 1.1 iterated over
2494 steps satisfies differential privacy with eps = 2.73 and delta = 1e-06.

How good is a value of 2.73? Citing the TF Privacy authors:

\(\epsilon\) gives a ceiling on how much the probability of a particular output can increase by including (or removing) a single training example. We usually want it to be a small constant (less than 10, or, for more stringent privacy guarantees, less than 1). However, this is only an upper bound, and a large value of epsilon may still mean good practical privacy.

Obviously, choice of \(\epsilon\) is a (difficult) matter unto itself, and not something we can elaborate on in a post dedicated to the technical aspects of DP with TensorFlow.

How would \(\epsilon\) change if we trained for 50 epochs instead? (This is actually what we’ll do, seeing that training results on the test set tend to jump around quite a bit.)

python compute_dp_sgd_privacy.py --N=3989 --batch_size=32 --noise_multiplier=1.1 --epochs=50
DP-SGD with sampling rate = 0.802% and noise_multiplier = 1.1 iterated over
6233 steps satisfies differential privacy with eps = 4.25 and delta = 1e-06.

Having talked about its parameters, let’s now define the DP optimizer:

l2_norm_clip <- 1
noise_multiplier <- 1.1
num_microbatches <- k_cast(batch_size, "int32")
learning_rate <- 0.01

optimizer <- priv$DPAdamGaussianOptimizer(
  l2_norm_clip = l2_norm_clip,
  noise_multiplier = noise_multiplier,
  num_microbatches = num_microbatches,
  learning_rate = learning_rate
)

There is one other change to make for DP. As gradients are clipped on a per-sample basis, the optimizer needs to work with per-sample losses as well:

loss <- tf$keras$losses$MeanSquaredError(reduction = tf$keras$losses$Reduction$NONE)
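For completeness, compiling and fitting could then look like the following sketch (assuming a freshly defined model, e.g. after restarting R as described above; the switch to 50 epochs is the only substantive change):

model %>% compile(loss = loss, optimizer = optimizer, metrics = metric_mean_absolute_error)

num_epochs <- 50
history <- model %>% fit(
  ds_train,
  steps_per_epoch = n_train/batch_size,
  validation_data = ds_test,
  epochs = num_epochs,
  validation_steps = n_test/batch_size)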

Everything else stays the same. Training history (like we said above, now spanning 50 epochs) looks a lot more turbulent, with MAEs on the test set fluctuating between 8 and 20 over the last 10 training epochs:


Figure 2: Training history with differential privacy.

In addition to the above-mentioned command line script, we can also compute \(\epsilon\) as part of the training code. Let’s double-check:

# probability of an individual training point being included in a minibatch
sampling_probability <- batch_size / n_train

# number of steps the optimizer takes over the training data
# (num_epochs is 50 for the DP run)
steps <- num_epochs * n_train / batch_size

# required for reasons related to how TF Privacy computes privacy
# this actually is Renyi Differential Privacy: https://arxiv.org/abs/1702.07476
# we don't go into details here and use the same values as the command line script
orders <- c((1 + (1:99)/10), 12:63)

rdp <- priv$privacy$analysis$rdp_accountant$compute_rdp(
  q = sampling_probability,
  noise_multiplier = noise_multiplier,
  steps = steps,
  orders = orders)

priv$privacy$analysis$rdp_accountant$get_privacy_spent(
  orders, rdp, target_delta = 1e-6)[[1]]
[1] 4.249645

So, we do get the same result.

Conclusion

This post showed how to convert a normal deep learning procedure into an \(\epsilon\)-differentially private one. Necessarily, a blog post has to leave open questions. In the present case, some possible questions could be answered by straightforward experimentation:

  • How well do other optimizers work in this setting?
  • How does the learning rate affect privacy and performance?
  • What happens if we train for a lot longer?

Others sound more like they could lead to a research project:

  • When model performance – and thus, model parameters – fluctuate that much, how do we decide when to stop training? Is stopping at high model performance cheating? Is model averaging a sound solution?
  • How good, really, is any one \(\epsilon\)?

Finally, yet others go beyond the realms of experimentation as well as mathematics:

  • How do we trade off \(\epsilon\)-DP against model performance – for different applications, with different types of data, in different societal contexts?
  • Assuming we “have” \(\epsilon\)-DP, what might we still be missing?

With questions like these – and more, probably – to ponder: Thanks for reading and a happy new year!

Abadi, Martin, Andy Chu, Ian Goodfellow, Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. “Deep Learning with Differential Privacy.” In 23rd ACM Conference on Computer and Communications Security (ACM CCS), 308–18. https://arxiv.org/abs/1607.00133.
Allen, John. 2007. “Photoplethysmography and Its Application in Clinical Physiological Measurement.” Physiological Measurement 28 (3): R1–39. https://doi.org/10.1088/0967-3334/28/3/r01.
Dwork, Cynthia. 2006. “Differential Privacy.” In 33rd International Colloquium on Automata, Languages and Programming, Part II (ICALP 2006), 4052:1–12. Lecture Notes in Computer Science. Springer Verlag. https://www.microsoft.com/en-us/research/publication/differential-privacy/.
Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. “Calibrating Noise to Sensitivity in Private Data Analysis.” In Proceedings of the Third Conference on Theory of Cryptography, 265–84. TCC’06. Berlin, Heidelberg: Springer-Verlag. https://doi.org/10.1007/11681878_14.
Dwork, Cynthia, and Aaron Roth. 2014. “The Algorithmic Foundations of Differential Privacy.” Found. Trends Theor. Comput. Sci. 9 (3–4): 211–407. https://doi.org/10.1561/0400000042.
McMahan, H. Brendan, and Galen Andrew. 2018. “A General Approach to Adding Differential Privacy to Iterative Training Procedures.” CoRR abs/1812.06210. http://arxiv.org/abs/1812.06210.
Reiss, Attila, Ina Indlekofer, Philip Schmidt, and Kristof Van Laerhoven. 2019. “Deep PPG: Large-Scale Heart Rate Estimation with Convolutional Neural Networks.” Sensors 19 (14): 3079. https://doi.org/10.3390/s19143079.
Wood, Alexandra, Micah Altman, Aaron Bembenek, Mark Bun, Marco Gaboardi, James Honaker, Kobbi Nissim, David O’Brien, Thomas Steinke, and Salil Vadhan. 2018. “Differential Privacy: A Primer for a Non-Technical Audience.” SSRN Electronic Journal, January. https://doi.org/10.2139/ssrn.3338027.
