exchanging values on certain variables between participants
types of swapping
record swapping (also known as data swapping): for categorical variables; swapping the values on the variables (e.g., gender, country of residence); t-order equivalence means that the frequency tables for t variables are unchanged (e.g., 1-order equivalence: same number of males and females as before; 2-order equivalence: same number of males and females from Switzerland and from Germany, respectively)
rank swapping: for continuous variables; swapping values only within certain range of the rank to limit distortion of data
advantages
removes relationship between record and individual
can be applied to one or more sensitive variables without disturbing the non-sensitive variables
provides protection to rare and unique values
not limited to the type of variable
disadvantages
may produce cases with unusual or implausible combinations of values
non-random (targeted) swapping requires additional manual work
can severely distort statistics for subgroups
does not protect against attribute disclosure
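A minimal sketch of rank swapping, written in Python for illustration (function name, window size, and data are assumptions, not from any library): values are only exchanged between records whose ranks are close, so distortion stays bounded while the multiset of values is preserved.

```python
import random

def rank_swap(values, max_rank_dist=2, seed=42):
    """Swap each value with a partner whose rank differs by at most max_rank_dist."""
    rng = random.Random(seed)
    order = sorted(range(len(values)), key=lambda i: values[i])  # indices sorted by value
    swapped = list(values)
    for pos in range(0, len(order) - 1, 2):
        # pick a swap partner within the allowed rank window
        partner = min(pos + rng.randint(1, max_rank_dist), len(order) - 1)
        i, j = order[pos], order[partner]
        swapped[i], swapped[j] = swapped[j], swapped[i]
    return swapped

incomes = [30_000, 32_000, 35_000, 40_000, 41_000, 90_000]
perturbed = rank_swap(incomes)
# only swaps are performed, so marginal statistics (mean, quantiles) are unchanged
```

Because the output is a permutation of the input, any statistic that depends only on the value distribution is preserved exactly; only the link between value and record is broken.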
Re-sampling
idea: create averages of independent samples
draw t independent bootstrap samples of size n
sort each sample; the masked value for the record of rank r is the average, across the t samples, of the values at rank r
check for correct understanding
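A hedged sketch of one common reading of this method (rank-wise averaging of sorted bootstrap samples; the function name and parameters are illustrative, and this interpretation should be checked against the source literature):

```python
import random

def resample_mask(values, t=50, seed=5):
    """Draw t independent bootstrap samples, sort each, and replace the
    record holding rank r with the average of the rank-r values across samples."""
    rng = random.Random(seed)
    n = len(values)
    sorted_samples = [sorted(rng.choice(values) for _ in range(n)) for _ in range(t)]
    averaged = [sum(s[r] for s in sorted_samples) / t for r in range(n)]
    # map each rank-wise average back to the record holding that rank
    rank_of = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for r, i in enumerate(rank_of):
        out[i] = averaged[r]
    return out

incomes = [30_000, 45_000, 52_000, 61_000, 75_000]
masked = resample_mask(incomes)
```

Each masked value is an average of observed values, so every output lies within the original range and the ordering of records is preserved.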
Noise
also known as randomization
idea: add a (more or less) random value (additive noise) or multiply by a (more or less) random value (multiplicative noise)
noise can be correlated or uncorrelated with values
transformations after adding the noise are possible
differential privacy methods usually rely on adding noise
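Both variants can be sketched in a few lines of Python (function and parameter names are illustrative; additive noise is scaled to a fraction of the variable's standard deviation, multiplicative noise is centred at 1):

```python
import random

def add_noise(values, noise_level=0.1, multiplicative=False, seed=1):
    """Additive: x + e with e ~ N(0, noise_level * sd(x)).
    Multiplicative: x * e with e ~ N(1, noise_level)."""
    rng = random.Random(seed)
    mean = sum(values) / len(values)
    sd = (sum((x - mean) ** 2 for x in values) / (len(values) - 1)) ** 0.5
    if multiplicative:
        return [x * rng.gauss(1, noise_level) for x in values]
    return [x + rng.gauss(0, noise_level * sd) for x in values]

incomes = [30_000, 45_000, 52_000, 61_000, 75_000]
noisy = add_noise(incomes, noise_level=0.1)
```

Uncorrelated noise like this inflates the variance of the masked variable; correlated-noise methods correct for that, which is one reason dedicated tools are preferable to hand-rolled noise in practice.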
Tip: Differential Privacy
adds noise to data, leading to plausible deniability for any individual
results of analyses stay approximately the same despite the added noise
results of analyses stay approximately the same whether or not any one person is included in the data
diffpriv: An R Package for Easy Differential Privacy
“Even if the attacker already suspects X is the only possible HIV case in the dataset, the data release should not confirm or deny that suspicion.”
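The standard way to achieve this for count queries is the Laplace mechanism; a minimal Python sketch (epsilon, the sensitivity of 1, and the HIV-count example are illustrative, and this is not the `diffpriv` API):

```python
import math
import random

def laplace_count(true_count, epsilon=1.0, sensitivity=1, seed=7):
    """Release a count with Laplace noise of scale sensitivity / epsilon.
    Adding or removing one person changes a count by at most 1 (its
    sensitivity), so the noisy release barely depends on any individual."""
    rng = random.Random(seed)
    u = rng.random() - 0.5  # Uniform(-0.5, 0.5)
    scale = sensitivity / epsilon
    # inverse-CDF sampling from the Laplace(0, scale) distribution
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

hiv_cases = 1  # the count an attacker wants to confirm
released = laplace_count(hiv_cases, epsilon=0.5)
```

Whether the true count is 0 or 1, the released value is plausible under either, which is exactly the plausible deniability the quote describes.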
Microaggregation
idea: create groups of similar values and change these to an aggregate value (e.g., mean, median)
works better when groups are more homogeneous
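A univariate sketch of the idea in Python (fixed-size groups over sorted values; sdcMicro's MDAV works multivariately, so this is a simplification, and all names are illustrative):

```python
def microaggregate(values, k=3):
    """Univariate microaggregation: sort, split into groups of at least k
    consecutive values, replace each value by its group mean."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    if len(groups) > 1 and len(groups[-1]) < k:  # fold a short tail into the previous group
        groups[-2].extend(groups.pop())
    out = list(values)
    for g in groups:
        mean = sum(values[i] for i in g) / len(g)
        for i in g:
            out[i] = mean
    return out

incomes = [30, 31, 35, 50, 52, 53, 90]
aggregated = microaggregate(incomes, k=3)
```

Replacing values by group means preserves the overall total exactly, while every record now shares its value with at least k - 1 others.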
Rounding
round values to a fixed set of rounding points (e.g., multiples of a base value b)
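For multiples of a base value, this is a one-liner (base and data are illustrative):

```python
def round_to_base(values, base=1000):
    """Round each value to the nearest multiple of base."""
    return [base * round(v / base) for v in values]

rounded = round_to_base([30_250, 45_980, 52_400], base=1000)
# → [30000, 46000, 52000]
```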
PRAM
Post RAndomisation Method
values on a categorical variable are recoded with a certain probability
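A simplified Python sketch of PRAM (real implementations use a full Markov transition matrix between categories; here a single keep-probability with uniform replacement stands in for it, and all names are assumptions):

```python
import random

def pram(categories, p_keep=0.8, seed=3):
    """Keep each category with probability p_keep; otherwise replace it
    with a random other category from the observed domain."""
    rng = random.Random(seed)
    domain = sorted(set(categories))
    out = []
    for c in categories:
        if rng.random() < p_keep or len(domain) == 1:
            out.append(c)
        else:
            out.append(rng.choice([d for d in domain if d != c]))
    return out

countries = ["CH", "DE", "CH", "AT", "DE", "CH"]
recoded = pram(countries, p_keep=0.8)
```

Because the transition probabilities are known, analysts can correct aggregate frequency tables for the randomisation afterwards.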
Shuffling
variation of swapping
generate new synthetic sensitive values with similar distributional properties
reorder the original sensitive values so that their ranks match the ranks of the newly generated values
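The two steps above can be sketched in Python (a simple normal fit stands in for a proper synthetic-data model, which is an assumption; function names are illustrative):

```python
import random

def shuffle_by_rank(original, seed=11):
    """Simulate synthetic values with similar distributional properties,
    then reorder the ORIGINAL values so their ranks follow the synthetic ranks."""
    rng = random.Random(seed)
    mean = sum(original) / len(original)
    sd = (sum((x - mean) ** 2 for x in original) / (len(original) - 1)) ** 0.5
    synthetic = [rng.gauss(mean, sd) for _ in original]
    # indices of synthetic values from smallest to largest
    synth_rank = sorted(range(len(synthetic)), key=lambda i: synthetic[i])
    sorted_orig = sorted(original)
    shuffled = [0.0] * len(original)
    for rank, i in enumerate(synth_rank):
        shuffled[i] = sorted_orig[rank]  # record i gets the original value of matching rank
    return shuffled

incomes = [30_000, 45_000, 52_000, 61_000, 75_000]
shuffled = shuffle_by_rank(incomes)
```

As with swapping, the released values are all genuine original values, so univariate statistics are preserved exactly; only their assignment to records changes.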
Keeping Utility
Explain how to ensure that the statistics are the same (or reference utility section)
Pro and Contra of Using Perturbative Techniques
Add pro and con list
danger of reverse-engineering the perturbation technique applied
Exercise
Perturbative techniques modify values rather than removing or generalising them. The data still looks complete and realistic, but individual values have been altered enough that re-identification becomes unreliable. Two common methods:
Microaggregation — records are grouped by similarity and values within each group are replaced by the group mean. Individual values are obscured but aggregate statistics are preserved.
Adding noise — small random amounts are added to numeric values. The distribution stays plausible but exact values are no longer trustworthy.
Both are available directly in sdcMicro.
Exercise: Applying Perturbative Techniques
Continue working with sdc_nonpert from the previous exercise.
Apply microaggregation to income using the default method ("mdav"). Use a group size of k = 5.
Add noise to income as an alternative — apply additive noise with a noise level of 0.1 (10% of the standard deviation). Compare the result to microaggregation: which distorts the data more?
Compare the information loss reported by sdcMicro after each step. Which method better balances risk and utility for this variable?
Tip
You cannot undo perturbation steps within the same sdc object. Create a fresh copy of sdc_nonpert before trying the second method so you can compare them side-by-side.
Note: Solution
Step 1 – Microaggregation
sdc_micro <- microaggregation(
  obj = sdc_nonpert,
  variables = "income",
  aggr = 5,        # group size k
  method = "mdav"  # Maximum Distance to Average Vector
)
print(sdc_micro, type = "numrisk")
Numerical key variables: income, years_in_job
Disclosure risk is currently between [0.00%; 86.00%]
Current Information Loss:
- IL1: 885.42
- Difference of Eigenvalues: 0.150%
----------------------------------------------------------------------
mdav groups records by their distance to the group centroid, then replaces each value with the group mean. With aggr = 5, at least 5 records share the same income value, so singling out an individual is harder.
Step 2 – Additive noise (on a fresh copy)
sdc_noise <- addNoise(
  obj = sdc_nonpert,
  variables = "income",
  noise = 0.1  # noise level as fraction of SD
)
print(sdc_noise, type = "numrisk")
Numerical key variables: income, years_in_job
Disclosure risk is currently between [0.00%; 100.00%]
Current Information Loss:
- IL1: 8.64
- Difference of Eigenvalues: 0.010%
----------------------------------------------------------------------
addNoise() draws from a normal distribution with mean 0 and standard deviation = noise × sd(income) and adds it to each value. Every record gets a unique (slightly wrong) income.
Step 3 – Comparing information loss
# Information loss after microaggregation
il_micro <- get.sdcMicroObj(sdc_micro, "utility")
# Information loss after noise addition
il_noise <- get.sdcMicroObj(sdc_noise, "utility")
il_micro
Interpretation: Microaggregation typically produces lower IL1 (mean absolute deviation between original and perturbed values) because entire groups share one value — the average. Noise addition preserves individual-level variation better but introduces random error into every single record. For a dataset where income is used in regression analyses, noise addition is often preferable; for frequency tables or group comparisons, microaggregation is a safer choice.