
Study says Apple data-mining safeguards don't protect privacy enough

The 'differential privacy' behind the opt-in data collection doesn't anonymize data as well as it should.


At WWDC in June 2016, Apple said it would adopt differential privacy methods to protect users' privacy while the company mined their data on iOS and macOS. In short, the technique adds statistical noise to collected data, scrambling it enough that it can't be traced back to an individual -- though the company made clear at the time that the data collection is opt-in. Over a year later, a study claims that Apple's implementation falls short of the digital privacy community's expectations for how well user data is protected.
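
Apple hasn't published the details of its own scheme, so as a rough, hypothetical sketch of the general idea only, here's how the textbook Laplace mechanism for differential privacy adds noise before a value is reported:

```python
# Minimal sketch of the general principle -- not Apple's actual (undisclosed) mechanism.
# The textbook Laplace mechanism adds random noise scaled to 1/epsilon, so no single
# reported answer reveals much about any one user.
import numpy as np


def noisy_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Report a count with Laplace noise; a smaller epsilon means more noise and more privacy."""
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)


# The same underlying count reported under a strict and a loose privacy budget.
print(noisy_count(1000, epsilon=1.0))   # tight budget: noisier, better-protected answer
print(noisy_count(1000, epsilon=14.0))  # loose budget: answer hugs the true value
```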

As they reveal in their study (PDF), researchers from the University of Southern California, Indiana University and China's Tsinghua University evaluated how Apple injects statistical noise into users' identifiable info, from messages to internet history, to obscure it from anyone looking at the data, from the government to Apple's own staff. The standard metric for a system's differential privacy strength is the "privacy loss parameter," written as the variable "epsilon" -- the lower the epsilon, the less an observer can learn about any individual. The researchers found that Apple's epsilon on macOS leaves far more personal data identifiable than digital privacy theorists are comfortable with, and iOS 10 permits even more.
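
To see why epsilon matters, it bounds how much more likely any observed output becomes when one user's data changes: at most a factor of e^epsilon. The values below are illustrative only, not the study's exact measurements:

```python
import math

# Epsilon bounds the shift in an observer's inference about any one user's data
# by a factor of e^epsilon. These epsilon values are illustrative examples only.
for epsilon in (1.0, 6.0, 14.0):
    ratio = math.exp(epsilon)
    print(f"epsilon={epsilon:>4}: an observed output can be up to {ratio:,.0f}x "
          "more likely under one user's data than another's")
```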

Apple has disputed the study's findings, especially the claim that the data it collects could be linked to particular users. But the company still hasn't released much detail about how it implements differential privacy. As Wired points out, the most unsettling part is that Apple keeps its epsilon values secret, meaning it could conceivably dial down the amount of privacy-preserving noise at any time.

We reached out to Apple for additional comment and will update this story when we hear back.