Last September, a coalition of privacy activists and browser-makers targeted Google and the advertising technology industry with complaints about “a massive and ongoing data breach that affects virtually every user on the web” — the broadcasting of people’s personal data to dozens of companies, without proper security.
Now, on International Privacy Day, they’ve released new evidence showing this data includes information about people’s ethnicity, disabilities, sexual orientation and more. The data is so sensitive that it even allows advertisers to specifically target incest and abuse victims, or those with eating disorders.
How does this information get shared? The online ad industry often uses a technique called behavioral advertising, which basically means they track you around the web and build a profile based on what you look at. When you then visit a webpage that runs behavioral ads, there’s often an automated auction with the winner getting to show you an ad that supposedly matches your profile.
The real-time bidding system works by broadcasting the details of your profile to advertisers in so-called “bid requests,” and those details can get really, really personal.
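To make the mechanism concrete, here is a simplified sketch of what a bid request can look like. The field names loosely follow the general shape of the IAB's OpenRTB specification, but every value here (the auction ID, URL, cookie ID, data-provider name and segment label) is hypothetical, and real bid requests carry far more fields.

```python
import json

# Hypothetical, OpenRTB-style bid request (illustration only).
bid_request = {
    "id": "auction-1234",  # hypothetical auction identifier
    "site": {"page": "https://example.com/article"},  # page the user is viewing
    "device": {"ip": "203.0.113.7", "ua": "Mozilla/5.0 ..."},  # device details
    "user": {
        "id": "cookie-abcdef",  # pseudonymous tracking ID tied to the browser
        "data": [
            {
                "name": "example-dmp",  # hypothetical data provider
                "segment": [
                    {"name": "interest", "value": "running"},  # profile segment
                ],
            }
        ],
    },
}

# This structure is broadcast to dozens of bidders before the auction resolves.
print(json.dumps(bid_request, indent=2))
```

The privacy complaint hinges on the fact that the `user` block, combined with the page URL and device details, can reveal sensitive traits to every company that receives the request, whether or not it wins the auction.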
Last year’s complaints were lodged by Jim Killock of the U.K.’s Open Rights Group, tech policy researcher Michael Veale of University College London, and Johnny Ryan of the pro-privacy browser firm Brave. They complained to privacy regulators in the U.K. and Ireland, claiming Google and other ad-tech firms were breaking the EU’s strict General Data Protection Regulation (GDPR) by unlawfully profiling people’s sensitive characteristics.
Now Poland’s Panoptykon Foundation, another rights group, has hopped on board, complaining to its local data protection authority. The targets include Google and the Interactive Advertising Bureau (IAB), which is the industry body that sets the rules for ad auctions.
The evidence comprises category lists from Google and IAB, which allow advertisers to target people according to characteristics such as being an incest victim, having cancer, having a substance-abuse problem, being into a certain kind of politics or adhering to a certain religion or sect.
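For illustration, the taxonomy works as a simple code-to-label mapping. The sketch below shows a handful of entries in the style of the IAB Content Taxonomy (the "IAB7-28" incest/abuse-support code is among those cited in the complaints); the full taxonomy contains hundreds of categories, and this tiny sample is for demonstration only.

```python
# Small illustrative sample of IAB-style content-taxonomy codes.
# The real taxonomy is far larger; treat this mapping as a sketch.
IAB_CATEGORIES = {
    "IAB7": "Health & Fitness",
    "IAB7-28": "Incest/Abuse Support",
    "IAB11": "Law, Government & Politics",
    "IAB23": "Religion & Spirituality",
}

def describe(code: str) -> str:
    """Look up a taxonomy code, with a fallback for codes not in the sample."""
    return IAB_CATEGORIES.get(code, "unknown category")

print(describe("IAB7-28"))  # → Incest/Abuse Support
```

A bid request tagged with such a code lets any bidder infer the sensitive context, which is why the complainants argue the codes are personal data rather than neutral labels.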
“Actors in this ecosystem are keen for the public to think they are dealing in anonymous, or at the very least non-sensitive data, but this simply isn’t the case,” said Veale (who has also made a GDPR complaint against Twitter over opaque tracking). “Hugely detailed and invasive profiles are routinely and casually built and traded as part of today’s real-time bidding system, and this practice is treated as though it’s a simple fact of life online. It isn’t: and it both needs to and can stop.”
A Google spokesperson said the company has “strict policies that prohibit advertisers on our platforms from targeting individuals on the basis of sensitive categories such as race, sexual orientation, health conditions, pregnancy status, etc.”
“If we found ads on any of our platforms that were violating our policies and attempting to use sensitive interest categories to target ads to users, we would take immediate action,” the spokesperson said.
The IAB said in a statement that its categorization protocols came from its non-profit partner organization, the IAB Tech Lab. It also said it was up to companies to choose how to use the protocols, in line with “applicable laws, regulations, and consumer preferences,” and the protocols were “not themselves subject to GDPR.”
This article was updated to add Google and the IAB’s responses, and to clarify that it’s the real-time bidding mechanism being complained about, rather than behavioral advertising as a whole.