‘We wrote and called nearly a hundred companies, and asked if we could have the raw data, the clickstream from people’s lives.’ Photograph: Steve Marcus/Reuters

'Anonymous' browsing data can be easily exposed, researchers reveal


A journalist and a data scientist easily obtained browsing data on three million users by posing as a marketing company, and were able to de-anonymise many of them

A judge’s porn preferences and the medication used by a German MP were among the personal data uncovered by two German researchers who acquired the “anonymous” browsing habits of more than three million German citizens.

“What would you think,” asked Svea Eckert, “if somebody showed up at your door saying: ‘Hey, I have your complete browsing history – every day, every hour, every minute, every click you did on the web for the last month’? How would you think we got it: some shady hacker? No. It was much easier: you can just buy it.”

Eckert, a journalist, paired up with data scientist Andreas Dewes to acquire personal user data and see what they could glean from it.

Presenting their findings at the Def Con hacking conference in Las Vegas, the pair revealed how they secured a database containing 3bn URLs from three million German users, spread over 9m different sites. Some were sparse users, with just a couple of dozen sites visited in the 30-day period they examined, while others had tens of thousands of data points: the full record of their online lives.

Getting hold of the information was actually even easier than buying it. The pair created a fake marketing company, replete with its own website, a LinkedIn page for its chief executive, and even a careers site – which garnered a few applications from other marketers tricked by the company.

They filled the site with “many nice pictures and some marketing buzzwords”, claiming to have developed a machine-learning algorithm that could market to people more effectively, but only if it was trained on a large amount of data.

“We wrote and called nearly a hundred companies, and asked if we could have the raw data, the clickstream from people’s lives.” It took slightly longer than it should have, Eckert said, but only because they were specifically looking for German web surfers. “We often heard: ‘Browsing data? That’s no problem. But we don’t have it for Germany, we only have it for the US and UK,’” she said.

The data they were eventually given came, for free, from a data broker, which was willing to let them test their hypothetical AI advertising platform. And while it was nominally an anonymous set, it was soon easy to de-anonymise many users.

Dewes described some methods by which a canny broker can find an individual in the noise, just from a long list of URLs and timestamps. Some make things very easy: for instance, anyone who visits their own analytics page on Twitter ends up with a URL in their browsing record which contains their Twitter username, and is only visible to them. Find that URL, and you’ve linked the anonymous data to an actual person. A similar trick works for German social networking site Xing.
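In rough terms, that first trick can be automated by scanning a clickstream for self-identifying URLs. The sketch below is illustrative only: the analytics.twitter.com URL pattern and the example data are assumptions, not the researchers' code.

```python
import re

# Hypothetical sketch: scan an "anonymous" clickstream for URLs that leak a
# username. The analytics.twitter.com/user/<handle>/... pattern is assumed
# for illustration; the Xing equivalent would work the same way.
TWITTER_ANALYTICS = re.compile(r"https?://analytics\.twitter\.com/user/([^/]+)/")

def find_leaked_handles(clickstream):
    """Return any Twitter handles embedded in a user's list of visited URLs."""
    handles = set()
    for url in clickstream:
        match = TWITTER_ANALYTICS.search(url)
        if match:
            handles.add(match.group(1))
    return handles

# One hit is enough to tie the whole "anonymous" record to a real person.
urls = [
    "https://www.example.com/news",
    "https://analytics.twitter.com/user/some_handle/tweets",
]
print(find_leaked_handles(urls))  # {'some_handle'}
```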

Other users can be de-anonymised with a more probabilistic approach. A mere 10 URLs can be enough to identify someone uniquely – just think how few people share your employer, your bank, your hobby, your preferred newspaper and your mobile phone provider. By building “fingerprints” from the data, it is possible to compare them with other, more public records of which URLs people have visited, such as links shared from social media accounts or public YouTube playlists.
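At its simplest, that fingerprint comparison amounts to scoring the overlap between an anonymous record and each candidate's publicly visible URLs. The following is a minimal sketch of the idea under that assumption; the names and data are invented, and the real matching was far more sophisticated.

```python
# Minimal sketch of the fingerprinting idea, not the researchers' actual code:
# treat each profile as a set of visited URLs and match an anonymous record
# against public traces (links someone tweeted, a public YouTube playlist)
# by set overlap. All identities and URLs here are made up for illustration.

def jaccard(a: set, b: set) -> float:
    """Overlap score between two URL sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_match(anonymous_urls: set, public_traces: dict) -> tuple:
    """Return the (identity, score) whose public URLs best overlap the record."""
    return max(
        ((name, jaccard(anonymous_urls, urls)) for name, urls in public_traces.items()),
        key=lambda pair: pair[1],
    )

anonymous_record = {"bank.example/login", "smallclub.example/fixtures", "paper.example/sport"}
public_traces = {
    "alice": {"smallclub.example/fixtures", "paper.example/sport"},
    "bob": {"othernews.example", "bigbank.example"},
}
print(best_match(anonymous_record, public_traces))  # 'alice' scores highest
```

Because combinations of even a handful of URLs are close to unique, the correct candidate tends to stand out sharply from everyone else.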

A similar strategy was used in 2008, Dewes said, to de-anonymise a set of film ratings published by Netflix to help computer scientists improve its recommendation algorithm: by comparing the “anonymous” ratings with public profiles on IMDb, researchers were able to unmask Netflix users – including one woman, a closeted lesbian, who went on to sue Netflix for the privacy violation.

Another discovery came via Google Translate, which stores the text of every query put through it in the URL. From this, the researchers were able to uncover operational details of a German cybercrime investigation, because the detective involved had been using the service to translate requests for assistance to foreign police forces.
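Recovering that text from a clickstream is straightforward once you know where the service puts it. The sketch below is illustrative only: the exact parameter and fragment layout Google Translate used at the time is assumed here, not taken from the researchers' work.

```python
from urllib.parse import urlparse, parse_qs, unquote

# Illustrative only: Google Translate exposed the query text in the URL itself.
# Whether it sat in a ?text= parameter or in a #source/target/text fragment is
# assumed for this sketch, so both are checked.
def extract_translate_query(url: str):
    parsed = urlparse(url)
    if "translate.google" not in parsed.netloc:
        return None
    params = parse_qs(parsed.query)
    if "text" in params:
        return params["text"][0]
    if parsed.fragment.count("/") >= 2:          # e.g. "de/en/some query text"
        return unquote(parsed.fragment.split("/", 2)[2])
    return None

print(extract_translate_query("https://translate.google.com/#de/en/vertrauliche%20Anfrage"))
# -> 'vertrauliche Anfrage'
```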

So where did the data come from? It was collated from a number of browser plugins, according to Dewes, with the prime offender being “safe surfing” tool Web of Trust. After Dewes and Eckert published their results, the browser plugin modified its privacy policy to say that it does indeed sell data, while making attempts to keep the information anonymous. “We know this is nearly impossible,” said Dewes.
