We have few sound intuitions into what is safe and what is flimsy when it comes to securing our digital lives – let alone what is ethical and what is creepy. Photograph: Darrel Rees/Heart Agency for the Guardian

When data gets creepy: the secrets we don’t realise we’re giving away

We all worry about digital spies stealing our data – but now even the things we thought we were happy to share are being used in ways we don’t like. Why aren’t we making more of a fuss?

It’s easy to be worried about people simply spying on your confidential data. iCloud and Google+ have your intimate photos; Transport for London knows where your travelcard has been; Yahoo holds every email you’ve ever written. We trust these people to respect our privacy, and to be secure. Often they fail: celebrity photos are stolen; emails are shared with spies; the confessional app Whisper is caught tracking the location of users.

But these are straightforward failures of security. At the same time, something much more interesting has been happening. Information we have happily shared in public is increasingly being used in ways that make us queasy, because our intuitions about security and privacy have failed to keep up with technology. Nuggets of personal information that seem trivial, individually, can now be aggregated, indexed and processed. When this happens, simple pieces of computer code can produce insights and intrusions that creep us out, or even do us harm. But most of us haven’t noticed yet: for a lack of nerd skills, we are exposing ourselves.

At the simplest level, even the act of putting lots of data in one place – and making it searchable – can change its accessibility. As a doctor, I have been to the house of a newspaper hoarder; as a researcher, I have been to the British Library newspaper archive. The difference between the two is not the amount of information, but rather the index. I recently found myself in the quiet coach on a train, near a stranger shouting into her phone. Between London and York she shared her (unusual) name, her plan to move jobs, her plan to steal a client list, and her wish that she’d snogged her boss. Her entire sense of privacy was predicated on an outdated model: nothing she said held any special interest for the people in coach H. One tweet with her name in it would have changed that, and been searchable for ever.

An interesting side-effect of public data being indexed and searchable is that you only have to be sloppy once, for your privacy to be compromised. The computer program Creepy makes good fodder for panic. Put in someone’s username from Twitter, or Flickr, and Creepy will churn through every photo hosting service it knows, trying to find every picture they’ve ever posted. Cameras – especially phone cameras – often store the location where the picture was taken in the picture data. Creepy grabs all this geo-location data and puts pins on a map for you. Most of the time, you probably remember to get the privacy settings right. But if you get it wrong just once – maybe the first time you used a new app, maybe before your friend showed you how to change the settings – Creepy will find it, and your home is marked on a map. All because you tweeted a photo of something funny your cat did, in your kitchen.
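There is no wizardry in this. By way of illustration only – the filenames below are invented, the sketch assumes the photos still carry their original metadata, and it uses the Pillow imaging library rather than anything Creepy itself relies on – a few lines of Python are enough to pull the coordinates out of a picture:

```python
# Illustrative sketch: read the GPS coordinates that phone cameras embed
# in a photo's EXIF metadata. Requires the Pillow library (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def gps_coordinates(path):
    """Return (latitude, longitude) from a photo's EXIF data, or None if absent."""
    exif = Image.open(path)._getexif() or {}
    gps_tag = next(tag for tag, name in TAGS.items() if name == "GPSInfo")
    raw = exif.get(gps_tag)
    if not raw:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in raw.items()}

    def to_degrees(dms, ref):
        # EXIF stores degrees, minutes and seconds as separate rational numbers.
        degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -degrees if ref in ("S", "W") else degrees

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

# Hypothetical filenames: point this at any folder of photos that were posted online.
for photo in ["funny_cat_in_kitchen.jpg", "holiday_snap.jpg"]:
    print(photo, gps_coordinates(photo))
```

Point something like that at every photo attached to one username, and you have the map.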

Many people will soon be able to access their full medical records online – but some might get some nasty surprises. Photograph: Sean Justice/Getty

Some of these services are specifically created to scare people about their leakiness, and nudge us back to common sense: PleaseRobMe.com, for example, checks to see if you’re sharing your location publicly on Twitter and FourSquare (with sadistic section headings such as “recent empty homes” and “new opportunities”).

Some are less benevolent. The Girls Around Me app took freely shared social data – intended to help friends get together – and repurposed it for ruthless, data-driven sleaziness. Using FourSquare and Facebook data, it drew neat maps with the faces of nearby women pasted on. With your Facebook profile linked, I could research your interests before approaching you. Are all the women visible on Girls Around Me willingly consenting to having their faces mapped across bars or workplaces or at home – with links to their social media profiles – just by accepting the default privacy settings? Are they foolish not to foresee that someone might process this data and present them like products in a store?

But beyond mere indexing comes an even bigger new horizon. Once aggregated, these individual fragments of information can be processed and combined, and the resulting data can give away more about our character than our intuitions are able to spot.

Last month the Samaritans launched a suicide app. The idea was simple: they monitor the tweets of people you follow, analyse them, and alert you if your friends seem to be making comments suggestive of very low mood, or worse. A brief psychodrama ensued. One camp were up in arms: this is intrusive, they said. You’re monitoring mood, you need to ask permission before you send alerts about me to strangers. Worse, they said, it will be misused. People with bad intentions will monitor vulnerable people, and attack when their enemies are at their lowest ebb. And anyway, it’s just creepy. On the other side, plenty of people couldn’t even conceive of any misuse. This is clearly a beneficent idea, they said. And anyway, your tweets are public property, so any analysis of your mood is fair game. The Samaritans sided with the second team and said, to those worried about the intrusion: tough. Two weeks later they listened, and pulled the app, but the squabble illustrates how much we can disagree on the rights and wrongs around this kind of processing.
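For what it’s worth, the machinery needed for this kind of monitoring is trivial. The fragment below is a deliberate caricature rather than a description of how the Samaritans’ tool actually worked: the word list, the tweets and the threshold are all invented for illustration.

```python
# A caricature of keyword-based mood monitoring over a public Twitter timeline.
# Everything here is invented; it only shows how little machinery is required.
LOW_MOOD_WORDS = {"hopeless", "worthless", "exhausted", "alone", "no point"}

def flag_low_mood(tweets, threshold=1):
    """Return (hit count, tweet) for tweets containing at least `threshold` low-mood words."""
    flagged = []
    for tweet in tweets:
        hits = sum(word in tweet.lower() for word in LOW_MOOD_WORDS)
        if hits >= threshold:
            flagged.append((hits, tweet))
    return flagged

timeline = [
    "Feeling hopeless and so alone tonight.",
    "Exhausted after the gym, but a great session.",
]
for hits, tweet in flag_low_mood(timeline):
    print(f"ALERT ({hits} trigger words): {tweet}")
```

(The cheerful gym tweet trips the alert too, which gives a flavour of how blunt this sort of analysis can be.)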

The Samaritans app, to be fair, was crude, as many of these sites currently are: analyzewords.com, for example, claims to spot personality characteristics by analysing your tweets, but the results are unimpressive. This may not last. Many people are guarded about their sexuality, but a paper from 2013 [pdf download] looked at the Facebook likes of 58,000 volunteers and found that algorithms trained on the patterns in this dataset could correctly discriminate between homosexual and heterosexual men 88% of the time. Likes for “Colbert” and “Science” were, incidentally, among the best predictors of high IQ.
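The method behind that result is, at heart, conventional statistics rather than dark arts. A toy version of the general approach, using the scikit-learn library, looks something like the sketch below; the people, likes and labels are all invented, and the real study’s modelling pipeline was considerably more careful than this.

```python
# Toy version of trait prediction from Facebook likes: represent each person
# as a bag of liked pages, then fit a simple linear classifier. All data invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

likes = [                       # one string of liked pages per (imaginary) person
    "colbert science ted_talks thunderstorms",
    "nascar country_music monster_trucks",
    "science chess documentaries",
    "energy_drinks pro_wrestling",
]
labels = [1, 0, 1, 0]           # the trait we are trying to predict (1 = present)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(likes)             # people-by-pages matrix of likes
model = LogisticRegression().fit(X, labels)     # linear weights over those pages

new_person = vectorizer.transform(["colbert chess thunderstorms"])
print(model.predict(new_person), model.predict_proba(new_person))
```

The unnerving part is not the code; it is the size of the dataset such code can be pointed at.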

Sometimes, even when people have good intentions and clear permission, data analysis can throw up odd ethical quandaries. Recently, for example, the government has asked family GPs to produce a list of people they think are likely to die in the next year. In itself, this is a good idea: a flag appears on the system reminding the doctor to have a conversation, at the next consultation, about planning “end of life care”. In my day job, I spend a lot of time working on interesting uses of health data. My boss suggested that we could look at automatically analysing medical records in order to instantly identify people who are soon to die. This is also a good idea.

But add in one final ingredient and the conclusion isn’t so clear. We are entering an age – which we should welcome with open arms – when patients will finally have access to their own full medical records online. So suddenly we have a new problem. One day, you log in to your medical records, and there’s a new entry on your file: “Likely to die in the next year.” We spend a lot of time teaching medical students to be skilful around breaking bad news. A box ticked on your medical records is not empathic communication. Would we hide the box? Is that ethical? Or are “derived variables” such as these, on a medical record, something doctors should share like anything else? Here, again, different people have different intuitions.

Many shopping centres can now use your mobile data to track you as you walk from shop to shop. Photograph: Christian Sinibaldi/Guardian

Then there’s the information you didn’t know you were leaking. Every device with Wi-Fi has a unique “MAC address”, which is broadcast constantly as long as wireless networking is switched on. It’s a boring technical aspect of the way Wi-Fi works, and you wouldn’t really care if anyone saw your MAC address on the airwaves as you walk past their router. But again, the issue is not the leakiness of one piece of information, but rather the ability to connect it all together into a thread. Many shops and shopping centres, for example, now use multiple Wi-Fi sensors, each measuring the strength of your phone’s signal, to triangulate your position and track how you walk around the shop. By matching the signal to the security video, they get to know what you look like. If you give an email address in order to use the free in-store Wi-Fi, they have that too.
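The positioning step itself is straightforward geometry. As a rough sketch – the sensor layout, signal strengths and radio constants below are all invented, and real systems are more sophisticated – three sensors hearing the same MAC address can place a phone like this:

```python
# Rough sketch: locate one phone on a shop floor from the signal strength (RSSI)
# that three Wi-Fi sensors record for its MAC address. All numbers are invented.
import numpy as np
from scipy.optimize import least_squares

sensors = np.array([[0.0, 0.0], [30.0, 0.0], [15.0, 25.0]])  # sensor x, y in metres
rssi = np.array([-58.0, -71.0, -65.0])                       # dBm heard at each sensor

def rssi_to_metres(rssi, tx_power=-40.0, path_loss_exponent=2.5):
    # Log-distance path loss model: a crude conversion from signal strength to range.
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

distances = rssi_to_metres(rssi)

def residuals(position):
    # Mismatch between measured ranges and the ranges implied by a candidate position.
    return np.linalg.norm(sensors - position, axis=1) - distances

estimate = least_squares(residuals, x0=sensors.mean(axis=0)).x
print("Estimated position (metres):", estimate.round(1))
```

Log an estimate like that every few seconds against the same MAC address and you have a track of one shopper’s afternoon.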

In some respects, this is no different to an online retailer such as Amazon tracking your movement around their website. The difference, perhaps, is that it feels creepier to be tracked when you walk around in physical space. Maybe you don’t care. Or maybe you didn’t know. But crucially: I doubt that everyone you know agrees about what is right or wrong here, let alone what is obvious or surprising, creepy or friendly.

It’s also interesting to see how people’s limits shift. I felt OK about in-store tracking, for example, but my intuitions shifted when I realised that I’m traced over much wider spaces. Turnstyle stretches right across Toronto – a city I love – tracing individuals as they move from one part of town to another. For businesses, this is great intelligence: if your lunchtime coffee shop customers also visit a Whole Foods store near home after work, you should offer more salads. For the individual, I’m suddenly starting to think: can you stop following me, please? Half of Turnstyle’s infrastructure is outside Canada. They know what country I’m in. This crosses my own, personal creepiness threshold. Maybe you think I’m being precious.

There is an extraordinary textbook written by Ross Anderson, professor of computer security at the University of Cambridge. It’s called Security Engineering, and despite being more than 1,000 pages long, it’s one of the most readable pop-science slogs of the decade. Firstly, Anderson sets out the basic truisms of security. You could, after all, make your house incredibly secure by fitting reinforced metal shutters over every window, and 10 locks on a single reinforced front door; but it would take a very long time to get in and out, or see the sunshine in the morning.

Digital security is the same: we all make a trade-off between security and convenience, but there is a crucial difference between security in the old-fashioned physical domain, and security today. You can kick a door and feel the weight. You can wiggle a lock, and marvel at the detail on the key. But as you wade through the examples in Anderson’s book – learning about the mechanics of passwords, simple electronic garage door keys, and then banks, encryption, medical records and more – the reality gradually dawns on you that for almost everything we do today that requires security, that security is done digitally. And yet to most of us, this entire world is opaque, like a series of black boxes into which we entrust our money, our privacy and everything else we might hope to have under lock and key. We have no clear sight into this world, and we have few sound intuitions into what is safe and what is flimsy – let alone what is ethical and what is creepy. We are left operating on blind, ignorant, misplaced trust; meanwhile, all around us, without our even noticing, choices are being made.

Ben Goldacre’s new book, I Think You’ll Find It’s a Bit More Complicated Than That, is published by Fourth Estate. Buy it for £11.99 at bookshop.theguardian.com
