Cory Doctorow: ‘The mind control thing designed to sell you fidget spinners got hijacked to make your uncle racist.’ Photograph: Jonathan Worth

Cory Doctorow: ‘Technologists have failed to listen to non-technologists’


The tech activist on his new sci-fi novel and why we mustn’t treat the moral downsides of social media as a necessary evil

Cory Doctorow, 49, is a British-Canadian blogger, science fiction author and tech activist. He has worked for the Electronic Frontier Foundation and helped found the Open Rights Group – he is an advocate of liberalising copyright law. He has held various academic posts and is a visiting professor at the Open University. His latest novel, Attack Surface, was published earlier this month.

The protagonist in your new novel tries to offset her job at a tech company where she is working for a repressive regime by helping some of its targets evade detection. Do you think many Silicon Valley employees feel uneasy about their work?
Anyone who has ever fallen in love with technology knows the amount of control that it gives you. If you can express yourself well to a computer it will do exactly what you tell it to do perfectly, as many times as you want. Across the tech sector, there are a bunch of workers who are waking up and going: “How did I end up rationalising my love for technology and all the power it gives me to take away that power from other people?”

As a society, we have a great fallacy, the fallacy of the ledger, which is that if you do some bad things and then you do some good things, you can tot them up. And if your balance is positive, then you’re a good person, and if the balance is negative, you’re a bad person. But no amount of goodness cancels out the badness; they coexist – the people you hurt will still be hurt, irrespective of the other things you do to make amends. We’re flawed vessels, and we need a better moral discourse. That’s one of the things this book is trying to establish.

There’s a lot of tech commentary, journalism, boards of serious people producing reports and warnings about the very near future. What does fiction add to the mix?
It’s not that one substitutes for the other, but fiction gives you an emotional fly-through. It invites you to consider the lived experience of what is otherwise a very abstract and technical debate. And in the same way that Orwell bequeathed us this incredibly useful adjective Orwellian, as a way to talk about not the technical characteristics of the technology, but who does it and whom it does it to, these stories are a way of intervening in the world.

Many technologists, cryptographers, human rights workers, cyber lawyers, and so on have told me that their start was reading [my previous novels] Little Brother or Homeland. It talks to people who got involved with the tech industry and are waking up today, and maybe wishing that they had been a little more stringent in the moral compromises they made along the way.

Do you find yourself writing about things that you fear are going to happen or that you hope will happen?
It’s definitely both. If they ever do bury me, as opposed to scattering my ashes in the Haunted Mansion at Disneyland, I want my tombstone to read: “This will all be so great if we don’t screw it up.” And I think that’s the core message of the so-called cyber utopian project – people don’t start organisations such as the Open Rights Group or Electronic Frontier Foundation because they’re sanguine about the future of technology. You have to be very excited about it, and on the other hand, be very fearful of how it could go wrong.

The recent Netflix drama-documentary The Social Dilemma features a number of ex-Silicon Valley executives warning us about the practices of companies they worked for. Do you think we should be looking to these people for guidance about how to regulate the big tech firms?
The brilliant critic Maria Farrell calls them the “prodigal tech bros”. And she says the real problem, in terms of the prodigal son narrative, is that the prodigal son is redeemed because he really suffers. The suffering these prodigal sons have experienced, however, is just feeling sad.

So whose guidance should we be seeking?
Technologists have failed to listen to non-technologists. In technological circles, there’s a quantitative fallacy that if you can’t do maths on it, you can just ignore it. And so you just incinerate the qualitative elements and do maths on the dubious quantitative residue that remains. This is how you get physicists designing models for reopening American schools – because they completely fail to take on board the possibility that students might engage in, say, drunken eyeball-licking parties, which completely trips up the models.

Anthropologists have been warning us about this since the year dot, people like Danah Boyd, who was hired by Google and Intel, but they just ignored her. We could listen to people like her. And we could listen to the people who’ve been harmed by this stuff – we could get into their lived experience.

So ignore the prodigal tech bros?
One of the problems with The Social Dilemma is that it supposes that tech did what it claims it did – that these are actually such incredible geniuses that they figured out how to use machine learning to control minds. And that’s the problem – the mind control thing they designed to sell you fidget spinners got hijacked to make your uncle racist. But there’s another possibility, which is that their claims are rubbish. They just overpromised in their sales material, and what actually happened was that the growth of monopolies and the corruption of the public sphere made people cynical, angry, bitter and violent. In which case the problem isn’t that their tools were misused. The problem is that the structures in which those tools were developed are intrinsically corrupt and corrupting.

You seem to be leaning towards that latter theory…
Yeah, I just published a short book on this called How to Destroy Surveillance Capitalism, which is emphatic that surveillance itself is bad, commercial surveillance is bad and that we are living in a moment of great political terror. But it also looks critically at the evidence for mind control, and explores more parsimonious explanations for why people believe outlandish things – like, for instance, that for the first time in history, someone who claims to have mind control is not a charlatan.

So you think the role of Facebook in influencing elections, distributing misinformation and so on is overstated?
What Facebook does is it locates people. So if you want to locate people, because you want to say something heterodox, which you might get punished for if you shouted it aloud, you can quietly find and talk to them.

That’s not an unalloyed evil – this is how we got Black Lives Matter, non-binary gender identity and so on. People have been able to find one another and quietly share the fact that they disagreed with the overarching consensus and build a coalition.

But you also get people locating people and saying: “Hey, you know, I’m not gonna openly call myself a racist when I’m running for office, but you and I, we’re both quite racist. And I just wanted you to know that.”

So you can build a coalition of racists who would otherwise struggle to find one another because of the social risk that they take if they go public with their views, but it’s really not the same thing as mind control.

You’ve been reporting and studying the internet for many years. Would you describe that experience as a long process of disillusionment?
Certainly, there’s been a lot of disillusionment. To the extent that I laboured under an illusion, it wasn’t an illusion about either the harms or the liberating power of technology, it was rather an illusion about the extent to which firms would be constrained by pro-competitive, anti-monopolistic constraints. I overestimated the extent to which there would be competitive pressure that would stop firms from attaining the kind of dominance that they have – turning the web into five giant websites filled with screenshots from the other four. That was a major miscalculation.

The good news is that we are in a moment in which people’s lives are being harmed by monopolies in lots of ways, not just by technology, and there is real energy, a coalition of anti-monopoly sentiment, building. People are angry about the Big Four accounting firms, large corporate landlords, the energy sector, or even professional wrestling, which used to have 30 leagues and is now down to one. We can build a coalition to take on monopoly that includes people who are angry about tech monopolies.

You’ve described yourself as a jetpack socialist. Is this jetpack socialism?
I was more bullish on jetpack socialism or fully automated luxury communism before it was clear how much climate degradation we would endure before we took action. Now we’re not going to have technological unemployment. We’ve got 200 to 300 years of full employment for every working pair of hands, to do things like relocate every coastal city 20km inland. The amount of labour ahead of us is more than any technology could offset.

For a long time you were distributing electronic versions of your novels free of charge. Why?
My view is that nothing you do is going to change whether people can get your books for free. But making it harder to copy is like making water that’s less wet – computers operate by making copies. The best thing to do is to create a managed system for it and to create a kind of moral case for paying for it.

Our norms for books date back to papyrus. I dislike the idea that some technologist in Seattle or Mountain View can hire a lawyer to write a garbage novella of legalese and confiscate the rights that you expect to have in your books, such as the right to sell them on, to give them away, to do what you will with them within the confines of copyright law.

Does it concern you that the vast majority of internet users aren’t very bothered about the kind of issues that you campaign about? They’re happy with the benefits of Google, the things that WhatsApp allows them to do for free and so on…
First of all, ad blocking is the largest consumer revolt in history. One in four web users is running an ad blocker. So clearly, they do care.

I’m not someone who says that the benefits aren’t real. We need to articulate the fact that the benefits should not be intrinsically linked to the harms. No one came down off a mount with two stone tablets saying thou shalt stop rotating log files and start mining them for actionable market intelligence, right? Those are choices that people made, and you could make different choices. Sergey Brin was not dragged into spying on you by the forces of history. He made a choice. We could make a different one. If we removed the spying, would you find web search less useful?

Attack Surface by Cory Doctorow is published by Head of Zeus (£18.99). To order a copy go to guardianbookshop.com. Delivery charges may apply

