‘Anxiety about new technology is nothing new’: Genevieve Bell at the Intel headquarters in Oregon. Photograph: Leah Nash/NYT/Eyevine

Genevieve Bell: ‘Humanity’s greatest fear is about being irrelevant’

The anthropologist explains why being scared about AI has more to do with our fear of each other than killer robots

Genevieve Bell is an Australian anthropologist who has been working at tech company Intel for 18 years, where she is currently head of sensing and insights. She has given numerous TED talks and in 2012 was inducted into the Women in Technology hall of fame. Between 2008 and 2010, she was also South Australia’s thinker in residence.

Why does a company such as Intel need an anthropologist?
That is a question I’ve spent 18 years asking myself. It’s not a contradiction in terms, but it is a puzzle. When they hired me, I think they understood something that not everyone in the tech industry understood, which was that technology was about to undergo a rapid transformation. Computers went from being on an office desk spewing out Excel to inhabiting our homes and lives, and we needed to have a point of view about what that was going to look like. It was incredibly important to understand the human questions, such as: what on earth are people going to do with that computational power? If we could anticipate just a little bit, that would give us a business edge and the ability to make better technical decisions. But as an anthropologist that’s a weird place to be. We tend to be rooted in the present – what are people doing now and why? – rather than long-term strategic stuff.

A criticism that is often made of tech companies is that they are dominated by a narrow demographic of white, male engineers and as a result the code and hardware they produce have a narrow set of values built into them. Do you see your team as a counterbalance to that culture?
Absolutely. I suspect people must think I’m a monumental pain. I used to think my job was to bring as many other human experiences into the building as possible. Being a woman, being Australian and not being an engineer – those were all valuable assets because they gave me a very different point of view.

Now, the leadership of Intel is around 25% female, which is about what market availability is in the tech sector. We are conscious of what it means to have a company whose workforce doesn’t reflect the general population. Repeated studies show that the more diverse your teams are, the richer the outcomes. You have to tolerate a bit of static, but that’s preferable to the self-perpetuating bubble where everyone agrees with you.

You are often described as a futurologist. A lot of people are worried about the future. Are they right to be concerned?
That technology is accompanied by anxiety is not a new thing. We have anxieties about certain types of technology and there are reasons for that. We’re coming up to the 200th anniversary of Mary Shelley’s Frankenstein and the images in it have persisted.

Shelley’s story worked because it tapped into a set of cultural anxieties. The Frankenstein anxiety is not the reason we worried about the motor car or electricity, but if you think about how some people write about robotics, AI and big data, those concerns have profound echoes going back to the Frankenstein anxieties 200 years ago.

What is the Frankenstein anxiety?
Western culture has some anxieties about what happens when humans try to bring something to life, whether it’s the Judeo-Christian stories of the golem or James Cameron’s The Terminator.

So what is the anxiety about? My suspicion is that it’s not about the life-making, it’s about how we feel about being human. What we are seeing now isn’t an anxiety about artificial intelligence per se, it’s about what it says about us. That if you can make something like us, where does it leave us? And that concern isn’t universal, as other cultures have very different responses to AI, to big data. The most obvious one to me would be the Japanese robotic tradition, where people are willing to imagine the role of robots as far more expansive than you find in the west. For example, the Japanese roboticist Masahiro Mori published a book called The Buddha in the Robot, where he suggests that robots would be better Buddhists than humans because they are capable of infinite invocations. So are you suggesting that robots could have religion? It’s an extraordinary provocation.

So you don’t agree with Stephen Hawking when he says that AI is likely “either the best or the worst thing ever to happen to humanity”?
Mori’s argument was that we project our own anxieties and when we ask: “Will the robots kill us?”, what we are really asking is: “Will we kill us?” Coming from a Japanese man who lived through the 20th century, that might not be an unreasonable question. He wonders what would happen if we were to take as our starting point that technology could be our best angels, not our worst – it’s an interesting thought exercise. When I see some of the big thinkers of our day contemplating the arc of artificial intelligence, what I see is not necessarily a critique of the technology itself but a critique of us. We are building the engines, so what we build into them is what they will be. The question is not will AI rise up and kill us; rather, will we give it the tools to do so?

Stephen Hawking has warned about the dangers of artificial intelligence. Photograph: Chris Radburn/PA

Is there a movie that you think creates a convincing picture of the future? The Matrix, Her, Planet of the Apes?
In terms of capturing the current anxiety and ambivalence we have about the role of technology, it’s two movies: Her and Ex Machina. Not because they are visions of the future, but because they underline a particular set of concerns, which is not that the machines will kill us but that we will become irrelevant. James Cameron’s The Terminator promised death, but in Spike Jonze’s Her the anxiety is that the machine will become bored with you. It’s the same in Ex Machina – you build the perfect machine and it abandons you. In both instances, there’s a notion that the technology is self-determining and its decision is to leave us; in both movies, there is a conversation about gender – the machines are women who are leaving men. The machines’ voices are female, which wasn’t the case 40 or 50 years ago with Hal [in 2001: A Space Odyssey].

A lot of the work you do examines the intersection between the intended use of a device and how people actually use it – the disconnection between the two. Could you talk about something you’re researching at the moment?
I’m interested in how animals are connected to the internet and how we might be able to see the world from an animal’s point of view. There’s something very interesting in someone else’s vantage point, which might have a truth to it. Take, for instance, the tagging of cows for automatic milking machines, so that the cows can choose when to milk themselves. Cows went from being milked twice a day to being milked three to six times a day, which is great for the farm’s productivity and results in happier cows, but it’s also faintly disquieting that the technology makes clear to us the desires of cows – making them visible in ways they weren’t before. So what does one do with that knowledge? One of the unintended consequences of big data and the internet of things is that some things will become visible and compel us to confront them.

Why is your Twitter handle “feraldata”?
About 10 years ago I was castigating an Australian colleague about how we talked about technology using British idioms. For example, we kept talking about the digital commons, yet Australia never had an enclosure act.

So what are the Australian experiences we could use to talk about technology? I began to think about camels, goats and cats – lots of animals jumped the boats in Australia and created havoc by becoming feral. Would feral be an interesting way of thinking about how technology has unintended consequences? It occurred to me that of all the things in the technological landscape, data was the most likely to go feral. It gets created in one context, finds itself in another and is married with a third thing.

Genevieve Bell will be the keynote speaker at the Ireland’s Edge conference, part of the Other Voices festival in Dingle, Co. Kerry, 2-4 December
