‘Short window’ to stop AI taking control of society, warns ex-Google employee

Meredith Whittaker warns we are becoming lab animals in a giant tech experiment

The big news in tech this week is the announcement that Google co-founders Larry Page and Sergey Brin are stepping down as executives of parent company Alphabet, which they founded in 2015 with the goal of making Google "cleaner and more accountable".

Page and Brin said it was time to "assume the role of proud parents – offering advice and love, but not daily nagging" while handing the reins to Sundar Pichai.

A group of Google employees feel differently: “Some had seriously hoped Sergey and Larry would step in and fix Google. Instead of righting the sinking ship, they jumped ship,” tweeted Google Walkout for Real Change, a workers’ organisation protesting various unfair practices and policies within the tech company.

Google Walkout for Real Change is not an obscure fringe group. It began last year as a huge organised protest that saw 20,000 employees in 50 cities across the globe stage a walkout over how the company had handled sexual harassment claims, followed by further walkouts protesting Google’s AI contracts with the US defence department, its plans to build a censored search engine, and its AI ethics policies.

As one of the original organisers, Meredith Whittaker has something to say about the vast amount of power that global technology companies like Google have been allowed to amass, largely unchecked and undeterred, over the past 20 years.

Social implications

Google’s code of conduct states unequivocally: “don’t be evil, and if you see something that you think isn’t right – speak up!” Whittaker, who worked as a research scientist in AI at Google for more than a decade, did speak up, but Google didn’t like it.

Ultimately, working for Google was incompatible with her role at the AI Now Institute, which she co-founded to examine the social implications of artificial intelligence.

Having left Google in July, Whittaker has a clear message about how artificial intelligence is developed, applied, and controlled: “What’s frightening about AI isn’t terminators and super intelligent machines: it’s the way AI works to centralise knowledge and power in the hands of those who already have it and further disempower those who don’t.”

Speaking at the Falling Walls conference during Berlin Science Week recently, Whittaker told the audience that she was going to "make a case for collective action in artificial intelligence". She turned to AI's beginnings, explaining that while neural nets (algorithms, modelled loosely on the human brain, that are designed to recognise patterns) have existed since the mid-1960s, and convolutional neural nets were already being developed in the 1980s, what was missing was access to sufficient computing power and large enough datasets.
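
To make the idea concrete, here is a minimal sketch in Python (using the PyTorch library) of the kind of convolutional neural net she describes. The layer sizes, the 32x32 input and the ten-class output are illustrative assumptions rather than details from Whittaker's talk; production systems such as AlexNet are far larger and are trained on millions of labelled images.

```python
# A toy convolutional neural net for image classification.
# Illustrative only: sizes and class count are assumptions, not
# the architecture of AlexNet or any system named in the article.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers learn local visual patterns (edges, textures)
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        # A linear layer maps the learned features to one score per class
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(start_dim=1))

# A batch of four random 32x32 RGB images stands in for real labelled data;
# useful accuracy only emerges from huge datasets and massive compute.
images = torch.randn(4, 3, 32, 32)
print(TinyConvNet()(images).shape)  # torch.Size([4, 10])
```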

This all changed around 2012 when the image classification system AlexNet was released. We expect instant image recognition these days – it’s on all our photo apps – but less than a decade ago it only became possible with the availability of ImageNet, a huge dataset amassed by researchers at Princeton and Stanford.

But, as Whittaker points out, ImageNet was created on the back of questionable data collection practices and cheap human labour. "It consists of over 15 million images scraped from Flickr and the web without consent, then labelled by low-paid Amazon Mechanical Turk workers," she says.

It’s difficult to grapple with certain technological developments because, as is the case with medical innovations, we are more than happy as end users to enjoy the fruits of this labour. However, we don’t like to think about the lab animals that may have been sacrificed in the process. And in the case of technological progress, it’s hard to spot the lab rats, especially when they are the end users themselves, or even society at large.

How did the balance of power suddenly tip in favour of the Big Three? “Not long before 2012 the resources that made AlexNet possible weren’t available; they weren’t even conceivable. And these just happened to be resources that major tech companies have in abundance and that few others do,” explains Whittaker.

"Ask any AI start-up and they will tell you they rent their computational infrastructure from one of the Big Three tech companies – Amazon, Microsoft or Google, in that order. They're often at a loss for where to get their own data, so from this perspective, we can understand the popular commodification of internet technologies that began in the 1990s as creating the conditions for the AI of today."

Issues of power

She explains that large technology companies happen to have all the necessary ingredients to push the envelope on AI: they have masses of social data thanks to their vast consumer market reach, as well as powerful infrastructure designed to collect, process and store such data.

“In short, the current crop of AI, the AI that is touching our lives and institutions, is a corporate technology,” stated Whittaker.

“Only five or so companies in the West have the resources to develop this technology at scale, which means that we cannot talk about AI without confronting issues of power. And frankly, we cannot talk about AI without talking about neoliberal capitalism: these systems are already being quietly integrated throughout our social institutions and we’re seeing the costs.”

Whittaker gave some examples of these costs. Amazon Ring is a surveillance AI video camera and doorbell system designed so that people can have 24/7 footage of their homes and neighbourhoods. This sounds all above board, but Amazon has begun partnering with more than 400 police departments in the US, which, Whittaker explained, has resulted in police recommending that residents buy an Amazon Ring for their homes.

“They’ve basically turned cops into door-to-door Amazon salesmen,” she says.

“It sounds funny, but it’s horrible, right? Amazon gets ongoing access to videos like those from Ring so they can continue to train their AI systems – this is really valuable for them. Police get access to a portal of Ring videos that they can use whenever they want, no subpoena required. So, they’re effectively creating a privatised surveillance system of homes across the US.”

And Amazon has filed a patent application for facial recognition in this space.

Meanwhile, in Japan a start-up named Vaak sells surveillance AI that claims to be able to detect shoplifting before it happens, à la the Precogs from Minority Report. It is already being used by department stores to profile shoppers and, as Whittaker points out, there is no transparency about how the AI model works, which makes it almost impossible for an individual to contest unfair profiling.

“It’s really important to know that that is a system in a long line of AI systems that are replicating the logics of discredited race science and physiognomy – claiming to be able to tell interior characteristics and personality traits based on people’s physical appearance, their speech and mannerisms.”

And here's how these systems can get it horribly wrong, with fatal consequences: in 2013 the US state of Michigan began using an automated decision system named MiDAS to detect unemployment benefit fraud, replacing its human fraud investigation staff in the process. It turned out that MiDAS was deeply flawed and got it wrong 93 per cent of the time. But it was too late – the system had falsely accused at least 42,000 Michigan residents, leading to bankruptcy filings and even suicides.

“As you can see, the stakes are extremely high. We’re already seeing these systems used to justify inequality and set the boundaries between the haves and the have-nots,” noted Whittaker.

Surely people can refuse to use the products of big tech and thereby avoid providing free data that further bolsters these systems?

“We all live our lives and make choices, but we’re talking about huge structural issues,” she explains to The Irish Times. “Whether or not you use Gmail, you walk down the street of a smart city and you are being tagged, you are being profiled, you are being recorded by infrastructures of surveillance.

Shadow account

"If I am a Gmail user [and you're not], if I email you, your information is now on Google servers. Facebook has a shadow account for a huge number of users who never signed up for the service because they buy data from data brokers and they do network analysis connecting you with your friends, et cetera."

This cannot be solved by individually choosing to be a more righteous person and opting out of these systems, says Whittaker, who explains that they are being built into our economic, societal and political infrastructures, profiling us without our consent.

“This is why we need this sort of larger-scale structural change and regulation, driven by social movements that are demanding outcomes in service of justice,” she tells me.

To remedy the harms of AI, Whittaker says we need to confront power: “As you probably know, power doesn’t move based on the force of argument. Throughout history, it’s clear that social movements, and organised workers in particular, play a central role in shaping structural change in service of justice. And this is why, after a decade at Google spent researching, writing and speaking about technology’s social consequences, I joined with my colleagues and began labour organising.

“Where little else had, collective action worked. Contracts were cancelled, workplace policies changed, and oversight practices put in place.”

Whittaker says this is only the start. There is much more to be done to take back control of AI and stop the amassing of power within large private entities that have more clout and technological know-how than most governments. And it is time to stop and think before we let AI devalue and degrade human labour, with workers fired at will on the basis of algorithmically determined productivity benchmarks like those used in Amazon warehouses.

Whittaker warns: “It’s time to take advantage of the short window we have before technical infrastructure and AI are so far embedded in our social institutions as to be impossible to pull loose.”