Blackballed by machine learning: how algorithms can destroy your chances of getting a job


The Guardian has published a long excerpt from Cathy O'Neil's essential new book, Weapons of Math Destruction, in which O'Neil describes how shoddy machine-learning companies have come to dominate hiring for waged work, selling their dubious products to giant companies that use them to decide who can and can't work.

Because so many of America's biggest employers use these systems, it can be nearly impossible to find work if their secret, unaudited models decide that you're a bad hire. What's more, many of the models' litmus tests are just proxies for race, poverty, and disability — things that companies are not legally allowed to consider when hiring (unless they're being considered by unaccountable software provided by a third party).

This hurts everyone, not just the people who get blackballed. Because the machine-learning companies that supply this HR-ware don't refine their models based on how their predictions pan out, they end up excluding plenty of people who'd be excellent hires and recommending people who are no good for their customers. O'Neil contrasts this with the statistical tools used by major-league sports teams to decide which athletes to draft, where the goal is to find the best people, not just tick a box or push a product, and shows just how reckless and low-quality these hiring tools are.


What's worse, after the model is calibrated by technical experts, it receives precious little feedback. Sports provide a good contrast here. Most professional basketball teams employ data geeks, who run models that analyse players by a series of metrics, including foot speed, vertical leap, free-throw percentage, and a host of other variables. Teams rely on these models when deciding whether or not to recruit players. But if, say, the Los Angeles Lakers decide to pass on a player because his stats suggest that he won't succeed, and then that player subsequently becomes a star, the Lakers can return to their model to see what they got wrong. Whatever the case, they can work to improve their model.

Now imagine that Kyle Behm, after getting red-lighted at Kroger, goes on to land a job at McDonald's. He turns into a stellar employee. He's managing the kitchen within four months and the entire franchise a year later. Will anyone at Kroger go back to the personality test and investigate how they could have got it so wrong?

Not a chance, I'd say. The difference is this: Basketball teams are managing individuals, each one potentially worth millions of dollars. Their analytics engines are crucial to their competitive advantage, and they are hungry for data. Without constant feedback, their systems grow outdated and dumb. The companies hiring minimum-wage workers, by contrast, act as if they are managing herds. They slash expenses by replacing human resources professionals with machines, and those machines filter large populations into more manageable groups. Unless something goes haywire in the workforce – an outbreak of kleptomania, say, or plummeting productivity – the company has little reason to tweak the filtering model. It's doing its job – even if it misses out on potential stars. The company may be satisfied with the status quo, but the victims of its automatic systems suffer.
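The asymmetry O'Neil describes is easy to see in miniature. Here's a minimal sketch (in Python, with made-up features, weights, and "ground truth"; nothing here is drawn from any real hiring or scouting product): one toy model is corrected against observed outcomes, basketball-style, while the other is a frozen screening filter that never learns what happened to the people it rejected.

```python
# Illustrative sketch only: toy models with and without outcome feedback.
# All features, weights, and the "ground truth" are hypothetical; nothing
# here comes from any real hiring or scouting system.

import random

random.seed(42)

def predict(weights, features):
    """Score a candidate as a weighted sum of their features."""
    return sum(w * f for w, f in zip(weights, features))

def observed_performance(features):
    """Hidden reality: success actually depends on the second feature,
    which the experts' day-one calibration nearly ignores."""
    return 2.0 * features[1] + random.gauss(0, 0.1)

# Day-one calibration by "technical experts": almost all weight on feature 0.
feedback_model = [1.0, 0.1]
frozen_model = [1.0, 0.1]

lr = 0.05  # learning rate for the feedback loop

for _ in range(2000):
    candidate = [random.random(), random.random()]

    # Basketball-style pipeline: compare the prediction with how the
    # player actually performed, and nudge the weights accordingly.
    error = observed_performance(candidate) - predict(feedback_model, candidate)
    feedback_model = [w + lr * error * f
                      for w, f in zip(feedback_model, candidate)]

    # HR-screening-style pipeline: rejected candidates vanish, outcomes
    # are never measured, so there is no error term and nothing updates.

print("with feedback:   ", [round(w, 2) for w in feedback_model])
print("without feedback:", [round(w, 2) for w in frozen_model])
# Typical output: the feedback model converges near [0.0, 2.0], the true
# relationship; the frozen model keeps its day-one miscalibration forever.
```

The point isn't the arithmetic, it's the structure: the first loop has an error signal to drive it, while the second never gets one, so its mistakes are invisible by construction.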

How algorithms rule our working lives
[Cathy O'Neil/The Guardian]