University of Toronto researchers develop AI that can defeat facial recognition systems

Image Credit: chombosan / Shutterstock

Facial recognition systems are controversial, to say the least. Amazon made headlines last week over supplying law enforcement agencies with face-scanning tech. Schools in China are using facial recognition cameras to monitor students. And studies show that some facial recognition algorithms have built-in biases against certain ethnicities.

Concerns about encroaching AI-powered surveillance systems motivated researchers in Toronto to develop a shield against them. Parham Aarabi, a professor at the University of Toronto, and Avishek Bose, a graduate student, created an algorithm that disrupts facial recognition systems on the fly by applying subtle transformations to images.

“Personal privacy is a real issue as facial recognition becomes better and better,” Aarabi said in a statement. “This is one way in which beneficial anti-facial-recognition systems can combat that ability.”

Products and software that purport to defeat facial recognition are nothing new. In a November 2016 study, researchers at Carnegie Mellon designed spectacle frames that could trick systems into misidentifying people. And in late 2017, experts at MIT fooled an image classifier into labeling a 3D-printed turtle as a rifle, while researchers at Kyushu University showed that altering a single pixel could derail a classifier's predictions.


Above: The researchers’ anti-facial recognition system in action.

Image Credit: University of Toronto

But this is one of the first solutions that uses AI, according to Bose and Aarabi.

Their algorithm, which was trained on a dataset of 600 faces, spits out a real-time filter that can be applied to any picture. Because it targets highly specific, individual pixels in the image, it’s almost imperceptible to the human eye.
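The pixel-level, near-imperceptible filtering described above can be sketched as follows. This is a hypothetical illustration, not the researchers' code: `apply_privacy_filter` caps every per-pixel change at a small intensity budget so the edit stays invisible to the eye.

```python
import numpy as np

def apply_privacy_filter(image, perturbation, epsilon=2.0):
    """Add a per-pixel perturbation, clipped so no pixel moves by
    more than `epsilon` intensity levels (out of 255)."""
    delta = np.clip(perturbation, -epsilon, epsilon)
    return np.clip(image + delta, 0, 255)

# Toy 8x8 grayscale "image" and a random candidate perturbation.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8)).astype(float)
perturbation = rng.normal(0, 5, size=(8, 8))

filtered = apply_privacy_filter(image, perturbation)
max_change = np.abs(filtered - image).max()  # stays within the budget
```

In the real system the perturbation is produced by a trained network rather than drawn at random, but the imperceptibility constraint works the same way: the change per pixel is bounded, so the filtered photo looks unaltered.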

The two researchers employed adversarial training, a technique that pits two neural networks against each other: a "generator" that produces outputs and a second network that evaluates them. In Aarabi and Bose's system, one network identifies faces while the generator learns perturbations that disrupt that detection, so the face detector effectively plays the discriminator's role.

In the research paper, which is due to be presented at the 2018 IEEE International Workshop on Multimedia Signal Processing, Bose and Aarabi claim that their algorithm cuts the proportion of detectable faces from nearly 100 percent to 0.5 percent.

They hope to make the neural network available in an app or website.

“Ten years ago these algorithms would have to be human-defined, but now neural nets learn by themselves — you don’t need to supply them anything except training data,” said Aarabi. “In the end they can do some really amazing things. It’s a fascinating time in the field, there’s enormous potential.”
