Following criticism from tech watchdogs and business interests alike, New York City will take more time to work out kinks in a new law intended to identify biases in high-tech hiring tools.

Even before it went into effect on Jan. 1, the measure, billed as a trailblazing racial equity initiative, had stirred confusion. The city’s Department of Consumer and Worker Protection (DCWP) announced before the new year that it wouldn't enforce the law until April 15, while it seeks more clarity.

Under Local Law 144, any employer or employment agency in the city must notify job candidates in New York at least 10 business days before using an “automated” hiring tool, such as artificial intelligence or machine learning, to “substantially assist or replace discretionary decision-making” in hiring.

The tool must be audited for bias “no more than one year prior” to use, and the results must be made public.

New York City is among the first locales to intervene as more companies rely on artificial intelligence to screen resumes, conduct interviews and assess job applicants. California released draft regulations last year that would ban automated tools that discriminate based on protected characteristics, and Illinois recently required employers to notify job candidates if they use AI to judge video interviews.

But there are concerns that the omnipresent tools replicate biases and prejudices and unfairly widen inequality, especially to the detriment of women and people of color.

Without such audits or bias checks, “the technology will ‘screen out’ most Black and brown applicants,” said Selvena Brooks-Powers, the City Council’s majority whip, at a public hearing on the law last year. “Too many of these candidates have been turned away from a job that could change their lives.”

Nonetheless, amid the complaints, the city is holding off on enforcement for now. The law carries penalties of up to $500 for a first violation and $500 to $1,500 for each subsequent violation.

Some of the confusion swirled around what qualifies as an “automated” hiring tool, who should conduct the bias audits, and how. Those concerns were raised last year, ahead of the effective date, in what the agency called “a high volume of public comments.” Employers and their advocates complain that the lack of clarity could slow down operations and leave them legally and financially vulnerable.

Meanwhile, some tech watchdogs said the new law offers companies too many loopholes to escape scrutiny.

“Many employers who rely on hiring tools with different impacts face virtually no pressure to revisit their methods despite increased calls to hold companies accountable on equity commitments,” said Frida Polli, a former neuroscientist and chief data scientist at AI-hiring company Pymetrics, recently acquired by Harver, at the November hearing on the law. “No one is sufficiently informed about the extent of bias.”

“While the sentiments expressed by New York’s business community are more impassioned than ever, it is still a challenge to hold actors accountable for their promises,” Kirsten John Foy, president and CEO of the NYC-based advocacy group Arc of Justice, said at the same hearing.

Some critics complained the law is too broad and could call into question some widely used employment tools, even background checks.

“Presumably, every recruitment, selection, and onboarding tool cannot be covered, as doing so would inflict crippling costs upon employers that would risk noncompliance, as well as abandonment of technology that would result in severe delays and administrative burdens,” Robert T. Szyba and Annette Tyman, employer-focused labor attorneys at law firm Seyfarth Shaw LLP, said in an email to the DCWP commissioner.

Reverting to manual processes would “dramatically impede business operations and frustrate recruitment and hiring efforts in New York City,” the email said.

While some algorithmic auditors generally support the law, they said it is too narrowly crafted, leaving loopholes that let employers exempt themselves from bias audits. For example, the law covers automated tools that “substantially assist or replace discretionary decision-making,” so employers could argue an audit is unnecessary because they don’t give the tool’s results significant weight.

“There are also companies [using tech in hiring] that are at best pseudoscientific snake oil and at worst algorithmic phrenology,” said Scott Allen Cambo, a data scientist at tech compliance company Parity AI, now called Vera, at the November hearing. “We’ve heard these companies say the audit doesn’t need to be good, it just needs to happen, and we have been lucky enough to be in a position to turn down these deals. There are, of course, competitors more than willing to cut corners.”

Confusion has also swirled around what exactly a bias audit must entail. For example, do the audits have to focus on New York City candidates, or can they use a sample data set? And though auditors must be “independent,” tech watchdogs said they fear the definition isn’t clear or strong enough to prevent people employed by the company being assessed from conducting the audits.

Some tech watchdogs suggested the scope of the law should expand, for example, to test the accuracy of automated tools – and even to require bias assessments of all hiring tools, automated or not, like standardized tests.

The law requires employers in New York to publish on their websites the results of an annual “bias audit” measuring the hiring tool’s disparate impact on protected classes such as race and sex.
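The law doesn’t spell out the arithmetic, but disparate-impact checks of this kind typically compare each group’s selection rate against the most-selected group’s. The sketch below is purely illustrative, not the audit method the law prescribes: the sample data and group labels are hypothetical, and the four-fifths threshold comes from the EEOC’s informal guideline rather than from Local Law 144.

```python
from collections import defaultdict

def impact_ratios(candidates):
    """Each group's selection rate divided by the highest group's rate,
    a common way to measure disparate impact."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in candidates:
        total[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / total[g] for g in total}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, 1 if advanced, 0 if screened out).
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
for group, ratio in impact_ratios(sample).items():
    # The EEOC's informal "four-fifths rule" flags ratios below 0.8;
    # Local Law 144 itself sets no numeric pass/fail threshold.
    flag = "needs review" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```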

You can weigh in on Local Law 144 at DCWP’s second public hearing about the measure on Jan. 23.