The state comptroller’s office is calling for stronger oversight of New York City government’s artificial intelligence programs, after an audit identified lapses that it says heighten risks of bias, inaccuracies and harm “for those who live, work or visit NYC.”

The audit of four city agencies from January 2019 to last November turned up inconsistent, “ad hoc and incomplete approaches to AI governance,” and no rules at all in some areas. Without proper oversight, according to the findings, “misguided, outdated or inaccurate outcomes can occur and may lead to unfair or ineffective outcomes.”

Some of the sampled agencies – those audited were the NYPD, the Department of Education, the Department of Buildings and the Administration for Children’s Services – identified potential AI risks and took mitigation steps, while others had no such systems in place.

None kept formal policies on the “intended use and outcomes” of their tools, the report said.

“NYC does not have an effective AI governance framework,” according to the findings. While local law requires city agencies to annually report certain algorithmic tools they use, the auditors noted that “there are no rules or guidance on the actual use of AI.”

Comptroller Thomas P. DiNapoli recommended that the city publish a clear list of the AI programs its government uses, explain why they are being used, and adopt standards to prevent bias and inaccuracy.

“Government’s use of artificial intelligence to improve public services is not new,” DiNapoli said in a statement. “But there need to be formal guidelines governing its use.”

In a statement Tuesday, Ray Legendre, senior director of communications for the city's Office of Technology and Innovation, said the Adams administration has made gains in AI oversight and more are in the offing.

“While much of this audit focused on the work of the prior administration and a different government structure, this administration's recent consolidation of technology agencies and entities under the Office of Technology and Innovation (OTI) umbrella puts the City in a strong position to approach AI in a more centralized, coordinated way," Legendre said. "Further, the office recently announced that it is hiring for director of artificial intelligence and machine learning to ensure city agencies are integrating AI technologies in a productive and responsible manner. OTI has already started to advance this important work in our first year. We look forward to even more progress in the coming months and years ahead.”

According to the report, a 2019 executive order established a reporting framework of “algorithmic tools, policies and protocols” to guide the city and its agencies in the “fair and responsible” use of AI. It also called for setting up a process to resolve complaints from those affected by AI use.

A January 2022 executive order, however, “discontinued” that process before such policies and protocols were established, according to the audit. “In addition, we identified instances where agency tools were not reported or included in the public listing of (AI) tools,” the audit said.

The policies of ACS and the DOE, as detailed in the report, illustrate the divergence in approaches among city agencies. ACS removed certain types of racial and ethnic data to reduce possible bias in its “Severe Harm Predictive Model,” which predicts which children are most likely to experience harm in the future, in order to decide which cases to prioritize for quality assurance reviews.

Meanwhile, the DOE has no such assessment to evaluate bias in schools’ AI tools, including those that analyze classroom discussion patterns and help staff improve their communication skills.

The report also takes issue with some internal agency rules for AI tools. For example, following the passage of the Public Oversight of Surveillance Technology Act, the NYPD created a policy governing the impact and use of its facial recognition software, which is used to identify unknown people. But the department never set an acceptable level of accuracy for the tool, according to the auditors.

This article has been updated to include a response from the city Office of Technology and Innovation.