How To Fix Canada’s Proposed Artificial Intelligence Act

Sonja Solomun, Christelle Tessono / Dec 6, 2022

Christelle Tessono is a Tech Policy Researcher at Princeton University’s Center for Information Technology Policy (CITP), where she develops solutions to emerging regulatory challenges in AI governance.

Sonja Solomun is the Deputy Director of the Centre for Media, Technology and Democracy at McGill University, where she is completing her doctorate. She works on platform governance, AI and climate justice.

Canada is finally joining international efforts to regulate artificial intelligence. In June 2022, the Canadian government tabled Bill C-27, the Digital Charter Implementation Act, 2022, consisting of three separate acts, including the Artificial Intelligence and Data Act (AIDA), Canada’s first attempt to regulate AI systems outside privacy legislation.

After years of growing calls to regulate AI, AIDA is an important and encouraging first step. But it requires further work to provide the adequate oversight, accountability, and human rights protections that would bring it in line with international precedents in this space. Together with researchers from McGill University’s Centre for Media, Technology and Democracy, the Cybersecure Policy Exchange at Toronto Metropolitan University, and the Center for Information Technology Policy at Princeton University, we outlined key challenges and recommendations for AIDA in a new report, AI Oversight, Accountability and Protecting Human Rights: Comments on Canada’s Proposed Artificial Intelligence and Data Act.

Below, we summarize our first reactions to the proposed legislation:

1. The Canadian government did not hold a formal public consultation on AIDA.

AIDA came as a surprise to many working in this space. There were no public consultations on Bill C-27, and the previous iteration of the Bill did not include an AI regulatory framework. If closed-door consultations occurred, there appear to be no publicly accessible records to account for them.

As noted by economist and journalist Erica Ifill, the absence of meaningful public consultation is evidenced by the lack of provisions acknowledging AI’s capacity to exacerbate systemic forms of discrimination. A more robust Bill will require meaningful public consultation with the specific goal of enabling greater interaction among technical experts, civil society groups, representatives of marginalized communities, and regulators.

2. Independent oversight is missing.

In its current iteration, AIDA lacks provisions for robust independent oversight of the AI market. Instead, it proposes self-administered audits, ordered at the discretion of the Minister of Innovation, Science, and Industry when a contravention of the Act is suspected.

The audit can be done internally by the company under scrutiny, or by an independent auditor hired at the discretion, and at the expense, of the audited company. However, recent findings by Deb Raji, Peggy Xu, Colleen Honigsberg, and Daniel Ho demonstrate the poor quality of audits when the audited company selects and compensates its own auditor. An adequate audit mechanism would ensure that auditor selection, funding, and scope are established not by the audited company, but by regulation developed through independent oversight.

Moreover, the Bill creates the position of Artificial Intelligence and Data Commissioner, a public servant designated by the Minister. The Commissioner is tasked with assisting the Minister in the enforcement and administration of the Bill, yet has no power to draft regulations or to enforce AIDA beyond the Minister’s discretion. Because they report directly to the Minister, the Commissioner cannot make critical policy interventions independently.

To effectively regulate the AI market in Canada, we therefore recommend that an independent body be vested with the power to administer and enforce the law. We suggest empowering the existing Office of the Privacy Commissioner, or creating a new independent body that can enforce the Act.

Canada can look to several international examples. The European Union’s Digital Services Act (DSA) provides an additional layer of transparency and oversight by mandating audits by independent third-party auditors with technical knowledge of algorithms as well as other relevant expertise. The DSA would give national authorities – “Digital Services Coordinators” – and, in some circumstances, the European Commission, the authority to conduct on-site inspections of these companies.

Closer to home, the United States’ proposed Algorithmic Accountability Act would have authorized and directed the Federal Trade Commission (FTC) to issue and enforce regulations requiring certain entities that use personal information to conduct impact assessments and "reasonably address in a timely manner" any identified biases or security issues.

3. AIDA excludes government institutions.

The Act does not apply to products, services, or activities under the direction of the Minister of National Defence, the Canadian Security Intelligence Service (CSIS), the Chief of the Communications Security Establishment (CSE), or “any other person who is responsible for federal or provincial departments or agencies”.

The absence of regulation for these law enforcement and public safety agencies poses significant human rights risks. The Royal Canadian Mounted Police’s unlawful use of facial recognition technology from Clearview AI and the Department of National Defence’s procurement of two AI-driven hiring services illustrate a dangerous precedent that the Canadian government must address.

Meanwhile, the European Union’s Artificial Intelligence Act only exempts AI systems developed or used exclusively for military purposes. This is only a partial solution. Given the country’s history of unlawful use of AI by public bodies, it is imperative that AIDA’s framework be broadened to include government institutions.

4. Definitions of AI systems are inconsistent.

Bill C-27 also has significant definitional inconsistencies with regard to AI systems, which could lead to an uneven application of the legislation.

To avoid this, it is crucial for Bill C-27 to adopt definitions that are consistent and technologically neutral, i.e., definitions that address the source of concern around a technology rather than the technology itself. It is also important that definitions be future-proofed enough to account for AI’s propensity to exacerbate existing forms of systemic discrimination, such as sexism and racism.

Instead of defining AI systems based on a limited number of techniques – such as predictive analytics, genetic algorithms, machine learning, or deep learning – the legislation could define these technologies based on their application and how end-users interact with them. One possibility is to define AI systems by their ability to generate outputs such as predictions, recommendations, and other types of decisions.

Compared to the EU’s prescriptive approach to classifying “high risk” AI systems, AIDA relies on a more principles-based approach, leaving the key term “high impact system” to be defined in future regulation.

5. Bill C-27 fails to adequately address the human rights risks of AI systems.

More broadly, Bill C-27 does not sufficiently address the human rights risks that AI systems pose, putting it out of step with international precedents. Surprisingly, there are no explicit provisions acknowledging the well-established disproportionate impact these systems have on marginalized populations such as BIPOC, 2SLGBTQIA+, economically disadvantaged, disabled, and other equity-deserving communities in Canada.

To address these important gaps, the government should consider developing a framework on the processing of biometric information, providing heightened protections for children under 18, and including explicit prohibitions on certain algorithmic systems and practices.

For instance, the EU AI Act prohibits certain practices, such as using AI for "real time" biometric identification of individuals in public spaces for law enforcement, social scoring systems, systems intended to subliminally manipulate a person’s behaviour, and systems likely to cause physical or psychological harm. AIDA, by contrast, does not currently outline any outright prohibitions on AI systems, including those deemed to present unacceptable risk.

Illinois’ Biometric Information Privacy Act (BIPA) also outlines strong prohibitions against the private collection of, disclosure of, and profit from biometric information, along with efforts (in both Illinois and Massachusetts) to restrict its use by law enforcement.

But there is one area of AIDA that could be especially promising. Given Canada’s stated emphasis on “protecting children with Bill C-27,” we remain hopeful that the government will include special category status for children under 18 and will further elaborate on AIDA with strong privacy protections by default, especially against commercial use of children’s data.

We are encouraged by the inclusion of valuable rights to erasure for children’s data in Bill C-27, which also treats the personal information of minors as “sensitive information”, a significant step according to legal experts. Since the age of majority is not defined in the Bill (and poses some jurisdictional tensions among the different Canadian provinces), Canada should follow the United Kingdom’s Age Appropriate Design Code (“Children’s Code”), which sets under-18 as the legal age of a child and outlines design standards to minimize children’s data collection by default.

- - -

Overall, AIDA has significant shortcomings and requires substantial revision. We hope that in the coming months the Canadian government will be receptive to these recommendations and move towards an AI governance framework that centers accountability, independent oversight, and the protection of human rights.
