In the offices of civil society organizations in Toronto, Ottawa, and Vancouver, in rooms full of people who have spent years carefully considering what ungoverned technology does to human rights, there is a frustration with Canada's AI legislation that goes beyond typical policy disagreement. The Artificial Intelligence and Data Act's detractors don't just believe the law could be improved. Many contend it was misconceived from the start, and that it would be better to step back and ask a more fundamental question: for whom is this law truly intended?
Canada attempted to create a federal framework for regulating artificial intelligence when it introduced AIDA as part of Bill C-27 in 2022. The aspiration was genuine. As one of the first nations with a national AI strategy, Canada recognized that the technology needed some legislative scaffolding, both for international positioning and for public trust. The EU was constructing its own framework. The United Kingdom was observing. The US was debating. Canada, whose university ecosystem hosts some of the world's leading AI research, wanted to be perceived as a responsible actor: a nation that could attract AI investment while demonstrating that it takes the risks seriously.
| Canada’s AI Act (AIDA) — Key Information | |
|---|---|
| Legislation Name | Artificial Intelligence and Data Act (AIDA) — introduced as part of Bill C-27 in 2022; Canada’s primary federal framework for regulating artificial intelligence in commerce and trade |
| Legislative Status | By early 2025, AIDA faced significant parliamentary and public challenges — critics demanded foundational redesign rather than amendment, and the bill’s progress was slower than originally anticipated |
| Core Criticism | Advocacy groups argue AIDA is structured around commercial AI regulation — prioritizing innovation and trade over the protection of human rights, privacy, and labor protections for vulnerable populations |
| Organizations Opposing | Over 19 civil society organizations and numerous legal experts signed open letters to the government demanding a rights-centric redesign of the legislation before it proceeded to committee consideration |
| Enforcement Weakness | Critics identify vague definitions throughout the bill and insufficient detail on enforcement mechanisms — arguing the law as written lacks the procedural teeth to meaningfully address AI-driven harms |
| **Public Opinion & Regulatory Context** | |
| Canadian Public Concern | Approximately 81% of Canadians report privacy concerns regarding AI, and 78% believe AI use should be regulated; public sentiment is significantly ahead of legislative progress on the issue |
| Privacy Commissioner Role | The Office of the Privacy Commissioner of Canada (OPC) is actively engaged, working to ensure privacy rights are embedded in AI governance rather than treated as secondary to commercial considerations |
| Comparison Point | The EU AI Act, which came into force in 2024, places high-risk AI applications under mandatory human rights and transparency requirements — critics cite this framework as the standard Canada’s legislation falls short of |
The expanding coalition of organizations opposing the Act argues that the balance AIDA struck clearly favored the business side. Critics contend that this framing produces a measure focused on governing AI as a commercial input rather than a societal force. The law's authority rests on Canada's constitutional trade and commerce power, which shapes what it can and cannot address. Labor concerns raised by algorithmic management systems, the risks of AI-driven discrimination, and the surveillance implications of automated decision-making appear in the legislation, if at all, in ways that organizations representing affected communities believe fall short of the true scope of the threat.
Over nineteen organizations, along with a significant number of legal and technical experts, issued open letters to the government requesting that AIDA be reconsidered at the foundational level before moving further through the parliamentary process. The letters asked for a different strategy, not small changes: one that begins with human rights rather than arriving at them after the business framework is established. This is not a fringe position. It represents a sizable portion of informed Canadian civil society, including groups that directly serve the communities most likely to suffer AI-driven harm: low-income people, racialized communities, and individuals who encounter automated systems in social services, employment, and housing but lack the capacity to effectively challenge those systems.

The enforcement criticism merits special attention because it touches a persistent issue with technology legislation worldwide. Many have attacked AIDA's definitions of key concepts, such as what makes an AI system "high impact," what constitutes harm, and what obligations apply at each tier of risk, as too vague to offer useful guidance or reliable accountability. Vague definitions produce flexible compliance, which typically benefits the side with greater resources to exploit ambiguity. Legal uncertainty is manageable for technology firms developing AI products. It is far less manageable for those harmed by an AI decision that falls into a definitional gray area.
The public opinion data behind this debate is worth considering because it is exceptionally clear. About 81% of Canadians say they are worried about AI and privacy. Some 78% say the use of AI should be regulated. These are not niche positions; they reflect a persistent, widespread public demand for meaningful oversight. The gap between that level of public concern and the proposed legislation raises a question the Office of the Privacy Commissioner of Canada has been trying to address through its own regulatory engagement: would the bill as written truly provide the protection Canadians seem to be asking for, or would it merely create the appearance of regulation while largely ignoring the underlying risks?
The most frequently cited alternative model is the EU AI Act, which came into force in 2024. It is a framework that categorizes AI applications by risk level and places mandatory human rights and transparency requirements on the highest-risk categories, with enforcement mechanisms that go beyond the voluntary or lax approaches critics find in AIDA. Canada is not required to adopt that model, and there are legitimate questions about whether it would work within Canada's constitutional and commercial framework without adjustment. But the comparison remains useful for illustrating what rights-centric AI policy might actually look like.
Watching this legislative process, the impression is that Canada arrived at this point with the best of intentions, but instead of beginning from the values the technology is actually testing, it produced a bill that reflects the limitations of drafting technology law through the constitutional frames available. The coalition of objecting organizations is asking Parliament to bridge that gap between intent and instrument. Whether it will remains genuinely unclear.