Just down the street from the glass entrance of the Royal Canadian Mounted Police headquarters in Ottawa, the gothic towers of Parliament Hill rise as symbols of oversight and accountability, their reflection caught in the building's facade. The contrast is striking: for months in 2019 and early 2020, one of Canada's most powerful institutions was secretly testing surveillance technology without informing the occupants of those towers.
The RCMP initially denied using any such technology. It later acknowledged that it had been running Clearview AI, a facial recognition program that matches faces against billions of internet-scraped photos, without prior approval from privacy regulators or Parliament. The revelation stung in Canada, a nation that often prides itself on handling privacy differently than its southern neighbor.
## Important Information About the RCMP and AI Facial Recognition Controversy
| Category | Details |
|---|---|
| Organization | Royal Canadian Mounted Police (RCMP) |
| Founded | 1920 |
| Headquarters | Ottawa, Ontario, Canada |
| Controversial Tool | Clearview AI facial recognition software |
| Period of Use | October 2019 – July 2020 |
| Privacy Finding | Ruled a “serious violation” of Canadian privacy laws |
The RCMP admitted to using Clearview AI's software only after the company's client list leaked in early 2020. Officials described the tool as an aid in rescuing victims and said its use was restricted to child exploitation investigations. Given the horrific crimes involved and the pressure officers were under, they may have seen the software as just another investigative shortcut. But shortcuts that involve mass surveillance tend to have repercussions, and the scale of use soon raised eyebrows.
Despite police initially claiming the tool was used only occasionally, more than 500 searches had been carried out, and only a small fraction related to child exploitation cases. The gap between explanation and reality unsettled privacy advocates, who sensed the technology was expanding beyond its original rationale.
The Privacy Commissioner of Canada didn't hold back. In June 2021, the office found that the RCMP's use of Clearview AI violated federal privacy law. According to the ruling, the tool relied on photos collected without permission, amounting to a database of millions of Canadians who had never consented to take part. It is hard to overlook how quietly that database, assembled from LinkedIn profiles, wedding photos, and social media posts, had grown.
The silence in Ottawa's political corridors was telling. Parliament had never approved the technology. There had been no public debate, no hearings. The tool simply materialized inside investigations, operating in the background. Lawmakers appeared caught off guard, forced to confront the unsettling fact that surveillance tools can spread faster than the regulations meant to control them.
The RCMP eventually stopped using Clearview AI. The force launched what it called a National Technology Onboarding Program and unveiled new internal policies, promising stricter oversight. The phrase sounds reassuring, though it remains unclear whether policy changes can keep pace with technologies that evolve faster than legal frameworks. And the larger context is impossible to ignore.

From smartphones to airports, facial recognition has been widely adopted as a tool of convenience and safety, and investors appear to treat it as an inevitable part of modern law enforcement. Yet its growth keeps colliding with privacy standards, forcing societies to decide how much surveillance they are willing to tolerate.
Bundled-up pedestrians passing Parliament Hill in winter hardly glance at the RCMP officers parked nearby; their presence is familiar and unremarkable. That familiarity rests on trust: police are expected to operate within bounds set by laws crafted with public approval. When those lines blur, the response is rarely calm.
Privacy activists worry this episode was a test run for broader AI-powered surveillance. Police chiefs, meanwhile, argue they must keep pace with criminals employing increasingly sophisticated technology. Both arguments have merit; neither fully resolves the tension.
As this unfolds, Canada seems to stand at a quiet turning point. The tools now exist to identify anyone, anywhere, almost instantly. Whether governments will resist that temptation or eventually give in remains unclear. And the question lingers in the chilly air as one stands in Ottawa, where oversight and power share the same skyline.