Canadian Judge Rules Predictive Policing Data Is Not Constitutionally Protected

On a Tuesday morning in Vancouver’s Downtown Eastside, you’ll notice the same things on practically every block: people moving slowly, the smell of coffee drifting from corner stores, the occasional flash of a police cruiser parked at an intersection. What you won’t notice is the algorithm working in the background, silently determining which six locations in the city are most likely to experience a break-in within the next two hours. The Vancouver Police Department has been running its GeoDASH system for years, and the officers patrolling those flagged blocks may not fully understand, or think much about, why they were assigned there. Go, the system says. So they go.
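
GeoDASH’s internals are not public, so any description of its mechanics is necessarily a guess. But area-based prediction tools of this general kind typically score small geographic cells on recent incident history and flag the highest-scoring few for the next patrol window. A minimal sketch of that general pattern, with entirely hypothetical cell names, weights, and decay constants:

```python
# Illustrative sketch only: GeoDASH's actual model is not public, and
# every name, weight, and decay constant here is hypothetical. The aim
# is just to show the shape of area-based prediction: score grid cells
# from recent incident history, then flag the top-k for the next window.
from collections import Counter

def flag_hotspots(recent_incidents, k=6):
    """recent_incidents: list of (cell_id, hours_ago) pairs.
    Returns the k cells with the highest recency-weighted scores."""
    scores = Counter()
    for cell_id, hours_ago in recent_incidents:
        # Newer incidents count for more (simple exponential decay).
        scores[cell_id] += 0.5 ** (hours_ago / 24.0)
    return [cell for cell, _ in scores.most_common(k)]

incidents = [("cell_17", 3), ("cell_17", 20), ("cell_04", 5), ("cell_09", 40)]
print(flag_hotspots(incidents))  # ['cell_17', 'cell_04', 'cell_09']
```

The point is not the specific math but the shape of the output: a short ranked list of places, detached from any individual suspect.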

This is the quiet reality of predictive policing in Canada, and it raises legal questions the courts are only now beginning to address seriously. The central one is whether using area-based algorithmic predictions to direct police deployment, and then to detain individuals in those areas, violates Section 9 of the Canadian Charter of Rights and Freedoms, which protects against arbitrary detention. The question is framed most clearly in a 2021 analysis by legal scholar Kaitlynd Hiller, published in the Manitoba Law Journal. The answer is not straightforward, and the direction Canadian judicial thinking has taken so far genuinely troubles civil liberties advocates.

Core Issue: Whether predictive policing data and algorithmic tools attract constitutional protection under the Canadian Charter of Rights and Freedoms
Relevant Charter Section: Section 9, protection against arbitrary detention
Key Legal Analysis: “Predictive Policing and the Charter,” Kaitlynd Hiller, Manitoba Law Journal (2021)
Primary Research Report: “To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada,” Citizen Lab and University of Toronto (September 2020)
Report Authors: Kate Robertson, Cynthia Khoo, and Yolanda Song
Technologies in Use (Canada): GeoDASH (Vancouver PD), Palantir Gotham (Calgary PS), SPPAL (Saskatchewan), facial recognition (multiple agencies), automated licence plate readers (nationwide)
Core Constitutional Risk: Algorithmic outputs used to justify investigative detention in flagged “high-crime” areas
Identified Discrimination Risk: Feedback loops from historically biased policing data perpetuating over-policing of minority and low-income communities
Key Recommendations (Citizen Lab): Moratorium on use of past police datasets; federal judicial inquiry into predictive policing tools
Comparative Jurisdiction: United States (PredPol, Chicago SSL, Operation LASER; NYPD CompStat as a 1990s precursor)
Canadian Human Rights Body: Canadian Human Rights Commission, which published a facial recognition and policing report in April 2022
Reference Website: Citizen Lab, “Algorithmic Policing in Canada Explained”

The main argument against extending constitutional legitimacy to predictive policing data starts with how the algorithms are built: they are trained on historical crime data, which in Canada, as in most colonial nations, reflects decades of over-policing in low-income and minority communities. Feed that data into a machine learning model and ask it to identify high-crime areas, and you don’t get an objective picture of where crime actually occurs; you get a map of where police have historically chosen to look. The system then sends officers back to those same areas, more arrests are made, the arrests feed back into the data, and the algorithm’s predictions appear validated. Researchers at the Citizen Lab and the University of Toronto’s International Human Rights Program documented this cycle in a thorough, meticulous study released in September 2020, calling it a “runaway feedback loop”: a system that reproduces the conditions that created it and treats the result as confirmation.
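
To see why the loop runs away, consider a minimal simulation, not taken from the report, in which two districts have the same underlying crime rate but one starts with more recorded incidents because it was historically patrolled more heavily. If patrols are allocated in proportion to recorded incidents, and patrols generate new records, the inherited imbalance reproduces itself indefinitely:

```python
# A minimal simulation of the feedback loop, not taken from the report.
# Two districts share the SAME underlying crime rate, but District A
# starts with more recorded incidents because it was historically
# patrolled more heavily. Patrols are allocated in proportion to the
# records, and every patrol feeds new records back into the data.
import random

random.seed(0)
TRUE_RATE = 0.1                  # identical true crime rate in both districts
recorded = {"A": 60, "B": 40}    # inherited, biased starting data

for year in range(10):
    total = sum(recorded.values())
    # Deploy 100 patrols per year, split according to past records.
    patrols = {d: round(100 * recorded[d] / total) for d in recorded}
    for district, n in patrols.items():
        # Each patrol observes a crime with probability TRUE_RATE,
        # and each observation goes straight back into the dataset.
        recorded[district] += sum(random.random() < TRUE_RATE for _ in range(n))

print(recorded)  # the 60/40 imbalance persists despite equal true rates
```

Nothing about District A is actually more dangerous; the only asymmetry is in the data the model inherited.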

Hiller’s legal analysis, which draws on both Canadian and American case law, reaches the cautious conclusion that Section 9 rights will likely withstand the introduction of predictive tools, so long as courts maintain the requirement of individualized suspicion for investigative detention. The algorithm alone, she argues, cannot justify stopping someone. An officer must still have an articulable reason to suspect that this particular person has committed or is about to commit a crime. GeoDASH’s designation of an area as high-risk is context: it may inform a reasonable suspicion analysis, but it does not, by itself, justify detention. To date, the courts have refused to let generalized suspicion stand in for particularized suspicion. That holding, thin as it may seem, is currently the primary protection ordinary people in flagged neighborhoods have.

Whether that protection is strong enough in practice remains unclear. What the law says and what actually happens at two in the morning on a block an algorithm has labeled dangerous are very different things. Officers are human. If the area they have been sent to patrol has been pre-labeled high-crime by a system their department trusts, the thumb is already on the scale before they ever speak to a pedestrian. That problem sat at the heart of the Citizen Lab report, and doctrinal analysis of Charter sections can address it only partially.

What makes the Canadian situation especially notable is how much adoption has quietly occurred without any formal legal framework governing it. The Calgary Police Service uses Palantir Gotham, a tool built by a company whose predictive policing work in the United States has drawn sustained controversy and numerous civil rights complaints. The Saskatoon Police Service, in collaboration with the University of Saskatchewan, established its own predictive analytics lab, initially focused on identifying potential victims but with stated plans to expand into assessing repeat offenders and people with mental illness. Several police departments across the country were found to have used or tested Clearview AI, a facial recognition startup that built its database by scraping three billion photos from the internet without consent. None of this was put to a vote. Most Canadians had no idea it was happening.

The Citizen Lab researchers made three specific recommendations on equality and discrimination that deserve more attention than they have received: a federal judicial inquiry into the repurposing of historical police datasets, an immediate moratorium on using those datasets to train predictive algorithms, and a mandatory tracking requirement to identify bias as it emerges. None of these has been incorporated into any comprehensive national policy. The federal government has imposed no moratorium, held no inquiry, and mandated none of the algorithmic transparency that would let outside observers verify whether these tools operate as their makers claim.
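
The report does not specify an implementation for the tracking requirement, but the kind of audit it points toward is straightforward to sketch: compare how often the tool flags each community against that community’s share of the population, and surface disproportionate ratios for human review. The threshold and area names below are hypothetical choices, not ones the report prescribes:

```python
# A sketch of the kind of ongoing bias tracking the recommendation
# points toward: compare each community's share of algorithmic flags
# against its share of the population, and surface disproportionate
# ratios for human review. The 1.25 threshold is a hypothetical choice,
# not one the report prescribes.
def disparity_report(flags_per_area, population_share, threshold=1.25):
    """flags_per_area: {area: count of algorithmic flags}
    population_share: {area: fraction of city population}"""
    total_flags = sum(flags_per_area.values())
    report = {}
    for area, flags in flags_per_area.items():
        ratio = (flags / total_flags) / population_share[area]  # 1.0 = proportionate
        report[area] = {"ratio": round(ratio, 2), "review": ratio > threshold}
    return report

print(disparity_report({"Downtown Eastside": 70, "Westside": 30},
                       {"Downtown Eastside": 0.2, "Westside": 0.8}))
# {'Downtown Eastside': {'ratio': 3.5, 'review': True},
#  'Westside': {'ratio': 0.38, 'review': False}}
```

An audit like this cannot say why a disparity exists, only that one is accumulating; that is precisely why the report pairs tracking with an inquiry rather than treating it as a fix on its own.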

As all of this unfolds, the legal and policy framework appears to be running roughly a decade behind the technology. That is not unusual, but it is unsettling when the stakes include someone walking home from work through a neighborhood an algorithm has deemed suspicious, being stopped, asked to explain their presence, and sent on their way having done nothing wrong. The data that drove that encounter is not constitutionally protected. It is worth asking whether it should be, and exactly who stands to gain if the answer is no.
