Canadian Immigration Officers Used Emotion-Detection AI Without Consent, Report Finds

Nobody particularly enjoys standing in line at a Canadian border checkpoint: the fluorescent lights, the waiting, the mild anxiety that creeps in even when you haven’t done anything wrong. What travelers did not know, and could not have known, is that an AI system may have been watching their faces, reading their expressions, and trying to judge whether they were telling the truth about the purpose of their visit.

According to a recent report, Canadian immigration officials tested emotion-detection AI on travelers without obtaining their consent. The technology, designed to interpret micro-expressions and other emotional cues, was deployed as part of what appears to have been an experimental approach to border security. How many people were exposed to the system, and what became of the data it gathered, remains unknown. That lack of transparency has drawn sharp criticism from civil liberties organizations, privacy advocates, and immigration lawyers, who contend that such systems are both unethical and unlawful.

| Key Information | Details |
| --- | --- |
| Organization | Immigration, Refugees and Citizenship Canada (IRCC) |
| Technology Used | Emotion-detection AI / facial recognition technology |
| Years Active | 2018–present (AI use); specific emotion-detection trials unclear |
| Governing Body | Minister of Immigration, Refugees and Citizenship Canada |
| Deputy Minister | Dr. Harpreet S. Kochhar |
| Chief Digital Officer | Jason Choueiri |
| Key Concern | Lack of informed consent, algorithmic bias, privacy violations |
| Reference Source | Immigration, Refugees and Citizenship Canada |

Artificial intelligence is not new territory for Immigration, Refugees and Citizenship Canada. Since 2018, the department has used machine learning and advanced analytics to process applications, flag fraud, and streamline operations. On paper, the reasoning makes sense: Canada handles millions of immigration applications annually, and AI promises efficiency. Emotion-detection technology, however, is different in kind. It does more than flag inconsistencies in documents or sort data; it claims to read a person’s intentions, inner states, and sincerity from their facial expressions.

The problem is that emotion-detection AI does not work the way its proponents claim. Numerous studies have found these systems unreliable, biased, and scientifically dubious. Facial expressions map poorly onto emotions, and emotions map poorly onto honesty. A nervous smile could mean someone is lying, or simply that they are anxious about being questioned by a person in authority; the AI cannot tell the difference. It is, in effect, guesswork with consequences.
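Even setting accuracy claims aside, the arithmetic of screening works against systems like this. Here is a minimal sketch of the base-rate problem, using entirely hypothetical numbers (the report cites no accuracy figures for the system, so every value below is an assumption):

```python
# Toy Bayes-rule calculation with made-up numbers: even a deception
# detector far more accurate than any published emotion-recognition
# system mostly flags honest people when deception is rare.

def flag_precision(prevalence: float, sensitivity: float, specificity: float) -> float:
    """P(traveler is actually deceptive | the system flags them)."""
    true_flags = prevalence * sensitivity            # deceptive and flagged
    false_flags = (1 - prevalence) * (1 - specificity)  # honest but flagged
    return true_flags / (true_flags + false_flags)

# Hypothetical figures: 1% of travelers deceptive, detector 90% accurate
# at catching deception and 90% accurate at clearing honest travelers.
p = flag_precision(prevalence=0.01, sensitivity=0.90, specificity=0.90)
print(f"Share of flagged travelers who are actually deceptive: {p:.1%}")
# -> roughly 8.3%: under these assumptions, more than nine in ten
#    flags would point at honest travelers.
```

Under those assumed numbers, the overwhelming majority of people the system singles out would have done nothing wrong, and at border volumes that is a great many wrongly scrutinized travelers.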


What makes this revelation especially troubling is the context in which the technology was deployed. An immigration checkpoint is not a neutral setting: power imbalances are stark, many people arrive from vulnerable circumstances, and the stakes (deportation, family separation, loss of refuge) are life-altering. Deploying experimental AI in such a setting without informed consent looks less like innovation than exploitation.

Bias is another issue. Facial recognition and emotion-detection systems have consistently shown lower accuracy for people with darker skin, and for Black women in particular, largely because the algorithms are trained mostly on images of white faces; researchers call the result “algorithmic racism.” In the United States, it has led to wrongful arrests. In Canada, Black refugee claimants have already had their status revoked on the strength of facial recognition matches that were later disputed in court. If emotion-detection AI carries similar biases, and there is every reason to suspect it does, its use at the border could disproportionately harm people of color.
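One reason such disparities slip through is that vendors tend to report a single aggregate accuracy figure. A minimal sketch of the disaggregated audit researchers use to surface per-group gaps; the group names, labels, and counts below are all synthetic, invented purely for illustration:

```python
# Disaggregated accuracy audit on synthetic records: a respectable
# overall score can hide a large accuracy gap between groups.
from collections import defaultdict

# (demographic_group, model_prediction, ground_truth) -- invented data
records = [
    ("group_a", "calm", "calm"), ("group_a", "calm", "calm"),
    ("group_a", "angry", "angry"), ("group_a", "calm", "calm"),
    ("group_b", "angry", "calm"), ("group_b", "calm", "calm"),
    ("group_b", "angry", "calm"), ("group_b", "angry", "angry"),
]

def accuracy_by_group(rows):
    """Per-group accuracy instead of a single pooled number."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, pred, truth in rows:
        totals[group] += 1
        correct[group] += int(pred == truth)
    return {g: correct[g] / totals[g] for g in totals}

overall = sum(p == t for _, p, t in records) / len(records)
print(f"overall accuracy: {overall:.0%}")   # 75% -- looks acceptable
for group, acc in accuracy_by_group(records).items():
    print(f"{group}: {acc:.0%}")            # group_a: 100%, group_b: 50%
```

The pooled number looks tolerable; the per-group breakdown shows one population bearing essentially all of the errors, which is precisely the pattern audits of commercial facial analysis systems have documented.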

Canada’s immigration system prides itself on being fair, transparent, and compassionate. Deputy Minister Dr. Harpreet S. Kochhar has stressed a humanistic approach to AI grounded in accountability, transparency, and anti-racism. In its recently published AI Strategy, the department promises strict governance, ethical oversight, and ongoing stakeholder engagement. Those are the right words. But words unaccompanied by action are meaningless, and the emotion-detection trials point to a troubling gap between policy and practice.

IRCC Chief Digital Officer Jason Choueiri has acknowledged that bias, privacy issues, and cybersecurity threats are among the risks inherent to AI, and has committed to putting checks and balances in place. That commitment is hard to reconcile with a system tested on real people without their knowledge. Consent is not a technicality; it is foundational to both human rights and ethical research. Ignoring it does not just breach privacy law, it corrodes trust.

Canada is supposed to have strict privacy laws. The Personal Information Protection and Electronic Documents Act requires organizations to obtain meaningful consent before collecting personal data. There is no evidence that travelers exposed to emotion-detection AI were notified, let alone asked for consent. Had the government run this experiment in a school or a hospital, there would have been outrage. Why should the border be any different?

The timing of this revelation also matters. IRCC has faced pressure to modernize: to keep pace with technology, shorten processing times, and strengthen program integrity. AI offers a way to do that, but it also offers a shortcut, a way around the harder work of hiring more officers, improving training, and addressing structural injustices. It is tempting to believe that technology can solve problems that are fundamentally human. But technology without accountability is just power without oversight.

This is part of a larger pattern that goes beyond Canada. Governments worldwide are racing to deploy AI in sensitive domains such as welfare systems, border security, and law enforcement, often without adequate safeguards. The tools are marketed as neutral, objective arbiters of reality. They are not. They are built by humans, trained on skewed data, and deployed inside systems already marked by deep inequality. The result is that AI does not eliminate bias; it automates it, scales it, and makes it harder to challenge.

What happens next matters. IRCC could dig in and defend the trials as necessary security experiments. Or it could acknowledge the error, commit to transparency, and ensure that any future AI deployment undergoes rigorous ethical review and genuine public consultation. The latter path is harder, but it is the only one consistent with Canada’s stated values.

For now, travelers entering Canada are left to wonder what else they don’t know. What other systems are watching them? What other experiments are running without their knowledge? The questions raised by the emotion-detection trials remain unanswered. AI has arrived and is not going away; whether it will serve people or merely surveil them is still an open question.
