AI use in immigration processing

IRCC releases its new artificial intelligence strategy

Immigration, Refugees and Citizenship Canada (IRCC) has officially rolled out a formal strategy governing its use of artificial intelligence (AI), signaling that the technology is now deeply embedded in the country’s immigration processing machinery after more than a decade of experimentation. Since first deploying AI tools in 2013 and expanding to machine learning in 2018, the department has quietly used automated systems to assess over seven million applications, primarily to triage cases and flag straightforward files for faster officer review while directing more complex matters to human staff.

Currently, AI handles significant volumes of routine administrative work, including triaging roughly four million emails annually through client support centres and powering “Quaid,” a chatbot that responds to about 80 per cent of basic online inquiries without human involvement. The department is also testing more intrusive applications, including fraud detection tools designed to scan documents for anomalies and predictive analytics that would recommend settlement locations to economic immigrants based on past earnings data and regional economic indicators.

However, officials claim that all systems operate under strict limitations. They emphasize that no AI is permitted to refuse an application, that final decisions remain with human officers and that the department avoids “black box” models that cannot provide clear explanations for their outputs. The newly published strategy outlines ongoing efforts to monitor these tools for bias, protect private data and maintain human oversight. Still, critics note that the growing reliance on automation raises persistent questions about transparency and the potential for embedded discrimination in systems processing millions of vulnerable applicants.

As the department pushes forward with experiments in generative AI and advanced analytics, it faces the delicate task of balancing operational efficiency against the life-altering consequences of its decisions.

While the Canadian government has been formalizing its approach to AI in immigration, applicants are increasingly turning to the technology informally to help navigate the nation’s immigration landscape. Many newcomers are using AI to draft crucial documents such as submission letters and to research which immigration pathway might suit them best, often as a cost-saving measure when they cannot afford legal representation.

The AI systems most accessible to these applicants are usually general-purpose chatbots such as ChatGPT. However, this growing usage carries significant risks, as these general chatbots are trained on broad datasets and are not reliable for high-stakes legal matters. They frequently “hallucinate” information, generating confident-sounding but entirely fictitious court cases and laws, a problem highlighted recently when 31 Cameroonian applicants cited the same non-existent case law in their applications. While specialized legal tools exist, the average applicant’s reliance on generic chatbots threatens to undermine the system’s integrity and jeopardize individual cases.

Unlike Canada’s relatively cautious approach focused on triaging applications and reducing processing backlogs, the U.S. Department of Homeland Security has built what critics describe as a sprawling surveillance infrastructure that increasingly targets not just undocumented immigrants but American citizens who exercise their First Amendment rights. The contrast could hardly be starker — while IRCC explicitly prohibits AI from refusing applications and maintains human oversight, United States Immigration and Customs Enforcement (ICE) has deployed AI tools that help identify, track and detain people with deadly consequences.

The human toll of this technological arms race is already visible. Since September 2025, ICE and U.S. Customs and Border Protection (CBP) agents have shot fourteen people, resulting in four deaths, including two U.S. citizens, Renée Good and Alex Pretti, both killed in January 2026. In Good’s case, an ICE agent was recording her on a cellphone moments before fatally shooting her as she observed agents from inside her SUV. The violence has sparked rare public dissent within the tech industry itself. Nearly 1,000 Google employees signed a letter condemning the violent law enforcement actions of ICE and CBP, expressing horror at the killings and demanding the company disclose and terminate its contracts with these agencies.

The contrast between these cases raises an important question: whether AI serves as a tool to assist overwhelmed immigration systems or becomes a weapon turned against the people those systems are meant to serve.

For Canada, the challenge lies in ensuring its cautious approach does not become an excuse for complacency, allowing bias to creep in under the guise of efficiency. The department’s reliance on automation to process millions of applications demands continuous scrutiny, not just periodic reassurances. Alternatively, for the U.S., the question is whether the surveillance infrastructure now being built can ever be meaningfully restrained or whether the country has already crossed a line that makes accountability nearly impossible.

What is clear is that the immigration debate is no longer just about borders and backlogs. It is about surveillance, data collection and, as the deaths in Minneapolis and the application backlogs in Ottawa make plain, the consequences of getting it wrong are not theoretical. They are measured in ruined lives, broken families and a fundamental erosion of trust in the institutions meant to serve the public.