
Automated Decision-Making -- "Advanced Analytics" -- and AI, yeah that AI, so maybe some GOTCHA

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
This is not intended to be a screed opposing the development and implementation of automated decision-making tools. Many PRs, for example, are enjoying the benefits of automated approval, getting new PR cards in as little as two to four weeks. Most should readily agree that is good.

However, it is increasingly important to recognize the extent to which automated decision-making, and the use of what IRCC and other Canadian agencies refer to as "advanced analytics" (incorporating components of what is generally referred to as "AI"), have a big, big role in processing immigration-related applications and procedures.

The decision-making landscape is changing, has already changed a lot, and it is increasingly less clear how the Federal Courts will be able to adequately review the required transparency, intelligibility, and justification for the decisions IRCC makes given the extent to which technology and digital tools in particular are employed (and will be more so) in processing applications. Which gets weedy.

Thus . . .

Warning: Cloudy, Weedy Conditions Ahead (proceed with caution):
(acknowledging this will be far too long to read for those who are content to know it all without bothering to do the homework)

No need to guess: undoubtedly many here are far more and far better acquainted with AI and related automated digital tasking than I am. That is, I am NO tech guy, not close (it has been more than a quarter century, for example, since I was employed in projects developing technology for capturing, organizing, maintaining, and publishing legal information). I am far more a law and bureaucracy guy, focused on content-oriented decision-making. So AI and related topics are a stretch for me.

However, rather little has been shared here about the impact of automated decision-making, advanced analytics, and other components of AI (Artificial Intelligence) on bureaucratic processing, be that within the Canadian government generally or IRCC in particular, except for a number of largely cryptic discussions about Chinook in forums focused on temporary resident programs. Those discussions are largely shallow (although some include links to good resources, such as podcasts sponsored by the Canadian Immigration Institute featuring immigration lawyers Mark Holthe and Will Tao discussing Chinook and related technology tools used by IRCC) and at best gloss over how Chinook and related automated processing affect applicants and their applications. Unfortunately, the Chinook and AI discussions in the immigration forums here tend to be rife with mischaracterizations and misstatements, some outright misleading and erroneous.

Meanwhile, over the course of the last year we have seen the positive impact of automated decision-making on PR card applications, which has culminated recently in the abrupt drop in processing times, virtually overnight falling from two months plus (itself way faster than the four to six months it has often been over the years) to around a couple weeks, currently just 11 days. Great news for some; at the least good news for many, if not most.

But not all PRs needing a new PR card will enjoy near immediate approval. There are some real questions looming, like
-- who benefits? (who will qualify for automated approval)​
-- how will this affect the processing times for PRs who do not get automated approval?​
-- what criteria does IRCC use and how?​
-- what can a PR do to improve the odds of automated approval? reduce the risk of being a "complex" application?​
-- to what extent will processing, and processing timelines, be impacted by electronic tools employing advanced analytics and other AI components, such as decision-making algorithms or machine-learning? and what impact will there be on outcomes?​

To be clear, the implementation of digital processing employing AI components in immigration related matters, including automated decision-making, is not particularly new. The eTA system, for example, which incorporates automated decision-making (no live person involved) in giving visa exempt travelers electronic travel authorization facilitating their boarding flights to Canada from abroad, was developed and implemented more than a decade ago. From 2017 through 2019, in addition to developing and implementing Chinook, IRCC implemented multiple pilot projects utilizing automated decision-making for some visa applications, including the use of AI components like advanced analytics and machine-learning. And now it is clear that due to automated decision-making, automated approval, many PRs are benefitting from the near immediate approval of their applications for a new PR card . . . which according to the IRCC processing times online information actually means "most" "complete" PR card applications.

IRCC is adamant that there is no automated negative decision-making. (In time I will get to particular IRCC and other Canadian government information about this, weedy and nerdy stuff.) And thus, according to IRCC, the automated decision making does not have a detrimental impact, so there is no need, no cause, to require these decisions meet reasonableness standards. No basis for judicial review. No grounds to challenge the process for failing to be transparent and intelligible. No need for IRCC to justify these decisions. Indeed, IRCC is adamant that the processing apparatus, including the criteria employed, should remain behind the confidential information curtain, NOT shared with the public.

It is frustrating how utterly insistent IRCC (backed by the FC) is about this when no special expertise is needed to recognize that denying automated approval is in itself a negative decision resulting in it taking (probably) five times as long to get a new PR card, let alone when the tools have an impact on outcomes. And the practical reality is that outcomes are almost certainly being affected. The latter is very complicated; the best I can say about that for now is to visit the Holthe and Tao podcasts about Chinook, and consider how the courts are responding to Tao's criticisms, such as in decisions like Mehrara v. Canada, 2024 FC 1554, https://canlii.ca/t/k74qm and Haghshenas v. Canada, 2023 FC 464, https://canlii.ca/t/jwhkd among other decisions citing these two cases.

Leading to . . .

Spoiler Alert: I can no longer confidently say IRCC does not engage in GOTCHA games . . . not that I think individual officers are now more likely or prone to deny applications or penalize immigrants for trivial or gotcha reasons, but because digital processing is playing a rapidly increasing role and it is inherently mechanical, making it susceptible, when it has an active part in decision-making, to triggering actions disproportionate to the criteria and to posing excessive hurdles based on trivial or gotcha criteria. IRCC claims to be avoiding the problem by vesting negative outcome decision-making exclusively in officers, not machines. That badly overlooks how severely incremental processing steps can affect the overall process, from steering the application toward unwarranted delays and invoking excessive non-routine processing, to in some cases ultimately steering the process toward focusing on negative criteria, such as risk factors, which do not directly invoke unwarranted denials but which can influence the officials making those decisions to go in that direction. Again, how this happens is complicated, but for the moment suffice it to take notice this is causing some serious concern among immigration lawyers despite IRCC's effort to insulate machine/automated processing tasks from negative outcome decision-making, an effort which is more likely about preventing disclosure of the analytical components and operative criteria than it is about protecting the integrity of decision-making in the system.

This includes, in particular, purportedly non-decision-making tools like Chinook, which IRCC insists does not make or recommend decisions, but which some immigration lawyers are rather vehemently criticizing, including claims it is the lynchpin in a process that leads to the bulk rejection of some applications for temporary resident status and does so largely in-the-dark, without transparency, and in many cases for insubstantial or even unjustifiable reasons.

This is intended to be about more than just the direct impact of IRCC employing automated decision making supported by "advanced analytics" incorporating elements of AI (Artificial Intelligence), including so-called "machine-learning," but also about processing procedures and incremental steps in IRCC decision making more broadly, and the rapidly increasing role of technology and various digital tools, including those which IRCC claims do not involve or use AI, advanced analytics, or "built-in decision-making algorithms."

More to come, including references and links to resources. It will take time.
 

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
RESOURCES . . . a partial introduction . . .

I have not had time to properly organize many of the resources about the employment of AI, advanced analytics, algorithm-based and other automated decision-making, and machine-learning, in processing immigration-related applications.

There's a lot out there to review and digest. It really is weedy. Much is technical, technologically technical, but procedurally and legally technical as well. A rather bad combination.

Moreover, to make matters far more difficult, the substantive elements, ranging from risk indicators and triage criteria, to the factors and categories affecting forking path determinations (such as what is not complex or low complex? what is complex? what is high complex?), are almost entirely kept behind the confidentiality curtain. Not even the lawyers are given access to a huge amount of the information which has a big impact on the processing path for deciding immigration applications. Holthe and Tao have described the situation as like trying to figure things out with only 500 pieces of a 1000 piece puzzle.

I have long been planning to take a deeper dive into what this means and what impact it is having, but kept putting that off while planning to do some homework, a lot more homework actually, hoping to get more fully informed before doing that. I cannot say I have done enough homework about this yet, but it is time to put the role of automated decision-making center stage . . . noting that along with the development and implementation of application processing tools like Chinook (which IRCC says "does not utilize artificial intelligence (AI), nor advanced analytics for decision-making, and there are no built-in decision-making algorithms"), IRCC has been expanding the implementation of electronic tools which definitely employ AI (Artificial Intelligence), at the least going back to some key automated decision pilot projects in 2018/2019.

Six plus years into this it is time to wrestle with what it means, how it works, who it helps, who it hurts.

So, for those interested (genuinely interested), yeah there is a lot of homework to do.

So here are some resources. Again I have not had time to organize these well, and I will start by listing those about Chinook, even though this tool (a suite of tools) is not employed in applications by Canadians (like PR card or PR TD applications, or applications for citizenship so far as I am aware), except perhaps sponsorship applications. And even though IRCC denies that Chinook uses AI or even recommends, let alone makes decisions.

Re Chinook:

CIMM – Chinook Development and Implementation in Decision-Making – February 28, 2024
https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/committees/cimm-feb-28-2024/chinook-development-decision-making.html

CIMM – Chinook Development and Implementation in Decision-Making – October 24, 2023
https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/committees/cimm-october-24-2023/chinook-development-implementation-decision-making.html

CIMM — Chinook Development and Implementation in Decision-Making – February 15 & 17, 2022
https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/committees/cimm-feb-15-17-2022/chinook-development-implementation-decision-making.html

YouTube/podcast videos re Chinook (featuring Mark Holthe and Will Tao) and other sources:
https://www.canadianimmigrationinstitute.com/podcasts/canadian-immigration-podcast?search=chinook
https://cila.co/examining-the-role-of-chinook-in-immigration-decision-making-mehrara-v-canada/



Some Chinook related Federal Court decisions:

Mehrara v. Canada, 2024 FC 1554 , https://canlii.ca/t/k74qm
Haghshenas v. Canada, 2023 FC 464, https://canlii.ca/t/jwhkd
Farshid v. Canada, 2024 FC 1573, https://canlii.ca/t/k76cw
Espinosa Cotacachi v. Canada, 2024 FC 2081, https://canlii.ca/t/k8h9m

IRCC Information About Automated Decision-Making, Advanced Analytics, Machine Learning:

How we use advanced analytics, automation and other technologies
https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/digital-transparency-advanced-data-analytics.html

CIMM – Question Period Note - Use of AI in Decision-Making at IRCC – November 29, 2022
https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/committees/cimm-nov-29-2022/question-period-note-use-ai-decision-making-ircc.html

Uses of technology across the department
https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/digital-transparency-advanced-data-analytics/uses-technology.html

Using technology responsibly
https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/digital-transparency-advanced-data-analytics/technology-responsibly.html

Processing aids
https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/digital-transparency-advanced-data-analytics/processing-aids.html

Tools that protect the security of our immigration system
https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/digital-transparency-advanced-data-analytics/tools-security.html

OLLO — Advanced Data Analytics to Sort and Help Process Temporary Resident Visa Applications – May 16, 2022
https://www.canada.ca/en/immigration-refugees-citizenship/corporate/transparency/committees/ollo-may-16-2022/advanced-data-analytics.html

Some Related, More General Canadian Government Information:

Directive on Automated Decision-Making
https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592

Guide on the use of generative artificial intelligence
https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html

Guideline on Service and Digital
https://www.canada.ca/en/government/system/digital-government/guideline-service-digital.html#ToC10

Policy on Service and Digital
https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32603

Generative AI in your daily work
https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/generative-ai-your-daily-work.html



Some Forum Discussions About Chinook:

Either they have started using Chinook to speed up processing of Work permits or these is something else. Chinook allows them to bulk approve or reject work permits.

Typically they do not like to use Chinook inside Canada but who knows.

Chinook makes things worse because it allows mass approval and mass rejection. Imagine a VO making 100 mistakes with one click... Thats Chinook for you.
I feel the same. This whole boogieman issue is due to TR apps where 80% will get rejected anyways, thanks to chinook. They will probably have cleared CEC and TR2PR by the end of the year. Of course everyone who gets ITA now will have a wait of at least a year next year, because there is enough PR for quota in backlog. But that’s not that bad.
Since Chinook is an AI tool and I don't think it would have the capability to go through the supporting documents since most of them are not text searchable. The most Chinook could do is analyze the form which is machine-readable already and give its summary to the VO. The VO can then choose to refuse on the basis of that analysis by Chinook or go through supporting documents manually.
. . . and so much more . . .
 

scylla

VIP Member
Jun 8, 2010
97,608
23,333
Toronto
My two cents is that there's a big misunderstanding out there of what Chinook is. I don't think it's AI or even ML. It's an EUC / workflow tool, based on everything I've read.

IRCC is certainly using some ML (maybe even more advanced AI) and I'm sure exploring the use of more. However, IMO that's not Chinook, and there's a lot of misunderstanding of what Chinook is / what it does. It's just an EUC. It's never going to be more than that. And at some point it will become obsolete when a more advanced tool takes over.

Again, all based on what I've read. I don't have the inside scoop so may not have it all right.
 

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
My two cents is that there's a big misunderstanding out there of what Chinook is.
To be clear, this topic is not intended to be about Chinook, which after all is not used in processing PR card or PR TD applications (and otherwise not of much relevance for PR admissibility issues, or in regards to another area of interest to me, citizenship applications, at least so far as I currently know). I have made references to Chinook, and resources about that, because that information provides extensive background and context, some key foundational information about tools IRCC employs in its processing and decision-making, which should be helpful for anyone trying to get a fuller picture of the increasing role of automation in IRCC decision-making.

This topic is intended to address the nature and scope of IRCC's use of AI components in particular, and the impact that has on processing applications by non-citizen Canadians (PRs). Which in turn should help inform some PRs about when to make a PR card application, identify risk issues, and be prepared for the range of what will happen.

IRCC is certainly using some ML (maybe even more advanced AI) and I'm sure exploring the use of more.
Make no mistake, IRCC is extensively employing artificial intelligence technology in processing applications and has been for years (see the IRCC information referenced and linked in a post above). And it has recently expanded the use of AI in facilitating automated approvals for several types of applications beyond its use in the eTA system (implemented more than a decade ago) including, in particular, PR card applications.

I am addressing this particular comment separately to clearly emphasize that IRCC has been implementing what it calls "advanced analytics and automation" in processing immigration applications for years, including "machine learning," meaning it is very much currently engaged in utilizing advances in Artificial Intelligence, and has been for years, specifically implementing automated decision-making in some visa application pilot projects back in 2018, for example (and again, the eTA system has been making automated decisions for more than a decade). In particular, I previously referenced and linked IRCC information specifically addressing its use of artificial intelligence from more than two years ago, which in turn references the implementation of automated triaging tools back in 2020, acknowledges some triage tools employ what IRCC calls "more complex black box algorithms," but denies these are used to "support decision-making," and further claims:
"Where advanced analytics is used to support decision-making, IRCC does not use complex algorithmic systems that make decisions in unknowable or unexplainable ways. All rules applied by these systems can be clearly explained."​

But good luck getting information about what those rules are, let alone any clear explanation for them. From details about what particular information is considered, to what criteria are utilized and how the criteria are applied, to what decisions are made and what those decisions are based on, the nuts and bolts of these processing systems are largely redacted from the information that is publicly accessible.

IRCC is basically saying "trust us."

But a key aspect of this, what looms large, are IRCC's disclaimers, its efforts to distinguish tools which it claims are not engaged in decision-making ("not AI" for example) and which (again, according to IRCC) do not "support decision-making." It is readily apparent that these claims are about outcome decisions, as if the scores of sub-step decisions made in processing applications are not really *decisions* even though many of these decisions can and often will have a big impact on the process, the processing time, and in many cases the outcome decisions.

It is very likely that IRCC is overtly framing the role of many of its digital processing tools (and this is quite obvious in its information about Chinook in particular) to separate and insulate processing decisions from outcome decisions, the latter necessarily being public information and subject to judicial oversight/review, in order to keep most of the processing decision-making confidential and outside the scope of judicial review.

But as experienced by scores and scores of immigrants bogged down in processing that takes many months, sometimes years longer, a decision that refers an application to CBSA's NSSD for background screening, let alone a referral to CSIS, can have a huge impact even though that is not an outcome decision (not a determination as to an element of eligibility for example).

In the PR card application context, an automated decision to approve and issue a new PR card, almost immediately, means there is a clearly detrimental decision made in regards to the many other applications which are not approved but instead go into a much longer processing stream (almost certainly at least five times as long, there being little indication that the processing times for these applications will be less than two months, which is more than five times as long as it takes for those getting automated approval). Yet the criteria for this decision-making are not at all transparent but, rather, kept confidential.
 

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
More Re Chinook . . . even though this topic is not really about Chinook . . .

My two cents is that there's a big misunderstanding out there of what Chinook is. I don't think it's AI or even ML. It's an EUC / workflow tool, based on everything I've read.

IRCC is certainly using some ML (maybe even more advanced AI) and I'm sure exploring the use of more. However, IMO that's not Chinook, and there's a lot of misunderstanding of what Chinook is / what it does. It's just an EUC. It's never going to be more than that. And at some point it will become obsolete when a more advanced tool takes over.

Again, all based on what I've read. I don't have the inside scoop so may not have it all right.
Insofar as what I have read in this forum goes, some of which I referenced and quoted, I agree there is a lot of misunderstanding about Chinook and what it actually does.

I disagree, however, that it is merely end-user computing software divorced from IRCC's automated processing. From what I can discern, I agree more with Will Tao, an immigration lawyer and something of an expert on the impact of Chinook in visa application processing, who sees this tool as an integral part of IRCC's automated processing (incorporating advanced analytics and other AI components), subject to some serious pitfalls and the potential for unfair results, despite how adamant IRCC is that Chinook is independent of automated decision-making and advanced analytics, that it is not AI, not even a decision-making tool . . . (yet IRCC otherwise, inconsistently I'd suggest, claims the tool very significantly improves decision-making efficiency in processing visa applications).

My sense is that a lot of the misunderstanding is rooted in a failure to distinguish what directly determines outcome decisions (mostly decisions about specific eligibility elements) versus what influences incremental processing decisions (such as whether the application qualifies for automated approval, or whether the application is referred to NSSD for non-routine investigation, among scores of other processing-step-decisions which are not subject to review by the Federal Court). The forum discussions I quoted above, for example, erroneously equate Chinook with decisions that constitute a bulk rejection of visa applications. That characterization may have come from comments by Will Tao, but those are comments which I believe were intended to be descriptive of the general results not an explanation of the particular processing mechanics leading to those decisions/results.

Will Tao and Mark Holthe navigate the dense details in how Chinook is a big part of the automated decision making apparatus for visa applications, but that gets complicated, sometimes confusing, and is in significant part obfuscated by what IRCC keeps secret.

All of which, again, is weedy, very weedy. And focusing on the nature of the Chinook tool itself misses the point. The point is how automated decision-making, increasingly driven by advanced analytics and other AI components, is having a bigger and bigger impact on application processing, and Chinook is just one of the electronic tools employed by IRCC that is integrated into the processing (for visa applications), which on one hand illustrates the complexity of how information is used, while on the other hand it has become a key component in the decision-making process . . . no matter how much IRCC insists Chinook does not make or recommend decisions.

Again, I did not begin this discussion to talk about Chinook, which really is irrelevant in processing applications by Canadians, such as PR card or PR TD applications, or applications for citizenship, in contrast to applications by Foreign Nationals, many of which will involve Chinook.

Rather, I reference the information about Chinook for background and context, to illustrate the decision-making milieu, and the complex relationship between the decision-making process itself and the information that decisions are based on. As I have been researching this topic (that is the role of AI, not Chinook, and again I am no tech guy, so much of this is difficult going for me), I have found the information about Chinook, offset by the misinformation about it, and the discussion of it in the Holthe and Tao podcasts, to be the most comprehensive overview which illuminates the pitfalls, the risks, the potential for causing unfair processing delays and in some cases unfair outcomes. But again, I defer to Will Tao's extensive descriptions and explanations about how this happens, how it works, how it fails, and why.

Meanwhile, the Federal Court dismisses, quite casually, the significance of Chinook in visa application decisions, as discussed in the cases I referenced and linked above, despite the affidavit (consisting of more than a thousand pages) submitted by Will Tao in the Mehrara v. Canada, 2024 FC 1554, https://canlii.ca/t/k74qm case, which I understand detailed, in great depth, how the use of Chinook can undermine reasonable decision making and should be subject to judicial scrutiny (I have not seen that affidavit, just references to and discussions about it).

Meanwhile, in regards to Chinook becoming obsolete and being phased out, it was nearly three years ago (May 2022) when Mark Holthe and Will Tao did a podcast discussing how Artificial Intelligence might displace Chinook, which I previously linked and link here again:
https://www.canadianimmigrationinstitute.com/podcasts/canadian-immigration-podcast/episodes/2147725285

But in that podcast they go into some depth about how AI and advanced analytics are integrated with Chinook in application processing, and the impact these things combined are having. Moreover, it needs to be noted, notwithstanding their assessment that Chinook was "on the way out" (again, that was back in May 2022), subsequent IRCC documentation illustrates that it still plays a big role. And much more recently, in a podcast about security screening which has been discussed in the Citizenship forum here, both Holthe and Tao refer to the ongoing, oftentimes problematic role of Chinook in visa application processing.

Moreover, even if the particular Microsoft Excel-based tool called Chinook is displaced, the underlying functions will almost certainly be incorporated into whatever particular electronic tools IRCC utilizes going forward.

The main thing to grasp, to recognize, is that Chinook, or whatever replaces it, is an electronic tool that is almost certainly being used in conjunction with AI components, including advanced analytics in particular, probably machine-learning as well. So IRCC's claims that their advanced analytics and automation systems "operate independently of Chinook" are, frankly, disingenuous, akin to saying driving an automobile is independent of the electronic maps a human driver uses to decide where to go (the maps do not decide or recommend a route, are not connected to the functional operation of the vehicle, but the information in the maps will have a big influence in the driver's decision making).

At the risk of oversimplifying things, my sense is that Chinook provides a means of viewing information somewhat like looking at a map. But that map is now increasingly populated by information derived from or influenced by digital processing, which in turn is increasingly dominated by advanced analytics and other AI components. And, moreover, the information in that map is increasingly part of what is assessed by AI components in application processing, and ultimately in the outcome decision-making as well as sub-step decision-making.

So, Chinook is probably our most expansive example of the highly complex relationship between the information that decision-making is based on and the mechanics of the decision-making itself . . . including, again, incremental sub-step decisions.
 

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
Automated decision making in processing PR card applications in particular:

I did not take note of just when, precisely, IRCC began reporting PR card application processing times comparable to today's posted time, just 11 days, rather than the two plus months (60 plus days, and often months longer than that) it was posting previously. But this change was not all that long ago. Early this year? Late last year? Not before late last year anyway.

I cannot conclusively say the processing time change (from when processing times were 8 weeks plus and then suddenly dropped to around a couple weeks) reflects when a majority of PR card applications (that is "most," meaning at least one more than half) began getting automated approval. This is because it was just late last year IRCC also began revising how it reports processing times (clearly intended to deflect processing time criticisms and to deflate claims based on the processing time information).

However, there is little doubt now (for now anyway) that most PR card applications not involving complex issues will be given automated approval, and this is fast, very fast. As I have periodically discussed over the course of the last year, it has been increasingly apparent that more and more online PR card applications were getting automated approval, and that this approval happens almost immediately upon submission.

And, again, that is great news for all those PRs who are well settled in Canada, meeting the Residency Obligation based on days present in Canada (I am almost, though not entirely, certain that relying on credit for days outside Canada, such as days accompanying a citizen-spouse, will not qualify for automated approval), and who otherwise do not have issues lurking in their files (no potential inadmissibility concerns, no criminal or security concerns, no GCMS alerts or flags). Most will get near immediate approval resulting in a PR card in the mail within two to four or five weeks. It may take longer to get a renewed driver's licence than it takes to get a new PR card. Wonderful. Wonderful for those who benefit.

What about those who do not get automated approval?

What this means for PRs who do not qualify for automated approval, however, is not clear . . . And herein lies the rub.

The dramatic reduction in processing times for most PR card applications, just 11 days as of today, is clearly the product of automated approvals. Not that long ago the processing time regularly exceeded two to three months, and at times has been several months longer than that. Again, the fast automated approval is good news for many.

But this now results in three different processing streams for PR card applications, with rather different processing times. How long it takes is no longer roughly a matter of applications that are routinely processed versus those subject to non-routine processing; applications are now divided into three groups (a rough sketch of this routing follows the list):

-- PR card applications qualifying for automated decision-making, those which are triaged as not complex or low complex; these are approved almost immediately, PR cards in the mail within two to four weeks (and given the posted processing times, apparently this applies to "most" "complete" applications now)​
-- PR card applications triaged complex or medium complex, to be processed routinely but by an officer, requiring the application go into a queue for officer review; we do not know the processing times (which is mostly the queue wait time) for these but it quite likely is at least two or three months, at least five times as long as it is currently taking for most complete applications​
-- PR card application triaged high complex, subjecting the application to non-routine processing, timelines ranging from three or four months, up to a year​
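
To make that three-way split concrete, here is a minimal sketch, in Python, of what routing an application into one of those streams could look like. This is purely illustrative: IRCC's actual criteria are confidential, so every field name, rule, and threshold below (relies_on_hc_relief, has_file_flag, the 730-day check, and so on) is a hypothetical stand-in, not a description of IRCC's system.

```python
from dataclasses import dataclass
from enum import Enum

class Stream(Enum):
    AUTOMATED_APPROVAL = "routine / low complexity"   # approved almost immediately
    OFFICER_QUEUE = "complex / medium complexity"     # routine, but waits for an officer
    NON_ROUTINE = "high complexity"                   # secondary review, much longer

@dataclass
class PRCardApplication:
    days_in_canada_last_5_years: int       # physical presence credit only
    relies_on_hc_relief: bool              # asking for H&C relief from the RO
    relies_on_outside_canada_credit: bool  # e.g. days accompanying a citizen spouse
    has_file_flag: bool                    # hypothetical stand-in for any GCMS alert/flag

def triage(app: PRCardApplication) -> Stream:
    """Hypothetical triage rules -- NOT IRCC's actual (confidential) criteria."""
    if app.relies_on_hc_relief or app.has_file_flag:
        return Stream.NON_ROUTINE
    if app.days_in_canada_last_5_years < 730 or app.relies_on_outside_canada_credit:
        # even one day short of the 730-day RO minimum knocks the
        # application out of the automated-approval stream
        return Stream.OFFICER_QUEUE
    return Stream.AUTOMATED_APPROVAL

print(triage(PRCardApplication(1100, False, False, False)))  # Stream.AUTOMATED_APPROVAL
print(triage(PRCardApplication(729, False, False, False)))   # Stream.OFFICER_QUEUE
```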

So, yeah, great for most PRs, those who will benefit from a very fast automated decision making process.

Not much impact on PRs with issues (such as those relying on H&C relief), since those applications will still end up in non-routine processing with much longer processing times.

But it is likely there is a significant disparity among PRs meeting the RO and in circumstances allowing for routine processing: many or even most get the automated fast-track approval, but many others will not benefit from automated approval, and their PR card applications will go into a queue for processing by the ever-decreasing number of IRCC officers handling these applications, while IRCC offers little or no clue as to how long that routine (but not automated) processing time will be . . . and while IRCC keeps secret the reasons why some get the benefit of immediate approval and others do not.

Leading to . . .

Some Factors:

Despite so much being behind the confidential information curtain, and the lack of access to the criteria employed, we can readily discern many factors that likely influence whether a PR card application gets automated approval, or routine processing if not given automated approval, or non-routine processing. PRs relying on H&C relief, for example, are almost certainly subject to non-routine processing, and significantly longer processing times accordingly. My guess is that PRs relying on outside Canada RO credit (credit for days accompanying a citizen spouse for example) will not qualify for automated approval but many of these should nonetheless still be routinely processed (not subject to or bogged down in a Secondary review non-routine process).

For example: There are almost certainly advanced analytics, and algorithms as well (directly or indirectly), employed in identifying applications to be excluded from automated approval based on address and employment history information, some threshold criteria based on the presence of or lack of certain Canadian ties.
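
For illustration of what a threshold of that kind might look like, here is a minimal sketch; the particular ties, weights, and cutoff below are invented (the operative criteria are not public), so treat the whole thing as a hypothetical stand-in rather than anything IRCC has disclosed.

```python
# Hypothetical "Canadian ties" screen -- the fields, weights, and cutoff are
# invented for illustration; the real criteria (if they exist in this form)
# are not public.
def ties_score(has_canadian_address_history: bool,
               has_canadian_employment_history: bool,
               years_at_current_address: float) -> int:
    score = 0
    if has_canadian_address_history:
        score += 2
    if has_canadian_employment_history:
        score += 2
    if years_at_current_address >= 2:
        score += 1
    return score

def excluded_from_automated_approval(score: int, cutoff: int = 3) -> bool:
    # below the (hypothetical) cutoff, the application is routed to an officer
    return score < cutoff

score = ties_score(has_canadian_address_history=True,
                   has_canadian_employment_history=False,
                   years_at_current_address=0.5)
print(excluded_from_automated_approval(score))  # True -- routed to an officer instead
```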

But how this decision is made, what it is based on, even the nature and scope of what is considered, is largely confidential and not subject to oversight, not subject to judicial review; there are no assurances this process is based on reasonable or legitimate grounds, on justifiable reasons, other than relying on IRCC's judgment.

Again, IRCC is essentially saying "trust us," with no outside evaluation of the reasonableness, not even as to whether the process is free of prohibited discrimination.

One factor which I might have guessed would preclude automated approval is cutting-it-close, applying for a new PR card with just a small margin over the minimum RO. But over the course of the last year there have been anecdotal reports from PRs saying they got near immediate approval applying with very few days over 730. I cannot quantify the number and this is probably something worth watching for, to see if this holds true.

A factor I am more confident about, even though it is still mostly guessing, is falling short by even just one day. This will now be a kind of gotcha factor. Conventional wisdom here has long been that it is best to not apply for a new PR card unless the PR is in compliance with the RO. For anyone who has fallen out of compliance, better to wait and apply only when they are in compliance. But the main risks underlying that conventional wisdom were:
-- risk of lengthy delays for non-routine processing​
-- risk of triggering a 44(1) Inadmissibility Report if the PR is so short of meeting the RO they would still be short when an officer is reviewing the application​

The main thing in this regard is recognizing the probable impact of making a mistake in travel history. Previously, an isolated mistake that meant the PR was just a few days short at the time the PR card application was made probably had no detrimental impact on processing, and probably did not result in a significantly longer processing time. Quite likely this is now a gotcha factor: one day short, no automated approval, and the application goes into a processing stream that will likely take at least five times as long as most PR card applications.

For example, when the processing time was around a hundred days and the PR card application was made on paper, applying for a new PR card a month or two before actually being in RO compliance was not particularly risky. The PR would have no problem as long as the PR remained in Canada, since by the time IRCC opened and reviewed the application the PR is in RO compliance (with some exceptions, such as scenarios in which additional days in Canada do not increase the RO credit due to losing credit for days in Canada five years previous).

Probably an easy call now: PRs needing a new PR card sooner should be extra careful, recognizing that one day short as of the day the online application is made will almost certainly mean no automated approval, meaning the application will go into a much longer processing stream and be subject to more thorough screening.
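
For anyone wanting to see just how thin the margin is, here is a minimal sketch of the underlying day count: 730 days of physical presence within the five years ending on the application date. It is simplified (it approximates the five-year window and ignores outside-Canada credits and other special rules), so it is a rough illustration of the arithmetic, not IRCC's calculator.

```python
from datetime import date, timedelta

def days_in_canada(stays: list[tuple[date, date]], application_date: date) -> int:
    """Count days physically present in Canada within the five years ending on
    the application date. Simplified: both the entry and exit day count as days
    in Canada, and credits for time outside Canada are ignored."""
    window_start = application_date - timedelta(days=5 * 365)  # approximate window, ignores leap days
    total = 0
    for entered, departed in stays:
        start = max(entered, window_start)
        end = min(departed, application_date)
        if end >= start:
            total += (end - start).days + 1
    return total

# An isolated error in the travel history that leaves the count at 729 instead
# of 730 would, on the reasoning above, be enough to forgo automated approval.
stays = [(date(2023, 4, 1), date(2025, 3, 1))]   # one continuous stay up to the application date
print(days_in_canada(stays, date(2025, 3, 1)))   # 701 -- short of 730, so no automated approval
```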

But the reason this subject demands more attention is the impact of a wide, wide range of other factors, which again IRCC not only won't publish but will redact from responses to information requests. Factors the Federal Courts will not review. But factors that can, and for many will, have a significant impact.
 

GandiBaat

VIP Member
Dec 23, 2014
3,733
3,007
My two cents is that there's a big misunderstanding out there of what Chinook is. I don't think it's AI or even ML. It's an EUC / workflow tool, based on everything I've read.

IRCC is certainly using some ML (maybe even more advanced AI) and I'm sure exploring the use of more. However, IMO that's not Chinook, and there's a lot of misunderstanding of what Chinook is / what it does. It's just an EUC. It's never going to be more than that. And at some point it will become obsolete when a more advanced tool takes over.

Again, all based on what I've read. I don't have the inside scoop so may not have it all right.
Chinook is essentially an Excel plugin; it allows bulk decisioning on cases. The trouble with Chinook is that, if and when it is used, the GCMS notes upon which lawyers rely to understand the decisioning and pursue a JR get short-circuited, because officers can write some boilerplate reason for rejection. It operates outside of their GCMS system.
 

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
At the risk of getting sidetracked, but recognizing illustrative, contextual value . . .

Chinook is essentially an Excel plugin; it allows bulk decisioning on cases. The trouble with Chinook is that, if and when it is used, the GCMS notes upon which lawyers rely to understand the decisioning and pursue a JR get short-circuited, because officers can write some boilerplate reason for rejection. It operates outside of their GCMS system.
That's not really it . . . Despite being in the ballpark in terms of the gist of it, particularly as to a key complaint by lawyers in regards to the impact on judicial review (assuming you use "JR" to refer to judicial review).

Not that I can briefly describe what the Chinook tool does, in particular, much better. As I have said, for this I defer to the in-depth discussions of this tool in podcasts featuring Mark Holthe and Will Tao. (See links in post 2 in this thread.)

Moreover, without more context, to say it "operates outside their GCMS system" is, I believe, misleading. Indeed, part of the problem is how it works with the information captured in GCMS (not outside it) to facilitate decision-making influenced by undisclosed factors (a lot, a real, real lot of information in GCMS is not shared with clients, which means it is not shared with their lawyers either, even when there is an appeal in the Federal Court) . . . noting, however, how it works with the information captured in GCMS is also a big part of its usefulness, recognizing it is indeed a tool useful in identifying and more efficiently approving most qualified applicants. (Good for most, not so good for many.)

While Chinook is a relatively simple tool (built on Microsoft Excel), how it works in the decision-making process, and how and why it can be problematic (problematic only for some applicants, not most), is very complex, very weedy, and that is why I mostly defer to what Mark Holthe and Will Tao have said in multiple podcasts about it (again linked in post 2 in this thread) . . . in regards to which, as I recall, you have linked at least one of the Holthe/Tao podcasts about this tool, so I assume you are familiar with their description and explanation.

I do like your use of the term "allows" in regards to the resulting decisions, the significance of which I admit I had previously overlooked in your FSW posts. IRCC hides behind the claim the tool does not make or recommend decisions, but that belies the extent to which the tool presents information in a way that facilitates, or as you say "allows" IRCC officers to group or categorize applicants leading to their applications being rejected.

However, and in regards to this, as best I can figure things out, I feel that the reference to "bulk" rejections is also misleading, and this applies to some comments by Will Tao as well. Not that I give much credence to IRCC's denial of Chinook making decisions at all, let alone bulk rejections. That's a dodge, IRCC again hiding behind what constitutes the particular outcome decision itself, without acknowledging the role of the information that influences the processing underlying that decision-making.

Which leads to claims about "boilerplate" reasons for rejections.

I cannot say what particular reasons you are claiming to be "boilerplate," but among the more common reasons why applications for temporary resident status (be that work or student or visitor status) are rejected, is an officer's determination the applicant has not established that they will leave Canada by the end of the period authorized for their stay. That's not boilerplate. It is, for example, a legitimate reason for denying an application for a study or work permit.

As I understand criticisms about the impact of Chinook, including those made by Will Tao, the problem is that the presentation of information in Chinook leads officers to characterize applicants more or less by group, and for some in particular as belonging to a group considered at risk for not leaving. This leads the officer to, in turn, focus on the applicant's information that will support a determination the applicant has failed to establish they will leave Canada timely, a reason to deny the application.

At the risk of over-simplifying: It is as if the officer decides what the outcome will be, based in significant part on the information presented through Chinook, and then extracts justification for that determination from the information specific to the applicant. The officer's stated reasons for this determination are subject to judicial review. The reasons why the applicant was initially identified as being at risk for not leaving, that is as being in the group considered to be at risk for not leaving, are not.

Which in turn leads to Justice Battista's decision in Mehrara v. Canada, 2024 FC 1554 , https://canlii.ca/t/k74qm (the case in which an affidavit by Will Tao, of more than a thousand pages about Chinook, was submitted to the FC). Here too at the risk of oversimplifying, Justice Battista in effect ruled that as long as the reasons stated for the officer's determination meet the reasonableness standard, the officer's decision should be upheld. What may have led the officer to more thoroughly or skeptically or critically evaluate the applicant is speculative and not relevant.

Beyond that it gets even more weedy. It gets into Rule 17 motions and what is included in the Certified Tribunal Record, what should have been included in that record, what might not have been included in that record that influenced or could have influenced the decision made by IRCC.

Have I mentioned, yes I think I have mentioned, this is too long, too much for those content to know it all without bothering to do the homework, and there is so, so much homework.

Back On Topic, Automated Decision-Making et al:

The main thing that looking under-the-hood at Chinook illuminates is that how information is presented and characterized in the decision-making process can have an impact. This gets amplified when the nature and scope of processing are channeled into processing streams based on criteria that are not public and not subject to judicial review.

And this is where automated processing of immigration applications is currently at. And obviously IRCC is not going to be forthcoming.

Even though I have been working on this for years, I was not ready to dive into an open discussion about it. However, the dichotomy, very good news for most (most complete PR card applications are now getting automated approval, and getting it fast, in just 11 days), while those in need of a new PR card soon are asking whether to make a request for urgent processing, demands recognizing the role and impact of these automated processes and how they will affect (as best we can sort out) PRs, particularly those PRs with issues, but also, more generally, those PRs who find themselves in a situation requiring travel soon and in need of a new PR card to facilitate their return to Canada.
 

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
Call For Increased Caution Re Making PR card Application Short of RO Compliance:

Conventional wisdom here has long emphasized the risk of applying for a new PR card when short of meeting the RO, the overwhelming consensus being it is best, best by a lot, to stay and wait and not make the application until in compliance. Notwithstanding this, there is no shortage of ongoing anecdotal reports from PRs who do not heed this advice, who proceed to make the application despite being short, many short by a lot, and many making this application soon after arriving here after a lengthy absence, and quite a few who do this and leave Canada while their PR card application is being processed.

Given recent anecdotal reporting, in conjunction with what we are learning about IRCC's use of AI/AA, it very much looks like "automated triage" is likely elevating the risk even more for PRs applying when short of complying with the RO.

It has long been known that applying early is risky. But as long as the PR included information making at least a nominal case for H&C relief, by the time the PR would be called for an interview, typically several if not many months later, the PR would have accumulated a substantial number of additional days to be credited toward RO compliance (as long as the PR stayed in Canada after applying) in addition to becoming more established and settled in Canada, improving the Canadian ties factor in the H&C assessment, all of which would generally improve the PR's odds of not being the subject of a 44(1) Report, better odds of getting a new PR card for five years rather than a Removal Order and a one-year card (valid pending an appeal).

In particular, some very recent anecdotal reporting suggests that IRCC is probably dealing much faster than before with PR card applications made by PRs not in RO compliance, as reflected (for example) in notices to attend an interview within a month or so of when the application was submitted.

For PRs with very strong H&C cases, now settled in Canada and staying in Canada after making the application, this may not be problematic. But their risk depends a lot on the strength of their H&C reasons for not complying with the RO.

However, for PRs well short of meeting the RO, and especially those who have only recently returned to Canada from a lengthy absence abroad, this faster processing poses a serious risk of being the subject of a 44(1) Report much sooner than before. Once the Report is prepared, days in Canada no longer count toward meeting the RO.

The anecdotal reporting that I have seen has not, not yet, included any specific reports of these interviews resulting in the preparation of a 44(1) Report. But it is very early in this transition to AI/AA automated processing of PR card applications, and it takes little guessing or speculating to see where this is headed.

In very general terms, one might conclude that this is very good news for those staying inside the compliance lanes, but that it increases the risks for those who fail to meet the RO. There are many who will applaud this. But given the history of flexibility and leniency afforded PRs, and the extent to which many have more or less relied on this, there are bound to be some casualties, more than a few caught off guard and unprepared.

Moreover, while I have not seen signs of it yet, no stretch of imagination is needed to foresee the use of these AI/AA functions to flag the GCMS records of PRs who breach the RO. Here I am guessing a bit, guessing that in the not too distant future there will be a serious decline in the number of PRs-in-RO-breach benefitting from a casual border wave-through.

Summary: Those PRs in RO breach who have been lucky enough to get back into Canada without encountering inadmissibility proceedings should be extra-cautious about triggering RO compliance assessments before they have stayed long enough to comply with the RO . . . wait until in compliance before applying for a new PR card or trying to sponsor family.


Additional Sources:

Additional documents published by IRCC that I did not include in what is listed in post 2 above:

IRCC's Evolving Use of Automation and Advanced Analytics (February 1, 2023 referenced: 1A-2023-49408-page#)​

IRCC's Use of Advanced Analytics (August 29, 2024 published under Wassim El-Kass, Assistant Director at IRCC)​

I do not have links for these PDF documents. The second one, dated August 2024, can easily be found by searching online. This document is where I first saw the reference to "routine" for what, in previous documents, was referred to as "low complexity" (in contrast to "medium complexity" and "high complexity").

I have had some difficulty relocating the first one, from February 2023, but most likely it can be found and accessed with some effort. Here are some notes taken from this one:

IRCC's Evolving Use of Automation and Advanced Analytics

This was published February 1, 2023. In addition to providing contextual information about what Advanced Analytics is (this and some other IRCC literature refer to Advanced Analytics as "AA") and how it is used by IRCC, it lists (as of that date, more than two years ago now) particular uses in seven automated decision-making systems (including temporary resident visas and family class sponsorship applications), and lists the launch dates for the implementation of AA in eleven, including, for example, Temporary Resident to Permanent Resident application processing (launched in July 2021). It generally describes the implementation of AA for uses that "vary from largely innocuous triage to higher-stakes final decisions."

The characterization of triage functions as "largely innocuous" is telling, and to my view disingenuous, given that triage decisions affecting the processing stream (such as routine, or medium complexity, or high complexity) can (and often will) make a huge difference in the level of scrutiny and the length of processing time. For a Canadian (be they a citizen or PR) sponsoring a spouse and an accompanying young child, particularly a spouse located in one of the more difficult geographical parts of the world, a triage decision that means continuing separation for another year, for example, can hardly be said to be "largely innocuous."
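
To illustrate what a triage decision channeling an application into a processing stream can look like in practice, here is a deliberately simplistic, purely hypothetical sketch; the rules, field names, and thresholds are my own invention for illustration only, since IRCC does not disclose its actual criteria.

```python
# Purely hypothetical triage sketch. The rules, field names, and stream labels
# are invented for illustration; IRCC's actual criteria are not public.

def triage(application: dict) -> str:
    """Assign an application to a processing stream based on simple, made-up flags."""
    if application.get("ro_days_in_canada", 0) < 730:
        return "high complexity"      # e.g. residency shortfall -> full manual review queue
    if application.get("prior_enforcement_flag") or application.get("data_mismatch"):
        return "medium complexity"    # some officer attention required
    return "routine"                  # candidate for automated approval

app = {"ro_days_in_canada": 1600, "prior_enforcement_flag": False, "data_mismatch": False}
print(triage(app))   # -> "routine"
```

The point is not the particular rules, which are unknown, but that the stream assignment itself, made before any officer reads the file, largely determines how much scrutiny follows and how long the wait is; that is a long way from "largely innocuous."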

This document reflects the extent to which AI, including IRCC's preferred AI component, AA, has been operational for years. It is interesting, however, that here again IRCC seems to go out of its way to separate (more like insulate) Chinook from other automation tools. It states (on page 4, where bullet points list what IRCC is doing with AA "today" on one side, and what IRCC is not doing with AA on the other):
"Unlike some media and public perception, Chinook is neither an automation nor an AA tool.
Chinook is rather a Microsoft Excel-based tool that primarily displays information about our clients in a more user-friendly interface."​



External / Third Party Source:

While it is specific to spousal/partner sponsorship applications, and may be published by a consultant rather than lawyer organization (I have made no secret of my biases about consultants), here is a webpage, titled "IRCC to Introduce Advanced Analytics and Automation for Spousal and Partner Sponsorship Applications," describing (in overview rather than detail) the implementation of AA/Automation/AI in processing sponsorship applications:
https://guidemeimmigration.com/ircc-to-introduce-advanced-analytics-and-automation-for-spousal-and-partner-sponsorship-applications/
 
  • Like
Reactions: armoured

rogersfail

Newbie
Apr 12, 2025
3
3
This debate actually brought me to set up an account after lurking here for more than a decade.

At least in the context of PR applications I do welcome the use of systems to speed the process up. As someone who has had PR for decades and has family members in the same situation, to me it always seemed strange that it would take IRCC 5-8 months to renew a PR card for very simple applications like ours (only out of the country for ~100 days of the 5-year period, working in Canada and filing taxes for all the tax years in that period, and zero legal issues), given that the federal government already has much of the information needed to make that determination itself (access to travel, tax, and criminal record information). IRCC's priority back then was processing cards for new PRs (understandable because they need the card for many things) and those of us who just wanted to renew were put on the backburner.

As much as you rail about there being an unfairness for more complicated cases, there was always a disparity between "easy" cases like mine and someone with a more complicated legal history in Canada or with meeting the RO. I have seen posts in the past where people in those situations waited well over a year to get a new card, so I would think they would welcome that time being cut down by some months.

On the subject of applications being put under more scrutiny because of being flagged as higher risk, isn't that something that Canada actually wants? One of the main criticisms of the prior system was that there was not enough time and resources to check for fraud/irregularities given the volume of applications, so having some form of internal filter so that officers do not spend so much time on straightforward cases, and can spend more time on the more complicated cases, seems like exactly the kind of thing that was being asked for.
 
  • Like
Reactions: sgsmrp and armoured

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
As much as you rail about there being an unfairness for more complicated cases, there was always a disparity between "easy" cases like mine and someone with a more complicated legal history in Canada or with meeting the RO. I have seen posts in the past where people in those situations waited well over a year to get a new card, so I would think they would welcome that time being cut down by some months.
To say I "rail about there being an unfairness for more complicated cases" is overstating the case. Too strong despite my criticism of certain aspects of IRCC for its approach (in its implementation of automation) toward transparency, and very importantly, more so IRCC's approach toward oversight or review in regards to the transparency, intelligibly, and justification for decisions that have real consequences even though those are not final outcome decisions (noting that some of these have an impact on the final outcome even if not themselves a final-outcome decision).

Moreover, in regards to the automation of PR card application processing, my previous post notes (without criticism) that it appears even PR card applications by PRs out of compliance relying on H&C relief are being processed far more quickly than before. To note that this increases risks for those affected is not criticism but, rather, an alert, a caution-ahead sign, advice to drive wisely.

Along the way I have emphasized (in bold no less) that it is not clear just what the impact will be for those who do not get automated approval:

What about those who do not get automated approval?

What this means for PRs who do not qualify for automated approval, however, is not clear . . .
However, I will say rather emphatically that a failure to criticize "unfairness" for any Canadians (PRs applying for a PR card or for citizenship, or to sponsor family) would be more grievous. One of the problems is discerning what is a fair negative impact or outcome versus what constitutes treating clients unfairly. The latter (unfairness) should be unacceptable. The former (just results) is a primary objective.

Meanwhile, a reminder:

This is not intended to be a screed opposing the development and implementation of automated decision-making tools.
I go on, there, to say that . . .
Many PRs, for example, are enjoying the benefits of automated approval, getting new PR cards in just a couple or four weeks. Most should readily agree that is good.​
However, more and more it is important to recognize the extent to which automated decision-making, and the use of what IRCC and other Canadian agencies refer to as "advanced analytics," incorporating components of what is generally referred to as "AI," have a big, big role in processing immigration-related applications and procedures.
The decision-making landscape is changing, has already changed a lot, and it is increasingly less clear how the Federal Courts will be able to adequately review the required transparency, intelligibility, and justification for the decisions IRCC makes given the extent to which technology and digital tools in particular are employed (and will be more so) in processing applications. Which gets weedy.​


On the subject of applications being put under more scrutiny because of being flagged as higher risk, isn't that something that Canada actually wants? One of the main criticisms of the prior system was that there was not enough time and resources to check for fraud/irregularities given the volume of applications, so having some form of internal filter so that officers do not spend so much time on straightforward cases, and can spend more time on the more complicated cases, seems like exactly the kind of thing that was being asked for.
There is no problem, or should be no problem, if the criteria for identifying whose application is at "higher risk" actually flags who or what constitutes a higher risk, and does so appropriately (the influence of skin tone, for example, is not appropriate), and the manner in which such applications are then processed is reasonable, not unduly delayed.

These are very different measures.

In regards to the latter, the manner of processing, we do not YET know how this will go. In regards to PR card applications, for example, we are just beginning to see the downstream impact of the very recent (last two years or so) integration of AI/AA into processing. It is possible, perhaps likely, that more complicated (for good reason) cases will also be processed more quickly, potentially a lot more quickly. But what constitutes an undue or unjust delay is relative, including the extent to which there is (or isn't) disparity, but also, importantly, to what extent there is reasonable cause for what disparity there is.

A key underlying motive for starting this thread was to expand awareness that will, in turn, increase observation leading to more reporting in this forum, leading I hope to more insight and information about who is being affected and how.

In regards to the former measure, the risk criteria, we do not know, and for the foreseeable future we will not know to what extent the criteria flags real risk, or to what extent it is appropriate (not based on religion for example). And it is IRCC's policy to conceal this information, not only from the public but, it appears, from any oversight. As I have said, IRCC is saying "trust us." This I do criticize. Part of the reason for focusing so much on Chinook, after all, is that one of the main criticisms from lawyers is that it is unfairly grouping applicants, and not only is that delaying decisions but, as explained by lawyers like Will Tao, it is resulting in unfair outcomes. So far IRCC has resisted review and the Federal Courts are OK with that.

It is not as if increased scrutiny in general is good or bad, fair or unfair. Whether increased scrutiny is fair or not depends on whether it is warranted. In terms of deciding when it is warranted, when it is fair, IRCC is saying "trust us," offering no guidance let alone transparent accounting of information that would allow a Federal Court to determine whether it was warranted or not, fair or not, and frankly whether it is even intelligible or not, a key component in procedural fairness.
 

armoured

VIP Member
Feb 1, 2015
18,843
9,961
In regards to the former measure, the risk criteria, we do not know, and for the foreseeable future we will not know to what extent the criteria flags real risk, or to what extent it is appropriate (not based on religion for example). And it is IRCC's policy to conceal this information, not only from the public but, it appears, from any oversight. As I have said, IRCC is saying "trust us." This I do criticize. Part of the reason for focusing so much on Chinook, after all, is that one of the main criticisms from lawyers is that it is unfairly grouping applicants, and not only is that delaying decisions but, as explained by lawyers like Will Tao, it is resulting in unfair outcomes. So far IRCC has resisted review and the Federal Courts are OK with that.
I for one also welcome the more rapid approvals and think that's important. I'd add one additional worry: the classic 'ratchet' effect that occurs - almost always - when new technology is introduced to improve productivity. The initial period is highlighted as a great improvement, 'freeing up' staff to focus on more important work - i.e. the implication is that serious improvements will be made elsewhere (such as handling and deciding complex files, too). The ratchet effect is that someone comes along and decides the budget savings must be had, the productivity/timeline improvements* are taken as proof it can be done, and staff are removed until it returns to the status quo ante, or worse.

* Or instead of the ratchet effect, we could call it the bureaucratic/budgeting version of the Jevons paradox or effect, that energy efficiency is used up by inducing demand - although here it's working in reverse of Jevons.

It seems to me understandable that in a role where IRCC/govt has an enforcement/fraud detection responsibility there will always be reluctance / refusal to disclose the full mechanism.

What worries me in most such systems is what one author, Dan Davies, calls the 'unaccountability machine' - it's not just that it makes the system harder to query for 'the reasons' that such and such a thing has happened/decision has been made, it's that they tend towards explicitly removing accountability. 'The machine' made the decision. And the machine is a black box - we provide data to the box, we get answers out, but we cannot know the logic (directly anyway).

And with some experience in using systems that use advanced analytics / AI / whatever buzzword is fashionable, it very much is the case that usually no-one can actually know 'the reason'. The systems are not 'reasoning' in the sense that we use the word - they are using weightings to get to a decision, and even the people who built the system can't give 'the reason.' (Specialists - of whom I know a couple - have different descriptions and metaphors for all of this, which I'd paraphrase to say that one can work backwards from a given data set and the individual case and reconstruct explanations that we can call 'reasons', or narratives, or some more statistically derived nomenclature.)

But there is no accountability because we cannot have one actual explanation (chain of logic) that drove the actual decision. It's the unaccountability machine. And a post facto simulacrum/reconstruction is just that, a fabricated reconstruction - fake.
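
For anyone who hasn't worked with these systems, a toy example may help show why 'the reason' is so elusive. This is a generic weighted-score model of my own invention, not anything IRCC has disclosed; the feature names, weights, and cut-off are made up.

```python
# Toy model only: a risk score built from weighted features. The output is a
# single number; there is no discrete chain of reasons behind it, only weights,
# and a post hoc "explanation" is just a ranking of contributions.

weights = {                               # hypothetical weights, as if learned from data
    "short_prior_travel_history": 0.9,
    "sponsor_income_below_threshold": 0.6,
    "gap_in_application_timeline": 0.4,
    "previous_refusal": 1.2,
}

def risk_score(features: dict) -> float:
    return sum(weights[name] * value for name, value in features.items())

applicant = {"short_prior_travel_history": 1, "sponsor_income_below_threshold": 0,
             "gap_in_application_timeline": 1, "previous_refusal": 0}

score = risk_score(applicant)
flagged = score > 1.0                     # arbitrary cut-off
top = sorted(((weights[k] * v, k) for k, v in applicant.items()), reverse=True)[:2]
print(score, flagged, top)                # the "top contributors" are a reconstruction, not a rationale
```

Even in this four-weight toy, the 'top contributors' printed at the end are exactly the kind of post facto reconstruction described above: informative, maybe, but not a chain of logic anyone actually followed - and a real system has thousands of weights rather than four.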

I don't have an answer. For full accountability, all rejections would have to be reviewed and decided anew by humans - not practical for some types of applications, like visas; or there would have to be an appeal mechanism that can send some back (on some criteria?) for decision only by a human process.

I'm not sure that these two are meaningfully different. And of course, there would be pressure - explicit or implicit - to ensure the 'human' decision-making process comes to the same conclusion. (A perfect closed loop, in the end)

Or - arguably - consistently and randomly (on sample basis, some %) run human decision making processes parallel to the 'enhanced analytics' to ensure no serious divergence. (But on what process/results are those people trained?)
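
That sampling idea is easy to describe concretely. A rough sketch (mine, with invented numbers, not any real IRCC process) of what a parallel human-audit loop could look like:

```python
import random

# Rough sketch of a parallel human-review audit: sample a small share of the
# automated decisions, have humans re-decide them, and track divergence over time.
# The 5% rate, the outcomes, and the stand-in reviewer are invented for illustration.

AUDIT_RATE = 0.05

def divergence_rate(automated_decisions: list[dict], human_review) -> float:
    sample = [d for d in automated_decisions if random.random() < AUDIT_RATE]
    if not sample:
        return 0.0
    disagreements = sum(1 for d in sample if human_review(d) != d["outcome"])
    return disagreements / len(sample)

# Usage with fake decisions and a stand-in reviewer that approves everything:
decisions = [{"id": i, "outcome": "refuse" if i % 7 == 0 else "approve"} for i in range(1000)]
print(f"divergence: {divergence_rate(decisions, lambda d: 'approve'):.1%}")
```

Which leaves exactly the question in the parenthesis: the audit only tells you the humans and the machine diverge, not which of them is right, and that depends on what the humans were trained on.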

Apologies. Sunday morning coffee. A recent conversation sparked these meanderings, and the book by the author mentioned.
 
  • Like
Reactions: dpenabill

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
What worries me in most such systems is what one author, Dan Davies, calls the 'unaccountability machine' - it's not just that it makes the system harder to query for 'the reasons' that such and such a thing has happened/decision has been made, it's that they tend towards explicitly removing accountability. 'The machine' made the decision. And the machine is a black box - we provide data to the box, we get answers out, but we cannot know the logic (directly anyway).

And with some experience in using systems that use advanced analytics / AI / whatever buzzword is fashionable, it very much is the case that usually no-one can actually know 'the reason'. The systems are not 'reasoning' in the sense that we use the word - they are using weightings to get to a decision, and even the people who built the system can't give 'the reason.'

But there is no accountability because we cannot have one actual explanation (chain of logic) that drove the actual decision. It's the unaccountability machine. And a post facto simulacrum/reconstruction is just that, a fabricated reconstruction - fake.
You well describe one of the more salient concerns. Lawyers like those in the podcasts I have referenced and linked above are already claiming that even just the information grouping is leading to unfair results in which the reasons stated for the decision are not what really led to the decision . . . not the underlying chain of logic driving the actual decision.

In PR card renewals, or even in processing applications for a PR Travel Document, this is not likely to be as problematic as it can be (and lawyers like Will Tao are very much saying it is) for temporary resident visas. The latter, and spousal sponsorships as well, involve discretionary judgments, such as whether there is reason to conclude the applicant has demonstrated they will timely leave Canada, or, in the case of spousal sponsorships, whether the marital relationship is genuine, decisions or conclusions that to some extent judge the applicant's state of mind. For PR card and PR TD applications the decision-making depends on more concrete facts.

But this thread is not just about problems with the implementation of AI/AA automation. I was actually hoping to instigate more extensive monitoring of the impact this automation is having, hoping we can identify trends, processing times looming large of course, recognizing that IRCC's published processing times, for "most complete" applications, illuminate very little about what clients can expect if there are any wrinkles in their case, which is the situation for many, many PRs, and perhaps most of those visiting this forum with questions.

But patterns in decision-making are another big one. I very much anticipate there will be significant shifts in regards to some outcomes. Only if we watch for the correlations, tracking and comparing, will those of us who strive to provide information about how things work be able to recognize to what extent significant changes are taking place.

Consider the obvious implementation of automated approval for PR card applications, for example. It has already been over a year (I think it was February 2024) since I first posted, tentatively, the observation that it appeared some PR card applications were benefitting from automated approval. Over the course of this last year that became more clear, though it depended largely on anecdotal reports from PRs getting notice of approval almost immediately after applying and the card itself within two to four weeks. Then it was only recently that this shift in processing had a visible impact on the published processing times for non-complex or routine applications, a sudden and dramatic impact. BUT meanwhile, other than vague generalities about IRCC's implementation of AI/AA automation, referencing what appears to be its most extensive use (so far) as a triage tool in application processing generally, I have not seen IRCC even acknowledge the use of AI/AA in processing PR card applications in particular, or that it is using these tools for automated approval.

Black-Box Processing:

IRCC is rather vehemently denying that it uses black-box processing except in limited, narrow circumstances, such as prioritizing and allocating emails to facilitate better responses. But IRCC is also denying that it uses AI/AA tools to make any negative decisions on applications, in effect denying that the automated triage grouping of applications constitutes making decisions, when as a practical matter it is a decision that will have a significant negative impact on some clients.

Main Thing For Now: The main thing is to watch and correlate reports, trying to identify what impact this transition is having, how it will affect processing times, and more importantly to what extent it has an impact on outcomes. I don't closely follow temporary resident status matters, or even spousal sponsorships for PR, but it appears that there has been an increase in denied applications based on the applicant's failure to establish they will timely exit Canada (for TRV applications), and possibly an increase in denied sponsorships based on a conclusion that the relationship is not genuine. Because I do not follow these closely I cannot positively say this is related to, let alone the product of, AI/AA implementation, but it appears likely, and that is my take on what some lawyers have been saying.
 
  • Like
Reactions: armoured

rogersfail

Newbie
Apr 12, 2025
3
3
In regards to the former measure, the risk criteria, we do not know, and for the foreseeable future we will not know to what extent the criteria flags real risk, or to what extent it is appropriate (not based on religion for example).
For me this is more of a semantics argument. IRCC has always had risk criteria and as far as I know they have never fully made them public. That's why certain applicant profiles simply often have much longer processing times than others. I don't think it makes much difference whether it is a human applying those criteria or a spreadsheet.

And it is IRCC's policy to conceal this information, not only from the public but, it appears, from any oversight. As I have said, IRCC is saying "trust us." This I do criticize. Part of the reason for focusing so much on Chinook, after all, is that one of the main criticisms from lawyers is that it is unfairly grouping applicants, and not only is that delaying decisions but, as explained by lawyers like Will Tao, it is resulting in unfair outcomes. So far IRCC has resisted review and the Federal Courts are OK with that.

It is not as if increased scrutiny in general is good or bad, fair or unfair. Whether increased scrutiny is fair or not depends on whether it is warranted. In terms of deciding when it is warranted, when it is fair, IRCC is saying "trust us," offering no guidance let alone transparent accounting of information that would allow a Federal Court to determine whether it was warranted or not, fair or not, and frankly whether it is even intelligible or not, a key component in procedural fairness.
While I totally get that point of view, for me it really comes down to the age-old question that's always been there on whether IRCC staff are doing their job properly. As long as the final decision for rejections is rendered by humans, I would not focus so much on the filtering system itself but on how the humans reviewing the files actually make those determinations. If IRCC staff are being lazy and just rejecting applications based on what they see in the dashboard instead of looking through the documents that were submitted (which is what IRCC claims their process is), that speaks more to an issue with how IRCC trains its staff and checks the quality of their work, rather than an inherent problem with the dashboard existing. I would argue that the lazy officer using Chinook would be pulling the same sort of shortcuts if we were back in the paper-based days.
 

armoured

VIP Member
Feb 1, 2015
18,843
9,961
While I totally get that point of view, for me it really comes down to the age-old question that's always been there on whether IRCC staff are doing their job properly. As long as the final decision for rejections is rendered by humans, I would not focus so much on the filtering system itself but on how the humans reviewing the files actually make those determinations. If IRCC staff are being lazy and just rejecting applications based on what they see in the dashboard instead of looking through the documents that were submitted...
I think this overall - and particularly the last bit - falls into the trap of just saying staff are lazy or weak or whatever.

In reality, we know that the system design (the process) and the inputs (the 'parts') are incredibly important. And if staff are poorly paid, not provided correct information, given inputs that are incomplete or shoddy, and not given the training and time to use them properly to make real judgments, the output can't just be blamed on 'bad workers.' It depends on whether the process is appropriate for the task.

When we get garbage products from some offshore factory, it's probably NOT the case that the workers are lazy or whatever - they're working at low pay, with bad parts.

In that kind of system, the system/process design and implementation are by far the most important. To paraphrase Marshall McLuhan's pithy statement about communications (the medium is the message) for systems design: the factory is the output, 'the workers' don't make the core decisions about how it will work.

Or in simple terms for IRCC decisions about various cases: I think anyone would recognize that (for example) ten seconds to read and consider a file would absolutely be insufficient. Any accelerated decision that doesn't leave time to read the file's documents would be entirely dependent on the data input process / characterization of the file's contents being accurate and correct (easier nowadays with online, etc).

And staff under pressure to meet difficult quantitative targets ('productivity') that are also given some kind of 'scorecard' summary (eg of risk factors or other) are effectively going to feel pressure to render decisions that are (in effect) 100% in accordance with those scorecard measures. (There are some ways to deal with this - eg 'blind' file evaluation, where the analysts are not given the scorecard summaries - but they're expensive in terms of productivity).

Bottom line: it can't just be put to 'the workers.'

My overall conclusion is that TRV processes are unlikely to ever have a meaningful 'rejection decision' explanation, at least without major changes. The volumes are too large. Probably the only way to include accountability is a robust appeal mechanism that requires the entire decision to be re-done from scratch with a written decision - and realistically this is going to cost a lot more, though still cheaper than doing it exclusively by judicial appeal. EG: allow rejected applicants to request "hand processing" at $500 or $1000 or more (some amount high enough that most won't bother), appealable. It won't be 'fair' in the sense that many Canadians think of it.