This is not intended to be a screed opposing the development and implementation of automated decision-making tools. Many PRs, for example, are enjoying the benefits of automated approval, getting new PR cards in as little as two to four weeks. Most should readily agree that is good.
However, it is increasingly important to recognize the extent to which automated decision-making, and what IRCC and other Canadian agencies refer to as "advanced analytics" (incorporating components of what is generally called "AI"), play a big, big role in processing immigration-related applications and procedures.
The decision-making landscape is changing, has already changed a lot, and it is increasingly unclear how the Federal Courts will be able to adequately review IRCC's decisions for the required transparency, intelligibility, and justification, given the extent to which technology, and digital tools in particular, are employed (and will increasingly be employed) in processing applications. Which gets weedy.
Thus . . .
Warning: Cloudy, Weedy Conditions Ahead (proceed with caution):
(acknowledging this will be far too long to read for those who are content to know it all without bothering to do the homework)
No need to guess: undoubtedly many here are far more and far better acquainted with AI and related automated digital tasking than I am. That is, I am NO tech guy, not close (it has been more than a quarter century, for example, since I was employed in projects developing technology for capturing, organizing, maintaining, and publishing legal information). I am far more a law and bureaucracy guy, focused on content-oriented decision-making. So AI and related topics are a stretch for me.
However, there has been rather little shared here about the impact of automated decision-making, advanced analytics, and other components of AI (Artificial Intelligence) on bureaucratic processing, be that within the Canadian government generally, or IRCC in particular, except for a number of largely cryptic discussions about Chinook in forums focused on temporary resident programs. Those discussions are largely shallow (although some include links to good resources, such as podcasts sponsored by the Canadian Immigration Institute, including discussions between immigration lawyers Mark Holthe and Will Tao specifically about Chinook and related technology tools used by IRCC) and at best gloss over how Chinook and related automated processing affect applicants and their applications. Unfortunately, the Chinook and AI discussions in the immigration forums here tend to be rife with mischaracterizations and misstatements, some of them outright misleading or erroneous.
Meanwhile, over the course of the last year we have seen the positive impact of automated decision-making on PR card applications, culminating recently in an abrupt drop in processing times: virtually overnight, falling from two months plus (itself far faster than the four to six months it has often been over the years) to around a couple of weeks, currently just 11 days. Great news for some; at the least, good news for many, if not most.
But not all PRs needing a new PR card will enjoy near immediate approval. There are some real questions looming, like
-- who benefits? (who will qualify for automated approval)
-- how will this affect the processing times for PRs who do not get automated approval?
-- what criteria does IRCC use and how?
-- what can a PR do to improve the odds of automated approval, or to reduce the risk of their application being deemed "complex"?
-- to what extent will processing, and processing timelines, be impacted by electronic tools employing advanced analytics and other AI components, such as decision-making algorithms or machine-learning? and what impact will there be on outcomes?
To be clear, the implementation of digital processing employing AI components in immigration-related matters, including automated decision-making, is not particularly new. The eTA system, for example, which incorporates automated decision-making (no live person involved) in granting visa-exempt travelers the electronic travel authorization that facilitates boarding flights to Canada from abroad, was developed and implemented more than a decade ago. From 2017 through 2019, in addition to developing and implementing Chinook, IRCC ran multiple pilot projects utilizing automated decision-making for some visa applications, including the use of AI components like advanced analytics and machine-learning. And now it is clear that due to automated decision-making (automated approval), many PRs are benefitting from the near immediate approval of their applications for a new PR card . . . which, according to IRCC's online processing times information, actually covers "most" "complete" PR card applications.
IRCC is adamant that there is no automated negative decision-making. (In time I will get to the particular IRCC and other Canadian government information about this; weedy and nerdy stuff.) And thus, according to IRCC, automated decision-making does not have a detrimental impact, so there is no need, no cause, to require that these decisions meet reasonableness standards. No basis for judicial review. No grounds to challenge the process for failing to be transparent and intelligible. No need for IRCC to justify these decisions. Indeed, IRCC is adamant that the processing apparatus, including the criteria employed, should remain behind the confidential-information curtain, NOT shared with the public.
It is frustrating how utterly insistent IRCC (backed by the FC) is about this, when no special expertise is needed to recognize that denying automated approval is itself a negative decision, one resulting in it taking (probably) five times as long to get a new PR card, let alone when the tools have an impact on outcomes. And the practical reality is that outcomes are almost certainly being affected. The latter is very complicated; the best I can say about that for now is to visit the Holthe and Tao podcasts about Chinook, and consider how the courts are responding to Tao's criticisms, in decisions like Mehrara v. Canada, 2024 FC 1554, https://canlii.ca/t/k74qm and Haghshenas v. Canada, 2023 FC 464, https://canlii.ca/t/jwhkd among other decisions citing these two cases.
Leading to . . .
Spoiler Alert: I can no longer confidently say IRCC does not engage in GOTCHA games . . . not that I think individual officers are now more likely or prone to deny applications or penalize immigrants for trivial or gotcha reasons, but because digital processing is playing a rapidly increasing role and it is inherently mechanical, making it susceptible, when it has an active part in decision-making, to triggering actions disproportionate to the criteria, to posing excessive hurdles based on trivial or gotcha criteria. IRCC claims to avoid the problem by vesting negative-outcome decision-making exclusively in officers, not machines. That badly overlooks how severely incremental processing steps can affect the overall process, from steering the application toward unwarranted delays and invoking excessive non-routine processing, to in some cases ultimately steering the process toward focusing on negative criteria, such as risk factors, that do not directly invoke unwarranted denials but which can influence the officials making those decisions to go in that direction. Again, how this happens is complicated, but for the moment suffice it to take notice that this is causing some serious concern among immigration lawyers, despite IRCC's effort to insulate machine/automated processing tasks from negative-outcome decision-making . . . an effort more likely about preventing disclosure of the analytical components and operative criteria than about protecting the integrity of decision-making in the system.
This includes, in particular, purportedly non-decision-making tools like Chinook, which IRCC insists does not make or recommend decisions, but which some immigration lawyers rather vehemently criticize, including claims that it is the linchpin in a process that leads to the bulk rejection of some applications for temporary resident status, and does so largely in the dark, without transparency, and in many cases for insubstantial or even unjustifiable reasons.
This is intended to be about more than just the direct impact of IRCC employing automated decision-making supported by "advanced analytics" incorporating elements of AI (Artificial Intelligence), including so-called "machine-learning"; it is also about processing procedures and incremental steps in IRCC decision-making more broadly, and the rapidly increasing role of technology and various digital tools, including those which IRCC claims do not involve or use AI, advanced analytics, or "built-in decision-making algorithms."
More to come, including references and links to resources. It will take time.