
Automated Decision-Making -- "Advanced Analytics" -- and AI, yeah that AI, so maybe some GOTCHA

armoured

VIP Member
Feb 1, 2015
18,843
9,961
In that kind of system, the system/process design and implementation are by far the most important. To paraphrase Marshall McLuhan's pithy statement about communications (the medium is the message) for systems design: the factory is the output, 'the workers' don't make the core decisions about how it will work.
This is a bit of a tangent but I think I backed myself up to quoting another person's quote, Stafford Beer's "the purpose of a system is what it does" - which has a somewhat overlapping set of meanings to what I intended, i.e. it's reasonably apt even if not exactly what I'd intended (and I haven't come up with better).

Or in this context: if the IRCC system produces refusals that are vague, that can't easily be challenged, and that don't provide (or actively hide) meaningful information about the reasoning behind the refusal - in other words, if it actively removes accountability for the decision - then that's the point of the system. The lesson I draw is that it has little to do with the skill, laziness, etc., of the IRCC workers: this is what it does, and that is the purpose of the system. It is an unaccountability machine.

And with caveats from above, I think the chances that the use of various forms of 'enhanced analytics' etc. will make the system more accountable are low, and the chances that it will remove accountability are rather high - unless IRCC makes a series of (probably costly) system design decisions to make it accountable.
 

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
And with caveats from above, I think the chances that the use of various forms of 'enhanced analytics' etc. will make the system more accountable are low, and the chances that it will remove accountability are rather high - unless IRCC makes a series of (probably costly) system design decisions to make it accountable.
Yeah.

Nonetheless, there is a massive, intense effort to build robust safeguards into the systems. There are scores of project reports (not easily retrieved but not all that difficult to find, more difficult to read with comprehension, lots of government pdf documents out there) conducting impact studies in regards to the various automation components IRCC has been developing, launching, and employing. Quality assurance is clearly important to IRCC, and yes it undoubtedly comes with a rather large price tag, a cost worth bearing.

I for one, at least under the current government, apprehend IRCC is making an admirable effort. And it is not as if I judge the effort or its outcomes to be unsatisfactory on the whole (there will always be some unsatisfactory outcomes in any decision-making system), except to the extent that IRCC is grading itself, isolating some procedural decision-making from oversight or review (administratively or judicially).

But I think you are right, that the risk of less oversight, less review, less accountability, is likely increasing with the implementation of more extensive AI/AA automation.

This is not to be feared, but to be recognized, and watched, whether to identify ways to encourage IRCC to improve things (for those who are activists), or to share information in a forum like this which can (so many us hope) help those with questions better navigate the process, recognizing that to be more informed is generally to be better prepared.

. . . for me it really comes down to the age-old question that's always been there of whether IRCC staff are doing their job properly. As long as the final decision for rejections is rendered by humans, I would not focus so much on the filtering system itself but on how the humans reviewing the files actually make those determinations.
So far the courts seem to lean this way. What lawyers like Will Tao are complaining about (if I am getting it right, and if I am getting it right, I agree), is that the filtering system is having more and more influence in how the information is organized and presented, and this is influencing final decisions, and not only are the filtering factors secret, but so far the decision makers are not required to so much as acknowledge information that influenced them if the stated reasons for the decision adequately support the decision.

Beyond that, whether bad, unfair decisions (by which I mean those that are not justified, not reasonable, not intelligible, that is decisions that reasonable people will recognize to be wrong) are due to laziness, lack of training or competency, personal biases or prejudices, institutional bias or prejudice, or the grading sheet (information filtering system) is badly flawed, that addresses questions about how to fix it. But you can't fix what you don't see is wrong, and the way to identify what is wrong is real oversight and fair, reasonable review (again, administrative or judicial). That demands a reasonable disclosure, not necessarily a full disclosure but sufficient to oversee and evaluate the extent to which decision makers are appropriately considering relevant, probative information and not influenced by arbitrary, capricious, unintelligible, let alone nefariously discriminatory information.
 

dpenabill

VIP Member
Apr 2, 2010
6,533
3,294
dpenabill said:
In regards to the former measure, the risk criteria, we do not know, and for the foreseeable future we will not know to what extent the criteria flags real risk, or to what extent it is appropriate (not based on religion for example).
For me this is more of a semantics argument. IRCC has always had risk criteria and as far as I know they have never fully made that public. That's why certain applicant profiles often simply have much longer processing times than others. I don't think it makes much difference whether a human is applying those, or a spreadsheet.
Decision-making standards not special to AI/AA:

What criteria IRCC employs to distinguish applicants and their applications, and distinguish in a way that discriminates (as in has a detrimental impact on those distinguished), is not semantics, not at all. It is real and it makes a difference.

What is semantic, or worse, disingenuous, is framing the issue in terms of what would constitute disclosure that "fully" makes risk oriented screening elements public, rather than how the lawyers whose podcasts I have referenced and linked frame the issue, and what I think is amply evident in how I address this, which is about REASONABLE disclosure, particularly as to information that influences outcome decisions that is not provided in the decision-maker's explanation of reasons for the decision (which thus avoids review, administrative or judicial review).

Independent oversight and judicial review are the primary ways in which a just and fair society (which it seems to me is what Canada aspires to be, doing a fair job getting there, which is a big part of why I am here, glad to be here, and not where I was born) ensure the fairness of how government agencies make decisions that have a real impact on people's lives. The Supreme Court of Canada has repeatedly, with much emphasis, and in extraordinary detail (see cases like the oft cited Vavilov decision) addressed the extent to which administrative decisions must be subject to review, and the requirements of procedural fairness in particular. Sure, this gets weedy (short for recognizing it involves some not easily navigated concepts, some unwieldy terminology, more than a few slippery linguistic slopes, along with more technical jurisprudence, notorious legalese, than most people have the patience or attention span to wrestle with), but that is NOT semantics.

It is not difficult to identify some historically all too common but (at least now, for now) unquestionably inappropriate criteria, like race, religion, sexual orientation, among others. If these criteria influenced which sponsorship applicants got fast track approval, and who did not, that would undoubtedly be unfair, unreasonable, and a violation of the Charter. And if that grouping had an impact on the outcome, far more egregiously so.

Make no mistake, some criteria is entirely legitimate, makes good sense, and reputable lawyers (and many others, those who are not a Canadian lawyer, me too) not only accept but support, applaud even, appropriate and reasonable screening, including elevated scrutiny and investigation, including adverse inferences. It is entirely legitimate, and desirable, to employ screening tools that better identify those who engage in fraud, who are perpetrating criminal enterprises, who pose security risks to Canadians . . . and, very importantly, those who simply do not qualify.

The difference is not semantic. Some criteria is fair. Some is not. Some criteria is rationally related to this or that real risk. But one does not need to dive too deeply into judicial decisions or government records to find scores of instances in which reasons for decisions have NOT been justifiable, or transparent, or intelligible.

Identifying which is which, therein lies the rub (so some say the bard says) . . . and who gets to do the identifying matters.

That is, what assurances are there that the reasons influencing decisions having an impact on people's lives are reasonable, rationally related to legitimate issues? versus arbitrary, capricious, or simply not much relevant?

Illustration:
Was full blown RQ issued to citizenship applicants who submitted identification issued within the previous ninety days (briefly an actual part of citizenship application triage criteria adopted by Harper's government in 2012, secretly adopted but leaked) reasonable, rational, when it resulted in a burdensome and lengthy process which at the time took around two years, compared to routine applications (like mine) going from submission to oath in six to eight months? By the way, this criteria, along with others that were grossly disproportionate to what was actually probative, like any period of unemployment resulting in full blown RQ, did not last long (less than a year) for good reason, but tens of thousands of citizenship applicants suffered for it before changes were made.​

"Trust us" does not cut it. A fair and just society demands more verification than that.

By the way . . . NO, it is not true that risk criteria has historically never been publicly accessible, not even close.

There was a major migration removing risk factors or criteria from public view during the period of time that Stephen Harper had a majority government (2011 to 2015). Among the more salient examples was replacing the publicly available operational manual CP 5 Residence, regarding assessment of residency for citizenship eligibility, which explicitly described "reasons to question residency." That was replaced by Operational Bulletin 407, much of which was redacted (as in kept secret) including the contents of the File Requirements Checklist, which in turn prescribed the "triage criteria" which would render an application for citizenship a "residency case" subject to full blown RQ (note that there was a widely distributed leaked copy of the FRC). That criteria (I mentioned some above) was so grotesquely disproportionate to actual risks that it was dramatically revised in well less than a year . . . but not because it was imposing an inordinately excessive burden on a large number of qualified applicants (that sort of thing never seemed to bother the Harper government), not because judicial review pushed back, but because it swept up so many applicants, for no good reason, the cost was more than the government would bear. That should not be the standard.
 

rogersfail

Newbie
Apr 12, 2025
3
3
Or in simple terms for IRCC decisions about various cases: I think anyone would recognize that (for example) ten seconds to read and consider a file would absolutely be insufficient. Any accelerated decision that doesn't leave time to read the file's documents would be entirely dependent on the data input process / characterization of the file's contents being accurate and correct (easier nowadays with online, etc).

And staff under pressure to meet difficult quantitative targets ('productivity') that are also given some kind of 'scorecard' summary (eg of risk factors or other) are effectively going to feel pressure to render decisions that are (in effect) 100% in accordance with those scorecard measures. (There are some ways to deal with this - eg 'blind' file evaluation, where the analysts are not given the scorecard summaries - but they're expensive in terms of productivity).

Bottom line: it can't just be put to 'the workers.'
Which is exactly what I said. I actually deal with this all the time in my line of work (working with overseas vendors), and the number one question we have for our suppliers is what systems they have in place to keep worker errors from actually making it into the product they ship to Canada. We know mistakes and laziness happen; it comes part and parcel with having humans do a job. However, as manufacturers and supposed experts in their field, they should be able to quality-check their outputs at least as well as the customer (me) can. The same way you would expect IRCC staff, who are supposed experts in the domain of applying immigration law, not to make flat-out incorrect determinations that a lawyer can easily overturn.

Sure, the public can press IRCC to give more disclosure on how their AI systems work and on what those systems flag, but it all falls apart if humans do not correctly process the files the system flags. I had the exact same situation in my experience: a supplier was making bad parts, we determined their inspection machine wasn't adequate, and we got them to buy a more expensive machine that would measure all the parts properly. But bad parts still kept arriving. Turns out the machine was measuring the parts and finding them bad, but the staff on the ground would still just ship the parts anyway, because no one told them how to review the measurement data.

That's why it goes back to how IRCC trains and monitors their staff: are they actually trying to find people who cut corners or who have bias? If there is a person processing 30% more files than average, is that looked at to see if they're doing it at the same quality as everyone else? If there is a person rejecting 20% of their files while people handling similar applicant demographic profiles are denying 5%, is that being tracked or looked into at all? Those are questions I would be more interested to see IRCC answer, because this is stuff immigrants have been dealing with long before natural language models were even a PhD subject.
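The kind of monitoring described above (an officer rejecting 20% of files while peers with a similar case mix reject 5%) can be sketched as a simple statistical check. This is purely illustrative: the numbers, the threshold, and the use of a two-proportion z-test are my assumptions, not anything IRCC is known to use.

```python
from math import sqrt

def rejection_rate_zscore(officer_rejections, officer_files,
                          peer_rejections, peer_files):
    """Two-proportion z-test: is an officer's rejection rate out of line
    with peers handling similar applicant profiles?"""
    p_officer = officer_rejections / officer_files
    p_peers = peer_rejections / peer_files
    # Pooled proportion under the null hypothesis (no real difference)
    pooled = (officer_rejections + peer_rejections) / (officer_files + peer_files)
    se = sqrt(pooled * (1 - pooled) * (1 / officer_files + 1 / peer_files))
    return (p_officer - p_peers) / se

# Hypothetical numbers matching the example above: 20% vs roughly 5%.
z = rejection_rate_zscore(officer_rejections=40, officer_files=200,
                          peer_rejections=100, peer_files=2000)
print(f"z = {z:.1f}")  # a |z| well above ~3 would flag the caseload for human review
```

The point of such a check is not to conclude bias automatically (the case mix may differ in ways the grouping misses) but to decide which officers' files get a closer manual look, which is exactly the oversight question raised here.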
 

armoured

VIP Member
Feb 1, 2015
18,843
9,961
Turns out the machine was measuring the parts and finding them bad, but the staff on the ground would still just ship the parts anyway, because no one told them how to review the measurement data.
Overall I don't think our views are all that different, so just minor points:
-on your experience above, and the staff on the ground not [doing the thing]: it wasn't lazy or ignorant staff, the system hadn't been changed to use the inputs. (Which was the fault of other staff and the management system, of course).

-I think your situation above shows that when there is oversight (purchaser checking, etc) and some degree of transparency (even if only to some limited group such as yours), it provides the accountability and corrections to bad directions that - ultimately - @dpenabill was referring to.

It seems to me that IRCC does not have that - or at least if there is something like that, it's not shown or shared with the public in even some limited way. And that's the kind of situation where over-reliance on systems and processes can result in some of what we see: non-accountable responses about what the reasons for rejection were (and other potential problems).

The 'hire a lawyer, appeal, and see if an overworked court system can figure it out' approach is not - in my opinion - up to the task. I don't have an easy answer. I think increased use of these systems (whatever form) will require a better system to monitor and provide accountability than kicking it out to the courts.