McGill Law Journal https://lawjournal.library.mcgill.ca/ <p>The <em>McGill Law Journal</em> contributes to legal research and scholarship on topics of significant importance through the publication of outstanding peer-reviewed articles, case comments, and book reviews. The <em>Journal</em> publishes the work of professors, judges, researchers, and practitioners. As a student-run organization, the <em>Journal</em> provides a meeting point for lively exchange between students and members of the legal community by way of annual events, such as symposia and conferences, and through its podcast channel.</p> McGill Law Journal en-US McGill Law Journal 0024-9041

AI’s Democratic Challenge https://lawjournal.library.mcgill.ca/article/view/1625 <p>Artificial intelligence (AI) is variably identified as a “job killer,” as “inhuman,” as “unpredictable” and “ungovernable,” but also as the greatest technological innovation in generations. Lawyers struggle with AI across a host of legal fields, including consumer protection, constitutional, employment, administrative, criminal, and refugee law. While AI is commonly discussed as a question of technological progress, its core challenge is a political one. As AI is used as a tool to review employment recruitment files, to assess loan, mortgage, or visa applications, and to collect and process data on “suspicious” actors, it deepens existing inequalities and socio-economic vulnerability. Given the rapidly expanding reach of AI into most facets of social, economic, and political life, AI shapes people’s access to democratic life in an unprecedented and increasingly precarious manner. Efforts to address its promises and perils through a lens of “AI ethics” can therefore hardly capture the scope of challenges that arise from AI. Seen from a historical perspective, then, AI accentuates and reinforces trends of inequality, social alienation, and political volatility that began long before AI became implicated in society’s daily life.</p> Peer Zumbansen Copyright (c) 2024 Peer Zumbansen https://creativecommons.org/licenses/by-nd/4.0 2024-10-01 2024-10-01 69 4 373 394 10.26443/law.v69i4.1625

Legal Definitions of Intimate Images in the Age of Sexual Deepfakes and Generative AI https://lawjournal.library.mcgill.ca/article/view/1626 <p>This article explores the evolution of Canadian criminal and civil responses to non-consensual synthetic intimate image creation and distribution. In recent years, the increasing accessibility of this type of technology, sometimes called deepfakes, has led to the proliferation of non-consensually created and distributed synthetic sexual images of both adults and minors. This is a form of image-based sexual abuse that lawmakers have sought to address through criminal child pornography laws and non-consensual distribution of intimate image provisions, as well as provincial civil intimate image legislation. Depending on the province in which a person resides and the age of the person depicted, they may or may not have protection under existing laws.
This article reviews the varied language used to define what is considered an intimate image, ranging from definitions seemingly limited to authentic intimate images to those encompassing altered images and images that falsely present a person in a reasonably convincing manner.</p> Suzie Dunn Copyright (c) 2024 Suzie Dunn https://creativecommons.org/licenses/by-nd/4.0 2024-10-01 2024-10-01 69 4 395 416 10.26443/law.v69i4.1626

Responsible AI https://lawjournal.library.mcgill.ca/article/view/1627 <p>This article examines responsible AI as a public law-like movement that seeks to (self)regulate the design and use of AI systems. Using socio-legal methods, and the <em>Montréal Declaration for a Responsible Development of Artificial Intelligence</em> as an illustrative example, it explores responsible AI’s upshots for digital government. Responsible AI initiatives, this article argues, rely on two binary distinctions: (1) between artificial and natural intelligence, and (2) between the future and present/past effects of AI systems. These conceptual binaries “bind” such initiatives to an impoverished understanding of what AI systems are, how they operate, and how they might be governed. To realize justice and fairness, especially in digital government, responsible AI projects must reconceive of AI systems and their regulation infrastructurally and agonistically.</p> Jennifer Raso Copyright (c) 2024 Jennifer Raso https://creativecommons.org/licenses/by-nd/4.0 2024-10-01 2024-10-01 69 4 417 439 10.26443/law.v69i4.1627

The New Jim Crow https://lawjournal.library.mcgill.ca/article/view/1644 <p>Despite its purported neutrality, AI-based facial recognition technology (FRT) exhibits significant racial bias. This paper critically examines the integration of FRT within the Canadian immigration system. The paper begins with an exploration of the historical evolution of AI in border control, which was once rooted in physical barriers and now relies on biometric surveillance that risks replicating historical patterns of racial discrimination.</p> <p>The paper further contextualizes these issues within the broader discourse of algorithmic racism, highlighting the risks of embedding historical racial injustices into AI-powered immigration systems. Drawing a parallel between FRT and the Jim Crow laws that segregated and marginalized Black communities in the United States, it argues that biased FRT systems function as a modern mechanism of racial exclusion, risk denying Black and racialized immigrants access to refugee protection, and exacerbate deportation risks. It warns against the normalization of AI use in immigration decision-making without proper oversight, transparency, and regulatory safeguards.</p> <p>The paper concludes by calling for enhanced government transparency and adherence to procedural fairness in the deployment of FRT within the Canadian immigration system.
It further advocates for a “technological civil rights movement” to ensure that AI technologies, including FRT, uphold human rights and promote equity rather than perpetuate systemic racism.</p> Gideon Christian Copyright (c) 2024 Gideon Christian https://creativecommons.org/licenses/by-nd/4.0 2024-10-01 2024-10-01 69 4 441 466

A Reasonable Apprehension of AI Bias https://lawjournal.library.mcgill.ca/article/view/1645 <p>In 1997, the Supreme Court of Canada rendered a split decision in a case in which the central issue was whether an African Nova Scotian judge who had brought her lived experience to bear in a decision involving a confrontation between a Black youth and a white police officer had demonstrated a reasonable apprehension of bias. The case, with its multiple opinions across three courts, teaches us that identifying bias in decision-making is a complex and often fraught exercise.</p> <p>Automated decision systems are poised to increase dramatically in use across a broad range of contexts. They have already been deployed in immigration and refugee determination, in benefits allocation, and in assessing recidivism risk. There is also a growing use of AI assistance in human decision-making that deserves scrutiny. For example, generative AI systems may introduce dangerous unknowns when it comes to the source and quality of briefing materials generated to inform decision-makers. Bias and discrimination have been identified as key issues in automated decision-making, and various solutions have been proposed to prevent, monitor, and correct potential issues of bias. This paper uses <em>R. v. R.D.S.</em> as a starting point to consider the issues of bias and discrimination in automated decision-making processes, and to evaluate whether the measures proposed to address bias and discrimination are likely to be effective. The fact that <em>R. v. R.D.S.</em> does not come from a decisional context in which we currently use AI does not mean that it cannot teach us, not just about bias itself but, perhaps more importantly, about how we think about and process issues of bias.</p> Teresa Scassa Copyright (c) 2024 Teresa Scassa https://creativecommons.org/licenses/by-nd/4.0 2024-10-01 2024-10-01 69 4 467 487

Algorithmic Price Personalization and the Limits of Anti-Discrimination Law https://lawjournal.library.mcgill.ca/article/view/1646 <p>As much attention is turned to regulating AI systems to minimize the risk of harm, including harm caused by discriminatory or biased outputs, a better understanding of how commercial practices may contravene anti-discrimination law is critical. This article investigates the instances in which algorithmic price personalization (APP), i.e., setting prices based on consumers’ personal information with the objective of getting as close as possible to their maximum willingness to pay, may violate anti-discrimination law. It analyses cases in which APP could constitute <em>prima facie</em> discrimination, while acknowledging the difficulty of detecting this commercial practice. It discusses why certain commercial practice differentiations, even on prohibited grounds, do not necessarily lead to <em>prima facie</em> discrimination, offering a more nuanced account of the application of anti-discrimination law to APP.
However, once <em>prima facie</em> discrimination is established, APP will not easily be exempted under a <em>bona fide</em> requirement, given APP’s lack of a legitimate business purpose under the stringent test of anti-discrimination law, consistent with its quasi-constitutional status. This article bridges traditional anti-discrimination law with emerging AI governance regulation. Pointing to identified gaps in anti-discrimination law, it analyses how AI governance regulation could enhance anti-discrimination law and improve compliance.</p> Pascale Chapdelaine Copyright (c) 2024 Pascale Chapdelaine https://creativecommons.org/licenses/by-nd/4.0 2024-10-01 2024-10-01 69 4 489 513

Front Matter - v. 69, no 4 https://lawjournal.library.mcgill.ca/article/view/1624 Nicole Leger Copyright (c) 2024 Nicole Leger https://creativecommons.org/licenses/by-nd/4.0 2024-10-01 2024-10-01 69 4 10.26443/law.v69i4.1624