F16 - Public Opinion and the Politics of AI and Digital Technologies / Opinion publique et politique de l’IA et des technologies numériques
Date: Jun 5 | Time/Heure: 08:30am to 10:00am | Room/Salle:
Chair/Président/Présidente : Anne Imouza (McGill University)
Co-Chair/Président/Présidente : Tyler Romualdi (Western University)
Discussant/Commentateur/Commentatrice : John McAndrews (McMaster University)
Theorizing Artificial Intelligence Acceptance: The Role of Social Identity and Intersectionality: Anne Imouza (McGill University)
Abstract: This paper introduces a critical approach to studying the acceptance of artificial intelligence (AI) by examining the differential benefits accrued from emerging technologies. It aims to extend existing frameworks, such as the Technology Acceptance Model (TAM2), that are designed to elucidate the factors influencing individuals’ acceptance of or scepticism toward new technologies. I argue that, unlike traditional models, such frameworks should incorporate an intersectional and critical race perspective to explore variations in technology acceptance across diverse demographic and social identity categories. While previous models account for various core constructs and predictors of individual behavior toward technology, this study argues for a more fine-grained analysis of how concerns rooted in intersecting identities can produce subgroup variation in the acceptance of AI systems. Such an approach enables a comprehensive evaluation of how intersecting identities shape levels of technology acceptance. Drawing on critical race theory, feminist theory, and technology studies, this interdisciplinary framework provides a robust theoretical foundation for understanding AI acceptance in society and in survey-based research.
Risk and the Gender Gap in Attitudes Toward Artificial Intelligence: Sophie Borwein (University of British Columbia), Beatrice Magistro (California Institute of Technology), Michael Alvarez (California Institute of Technology), Bart Bonikowski (New York University), Peter Loewen (Cornell University)
Abstract: The potential for artificial intelligence to profoundly disrupt life and work has prompted governments to consider how best to regulate the technology. Against this backdrop, this paper examines the gender gap in attitudes toward AI, with a focus on how gendered differences in risk perceptions drive support for its adoption and regulation. Analyzing closed- and open-ended responses from a survey of approximately 3,000 respondents across Canada and the United States, we show that women are more skeptical than men of AI’s economic benefits and more likely to emphasize economic risks such as job displacement. Our analysis identifies two key drivers of this gender gap: women’s higher levels of latent risk aversion and their greater exposure to AI-related risks. To establish a causal relationship between risk and AI attitudes, we further show experimentally that as the perceived benefits of AI become more uncertain, women’s support for companies adopting AI falls more sharply than men’s, and their support for government intervention against AI increases. Given technology’s capacity to reinforce gender inequalities, we conclude that AI policies and regulations that fail to incorporate women’s perspectives risk perpetuating these disparities in the workplace and society.
Who Rages Against the Machine? Dread, Controllability, and Public Backlash Toward AI Advances in the US, Canada, and Japan: Tyler Romualdi (Western University), Tyler Girard (Purdue University), Mathieu Turgeon (Western University), Yannick Dufresne (Université Laval), Takeshi Iida (Doshisha University), Tetsuya Matsubayashi (Osaka University)
Abstract: What explains citizens’ apprehension toward and backlash against emerging artificial intelligence (AI) technologies? Public concerns about AI and its associated risks are well documented. Yet substantial disagreement exists regarding the factors that explain individuals’ unease and the reasons behind their opposition to AI development. While recent research suggests that feelings of dread (the perceived magnitude of the risk associated with the technology) and concerns about controllability (the perceived ability to manage its development and consequences) are distinct factors shaping AI attitudes, it remains unclear whether they play a causal role. We build on this research by investigating whether these two dimensions, dread and controllability, causally affect attitudes toward AI. To do so, we focus on the contentious issue of AI in organ transplant allocation and patient prioritization decisions. Using survey experiments conducted in the United States, Canada, and Japan, we manipulate how AI is framed in terms of its potential risks (dread) and controllability within this application domain. The findings highlight how varying concerns about dread and controllability shape citizens’ apprehension toward controversial AI applications, their unease about the implications for human expertise, and their calls for restrictions on AI development cross-nationally.
Understanding How the Public Judges Online Misinformation: The Role of Partisanship: John McAndrews (McMaster University), Ori Freiman (McMaster University), Esteban Robles Dávila (Universidad ICESI), Clifton van der Linden (McMaster University)
Abstract: Misinformation is a complex social challenge – one made more difficult by the rise of new digital technologies. Amidst a proliferation of research activity and public debate, however, we argue that the public’s voice has been largely missing. With a handful of important exceptions, few scholars have asked: What do citizens want done – or not done – about misinformation online? To help answer this question, this paper focuses on the potential role of partisanship and partisan bias in structuring individuals’ judgments. We present two novel, between-subjects survey experiments, each fielded in 2024 to opt-in panel samples of Canadians. In the first experiment, participants are presented with a hypothetical scenario in which a person posts a message to social media containing a false statement about an election candidate; the partisanship of the person posting is varied. In the second experiment, participants are presented with a hypothetical scenario in which a person posts a message to social media containing a false statement about the next federal election; in this instance, both the partisan orientation of the message and the national origin of the person posting are varied. Post-treatment, participants in both experiments are asked to assess the harm of the hypothetical message and to indicate their support for, or opposition to, the government requiring the social media website to remove the message. By estimating the scope of partisan bias in this domain, the paper yields important implications for current policy debates about government responses to misinformation in the digital era.
Artificial Influence? Comparing AI and Human Persuasion in Reducing Belief Certainty: Natasha Goel (University of Toronto), Thomas Bergeron (University of Toronto), Blake Lee-Whiting (University of Toronto), Thomas Galipeau (University of Toronto)
Abstract: People often resist updating their beliefs even when those beliefs are contradicted by strong evidence, making efforts to persuade them seem futile. While some recent research suggests AI could be a solution to this problem, its persuasive capacity remains underexplored. This pre-registered study tests five hypotheses by examining whether Large Language Models (LLMs) can reduce belief certainty in a sample of N=1,690 Americans recruited through CloudResearch Connect, all of whom hold at least one false or unsupported belief. All treated participants engaged in up to five rounds of conversation with ChatGPT-4o, but the treatment manipulated who they believed they were talking to: ChatGPT, an expert on the topic, or a fellow survey respondent who disagreed with them. Across all conditions, we found that AI reduced participants’ certainty in their false or unsupported beliefs, with 29% of participants even switching to the accurate counterpart of the belief post-treatment. Interestingly, in considering source effects, ChatGPT was not significantly more persuasive than a fellow survey taker, but the expert condition was. We did not find that perceptions of AI objectivity and knowledgeability moderated effects in the AI condition, nor did anti-intellectualism moderate effects in the expert condition. By shifting the focus to the messenger, our results contribute to our understanding of effective strategies for persuasion. We show that AI can indeed be persuasive, even in the face of strongly held beliefs; however, when source identity is considered, human experts hold a much stronger appeal.