F16 - The Politics and Consequences of AI
Date: Jun 14 | Time: 08:30am to 10:00am | Room: 680 Sherbrooke St. West 1085
The Multidimensional Structure of Risk: How Dread and Controllability Perceptions Shape Opinions About Artificial Intelligence: Mathieu Turgeon (University of Western Ontario), Tyler Romualdi (University of Western Ontario), Tyler Girard (Purdue University)
Abstract: Studies of public opinion about new and emerging technologies are gaining momentum. The rise of ChatGPT and other artificial intelligence (A.I.) programs has raised meaningful concerns about academic integrity, personal security, and the spread of misinformation. However, questions persist about how one's judgement of the seriousness or pervasiveness of new technologies affects public acceptance. Previous work suggests that individual risk evaluations have become increasingly multidimensional, with beliefs about familiarity and the technology's degree of danger often serving as primary concerns. Yet two overlooked dimensions with meaningful implications for opinions about the acceptance and support of new technologies in society are perceived dread and controllability. These refer to beliefs about the perceived magnitude of the risk posed by the technology (dread) and the suspected capacity to control the technology's growth and outcomes (controllability). We leverage an original cross-national survey with an embedded experiment to examine three primary research questions. First, what is the extent of dread and controllability concerns regarding A.I. technology in Canada and Japan? Second, who is most susceptible to dread and controllability concerns posed by A.I. technology in these contexts? Lastly, how do frames showing varying degrees of the perceived magnitude and controllability of technological risks affect public opinion about adopting A.I.-based technology in society, and does this vary by policy domain? The results demonstrate the importance of evaluating the multidimensional nature of citizens' technological risk perceptions and how these threats are communicated to the public.
Contingent Public Support for Artificial Intelligence? Evidence from Six Survey Vignette Experiments: John McAndrews (McMaster University), Ori Freiman (McMaster University), Jordan Mansell (McMaster University), Clifton van der Linden (McMaster University), Anwar Sheluchin (McMaster University)
Abstract: Citizens—who are both potential users of AI and potentially subject to public and private decisions made with AI—have an important role to play in the emerging conversation about how to regulate AI. This paper contributes to this fast-developing public conversation about regulation by exploring how public support for AI, as well as the restrictions placed on it, may depend on three factors: domain of use, motivation, and degree of autonomous decision-making. To test these three factors, we designed six vignette experiments that were fielded in 2023 as part of an online survey to a large opt-in sample of Toronto residents.
The paper extends existing research in several ways. First, it unpacks the motivations for AI adoption that the public finds most compelling—comparing support for adoption prompted by accuracy, speed, or cost-cutting. Second, it leverages the large survey sample to explore the interactions between factors—specifically whether the effects of motivation and autonomous decision-making on public support depend on a wide range of public and private domains, allowing for a more nuanced assessment of the generalizability of public attitudes across contexts. Third, it integrates individual-level psychological traits—such as optimism and openness to experience—allowing the evaluation of how effects may be moderated by individual dispositions.
Who Benefits and Who Loses? The Perceived Effects of Generative AI on Labour Markets: Sophie Borwein (University of British Columbia), Beatrice Magistro (Caltech), R. Michael Alvarez (Caltech), Peter Loewen (University of Toronto)
Abstract: The rapid diffusion of generative artificial intelligence (AI) has the potential to transform labour markets, yet it will take time to uncover the broader impacts of this technology on labour productivity and inequality. In the meantime, how governments and workplaces approach the use and regulation of generative AI will depend on how people perceive its benefits and costs. Given the broader uncertainty around these technologies, this paper asks: how do people reason about the effects of these new technologies on labour markets? Who do people perceive will be the beneficiaries of these technologies, and moreover, can providing them with information about the possible benefits for certain groups of workers shift their attitudes? Finally, this paper asks what policies people support in response to generative AI. We answer these questions by drawing on two pre-registered survey experiments of respondents in Canada and the United States.
The Third Sector Entering the Era of Artificial Intelligence: A Longitudinal Study of Early Adoption in Canadian Registered Charities: Gboyega Ojo (Carleton University), Callie Mathieson (Carleton University), Paloma Raggo (Carleton University)
Abstract: The introduction of ChatGPT in 2022 brought artificial intelligence (AI) to the forefront, sparking discussions and creating opportunities in various sectors. While governments have heavily invested in digital governance, the nonprofit sector has largely been left on its own to adapt to and adopt new technologies. Our team at the Charity Insights Canada Project sought to understand perspectives within Canadian nonprofits by conducting surveys among approximately 1000 registered charities across Canada.
Feelings about AI are mixed; some individuals are concerned about the biases present in these technologies, fearing that certain groups may be marginalized. Others perceive this new "AI era" as an opportunity for growth and advancement. In either case, it is imperative to address knowledge gaps about digital tools like ChatGPT and to understand the current state of data literacy in the Third Sector.
In this paper, we share insights from two surveys conducted six months apart with a panel of 1000 registered charities across Canada, beginning right after ChatGPT’s initial launch. Our data show that initially, the sector displayed a lack of understanding and a high level of skepticism regarding the potential benefits of AI for their operations. The most significant concern was about job obsolescence and whether organizations could adapt quickly to the required skills. Half a year later, the outlook had started to change. There was a noticeable shift towards a more positive view of AI, yet there was also a stark realization that many organizations were not prepared to fully integrate these technologies.
To interpret these changes, we applied the Diffusion of Innovation (DOI) Theory by E.M. Rogers, focusing on how the Third Sector adapts to technological innovation. We also examined different types of organizations based on size, age, and location to discern whether their reactions to AI varied at the initial stages of adoption.
Our findings lay the groundwork for understanding the early adoption phases of AI in the Third Sector and could inform public policy on how best to support this technological transition. We plan to conduct two additional rounds of surveys in 2024 to continue tracking these trends. This paper is the first phase of a research agenda that aims to compare public servants' adoption of AI in their work and to understand its impact on both public and private organizations.