Political Behaviour/Sociology



F19(a) - Methodological Issues in the Study of Political Behaviour II

Date: Jun 14 | Time: 01:45pm to 03:15pm | Room:

Chair/Président/Présidente : Mathieu Lizotte (Université d'Ottawa)

Discussant/Commentateur/Commentatrice : Mathieu Lizotte (Université d'Ottawa)

Comprendre l’interaction entre les enjeux politiques, la cognition et le vote: Marc A. Bodet (Université Laval), Lauriane Blanchette (Université Laval), François Vachon (Université Laval)
Abstract: There is an extensive literature in political science and psychology on the role of issues in electoral behaviour. Although the objects of study are often the same, political scientists tend to draw on theoretical concepts from classic work in public opinion (e.g., salience, partisan bias), whereas psychologists approach the study of issues with a greater focus on cognitive processes measured with applied methods (e.g., eye tracking, physiological responses). During election campaigns, several applications use the positions of parties and of the electorate on public policy issues to inform voters. In this online experiment, we study the electoral consequences of sharing this type of information in a controlled environment. After a series of pre-treatment questions covering, among other things, their political and partisan preferences, respondents are asked to reflect on their political position relative to those of the provincial parties and, above all, on the potential distance between their own positions on two dimensions (identity and socio-economic) and the position of their preferred party. This work is part of a new effort in political science to integrate research and methods from the cognitive sciences into the study of public opinion.
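A minimal sketch of the spatial-proximity logic the abstract describes: computing the distance between a respondent's self-placement and each party's position on the two dimensions (identity and socio-economic). All party names and positions below are hypothetical illustrations, not the authors' experimental materials.

```python
# Illustrative sketch (not the authors' materials): proximity between a
# respondent and each party on two issue dimensions.
import numpy as np

# Hypothetical party placements: (identity, socio-economic), scaled to [-1, 1].
parties = {
    "Party A": np.array([-0.6, -0.4]),
    "Party B": np.array([0.2, 0.7]),
    "Party C": np.array([0.8, -0.1]),
}

def issue_distances(respondent, parties):
    """Euclidean distance between one respondent and each party."""
    return {name: float(np.linalg.norm(respondent - pos))
            for name, pos in parties.items()}

respondent = np.array([0.1, 0.5])        # hypothetical self-placement
distances = issue_distances(respondent, parties)
print(distances)
print("Closest party:", min(distances, key=distances.get))
```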


Inattentives and How to Find Them: William Poirier (University of Western Ontario)
Abstract: How can one measure inattention in secondary data, where attention checks and response times are unavailable? This research explores how well Guttman errors, Mahalanobis distance, and the novel Downsian errors from Fowler et al.'s (2023) Moderates emulate RTAC scores. We leverage the replication data from Read, Wolters, and Berinsky (2022) to construct a vast array of cases from which the metrics are computed. We find poor performance across all metrics and suggestive evidence that response-pattern strategies do not capture satisficing behavior; rather, they appear better suited to alternative forms of inattention.
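For reference, here is a minimal sketch of how the two established metrics named above, Guttman errors and Mahalanobis distance, can be computed from a binary response matrix. The simulated data and the 95th-percentile flagging rule are illustrative assumptions, not the paper's procedure.

```python
# Illustrative sketch: two response-pattern metrics for flagging possible
# inattentives when attention checks and response times are unavailable.
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 10))   # hypothetical 0/1 survey items

# Guttman errors: order items from most- to least-endorsed; an error is a
# pair where a rarer item is endorsed while a more common one is not.
Xs = X[:, np.argsort(-X.mean(axis=0))]

def guttman_errors(row):
    return sum(int(row[i] == 0 and row[j] == 1)
               for i in range(len(row)) for j in range(i + 1, len(row)))

g = np.array([guttman_errors(row) for row in Xs])

# Mahalanobis distance of each response vector from the sample centroid.
mu = X.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
m = np.array([mahalanobis(row, mu, cov_inv) for row in X])

# Flag respondents in the upper tail of either metric (illustrative cutoff).
flagged = (g > np.quantile(g, 0.95)) | (m > np.quantile(m, 0.95))
print(flagged.sum(), "of", len(X), "respondents flagged")
```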


Beyond Multiple Choice: Capturing Nuanced Public Opinion with Large Language Models: Laurence-Olivier M. Foisy (Université Laval), Hubert Cadieux (Université Laval)
Abstract: Traditional survey analysis often relies on the limited scope of closed-ended questions, potentially neglecting the rich, nuanced insights that open-ended responses provide. Addressing this gap, our study introduces an innovative use of Large Language Models (LLMs) to analyze and encode open-ended survey data. This methodology enhances the quantification of qualitative feedback, revealing real-time shifts in public opinion and reducing biases inherent in question design. We demonstrate how LLMs can capture the subtleties of respondent sentiment, leading to a deeper and more adaptable comprehension of population trends. The study also advances the application of AI for response weighting and data imputation, offering a refined analysis that more accurately reflects the collective viewpoint. By comparing traditional sampling methods with our LLM-augmented approach, we expect to show that integrating LLMs with nonprobability samples markedly improves their precision, challenging previous reservations about the validity of online, opt-in surveys. Additionally, we examine the role of poststratification adjustments and how they can be enhanced by the analytic power of LLMs. Our contribution is twofold: we provide a model for incorporating LLMs into survey methodologies, and we introduce an R package that operationalizes LLMs for survey data interpretation within the R environment, using open-source models to facilitate access and integration. This tool lets researchers apply LLM capabilities directly in their existing workflows, reducing costs and thus democratizing the advanced analysis of open-ended responses. This work reimagines the potential of open-ended survey questions, advocating for a new standard in survey research that prioritizes accuracy and temporal sensitivity, thereby informing more effective policy and strategic decisions.
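The core encoding step the abstract describes can be sketched briefly. The authors' tool is an R package built on open-source models; the Python version below is only an assumption for illustration, and the codebook, prompt, and model name are hypothetical.

```python
# Illustrative sketch (not the authors' R package): asking an LLM to encode
# one open-ended survey answer into a fixed codebook category.
from openai import OpenAI

CODEBOOK = ["economy", "health care", "environment", "immigration", "other"]

client = OpenAI()  # assumes an OpenAI-compatible endpoint and an API key

def encode_response(answer: str) -> str:
    """Map an open-ended answer to exactly one codebook category."""
    prompt = ("Classify this survey answer into exactly one category from: "
              + ", ".join(CODEBOOK) + ".\n"
              "Reply with the category only.\n"
              "Answer: " + answer)
    out = client.chat.completions.create(
        model="gpt-4o-mini",             # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = out.choices[0].message.content.strip().lower()
    return label if label in CODEBOOK else "other"

print(encode_response("Rents are out of control and groceries cost too much."))
```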


Will You Please Participate? Strategies and Pitfalls in Recruiting Committed Survey Participants: Callie Mathieson (Carleton University), Paloma Raggo (Carleton University)
Abstract: The long-standing need to develop and evaluate evidence-based solutions, programs, and services that effectively address complex social and environmental challenges became more urgent following the COVID-19 pandemic. The data collected by governments and researchers on the nonprofit sector and the needs of those it serves are subject to a stark publishing lag, typically being released 18 months after collection, and thus offer limited insight into the real-time issues and trends affecting charities' activities, especially in times of crisis, when charities often serve as front-line responders. Researchers at the Charity Insights Canada Project, based at Carleton University, recruited a representative rapid-response panel of over 1,000 charities across the country to provide weekly insights into the needs of the charitable sector in Canada. The randomly selected participants (the highest-ranking executive available at each organization contacted) agreed to answer weekly surveys for one full year about their activities, their challenges, and the trends they saw emerging. The literature offers limited insight into panel recruitment outside STEM-related fields, and even less on surveying nonprofit organizations and their staff. How do we convince extremely busy, often under-resourced organizational staff to participate in a yearlong study without any monetary compensation? In this paper, we review the various recruitment strategies we tested and reflect on the efficiency of surveying nonprofit/civil society organizations. Unexpectedly, we have been able to sustain an average response rate of 67% (n=948). Recruitment success and commitment have varied considerably with the formal nature of each organization, its perception of the nonprofit sector, and our ability to reach people within some hard-to-reach target groups. The lessons from our recruiting effort speak to scholars interested in studying hard-to-reach organizations and, more generally, suggest innovative ways to increase participant commitment to such studies.