W309 - Workshop 3 - Artificial Intelligence in the Classroom - Friend, Foe or Frenemy?
Date: Jun 4 | Time: 08:30am to 10:00am | Location:
Chair/Président/Présidente : Saad Khan (University of Winnipeg)
Discussant/Commentateur/Commentatrice : Fred Cutler (University of British Columbia)
A Framework of Technological and Societal Shifts as Drivers of Change in Political Science Education: Samantha Twietmeyer (Carleton University)
Abstract: In the past decade, the practice of teaching and learning in political science departments across Canada has been rocked by a series of societal and technological shifts. The online transition during the pandemic and the relatively sudden onset of Artificial Intelligence (AI) and Large Language Models (LLMs) are only the most recent in a series of critical junctures in educational practice. In Canada, these technological shifts were preceded and accompanied by more gradual changes around Indigenous knowledge, decolonial pedagogies, and the introduction of Universal Design for Learning (UDL) and Equity, Diversity, Inclusion (EDI) principles. These are not the first historical junctures for teaching and learning, the onset of the internet being one of the most significant; however, studies of technological acceleration suggest that more such changes, particularly technological ones, will continue to occur at an increasingly rapid pace. This paper examines the extent to which each of these shifts in political science education is indeed a critical juncture and how its impact can be measured. Using Paul Pierson’s (2004) time horizon of causal accounts framework, supplemented with the variables of exogenous and endogenous drivers of change (Streeck & Thelen 2005; Gerschewski 2020), the study evaluates these shifts on three parameters: the temporal modality of the shifts in terms of their sudden or gradual onset, the exogenous or endogenous nature of the shifts, and the response of political science departments in implementing changes to their teaching and learning. The paper seeks to improve understanding of the nature of change in teaching and learning and of how to manage sudden technological and societal change in the future.
AI teaching in IR: participation, writing and citation in the era of LLMs: Dan Bousfield (Western University)
Abstract: The transformation of teaching alongside the proliferation of large language models (LLMs) has been rapid and continues to accelerate. Of particular importance is the way that generative technologies like LLM chatbots challenge some of the basic assumptions of academic work, such as writing, referencing, and citation of texts and materials. This paper examines the challenges LLMs pose to teaching International Relations (IR), emphasizing how generative technology presents both opportunities and challenges for IR teaching. The paper explores three ways that LLMs can contribute to IR teaching: first, the democratization of assessment in the context of multiple-choice exams; second, the synergy between assisting students with prompts and developing academic voice in their papers; and third, the challenges posed to citation, paraphrasing, and authorship in the classroom. The paper draws on classroom experience to critically assess the ways LLMs can contribute to IR teaching.
We're Only Human: Educative Frameworks for Artificial Intelligence, Academic Integrity, and Writing in Political Science Courses: Andrew Owen (University of British Columbia)
Abstract: Generative AI (GenAI) poses a fundamental challenge to a cornerstone of political science teaching and learning: take-home essays. This paper reports findings from the first year of a study examining how instructors can maintain the pedagogical value of writing assignments while adapting to GenAI technologies. Using focus groups, surveys, and classroom observations, our team evaluated new teaching strategies and assignment designs across multiple departments in the Faculty of Arts at UBC. We are testing materials grounded in an educative approach to academic integrity that provides an alternative to the standard punitive model. Specifically, our interventions aim to educate students on why we assign writing, how the skills they develop can translate into their future endeavours, and what constitutes inappropriate GenAI use. We also test student-facing materials that address key concerns including privacy, intellectual property, bias, and Indigenous data stewardship. We conclude with a summary of evidence-based best practices.
Preparing Students for Law School: The Impact of AI: Clare McGovern (Simon Fraser University)
Abstract: In a world where AI is widely used, what types of skills will students need to succeed in legal careers? This paper will report on a Scholarship of Teaching and Learning project integrating AI exercises into law and politics courses. The goal is to help students develop the skills to use AI critically, transparently, and ethically.
The project builds on interviews with current law school students about the skills they need for their programs and the impact of AI on their studies. I used these insights to develop low-stakes assessment tools for two courses. One is a large first-year politics and law course where students are considering career tracks in government and law. The second is a fourth-year human rights law seminar, where many students are applying to law school.
These exercises draw on advice from my interviewees on preparing for law school. They help students consider the trade-offs between using AI and traditional sources for legal research and writing. The aim is