Digital Education Studio

Rethinking Academic Integrity in the Age of AI: ethical approaches, guidance and policies to inform an inclusive education

Haylee Fuller, Head of the Appeals, Complaints and Conduct Office

Haylee Fuller shares some of the work done at Queen Mary to explore the opportunities and challenges for teaching, learning and assessment since the boom of generative AI, covering updates to policies, recommended approaches and useful resources.

When generative AI exploded into popular use, there was an obvious and disruptive impact on many traditional ways of thinking about academic integrity and assessment security. There is little need to recount fears about massive numbers of misconduct cases or a complete loss of control over assessment security, followed by short-lived hopes for technological solutions such as AI detectors. I joined Queen Mary as the Head of the Appeals, Complaints and Conduct Office in March 2023, so my first challenge in the role was to think about how we would respond to this new environment in practice. I quickly received emails from colleagues in Schools/Institutes reporting that, according to GPTZero (or other ‘detectors’), between 30 and 60 students in their modules had written assessments with AI. Within the Appeals, Complaints and Conduct Office, we had to reflect carefully on what advice to give to Module Organisers (MOs) and our Misconduct Panel (who reach decisions about the cases).

Traditional ways of thinking about academic integrity or misconduct are mostly deontological (a rules-based approach), with policies built around lists of dos and don’ts. This kind of approach is ill-suited to new technologies that are at once innovative tools for future careers and success and a source of ethical or integrity concerns. Flexible approaches matched to discipline and context are necessary, so that the right balance is struck between innovative learning and the importance of integrity and ethics in our research and education. From a practical perspective, we still need guidance and clarity about how to respond to concerns. If there are no coherent and consistent rules, we need to consider other frameworks. Teleological approaches to ethics challenge us to think about the outcomes and consequences of actions, not just compliance with a finite set of rules.

Rethinking our policies and processes to focus on what we are trying to achieve (world-class, innovative and inclusive education and research) is helpful in this context.

Returning to the practical perspective, and to the concerns raised above, this means:

  • We shouldn’t be using technologies to make important decisions (such as findings of potential student misconduct) when we have concerns about their reliability and data governance, poorly understand how they work, and have good reason to worry about indirect discrimination against certain demographic groups (as with GPTZero or Turnitin’s AI detection).
  • We should focus on whether students have engaged with learning, and on ways we can be assured of this. In practice, this means we recommended that, instead of relying on ‘detectors’, MOs meet with students to ask about their submissions and the process they went through to produce them. In most cases, MOs left these conversations satisfied that their students genuinely understood the material (regardless of whether AI was used). In the few cases that did progress to misconduct, this occurred only when it seemed clear that students had not genuinely engaged with learning.
  • Our practical procedures support the work of colleagues promoting innovative assessment design.
  • We collaborated to produce guidance for students about generative AI, and to encourage them to think critically about how and when to use it.
  • Our Academic Misconduct Policy was amended to make clear that technology use constitutes misconduct only when it is not appropriate to use it, or when it is not used with transparency.