Academic misconduct, generative AI and authentic assessment

Photo by Ales Nesetril on Unsplash

Digital education reading group: March 2025

The rapid rise of generative AI tools presents both challenges and opportunities for educators, particularly in the realm of assessment design and academic integrity. As AI becomes more sophisticated and widely used by students, traditional methods of detecting misconduct—such as proctoring and AI detection tools—are proving increasingly limited. This raises fundamental questions about the validity, reliability, and fairness of assessments in higher education.

In this month’s digital education reading group, Kirsty Branch, Learning Technologist (Digital Assessment), invites us to reflect on how we might move beyond punitive responses to AI use and instead explore how assessment can be redesigned to align with authentic learning experiences. Authentic assessment — incorporating real-world relevance, higher-order thinking, and continuous feedback — offers a compelling alternative to traditional exams, but is it scalable? Should institutions embrace AI as a learning tool, or does this risk compromising academic standards?

Looking forward to hearing your thoughts and ideas! Let me know if you have any questions or need help accessing the materials.

List of topics discussed at previous meetings