I enjoy technology. I enjoy talking about it, I enjoy using it, and I enjoy thinking about its potential to shape our future. Using technology has taught me many valuable skills and lessons; one of the most important is to be wary of hype.
Throughout my career in education, I have seen how exciting innovations can be hailed as a perfect solution, leading to unrealistic expectations and disappointment. This is certainly not an experience unique to me, to this institution, or even to this century. Some once imagined a world in which traditional primary education would be abolished and replaced with TV screens in the cafeteria, beaming knowledge directly into the brains of the next generation.
Hype can also blind us to important considerations. In their paper, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, Emily M. Bender and colleagues examine the trend towards ever-larger language models, asking what risks this technology poses (not least the perpetuation of harmful stereotypes) and what solutions are available to mitigate them. You can watch Professor Bender speak more about this with the Alan Turing Institute:
Of course, tech alone will not save us. However, I continue to be inspired and enthused by the possibilities it offers to improve what we do, all the while trying to remain productively sceptical when we consider whether the University of London should adapt to, or mitigate against, EdTech's latest solutions, and to ensure those solutions benefit our students equitably.
For example, there’s significant concern around ChatGPT and how students could use it to avoid engaging with curriculum and assessment in ways that undermine the deep learning we aim to provide. But, as I discussed in January at the Metropolitan College’s Future of Education webinar, students are already using it, and often in less worrying ways. What we need to do is understand how these new tools are being used and how we can support that use, when it is right to do so. This is, in part, one of the missions of our Product Innovation team – the team that I work in.
Which leads me to Walter. We’re piloting the use of chatbot AI Tutors in some of our programmes. We hope this will be a complementary tool in supporting students through their journey with us, and we’re very proud of the diverse cohort of learners we cater to. We will be surveying them thoroughly about their experience – and they’ll provide the most important judgement.
I’ll be sharing some of that experience with you on this blog over the coming months. If you’d like to talk about the pilot, feel free to reach out!
Hype, or the future of teaching and learning?
- 3 Limits to AI’s ability to write student essays by Clare Williams for the LSE Impact Blog, 12 January 2023. Should ChatGPT prompt instructors to revisit how students are assessed, if rigorous, robust assessment of knowledge and understanding is to continue in the era of AI?
- Adapting college writing for the age of large language models such as ChatGPT: some next steps for educators by Anna Mills and Lauren M. E. Goodlad for the Critical AI blog, 17 January 2023
- How AI adds value for education institutions and for learners by Tom Moule for Jisc’s National Centre for AI blog, 17 September 2021
- Our Obsession with Cheating is Ruining Our Relationship with Students by Marc Watkins, for his Rhetorica blog, 26 January 2023
- AI governance and human rights: resetting the relationship by Kate Jones for Chatham House, 10 January 2023
Senior Product Innovation Manager