
Exploring how AI study assistants can create meaningful interaction in self-paced courses
In previous articles, colleagues have explored how our microcredentials are shaping curriculum architecture, lifelong education and learner journeys at the University of London. As part of the team working on this project, we have been thinking about a related question:
What should learning feel like inside a microcredential?
It matters because the learning conditions differ from those of our degree programmes. When learning is cohort-less, self-paced and on demand, many of the things we love about online learning, such as discussion forums, peer exchange and collaborative activities, don’t work in the same way. They depend on people moving through material together, roughly in sync. In a microcredential, that’s not a safe assumption.
Rethinking connection without a cohort
We’re still debating how to incorporate cohort-style features that don’t feel hollow, but we’ve also looked for other ways to make independent learning feel more connected. We’ve built experience in this over the last few years:
- in 2023–24, our pioneering pilot of an AI teaching assistant in our undergraduate and postgraduate Laws programmes
- in 2025, embracing Coursera’s range of AI-powered functionality to enhance our MOOCs
- in 2025, creating thought-provoking AI roleplay scenarios for our PGCert Learning and Teaching in Higher Education students.
Implementing our AI study assistant
That’s what led us to AI-supported activity design in our microcredentials. Specifically, using Noodle Factory’s study assistant (officially called Walter, though we went with “AI study assistant” because it does exactly what it says on the tin). The assistant is integrated into each microcredential and is trained on the course’s own academic content and learning outcomes. Learners can use it in several ways:
- to clarify concepts
- to test their understanding through real dialogue
- to prompt their own reflection
- to work through scenario-based practice in professional contexts.
Across different activities, scenarios and stages of the course, the assistant takes on different roles – from assistant to coach to simulation partner. The flexibility of the tool gives learning designers freedom to build activities that suit the context.

Positioning AI as a learning partner
Something we felt strongly about from the start was that the assistant shouldn’t position itself as an authority. We introduced it as a learning partner: something designed to help learners think. We consulted our Student Voice Group for their views on an AI study assistant, and they asked us to be clear and transparent about what it can do, what it can’t, and how data and privacy work. One of the lines we wrote for the introductory activity captures it well:
Think of your AI study assistant as a trusted learning partner, not a replacement for your judgement or expertise.
What excites us pedagogically is that it creates forms of interaction that are otherwise hard to sustain in self-paced courses. Learners get something to push back against, something that responds to their thinking rather than presenting a fixed path.
Designing meaningful AI-supported activities
It also stretched our learning designer colleagues. Designing a good AI-supported activity requires a different approach from writing a discussion prompt or a quiz. They were shaping scenarios, anticipating how a learner might respond, defining what role the assistant should play at each moment, and making sure the whole thing did something useful rather than being a novelty.
How does this work in practice?
We worked closely with colleagues at Noodle Factory to ensure that the learner experience in-platform was as seamless and straightforward as possible. This meant building deep links into the course page so learners progressed to the AI study assistant activities in the same way they did for the rest of the course content, and that once they were there the instructions were clear.

In one activity, for example, the learner is asked to take on the role of an analyst providing advice to a senior manager (something many of our learners will no doubt have to do in a professional setting). The value of this is that rather than the learner writing the advice and keeping it in their notes, we’re able to provide real-time feedback that encourages reflection and can be repeated as many times as the learner needs.
Early feedback and iteration
So far, feedback from the academics involved in the microcredentials has been positive, and this, alongside feedback we’ve received from students on previous pilots of the tool and our Student Voice Group, suggests that we’re on the right track when it comes to using AI in thoughtful and valuable ways.
We’re under no illusions, though, that this is the final form of AI in our courses. It’s one of many iterations, and good course design evolves. But the early experience has been encouraging. The more interesting challenge, and honestly the more interesting design question, isn’t whether AI can replace what human teaching does. We don’t think it can, and that’s not really the point. It’s whether it can help us build independent online learning that stays genuinely engaging, even at scale. We think it can.
To learn more about our work in this area, read our other posts in the microcredentials series.
