Reading group insights: How AI is reshaping thinking, learning, and ourselves

In our April 2025 Digital Education Reading Group, we explored the ideas posed in our pre-reading post: How does our relationship with technology shape humanity and education?

This follow-up post captures the key themes, questions, and reflections that emerged from the session, from AI as an extension of the mind to the quiet disappearance of human quirks. Whether you were there or just curious to dip into the conversation, read on for key insights and questions we’re still pondering.

Are today’s digital tools just helping us survive, or are they changing what it means to be human?

With generative AI increasingly embedded in educational life, the discussion explored the relationship between tools, cognition, and identity—and what we might be losing along the way. 

Why this question, and why now? 

Ahmet opened with a moment of personal honesty. After years working in learning technology, and seeing hype cycles rise and fall—Wikipedia, social media, VR—he felt overwhelmed by the generative AI boom. But unlike past fads, this one felt different. 

The tools weren’t just assisting us; they were participating in our thinking, shaping our choices, even flattering us. “Where does our cognition end and theirs begin?” he asked. 

From tool to collaborator: Are we thinking with thinking things? 

Drawing on The Extended Mind, McLuhan’s The Medium is the Message, and a recent article on AI and cognition, Ahmet argued that our tools are no longer static. Generative AI doesn’t just extend our thinking like a notebook or calculator—it shapes it, modifies it, even suggests ideas and emotions. 

We explored the idea that generative tools might now be “members of society,” acting as agents in our communication and cognition. 

When efficiency erases experience 

Ahmet shared a poignant example. A human-made student welcome video—carefully prepared, emotionally invested in, and shared with pride—was later replaced by an AI-generated avatar. It was efficient, cost-effective, and technically “better.” But gone were the nerves, the retakes, the compliments, the care. The presence of the person had been erased. 

This sparked reflection on what we lose when emotional labour and human effort are replaced with automation. Are we optimising too much—and forgetting what it means to connect? 

The em dash test: Can we spot the machine? 

A light-hearted but revealing conversation emerged around em dashes. Ahmet admitted he couldn’t type one on his keyboard—but noticed AI-generated texts use them liberally. This became a kind of informal “AI detector,” prompting others to wonder what else gives away machine-authored writing. 

Are there linguistic fingerprints we associate with AI? Overly polished phrases? Repetitive sentence structures? And how does that change how we read and trust what we see? 

Whose mind is it anyway? 

A big tension emerged: when students use generative AI to write essays, ask questions, or generate ideas, can we still evaluate their thinking? Are we teaching students to think, or just to produce? 

The group debated whether metaphors like “extended mind” are useful—or misleading. While tools support cognition, they don’t have minds. But as AI tools get better at mimicking human thought, that boundary is blurring. 

The risk? Mistaking a polished answer for deep understanding. 

Medium shapes message 

One participant recalled watching a 1960s interview with Malcolm X: slow, considered, eloquent. By contrast, today’s media favours speed, brevity, and immediacy. Another shared how watching longform interviews with thinkers like Chomsky can still be enthralling—reminding us that depth and complexity haven’t lost their value. 

This resonated with McLuhan’s idea: it’s not just what we say that matters, but how the medium makes us say it. If platforms push us to be fast, tidy, and agreeable—what happens to ambiguity, pause, and nuance? 

Is generative AI today’s burnt stick? 

Ahmet offered a powerful metaphor. He imagined an early human drawing a lion on a cave wall to protect future generations. That act—a medium, a message, a tool—was about survival. So is generative AI our modern equivalent? A tool to help us navigate a new, complex world? 

Or are we misusing it, allowing it to distort identity, amplify disinformation, or erode trust? 

The group considered whether AI literacy should now be seen not just as a digital skill, but as a survival literacy.

What does this mean for educators? 

We left the session with more questions than answers—but deeper, richer ones. Here are just a few to take forward: 

  • How can we design learning that values process, not just product? 
  • What are we doing to support authenticity in a world of assistance? 
  • How do we embrace the benefits of AI without losing the human messiness that makes learning meaningful? 
  • What will we regret not doing now, when we look back on this moment? 

Final reflections 

Ahmet closed by reflecting on the late introduction of the printing press in the Ottoman Empire, a decision often cited as one reason for its decline. Are we at a similar crossroads today? If we ignore the transformational impact of AI, or adopt it uncritically, what future are we building? 

At DERG, we don’t claim to have all the answers. But together, we ask better questions. And that, too, is a survival skill. 

This summary was co-written with generative AI, drawing directly from the session transcript: one more reminder that the medium shapes the message.