
As generative AI tools like ChatGPT and Claude become more widely used across higher education, many teams are exploring how to make the most of their potential in professional services and student support. In this blog post, the University of London’s Student Life team shares how they moved from focusing on prompt engineering — the skill of crafting effective instructions for AI — to context engineering: designing the broader information environment in which AI operates.
This shift emerged during a recent AI skills session, where the team demonstrated how thoughtful prompting strategies can yield more accurate and useful outputs. But, more importantly, they showed that the real value often lies in shaping the data, tools, and workflows around the AI, creating a context that enables it to deliver genuinely actionable insights.
Prompt engineering: Useful but limited
Prompt engineering remains a valuable starting point. It involves crafting clear, purposeful instructions to guide AI tools effectively, a skill that can significantly improve the relevance and accuracy of AI-generated outputs. The Student Life team demonstrated how using structured prompt patterns (common templates or formats) can help produce more consistent and useful results.
Six prompt patterns to try
The team shared six prompt patterns that have been particularly effective in their work:
| Approach | How to do it | Example prompt |
|---|---|---|
| 1. Alternative approaches | Ask the AI to generate several different ways to solve a problem or address a question. This allows you to compare options, spot strengths and weaknesses, and choose the most suitable outcome. | Suggest three different ways to organise this student feedback data to identify key themes. |
| 2. Fact checklists | Instruct the AI to include the key facts it used to generate its answer. This provides transparency, reduces the risk of hallucinations, and makes it easier for users to verify accuracy. | Summarise this article and include a bullet-point list of the main facts used to support your summary. |
| 3. Context enrichment | Supply the AI with detailed situational information, such as roles, constraints, or audience. The more tailored the context, the more relevant and useful the response. | You are a university support officer planning an online induction event for international students. Suggest a timetable of activities, taking into account time zones and student wellbeing. |
| 4. Prompt chaining | Break down a complex task into a sequence of smaller steps, each with its own prompt. Linking these steps helps the AI handle sophisticated tasks that cannot be solved in one go. | List five possible topics for a study skills webinar series. → Expand the second one into a session outline. → Generate promotional text for staff to share with students. |
| 5. Persona prompting | Direct the AI to act as if it were a professional with specific expertise (for example, a digital learning specialist). This shapes the tone, style, and detail of the output. | You are a digital learning specialist at a UK university. Write an email to academic staff explaining the benefits of using generative AI for assessment design. |
| 6. Multimodal prompting | Ask the AI to go beyond text, generating visuals, tables, interactive tools, or even games. This expands the range of outputs and makes results more engaging and practical. | Design a simple interactive maths game for a nine-year-old using AI. Include instructions for how it works and the learning outcomes. |
These techniques help us stay in control of the process, iterating with the AI much as we would with a colleague. They also build confidence among those without a technical background, showing that no coding is required to design useful AI interactions.
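No coding is needed to use these patterns, but for colleagues who do want to automate them, the sketch below shows how prompt chaining might look in Python with the OpenAI client library. The model name, prompts, and helper function are illustrative rather than a record of our exact setup; the same idea works in any chat interface, where each step’s output is pasted into the next prompt and code simply automates the handoff.

```python
# A minimal prompt-chaining sketch using the OpenAI Python client library.
# The model name and prompts are illustrative; adapt them to your own task.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: generate candidate topics.
topics = ask("List five possible topics for a study skills webinar series.")

# Step 2: feed the previous output into the next prompt.
outline = ask(f"From this list, expand the second topic into a session outline:\n{topics}")

# Step 3: turn the outline into promotional text for staff.
promo = ask(f"Write short promotional text for staff to share with students, based on:\n{outline}")

print(promo)
```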
But focusing on prompt wording alone can miss the bigger picture.
Context engineering: Designing the bigger picture
Several voices in the field, such as Tobi Lutke, Phil Schmid, and the LangChain community, have noted that the real skill might be context engineering. This is the discipline of designing the information environment around the AI so it has the right data, in the right format, at the right time. It is worth noting that, as a team of three non-technical professional staff with no programmers, we use the term ‘context engineering’ more loosely than those in more technical fields.
In practice, this means thinking beyond a single prompt. It involves:
- Curating information sources so the AI sees only what is relevant
- Structuring workflows where tools and data feed into the model
- Formatting large or complex data into a digestible form
- Delivering outputs in ways that humans can act on, such as reports or slide decks
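As a rough illustration, and bearing in mind that our own use of the term is looser than a software engineer’s, a context-engineered interaction in Python might look something like the sketch below: most of the work happens in curating and formatting the data before any prompt is sent. The helper functions, keyword filter, column names, and model are all hypothetical placeholders.

```python
# A sketch of context engineering as a small pipeline: curate, format, then prompt.
# The data source, helper names, keyword filter, and model are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def curate(posts: list[dict], keyword: str) -> list[dict]:
    """Keep only the posts relevant to the question at hand."""
    return [p for p in posts if keyword.lower() in p["text"].lower()]

def format_context(posts: list[dict], limit: int = 50) -> str:
    """Compress the curated posts into a short, numbered block the model can digest."""
    return "\n".join(f"{i + 1}. {p['text'][:300]}" for i, p in enumerate(posts[:limit]))

def analyse(posts: list[dict], question: str, keyword: str) -> str:
    """Assemble the context first, then send a single well-framed prompt."""
    context = format_context(curate(posts, keyword))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You analyse student feedback for a university support team."},
            {"role": "user",
             "content": f"Student posts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# Example use with made-up data:
# posts = [{"text": "The exam timetable clashed with my work shifts."}, ...]
# print(analyse(posts, "What are the main concerns about exams?", keyword="exam"))
```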
To illustrate this shift from prompt to context engineering, here’s how the Student Life team tackled a real-world challenge using generative AI.
Case study: Analysing student feedback from social media
This is not abstract theory. To support a project, we wanted to base our actions on student insights. Thanks to developments in generative AI tools, we built a multi-step pipeline to process authentic feedback data from a public social media platform.
The flow unfolded step by step:
1. Ask AI to suggest a process: Using various prompting strategies, we first brainstormed with ChatGPT on how to better understand students’ needs and concerns. It suggested several options, one of which was to scrape public social media posts where students share their experiences of studying with the University of London. Prompt engineering proved useful even at this early stage.
2. Ask AI to write a script: We began with ChatGPT, using it to draft and refine Python code for scraping social media posts. This gave us a working script without needing to write everything manually.
3. Use AI to analyse the data: The code was then executed in Google Colab, with Gemini helping us run it and prepare the data collection. This stage ensured the data could be handled at scale. We then used a detailed prompt to conduct a discourse and content analysis of the students’ posts.
4. Minimise hallucinations: To minimise hallucinations during the discourse and content analysis, we used OpenAI’s GPT-4o mini through the API. The model was applied cell by cell on the spreadsheet, which improved accuracy and transparency (a minimal sketch of this step follows the list).
5. Use AI to revise the analysis process: The dataset eventually grew to around 6,000 rows, which was overwhelming to read and interpret directly. At this stage, we returned to ChatGPT to brainstorm alternatives (prompt engineering) and rethink how the analysis could be managed.
6. Use AI to build a user-friendly interface: The outcome was the idea to design an interface where professional staff could type in prompts to interrogate the Excel file themselves. Using Google’s AI Studio, we built an interactive tool so staff could generate tailored insights without wrestling with the raw data.
7. Use AI to make the information easy to interpret: For quicker and more actionable communication, we then used an AI presentation app. This tool transformed the analytical outputs into concise, visually clear slide decks, which, for instance, colleagues in the wellbeing team could easily understand and act upon.
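As a rough sketch of step 4, the cell-by-cell approach amounts to sending each post to GPT-4o mini on its own, so every answer can be traced back to a single row, rather than asking the model to summarise thousands of rows in one go. The file name, column names, and analysis prompt below are illustrative, not our exact setup.

```python
# A rough sketch of applying GPT-4o mini row by row (cell by cell) to a spreadsheet
# of scraped posts. File name, column names, and prompt wording are illustrative only.
# Requires pandas and openpyxl for .xlsx files, plus the openai client library.
import pandas as pd
from openai import OpenAI

client = OpenAI()

def analyse_post(text: str) -> str:
    """Code a single post, asking the model to stay grounded in that post alone."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are conducting discourse and content analysis of student "
                        "social media posts. Base your answer only on the post provided."},
            {"role": "user",
             "content": f"Identify the main theme and sentiment of this post:\n{text}"},
        ],
    )
    return response.choices[0].message.content

df = pd.read_excel("student_posts.xlsx")                 # hypothetical file name
df["analysis"] = df["post_text"].apply(analyse_post)     # one API call per row
df.to_excel("student_posts_analysed.xlsx", index=False)
```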
Turning raw feedback into insight
By combining these tools, we were able to listen to students’ voices, concerns, requests, questions and opinions on an unfiltered, rich social media platform, helping us create a better student experience. Our colleagues also no longer had to juggle large spreadsheets or complex data. Instead, they received clear, context-rich insights in usable formats.
The AI prompts were important, but the real achievement lay in designing the context: selecting, shaping, and delivering the information and the right tools so that colleagues could act on it effectively. Prompting played a part, but the impact came from designing the entire workflow around the AI.
Prompt vs context engineering: A comparison
| Category | Prompt engineering | Context engineering |
|---|---|---|
| Focus | Prompt wording (instructions) | Information environment (data, tools, workflow) |
| Scope | Micro level (single query/task) | Macro level (system architecture) |
| Unit of work | Isolated input → output | Pipeline and multi-turn flows |
| Main skill | Linguistic clarity and iteration | Curation, compression, retrieval, orchestration |
| Applied example | Designing model prompts for analysis tasks | Workflow: data preparation, analysis pipeline, user interface, actionable presentation |
Why this matters for higher education professionals
The distinction matters because higher education work is not just about one-off answers. It involves complex ecosystems of data, stakeholders, and decisions. Prompt engineering helps us phrase questions well, but context engineering helps us design systems that make AI genuinely useful for professional services and student support. For example, instead of asking, ‘Summarise this student feedback’, context engineering asks: ‘What workflow ensures that feedback is collected, analysed, contextualised with past trends, and presented in a way staff can act on?’. This shift, from wording to environment, changes AI from a clever assistant into a partner.
Why context engineering matters
Context engineering is not just a technical tool but a mindset. In a digital environment saturated with AI-generated content, it is important to remain critical and reflective about how we use these systems. Effective prompting helps staff guide outputs rather than accept them passively, but context engineering extends this further by shaping the entire information environment.
Iteration is a central feature. Just as conversations with colleagues involve refinement and clarification, AI interactions benefit from repeated adjustments. This iterative process not only produces better outputs but also develops AI literacy and critical awareness among staff.
Prompting can also be seen as a form of structured inquiry. By using approaches such as Cartesian questions (for example, ‘What would happen if…?’, or ‘What is stopping me from…?’), we can use AI to explore scenarios, test assumptions, and support decision-making. These strategies can also be shared with students to help them develop reflective and critical thinking skills.
Importantly, none of this requires coding expertise (at least up to a point). The tools we explored show that, with thoughtful design, anyone can build useful workflows, whether by creating interfaces with AI Studio or turning analysis into accessible presentations with LLMs. Prompts can even be treated as design tools themselves, guiding AI to produce creative, polished outputs, such as visually engaging slide decks, with minimal effort.
In short, context engineering matters because it ensures that AI serves our goals rather than the other way around. It turns AI from a clever assistant into a reliable partner, enabling higher education professionals to work more efficiently, think more critically, and design systems that genuinely support staff and students.
As AI continues to evolve, context engineering will be a vital skill for higher education professionals who want to use these tools responsibly, critically, and effectively.
Further reading
Lutke, T. (2025) X post, 19 June. Available at: https://x.com/tobi/status/1935533422589399127
LangChain (2025) The rise of context engineering, LangChain blog, 23 June.
Gupta, M. (2025) Context engineering vs prompt engineering: Why design leadership needs to see the bigger picture, Medium, 28 June.
Schmid, P. (2025) The new skill in AI is not prompting, it’s context engineering, PhilSchmid blog, 30 June.
White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J. and Schmidt, D.C. (2023) A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT, arXiv preprint arXiv:2302.11382. Available at: https://doi.org/10.48550/arXiv.2302.11382
