Putting GenAI to work with writing rubrics
Dr Jamie Cawthra FHEA, Lecturer (Education and Practice Development), discusses how to use Generative AI to create marking rubrics
27 January 2025
Watch on Media Central.
Download the transcript (docx).
Only got a minute? Jump straight to Jamie's top tip.
Introduction
Analytic marking rubrics are an unsung hero of assessment and feedback. They describe performance at different grade levels for each of the marking criteria (see below for examples). They help clarify expectations for both students and markers but are time-consuming and tedious to create.
In mid-2023, inspired by work at the University of Sydney, I used GenAI to help prepare assessment rubrics for my undergraduate business ethics module. It cut both the time and the tedium involved, and made the rubric easier to adapt the next time the assessment was updated.
Recently, I was inspired to revisit my experiments by the workshop ‘Rubrics: Removing the Glitch in the Assessment Matrix’, led by Claudia Cox-Thiemann and Nadia Hussain of the Digital Assessment team. If you’re not already familiar with marking rubrics, you can find examples of the different types of rubric here.
The role of rubrics
An issue often highlighted in the National Student Survey (NSS) is the need for clarity on marking criteria (UCL sits in the fourth quartile of the sector here). Rubrics provide this clarity, but at a significant time cost. It is often a cost worth paying, as a good rubric offers:
- An opportunity to consider marking criteria and link them to the assessment task.
- Common language for feedback, showing why an assessment received a particular grade.
- Clarity on performance expectations for different grades, reducing anxiety and promoting self-efficacy (Taylor, Kisby & Reedy, 2024).
Here’s a step-by-step guide to reaping these benefits without hours of writing. You can see the full process and outputs in the Word document at the end of this post.
Generating your rubric
Step 1: Choose your tool. I recommend Copilot, as UCL staff have institutional access to the purportedly privacy-enhanced Enterprise Data Protection version. Other GenAI tools are available (Futurepedia maintains a directory of them), but be cautious about their data policies.

Using Copilot? Make sure you see this when you hover over the shield in the top right.
Step 2: Stay vigilant. GenAI is sneaky. While it mimics human responses, it approaches tasks differently. Philippa Hardman offers a two-part manifesto for staying alert to GenAI’s weaknesses:
- Don’t trust GenAI: Assume it’s wrong until verified. GenAI can provide biased results or outright errors, as explained on UCL’s Generative AI Hub.
- GenAI needs you: Output quality depends on input quality (Hardman, 2023).

GenAI can be unreliable. It doesn’t ask for clarification about the vague ‘sociohistorical studies’, and it invents reasonable-sounding causes of the Third Great Haddock Extinction – so convincingly that I had to look it up in case I’d accidentally invented a real phenomenon.
Alt text: A screenshot from Microsoft Copilot. I prompt it: “You are an expert academic in sociohistorical studies. Come up with three small group exercises for first year university students to complete during a seminar which will help them understand the Third Great Haddock Extinction.” Copilot offers the exercises – the only one visible in the image is a Timeline Creation exercise where students take event cards with significant occurrences and arrange them on a timeline. Copilot suggests environmental changes, human activities and haddock population data, despite this event being fictional.
Step 3: Structure your prompt. Danny Liu’s RTRI model is a useful guide to quality prompting, and you can see me using it in the attached document (Liu, 2023). I’ve skipped the I (Instructions), as it’s superfluous here:
- Role: Give the GenAI a persona to adopt (e.g., a student, a module leader, a programme design expert…).
- Task: Summarise what you want it to do (e.g., ‘Write a call for abstracts for a conference on assessments.’).
- Requirements: Specify what it should achieve or avoid (i.e., all the extra information that turns the prompt from generic to specific).
The more detail, the better! Copying directly from your marking brief, where appropriate, can save time.
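To make the structure concrete, here’s a minimal sketch in Python of how Role, Task and Requirements slot together into a single prompt. Everything in it (the function name, the criteria, the grade bands) is an illustrative assumption of mine, not part of Liu’s model or my original prompts:

```python
# A minimal RTRI-style prompt builder (with Instructions omitted, as above).
# The function name, criteria and grade bands are illustrative examples only.

def build_rtri_prompt(role, task, requirements):
    """Assemble a Role-Task-Requirements prompt as a single string."""
    requirement_lines = "\n".join(f"- {r}" for r in requirements)
    return (
        f"You are {role}.\n\n"
        f"Task: {task}\n\n"
        f"Requirements:\n{requirement_lines}"
    )

prompt = build_rtri_prompt(
    role="an experienced module leader designing an analytic marking rubric",
    task=(
        "Write an analytic rubric for a 2,000-word undergraduate business "
        "ethics essay, with criteria as rows and grade bands as columns."
    ),
    requirements=[
        "Use four criteria: argument, use of evidence, structure, referencing.",
        "Describe performance at each grade band, from fail to first.",
        "Keep each cell to one or two sentences of plain English.",
    ],
)
print(prompt)
```

Pasting the resulting string into Copilot gives it the whole brief in one go.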
Step 4: Don’t take the first offer. GenAI’s first draft may parrot your input back at you, or arrive in a clunky format. Copy the output into a Word or Pages document for a proper review. Examine the rubric for strengths and weaknesses, then give the tool feedback: what needs changing? You can type your instructions or use voice-to-text (Windows key + H) for convenience.
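If you ever script this loop against a chat-style API rather than working in the Copilot window, the review-and-refine cycle looks something like the sketch below. Here send_to_model is a hypothetical placeholder for whatever GenAI client you use, and the feedback string is just an example of the kind of specific criticism that works well:

```python
# A sketch of Step 4's review-and-refine loop, assuming a chat-style API.
# send_to_model() is a hypothetical placeholder, not a real client call.

def send_to_model(messages):
    """Stand-in for your GenAI client; returns a canned reply so this runs."""
    return "(model reply would appear here)"

# Open the conversation with your structured prompt from Step 3.
rtri_prompt = "(paste your Role-Task-Requirements prompt here)"
messages = [{"role": "user", "content": rtri_prompt}]
draft = send_to_model(messages)

# Review the draft in a separate document, then feed back specific criticisms
# rather than accepting the first offer.
feedback = (
    "The top-band descriptors just restate the criteria. Rewrite them to "
    "describe observable qualities of the work, and vary the wording "
    "between grade bands."
)
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user", "content": feedback})
revised = send_to_model(messages)
print(revised)
```

The same habit applies in the chat window: be concrete about what’s wrong, rather than just asking it to ‘improve’ the rubric.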
Refining your rubric
As you can see in the attached Word doc, giving Copilot room for manoeuvre resulted in a stronger rubric. GenAI excels at repetitive, formulaic tasks, making it a valuable tool for drafting and tweaking resources like these. However, both GenAI and rubrics require conscientious input to be effective.
For GenAI, the key is constructing careful prompts. For rubrics, it’s proactive engagement with students. Rather than hiding rubrics away on Moodle, try integrating them into learning activities; this can significantly enhance their perceived value (Taylor, Kisby & Reedy, 2024). After all, higher education still needs the human touch.
References
Hardman, P. (2023). ‘ChatGPT for Educators’. Substack [online]. Available at: https://drphilippahardman.substack.com/p/chatgpt-prompt-engineering-for-educators. Accessed 27/1/25.
Liu, D. (2023). ‘Prompt engineering for educators – making generative AI work for you’. LinkedIn [online]. Available at: https://www.linkedin.com/pulse/prompt-engineering-educators-making-generative-ai-work-danny-liu/. Accessed 27/1/25.
Taylor, B., F. Kisby & A. Reedy (2024). ‘Rubrics in higher education: an exploration of undergraduate students’ understanding and perspectives’. Assessment & Evaluation in Higher Education 49(6), 799-809.
Further resources
Learn about what GenAI means for teaching, learning and assessment at UCL on our Generative AI Hub.