A three-tiered categorisation of AI use in an assessment for staff to guide students on expectations
Staff have questions about how generative AI can be integrated into assessment design in ways that prepare students for their AI-supported futures – while still preserving academic integrity. Students report being confused about what constitutes academic integrity in different types of assessment and parts of the learning process, especially when AI tools are becoming so widely available. This guidance will help staff and students to address these issues in their specific contexts.
Like many sectors in society, higher education is both challenged by recent developments in AI, and intrigued by its possibilities. Impact on assessment is an issue that has generated numerous questions in our sector.
There are no simple answers and our responses will require constant revision as generative AI continues to evolve at pace.
Using AI to support learning
As a general principle, we need to recognise that AI will be used by students at many different stages in their learning process, including preparing for assessments.
Our goal, therefore, is to ensure that students are using AI in ways that support their learning, enhance their ability to achieve their Programme Learning Outcomes, and prepare them to succeed in their future careers. Inappropriate use of AI will undermine all of these benefits and damage their learning.
A key element of such an approach is communication with students so that they are fully aware of the parameters of the assessments, particularly in relation to the use of AI.
The use of generative AI does not automatically constitute academic misconduct; whether its use in a given assessment is acceptable needs to be made clear to students. This guidance seeks to support staff in considering and clarifying what is, and is not, acceptable. In this guidance, when we refer to AI, we are specifically referring to generative AI technologies.
Note that UCL is not investing in AI detection software, and staff must not submit student work to such tools, as doing so may compromise students' intellectual property and personal data rights. It must also be made clear to students that they must not upload any personal data to AI systems without taking account of UCL's data protection requirements; approvals and consents are likely to be required, and any such use of personal data should be overseen by a staff supervisor. Refer to UCL's separate data protection guidance.
If AI is misused in assessment, this would be considered under the category of plagiarism or falsification, not contract cheating.
Keeping up with developments
Given the speed at which generative AI tools are being developed, and at which educational uses of them are evolving, it is important that staff keep checking for new developments. There may, for example, be emergent tools that are particularly relevant to your subject area. Note also that issues may emerge with some AI tools which mean they are no longer considered acceptable to use in certain ways.
UCL’s Academic Integrity Policies and Processes are currently being reviewed and updated, and will be available shortly. Student-facing guidance on academic integrity and academic misconduct is available in the Academic Integrity web pages.
Colleagues from across UCL have pooled their expertise to offer a broad three-tiered categorisation of AI use in an assessment. The categorisation can support staff to design and set assessments, and students to complete assessments in ways that will optimise – rather than damage – their learning. The three categories are not defined rigidly; rather they are a tool for ensuring that for each piece of assessment, staff and students have a shared understanding of whether generative AI tools can be used and, if they can, how, how much, and where in the assessment process.
Staff should discuss with students the category that their assessments fall into at the beginning of each module, and again as specific assessment deadlines approach. Whilst colleagues may be tempted to ban the use of AI tools (Category 1), they might want to consider whether they really want to prohibit AI use as an assistive (as opposed to cheating) technology, and whether any ban would be enforceable. If you have questions, please discuss them with your Faculty Education Teams, who can advise along with the HEDS Faculty Partnership Team.
Where students are using generative AI in assessed work, they should acknowledge how they have used it as part of their assessment submission. Find guidance for students on how to acknowledge the use of generative AI.
Students should always be strongly encouraged to take a critical approach to the use of any output from a generative AI tool, as these tools can generate superficial, inaccurate and unhelpful outputs.
The categories of assessment
Category 1: The purpose and format of these assessments make it inappropriate or impractical for AI tools to be used
Assessments where the use of AI is wholly inappropriate for the specific learning activities or skills being assessed might include, for example, those demonstrating foundation-level skills such as remembering, understanding, applying knowledge, and independently developing critical thinking, or demonstrating fundamental skills that will be required throughout the programme.
Such assessments are likely to be designed to support the development of the knowledge and skills that students will require in order to study successfully and effectively, including with the use of AI tools in other contexts and in future assessments. Discussion with students will be required to explain the rationale for this category (for example, pedagogy, employability, etc.).
Examples of assessments where AI might not normally be used could include:
- In-person unseen examinations
- Class tests
- Some online tests
- Some laboratories and practicals
- Discussion-based assessments
Students believed to have disregarded the categorisation will be subject to the standard academic misconduct procedure.
Note that in UCL’s Language and Writing review in the Academic Manual (9.2.2b), it is permissible for a third party to “check areas of academic writing such as structure, fluency, presentation, grammar, spelling, punctuation, and language translation.” However, “this may be considered Academic Misconduct if substantive changes to content have been made by the reviewer or software or at their recommendation.”
Students with a Summary of Reasonable Adjustments (SORA)
Students with a Summary of Reasonable Adjustments (SORA) may still be permitted to use the assistive technology they require. Staff should clarify whether any AI tools are exempt.
Category 2: Students are permitted to use AI tools for specific defined processes within the assessment
AI tools can be utilised to enhance and support the development of specific skills in specific ways, as specified by the tutor and required by the assessment. For instance, students might use AI for tasks such as data analysis, pattern recognition, or generating insights.
Here the tutor should support and guide the students in the use of AI to ensure equity of experience, but the use of AI is not in itself a learning outcome. There will be some aspects of the assessment where the use of AI is inappropriate.
Examples of where AI might be used in an assistive category include:
- drafting and structuring content;
- supporting the writing process in a limited manner;
- as a support tutor;
- supporting a particular process such as testing code or translating content;
- giving feedback on content, or proofreading content.
Category 3: AI can be used as a primary tool throughout the assessment process
Students will demonstrate their ability to use AI tools effectively and critically to tackle complex problems, make informed judgments, and generate creative solutions. The assessment will provide an opportunity to demonstrate effective and responsible use of AI. The tutor should support and guide the students in the use of AI to ensure equity of experience.
Examples of where AI tools could be used as an integral part of the assessment include:
- drafting and structuring content;
- generating ideas;
- comparing content (AI generated and human generated);
- creating content in particular styles;
- producing summaries;
- analysing content;
- reframing content;
- researching and seeking answers;
- creating artwork (images, audio and videos);
- playing a Socratic role and engaging in a conversational discussion;
- developing code;
- translating content;
- generating initial content to be critiqued by students.
UCL guidance on acknowledging use of AI and referencing AI
Generative AI is evolving rapidly and there is not yet consensus on how to acknowledge and reference it. This guidance will therefore continue to be reviewed and updated.
Guidance can be found on the Library Skills pages.