Encouraging students to complete feedback questionnaires
Over 20 members of UCL staff shared the pros and cons of different methods of gathering student feedback during the Annual Monitoring process. We summarise their thoughts on the subject.
6 August 2013
The Digital Department project, funded by JISC, the Office of the Vice-Provost (Education and Student Affairs) and ISD, collated information on processes and systems used by teaching administrators (TAs), particularly those aspects of the TA role that impact on the student learning experience.
The research was conducted by Mumtaz Abdul Ghafoor (Mathematics), Viv Crockford (Economics), Paloma Garcia-Paredes (ICH) and Angela Poulter (IfWH), and has been summarised here by Stefanie Anyadi (Psychology and Language Sciences).
UCL has a number of mechanisms in place to monitor and continuously improve the quality of teaching provision.
The Annual Monitoring process is one of these mechanisms and requires departments to collect feedback from students for each module. Practice varies widely between, and within, departments, and there are many instances of good practice.
This case study explores the use of Student Evaluation Questionnaires (SEQs) as part of the Annual Monitoring process in a range of departments.
The challenges explored in this case study
- Devise a system which allows for efficient collection of feedback and production of useful reports
- Encourage students to complete feedback questionnaires and to provide reliable and useful feedback
Different methods for collecting feedback
To facilitate comparison, virtually all departments canvassed use a standard set of evaluation questions across modules and programmes, with small variations incorporated where necessary.
The tools and media used vary: some departments use paper-based evaluations, whilst others make use of online tools such as Moodle and Opinio.
The chosen method bears some relation to the number of students; smaller groups allow a more personal interaction (often preferred), but for larger groups departments are increasingly turning to online methods in an attempt to manage the process.
There are three main evaluation methods used at UCL, and we have included the more rarely used group feedback session:
Method | Advantages | Disadvantages |
---|---|---|
Hard copy | | |
Moodle | | |
Opinio | | |
Group feedback session | | |
Response rates
There are concerns that ‘survey fatigue’ reduces response rates, as does a lack of appreciation on the part of students of how seriously feedback is taken and the difference it can make. Response rates are almost universally reported as being lower for online evaluations than for in-class, hard copy questionnaires.
Options reported to address this issue include:
- reverting to paper or opting for a mixed approach, i.e. rotating which modules are evaluated online and which by paper questionnaire; others vary the method across different years and groups to gather a range of feedback in a manageable way
- adopting a ‘carrot and/or stick’ approach, e.g. releasing information on Moodle only after the SEQ has been completed
- asking TAs/academics to remind students, face to face and by email, of the importance of completing feedback and the real impact it can have
- using Personal Response Systems (PRS) for some evaluation: this provides immediate, high in-class response rates and electronic data, but does not gather qualitative feedback.
Communicating negative feedback
The use of online questionnaires has brought an increase in negative feedback. This is not in itself a bad thing, since part of the purpose of evaluation is to discover what can be improved, but it can be upsetting for the individual being evaluated if online anonymity leads to overly critical comments or personal abuse directed at staff.
This issue requires tact and sensitivity on the part of the TAs who process the evaluations before presenting them to the tutor.
Indeed, as well as collating information, the TA’s role frequently involves summarising comments and dealing sensitively with negative feedback when disseminating evaluations to academics.
In some departments, students are given information on how they should approach giving feedback.
Timing: during or at the end of modules
Student feedback can be collected both in the middle of a module and at the end.
Mid-module evaluation
This should be used to identify what needs to be improved.
- Students prefer mid-module feedback because it benefits them directly, as changes can happen before the end of the module.
- It's helpful because “at the end of a module it is easy to confuse/forget the earlier lectures”.
- A mid-module feedback method called ‘Stop, Start, Continue’, whereby students are asked what they would like the tutor to start doing, stop doing and continue doing, has been recommended. This could easily be set up anonymously in Moodle and would enable tutors to know at a glance how the course is going and to consider corrective actions.
End-of-module and end-of-programme
These questionnaires, which combine rated and open-ended questions, are equally important, as they tell us more about the overall student learning experience.
Key points for effective practice
- Plan the distribution of SEQs and the handling of feedback within the timelines of the Annual Monitoring framework.
- Consider response rates and the quality of feedback in the context of your modules and programmes, and experiment to find out what works.
- Review methods of gathering feedback, the information being sought and the format and frequency of evaluation.
- Make the questionnaire as short as possible.
- Ensure module coordinators and programme directors are involved in setting up questionnaires and understand the importance of closing the feedback loop.