It’s hard to imagine that only a few weeks ago the London e-Learning Reading Group met, face to face and onsite at Queen Mary’s Mile End Campus, to discuss readings and experiences of digital solutions to high stakes exams. As a sign of the times, there was antibacterial hand-gel on the table along with a variety of snacks to keep the group fuelled and healthy during the evening. Indeed, many participants had arrived straight from Coronavirus contingency planning meetings and all came with the knowledge that discussions about online assessment were gaining urgency across the sector.
The starting point for the reading group was Jisc’s report on ‘The Future of Assessment,’ which argued for five principles of assessment: authentic, accessible, appropriately automated, continuous and secure.
To take the last point first, the group agreed that security was essential for traditionally constructed exams, but practical ways of achieving it in an online setting were hard to envisage: conventional invigilation can turn into extreme surveillance and invasion of privacy when moved online. Virtual proctoring, for example, where students’ webcams film them taking the exam, was seen as deeply problematic, both because of the potential for student workarounds and because of the data protection implications. The hypothetical suggestion that students could sit secure exams outside exam centres was barely discussed, dismissed in early March as impossible given current university practices and the online exam and proctoring products then available. Simpler times.
As a side note, I feel that on the one hand I’m trying to write about the session as if it were yesterday, untainted by subsequent events, while on the other hand I can’t refrain from referring to current debates and policies around the very topics we were discussing. The previous sentence is also an awkward segue into our conversations on typing. Though online exams can naturally take a variety of forms (WISEFlow, for example, which is used in QM’s Digital Exams and Assessment Project, can be set up for video submission among other options), typing is the default for the assessments described in the papers and discussed as current practice in UK HE. This naturally led to consideration of typing as a key factor in planning for online exams. Would students be disadvantaged if they were poor typists, and is this any different from the disadvantage faced by those with poor or slow handwriting in traditional exams? What about giving students a choice of format, as in the ASCILITE 2018 Conference Proceedings [PDF 17,458KB]? Such a choice is rarely afforded to exam takers in HE, where personal preference is not generally considered more significant than the principle of standardisation in assessment format. Evidence from pilots and research undertaken by those present challenged the assumption that ‘digital native’ students would necessarily prefer to type exams. It was posited that this could be a result of changes in policy for UK pre-university education: now that the coursework component of most GCSEs and A-levels has been reduced, if not wholly removed, undergraduates may in fact be far more accustomed to, and better trained for, handwritten than typed assessments.
With the lockdown, we can see how coursework and other forms of continuous assessment have hugely relieved pressure on students and staff, who have already completed work counting towards a percentage of the final award rather than waiting for the summer exam period. As for what we actually discussed: the benefits of continuous assessment were acknowledged by pretty much everyone. It was seen as more authentic and as a good way to combat the excessive pressures of high-stakes exams. The only proviso was that continuous assessment will not, of itself, reduce the pressure on students, and could even increase it if not managed sensitively and with attention to workloads and deadlines.
Moving seamlessly from pressures on students to pressures on staff: appropriate automation was another of Jisc’s principles for the future of assessment. The report gave examples of automated or semi-automated assessment, and our discussions took things further, imagining word clouds and AI for grading essays. We tried to keep things grounded, however, and also discussed our own experiences and the practicalities of online assessment, such as the accessibility implications of marking online exams. Under the new accessibility regulations, online marking needs to be as accessible for staff as online submission is for students. In hindsight, we barely scratched the surface of the accessibility issues for staff and students that we are now seeing. The hypothetical stakeholders we discussed all had secure internet access, as well as appropriate devices and space in which to complete their part of the assessment cycle.
Throughout the evening we kept returning to the idea of authenticity as the gold standard for assessment. It was clear that discipline-specific, theoretically grounded and well-designed assessment will mitigate the challenges of the other four principles. Even security, for example, may be less of a concern when assessments replicate authentic professional or academic experiences such as group work and presentations. Authenticity will, of course, require innovation, so now, as we unexpectedly and at short notice change so many traditional ways of working, could be the perfect time to experiment. Given everything that has changed since we met face to face last month, the future of assessment no longer looks quite so far away.