When you feel comfortable in your teaching environment, things typically go as planned and life feels pretty good. But, when the proverbial comfort zone carpet gets yanked out from under you, all of your teaching and assessment “go-tos” get thrown out the window and it can feel like a punch to the gut. So, when you’re forced to change the way you deliver your content from an in-person environment to a virtual one, it means rethinking the way you assess your students as well. It’s important to understand the different types of assessments and when or how to use them, but it's equally important to acknowledge the need for a shift in when or how you deliver assessments.
There was a time, not so long ago, when the words “formative” and “summative” were mostly thrown around in K-12 and higher ed circles; in medical education, these terms are still coming of age, so to speak. Many medical educators are familiar with them, but plenty still aren’t. A study of faculty perceptions of formative assessment found that medical educators were not especially comfortable using regular formative assessments and feedback, whereas summative assessments felt much more familiar and fit squarely within their wheelhouse (Close, 2017). There is still some truth to this, even though formative assessments are much more prevalent in medical education today.
Formative assessment: a process used during instruction that provides useful feedback for adjustments in ongoing teaching and learning to improve student outcomes (Bennett, 2011). In other words, these are assessments delivered ‘along the way’ so that necessary changes can be made to help both teachers and students perform better. (You may want to check out this article: 5 Teaching Strategies To Deliver Formative Assessments)

Summative assessment: an assessment used to evaluate student learning at the conclusion of a defined instructional period (“Summative Assessment”, 2013). In other words, a high-stakes assessment used to measure overall student performance at the end of a unit, course, semester, etc. (e.g., standardized tests).
Many educators are currently facing the challenge of adapting their assessments to a new virtual environment. Even planning for physically proctored, in-person assessments can raise one’s blood pressure when trying to account for exam security loopholes or ways to minimize cheating opportunities. So, imagine the anxiety that can spike when you’re asked to plan for a virtual, online exam!
There are many cool, user-friendly, and even free ed tech tools you can use to deliver assessments online. The key to choosing the right one is the purpose of the assessment: high-stakes vs. low-stakes.
When instruction is delivered online (live, pre-recorded, etc.), your assessments can often use the same questions and possibly even the same delivery tool. But here are a few things you should prepare to address or change:
The first thing to remember is that you will never, ever be able to account for every situation that can occur when you’re not physically present to monitor your exams (your anxiety should be decreasing now). But that doesn’t mean you can deliver your assessments the same way you always have.
Online low-stakes, or formative, assessments do not need a proctor and are often open-book, open-note, or open to peer collaboration. This actually opens up a variety of opportunities to get creative. For example, what was once an in-class, multiple choice quiz can now become an online small group discussion or an online team-based learning (TBL) exercise. Using appropriate ed tech tools to deliver your online assessments can provide the formative feedback that is so important for you and your students in improving student outcomes.
On the other hand, online high-stakes, or summative, assessments pose a bigger challenge. These assessments are typically very secure, with in-person proctors monitoring the testing environment. Many schools use proctored paper-based exams, a secure online exam platform, or an integration with some sort of browser lockdown for these exams. However, as I mentioned above, when you are no longer physically present to monitor the exam, the expectation of a secure online summative exam is essentially out the window. When teaching and learning become completely virtual, developing an effective assessment of learning can be challenging. In this situation, I think students would benefit from a shift in your expectations and focus: from trying to prevent academic dishonesty to encouraging engagement with the content.
It’s true that deeper learning occurs when students engage in activities that involve their peers and assessments that are authentic, or non-rote memory (Nelson, 2014). And moving from an in-person, secure exam to a virtual assessment does not have to compromise these authentic assessment opportunities. Even with online assessments, where the formality of the learning environment is not the same as in the physical classroom, your learning objectives should still drive all of your assessment decisions. So, depending on your objectives and the purpose of your assessment, there are several ways you can deliver summative assessments in a virtual environment. There’s nothing wrong with using an online multiple choice exam (even one that closely mimics your in-person exam), but you can no longer have the same expectations as with a secure, closed-book, in-person exam. You must create an exam that engages students with the content, with the expectation that they will use available resources to determine their answers.
If you are up for deviating from the multiple choice exam design, here are several other creative ways to deliver summative assessments, using a rubric for assessment reliability (see To Rubric or not to Rubric, Johns Hopkins University):
I mentioned above that moving to a virtual learning environment means making your assessments virtual as well. This takes some thought and often means meeting a learning curve head-on. It may also mean taking common digital tools like email and discussion boards and re-conceptualizing them for the creative delivery of virtual assessments (Perera-Diltz & Moe, 2014). Encouraging a collaborative experience as a summative assessment may be outside your comfort zone, but the research makes a pretty compelling case for its value when paired with an effective rubric (Perera-Diltz & Moe, 2014; Stuart, 2004; Russell et al., 2006; Williams, 2006). Whether you decide to stick with an online exam in the traditional sense or call an audible and get more creative, your commitment to the ‘virtual assessment shift’ is commendable.