Here at iQualify we strive to put learners at the centre of all things, including assessment. If you make assessment a tool for learners (rather than a tool for reporting), you’ll find learners get more out of it, but you just might find you get more out of it too.
Assessment is generally either feedback on progress towards outcomes (formative) or evidence of outcomes (summative). You might hear this described as assessment for learning or assessment of learning.
The benefit for organisations is generally around evidence.
The benefit for learners is generally around learning.
We argue it's most advantageous (for all involved) to make assessment about the learner, and make it about learning.
In education, we often hear the complaint that learners only care about what’s in the assessment. They get the assessment done and forget about it. Well yeah… wouldn’t you do the same if there was nothing you were really going to get out of completing the assessment?
If we make assessment first and foremost about us and our need for evidence for our course, there’s little room for it to be valuable to learners. Instead, we should involve learners in assessment and make it about them first and foremost. Make all assessment assessment for learning, and learners might be more likely to see it not as an end point for a course, but as a stepping stone on their journey.
Below we’ve got some reflective questions you can ask yourself to start thinking about how you can make assessment about learners.
If your language around assessment includes things like scores, grades and passing, you’re focusing learners on assessment existing to “tick the box” of a grade for the course, not on any broader purpose (including their own).
If you’re making all the decisions about assessment, how can learners say what exercises and artefacts would be valuable to them?
If your learners default to saying that the purpose of assessment is proof for you and your organisation (e.g. “It’s so I can show I know my stuff…”, “So I can show I’ve completed the course…”), you know the assessment isn’t of value to them. It’s of value to you.
If your learners don’t stop and reflect on how and whether they’re achieving, and “where next?”, the assessment isn’t focused on learning; it’s an end point. When assessment is for learning, learners will be continually reflecting and refining as life-long learners.
Is your feedback about how to get the next grade up or what to stop/start doing to get better marks? If so, your focus is on grades. Some learners may ask for that feedback because they’re motivated by the possibility of getting a better grade. But that’s not necessarily correlated with better learning.
As with most learning technology, pedagogy should ideally lead technological choices rather than the other way around. Once you’ve made the choice to make assessment learner focused, you can look at how technology could enable that. So, how can iQualify and our tasks enable more learner-centric assessment?
One of the first things you can do to set the scene for assessment being about learning is to work with learners to have them set their own learning goals: goals that are relevant to them.
Written and recorded tasks (e.g. Essay, Audio recording, Video recording, File upload) are a great way to step learners through the process of recording goals and setting out a structure for learning conversations around reaching those goals. For instance, you might build a series of tasks that steps learners through each stage of that process.
With this approach, you’re talking with your learners about outcomes, assessment and encouraging them to think about assessment in terms of their learning, not in terms of grades.
In iQualify, learners have access to their course and responses in perpetuity. This means you could help learners treat the course like a kind of portfolio/learning journal. This supports the view of assessment being theirs and for them, rather than something that gets submitted to the system or organisation, and never seen or used again.
We’ve got two template courses that take a portfolio/learning journal approach: one for more vocational learning, and another with more of a theoretical leaning.
You might think making assessment about the learner and learning is all well and good, but… “Hey I don’t set these outcomes for the course and I’ve got to measure them somehow!” We get it. Sometimes outcomes arrive at your door already set in stone.
Perhaps there’s a way to use automarked tasks at lower cognitive levels? If you can set a handful of smaller tasks as evidence of parts of the outcome, this might allow you to open up other parts of assessment to get learners to create an artefact of value to them.
Having a handful of lower cognitive level tasks also has the added benefit of scaffolding learners towards more complex tasks and letting them know whether they’re on the right track.
We’ve got 20+ different task types (and we’re still adding more). This allows for a wide range of options and variation in responses, giving learners greater freedom and choice. You can provide options for tasks, and learners can choose the task that best fits them or that they see the most value in. And if you’re stuck for ideas, check out our blog post Many with tasks (which includes a “cheat sheet” to kick start the process).
In iQualify, all task types give authors the option to include automatic feedback that shows as soon as a learner hits submit. Some of the best uses we’ve seen are when the automatic feedback supports the learner to self-assess: helping learners compare their answer with a model answer (note: this should involve more than just presenting the model answer), providing a rubric, following up a task with further reflective questions, or suggesting next steps.
Writing great automatic feedback is absolutely worth the effort. It can never completely replace 1-1 personalised feedback, but writing feedback that supports learners to self-assess will save facilitators time and let them focus their efforts where they matter most.
You could follow a general approach of having learners submit their work, analyse an exemplar, and then assess their own work against a rubric.
Learners can complete this process through written and recorded tasks for their submissions, text highlight or image annotation for their exemplar analysis, and multiple choice matrix or rating for their rubrics.
Here are two examples of how you can create rubrics using tasks.
The facilitator’s role in this self-assessment approach is mainly asking questions (via feedback on task submissions) and helping learners orient themselves in the right direction.
Peer marking is a really valuable learning and teaching technique. First, it encourages self-assessment as learners compare their work to their peers’ work. Second, it supports a better understanding of standards, since learners have to assess others’ work and say how and whether it meets, exceeds or falls short of the standard(s). Lastly, it helps learners understand that standards can be met in a range of ways, as their peers may have met the outcomes in a very different way.
In iQualify you might choose to use group talk channels or a webinar to support gathering peer feedback.
You may have just started taking another look at your assessment, or you may already be on the way to implementing some of these strategies. Either way, we hope the reflective questions have given you something to think about and some ideas to try. You don’t have to include all of these at once. If you’ve got experienced learners, you might simply start by letting learners suggest an assessment that would meet the outcomes and be useful to them.