How far can we take (self-)assessment? (#LTHEchat)

Over the past few years I’ve developed a series of assessments that fit together in an arc, building on each other. They build gently, as some are in compulsory modules and some in optional modules taken by different groups of students. There’s plenty of guidance for those who’ve missed a step (because they’re on a different programme or made different module choices), but a real benefit for those who’ve done them all. They have come together over several years into a very nice running order, although I suspect few people would notice that! I presented this a few years ago and have since added more assignments (https://www.slideshare.net/kjhaxton/calamity-creativity-in-chemistry).

1st SEM 1st year: information retrieval exercises [‘What Am I?’ or ‘Someone was wrong on the internet’ assignments]

2nd SEM 1st year: screencast/video presentations AND essays

1st SEM 2nd year: in-person presentations AND group business report

1st SEM 3rd year: infographic

In addition, some elements are formative (the in-person presentations in 2nd year) and the rest are summative individual or group work. The linking themes throughout are finding, digesting* and presenting chemical and scientific information in a variety of formats; developing graphic, written and oral communication skills; and communicating with different audiences. I’d draw a nice wee table showing how it all fits together, but tables are irritatingly difficult in WordPress so I’ll move on to my main point:

It’s really tough coming up with appropriate marking rubrics for all of these different kinds of assessment.

Within the screencast/video presentations we’ve had narrated PowerPoints, animations, filmed ‘talking heads’ with no visual aids, videos with visual aids, and sometimes audio or text only. Without resorting to assessment language so vague as to be worse than useless, how do you accommodate all of that and ensure equity?

I dug out a paper that’s been in my ‘to-read’ pile since October: “Multimodal assessment of and for learning: A theory-driven design rubric” [1]. The paper acknowledges that literacy is an evolving concept, particularly with digital technologies and the potential for incorporating them into assessment. It focusses on how to design a rubric for a presentation with slides, where multiple aspects beyond the written word must be considered. From a theoretical framework, the following design elements were used: linguistic, visual, gestural, auditory and spatial. Linguistic referred largely to the words on the slides, while visual referred to the overall appearance (the use or absence of colour, for example); gestural was the use of animations (animated PowerPoint klaxon!), while auditory referred to narration or the use of music or sound effects. Spatial design was how the other elements related to each other on the slide. The context of the work was English language learners, so the emphasis on design, and probably on English-language conventions of design, is appropriate. Anyway, it’s an interesting take on how to develop assessment rubrics that encompass a wide range of aspects beyond ‘just writing’.
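To make that concrete for myself, here’s a minimal sketch of how a rubric along those lines might be expressed, assuming each mode is marked out of 100 and weighted; the weights and example marks are entirely my own invention, not values taken from the paper:

```python
# Illustrative sketch only: the weights below are my own invention,
# not values from Hung et al. (2013).
RUBRIC_WEIGHTS = {
    "linguistic": 0.30,  # the words on the slides
    "visual":     0.25,  # overall appearance, e.g. use or absence of colour
    "gestural":   0.10,  # use of animations
    "auditory":   0.20,  # narration, music, sound effects
    "spatial":    0.15,  # how the elements relate to each other on the slide
}

def weighted_mark(mode_marks: dict[str, float]) -> float:
    """Combine per-mode marks (each out of 100) into one weighted mark."""
    assert abs(sum(RUBRIC_WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(RUBRIC_WEIGHTS[mode] * mode_marks[mode] for mode in RUBRIC_WEIGHTS)

# A piece with strong narration but little use of animation:
print(weighted_mark({"linguistic": 70, "visual": 65, "gestural": 40,
                     "auditory": 85, "spatial": 60}))  # 67.25
```

The hard part, of course, is that a fair weighting for a narrated PowerPoint is probably not a fair weighting for an audio-only submission, which is exactly the equity problem above.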

After LTHEchat last night (https://lthechat.com/2017/06/01/lthechat-no-86-teaching-learning-and-assessment-now-the-floodgates-are-open/), I’ve been thinking about the self- and peer-assessment opportunities that are, or could be, provided.

1st SEM 1st year: information retrieval exercises [‘What Am I?’ or ‘Someone was wrong on the internet’ assignments]

– summative, with potential for self-assessment if carefully designed to build some assessment literacy.

2nd SEM 1st year: screencast/video presentations AND essays

– the summative screencast/video presentations are assessed through self-assessment (on submission), peer-assessment (of four students’ work, and you know whose work you are marking), then a final self-assessment including reflection prompted by seeing other students’ work.

1st SEM 2nd year: in-person presentations AND group business report

– the in-person presentations are formative, assessed through peer-assessment and some feedback from the tutor running the session.

1st SEM 3rd year: infographic

– last year I used comparative judgement** to get the students up to speed on what an infographic was and the type of thing I was looking for. I also asked students to self-assess their own work on submission. I presented this: https://www.slideshare.net/kjhaxton/developing-conceptual-understanding-through-alternative-assessment

I’m thinking that in the next academic year I will find a way to build in self-assessment for all of these assignments. I need a good way to do it – it’s a shame that students can’t just apply the Turnitin/Grademark rubric*** that I use at the point of submission. A ‘good way’ to do it means that the tutor cannot view the students’ self-assessment grades before performing the tutor assessment, but that there is a way to reconcile the grades afterwards if necessary. When I did this with the 3rd year infographic, I was pleased with how close the students’ grades and my grades were. There were a few that differed significantly, and so I made an extra effort to explain why the grade awarded was higher or lower. This was either encouragement that the student was capable of more than they believed themselves to be (imposter syndrome? false modesty? genuine lack of awareness of ability?) or constructive advice on why their mark was significantly lower than they anticipated (generally caused by a significant misinterpretation of the requirements of the task).

The question of how to reconcile the marks is important, as is how to motivate students to engage with an additional element of an assessment. Goodness only knows they’ve got enough to do! (www.possibilitiesendless.com/2017/02/spherical-students). I’ve considered offering the higher mark, provided the self- and tutor-marks are within 5 marks of each other. I’ve considered taking the average (but find that many ‘game the system’ by giving themselves 100%). I’ll have to think about it some more – suggestions welcome.
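As a thought experiment, here’s a minimal sketch of that first rule in code – entirely hypothetical, assuming marks out of 100, a 5-mark tolerance, and a fallback to the tutor’s mark with a flag for a follow-up conversation when the gap is larger:

```python
def reconcile(self_mark: int, tutor_mark: int, tolerance: int = 5) -> tuple[int, bool]:
    """Reconcile a student's self-assessed mark with the tutor's mark.

    Returns the awarded mark and a flag indicating whether the pair
    needs a follow-up conversation (marks differ by more than `tolerance`).
    """
    if abs(self_mark - tutor_mark) <= tolerance:
        # Reward accurate self-assessment with the higher of the two marks.
        return max(self_mark, tutor_mark), False
    # Large gaps default to the tutor's mark but are flagged so the
    # difference can be explained to the student.
    return tutor_mark, True

# Self-assessment of 62 vs tutor mark of 65: award 65, no flag.
print(reconcile(62, 65))   # (65, False)
# A student who 'games the system' with 100 vs a tutor mark of 60.
print(reconcile(100, 60))  # (60, True)
```

One thing I like about this version is that taking the maximum only within the tolerance rewards accurate self-assessment without rewarding the 100%-for-everything strategy: a gamed mark just trips the flag.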

I will build this into the 1st year assignments – I think it’s important to get the students thinking about how we are assessing their work. I dislike the notion that I’m tutoring them to the test, but can reconcile the idea in my mind if I view it as promoting reflective practice!

Notes, references and the like:

[1] Hung et al., “Multimodal assessment of and for learning: A theory-driven design rubric”, British Journal of Educational Technology, 44 (2013), 400–409.


*Digesting information: I dislike ‘synthesis’ as a term because of the confusion with the chemistry kind. Digest: read, think about, mix together with other stuff, churn out in a different format. You get the idea.


**Comparative judgement, not adaptive comparative judgement. Essentially I asked the students to rank 5 items of sample work from ‘most fulfils the assessment criteria’ to ‘least fulfils the assessment criteria’. For a good demo, see: https://www.nomoremarking.com/. Anyway, this takes the stress of assigning a grade out of it.
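If you wanted to pool the whole class’s rankings into one consensus order, a simple Borda count would do it – this is my own illustration of the idea, not how nomoremarking.com works:

```python
from collections import defaultdict

def borda_consensus(rankings: list[list[str]]) -> list[str]:
    """Pool student rankings into a consensus order via a Borda count.

    Each ranking lists sample work from 'most fulfils the assessment
    criteria' to 'least fulfils'. An item earns (n - position - 1) points
    per ranking; higher totals mean closer to the criteria overall.
    """
    scores: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position - 1
    return sorted(scores, key=scores.get, reverse=True)

# Three students each rank the same five sample infographics.
rankings = [
    ["C", "A", "E", "B", "D"],
    ["A", "C", "E", "D", "B"],
    ["C", "E", "A", "B", "D"],
]
print(borda_consensus(rankings))  # ['C', 'A', 'E', 'B', 'D']
```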


***Or can it? Any ideas on whether this can be done through Grademark, and if not, what I might use? Google Forms are topping the list at the moment.

Comments please!