I gave a seminar at Keele University (my own uni) yesterday. I decided to present two works in progress (WIPs) rather than anything complete. I read somewhere that presenting work before it’s completely polished is a useful way to get feedback and drive your thinking about a project, so I thought I’d give it a go.
From Tesla to TESTA: Meanderings in Chemistry Education Research
This seminar comes in two parts. The first looks at the use of diagnostic tests to evaluate the knowledge of students at the start of a block of teaching. Over the past 3 years, two diagnostic tests have been developed: one evaluating 1st years’ knowledge of topics in spectroscopy on entry to the course, and the second evaluating 2nd years’ knowledge of NMR and related topics in preparation for a multinuclear NMR course. The prevalence of key misconceptions is determined and areas to be addressed in subsequent teaching are identified. The second part looks at the use of the TESTA (Transforming the Experience of Students Through Assessment) process to catalogue changes in the Chemistry course from 2010 to 2017. The TESTA process provides metrics to evaluate the nature of assessment and feedback processes; however, it is deficient in one key regard: the impact of assessment deadlines on student workload. Assuming an ‘ideal spherical student’, a student workload model is proposed and considered in the context of having sufficient time to participate in assessment for learning activities.
If you’ve seen any of my poster presentations in the past year, you’ll be familiar with some of the diagnostic test stuff. I’ve been plugging away developing diagnostic tests for about three years now and I’m getting fairly close to happy with the question sets. I find they are very useful in informing my teaching, and a couple of changes I made this year seem to have added clarity to the responses received. Firstly, I moved the tests from paper-based (MCQ, confidence scale, free-text response bit for explanation) to online (MCQ, confidence scale, studied before yes/no/maybe). This allowed for automatic marking and meant I didn’t need to spend hours typing in the answers. I also changed the confidence scale from a 1 – 7 scale, with 1 being ‘not at all confident’ and 7 being ‘highly confident’, to a categorical scale. I intended this to make it easier for students to select an answer (I found myself becoming tied up in the difference between a 5 and a 6, whereas the difference between ‘neither confident nor unconfident’ and ‘a little bit confident’ seemed easier to grapple with). Nevertheless, I do employ NSS-style groupings to split the responses into low, medium and high confidence.
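For anyone curious what that grouping step amounts to, here is a minimal sketch in Python. The category labels and the low/medium/high mapping below are illustrative assumptions, not the actual scale or banding used in the diagnostic tests.

```python
# Hypothetical sketch: collapsing categorical confidence responses into
# low / medium / high bands, NSS-style. Labels are illustrative only.

CONFIDENCE_BAND = {
    "not at all confident": "low",
    "not very confident": "low",
    "neither confident nor unconfident": "medium",
    "a little bit confident": "medium",
    "confident": "high",
    "highly confident": "high",
}

def band_counts(responses):
    """Tally a list of categorical responses into the three bands."""
    counts = {"low": 0, "medium": 0, "high": 0}
    for r in responses:
        counts[CONFIDENCE_BAND[r]] += 1
    return counts

print(band_counts([
    "highly confident",
    "a little bit confident",
    "not at all confident",
    "confident",
]))
# {'low': 1, 'medium': 1, 'high': 2}
```

The appeal of banding is that per-question comparisons (e.g. confidently-held wrong answers as a misconception signal) become simple counts rather than arguments about whether a 5 differs meaningfully from a 6.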
The second part is based in part on the ‘Spherical Students’ post and considers a student workload model, as well as the use of the TESTA (Transforming the Experience of Students Through Assessment) process to evaluate a curriculum review. When I looked at our TESTA data from several years ago, with a view to seeing how things had evolved since the modules were shiny and new, I realised that while it accounts nicely for the type and number of assessments and goes into great detail on feedback, there’s little consideration of the timing of assignments. This, combined with my ongoing bewilderment about how more ‘active’ forms of learning, which require not insubstantial amounts of pre-sessional activity, should be accounted for in module proposals (where hours are broken down) and in timetabling (if you’ve got 8 lectures a week, can every lecture come with an hour of prep?), has led me to start reading about student workload models.

It’s fascinating stuff and seems fairly under-researched in HE. The basics are obvious: it’s really difficult to pin a meaningful metric on the task of evaluating student workload, and methods vary. They include word counts and proxy word counts (e.g. a 15 credit module is 4000 – 5000 words), which obviously don’t work that well for calculations and the like, and time on task, which can be objective (how much time should be spent) or subjective (how much time is spent) and is further complicated by the idea that researching and writing 1000 words at FHEQ level 4 (1st year UK uni) is very different from researching and writing 1000 words at FHEQ level 6 (3rd year UK uni). An effective student workload model must make some allowance for the difficulty of the material, but really should be much richer than that. So far I’ve only got as far as figuring out that where we put our deadlines really matters for whether students appear overloaded in any given week, but I’ve got more reading and thinking and modelling to do on this one.
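To make the deadline-placement point concrete, here is a minimal sketch of the simplest possible workload model for an ‘ideal spherical student’: each assessment has an estimated time on task, spread evenly over the weeks leading up to its deadline. Every number and the even-spreading assumption are mine for illustration; a real model would need the difficulty weighting discussed above.

```python
# Minimal workload-model sketch (all parameters are illustrative assumptions):
# each assessment is (deadline_week, total_hours, weeks_of_work), with the
# hours spread evenly across the weeks up to and including the deadline.

def weekly_load(assessments, n_weeks):
    """Return the modelled assessment hours per week (weeks 1..n_weeks)."""
    load = [0.0] * (n_weeks + 1)  # index by week number; slot 0 unused
    for deadline, hours, span in assessments:
        start = max(1, deadline - span + 1)
        per_week = hours / (deadline - start + 1)
        for week in range(start, deadline + 1):
            load[week] += per_week
    return load[1:]

# Two 10-hour assignments over a 6-week block: clashing deadlines pile the
# work into the same weeks, staggered deadlines halve the weekly peak.
clashing = weekly_load([(4, 10, 2), (4, 10, 2)], n_weeks=6)
staggered = weekly_load([(3, 10, 2), (5, 10, 2)], n_weeks=6)
print(clashing)   # [0.0, 0.0, 10.0, 10.0, 0.0, 0.0] — 10 h peak
print(staggered)  # [0.0, 5.0, 5.0, 5.0, 5.0, 0.0] — 5 h peak
```

Even this toy version shows why a metric based only on type and number of assessments misses the problem: both timetables contain identical assessments, and only the deadline placement differs.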
As it was about work in progress, I’m not uploading the slides. I got some really good questions at the end and need to think some more about some of the issues.