Monday, 28 May 2007

Reflective Commentary: Course Development Document - Assessment & Feedback (Part 3; Column 7)

Going back a little to my reflections on the learning tasks, I've only just realised that my CDD draft presents the learning design in a content/domain-centred way, rather than in the Learning Task-centred way requested. I plan to change this for the final version, as in reading it over I realise that the tasks I've envisaged are certainly not crystal clear.

Now onto assessment.

On reading Phil Race's article on assessment (Race, P. 2003), several things resonated with me. I've read several times that good education emphasises assessment-centred learning (including Anderson, T. 2004). I'm not sure of all of the arguments for this, but I know that, with the volume of information needing to be memorised (usually with little processing going on), the challenge and strategy of med school became 'how to learn what we were tested on'. There was little desire to 'learn for the sake of learning'; peer discussion centred on 'what you need to learn', rather than on 'what does this mean?' and 'why is this important?'. The aforementioned article quotes the following:

“Assessment is the engine which drives student learning” (John Cowan). “And our feedback is the oil which can lubricate this engine”

To me, this simply reiterates the importance of aligning learning outcomes with assessment. There seem to be a myriad of other reasons for 'assessment-centred learning', but I'm yet to figure out their place amongst 'student-centred learning' and the like.

1.  The form of assessment I have chosen for each learning activity is consistent with its learning objectives, and is integrated into the learning activity.

The assessment tasks are somewhat of a compromise. As such, I haven't integrated all learning tasks with assessment. In fact, I'm not too sure whether I have changed the course in this respect at all.

In module 1 it's presented quite ambiguously, but the learning tasks are self-directed PBLs (problem-based learning exercises), combined with peer-assisted mini-CEXs ('mini' clinical examinations). The assessment in the latter is the peer-submitted evaluations, i.e. the task is also the assessment. I'm not quite sure how to explain how this will be assessed, perhaps because I'm not quite certain myself. I think this needs to be finalised as part of the discussions with the committee, i.e. NOT finalised as part of the proposal.

This task is most definitely consistent with the learning outcomes (Pinnock, R. 2007):

Domain: Professional, Clinical and Research Skills
Evaluate paediatric patients presenting with a range of clinical problems...

I think both the task and this kind of assessment are a key aspect missing from the existing course, and that the teaching perspectives of development and nurturance are thoroughly expressed in this task. Students will receive formative feedback from each other not only on clinical skills but also on wider professional skills and communication. I think it will also contribute to an environment where students feel that peer feedback is part of professional development rather than a 'summative' criticism.

As far as the PBLs go, these are not assessed as part of the module. This is probably because, in my original conception of the course, transmission would be through didactic and linear materials; as the idea has evolved toward active learning tasks, the assessment aspect has lingered. I'm still not sure how I could assess this part of the course without increasing the tutor workload too heavily. I also don't know how completion could be marked without it being part of the LMS, which is itself a considerable undertaking. If we were to integrate the PBLs into the LMS, though, I think we could mark on a grade-based system: taking the time to collaborate doesn't necessarily mean you've "cheated", and collaboration could perhaps even be considered part of the task. I'm not sure whether the PBLs could be submitted collaboratively. Going in circles now, I'm also not certain what the benefit of LMS submission would be, other than the logistical ones, unless the PBL answers were in MCQ form and instantly analysed by the LMS.

The second form of summative assessment mentioned in my CDD is the end-of-course OSCE (Objective Structured Clinical Examination). This is the mainstay of the current assessment. The OSCE seems to have developed an air of absoluteness about it: that it is valid, reliable, transparent and authentic (Race, P. 2003):

Valid. Assessment should measure what it purports to measure, namely the intended and published learning outcomes for a given module or course.

Reliable. Assessment should be objective, and consistent across students and assessors.

Transparent. Students (and assessors) should know exactly which aspects of a task will be assessed, and what will constitute a satisfactory or a poor performance.

Authentic. Assessment should measure a student's own, non-plagiarised work.

I've not read extensively in the literature around the OSCE, although I'm planning to enrol in Jennifer Weller and Alison Jones's Assessment Course next semester (ClinEd 704). My understanding is that the OSCE becomes more reliable with an increased number of stations.

To me the existing OSCE suffers from several issues. Reliability suffers because the number of stations is restricted by the available staff. It is also affected by inter-rater differences, and by the subjective nature of some of the questions; it's odd that, despite a complete jungle of opinions surrounding many clinical questions in practice, many of these questions are expected to be answered in a binary way.

The OSCE suffers from issues of validity, in that many of the tasks are abstractions. It suffers from decreased transparency, because for some reason there is a fear that if students know what they will be assessed on, this will not distinguish between levels of 'ability'. As a result of the reduced transparency it suffers from issues of authenticity, since students from previous groups compile lists of 'remembered questions'. I was forwarded a copy (Anon. 2005) as a student of the same course, but unfortunately hadn't checked my email prior to sitting. My disappointment with the OSCE, however, was not with the issues of validity or reliability, but with the fact that other students had been privy to what I see as a basic requirement of any assessment, i.e. being told what you're going to be assessed on.

Thirdly, the logbooks/vignettes. To me the task should be that the student takes a history, examines a patient, attempts a formulation, then compares that formulation with what the patient's team thought and reflects on the differences and similarities. In reality there will likely be a difference in order (the team will say, 'go see that boy with x'), but the essence is the same: practice of the 'hypothetico-deductive' method of formulation, filling gaps in knowledge, and comparing ideas with peers and superiors.

Thus, ideally these aspects would be reported in fluent language (potentially preceded by rough, hand-written notes), and assessed and reflected on by the students, their peers and tutors throughout the course. Note that the students themselves would have fairly specific instructions on how to formulate the reflection. The final grade would consist of a combination of cumulative assessments and global grading.

To me, this is different from the existing 'case-history' style of assessment. Those are long and somewhat detached from clinical practice; they allow the student to avoid any responsibility for combining the information in any way other than in print presentation, ordered in the recognisable "presenting complaint, history of presenting complaint..." format.

Furthermore, I think the assessment relates to the learning outcomes in the sense that it not only serves as an exercise in developing the cognitive skill of formulation, but also develops the skill of fluent and efficient presentation (given a maximum word count).

Finally, the community and Hauora Maori collaborative tasks. I think that, formulated and presented in the optimum way, these problem-based learning tasks could prove quite useful and innovative. The existing tasks are centred on the individual's reflections on aspects of community health, but to me the task is rather abstract and therefore not engaging. I think these tasks could improve on this state of affairs by testing students' assumptions about their ability to effectively and efficiently problem-solve 'social' issues.

The assessment is intertwined with the learning task, and I feel is quite successful in its consistency with the learning outcomes (Pinnock, R. 2007):

Domain: Hauora Maori
Identify key health issues for Maori children and adolescents and explain the approaches to addressing the issues;

Domain: Population and Community Based Practice
Summarise the roles, responsibilities and collaborative processes of child health professionals.

2. Students will have opportunities to undertake self-assessment and peer critique as well as receiving instructor feedback.

I think my attempt to involve peer assessment wherever possible has become quite evident. I recently read a quote from J. M. Coetzee's Disgrace (Coetzee, J. M. 2000), which struck a note with me:

The irony does not escape him: that the one who comes to teach learns the keenest lessons, while those who come to learn learn nothing.

To me the main reason for wanting students to assess each other is not for the increased learning as it relates to the subject content, but to develop awareness of the skill of communicating clinical information between each other- i.e. fluid presentation.

I think there are several areas that would particularly benefit in terms of 'increased learning'. Clinical examination skills and patient communication are areas in which peer supervision could help not only to evaluate and give feedback, but also to socialise students into this 'routine'.

3. The strategy underlying the assessment approaches I have chosen reflects the view of teaching and learning evidenced by my Teaching Perspectives Inventory results, but also reflects new insights I have gained into assessment and e-learning.

Development and nurturance were the original two key perspectives I felt I held. However, following my recent reflection on teacher role, I realised the significance of what was originally an equivalent 'score' for apprenticeship in the Teaching Perspectives Inventory (TPI) (Pratt, D. D., & Collins, J. B.).

To me, the underlying strategy of my assessment approaches is to assist development of key cognitive skills, and to do this in a way which is nurturing and socialises peers into the requirements of the task, rather than into the (often misaligned) expectations of the (often absent, disinterested and/or overcommitted) clinical supervisor.

Thus, the student interacts with the teacher to clarify issues which cannot easily be dealt with by suitable alternatives, i.e. textbooks or peers.

I think these definitely reflect insights I have gained into assessment and learning, but not necessarily e-learning. What I am quite confident of is that e-learning provides the excuse to implement many of these advantageous course changes.

References

Anderson, T. (2004). Toward a theory of online learning. In T. Anderson & F. Elloumi (Eds.), Theory and practice of online learning (pp. 273-294). Athabasca (AB): Athabasca University.

Anonymous. (2005). Paediatric OSCE Cheat Sheet.

Coetzee, J. M. (2000). Disgrace. London: Vintage.

Pinnock, R. (2007). 5th Year Book: University of Auckland Department of Paediatrics Undergraduate Curriculum Committee.

Pratt, D. D., & Collins, J. B. Teaching Perspectives Inventory. Retrieved 14 March, 2007, from http://teachingperspectives.com/

Race, P. (2003). Why fix Assessment? – a discussion paper [Electronic Version], 9. Retrieved 14 May 2007 from http://www.scu.edu.au/services/tl/why_fix_assess.pdf.
