Teaching and Learning Forum 2013 [ Refereed papers ]
Peer reviews: What can we learn from our students?

Daniel Boase-Jelinek, Jenni Parker and Jan Herrington
Murdoch University
Email: d.boase-jelinek@murdoch.edu.au, j.parker@murdoch.edu.au, j.herrington@murdoch.edu.au

This paper describes lessons learnt whilst using an online peer review system in an undergraduate unit for pre-service teachers. In this unit, students learn to use information technologies as part of their future teaching practice. The unit aims to foster graduates who become life-long reflective educators by providing opportunities to explore and reflect on how they might use technology in authentic learning situations. Whilst peer review is an appropriate activity for supporting critical thinking and reflective practice in this kind of unit, making it work in specific contexts requires a number of decisions about student preparation and support, implementation strategy, and technological infrastructure. Much research has been conducted in recent years to inform educators in making these decisions. However, there are still gaps in the research, particularly in how to improve the quality and consistency of the feedback that students give each other. This paper describes our experiences in implementing an online peer review system aimed at improving that quality and consistency. This exploration revealed that we can learn much about ways to improve our teaching practices by giving students an opportunity to review each other's work and give each other feedback.


Introduction

Many universities aim to produce graduates who are life-long learners capable of assessing their learning and monitoring their performance (Boud, Cohen & Sampson 1999; Rolheiser & Ross 2003). Peer assessment and review is an authentic real-world approach to assessing student learning and achievement that contributes to development of these attributes by fostering students' capabilities for critical thinking and self evaluation (Rolheiser & Ross 2003; Wood 2009).

Peer assessment and peer review are processes whereby students grade each other's work for either summative or formative purposes (Bostock 2006). The term 'peer assessment' is often used to describe the process of giving summative assessment, whereas 'peer review' is used for giving and receiving non-summative formative feedback (Wood & Kurzel 2008).

Students generally experience peer review as a non-threatening process that benefits their learning by providing suggestions from their peers about how to improve their work and by helping them understand the criteria that will be used for their summative assessment (Wood & Kurzel 2008). The peer review process may extend over a period of time, and may involve students in developing the marking criteria as well as applying those criteria to their own, and others' work. Students are encouraged to develop higher level awareness of the task through this extended engagement with the teacher in the assessment process (Wood & Kurzel 2008).

The context

This paper describes how a peer review process was implemented in a first-year unit in a Bachelor of Education course for prospective primary school teachers. The unit had over 300 enrolled students, with about one third being external (entirely online) students. The focus of the unit is on how technology can be used by teachers to facilitate student learning. The unit takes an Authentic Learning approach that engages students in complex real-world tasks that result in the production of artifacts representing their learning (Herrington et al. 2009). In one assignment in this unit, students were asked to plan a social event, consider how technology might be used to enable that event, and then set up an online resource to facilitate it. This assignment is a precursor to a subsequent assignment in which students choose a teaching context (lesson, experiment, or investigation) and plan a learning environment in which technology might be used to facilitate and improve student learning. These assignments have been problematic in the past because students have misunderstood the task and tended to focus on technology as a goal in itself rather than as an enabler for the event. The peer review process was introduced to help students focus on the marking criteria, which asked them to review the context of the event and the suitability of the technology for that context. Students also reviewed each other's online reflective journals, in which they reflected on their learning and how that learning contributed to the development of the resource.

The challenges

Despite the potential contribution that peer review processes can make to helping students achieve their desired learning outcomes, there are numerous challenges that must first be overcome. The first challenge relates to the attributes of students. Students may lack the skills, motivation, and knowledge required to engage constructively in a peer review process (Sluijsmans 2002). The required skills relate to being able to identify the strengths and weaknesses of another student's work and give constructive feedback on how to improve that work. Lack of motivation is also a major issue because the whole process depends on students making the effort to look carefully at each other's work and evaluate it in sufficient depth to provide useful feedback.

The second challenge relates to the strategies underlying the design of the peer reviews. It seems that peer assessment for summative purposes needs to be done anonymously so that students are not prejudiced in their marking by personal relationships with their fellow students (Lu & Bol 2007). Even with formative peer reviews, students are more prepared to make critical and constructive comments when they are anonymous (Howard et al. 2010). However, there are advantages in peer reviewers being identified, because students need to learn to give each other honest feedback when working as a team (Guilikers et al. 2009).

When designing the peer reviews it is necessary to decide how many reviews each student performs. The quality of feedback that students give each other tends to be quite variable (Robinson 1999), so ideally each student should receive feedback from multiple reviewers. The decision about how many reviews each student needs to perform involves balancing student and teacher workload against the benefits of multiple reviews. Furthermore, decisions need to be made about the degree to which reviews are monitored, to ensure that they are not overly critical and that they provide useful feedback to students (Pearce et al. 2009).

The third challenge relates to the practical management of a peer review process (Mostert & Snowball, 2012). Students need to be given access to each other's work and a forum for exchanging assessments. All of these assessments need to be monitored and possibly assessed by teachers, which can be a logistical challenge where large numbers of students are involved.

Implementing the peer feedback system

Two key decisions were made in developing a peer review system for this unit. The first was to assess students on the quality of the feedback they provided to their peers. This decision was based on the importance of students engaging seriously with the peer review process to maintain the desired quality and consistency of feedback (Pearce et al. 2009). Ideally, each student would have assessed three or more students' products. However, because of workload issues for the teachers assessing the work, each student was asked to review only one other student's work. The second decision was to identify each reviewer to the student whose work was reviewed. Whilst the general recommendation is to keep peer reviews anonymous (Howard et al. 2010), it was felt that identifying students would make them accountable to each other and reduce the likelihood of overly critical comments. It also had the practical advantage that students could liaise with each other for clarification if they experienced problems in viewing each other's work, and through these communications could foster a collaborative environment with a focus on helping each other - and indeed, this was the result for many pairs of students.

Since student work was online, we developed a web-based system for facilitating the peer reviews. Whilst a number of online tools are available for facilitating peer reviews (Pearce et al. 2009; Keppell et al. 2006; Mostert & Snowball 2012), these tools appeared to lack specific features (especially simplicity and flexibility of use) that we were looking for, so we developed a web-based tool to meet our needs. Students logged into the system using their student ID numbers and were given the name and web address of the item they were to review, along with a web-based rubric specifying the criteria they were to use. Students were advised that these same criteria would be used for summative marking of their work later in the semester. When a student submitted a review, the system automatically sent an email to the reviewed student advising them that a review had been performed, with a link to the review. Students could conduct more than one review of a peer's work if they wished; for example, if a student modified their work in response to a review, they could ask for it to be re-reviewed. At the assignment submission due date, the teaching team used the same rubric to mark each student's work. The teaching team also reviewed the quality of the review each student had performed. Once marking was completed, students could view both the peer review and the tutor review.
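The source code of the tool is not published in this paper, but the submit-and-notify step just described can be illustrated with a minimal sketch. Everything in the sketch below is an assumption for illustration only: the function names, database layout, base URL and mail relay are hypothetical and not taken from the actual system.

```python
# Minimal sketch (not the authors' actual code) of the review-submission step:
# store a completed rubric, then email the reviewed student a link to it.
import smtplib
from email.mime.text import MIMEText

REVIEW_BASE_URL = "https://example.edu/peer-review/view"  # hypothetical address

def save_review(db, reviewer_id, reviewee_id, scores, comments):
    """Store one completed rubric (seven criteria, each marked 0-5).

    `db` is a DB-API connection, e.g. sqlite3.connect("reviews.db").
    """
    cur = db.execute(
        "INSERT INTO reviews (reviewer, reviewee, scores, comments) "
        "VALUES (?, ?, ?, ?)",
        (reviewer_id, reviewee_id, ",".join(map(str, scores)), comments),
    )
    db.commit()
    return cur.lastrowid  # used to build the link sent to the reviewee

def notify_reviewee(reviewee_email, review_id):
    """Advise the reviewed student by email that a review has been performed."""
    link = f"{REVIEW_BASE_URL}?id={review_id}"
    msg = MIMEText(f"A peer review of your work has been submitted:\n{link}")
    msg["Subject"] = "Your peer review is ready"
    msg["From"] = "peer-review@example.edu"   # hypothetical sender address
    msg["To"] = reviewee_email
    with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
        smtp.send_message(msg)

# Example wiring (hypothetical identifiers):
# review_id = save_review(db, "12345678", "87654321", [4, 3, 5, 2, 4, 3, 5], "...")
# notify_reviewee("student@example.edu", review_id)
```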

The technology used in this peer review system was kept basic to avoid technological problems such as incompatible browsers. The review page used pure HTML forms, with no cookies, Flash or Java applets required to complete the reviews.
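As an illustration of what such a dependency-free review page might look like, the sketch below generates a plain-HTML rubric form server-side. The criterion labels are taken from the list of seven criteria given later in this paper; the field names, URL and layout are assumptions, not the actual tool.

```python
# Sketch of a rubric page built from static HTML only: no cookies, JavaScript,
# Flash or Java applets, so it renders in any browser. Field names are assumed.
CRITERIA = [
    "Context of use", "Quality of instructions", "Reference to unit materials",
    "Attitudes and feelings", "Learning strategies",
    "Networking with peers", "Visual appearance",
]

def rubric_form(action="/submit-review"):
    """Return static HTML for a seven-criterion rubric marked on a 0-5 scale."""
    rows = []
    for i, criterion in enumerate(CRITERIA):
        options = "".join(f'<option value="{m}">{m}</option>' for m in range(6))
        rows.append(
            f'<label>{criterion}: <select name="c{i}">{options}</select></label><br>'
        )
    return (
        f'<form method="post" action="{action}">' + "".join(rows)
        + '<textarea name="comments" rows="6" cols="60"></textarea><br>'
        + '<input type="submit" value="Submit review"></form>'
    )
```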

Since the purpose of implementing the peer review system was to address misunderstandings by students about the marking criteria and to clarify the purpose of the assignment, we devoted one lecture to working through the marking guide and describing how it would be used in assessing student work. This involved discussing what we were looking for in each component of the marking guide and describing what different levels of performance might look like on each criterion.

Observations

Students kept a reflective journal, in the form of a blog, that was part of their assessment for the unit. The observations recorded in this paper are based on comments made by the students in their reflective journals.

Virtually all students completed peer reviews as requested. Some experienced problems in conducting a peer review when the student whose work they were reviewing did not submit work in time for a review. The teaching team wanted students to be able to modify their work following the peer review, so the assignment submission deadline was one week after the peer reviews were conducted. Some students did not take advantage of this opportunity, and only submitted their work at the assignment deadline without having it reviewed. However, most students were able to conduct the review without difficulty.

Students were anxious about giving feedback to their peers. They expressed concern about offending a peer with critical comments, but noted that giving honest feedback could benefit their peer. A number of students commented that having a marking rubric helped them concentrate on the important aspects when giving feedback. One student developed a strategy of sandwiching negative comments between positive ones. Another student consulted a teacher on how to give feedback. It seems that students did benefit from the experience: "... it taught me a lot about reviewing, judging, and marking".

Students found that receiving feedback from their peers was a mixture of positive and negative experiences. One student summarised this with the comment "... I began to look through my work and compare it to the peer review, and I was able to see what the student was saying. I repeatedly told myself ... this isn't a personal attack, it is designed to help you get better marks, so stop being upset and improve your work". In general, students found the review process highly beneficial, both in terms of helping them improve their submitted work, and also in terms of learning how to assess their own work. For example, one student commented " ... next I want to review my own site and see how I would assess my work against the marking rubric ... this should be interesting!". Another commented "... by reflecting on the work of someone else it has forced me to reflect on my own work".

Despite using the most basic HTML forms for conducting the reviews, a few students still reported technical difficulties: "... I got a little frustrated when I went to submit the review because I had it all disappear into cyberspace ...". Fortunately, these events seemed to be rare.

We were pleased to note that by not making the reviews anonymous, and by giving reviewers and reviewees each other's email addresses, we helped facilitate communication and cooperation between students. One student wrote " ... through a few emails I ended up helping her out with a few things ... and over the next few days we both were discussing how we were [having technical problems] ... I think we supported each other very well".

What can we learn from our students?

To see what we can learn from our students about the peer review process, we have compared the review each student received from a peer with the review they obtained from their tutor (each student only received one review from a peer).

Each review assessed student submissions on seven criteria (description of the context of use of the resource, quality of instructions for using the resource, degree to which the resource made reference to the unit materials and content, references to students' attitudes and feelings while constructing the resource, exploration of learning strategies to develop the resource, use of networking and communicating with peers while constructing the resource, and the visual appearance of the resource). Each criterion was marked on a scale from 0 to 5, so the maximum possible deviation of tutor marks from student marks was 5. The distribution of deviations is shown in Figure 1 below.

Figure 1: Deviations of tutor marks from student marks
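The comparison behind Figure 1 can be illustrated with a short sketch that tallies the signed deviation between tutor and peer marks on each criterion. The data layout below is an assumption; the paper does not describe the actual analysis code.

```python
# Sketch of the comparison behind Figure 1: for each student, compare the
# tutor's mark with the peer's mark on each of the seven 0-5 criteria and
# tally the signed deviations (negative = the peer marked higher than the tutor).
from collections import Counter

def deviation_distribution(tutor_marks, peer_marks):
    """Each argument is a list of 7-tuples of 0-5 marks, one tuple per student."""
    deviations = Counter()
    for tutor, peer in zip(tutor_marks, peer_marks):
        for t, p in zip(tutor, peer):
            deviations[t - p] += 1
    return deviations

# E.g. (two criteria shown for brevity): tutor (3, 4) vs peer (4, 4) gives
# Counter({-1: 1, 0: 1}); a skew toward negative values means peers mark higher.
```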

It seems from Figure 1 that approximately one quarter to one third of students gave the same mark to their peer as the tutor did. It also seems that the marking is slightly skewed, with students tending to give higher marks than tutors.

The correlations between tutor marking and student marking on each of the seven criteria are very low, as shown in Figure 2.

Figure 2: Correlations between tutor marks and student marks

The correlations for two criteria (Context of use, and Quality of instructions) are approximately 0.26. The correlations for the remaining criteria are less than 0.1, barely better than chance.
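The per-criterion figures in Figure 2 correspond to a standard Pearson correlation between tutor and peer marks on each criterion, which can be sketched as follows (again, the data layout is assumed):

```python
# Sketch of the per-criterion comparison behind Figure 2: Pearson correlation
# between tutor marks and peer marks on each of the seven criteria.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length mark lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def per_criterion_correlations(tutor_marks, peer_marks, n_criteria=7):
    """Marks are lists of 7-tuples (one tuple of 0-5 marks per student)."""
    return [
        pearson([t[i] for t in tutor_marks], [p[i] for p in peer_marks])
        for i in range(n_criteria)
    ]
```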

It seems from these observations that students are not well calibrated in their reviews of their peers' work on their first attempt, and that they interpret the marking rubric idiosyncratically. Our learning from this observation is that students need practice in using the marking rubric to perform reviews, along with opportunities to see how the tutor interprets the rubric.

Another observation we made was that students were generally positive towards each other in their reviews, complimenting their peers on the clarity of their instructions, the attractiveness of their sites, and the quality of their reflective blogs. Their critical feedback tended to focus on minor spelling errors or problems with links. The reviews consequently tended to focus on superficial features rather than critiquing peers' understandings of the role of technology and their use of technology as an enabling resource. This highlighted a misconception that many novice teachers hold about the role of technology in education. Many students assumed that technology is something that teachers use to facilitate student learning, whereas the philosophy of this unit is that technology is something that students use to facilitate their own learning. As part of this philosophy, each student's use of technology must be set into a learning context. To help students address this misconception, the marking rubric used to assess their work ranged from 'No context provided of the social setting where the resource or social networking tool would be used' (fail) to 'Very detailed description of context provided, plus clear evidence of experimentation and suggestions of a range of ways to use the resource' (high distinction).

A number of students gave feedback in their review that there was no context for the resource their peer had created. However (and surprisingly), those same students had provided no context for their own resource. This suggests that at least some students could recognise the problem in conceptualising the role of technology in the assignment when reviewing others' work, even though they had not yet recognised it in their own.

Students generally commented positively on the peer review process in their reflective journals during the course of the unit. They valued the opportunity to view each other's work and were especially appreciative of the comments and suggestions they received from their peers on how to improve their own work.

Conclusion

The key lesson learnt from this investigation is that, despite the use of the peer review process and the devotion of a lecture to explaining the marking guide in detail, many students did not interpret the marking rubric they used for assessing each other's work in the same way as their tutor. This suggests that a peer review process and a detailed description of the marking guide are not sufficient to ensure that students understand the assessment requirements. It seems that students need to be given an opportunity to calibrate their interpretation of the marking rubric to bring their understanding of the rubric closer to that of their tutor. This calibration process should also help students overcome their misconceptions about the topic: for example, the misconceptions about the role of technology in education observed in this unit are consistent with research into pre-service teachers' assumptions about the role of technology (Znamenskaia 2000). It seems that such assumptions are resistant to change and require teachers (and in this context, pre-service teachers) to make a conceptual shift in the ways they think about technology and their teaching practices.

One strategy suggested for bringing about this conceptual shift is for teachers to engage their students in a dialog in which the marking rubrics for the peer review and assessment are developed collaboratively (Sluijsmans 2002). Such dialog can deepen student understandings of the content and goals of the unit and hence address conceptual misunderstandings. Such negotiations also give students a sense of empowerment and ownership of their learning, and may thus motivate them to participate more fully in the peer review process (Pearce et al. 2009). Using dialog to enhance motivation to engage with the review process may reduce the need to motivate students by assessing the quality of their reviews, freeing the review process to allow students to conduct more reviews and so benefit from a greater diversity of feedback (Pearce et al. 2009).

The generally positive comments from students about the peer review process in their reflective blogs suggest that it is a worthwhile activity, and one from which their learning clearly benefitted. This is especially the case for students studying the unit online, who would otherwise tend to be isolated and lack opportunities to obtain feedback about their work to help them keep on track.

Finally, by giving students a voice, even if that voice is primarily directed to their fellow students, there is much that we can learn about what our students are understanding - and importantly, not understanding - that can help us to improve our own pedagogical practices.

References

Bostock, S. (2006). Student peer assessment. [viewed 7 Jun 2012; verified 8 Jan 2012]. http://www.keele.org.uk/docs/bostock_peer_assessment.htm

Boud, D., Cohen, R. & Sampson, J. (1999). Peer learning and assessment. Assessment and Evaluation in Higher Education, 24(4), 413-426. http://dx.doi.org/10.1080/0260293990240405

Guilikers, J. et al. (2009). Becoming a teacher educator: Theory and practice for teacher educators. In A. Swennen & M. van der Klink (Eds), Becoming a teacher educator: Theory and practice for teacher educators. Springer.

Herrington, J., Reeves, T. C. & Oliver, R. (2009). A guide to authentic e-learning, 1st ed., New York: Routledge. http://www.routledge.com/books/details/9780415998000/

Howard, C. D., Barrett, A. F. & Frick, T. W. (2010). Anonymity to promote peer feedback: Pre-service teachers' comments in asynchronous computer-mediated communication. Journal of Educational Computing Research, 43(1), 89-112. http://dx.doi.org/10.2190/EC.43.1.f

Keppell, M., Au, E., Ma, A. & Chan, C. (2006). Peer learning and learning-oriented assessment in technology-enhanced environments. Assessment & Evaluation in Higher Education, 31(4), 453-464. http://dx.doi.org/10.1080/02602930600679159

Lu, R. & Bol, L. (2007). A comparison of anonymous versus identifiable e-peer review on college student writing performance and the extent of critical feedback. Journal of Interactive Online Learning, 6(2), 100-115. http://www.ncolr.org/jiol/issues/pdf/6.2.2.pdf

Mostert, M. & Snowball, J. D. (2012). Where angels fear to tread: Online peer-assessment in a large first-year class. Assessment & Evaluation in Higher Education, 1-13. http://dx.doi.org/10.1080/02602938.2012.683770

Pearce, J., Mulder, R. & Baik, C. (2009). Involving students in peer review: Case studies and practical strategies for university teaching. Melbourne: Centre for the Study of Higher Education, The University of Melbourne. http://www.cshe.unimelb.edu.au/resources_teach/teaching_in_practice/docs/Student_Peer_Review.pdf

Robinson, J. M. (1999). Anonymous peer review for classroom use: Results of a pilot in a large science unit. In Teaching in the Disciplines/ Learning in Context. Proceedings of the 8th Annual Teaching Learning Forum. Perth: The University of Western Australia. http://otl.curtin.edu.au/professional_development/conferences/tlf/tlf1999/robinson-j.html

Rolheiser, C. & Ross, J. A. (2003). Student self-evaluation: What research says and what practice says. http://www.cdl.org/resource-library/articles/self_eval.php

Sluijsmans, D. (2002). Establishing learning effects with integrated peer assessment tasks. Lancaster, UK: The Higher Education Academy: Palatine. http://78.158.56.101/archive/palatine/files/930.pdf

Wood, D. (2009). A scaffolded approach to developing students' skills and confidence to participate in self and peer assessment. In ATN Assessment Conference 2009: Assessment in Different Dimensions. Melbourne: Australian Technology Network. http://emedia.rmit.edu.au/conferences/index.php/ATNAC/ATNAC09/paper/view/203/5

Wood, D. & Kurzel, F. (2008). Engaging students in reflective practice through a process of formative peer review and peer assessment. In ATN Assessment Conference 2008: Engaging students in assessment. Adelaide. http://www.ojs.unisa.edu.au/index.php/atna/article/download/376/252

Znamenskaia, E. (2000). Future teacher misconceptions concerning educational technology. Dissertation. University of Connecticut. http://digitalcommons.uconn.edu/dissertations/AAI9991599/

Please cite as: Boase-Jelinek, D., Parker, J. & Herrington, J. (2013). Peer reviews: What can we learn from our students? In Design, develop, evaluate: The core of the learning environment. Proceedings of the 22nd Annual Teaching Learning Forum, 7-8 February 2013. Perth: Murdoch University. http://ctl.curtin.edu.au/professional_development/conferences/tlf/tlf2013/refereed/boase-jelinek.html

Copyright 2013 Daniel Boase-Jelinek, Jenni Parker and Jan Herrington. The authors assign to the TL Forum and not for profit educational institutions a non-exclusive licence to reproduce this article for personal use or for institutional teaching and learning purposes, in any format, provided that the article is used and cited in accordance with the usual academic conventions.

