
Showing students you're listening: Changes to the student survey system at Murdoch

Christina Ballantyne
Teaching and Learning Centre
Murdoch University
Student evaluations of teaching are now being used in our universities to collect information for diverse and increasing purposes. What was initially instigated as a tool to provide staff with feedback on their teaching is now also being used to collect information for personnel and management decisions.

This semester Murdoch University has introduced a new system of student surveys which separates the surveys of units from those of teaching. While this is by no means unique - many institutions' student survey systems differentiate between units and teaching - Murdoch has attempted to give a more formative slant to the surveys by running the teaching surveys mid-semester. Staff are provided with results before the end of the teaching period and are strongly encouraged to respond to the students. Previous research at Murdoch has shown that while students are happy to complete questionnaires on teaching and units, they become disenchanted with the process if given no evidence that their views are taken into account. This paper looks at how this new system has worked and at staff and student views on the process.


Introduction

Student evaluation of teaching is not a new phenomenon. From earliest times up to the universities of the Middle Ages, students paid their lecturers according to what they considered them to be worth (Doyle, 1983; Travers, 1981). There was no problem in weeding out bad teachers, as they were not paid enough to live! Interest in the use of student ratings in the twentieth century grew slowly through the twenties, thirties and forties in the United States (Aleamoni, 1981; Centra, 1987; Cohen, 1990; Doyle, 1983; Remmers & Gage, 1955). These early systems were often initiated by student bodies and their main purpose was to allow students to select or reject their courses based on the views of previous students (Arreola, 1987; Ory, 1990). During the sixties there was an upsurge of interest in student feedback due to the student movements, when students began to demand a voice at all levels of society (Beard & Hartley, 1984; Centra, 1993). This coincided with the beginnings of calls for more accountability in publicly funded areas, such as universities. It was at this time in the US that student survey systems became more institutionalised and the results began to be used for promotion and tenure decisions (Ory, 1990). This increase in interest has continued to the present, with the seventies noted as the decade when a considerable body of research was undertaken. Economic concerns, due to funding cuts and increased competition among universities for a declining number of potential students, together with the need for valid, reliable evidence about the quality of teaching, have further shaped the student rating systems used today (Doyle, 1983; Seldin, 1984). By the mid-1980s student surveys were reported as being the principal source of information used in promotion and tenure decisions in universities in the USA (Aleamoni, 1987).

In the last ten years an increased focus on good teaching in Australian universities has, in many instances, resulted in greater use of the results of student surveys of teaching, and in changes to the purposes for which they are collected. In 1994 a mandatory student survey system, in which units and teaching were reviewed on a three-year cycle, was introduced at Murdoch University. Its original purposes were to guide academic staff development and curriculum improvement in individual units. Over the years several changes have occurred in the uses for the data collected, eg as one of the university's performance indicators, for promotions, etc. By 1998 these changes in the use of the information collected, together with the development of clearer standards for units and teaching by the university and the move to annual surveys of units, highlighted a new scenario for student surveys.

How the new system works

The information collected by the new system needed to be both formative and summative, ie teachers require information which is diagnostic and can be used to make improvements, but data is also needed for the university's performance indicators and for promotion and other personnel decisions. The system was designed to separate the collection of information about the unit from that related to an individual's teaching.

Questions on the unit surveys were constructed with reference to the 'Key Quality Standards for Units', which were adopted by Murdoch at the same time[1]. Those included on the teaching survey relate to the criteria for assessing items of evidence of teaching that promotes effective learning, as identified in Table 2 of the Guidelines for the Presentation of a Teaching Portfolio (Murdoch University, 1998).

As with the introduction of the original mandatory system, there were some concerns amongst staff that the increase in the number of surveys - particularly as two separate questionnaires would now be administered in a number of units - would result in respondent fatigue amongst students. While the literature on student evaluations of teaching is immense - more than 1,300 articles were counted in 1990 (Cashin, 1990) - studies of student views are meagre. Studies which have been undertaken show that students rate these surveys positively, feel they are important and have realistic expectations of their teachers' willingness and ability to change their teaching (Costin, Greenough et al., 1971; Miron & Segal, 1986; Marlin, 1987; Dwinell & Higbee, 1993). Previous research undertaken at Murdoch indicates that while students are keen to provide their views, it is essential that they have some evidence that these views are taken into account (Ballantyne, 1997). With these issues in mind, the new survey system was organised so that unit surveys would be undertaken in the last two weeks of the semester, with the teaching surveys being run mid-semester. This offered an opportunity to give a more formative slant to the teaching surveys, with the results being returned to staff by week 10 of a thirteen-week teaching semester, allowing them time to respond to student concerns. Student feedback which is collected mid-semester has been shown to be useful for the improvement of teaching (McKeachie, 1994).

Operation of the teaching surveys - semester 2, 1998

While unit surveys were mandatory in all units, teaching surveys were conducted on a voluntary basis for this first semester, and surveys were requested by 95 teachers. The option of using either the newly developed teaching questionnaire or an adapted version of the old questionnaire was given[2]. Staff choice depended largely on previous use of the survey, ie those who already had results intended for a promotion application were advised to continue using the old format. Forty-three opted for the new questionnaire, the remainder for the old.

Student opinion

Members of the Murdoch Student Guild expressed some concerns about student comments from the teaching surveys being returned to staff before the grades for the unit had been finalised. Guidelines for student evaluation systems generally advise the return of comments after the grades have been finalised, to preserve the anonymity of the students (Centra, 1993; Arreola, 1995). Staff were asked to ensure that students were aware of this change to the policy, and the information was given on the student comment sheets. To gauge some idea of student opinion of this change, a 'yes/no' question was added to the comment sheets where the new version of the teaching questionnaire was being used. This asked:
'Did you feel that you could make only positive comments because they were being returned to the staff member before the final assessment period?'
A total of 1556 students completed a new teaching survey, representing 49 per cent of the enrolled students in these classes. Only 1017 (65 per cent) of these students, however, answered this extra question, with 87 per cent indicating that this early return did not restrict them to positive comments.

Students were also provided with an opportunity to comment. Only a minority of students made a comment on the new procedure. The dominant themes of the comments were -

Staff opinion

Eighty-three of the teaching surveys were undertaken in the middle teaching block, ie weeks 5 to 9. As the majority of staff administered the survey in week 9, it was impossible to provide results for everyone before the next teaching week; however, around half did receive their results as promised. An email survey was sent to all staff, requesting opinions of the new questionnaire and procedures, and asking whether they had taken the opportunity to respond to the student feedback. Unfortunately the response to this survey was very disappointing, with only thirteen staff members replying. Of these, five had undertaken a teaching survey in the middle block, with two having an opportunity to respond to the students. One staff member was able to make some changes within the running of the unit and felt that the students were appreciative of her response to them. The other thought that making changes in the last few weeks was a somewhat unrealistic expectation. Comments from other staff members also addressed the issue of whether changes could be made within a short time span, and problems relating to multiple teachers. The questions on the new questionnaire relate to one teacher only; therefore each teacher in a unit (whether lecturer, tutor or demonstrator) requires a separate questionnaire. The issue of 'team teaching' is not one which has been adequately addressed in the student survey literature and requires further investigation.

Conclusion

While students appear to be happy to give feedback on teaching and units, provided they have some evidence that staff pay attention to it, care still needs to be taken that these surveys are not used indiscriminately. The new system had a number of teething problems, eg not all staff received their results in time, and the question of how to handle units with several lecturers remains unresolved. Staff are also somewhat resistant to the idea of responding to students in cases where they are unable or unwilling to make the changes suggested. Feedback from students, however, suggests that what matters most to them is evidence that staff have considered what they have said.

Endnotes

  1. The Key Quality Standards can be seen at:
    http://cleo.murdoch.edu.au/asu/evaluation/survey/unit.html#questions
    and the unit survey questionnaire at:
    http://cleo.murdoch.edu.au/asu/evaluation/survey/draft.html

  2. These questionnaires can be seen at:
    http://cleo.murdoch.edu.au/asu/evaluation/survey/teachsurvey.html

References

Aleamoni, L. M. (1981). Student ratings of instruction. In J. Millman (Ed.), Handbook of teacher evaluation (pp. 110-145). Beverly Hills, California: Sage.

Aleamoni, L. M. (1987). Some practical approaches for faculty and administrators. In L. M. Aleamoni (Ed.), Techniques for evaluating and improving instruction. New directions for teaching and learning, (No. 31, pp. 75-78). San Francisco: Jossey-Bass.

Arreola, R. A. (1987). The role of student government in faculty evaluation. In L. M. Aleamoni (Ed.), Techniques for evaluating and improving instruction. New directions for teaching and learning, (No. 31, pp. 39-46). San Francisco: Jossey-Bass.

Arreola, R. A. (1995). Developing a comprehensive faculty evaluation system. Bolton, MA: Anker Publishing Co.

Ballantyne, C. (1997). Improving university teaching: Responding to feedback from students. Conference paper, Adult Learning Cultures: Challenges and Choices, Wellington Polytechnic, New Zealand, February 1998. http://cleo.murdoch.edu.au/asu/evaluation/survey/respfeed.html

Beard, R., & Hartley, J. (1984). Teaching and learning in higher education. London: Paul Chapman.

Cashin, W. E. (1990). Assessing teaching effectiveness. In P. A. Seldin (Ed.), How administrators can improve teaching. San Francisco: Jossey-Bass.

Centra, J. A. (1987). Formative and summative evaluation: parody or paradox? In L. M. Aleamoni (Ed.), Techniques for evaluating and improving instruction. New directions for teaching and learning, (No. 31, pp. 47-55). San Francisco: Jossey-Bass.

Centra, J. A. (1993). Reflective faculty evaluation. San Francisco: Jossey-Bass.

Cohen, P. A. (1990). Bringing research into practice. In M. Theall & J. Franklin (Eds.), Student ratings of instruction: Issues for improving practice. New directions for teaching and learning, (No. 43, pp. 123-132). San Francisco: Jossey-Bass.

Costin, F., Greenough, W. T., et al. (1971). Student ratings of college teaching: Reliability, validity and usefulness. Review of Educational Research, 41(5), 511-535.

Doyle, K. O. J. (1983). Evaluating teaching. Lexington, Mass.: Lexington Books.

Dwinell, P. L., & Higbee, J. L. (1993). Students' perceptions of the value of teaching evaluations. Perceptual and Motor Skills, 76, 995-1000.

Marlin, J. W. J. (1987). Student perception of end-of-course evaluations. Journal of Higher Education, 58(6), 704-716.

McKeachie, W. J. (1994). Teaching tips (9th ed.). Lexington: D.C. Heath & Company.

Miron, M., & Segal, E. (1986). Student opinion on the value of student evaluations. Higher Education, 15, 259-265.

Murdoch University (1998). Guidelines for the presentation of a teaching portfolio. http://wwwadmin.murdoch.edu.au/hr/traindev/teachportfolio.html

Ory, J. C. (1990). Student ratings of instruction: ethics and practice. In M. Theall & J. Franklin (Eds.), Student ratings of instruction: Issues for improving practice. New directions for teaching and learning, (No. 43, pp. 63-74). San Francisco: Jossey-Bass.

Pritchard, R. D., Watson, M. D., Kelly, K., & Paquin, A. R. (1998). Helping teachers teach well. San Francisco: The New Lexington Press.

Remmers, H. H., & Gage, N. L. (1955). Educational measurement and evaluation. (Revised Edition). New York: Harper & Brothers.

Seldin, P. (1984). Changing practices in faculty evaluation: A critical assessment and recommendations for improvement. San Francisco: Jossey-Bass.

Travers, R. M. W. (1981). Criteria of good teaching. In J. Millman (Ed.), Handbook of teacher evaluation (pp. 14-22). Beverly Hills, California: Sage.

Please cite as: Ballantyne, C. (1999). Showing students you're listening: Changes to the student survey system at Murdoch. In K. Martin, N. Stanley and N. Davison (Eds), Teaching in the Disciplines/ Learning in Context. Proceedings of the 8th Annual Teaching Learning Forum, The University of Western Australia, February 1999. Perth: UWA. http://lsn.curtin.edu.au/tlf/tlf1999/ballantyne.html

