Teaching and Learning Forum 97

How can we get useful unit feedback from students?

Peter Radloff
Behavioural Health Science, School of Nursing
Curtin University of Technology


Introduction

Student evaluation of units and of instruction is one important source of information about the effectiveness of a course. The long history and extensive research literature on the use of student evaluation is easily accessible through previous coverage of this work in Aleamoni (1987), Doyle (1975, 1983), Falk and Dow (1971), Gibbs and Habeshaw (1989), McKeachie (1986), Marsh (1987), Miller (1972, 1975) and Ramsden (1992), to mention only a few sources. (There have also been presentations at previous Teaching Learning Fora: Hutcheson (1992), Wijesundera and Hicks (1993), and Crawford and Leitmann (1995).) Centra's (1993) evaluation is worth drawing attention to, since it presents a context and a reminder of the history and current importance of student feedback. Unless feedback becomes part of the fabric of university functioning, it will remain an addendum to teaching programs, or a "for promotion purposes only" process. One way in which integration has been attempted is through a requirement that all units be subject to an official student feedback process. This has been done in many universities, but it has also met resistance from staff with entrenched attitudes against such a process, who would argue that such feedback is an unnecessary burden and not useful.

Despite well over a thousand studies supporting the usefulness of student feedback, a relatively large number of staff still seem to have reservations about its value. It is interesting that this very extensive evidence coexists with sceptical attitudes among many staff, whereas the relative absence of studies on the measures, methods and outcomes of research productivity and quality receives no comment. Research will continue to be awarded unconditional positive regard even though there is little work on the reliability and validity of the methods used to evaluate research activity, and even though the link between research and the core function of universities (which administrators under economic duress now say has to be teaching) may be zero or negative in terms of its contribution to student progress (Astin, 1993; Pascarella and Terenzini, 1991; Volkwein and Carbone, 1994).

For student feedback to become commonplace it needs to be recognised as an intrinsic component of any teaching program, built into unit outlines and into the timetable, and used in planning learning activities and assessment procedures. It also needs to be made so easy to use that it can form part of everyday teaching activities. This can be done by having a variety of feedback methods available, ranging from the formal questionnaire, through the one-minute paper (or half-sheet response), to methods such as nominal group techniques, interviews and journalling. The organised use of such methods can yield detailed material on the impact of units, which can also be diagnostic and prescriptive (see Gibbs and Habeshaw, 1989; Angelo and Cross, 1993).

The fact that formal student evaluation of teaching has been mandated in many universities does not mean that other methods are no longer needed and should not be used. A single measurement occasion during a semester may not provide useful feedback, and it will not allow changes to be made during the semester; a number of different forms of feedback are to be preferred. Adopting the quality agenda makes it possible to integrate student feedback benchmark measures into a Total Quality Improvement (TQI) process. This will also contribute to staff portfolios and provide a continuous record of teaching effectiveness using these and other measures. The basic approach in TQI follows a behavioural research design, since the measurement of a benchmark provides the baseline and indicates which intervention strategies lead to improvement (Macchia, 1993). Used by individual instructors, this provides personalised information relating to their particular situation, and is very likely to enable them to improve their particular teaching impact. It is this which promises to be of most value in supporting improvement.
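
The baseline-and-intervention logic can be illustrated with a minimal sketch (in Python, with invented ratings and a hypothetical function name; nothing below is part of Macchia's method or of any particular feedback form): mean ratings collected before a teaching change serve as the benchmark against which later ratings are compared.

    from statistics import mean, stdev

    def rating_change(baseline, follow_up):
        """Compare mean ratings (a 0-10 scale is assumed) before and
        after a teaching intervention; the baseline is the benchmark."""
        return {
            "baseline_mean": round(mean(baseline), 2),
            "follow_up_mean": round(mean(follow_up), 2),
            "shift": round(mean(follow_up) - mean(baseline), 2),
            "baseline_sd": round(stdev(baseline), 2),
        }

    # Hypothetical weekly ratings for one unit, before and after
    # introducing, say, half-sheet responses in lectures.
    before = [5, 6, 4, 6, 5, 7, 5]
    after = [7, 6, 8, 7, 6, 8, 7]
    print(rating_change(before, after))

A shift that persists across successive measurement occasions, rather than a single favourable reading, is what the TQI design would treat as evidence of improvement.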

One observation about the feedback process, made by Centra (1993), is that there is surprisingly little research attempting to locate the sources of student progress and success in institutional management and in the structural-functional procedures for the day to day operation of the university. Once one recognises that the leadership, and its vision, has a great deal to do with the success of a university, the precise way in which that leadership impacts upon the core functioning of the university needs to be addressed. Any factor contributing to the climate of an institution deserves attention (Radloff, 1995), since it will influence educational outcomes.

A number of staff also express dissatisfaction with the psychometric emphasis of current student evaluation forms. They frequently elect to include a separate sheet asking for specific written feedback on elements of the unit which they consider important (see, for example, Morrison, 1995). Some feedback systems do include written components, but these tend to cover general issues. The argument here is for this written feedback to be expanded to include both an agreement scale and the opportunity to provide written comment on the same items. This can also be extended to cover the whole semester. Obtaining scale judgements formally and informally throughout the semester may also help students become more accustomed to making such judgements on issues of concern to them, and better able to evaluate their own performance on assignment assessment forms which use the same format (Radloff, 1996).

Structure of evaluation instruments

Most student feedback forms involve a set of questions to be answered using a rating scale, most commonly a 5 point agreement (or Likert) scale. Good measurement practice argues against using anything less than a seven point scale, while an even-numbered (six or eight point) scale would avoid bias towards a non-committal midpoint; both considerations may be met by using a ten point scale.

The particular unit feedback form considered here has evolved over years of use to the stage where it is now used in conjunction with the usual SAT (Student Assessment of Teaching) and the CEQ (Course Experience Questionnaire) for summative assessment of units. This is useful since it becomes possible to obtain statistical results based not just on the standard questionnaires, but also to relate those outcomes to the results from questions more directly related to the particular unit being assessed. This often allows diagnostic conclusions which are helpful to an instructor, and suggests strategies for improving the impact of the teaching programme.
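
As a hedged illustration of relating the two sets of results (the item names and figures below are invented for the example, not drawn from the form described here), per-item means from the unit-specific form can be correlated with SAT or CEQ summary scores across semesters:

    def pearson(x, y):
        """Plain Pearson correlation between two equal-length sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5

    # Hypothetical semester-by-semester means: a unit-specific item
    # (e.g. "the workbook helped my learning") against the SAT overall score.
    unit_item = [6.2, 7.1, 5.8, 7.9, 8.3]
    sat_overall = [3.4, 3.9, 3.1, 4.2, 4.4]
    print(round(pearson(unit_item, sat_overall), 2))

A strong correlation of this kind is what licenses the diagnostic reading of the standard scores in terms of particular unit elements.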

The following is an example of questions from the current version of this student assessment form:

[Example of questions from the current student assessment form]

This shows that the space for comments accompanies an agreement scale for the same item. Students may comment as well as respond to the scale by circling a number on the eleven point scale (although occasionally they circle the extreme descriptor itself). It is unusual for all the spaces available for comment to be filled in, but this itself is a measure which can yield information on the reaction to the unit on any particular occasion. Once this form is in use, the same structure can also be used less formally.
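
One way to use the comment spaces as a measure in their own right (a sketch under an assumed data layout; real forms would first need transcription) is simply to tally, for each item, the proportion of returned forms on which the space was used:

    # Each returned form maps item labels to a (rating, comment) pair;
    # an empty comment means the space was left blank. Data hypothetical.
    forms = [
        {"lectures": (8, "well paced"), "readings": (5, "")},
        {"lectures": (7, ""), "readings": (6, "too long")},
        {"lectures": (9, "good examples"), "readings": (4, "")},
    ]

    for item in forms[0]:
        used = sum(1 for f in forms if f[item][1].strip())
        print(f"{item}: comments on {used}/{len(forms)} forms")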

One such use incorporates it into a half-sheet response form. Printed ahead of a class, this allows students a written answer, but also includes a judgement scale. The same can be done quite informally in a tutorial, for example, where students can be asked what rating they would award an activity they have just completed. This has proven a useful method of obtaining rapid feedback on activities, cooperative learning methods, or CATs (Classroom Assessment Techniques) (Angelo and Cross, 1993). It also appears to improve the learning climate, since it is quite clear that student opinions are being publicly sought and taken into account. It has the added advantage of stressing to students the teacher's concern for affective factors. Another strategy which appears to have a quite positive effect is to give all the students involved access to the results of the feedback, preferably by distributing a summary version.

One limitation of the usual way in which student evaluation is managed is that there is no way in which the anonymous forms may be linked to the actual performance of students on the unit. Claims may be made that it is the really high performers who rate, say, a lecture highly, while other students may not show such enthusiasm. Some information is available suggesting that such a relationship may be studied, and this will be reported on. One matter of concern to those relying on the CEQ for evaluating the relative merits of courses of instruction across Australian universities is that preliminary factor analyses of representative student samples suggest it is not possible to duplicate the factor pattern obtained by Ramsden (1992) and Long (1992). There may well be an argument, based on the very nature of student evaluation processes, for feedback to be individualised for particular units. Relying on factor structures which are ephemeral, and which may vary with the instruction format, may be unsafe. Examining such results in the context of direct information on the response to particular elements of a unit may lead to more stable conclusions.
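
The stability question can at least be probed empirically. The following is a hedged sketch of such a check, assuming item responses assembled into a respondents-by-items matrix (the scikit-learn routine and the random placeholder data are illustrative stand-ins, not the analyses cited above); it fits the same factor model to split halves of a sample and compares the loadings:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 10))  # placeholder for real item responses

    half_a, half_b = X[:200], X[200:]
    load_a = FactorAnalysis(n_components=3, random_state=0).fit(half_a).components_
    load_b = FactorAnalysis(n_components=3, random_state=0).fit(half_b).components_

    # A stable factor pattern should give high loading correlations,
    # factor by factor (allowing for sign flips).
    for k in range(3):
        r = np.corrcoef(load_a[k], load_b[k])[0, 1]
        print(f"factor {k + 1}: split-half loading correlation {r:+.2f}")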

Reflecting on the nature of student feedback questionnaires makes it clear that most of them reference only the immediate instructional situation. This is unfortunate when other factors can be shown to have a more dominant role in determining student success. The question of how this occurs is compelling: how would it be possible for teachers in higher education to really deliver gains in learning, and to encourage and empower each other and their students? If that could be achieved, the attraction of implementing appropriate strategies would be irresistible. Various estimates suggest that in-class activities contribute less than one quarter of the variance contributing to student success. What academics need to do is start paying attention to all the other variables to which students are exposed, which serve to promote, or even to obstruct, their learning. It is worth reading Peterson (1988), who finds it puzzling that organisations devoted to fostering student learning pay no attention to which aspects of their practices foster learning. Almost no research examines the relationship between organisational variables and learning outcomes treated as dependent variables. Correlations between, for example, research climates and outcomes are important where they operate against student learning, but how about documenting which forms of structure promote learning? Were one to take the quality initiative seriously, one would have to assign students a foundational role in setting and supporting the mission and purposes of the institution, especially with regard to the ways in which those impinge upon their own progress.

Conclusions

Finding more convenient ways to generate student feedback on teaching is a worthwhile goal. A number of suggestions have been made which could prove useful to others.

It seems important to repeat the observation that in-class activities contribute only about 25% of the variance in student progress. Some strategies used in teaching will contribute to the influence which out-of-class activities exert; but other variables must be included so that a true evaluation of the influence of a university programme can be made.

Arranging for regular feedback, and taking feedback results seriously, models the real world, provides credibility, and should promote active, involved students. Such involvement has been shown to be one major factor in student success (Astin, 1993).

References

Aleamoni, L. M. (Ed.). (1987). Techniques for evaluating and improving instruction. New Directions for Teaching and Learning, no. 31. San Francisco, CA: Jossey-Bass.

Angelo, T. A. and Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers, (2nd. ed.). San Francisco, CA: Jossey-Bass.

Astin, A. W. (1993). What matters in college: Four critical years revisited. San Francisco, CA: Jossey-Bass.

Centra, J. A. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. San Francisco, CA: Jossey-Bass.

Crawford, F. and Leitmann, S. (1995). Masqued meanings: Student evaluation of teaching. In Summers, L. (Ed), A Focus on Learning, pp. 42-52. Proceedings of the 4th Annual Teaching Learning Forum, Edith Cowan University, February 1995. Perth: Edith Cowan University. http://lsn.curtin.edu.au/tlf/tlf1995/crawford.html

Gibbs, G. and Habeshaw, T. (1989). 53 interesting ways to appraise your teaching, (2nd ed.). Bristol: Technical and Educational Services.

Hutcheson, K. (1992). Student assessment of teaching: The performance indicators. In Latchem, C. and Herrmann, A. (Eds.). Higher Education teaching and learning: The challenge. (pp. 75-82). The proceedings of the Teaching Learning Forum. Perth, WA: Teaching Learning Group, Curtin University of Technology.

Long, M. (1992). Report of responses to the course experience questionnaire: 1992 graduates. Report prepared for the Graduate Careers Council of Australia. Canberra: Australian Council for Educational Research.

Macchia, P. (1993, May). Assessing educational processes using total-quality-management measurement tools. Educational Technology, pp. 48-54.

Marsh, H. W. (1987). Students' evaluations of university teaching: Research findings, methodological issues and directions for future research. International Journal of Educational Research, 11, 255-278.

McKeachie, W. J. (1986). Teaching tips: A guidebook for the beginning college teacher, (8th ed.). Lexington, MA: Heath.

Miller, R. I. (1972). Evaluating faculty performance. San Francisco, CA: Jossey-Bass.

Miller, R. I. (1975). Developing programs for faculty evaluation. San Francisco, CA: Jossey-Bass.

Morrison, A. (1995). Analysing qualitative responses on student evaluations: An efficient and effective method. In Z. A. Zelmer (Ed.), Higher Education: Blending tradition and technology. Proceedings of the 1995 Annual Conference of the Higher Education and Research Development Society of Australasia. Canberra, ACT: HERDSA.

Pascarella, E. T. & Terenzini, P. T. (1991). How college affects students: Findings and insights from twenty years of research. San Francisco, CA: Jossey Bass.

Peterson, M. W. (1988). The organizational environment for student learning. In J. S. Stark & L. A. Mets (Eds.), Improving teaching and learning through research. New Directions for Institutional Research, no. 57. San Francisco, CA: Jossey-Bass.

Radloff, P. (1995). Students as allies in the learning community: A question of design and structure. In Z. A. Zelmer (Ed.), Higher Education: Blending tradition and technology (pp. 610-614). Proceedings of the 1995 Annual Conference of the Higher Education and Research Development Society of Australasia. Canberra, ACT: HERDSA.

Radloff, P. (1996). Can assignment assessment sheets promote learning and performance? In Learning within and across disciplines (pp. 135-139). Proceedings of the 5th Annual Teaching Learning Forum. Perth, WA: Academic Services Unit, Murdoch University. http://lsn.curtin.edu.au/tlf/tlf1996/radloffp.html

Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.

Volkwein, J. F. & Carbone, D. A. (1994). The impact of departmental research and teaching climates on undergraduate growth and satisfaction. Journal of Higher Education, 65(2), 147-167.

Wijesundera, S. and Hicks, O. (1993). Student evaluation of teaching. In Herrmann, A. and Latchem, C. (Eds.). Higher Education teaching and learning: Sharing quality practice. (pp. 45-50). The proceedings of the Teaching Learning Forum. Perth, WA: Teaching Learning Group, Curtin University of Technology.

Please cite as: Radloff, P. (1997). How can we get useful unit feedback from students? In Pospisil, R. and Willcoxson, L. (Eds), Learning Through Teaching, pp. 285-289. Proceedings of the 6th Annual Teaching Learning Forum, Murdoch University, February 1997. Perth: Murdoch University. http://lsn.curtin.edu.au/tlf/tlf1997/radloff.html

