Teaching and Learning Forum 2001
In an attempt to bridge the gap between student feedback on teaching and the hoped-for improvement in teaching and learning, the University of Western Australia (UWA) and Adelaide University (AU) are collaborating on a two-year project funded by the Committee for University Teaching and Staff Development (CUTSD). The project aims to provide academic staff with suggestions for targeted teaching strategies based on their students' feedback. Details of the project's plans were presented at the Higher Education Research and Development Society of Australasia conference (Black, Cannon & Hicks, 2000). This paper provides a brief outline of the project and discusses some of the challenges that have surfaced and the lessons learnt so far, in the hope that the experience of this collaboration will aid future efforts of this nature.
The questionnaire used at UWA to obtain student feedback on teaching is known as Student Perceptions of Teaching (SPOT); that used at AU is the Student Evaluation of Teaching (SET). Each instrument has a 5-point rating scale, and academics can choose from a bank of items or questions. SPOT has three generic items that are included in most questionnaires and another eleven 'optional default' items for particular teaching situations, while SET has seven 'standard' items that are included in all teaching evaluation questionnaires. In both SPOT and SET, academics also have the opportunity to include new items, i.e. items they design themselves. The similarity of the two instruments is what made collaboration between the universities possible.
The main thrust of the project is to encourage teaching improvement based on student feedback for each academic staff member who obtains feedback on his or her teaching. The project aims to identify possible areas for development, i.e. the teaching/learning aspects that correspond to low-rated items. Strategies that may assist in improving teaching in those aspects will then be suggested to the academic. Wherever possible, the teaching strategies will be linked to existing resources in the staff development centres of the respective institutions. Other appropriate resources, such as information on the Web, will also be included. The four major components of this project are:
Some of the challenges that have been faced were typical of any group of people trying to work towards a common goal, e.g. differences in team members' perspectives and skills. Other challenges were due to differences on a larger scale, such as institutional cultures and goals, or were due to unexpected changes in the staff development centres. This paper provides a UWA perspective on these challenges.
The priority of the project relative to other demands within each institution can differ. In one location (most likely the originating university) the project can be given a high priority, while in another it may be just another task, or even a means to a somewhat different end. While a lead applicant (project director) exists, based in one of the institutions, it is not possible for this person to exert the same kind of leadership over team members from the other university. Similarly, the leadership expectations of team members from partner institutions will vary and may not be fully met.
Asynchronous attention to the project is also a challenge. Without a full-time project team presence in both localities, while one institution is busily working on the project the other may have more pressing matters to attend to. As time passes, this situation can reverse, so that the partner institution is now attending to the project while the institution that was working solidly on it has turned its attention elsewhere. The consequence is a discontinuous development of the project and a certain degree of 'lurching' towards ultimate project completion. This is not an insurmountable problem, but it can be disconcerting.
No project develops in isolation, and the agenda of issues running in each location where project tasks are being addressed impacts on the project. These influences can be as local as the informal interaction between staff working on the project and other workers in the area, or can relate to broad institutional priorities and to the relative status of the units.
Changes in the staff development centres also impacted on teamwork. In one institution the team leader was replaced soon after the project had started, and a few months later yet another staff change was made. Each time team membership changed, a new 'common understanding' had to develop: the 'old guard' had to clarify the reasons for a particular approach. Sometimes questions posed by a new team member, aimed at seeking clarity on the project, led the team to question the project's direction or procedures. This happened because the new member brought a fresh perspective unclouded by previous discussions and issues. The risk for the new team member is that established members might resent a newcomer whose suggestions are immediately adopted and whose profile is quickly lifted.
Further to the clustering, it was obvious that items within a cluster (coming from across the two item banks and within each item bank) could be conjoined if the items essentially addressed the same teaching and learning issue. This further reduced the amount of material that had to be written.
The challenge that arose in this activity was that there was not always agreement within and across the teams on where particular items should be assigned. This was due to team members' differing interpretations of an item and of what that item was asking the student to evaluate. It was further compounded by the fact that team members were not involved in the initial development of the item banks: the way an item is worded can mean quite different things depending on individual interpretation.
A suggestion for the consolidation of the two sets of evaluation items was rejected because items in each set were strongly situated in the culture of the specific university. However, by clustering items from both SPOT and SET into agreed, theoretically appropriate groups, a framework was developed to create a single database of teaching strategies and references that mapped to either question set. This proved to be a more realistic approach than the initial proposal to map resource materials at UWA to the optional default items in the SPOT item bank.
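Such a framework can be sketched as a simple two-level mapping: each item code from either bank is assigned to a cluster, and each cluster points at a shared pool of strategies. The item codes, cluster names, and strategy texts below are invented for illustration; they are not the project's actual data.

```python
# A minimal sketch of the clustering framework: items from both the SPOT
# and SET banks map to shared theoretical clusters, and each cluster maps
# to a single pool of teaching strategies. All codes, cluster names, and
# strategy texts here are hypothetical.

# Item code -> cluster (items from either bank can share a cluster)
ITEM_TO_CLUSTER = {
    "SPOT-03": "clarity_of_explanation",
    "SET-07": "clarity_of_explanation",
    "SPOT-11": "feedback_to_students",
    "SET-02": "feedback_to_students",
}

# Cluster -> strategies, held once in a single shared database
CLUSTER_TO_STRATEGIES = {
    "clarity_of_explanation": [
        "Begin each lecture with an outline of its main points.",
        "Pause after complex material and invite questions.",
    ],
    "feedback_to_students": [
        "Return marked work with brief written comments within two weeks.",
    ],
}

def strategies_for_item(item_code: str) -> list[str]:
    """Look up the strategies suggested for a low-rated item."""
    cluster = ITEM_TO_CLUSTER.get(item_code)
    return CLUSTER_TO_STRATEGIES.get(cluster, [])

# Items from different banks that address the same teaching issue
# resolve to the same suggestions:
assert strategies_for_item("SPOT-03") == strategies_for_item("SET-07")
```

The point of the indirection through clusters is that neither item bank has to change: each university keeps its culturally situated wording, while the strategy material is written once per cluster.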
As information was to be shared across computer platforms, Adelaide University initially chose a PC simulator for its Macintosh computers, but after discussion a Web-based solution was adopted. Rather than exchanging database files in an iterated process of review and update, a single database was constructed at UWA with Web access for all project participants. This site will later allow academics from either university to view the entire collection of strategies, rather than just the strategies suggested as a result of their students' feedback.
The intention is that the appropriate teaching strategy will be extracted from the database during the process of creating a report for each survey of student views. As the process of creating reports differs at each university, the programming solution will need to fit each existing structure and is currently under development. What can be shared is the teaching strategy database's structure and content. That structure can allow for differences in presentation, whether in print, on the Web, or in the institution's preferred reference style; reference style is specific to a particular university culture.
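One way to keep the database structure shared while letting presentation differ is to store each reference as separate fields and leave formatting to each institution's report generator. The sketch below assumes hypothetical field names and style labels; the URL shown is invented, and only the Biggs reference details come from this paper's reference list.

```python
# Sketch: a shared reference record rendered in institution-specific styles.
# Field names, style labels, and the URL are illustrative assumptions.

def format_reference(ref: dict, style: str) -> str:
    """Render one reference record in an institution-specific style."""
    if style == "author-date":   # e.g. for a printed hard-copy report
        return f"{ref['author']} ({ref['year']}). {ref['title']}."
    if style == "hyperlink":     # e.g. for an electronically delivered PDF report
        return f"{ref['title']}: {ref['url']}"
    raise ValueError(f"unknown style: {style}")

ref = {
    "author": "Biggs, J.",
    "year": 1999,
    "title": "Teaching for quality learning at university",
    "url": "http://example.edu/biggs",  # hypothetical URL
}

print(format_reference(ref, "author-date"))
# Biggs, J. (1999). Teaching for quality learning at university.
print(format_reference(ref, "hyperlink"))
# Teaching for quality learning at university: http://example.edu/biggs
```

Because the record, not the formatted string, is what the database stores, the same content can serve both a print report and a PDF with live links without duplication.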
As the SPOT reports at UWA are delivered electronically in Portable Document Format (PDF), Web addresses can be accessed directly from within the PDF file. Adelaide University delivers its SET reports in hard copy, so Web references will need to be accessed manually.
The degree of granularity of a specific teaching strategy reference is an ongoing challenge. Print references can be specific to a range of pages, but not to paragraphs. Web references normally cite a single Web document, or a section of that document only when the Web author provides appropriate links. Audio-visual references are cited in full even when the relevant portion for this project is a subset of the whole. The alternative of generating prescriptive content derived from primary sources was rejected for lack of time within the project's scope; it was also felt that doing so would dilute the authority and academic rigour of the original resource material.
One of the criteria for a team to work well is a "right mix of skills" (Katzenbach & Smith, 1994, p. 47). However, even when a team has the appropriate mix of skilled people, there is no guarantee that it will function well. Our experience indicates that a team member with specific expertise needs to develop the ability to communicate at a level appropriate for the other members of the team in order to achieve consensus. Although decisions related to a particular aspect of the project could be left to the team member with the best knowledge of that aspect, the sense of 'ownership' for all members of a team increases when consensus is sought. Another lesson we have learnt relates to the 'baton change', i.e. when team membership changes. The project leader has to ensure that the new member is updated on project details and is not left to flounder when discussions touch on previous decisions, and also that the value of the work done previously is acknowledged if new directions are taken. The Forming, Storming, Norming and Performing stages of a team's growth (Scholtes, Joiner & Streibel, 1996) are evident in the collective experiences of the UWA team members. The team has experienced, and will probably continue to experience, the roller coaster ride of collaboration.
Despite advances in the world of technology, its limitations could still require a change in the project plan. Even when there are appropriate technological means to achieve a target, time limitations may dictate the adoption of a less than ideal model. We expect not only to maintain a system that will provide academics with automated suggestions for teaching improvement based on student feedback, but also to continue improving that system long after the project period. Thus the 'bottom-up steps' in staff development, or teaching improvement based perhaps initially only on the low-scored items in student ratings, will be evolutionary in nature.
Biggs, J. (1999). Teaching for quality learning at university: What the student does. Buckingham, UK: SRHE and Open University Press.
Black, B., Cannon, R. & Hicks, O. (2000, July). The missing link: Developing the nexus between student feedback surveys and development for teachers. Paper presented at the conference of the Higher Education Research and Development Society of Australasia, Toowoomba, Queensland. http://cleo.murdoch.edu.au/gen/aset/confs/aset-herdsa2000/abstracts/black-abs.html
Braskamp, L., Brandenburg, D. & Ory, J. (1984). Evaluating teaching effectiveness: A practical guide. Newbury Park, CA, USA: Sage.
Hatfield, S. R. (Ed) (1995). The seven principles in action: Improving undergraduate education. Bolton, MA, USA: Anker.
Katzenbach, J. R. & Smith, D. K. (1994). The wisdom of teams. New York, NY, USA: Harper Business.
Marincovich, M. (1999). Using student feedback to improve teaching. In P. Seldin (Ed), Changing practices in evaluating teaching, (pp. 45-69). Bolton, MA, USA: Anker.
Murray, H. G. (1997). Does evaluation of teaching lead to improvement of teaching? The International Journal for Academic Development, 2(1), 8-23.
Ory, J. (2000). Teaching evaluation: Past, present, and future. In M. D. Svinicki (Series Ed) & K. E. Ryan (Vol. Ed), New Directions for Teaching and Learning: No. 83. Evaluating Teaching in Higher Education: A Vision for the Future. San Francisco, CA, USA: Jossey-Bass.
Palmer, P. J. (1998). The courage to teach. San Francisco, CA, USA: Jossey-Bass.
Piccinin, S., Cristi, C. & McCoy, M. (1999). The impact of individual consultation on student ratings of teaching. The International Journal for Academic Development, 4(2), 75-87.
Scholtes, P. R., Joiner, B. L. & Streibel, B. J. (1996). The team handbook (2nd ed). Madison, WI, USA: Oriel Incorporated.
Seldin, P. (1997). Using student feedback to improve teaching. To Improve the Academy, 16, 335-346.
Please cite as: Santhanam, E., Martin, K., Goody, A. and Hicks, O. (2001). Bottom-up steps towards closing the loop in feedback on teaching: A CUTSD project. In A. Herrmann and M. M. Kulski (Eds), Expanding Horizons in Teaching and Learning. Proceedings of the 10th Annual Teaching Learning Forum, 7-9 February 2001. Perth: Curtin University of Technology. http://lsn.curtin.edu.au/tlf/tlf2001/santhanam1.html