Lessons learned from using students' feedback to inform academic teaching practice
Shelleyann Scott and Tomayess Issa
Curtin Business School
Curtin University of Technology
This paper outlines the process that an academic within the business technology discipline adopted in using students' feedback to make ongoing improvements to the curriculum and teaching practices. Sustained action research, incorporating a critical friend to facilitate reflection and problem solving, was adopted. The outcomes included significantly higher scores on the student feedback instrument in comparison with the school average, and increases in student satisfaction levels across the five semesters. For the academics involved, the outcomes included increased feelings of empowerment in making sound teaching decisions; increased satisfaction in observing and supporting good teaching; assessments that were more explicit, well structured, and cognitively challenging; and course work and assessments with teamwork and critical thinking skills embedded.
Purpose of the research
This article outlines the relationship between an academic's action research program focusing on using students' feedback to improve teaching and assessment practices; and the impact on students' satisfaction with their learning experiences. The research describes a systematic approach to interpreting and reflecting on students' feedback, using a mentoring/critical friend to assist and support the academic in developing action plans and implementing changes.
Marsh (1987), Ramsden (2003) and other researchers advocate using student feedback as an informative tool to guide reflection and development of teaching practice, learning experiences and assessments. Ramsden states that "[e]valuating teaching concerns learning to teach better and exercising control over the process of learning to teach better". He took a positive view of students, wherein he posited that "students usually try to please their lecturers ... they adapt to the requirements they perceive teachers to make of them" (Ramsden, 2003, p. 62). Marsh stated that "student ratings are clearly multidimensional, quite reliable, reasonably valid, relatively uncontaminated by many variables often seen as sources of potential bias, and are seen to be useful by students, faculty, administrators" (Marsh, 1987, in Richardson, 2005, p.392). Not all academics are positive about students' evaluation of teaching, with some remaining sceptical about the value of their feedback (Richardson, 2005). Richardson posited "resistance to the use of student ratings has been expressed based on the ideas that students are not competent to make such judgements or that student ratings are influenced by teachers' popularity rather than their effectiveness" (2005, p. 407) although Marsh's research (1984; 1987) refutes these opinions.
Others feel that student evaluations are too closely tied to a market driven 'quality agenda' with inbuilt concepts of accountability for the lecturers involved (Johnson, 2000). She stated that "[s]tudent feedback is thus a means to monitor and evaluate activities and processes of teaching and learning such that evaluation practices and all individuals concerned are subject to internal institutional control procedures and external scrutiny and judgement" (p.421). Some indicated that students are not able to provide objective analyses of their learning, or suspect that students' perspectives are closely aligned with their grades (Johnson, 2000; Bhattacharyya, 2004), although Marsh "supported the validity of SETs" as a result of his investigations of the relationship between students' attainment and their SETs across different class groups (1987, in Richardson, 2005, p. 390). Richardson, referring to the work of Spencer and Schmelkin (2002, in Richardson, 2005), identified that students need to be kept informed of how their feedback is being used, otherwise they tend to dismiss these feedback mechanisms. Ramsden (2003) also advocated using student feedback as one source of valuable information that academics can use in becoming evidence-based reflective practitioner-researchers.
Professional development using student feedback as a key source
Richardson (2005, p.392), referring to Kember's (2002) work, indicated that the "routine collection of students' evaluations does not in itself lead to any improvement in the quality of teaching". Hence, merely obtaining student feedback data does not guarantee improvement in teaching; rather, academics must use the data to inform their practices and to enhance their understandings of, and beliefs about, teaching. Additionally, students need to know that their data are being used for development if maximum benefit is to occur (Richardson, 2005). Roche and Marsh (2002) indicated that using student feedback can assist and support the professional development of teachers, particularly when appropriate guidance and counselling are incorporated into the process. Johnson (2000, p.423) provided a contrasting perspective, commenting that "[t]he questionnaire format implies that evaluation process is not a reflective, mutual learning experience, nor one in which a student's or lecturer's individual views and reasoning have high value". She maintained that professional development of academics using student feedback actually de-skills the individual, as "s/he might ... be confused if results collected are not compatible with his or her own interests in or views about 'good teaching'" (Johnson, 2000, p.424). Ramsden took a different view:
These [accomplished] teachers do not segregate practice and theory; on the contrary, they seek productive relations between them to establish better ways of helping their students to learn .... The key to professionalism is learning how to fuse theory and practice. ...For most lecturers, this will mean working with people who are active in research and whose approach to staff development is driven by a spirit of stimulating inquiry. ... Good academic development engages us in the excitement of discovery and makes learning about teaching as exhilarating as doing research. (Ramsden, 2003, p.245)
The action research process in this study
Mills (2000), in his guide for the teacher researcher, outlined the benefits of an action research approach to the professional development of teachers: "[a]ction research is largely about developing the professional disposition of teachers, that is, encouraging teachers to be continuous learners - in their classrooms and in their practice ... [as it] provides teachers with the opportunity to model for their students how knowledge is created" (Mills, 2000, p.11). Stringer (2004) posited that action research was extremely powerful in supporting teachers in determining solutions to their own teaching-related problems. Mills expanded on this, stating that data collected through action research were "persuasive" as they provided "insights into the impact of an intervention on student outcomes" (2000, p.13). Mills outlined that action research was designed to be embedded as part of normal reflective practice and not to be considered another onerous task; as he wryly stated, "[t]eachers already have too much to do and not enough time in which to do it!" (p.16). He encouraged the "investment in time", particularly as the process "may also produce unexpected positive outcomes by providing opportunities for collaborative efforts with colleagues who share a common 'area of focus'" (p.16).
Mills' Dialectic Action Research Spiral was adopted by the authors for use in this study (Mills, 2000, p.20). The key aspects of the action research spiral are outlined in Figure 1. This model was described by Mills as "research done by teachers and for teachers and students, not research done on them, and as such is a dynamic and responsive model that can be adapted to different contexts and purposes" (p.19). Woolcott (1989, in Mills, 2000, p.19) described it as "providing 'provocative and constructive ways' of thinking about their work".
Figure 1: Dialectic Action Research Spiral (Mills, 2000, p.20)
Systematic student feedback as the data source for action research
The student feedback instrument in this study was the Unit Experience Questionnaire (UEQ), a modified form of Ramsden's (1991) Course Experience Questionnaire. The changes were superficial: the identifiers 'lecturer' and 'tutor' were replaced with the uniform 'the staff member', as the instrument was administered in tutorial groups and students were informed that they were rating their tutor (they were able to give feedback on their lecturer in the open-ended sections); the word 'course' was changed to 'unit'; and three open-ended items were included, namely, "what were the best aspects of the unit?", "what aspects of the unit are most in need of improvement?", and "suggest how the staff member could improve the learning experience". Five scales were included in this instrument ('good teaching', 'clear goals and standards', 'appropriate workload', 'appropriate assessment', and 'generic skills'), together with an 'overall satisfaction with the quality of the unit' item (see Appendix 1). Items were measured on a Likert attitudinal scale of 'strongly disagree', 'disagree', 'neither disagree nor agree', 'agree' and 'strongly agree'. The survey was administered by external administrators who explained the purpose of the survey, whom students were providing feedback on, and how their data were going to be used. To assist academics in interpreting and responding to their data, professional developers were employed within a central department in the business school.
Establishing the collaboration
In 2000 a mentoring/critical friend relationship was established between the two authors, in their work roles of university professional developer and lecturer within the business technology school. The lecturer had no formal educational background or qualifications and was interested in enhancing the learning experiences and assessment tasks so as to 1) increase their educational value, 2) integrate professional skills such as teamwork, 3) increase the cognitive demand of the assessment tasks, and 4) increase his/her own understanding of best educational practice. This relationship was the core of the action research process and commenced with a different unit (course) from that which is the subject of this paper, with the Technology Infrastructure unit (this paper) becoming the focus at the end of 2002. The relationship involved periodic discussions and material development related to a) unit content and skills, b) how to teach these, c) how to structure sound continuous assessment, d) how to design tasks requiring varied levels of thinking, and e) how to better involve students in their learning so that they provided the lecturer/professional developer with high quality feedback with which to develop the unit.
Identifying an area of focus
The lecturer had used the UEQ to obtain feedback on other units s/he had taught and this process was carried over in the new second year level Technology Infrastructure 200 unit. From the 2002 initial UEQ data, assessment and workload were targeted as aspects requiring immediate attention (see Figure 1).
Summative evaluation data were collected in this unit, using the Unit Experience Questionnaire at the end of the semester. As in the work of Turhan and associates (2005), greater emphasis was placed on the rich qualitative data obtained, as these comments assisted developments more than the quantitative ratings did.
Analysing and interpreting the data
Figure 1 displays the results of the quantitative items across the five semester period. The professional developer and lecturer would meet periodically and analyse the reports from students. Students' comments would be scanned for constructive suggestions and compared against sound educational practice, the unit objectives, students' outcomes from assessment data, and pragmatic matters. The collaborators also celebrated the successes, those changes that had had a positive impact on students' learning.
Developing an action plan
The collaborators would usually identify two or possibly three aspects requiring attention. After engaging in considerable reflection, discussion, and reading of educational materials, an action plan was established on the approach that would be taken to make changes to pre-existing items or to create/plan an innovation.
Identifying areas of focus
Using both the students' feedback and the authors' own perceptions of what constituted sound practice, the assessments and workload were targeted as requiring change. At the commencement of each semester the lecturer would explain how s/he involved the students in the curriculum review process by actively seeking and using their feedback to develop the materials and learning experiences. S/he provided incoming students with the feedback from the previous cohort and explained the changes that had been made to the unit and assessments as a result of that feedback. Only a couple of aspects were targeted for change each semester, as radically altering the unit in one semester would confound the results. The changes made to the assessments over the course of five semesters included:
- altering the distribution of marks for various assessment tasks;
- incorporating marked peer and self-assessment in group assignments;
- designing learning experiences that supported their group work;
- introducing a conflict resolution process (designed to teach them the processes of mediation and negotiation) if and when problems arose;
- encouraging students to develop an 'agreement' type contract;
- replacing low cognitive level weekly quizzes with fewer, more cognitively demanding mini-tests distributed across the semester;
- altering the question types and format of the final examination. The 'new style' exam was adopted for both the mid-semester and final exams. The exams/tests were divided into three sections, low (multiple choice questions), medium (short answer questions), and high cognitive demand (a short report on a case study-style scenario), with explicit mark weightings;
- providing explicit marking keys for the assignment at the commencement of the unit.
The Teamwork Assessment Task
The students worked in pairs. The assignment task was an investigative report worth 30% of the total mark, and students had 11 weeks to complete it. They were provided with guidelines on how to write a report, how to work together in teams, and self/peer evaluation reflective schedules. They were also encouraged to develop an agreement for the division of labour, and conflict resolution guidelines were provided. The students were to choose one of the following three topics to explore:
- The features of PCI (Peripheral Component Interconnect) Express: what it is and is not, and how it improves computer architecture. Students also had to apply their knowledge to the impact of PCI Express on business in general.
- An investigation of Internet access and the implications of its availability in various locations, for example, aeroplanes. The report was to include a feasibility study encompassing the requirements, advantages, and disadvantages of implementing Internet access in the chosen location, e.g., an in-flight Internet access system for airlines.
- "Nobody uses passwords the way they should; who's got the patience and memory for it? But even biometrics, the holy grail of identification, could be problematic" (Larry Seltzer: http://www.eweek.com/). Students were required to discuss the validity of this statement demonstrating evaluative thought based upon their research and knowledge of this technology
Results and discussion
The results are set out with the Unit Experience Questionnaire data, quantitative and qualitative results outlined together in order to provide a richer picture of the authors' interpretation of students' perspectives.
The Unit Experience Questionnaire (UEQ) results
The UEQ data were collected in the latter weeks of each semester by independent administrators. The unit had at least three different laboratory/tutorial groups each semester, and the data presented are an aggregate of these groups' responses in each semester. In semester 1, 2003, the lecturer had four additional teaching staff assisting with the multiple sessions, and just one additional sessional staff member in subsequent semesters. The response rate was 87% in 2003, 78% in 2004, and 77% in 2005. Tables 1 and 2 outline the percent agreement (which combines 'agree' and 'strongly agree') for each scale, broken down by gender and by English language background respectively; each scale is composed of a number of items, which can be identified in Appendix 1.
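As a minimal illustration of how the reported figures are derived (not the authors' actual analysis code, and using hypothetical responses), percent agreement on a five-point Likert item pools the 'agree' and 'strongly agree' categories:

```python
# Hypothetical responses to one UEQ item, coded 1-5:
# 1 = strongly disagree, 2 = disagree, 3 = neither,
# 4 = agree, 5 = strongly agree
responses = [5, 4, 3, 4, 2, 5, 4, 1, 4, 3]

# Percent agreement combines the 'agree' (4) and 'strongly agree' (5) codes.
agree = sum(1 for r in responses if r >= 4)
percent_agreement = 100 * agree / len(responses)
print(f"{percent_agreement:.0f}% agreement")  # 6 of 10 responses agree
```

The mean scale scores shown in parentheses in Tables 1 and 2 are simply averages of these 1-5 codes across the items in each scale.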
The 'good teaching' scale reports on students' agreement with items relating to the teacher's ability to motivate students, commitment to scaffolding learning by providing good quality feedback, empathy and approachability, and capacity to make the unit interesting (refer to the items in the scale in Appendix 1). The teaching staff received higher ratings (2003 - 66%; 2004 - 69%; 2005 - 68%), with a range of 62%-73%, in comparison with those within the school (58%). The students indicated they appreciated the lecturer's sense of "humour [which] catch the student's attention" ... "always full of energy" and "enthusiasm about teaching". S/he was consistently described as "very enthusiastic which doesn't make the lecture boring". His/her explanation skills were repeatedly singled out as a key positive aspect: "[s/he] explains in a simple way to understand terms", "uses example to help us understand computer jargon". Students readily acknowledged his/her grasp of the content: "definitely knows the information", "[s/he] gives us more information that is outside the textbook that is affecting the world out there". His/her efforts to make the unit more interesting and effective were clear to students: "the lecturer is very helpful and approachable; I can see that [s/he] has put a lot of effort in on this unit to make it interesting and understandable". The teaching techniques used were identified as key strengths: "[s/he] breaks down the examples into very simple terms, and uses analogies to make it easier to understand, [s/he] has a good sense of humour, [s/he] is fair". There were no significant differences between the male and female students' responses, although there were differences between English speaking background (ESB) and non-English speaking background (NESB) students' responses to the Good Teaching Scale (see Table 2).
Students' constructive criticisms included that s/he needed to "slow down speech", to "include more interaction [and] group discussions", and to provide "more examples" and "fewer lecture slides". A drop in agreement from ~72% down to 63% was evident in the semester 1, 2005 data. In that semester there was a problem with a sessional lecturer, which resulted in extremely low satisfaction from students in that particular tutorial group. Their data (50% indicated they were satisfied with the quality of the unit) contrasted sharply with the overall satisfaction of the other groups, who were taught by the unit controller (78%, 92%, 80%, and 78%). This anomalous group, who reacted against their tutor, reported lower levels of agreement with the items, producing an overall lower result across the entire semester's data set. This explains why the upward trend was suddenly checked in the latest data set. This finding endorsed Richardson's (2005, p.389) statement that "there is a high correlation between the ratings produced by students taking different course units taught by the same teacher, but little or no relationship between the ratings given by students taking the same course unit taught by different teachers".
Clear goals and standards
The 'clear goals and standards' scale reports on students' agreement that expectations were clear and explicit to students. It also provides information on the clarity of the materials within the unit. The trend data across the years from 2003 to 2005 demonstrate that students agreed that the goals, standards and materials were clear and made explicit to them (2003 - 62%; 2004 - 67%; 2005 - 75%). This unit compared favourably with the school's performance in this scale (2003 - 52%; 2004 - 55%; and 2005 - 59%). It appears that the effort made by the lecturer to review, refine, and make explicit the requirements for the assessment tasks was evident to the students. The effort made to develop specific criteria and to translate these into explicit marking keys for students, with time and instruction provided in classes, was recognised by students. This was demonstrated by their increasing agreement with the items in this scale. They commented that s/he had provided "clear steps and learning path", and they liked the "regular blackboard updates. ... easy to follow lecture and lab materials", "well structured unit and well structured lectures", and "[s/he] clearly explains what to do in lab and prepare the notes for students".
Appropriate workload

The 'appropriate workload' scale explores students' perceptions of the amount of pressure they were under to complete the work required by the unit. It also reports on time constraints and the amount of work students had to complete. There are three negatively worded items in this scale. This was one of the poorer performing scales in this instrument: only a minority agreed that the time available and the pressure to get through the unit were appropriate (2003 - 43%; 2004 - 41%; 2005 - 43%). Not surprisingly, students who took the unit in an intensive summer school period indicated they were the most pressured and overloaded (38% agreement), with the comment emerging that the unit was "challenging".
Appropriate assessment

The 'appropriate assessment' scale explores students' perceptions of the level of cognitive processing required in their assessment tasks. There are three negatively worded items in this scale. This was the lowest performing scale in this instrument: the students who agreed or strongly agreed with the items in this scale were in the minority (2003 - 23%; 2004 - 27%; 2005 - 22%). Curiously, there were very few comments specifically expressing concerns with the workload or the assessment, so it is unclear why students rated these items so poorly.
Generic skills

The 'generic skills' scale comprises five items that explore the skills students feel were developed as a result of the unit. The skills identified include written communication, risk taking, the ability to work in teams, and analytical and problem solving skills. Of course it is unreasonable to expect that all of these skills would be developed within one unit; however, the items enable academics to gauge which skills students perceived had been enhanced. These ratings fluctuated across the semesters: 2003 - 46%; 2004 - 51%; and 2005 - 47%.
Unit Experience Questionnaire summative data for Technology Infrastructure 200
Figure 1: 2003-5 UEQ data for the unit Technology Infrastructure 200 compared with the School data
| Figure 1 data | Student population | N | No. units |
| --- | --- | --- | --- |
| TI200: S1, 2003 | Metropolitan campus students in the unit Technology Infrastructure 200 | 77 | 1 |
| TI200: S2, 2003 | Metropolitan campus students in the unit Technology Infrastructure 200 | 121 | 1 |
| TI200: S1, 2004 | Metropolitan campus students in the unit Technology Infrastructure 200 | 125 | 1 |
| TI200: S2, 2004 | Metropolitan campus students in the unit Technology Infrastructure 200 | 82 | 1 |
| TI200: S1, 2005 | Metropolitan campus students in the unit Technology Infrastructure 200 | 51 | 1 |
| TI200: Summer school | Metropolitan campus students in the unit Technology Infrastructure 200 | 14 | 1 |
| School of IS: 2003 | Aggregate for the School from the UEQ for Metropolitan campus students | 1742 | 21 |
| School of IS: 2004 | Aggregate for the School from the UEQ for Metropolitan campus students | 1206 | 16 |
| School of IS: 2005 | Aggregate for the School from the UEQ for Metropolitan campus students | 435 | 5 |
Table 1 below displays the gender demographic data for the unit Technology Infrastructure 200 across the years 2003 to 2005. There were no significant differences between male and female responses on any scale.
Table 1: 2003-5 UEQ Gender demographic data for the unit Technology Infrastructure 200
| Year | Gender | N (%) | Good Teaching Scale (% agreement) | Clear Goals and Standards Scale (% agreement) | Generic Skills Scale (% agreement) | Appropriate Workload Scale (% agreement) | Appropriate Assessment Scale (% agreement) | Overall Satisfaction (% agreement) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2003 | Males | 134 (71%) | 64% (3.66) | 62% (3.59) | 47% (3.36) | 45% (3.35) | 21% (2.76) | 78% (3.90) |
| 2004 | Males | 143 (69%) | 64% (3.68) | 64% (3.59) | 49% (3.41) | 41% (3.25) | 24% (2.88) | 79% (3.92) |
| 2005 | Males | 34 (69%) | 60% (3.70) | 65% (3.63) | 44% (3.41) | 49% (3.27) | 20% (2.78) | 71% (4.00) |
| SS | Males | 9 | 69% (3.98) | 70% (3.94) | 48% (3.48) | 38% (3.19) | 22% (3.11) | 100% (4.44) |
| 2003 | Females | 54 (29%) | 72% (3.92) | 71% (3.73) | 47% (3.36) | 40% (3.29) | 28% (2.93) | 81% (4.06) |
| 2004 | Females | 63 (31%) | 73% (3.80) | 73% (3.83) | 52% (3.48) | 44% (3.32) | 34% (3.05) | 86% (4.14) |
| 2005 | Females | 15 (31%) | 69% (3.87) | 82% (3.81) | 54% (3.48) | 45% (3.38) | 27% (2.90) | 87% (4.00) |
| SS | Females | 5 | 80% (3.97) | 85% (4.00) | 46% (3.57) | 38% (3.35) | 20% (2.93) | 100% (4.60) |

Note: values are percent agreement, with mean scale scores in parentheses; SS = summer school cohort.
Table 2: 2003-5 UEQ English speaking background demographic data for the unit Technology Infrastructure 200
| Year | Background | N (%) | Good Teaching Scale (% agreement) | Clear Goals and Standards Scale (% agreement) | Generic Skills Scale (% agreement) | Appropriate Workload Scale (% agreement) | Appropriate Assessment Scale (% agreement) | Overall Satisfaction (% agreement) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2003 | English | 83 (44%) | 61% (3.59)* | 65% (3.59) | 41% (3.20)* | 50% (3.43) | 29% (2.91) | 76% (3.78)* |
| 2004 | English | 82 (40%) | 61% (3.57)* | 64% (3.59) | 44% (3.27)* | 43% (3.25) | 30% (3.02) | 77% (3.81)* |
| 2005 | English | 20 (41%) | 50% (3.48)** | 65% (3.58) | 40% (3.30) | 52% (3.30) | 27% (2.98) | 55% (3.72)** |
| SS | English | 3 | 67% (4.24) | 100% (3.83) | 33% (3.39) | 33% (3.17) | 100% (3.33) | 100% (4.67) |
| 2003 | Non-English | 102 (54%) | 71% (3.86)* | 64% (3.67) | 52% (3.50)* | 38% (3.26) | 1% (2.74) | 81% (4.08)* |
| 2004 | Non-English | 124 (60%) | 70% (3.81)* | 69% (3.71) | 54% (3.54)* | 41% (3.29) | 25% (2.88) | 84% (4.11)* |
| 2005 | Non-English | 28 (57%) | 73% (3.98)** | 76% (3.79) | 53% (3.55) | 43% (3.28) | 19% (2.72) | 93% (4.25)** |
| SS | Non-English | 11 | 80% (3.91) | 80% (4.00) | 60% (3.55) | 45% (3.27) | 0% (2.97) | 100% (4.45) |

Note: an independent t-test assuming equal variances was employed to determine whether the mean score of ESB students was significantly different from the mean score of NESB students, for each scale. Significance levels: * = 0.05, ** = 0.01, *** = 0.001 (95%, 99%, 99.9%). SS = summer school cohort.
Table 2 above displays the English speaking background demographic data for the unit. There were significant differences between English speaking background (ESB) students and non-English speaking background (NESB) student responses to the Good Teaching Scale and Overall Satisfaction items in all three year cohorts.
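The ESB/NESB comparisons above used an independent two-sample t-test assuming equal variances. A minimal sketch of the pooled-variance t statistic (with hypothetical rating data, not the study's actual responses) might look like:

```python
import math
import statistics

def pooled_t(sample_a, sample_b):
    """Independent two-sample t statistic assuming equal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    # Pooled variance combines both samples' spread, weighted by
    # each sample's degrees of freedom.
    sp2 = ((na - 1) * statistics.variance(sample_a)
           + (nb - 1) * statistics.variance(sample_b)) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical mean scale scores for two groups of students.
esb = [3.2, 3.5, 3.1, 3.6, 3.4]
nesb = [3.9, 4.1, 3.8, 4.0, 3.7]
t = pooled_t(esb, nesb)
# |t| is then compared against the critical value for
# na + nb - 2 degrees of freedom at the chosen significance level.
```

A negative t here simply reflects that the first group's mean is lower than the second's; in practice a statistics package would also report the associated p-value.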
Students consistently reported that the unit developed their capacities to 'work as a team member' and their 'problem solving skills'. This feedback was valuable, as the authors had been targeting these two skills in the unit. Students had identified that the class activities and assessments that had been designed to teach and assess these two skills were positively impacting on their development.
Overall satisfaction item
The overall satisfaction item displays a general trend of increase over the period of five semesters: 2003 - 76%; 2004 - 81%; and 2005 - 88%. All (100%) of the summer school group agreed or strongly agreed that they were satisfied with the quality of the unit. It is interesting that students reported such high levels of satisfaction and yet rated assessment and workload at a low level. This finding coincides with Richardson's (2005) comment that, without exploring students' criteria for rating their satisfaction, this one item can become rather meaningless, or worse, misleading. Students in this study did provide considerable amounts of open-ended data which served to qualify this item. What appeared important to students were the personal qualities of the lecturer, such as approachability and a caring attitude towards students, a sense of humour and lively delivery style, and thorough, understandable explanations relevant to real life. The application of the unit to the real world of work was also very important to them.
Differences in responses by gender and English speaking background
At first glance it appears that female students' response data indicated a trend of higher levels of agreement with the items than their male counterparts across all the scales other than the 'appropriate workload' scale; however, when reviewed with other statistical measures, there were no significant differences between females and males. All of the cohorts of non-English speaking background (NESB) students demonstrated significantly different responses from their ESB counterparts. NESB students indicated higher levels of agreement on all of the scales with the exceptions of 'appropriate workload' and 'appropriate assessment'. The mean scores on the 'generic skills' scale were significantly different for ESB and NESB students in the 2003 and 2004 cohorts, with NESB students rating their skill development higher than ESB students.
The collaborative process - productive and constructive partnerships
The lecturer-unit controller was encouraged to keep a journal of his/her experiences and reactions. Of the discussions and support received through this collaborative process, s/he wrote that it "encourages me to introduce new information and adopt new teaching style to make the lecture and lab sessions more interesting and useful to the students". The critical friendship that formed served to empower the lecturer to experiment with various innovations without feeling 'out on a limb' or that s/he might be venturing onto educationally unsound ground. S/he continued:
this relationship between the [professional developer-academic] and me was an excellent example in my school to encourage each lecturer to involve her in their units so as to bring new techniques and approaches to the unit especially unit structure and assessments. Personally, I was very delighted to work with her, as I improved and enhanced my skills such as personal, problem solving and communication. In addition, the most important issue to involving the ... [professional developer-academic was] ... to gain more experience and knowledge from her and to obtain new ... techniques not only for me but also to share with my students as well ... My task ... is not only to present this information to the students, but the most important issue is to make sure that students learn the outcomes of the unit [including the skills we are working on] in order to carry out these skills to other units in their study.
For the [professional developer-academic] this collaboration has been just as beneficial. His/her insights were related to how adult learners within the business technology discipline responded to various innovations; and the confirmation that students appreciated the efforts made by their lecturer to make the learning experience more dynamic, explicit, fair and relevant. There was enjoyment in the intellectual stimulation resulting from shared discussions and problem solving. There was a flow on effect to other academics within the school - encouraging others to engage with teaching and learning challenges. As a result of this collaborative action-research partnership, there have been significant improvements to the unit controller's teaching practice which has paid dividends in increasing his/her knowledge and expertise about teaching.
Action research is a highly satisfactory and effective model for ongoing professional development that has positive outcomes for students and for the academic. A number of significant aspects emerged from this longitudinal study. First, students' feedback was useful in monitoring the success (from students' perspectives) of educational changes to the teaching practices. Students provided considerable constructive advice and were well balanced in identifying aspects that were excellent and those that needed improvement. Their feedback revealed how important the teacher was to them and how they appreciated his/her implementation of various strategies and assessment tasks to make the learning more interesting, fair, and effective. Curiously, students did not provide much information on the assessments. This needs to be followed up in further research.
Second, collaborative action research was a highly productive and constructive process which resulted in learning for both parties. This process opened up opportunities for dialogue between the teaching academic and the professional developer, and between the teaching academic and his/her students. This finding contrasted with Johnson's (2000, p.423) assertion that "[t]he SEQ method of evaluation does not allow students and lecturers to discuss, evidence, explain, justify, negotiate, or gain new insights into their own or the others' views, interests, values and assumptions". The authors found that using students' feedback, and discussing with the next cohort the changes that had been made as a result of that feedback, actually acted as a positive point of initial contact with students. It impressed on students that they were valued in the curriculum review process and that their feedback was being used, and it served to model to students the desirability of continuing learning and improvement and how that relates to professional practice.
A third significant aspect to result was that the action research process was empowering to the individual academic. Unlike Johnson's (2000, p.424) view that "[t]he lecturer is effectively de-skilled - beyond the extent to which s/he might also be confused if results collected are not compatible with his or her own interests in or views about 'good teaching'", the point of engaging in a collaborative critical-friend relationship was to provide a supportive, open exchange which was informative and exploratory, focused on best practice.
A fourth aspect which warrants further investigation was how little change resulted in the 'appropriate assessment' and 'appropriate workload' scales. It seemed strange that the unit and teaching reviews over the past five semesters had overtly targeted the development and refinement of assessments, and yet no positive change trends were evident in the UEQ data. Given the greater number of negatively worded items represented in these two scales, the authors conjectured that some 'English as a second language' (ESL) students, who were the majority group, may have experienced difficulty in correctly interpreting the intent underpinning these items, thereby creating lower than expected ratings. If this was the case it would endorse the findings of Weems et al. (2003), who posited that ESL students may not correctly interpret the intent of items, thereby affecting their responses. It may be useful for the academic to draw upon other sources of data which may explain this phenomenon.
The wider application
Although this was a longitudinal case study of one academic in a business-oriented faculty, the lessons learned can translate to other university academics' situations and inform professional development. Engaging in reflective action research is empowering for the individual and provides more opportunities for learning about teaching than working alone. The benefits to the organisation included increased enrolments in this unit and recognition for a school that fosters good practice. The action research process was not onerous, did not consume an excessive amount of time, ensured that the refinements were informed by the educational literature, and provided a supportive environment for experimentation.
As Australian education is a significant service export industry with increasing numbers of full-fee paying students, there have been calls to increase the quality of higher education and to produce better graduates. This approach has value in meeting the needs of the individual academic, while having a spin-off advantage to the organisation in addressing these 'quality' issues. As more academics engage with this type of process, the learning experiences of students will improve and academics' understandings of best practice will similarly be enhanced.
Acknowledgements

This research would not have been possible without the considerable statistical expertise of our research associate, Mr Simon Kaars Sijpesteijn, who undertook the data collection and processing over the five year period.
References

Arends, R. (2004). Learning to teach (6th ed.). Boston: McGraw-Hill Companies Inc.
Bhattacharyya, N. (2004). Student evaluations and moral hazard. Journal of Academic Ethics, 2, 263-271.
Johnson, R. (2000). The authority of the student evaluation questionnaire. Teaching in Higher Education, 5(4), 419-434.
Mills, G. E. (2000). Action research: A guide for the teacher researcher. Upper Saddle River, NJ: Pearson Education.
Ramsden, P. (1991). A performance indicator of teaching quality in higher education: the Course Experience Questionnaire. Studies in Higher Education, 16, 129-150.
Ramsden, P. (2003). Learning to teach in higher education (2nd ed.). London: Routledge Falmer.
Roche, L. A. & Marsh, H. W. (2002). Teaching self-concept in higher education: Reflecting on multiple dimensions of teaching effectiveness. In N. Hativa & P. Goodyear (Eds.), Teacher thinking, beliefs and knowledge in higher education. Dordrecht: Kluwer.
Steffy, B. E., Wolfe, M. P., Pasch, S. H. & Enz, B. J. (Eds.) (2000). Life cycle of the career teacher. Thousand Oaks: Kappa Delta Pi and Corwin Press, Inc.
Stringer, E. T. (2004). Action research in education. Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.
Turhan, K., Yaris, F., & Nural, E. (2005). Does instructor evaluation by students using a web-based questionnaire impact instructor performance? Advances in Health Sciences Education, 10, 5-13.
Weems, G. H., Onwuegbuzie, A. J., Schreiber, J. B., & Eggers, S. J. (2003). Characteristics of respondents who respond differently to positively and negatively worded items on rating scales. Assessment and Evaluation in Higher Education, 28(6), 587-607.
Appendix 1: Summative Evaluation Instrument (end of unit feedback)
The Unit Experience Questionnaire Scales
Good Teaching
- The staff member motivated me to do my best work
- The staff member put a lot of time into commenting on my work
- The staff member made a real effort to understand difficulties I might be having with my work
- The staff member normally gave me feedback on how I was going
- The staff member was extremely good at explaining things
- The staff member worked hard to make this unit interesting

Clear Goals and Standards
- It was always easy to know the standard of work expected
- I usually had a clear idea of where I was going and what was expected of me in this unit
- It was often hard to discover what was expected of me in this unit
- The staff member made it clear right from the start what was expected of students
- The content of this unit clearly related to the unit outline
- The topics in this unit were presented in a logical sequence
- The unit materials provided were relevant and concise

Appropriate Assessment
- To do well in this unit all you needed was a good memory
- The staff member seemed more interested in testing what I had memorized rather than what I had understood
- Too many questions asked were just about facts
- The assessment methods employed in this unit required an in-depth understanding of the unit content

Appropriate Workload
- The workload was too heavy
- I was generally given enough time to understand the things I had to learn
- There was a lot of pressure on me as a student in this unit
- The sheer volume of work to be got through in this unit meant that it could not all be thoroughly comprehended

Generic Skills
- The unit developed my problem-solving skills
- The unit sharpened my analytical skills
- The unit helped me to develop my ability to work as a team member
- As a result of this unit I feel confident about tackling unfamiliar problems
- The unit improved my skills in written communication

Overall Satisfaction
- Overall, I was satisfied with the quality of this unit
Authors

Shelleyann Scott is the Divisional Coordinator of Teaching and Learning for the Business Division at Curtin University of Technology. Shelley has experience in business, government, medical research and in hospitals. Her research interests include professional development within education, business and government environments; teaching strategies; the use of information technology to support ongoing reflection and development for teachers and students; and utilising student evaluations to improve quality teaching and learning practices.
Dr Shelleyann Scott, Curtin Business School, Curtin University of Technology, GPO Box U1987, Perth WA 6845, Australia. Email: S.Scott@curtin.edu.au
Tomayess Issa is a lecturer in Information Systems at Curtin University of Technology. She is a Unit Leader for internal and offshore units focusing on technology infrastructure and design. Her research interests include web technology and human factors. She is also interested in establishing teaching methods and styles that identify the positive aspects of students' learning experiences and the problems they face during the semester, in order to enhance the positives and address concerns.
Ms Tomayess Issa, Curtin Business School, Curtin University of Technology, GPO Box U1987, Perth WA 6845, Australia. Email: T.Issa@curtin.edu.au
Please cite as: Scott, S. and Issa, T. (2006). Lessons learned from using students' feedback to inform academic teaching practice. In Experience of Learning. Proceedings of the 15th Annual Teaching Learning Forum, 1-2 February 2006. Perth: The University of Western Australia.
Copyright 2006 Shelleyann Scott and Tomayess Issa. The authors assign to the TL Forum and not for profit educational institutions a non-exclusive licence to reproduce this article for personal use or for institutional teaching and learning purposes, in any format (including website mirrors), provided that the article is used and cited in accordance with the usual academic conventions.