Teaching and Learning Forum 2013
Relationships between student satisfaction and assessment grades in a first-year engineering unit
Yu Dong and Anthony Lucey
Monitoring the quality of teaching and learning by universities relies primarily upon a combination of feedback from formal student-evaluation surveys and the long-established measure of student-cohort performance in unit assessments. This study explores major factors that might affect the data provided by these two measures and seeks to identify potential relationships between assessment performance and each of student satisfaction and students' engineering discipline interests. Enabling this study is a large data-set obtained over the last four years from the teaching of a first-year Engineering Mechanics unit delivered twice per year to approximately 350 students per semester, drawn from all engineering disciplines and some multi-disciplinary science courses. Over these years the unit has remained largely stable in terms of unit learning outcomes, syllabus, delivery methods and teaching staff, thereby permitting potentially robust conclusions to be drawn from analyses of the data-set. By interrogating this data-set, three questions are addressed in this paper, namely: (i) Is there a correlation between academic performance and student satisfaction with the unit? (ii) Did a change in assessment weighting affect students' overall performance? (iii) Does student interest, as reflected by students' engineering-oriented discipline choice, affect their overall assessment outcomes? The investigations presented in this paper are preliminary, focusing on the four semesters of 2010 and 2011 and adopting a broad-brush approach, in order to provide direction for more refined and rigorous lines of enquiry using the same data to determine the efficacy of present monitoring systems for teaching and learning. The initial results show that student feedback correlates well with assessment performance provided that cultural bias is removed. Overall, the influence on performance of changing the assessment weighting appears to be minimal, as does the influence of students' engineering discipline interests.
From a practical point of view, the evaluation of student performance relies to a great extent on the assessment regime used in a course: the number, type, sequence and weighting of its assessment components. The establishment of an appropriate assessment regime is a key factor in the design and development of course syllabi and reflects the intended student learning outcomes. Both student-involved classroom assessments (Rodriguez, 2004; Stiggins & Chappuis, 2005) and assessment-based tutorial sessions (Valle et al., 1999) have been found to promote significant achievement gains and to motivate student effort and performance. Moreover, practice tests used as formative assessment (Sly, 1999) and work-sample assessment (Denner, Salzman & Bangert, 2001) have also proven to be viable approaches to enhancing student performance. Self-assessment tests (Boud, 1989; Guzmán et al., 2007) have become popular as a means of facilitating student engagement through self-monitoring and of developing students' skills so as to improve assessment performance. Notwithstanding these advantages, such strategies may have limited value: if self-assessment tests are not included as part of the overall assessment regime, they may simply not be undertaken by assessment-oriented students. Rapid advances in information technology have, in many cases, added computer-aided tests to assessment regimes in place of paper-and-pen multiple-choice tests (Lee & Weerakoon, 2001); however, computer-based testing has been shown to have poorer reliability in grading students, since students tended to score higher in the paper-and-pen tests.
To measure student satisfaction in the setting of the present study, Curtin University developed, and has continuously used since 2006, an online unit survey system called eVALUate. Students are encouraged to participate in university-wide quantitative and qualitative evaluation of the units in which they are enrolled. The eleven quantitative items in eVALUate seek students' level of agreement with statements covering learning outcomes, learning experiences, learning resources, assessment tasks, feedback on work, workload, quality of teaching, self-motivation, best use of learning experiences, effective learning and overall satisfaction. Additionally, students are prompted to provide constructive comments through two qualitative items focused on (i) helpful aspects of the unit and (ii) suggested unit improvements (Oliver, Tucker, Gupta & Yeo, 2008). This survey system is unique in that it builds mainly upon an outcome-focused approach to student learning conducted via a variety of learning experiences, such as traditional face-to-face or online teaching, fieldwork, studios/workshops, tutorials and laboratories (Tucker, Pegden & Yorke, 2012). Pegden and Tucker (2009) reported gender bias in the results of eVALUate but also noted that identifiable differences between male and female aggregated scores for student satisfaction had decreased in more recent semesters. Within their higher level of satisfaction across the university (particularly in the second year of study), males showed especially high percentage agreement in certain courses, suggesting that subject preference influences feedback. Using aggregated eVALUate data, Tucker, Pegden and Yorke (2012) also showed that, contrary to the conventional belief of academics, it was students with high semester weighted averages (rather than underperforming students) who tended to provide feedback, and these students more consistently agreed with the survey items indicating a favourable learning experience. These findings therefore suggest a performance or academic-ability bias in the results of eVALUate unit surveys. Nevertheless, Tucker, Oliver and Gupta (2012) argued that the eVALUate instrument becomes sufficiently robust once it has achieved suitably high student-response rates (typically 35%), validating it as an effective and reliable tool for self-reflection, for rewarding teaching staff through the Teaching Performance Index (TPI), and as a key input to the judging of teaching-award and academic-promotion applications. However, it is emphasised that these conclusions were drawn from university-aggregated data within which there may be discipline differences.
The first-year experience is very important for tertiary-level students, who have to adapt quickly to university life and to a learning style very different from that experienced at high school. First-year units in engineering have very high enrolment numbers because demand for the discipline reflects the employment market in Australia. Accordingly, these units entail considerable educational challenges arising from large classes, such as a low staff-to-student ratio, high individual student workloads, ineffective personal feedback mechanisms, and a lack of one-to-one interaction between students and lecturers. As a result, students' self-motivation, study engagement and overall unit satisfaction can be significantly affected within the outcome-focused educational framework. The study by Krause et al. (2005) of a 2004 cohort of first-year students showed that only half of the respondents were satisfied with their course of study and with perceived teaching quality, gauged through the availability of teaching staff to discuss students' work. This might be explained by the fact that fewer than one-third of respondents felt that teaching staff had an interest in students' progress and were prepared to provide helpful feedback. However, information and communication technology (ICT) tools were recognised as playing a very significant role in changing the traditional forms of learning and interaction in the first year, as seen in the frequent use of online course resources, email contact with peers and lecturers, and learning-aid computer software. Dong et al. (2012) further confirmed that the use of e-quiz and e-review programs benefited first-year engineering students in Engineering Mechanics as supplementary educational tools that facilitate effective learning. Kennedy et al. (2008), on the other hand, found that while many first-year students were highly 'tech-savvy', considerable variation occurred when they used ICT tools in education that required skills and applications beyond the familiar domains of computer, mobile-phone and email usage. Overall, there remains uncertainty over the most effective means of delivering large first-year engineering units; this underlines the importance of monitoring tools that accurately capture the student experience in such units.
Based on a first-year, large-class Engineering Mechanics unit, the main purpose of the current study is to identify factors that might affect the results of monitoring tools such as eVALUate for the discipline of engineering, factors which may be lost in university-aggregated data. In particular, this paper examines whether there is a correlation between student learning outcomes (as indicated by assessment performance/grade) and student unit satisfaction, whether a change of assessment weighting affects students' overall performance, and whether student interest, as identified by chosen majors/disciplines within engineering, has a significant impact on overall assessment outcomes. This paper represents a preliminary investigation of these issues and their inter-relation in order to frame research questions for a more rigorous and focused study, the outcomes of which will improve both the understanding of the monitoring tools and the interpretation of their results as a means to improve the quality of engineering education.
Figure 1: Unit agreement (%) of quantitative items in eVALUate surveys for Engineering Mechanics 100 taught in four consecutive semesters over academic years 2010-2011: (a) Bentley campus and (b) Miri campus.
Miri respondents are seen to give uniformly high levels of agreement, always above 84%, for all the quantitative items. In particular, learning outcomes, self-motivation, best use of learning experiences, effective learning and overall satisfaction attracted scores of over 90% for the period of analysis. By contrast, the levels of agreement from Bentley respondents are much lower than those of their Miri counterparts, with a wider range of scores, typically between 60% and 80%, returned for the majority of the quantitative items. Significant fluctuations are seen in the learning outcomes item, indicating that the Semester 2 cohorts perceived better learning experiences than the Semester 1 cohorts: 90% agreement in the former and 74% in the latter. In addition, the self-motivation of Bentley students decreased monotonically from 87% to 72% between Semester 1, 2010 and Semester 2, 2011, while work feedback and quality of teaching generated the two lowest levels of agreement, always below 65%.
Figure 2: Average unit mark vs. eVALUate overall student satisfaction rate.
The correlation between academic performance (as reflected by the average unit mark) and student satisfaction with the unit is explored in Figure 2. Except for the outlying results of Semester 1, 2010, the data suggest that overall student satisfaction does correlate with average assessment performance in the unit through an approximately linear relationship. This finding holds at both the Bentley and Miri campuses, although the gradient and intercept of the best-fit lines for the two campuses are very different. It is evident from Figures 2 and 3 that while the overall satisfaction rates of Miri students are consistently higher than those of Bentley students, their academic performance is generally lower. This feature implies that Miri students score at unrealistically high levels in the eVALUate surveys owing to a cultural background that broadly views lecturers as occupying an authority role; accordingly, Miri students are reluctant to give feedback that might be perceived as critical. By contrast, the majority of Bentley students have been educated in a western environment that tends to emphasise the rights of the student, who then tends to view lecturers as facilitators in the achievement of individual aspirations. Bentley students may therefore view their learning experiences through a more objective and independent lens. However, we emphasise that when the effects of cultural factors are removed, herein by analysing Miri and Bentley data separately, a correlation between student satisfaction and performance is found at each location.
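To make this analysis concrete, the short Python sketch below illustrates how such a best-fit line and the strength of the linear relationship can be computed; the per-semester values are hypothetical placeholders, not the actual data behind Figure 2.

    import numpy as np

    # Hypothetical per-semester pairs for one campus (illustrative only;
    # not the actual values plotted in Figure 2).
    satisfaction = np.array([74.0, 82.0, 79.0, 85.0])  # eVALUate overall satisfaction (%)
    avg_mark = np.array([55.0, 60.0, 58.0, 62.0])      # average unit mark (%)

    # Least-squares best-fit line: avg_mark ~ gradient * satisfaction + intercept
    gradient, intercept = np.polyfit(satisfaction, avg_mark, 1)

    # Pearson correlation coefficient measures the strength of the linear relationship
    r = np.corrcoef(satisfaction, avg_mark)[0, 1]

    print(f"mark = {gradient:.2f} x satisfaction + {intercept:.2f} (r = {r:.2f})")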
Due to a change in Curtin's assessment policy, the assessment weighting of the final examination was reduced from 60% in 2010 to 50% in 2011, with corresponding increases in the quiz weighting from 15% to 20% and in the laboratory practical test weighting from 25% to 30%; a worked example of the two weighting schemes is sketched below. Figures 3 and 4 explore the effect of these changes on student performance in the unit. Figure 3 shows that the average overall unit mark (in the range 53% to 62%) was not significantly influenced by the change of assessment weighting; this holds for both campus locations.
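As a simple illustration of the two regimes, the following sketch recomputes an overall unit mark under the 2010 and 2011 weightings; the component marks are hypothetical, not drawn from the study data.

    # Overall unit mark as a weighted sum of component marks (each out of 100).
    # The component marks below are hypothetical, for illustration only.
    marks = {"exam": 55.0, "quiz": 70.0, "lab": 85.0}

    weights_2010 = {"exam": 0.60, "quiz": 0.15, "lab": 0.25}  # before the policy change
    weights_2011 = {"exam": 0.50, "quiz": 0.20, "lab": 0.30}  # after the policy change

    def overall_mark(marks, weights):
        # Weighted sum of the normalised component marks
        return sum(marks[c] * weights[c] for c in marks)

    print(overall_mark(marks, weights_2010))  # 64.75
    print(overall_mark(marks, weights_2011))  # 67.0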
Figure 3: Average unit mark in relation to overall student performance before and after the change of assessment weighting.
In order to understand this absence of change in overall performance, its composition before and after the change to the assessment weightings is considered in Figure 4, in which the cohort-average marks in each of the three assessment components (normalised to be out of 100 points) are displayed for each of the Bentley and Miri campuses. Overall, students tend to perform better in the laboratory practical test and weekly quizzes than in the final examination, and reducing the weighting of the final examination coincided with a drop in examination performance between 2010 and 2011 at both campuses. The latter may be accounted for by reduced student motivation and preparation, given the reduced influence of the examination on students' overall unit mark. Differences between the skills demonstrated by Bentley and Miri students are also evident in Figure 4. Bentley students performed far better in the laboratory practical tests than Miri students (normalised average marks over 85 compared with over 75), suggesting that Bentley students have greater hands-on practical ability and participate more effectively in group assessments. On the other hand, Miri students outperformed their Bentley counterparts in the quiz assessments, gaining normalised average marks above 71 compared with above 52.
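For clarity, the normalisation used in Figure 4 simply rescales each component mark from its weighted maximum to a 0-100 scale; a minimal sketch, with hypothetical raw marks and component maxima:

    # Rescale a component mark from its weighted maximum to be out of 100,
    # as done for Figure 4. The raw marks and maxima below are hypothetical.
    def normalise(raw_mark: float, component_max: float) -> float:
        return raw_mark / component_max * 100.0

    print(normalise(10.5, 15.0))  # quiz marked out of 15 -> 70.0 out of 100
    print(normalise(33.0, 60.0))  # exam marked out of 60 -> 55.0 out of 100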
Clearly the final examination is the weakest assessment component for all students, irrespective of campus location. There is also some evidence that the change of assessment weighting may have led students to form a strategy for passing the unit. For example, in Semester 2, 2011, the decrease in the examination mark of Bentley students was compensated for by increased laboratory practical test and quiz marks relative to Semester 2, 2010. Such strategies are less obvious for Miri students, whose performance in the practical tests decreased between 2010 and 2011; however, by retaining their high level of performance in the quiz assessments, for which the weighting was increased, they were able to maintain a similar overall performance after the change of assessment weighting. Overall, these data suggest that students strategise to optimise their overall performance in the unit, leading to overall cohort performances that appear to be independent of the assessment weighting. However, upon more detailed inspection, the balance of individual assessment performance across the different learning outcomes may be affected either adversely or favourably.
Figure 4: Normalised average assessment mark (each component rescaled to be out of 100) for the laboratory practical test, quiz and final examination before and after the change of assessment weighting: (a) Bentley campus and (b) Miri campus.
Figure 5: Performance distribution of student groups by academic major/discipline-choice: proportion (as percentage) of students within the discipline group who: (a) failed the unit (mark band from 0-49) and (b) performed at a high level (mark band from 80-100).
The relationship between students' interests and high achievement is far less clear. Rather surprisingly, Figure 5(b) shows that students who have chosen one of the three ICT and electrical/electronics (EE) based majors, namely Electronic and Communication Engineering, Computer Systems Engineering and Software Engineering, seem more likely to score highly in Engineering Mechanics. The relatively large student groups identified by interests in Civil and Construction Engineering, Mechanical Engineering and Mechatronics, or Chemical Engineering show the expected proportion of high performers. This finding might rectify the misconception that only Mechanical and Civil Engineering students can do well in mechanics-related units. On the basis of the present data, it seems that student interest, as reflected by the choice of a specific engineering discipline, is a relatively small factor in determining student performance, especially when compared with the effect of students' previous study and their level of achievement at high school.
Boud, D. (1989). The role of self-assessment in student grading. Assessment & Evaluation in Higher Education, 14(1), 20-30. http://dx.doi.org/10.1080/0260293890140103
Denner, P. R., Salzman, S. A. & Bangert, A. W. (2001). Linking teacher assessment to student performance: a benchmarking, generalizability, and validity study of the use of teacher work samples. Journal of Personnel Evaluation in Education, 15(4), 287-307. http://dx.doi.org/10.1023/A:1015405715614
Dong, Y., Lucey, A. D. & Leadbeater, G. (2012). A pilot study of e-quiz and e-review programs in the online blended learning of first-year Engineering Mechanics. Paper presented at the Australasian Association for Engineering Education (AAEE) Annual Conference, 3-5 December 2012. Melbourne: Swinburne University of Technology. http://www.aaee.com.au/conferences/2012/documents/abstracts/aaee2012-submission-78.pdf
Guzmán, E., Conejo, R. & Pérez-de-la-Cruz, J. L. (2007). Improving student performance using self-assessment tests. IEEE Intelligent Systems, 22(4), 46-52. http://dx.doi.org/10.1109/MIS.2007.71
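Kennedy, G., Judd, T. S., Churchward, A., Gray, K. & Krause, K.-L. (2008). First year students' experiences with technology: Are they really digital natives? Australasian Journal of Educational Technology, 24(1), 108-122.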
Krause, K.-L., Hartley, R., James, R. & McInnis, C. (2005). The first year experience in Australian universities: Findings from a decade of national studies. Canberra: Australian Government. http://www.griffith.edu.au/__data/assets/pdf_file/0006/37491/FYEReport05.pdf [viewed 12 Nov 2012]
Lee, G. & Weerakoon, P. (2001). The role of computer-aided assessment in health professional education: a comparison of student performance in computer-based and paper-and-pen multiple-choice tests. Medical Teacher, 23(2), 152-157. http://dx.doi.org/10.1080/01421590020031066
Oliver, B., Tucker, B., Gupta, R. & Yeo, S. (2008). eVALUate: An evaluation instrument for measuring students' perceptions of their engagement and learning outcomes. Assessment & Evaluation in Higher Education, 33(3), 619-630. http://dx.doi.org/10.1080/02602930701773034
Pegden, J. & Tucker, B. (2009). Student evaluation of their learning: Differences in male and female students' perceptions of their units. Paper presented at the 7th Annual Australasian Higher Education Evaluation Forum, 21-23 October 2009. Byron Bay: Byron Community and Culture Centre. http://evaluate.curtin.edu.au/local/docs/Paper%20AHEEF%20Final.pdf [viewed 12 Nov 2012]
Rodriguez, M. C. (2004). The role of classroom assessment in student performance on TIMSS. Applied Measurement in Education, 17(1), 1-24. http://dx.doi.org/10.1207/s15324818ame1701_1
Sly, L. (1999). Practice tests as formative assessment improve student performance on computer-managed learning assessments. Assessment & Evaluation in Higher Education, 24(3), 339-343. http://dx.doi.org/10.1080/0260293990240307
Stiggins, R. & Chappuis, J. (2005). Using student-involved classroom assessment to close achievement gaps. Theory Into Practice, 44(1), 11-18. http://dx.doi.org/10.1207/s15430421tip4401_3
Tucker, B., Oliver, B. & Gupta, R. (2012). Validating a teaching survey which drives increased response rates in a unit survey. Teaching in Higher Education, 1-13. http://dx.doi.org/10.1080/13562517.2012.725224
Tucker, B., Pegden J.-A. & Yorke, J. (2012). Outcomes and evaluations: Is there a relationship between indicators of student success and student evaluations of learning? In Brown, N., Jones, S. M. & Adam, A. (Eds.), Research and Development in Higher Education: Connections in Higher Education. Proceedings of the 35th HERDSA Annual International Conference, 2-5 July 2012. http://www.herdsa.org.au/wp-content/uploads/conference/2012/HERDSA_2012_Tucker.pdf
Valle, R., Petra, L., Martinez-González, Rojas-Ramirez, J. A., Morales-Lopez, S. & Piña-Garza, B. (1999). Assessment of student performance in problem-based learning tutorial sessions. Medical Education, 33(11), 818-822. http://dx.doi.org/10.1046/j.1365-2923.1999.00526.x
Westerman, J. W., Nowichi, M. D. & Plante, D. (2002). Fit in the classroom: Predictors of student performance and satisfaction in management education. Journal of Management Education, 26(1), 5-18. http://dx.doi.org/10.1177/105256290202600102
Wiers-Jenssen, J., Stensaker, B. & Grøgaard, J. B. (2002). Student satisfaction: Towards an empirical deconstruction of the concept. Quality in Higher Education, 8(2), 183-195. http://dx.doi.org/10.1080/1353832022000004377
Yatrakis, P. G. & Simon, H. K. (2002). The effect of self-selection on student satisfaction and performance in online classes. The International Review of Research in Open and Distance Learning, 3(2), 1-6. http://www.irrodl.org/index.php/irrodl/article/view/93/172
Please cite as: Dong, Y. & Lucey, A. (2013). Relationships between student satisfaction and assessment grades in a first-year engineering unit. In Design, develop, evaluate: The core of the learning environment. Proceedings of the 22nd Annual Teaching Learning Forum, 7-8 February 2013. Perth: Murdoch University. http://ctl.curtin.edu.au/professional_development/conferences/tlf/tlf2013/refereed/dong.html
Copyright 2013 Yu Dong and Anthony Lucey. The authors assign to the TL Forum and not for profit educational institutions a non-exclusive licence to reproduce this article for personal use or for institutional teaching and learning purposes, in any format, provided that the article is used and cited in accordance with the usual academic conventions.