Teaching and Learning Forum 2013 [ Refereed papers ]
Relationships between student satisfaction and assessment grades in a first-year engineering unit

Yu Dong and Anthony Lucey
Curtin University
Email: Y.Dong@curtin.edu.au

Monitoring the quality of teaching and learning by universities relies primarily upon a combination of feedback from formal student-evaluation surveys and the long-established measure of student-cohort performance in unit assessments. This study explores major factors that might affect the data provided by these two measures and seeks to identify potential relationships between assessment performance and each of student satisfaction and students' engineering discipline interests. Enabling this study is a large data-set obtained over the last four years from the teaching of a first-year Engineering Mechanics unit delivered twice per year to approximately 350 students in each semester from all engineering and some multi-science disciplines. Over these years, this unit has remained largely stable in terms of unit learning outcomes, syllabus, delivery methods and teaching staff, thereby permitting potentially robust conclusions to be drawn from analyses of the data-set. By interrogating this data-set, three questions are addressed in this paper, namely: (i) Is there a correlation between academic performance and student satisfaction with the unit? (ii) Did a change in assessment weighting affect students' overall performance? (iii) Does student interest, as reflected by their engineering-oriented discipline choice, affect their overall assessment outcomes? The investigations presented in this paper are preliminary, focusing on the four semesters of delivery in 2010 and 2011 and adopting a broad-brush approach, in order to provide direction for more refined and rigorous lines of enquiry using the same data to determine the efficacy of present monitoring systems for teaching and learning. The initial results show that student feedback correlates well with assessment performance provided that cultural bias is removed. Overall, the influence on performance of changing the assessment weighting appears to be minimal, and the influence of students' engineering discipline interests is also minimal.


Introduction

Continuous monitoring and improvement of unit teaching quality is an essential 'closed-loop' activity for university educators, who can seek formal feedback and comments from tertiary students (e.g. indicating student satisfaction) through surveys that also invite students to reflect upon their learning experience. Student assessment performance, on the other hand, is a direct measure of achievement through rigorous assessment activities. The relationship between student satisfaction and assessment performance is important in contemporary higher education, attracting much attention from teaching practitioners and academics because it may underpin powerful synergies at work in students' educational experience. Biner et al. (1997) demonstrated that higher levels of relative performance (telecourse performance vs. prior academic performance) were associated with student satisfaction with the technological aspects of courses, student satisfaction with the promptness of material exchange with the instructor, and overall student satisfaction. In management education, Westerman, Nowicki and Plante (2002) suggested that student performance and satisfaction are linked by student-environment fit, which serves as a predictor of both outcomes. Wiers-Jenssen, Stensaker and Grøgaard (2002) investigated Norwegian university students, finding that the academic and pedagogic quality of teaching were crucial determinants of student satisfaction; however, other factors such as social climate, aesthetic aspects of physical infrastructure and the quality of administrative services could not be ignored when seeking improvements to student satisfaction and opportunity for learning. Yatrakis and Simon (2002) evaluated the effect of 'self-selection' into online classes on student satisfaction and performance. In contrast to the aforementioned studies, their results revealed that while the 'self-selection' afforded to targeted students did lead to higher levels of satisfaction for this group, it did not make any difference to their grade (performance) outcomes in assessments.

From a practical point of view, the evaluation of student performance relies to a great extent on the assessment regime used in courses, such as the number, type, sequence and weighting of assessment components. The establishment of an appropriate assessment regime is a key factor in the design and development of course syllabi and reflects the intended student learning outcomes. Both student-involved classroom assessments (Rodriguez, 2004; Stiggins & Chappuis, 2005) and assessment-based tutorial sessions (Valle et al., 1999) were found to significantly promote strong achievement gains and to motivate student effort and performance. Moreover, practice tests as formative assessment (Sly, 1999) and work-sample assessment (Denner, Salzman & Bangert, 2001) were also shown to be viable approaches to enhancing student performance. Self-assessment tests (Boud, 1989; Guzmán et al., 2007) have become popular as a means of facilitating student engagement through self-monitoring and of developing students' skills, thereby improving assessment performance. Notwithstanding these advantages, such strategies may have limited value: if self-assessment tests are not included as part of the overall assessment regime, they may not be undertaken by assessment-oriented students. Rapid advances in information technology have in many cases added computer-aided tests to assessment regimes, replacing paper-and-pen multiple-choice tests (Lee & Weerakoon, 2001); however, such tests have been shown to have poor reliability in grading students, since students tended to score higher in the paper-and-pen versions.

To measure student satisfaction in the setting of the present study, Curtin University developed and has continuously used an online unit survey system called eVALUate since 2006. Students are encouraged to participate in university-wide quantitative and qualitative evaluation of the units in which they are enrolled. The eleven quantitative items on eVALUate seek students' level of agreement with statements covering learning outcomes, learning experiences, learning resources, assessment tasks, work feedback, workload, quality of teaching, self-motivation, best use of learning experiences, effective learning and overall satisfaction. Additionally, students are prompted to provide constructive comments through two qualitative items focused on (i) helpful aspects of the unit and (ii) suggested unit improvements (Oliver, Tucker, Gupta & Yeo, 2008). This survey system is unique in that it builds mainly upon an outcome-focused approach to student learning that is conducted via a variety of learning experiences, such as traditional face-to-face or online teaching, fieldwork, studios/workshops, tutorials and laboratories (Tucker, Pegden & Yorke, 2012). Pegden and Tucker (2009) reported gender bias in the results of eVALUate but also noted that identifiable differences between male and female aggregated scores for student satisfaction had decreased in more recent semesters. Males reported a higher level of satisfaction across the university (particularly in the second year of study) and showed especially high percentage agreement in certain courses, suggesting that subject preference influences feedback. Using aggregated eVALUate data, Tucker, Pegden and Yorke (2012) also showed that, contrary to the conventional belief of academics, students with high semester weighted averages (rather than underperforming students) tended to provide feedback and more consistently agreed with the survey items indicating a favourable learning experience. These findings therefore suggest a performance or academic-ability bias in the results of eVALUate unit surveys. Nevertheless, Tucker, Oliver and Gupta (2012) argued that the eVALUate instrument becomes sufficiently robust once it achieves sufficiently high student-response rates (typically 35%), validating it as an effective and reliable tool for self-reflection and for the reward of teaching staff through the Teaching Performance Index (TPI), as well as a key input to the judging of teaching-award and academic-promotion applications. However, it is emphasised that these conclusions were drawn from university-aggregated data within which there may be discipline differences.

The first-year experience is very important for tertiary-level students, who have to adapt quickly to university life and to a learning style that is very different from that experienced at high school. First-year units in engineering have very high enrolment numbers because demand for the discipline reflects the employment market in Australia. Accordingly, these units entail tremendous educational challenges arising from large classes, such as a low staff-to-student ratio, high individual student workloads, ineffective personal feedback mechanisms, and a lack of one-to-one interaction between students and lecturers. As a result, students' self-motivation, study engagement and overall unit satisfaction can be significantly affected within the outcome-focused educational framework. The study by Krause et al. (2005) of a 2004 cohort of first-year students showed that only half of the respondents were significantly more satisfied with their course of study and with the perceived teaching quality, as judged through the availability of teaching staff to discuss students' work. This might be explained by the fact that less than one-third of respondents felt that teaching staff had an interest in students' progress and were prepared to provide helpful feedback. However, information and communication technology (ICT) tools were recognised to play a very significant role in changing the traditional forms of learning and interaction in the first year, as seen in the frequent use of online course resources, email contact with peers and lecturers, and learning-aid computer software. Dong et al. (2012) further confirmed that the use of e-quiz and e-review programs benefited first-year engineering students in Engineering Mechanics as supplementary educational tools that facilitate effective learning. Kennedy et al. (2008), on the other hand, found that while many first-year students were highly 'tech-savvy', considerable variation occurred when they used ICT tools in education that required skills and applications beyond the domains of computer, mobile-phone and email usage with which students were familiar. Overall, there remains uncertainty over the most effective means to deliver large first-year engineering units; this underlines the importance of monitoring tools that accurately capture the student experience in such units.

Based on a first-year, large-class Engineering Mechanics unit, the main purpose of the current study is to identify factors that might affect the results of monitoring tools such as eVALUate for the discipline of engineering, factors which may be lost in university-aggregated data. In particular, this paper focuses on the possibility of a correlation between student learning outcomes (as indicated by assessment performance/grade) and student unit satisfaction, on whether a change of assessment weighting affects students' overall performance, and on whether students' interest, as identified by their chosen majors/disciplines within engineering, has a significant impact on their overall assessment outcomes. This paper represents a preliminary investigation of these issues and their inter-relation in order to frame research questions for a more rigorous and focused study, the outcomes of which will improve both the understanding of the monitoring tools and the interpretation of their results, as a means to improve the quality of engineering education.

Procedure and methodology

The data used in this study comprise the eVALUate Full Unit Report (FUR) survey feedback for Engineering Mechanics 100, gathered at both the Australian main campus (Bentley) and the offshore campus in Sarawak (Miri), from four semesters of unit delivery in the period 2010-2011. The individual breakdown of assessment marks was obtained for the same period after mark collation for the unit panel meeting at the end of each semester. Students' major/discipline information was collected from class lists provided by Engineering Foundation Year (EFY) officers, with the Curtin University student management system (Student One) used occasionally for clarification. All personal information associated with students' names and identification numbers remained anonymous when the data were reported. The student web portal OASIS was employed to administer the eVALUate system. eVALUate surveys were open twice a year, close to the examination period, with an Official Communication Channel (OCC) message sent to students via the university webmail system. Students were encouraged to provide both quantitative and qualitative eVALUate feedback on their views and learning experience in their taught units. Finally, it is important to note that students' assessed work was moderated across the Bentley and Miri campuses to ensure that the same standards of marking (to a pre-prepared marking key) were applied.

Results and discussion

Correlation between academic performance and student satisfaction

The percentage level of agreement for the eleven quantitative items of the eVALUate survey for Engineering Mechanics 100 over four semesters in 2010 and 2011 is shown in Figures 1(a) and (b) for the Bentley and Miri campuses, respectively.

Figure 1a

Figure 1b

Figure 1: Unit agreement (%) of quantitative items in eVALUate surveys for Engineering Mechanics 100 taught in four consecutive semesters over academic years 2010-2011: (a) Bentley campus and (b) Miri campus.

Miri respondents are seen to give uniformly high levels of agreement, always above 84%, for all the quantitative items. In particular, learning outcomes, self-motivation, best use of learning experiences, effective learning and overall satisfaction attracted scores of over 90% for the period of analysis. By contrast, the levels of agreement for the unit from Bentley respondents are much lower than those of their Miri counterparts, with a wider range of scores, typically between 60% and 80%, returned for the majority of the quantitative items. Significant fluctuations are seen in the learning outcomes item, indicating that the Semester 2 cohorts perceived better learning experiences than the Semester 1 cohorts, with 90% agreement in the former and 74% in the latter. In addition, the self-motivation of Bentley students decreased monotonically from 87% to 72% between Semester 1, 2010 and Semester 2, 2011, while work feedback and quality of teaching generated the two lowest levels of agreement, always below 65%.

Figure 2

Figure 2: Average unit mark vs. eVALUate overall student satisfaction rate.

The correlation between academic performance (as reflected by the average unit mark) and student satisfaction with the unit is explored in Figure 2. Except for the outliers of Semester 1, 2010, the results suggest that overall student satisfaction does correlate with average assessment performance in the unit through an approximately linear relationship. This finding holds at both the Bentley and Miri campuses, although the gradient and constant of the best-fit lines for the two campuses are very different. It is evident from Figures 2 and 3 that while the overall satisfaction rates of Miri students are consistently higher than those of the Bentley students, their academic performance is in general at a lower level of achievement. This feature implies that Miri students score at unrealistically high levels in the eVALUate surveys owing to a cultural background that broadly views lecturers as occupying a position of authority; accordingly, Miri students are reluctant to give feedback that may be perceived as critical. By contrast, the majority of Bentley students have been educated in a western environment that tends to emphasise the rights of the student, who then tends to view lecturers as facilitators in the achievement of individual aspirations; accordingly, Bentley students may view their learning experiences through a more objective and independent lens. However, we emphasise that when the effects of cultural factors are removed, here by analysing Miri and Bentley data separately, a correlation between student satisfaction and performance is found at each location.
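The fitting procedure behind the best-fit lines in Figure 2 can be illustrated with a short sketch. The snippet below fits a least-squares straight line to (average unit mark, overall satisfaction) pairs for a single campus; the four data points are hypothetical placeholders, not the values reported in this study.

```python
import numpy as np

# Hypothetical (average unit mark %, overall satisfaction %) pairs for one
# campus over four semesters -- placeholders, not the study's actual values.
avg_mark = np.array([58.0, 61.0, 55.0, 60.0])
satisfaction = np.array([72.0, 80.0, 68.0, 78.0])

# Least-squares straight line: satisfaction ~ gradient * mark + constant
gradient, constant = np.polyfit(avg_mark, satisfaction, 1)
r = np.corrcoef(avg_mark, satisfaction)[0, 1]  # Pearson correlation coefficient

print(f"satisfaction = {gradient:.2f} * mark + {constant:.1f}  (r = {r:.2f})")
```

Fitting each campus separately in this way is what removes the cultural offset between the Miri and Bentley cohorts from the comparison.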

Effect of change of assessment weighting on students' performance

The overall unit assessment in Engineering Mechanics 100 comprises a combination of continuous assessment, through laboratory practical tests and quizzes, and a final examination. The laboratory practical tests entail a formative group-based activity with a mark based on students' participation in hands-on practical work, whereas the quizzes and final examination are individual closed-book summative assessments that evaluate students' understanding of fundamental mechanics concepts, principles and theories.

Due to a change in Curtin's assessment policy, the assessment weighting for the final examination was reduced from 60% in 2010 to 50% in 2011. This was accompanied by increases in the quiz weighting from 15% to 20% and in the laboratory practical test weighting from 25% to 30%. Figures 3 and 4 explore the effect of these changes on student performance in the unit. Figure 3 shows that the average overall unit mark (in the range 53% to 62%) was not influenced significantly by the change of assessment weighting; this is true for both campus locations.
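To make the weighting arithmetic concrete, the following minimal sketch computes an overall unit mark from component marks under the 2010 and 2011 weighting schemes; the component marks used here are hypothetical, not drawn from the cohort data.

```python
# Assessment weightings before (2010) and after (2011) the policy change.
WEIGHTS = {
    2010: {"lab": 0.25, "quiz": 0.15, "exam": 0.60},
    2011: {"lab": 0.30, "quiz": 0.20, "exam": 0.50},
}

def overall_mark(component_pct, year):
    """Weighted overall unit mark, with each component mark given out of 100."""
    weights = WEIGHTS[year]
    return sum(weights[c] * component_pct[c] for c in weights)

# Hypothetical normalised component marks (each out of 100 points).
student = {"lab": 85.0, "quiz": 55.0, "exam": 45.0}
print(overall_mark(student, 2010))  # 0.25*85 + 0.15*55 + 0.60*45 = 56.5
print(overall_mark(student, 2011))  # 0.30*85 + 0.20*55 + 0.50*45 = 59.0
```

As the example suggests, a cohort that performs relatively well in the laboratory and quiz components gains only slightly under the 2011 weightings if its component marks are unchanged, which is consistent with the small overall effect seen in Figure 3.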

Figure 3

Figure 3: Average unit mark, reflecting overall student performance,
before and after the change of assessment weighting

In order to understand this absence of change in the overall performance, the composition of the overall performance before and after the change to the assessment weightings is considered in Figure 4, in which the cohort-average marks in each of the three assessment components (each normalised to be out of 100 points) are displayed for the Bentley and Miri campuses. Overall, it is seen that students tend to perform better in the laboratory practical test and weekly quizzes than in the final examination, and that reducing the weighting of the final examination caused a drop in examination performance between 2010 and 2011 at both campuses. The latter may be accounted for by reduced student motivation and preparation, given the reduced influence of the examination in the determination of students' overall unit mark. Differences between the skills demonstrated by Bentley and Miri students are also evident in Figure 4. Bentley students performed far better in the laboratory practical tests than the Miri students (normalised average marks over 85 compared with more than 75 for Miri students), indicating that Bentley students have stronger hands-on practical ability and can participate better in group assessments. On the other hand, Miri students outperformed their Bentley counterparts in the quiz assessments, gaining normalised average marks above 71 compared with above 52.

Clearly the final examination is the weakest assessment component for all students, irrespective of campus location. There is also some evidence to suggest that the change of assessment weighting may have led students to form a strategy for passing the unit. For example, in Semester 2, 2011 the decrease in the examination mark of Bentley students was compensated for by their increased laboratory practical test and quiz marks relative to Semester 2, 2010. Such strategies are less obvious for Miri students, whose performance in the practical tests decreased between 2010 and 2011; however, by retaining their high level of performance in the quiz assessments, for which the weighting was increased, they were able to maintain a similar overall performance after the change of assessment weighting. Overall, these data suggest that students strategise for their overall performance in the unit, and this leads to overall cohort performances that appear to be independent of the assessment weighting within a unit. However, upon more detailed inspection, the balance of individual assessment performance between the different learning outcomes may be affected either adversely or favourably.

Figure 4a

Figure 4b

Figure 4: Normalised average assessment mark (converted based on overall weighting of 100 for each assessment component) for laboratory practical test, quiz and final examination before and after changes of assessment weighting: (a) Bentley campus and (b) Miri campus.

Impact of academic discipline choice on overall assessment outcomes

We now consider whether students' interests, as reflected by their specific engineering major/discipline choice at the end of the first year, correlate with their performance in Engineering Mechanics 100. To remove any possible cultural bias, only the Bentley cohorts are analysed. Figure 5(a) shows the percentage of students within each of the nominated disciplines/majors who failed the unit (unit mark of 49 or below), while Figure 5(b) shows the percentage of students who achieved a high level of performance (unit mark of 80 or above). The results of students in double degrees have been included based upon their engineering discipline or major choice. It is seen in Figure 5(a) that students electing to study Mining Engineering and Environmental Engineering-Mining are far more likely to fail than those choosing other engineering disciplines. This may be explained by the multi-science background of the Mining students, which has a lower entry level (performance in high school) than that of the general engineering majors. Students in Civil and Construction Engineering, Mechanical Engineering and Mechatronics, as well as Chemical Engineering, evidenced much lower failure rates of less than 25%; this may be explained by the fact that these students deem Engineering Mechanics to be an important subject for their studies beyond the first year of engineering.
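The banding used in Figure 5 is straightforward to reproduce. As an illustrative sketch, the snippet below counts, for each discipline group, the proportion of students in the fail band (0-49) and in the high-performance band (80-100); the records shown are hypothetical placeholders rather than the study's data.

```python
from collections import defaultdict

# Hypothetical (discipline, unit mark) records -- placeholders only.
records = [
    ("Mining Engineering", 42), ("Mining Engineering", 55),
    ("Civil and Construction Engineering", 78),
    ("Civil and Construction Engineering", 46),
    ("Software Engineering", 83), ("Software Engineering", 67),
]

counts = defaultdict(lambda: {"n": 0, "fail": 0, "high": 0})
for discipline, mark in records:
    group = counts[discipline]
    group["n"] += 1
    group["fail"] += mark <= 49   # fail band: 0-49
    group["high"] += mark >= 80   # high-performance band: 80-100

for discipline, group in counts.items():
    print(f"{discipline}: fail {100 * group['fail'] / group['n']:.0f}%, "
          f"high {100 * group['high'] / group['n']:.0f}%")
```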

Figure 5a

Figure 5b

Figure 5: Performance distribution of student groups by academic major/discipline-choice: proportion (as percentage) of students within the discipline group who: (a) failed the unit (mark band from 0-49) and (b) performed at a high level (mark band from 80-100).

The relationship between students' interests and high achievement is far less clear. Rather surprisingly, Figure 5(b) shows that students who have chosen one of the three ICT and electrical-and-electronics (EE) based majors, namely Electronic and Communication Engineering, Computer System Engineering and Software Engineering, seem more likely to score highly in Engineering Mechanics. The relatively large student groups identified by interests in Civil and Construction Engineering, Mechanical Engineering and Mechatronics, or Chemical Engineering show the expected proportion of high performers. This finding might rectify a misconception that only Mechanical and Civil Engineering students can do well in Mechanics-related units. On the basis of the present data, it seems that student interest, as reflected by the choice of a specific engineering discipline, is a relatively small factor in determining student performance, especially when compared with the effect of students' previous study and their level of achievement at high school.

Concluding remarks

This preliminary study has sought to identify factors that might affect the two most significant measurement tools used to monitor the quality of teaching and learning in a large first-year engineering unit. It has been shown that the cultural background of students is an important determinant of the scores returned in survey feedback. However, once this factor is accounted for, here by analysing the two campuses separately, survey feedback shows a good correlation with the overall academic performance of students at each campus location. The impact of a change of assessment weighting has been found not to be very significant, though this may have masked changes in the balance of demonstrated learning outcomes for the unit. Finally, it has been shown that students' interests, as reflected by their engineering-discipline choice for the second year onwards, are not indicative of overall performance in the assessment of the unit, either high or low, even though the unit in this study would have seemed to be of more interest and utility to particular student-interest groups.

References

Biner, P., Barone, N., Welsh, K. & Dean, R. (1997). Relative academic performance and its relation to facet and overall satisfaction with interactive telecourses. Distance Education, 18(2), 318-326. http://dx.doi.org/10.1080/0158791970180208

Boud, D. (1989). The role of self-assessment in student grading. Assessment & Evaluation in Higher Education, 14(1), 20-30. http://dx.doi.org/10.1080/0260293890140103

Denner, P. R., Salzman, S. A. & Bangert, A. W. (2001). Linking teacher assessment to student performance: a benchmarking, generalizability, and validity study of the use of teacher work samples. Journal of Personnel Evaluation in Education, 15(4), 287-307. http://dx.doi.org/10.1023/A:1015405715614

Dong, Y., Lucey, A. D. & Leadbeater, G. (2012). A pilot study of e-quiz and e-review programs in the online blended learning of first-year Engineering Mechanics. Paper presented at the Australasian Association for Engineering Education (AAEE) Annual Conference, 3-5 December 2012. Melbourne: Swinburne University of Technology. http://www.aaee.com.au/conferences/2012/documents/abstracts/aaee2012-submission-78.pdf

Guzmán, E., Conejo, R. & Pérez-de-la-Cruz, J. L. (2007). Improving student performance using self-assessment tests. IEEE Intelligent Systems, 22(4), 46-52. http://dx.doi.org/10.1109/MIS.2007.71

Krause, K.-L., Hartley, R., James, R. & McInnis, C. (2005). The first year experience in Australian universities: Findings from a decade of national studies. Canberra: Australian Government. http://www.griffith.edu.au/__data/assets/pdf_file/0006/37491/FYEReport05.pdf [viewed 12 Nov 2012]

Lee, G. & Weerakoon, P. (2001). The role of computer-aided assessment in health professional education: a comparison of student performance in computer-based and paper-and-pen multiple-choice tests. Medical Teacher, 23(2), 152-157. http://dx.doi.org/10.1080/01421590020031066

Oliver, B., Tucker, B., Gupta, R. & Yeo, S. (2008). eVALUate: An evaluation instrument for measuring students' perceptions of their engagement and learning outcomes. Assessment & Evaluation in Higher Education, 33(3), 619-630. http://dx.doi.org/10.1080/02602930701773034

Pegden, J. & Tucker, B. (2009). Student evaluation of their learning: Differences in male and female students' perceptions of their units. Paper presented at the 7th Annual Australasian Higher Education Evaluation Forum, 21-23 October 2009. Byron Bay: Byron Community and Culture Centre. http://evaluate.curtin.edu.au/local/docs/Paper%20AHEEF%20Final.pdf [viewed 12 Nov 2012]

Rodriguez, M. C. (2004). The role of classroom assessment in student performance on TIMSS. Applied Measurement in Education, 17(1), 1-24. http://dx.doi.org/10.1207/s15324818ame1701_1

Sly, L. (1999). Practice tests as formative assessment improve student performance on computer-managed learning assessments. Assessment & Evaluation in Higher Education, 24(3), 339-343. http://dx.doi.org/10.1080/0260293990240307

Stiggins, R. & Chappuis, J. (2005). Using student-involved classroom assessment to close achievement gaps. Theory Into Practice, 44(1), 11-18. http://dx.doi.org/10.1207/s15430421tip4401_3

Tucker, B., Oliver, B. & Gupta, R. (2012). Validating a teaching survey which drives increased response rates in a unit survey. Teaching in Higher Education, 1-13. http://dx.doi.org/10.1080/13562517.2012.725224

Tucker, B., Pegden, J.-A. & Yorke, J. (2012). Outcomes and evaluations: Is there a relationship between indicators of student success and student evaluations of learning? In Brown, N., Jones, S. M. & Adam, A. (Eds.), Research and Development in Higher Education: Connections in Higher Education. Proceedings of the 35th HERDSA Annual International Conference, 2-5 July 2012. http://www.herdsa.org.au/wp-content/uploads/conference/2012/HERDSA_2012_Tucker.pdf

Valle, R., Petra, L., Martinez-González, Rojas-Ramirez, J. A., Morales-Lopez, S. & Piña-Garza, B. (1999). Assessment of student performance in problem-based learning tutorial sessions. Medical Education, 33(11), 818-822. http://dx.doi.org/10.1046/j.1365-2923.1999.00526.x

Westerman, J. W., Nowicki, M. D. & Plante, D. (2002). Fit in the classroom: Predictors of student performance and satisfaction in management education. Journal of Management Education, 26(1), 5-18. http://dx.doi.org/10.1177/105256290202600102

Wiers-Jenssen, J., Stensaker, B. & Grøgaard, J. B. (2002). Student satisfaction: Towards an empirical deconstruction of the concept. Quality in Higher Education, 8(2), 183-195. http://dx.doi.org/10.1080/1353832022000004377

Yatrakis, P. G. & Simon, H. K. (2002). The effect of self-selection on student satisfaction and performance in online classes. The International Review of Research in Open and Distance Learning, 3(2), 1-6. http://www.irrodl.org/index.php/irrodl/article/view/93/172

Please cite as: Dong, Y. & Lucey, A. (2013). Relationships between student satisfaction and assessment grades in a first-year engineering unit. In Design, develop, evaluate: The core of the learning environment. Proceedings of the 22nd Annual Teaching Learning Forum, 7-8 February 2013. Perth: Murdoch University. http://ctl.curtin.edu.au/professional_development/conferences/tlf/tlf2013/refereed/dong.html

Copyright 2013 Yu Dong and Anthony Lucey. The authors assign to the TL Forum and not for profit educational institutions a non-exclusive licence to reproduce this article for personal use or for institutional teaching and learning purposes, in any format, provided that the article is used and cited in accordance with the usual academic conventions.

