Teaching and Learning Forum 2009 [ Refereed papers ]
Understanding and accommodating students with apparently mixed levels of engagement

Leonora Ritter
School of Social Sciences and Liberal Studies
Charles Sturt University

Tanya Covic
School of Psychology
University of Western Sydney

In examining the teaching of students with mixed levels of engagement, this paper challenges the assumed link between observed participation and engagement. It presents evidence from an instrument that specifically measures Tutorial Style Preferences (the TSP), suggesting that individual differences matter, that engagement is more complex than observed participation alone, and that engagement can be enhanced by understanding and catering to individual differences in learning and teaching environments. It also suggests that summative assessment of observed participation neither measures nor encourages engagement, and may even discourage engagement for a significant cohort of students.


Literature review

Literature supports the basic axiom that addressing disengagement is important because of its link with underachievement (Genovese, 2006, p. 123). In particular, a growing volume of literature investigates the factors that influence observed student participation in tertiary classes (eg Marmolejo, Wilder & Bradley, 2004; Myers, 2004; Boniecki & Moore, 2003; Howard & Henney, 1998; Wambach & Brothen, 1997; Martin, 1993). These observations are complemented by research into preferred learning styles, which has found that students vary in how structured they want their instructional activities to be and in their preferred level of social interaction in the learning process (Entwistle, McCune & Walker, 2001, p. 103; Renzulli & Dai, 2001, pp 35-36). Thus one way to address disengagement is through appropriate and empathic responses to students as individuals (Zhang, 2006; Hart, 1992, pp 241-2). The importance of this is illustrated in Martinez and Munday's (1998) study of student drop-out rates, which found that two of the factors for success were 'the student's awareness of his/her own learning process and the lecturer's response to the varied learning needs of the group' (Briggs, 2000, pp 17-18).

Having to struggle with non-preferred styles and behaviours matters less with the most able students, but it puts 'those already most at risk of withdrawal or failure ... in double jeopardy' (Smith, 2002, p. 66). These include 'individuals of lower intelligence' (Riding & Raynor, 1999, p. 108), those who 'have been educated in a less academic environment before entering higher education, ... from lower socio-economic groups, ... outside the 'mainstream' white culture, women, and those with disabilities' (Smith, 2002, p. 64). This is an area of growing concern, as increased access to university has led to larger classes and a greater variety of learners, requiring a more aware, flexible and individualised approach from university teachers (Smith, 2002, p. 63; Garton, Spain, Lamberson & Spiers, 1999, p. 11).

In understanding students as individuals, instruments that measure 'cognitive, affective and psychological factors' can usefully give teachers 'stable indicators of how a learner perceives, interacts with and responds to the learning environment' (Keefe, 1989, in Logan & Thomas, 2002, p. iii). Measuring differences in preferred learning style is, however, a contested area (Logan & Thomas, 2002, p. ii; Koob & Funk, 2002, p. 304; Renzulli & Dai, 2001, pp 27, 33-35; Riding, 2001, p. 49; Sadler-Smith, 2001, p. 213; Sternberg, 2001, pp 250-1; Sternberg & Grigorenko, 2001, pp 2-4, 11, 13, 16; Riding & Raynor, 1999, pp 8, 11, 128, 186; Ferrari et al, 1996, p. 171; Furnham, 1992, pp 429, 437; Tyler, 1978, p. 6). The rivalry between the various models and instruments within this literature generates counter-productive tensions about what is being measured. It is therefore valuable to tailor particular instruments to particular needs and circumstances (Ritter, 2007).

One particularly contested area is whether preferred learning style is innate and relatively fixed, or shaped by experiences and conscious choices and therefore mutable (Salter, Evans & Forney, 2006; Logan and Thomas, 2002, p. ii; Renzulli & Dai, 2001, pp 34, 38; Sadler-Smith, 2001, p. 213; Sternberg, 2001, 2; Sternberg & Grigorenko, 2001, 18; Zhang & Sternberg, 2000, p. 486; Riding & Raynor, 1999, p. 73). Some research suggests a compromise position that while students have preferences, most are open to or accepting of a range of approaches that educators can work with (Sternberg & Grigorenko, 2001, p. 3; Riding & Raynor, 1999, pp 11, 79; Sadler-Smith 1999, p. 17). Related to this is the question of whether styles are also dependent on a developmental component (Ahlfeldt, Mehta & Sellnow, 2005, p. 5; Fritschner, 2000, 342; Howard & Henney in Fritschner, 2000, pp 342-3). For those who believe that there are mutable or developmental components, it is arguable that student behaviour can be modified by summative assessment.

This is one possible rationale for trying to develop engagement by including participation as an assessable component. It is based on the belief that assessment focusses and motivates in order to facilitate learning (Nightingale et al., 1996, p. 10) and is an effective means of behaviour modification (Renzulli & Dai, 2001, p. 36; Zhang & Sternberg, 2000, pp 486-7), because students give priority to 'the development of the learning skills and competence which [they] require to succeed on their coursework' (Davies et al. (1998) in Briggs, 2000, p. 18).

On the other hand, giving marks for observable participation does not appear to respect individual differences in personality and preferred learning style. For a start, assessing engagement in debate can set up situations that participants perceive as conflict, disadvantaging non-conforming personalities who choose to reduce stress by avoiding conflict (Barsky & Wood, 2005, p. 260). In the traditional formal tutorial, the dominant practice of oral-verbal and linear forms of communication advantages particular types of student (Smith, 2002, p. 66; Renzulli & Dai, 2001, pp 35, 40; Trilling & Hood, 2001, p. 16; Riding & Raynor, 1999, p. 150; Philbin, Meier, Huffman & Boverie, 1995, p. 491; Furnham, 1992, p. 429). The current trend towards group work favours 'wholists [sic]' over 'analytics' (Riding & Raynor, 1999, p. 92), and the shift from teacher-directed teaching to 'learning activities that require individual effort and study' merely transfers the advantage from students 'who prefer a field-dependent learning style' to students 'who prefer a field-independent style' (Witkin, Moore, Goodenough et al., 1977 in Garton et al, 1999, p. 13). All these biases raise questions about the validity of measuring observable participation as a way of evaluating engagement (Brown & Knight, 1994, p. 25). '[T]he whole exercise of assessment is pointless if it cannot make useful judgements about student learning and if, in fact, it encourages students into ineffective learning' (Nightingale et al., 1996, p. 10).

There is also a danger that in assessing participation we are primarily rewarding our students for being like us, as our 'only experience of thinking is [our] own, and it is easy for [us] to assume that everyone else sees things and thinks in a similar way' (Riding & Raynor, 1999, p. 128), and we know that 'styles of teaching' often 'stem directly from the ways the teachers themselves prefer to learn' (Entwistle, McCune & Walker, 2001, pp 123-4).

These considerations challenge reductionist assertions that students learn best when they are actively participating in class (Fritschner, 2000) and that 'teaching that incorporates discussion' and social interaction enhances 'the development of cognition, particularly where educator and learner engage in interactive dialogue' (Ho, 2002, p. 144).

Definition of observable participation is also problematic. In one study '[q]uiet students defined participation [as including] ... attendance, active listening ... and being prepared for class' (Fritschner, 2000, pp 342-3) and in-class observations have also found that in low participation classes, students become 'irritated with peers who [are] especially talkative' (Fritschner, 2000, p. 342). In response to such concerns, Spiller (2005, p. 305) sees written responses from students as satisfying the requirements of 'the conception of teaching as a conversation involving dialogue between teacher, student and subject area'.

While this paper focuses on the tutorial environment, the literature suggests that similar concerns and considerations apply to observable participation in the online environment, where similar questions must be asked about encouraging and assessing student engagement with such participatory affordances as forums, blogs, bulletin boards and wikis, which may be seen as replacing traditional tutorials (Sweeney, O'Donoghue & Whitehead, 2004). There is evidence to suggest that the presentation of online courses 'requires both opportunities for students to collaborate (such as forums) and to work by themselves in order to meet the needs of two types of students' (Logan & Thomas, 2002, p. ix) and that computer-assisted learning needs to be 'adjusted to the varying styles of different learners' (Chapman & Calhoun, 2006, p. 582). At present it is clear that 'there is much work to be done in helping students and tutors ... to achieve effective online interaction' (Price, Richardson & Jelfs, 2007, p. 19).

Research goals

The goals were to develop and test an instrument that would effectively diagnose 'different responses to specific classroom environments and instructional practices' (Felder & Brent, 2005, p. 57) in the specific context of face-to-face tutorials. The aim was to facilitate understanding of the causes and effects of mixed student engagement in an area of observable student participation, and to seek effective ways of enhancing engagement, including evaluating the motivational efficacy of summative assessment of participation.

Methods

To address concerns about the validity and reliability of existing instruments that attempt to generalise across all learning and teaching experiences in measuring preferred learning styles, an instrument was designed to specifically measure Tutorial Style Preferences (TSP, see Appendix A). It used self-evaluation questionnaires as '[a]t present the most used and convenient way to assess learning styles ... [that] provide researchers as well as educators an easy, reliable and validated way to distribute and assess individual requirements' (Cronbach, 1990, in Logan & Thomas, 2002, p. iii). These questionnaires asked selected participants to rate 38 criteria of effective tutorials on a 5-point Likert scale.

The study involved 306 Australian university students (mean age 21; predominantly first year (71%), female (80%) and drawn from the discipline areas of psychology and education (91%)). A factor analysis of the TSP instrument was conducted in order to discover whether there was an identifiable taxonomy of multiple preferred tutorial styles.
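The factor-analytic step described above can be sketched as follows. This is an illustrative reconstruction only: the sample size (306), item count (38) and three-factor solution come from the paper, but the ratings are synthetic and all variable names are invented; the authors' actual statistical procedure is not specified here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: 306 students rating
# 38 tutorial criteria on a 5-point Likert scale (1-5).
ratings = rng.integers(1, 6, size=(306, 38)).astype(float)

# Extract three latent factors, mirroring the paper's three
# preferred tutorial styles (authority, participation, enjoyment).
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(ratings)

# loadings[i, j] shows how strongly item j loads on factor i;
# Table 1 in the paper reports only items whose correlation
# with a factor exceeds .5.
loadings = fa.components_
strong_items = np.abs(loadings) > 0.5
print(loadings.shape)  # (3, 38)
```

With real (non-random) data, inspecting which items cross the .5 threshold on each factor is what yields a taxonomy like the one reported in Table 1.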

Results

The factor analysis of the TSP instrument identified three preferred tutorial styles: authority (wanting information to be presented); participation (preferring an interactive process); and enjoyment (a need for the tutorial to be entertaining and feel safe). Of these three preferred styles, enjoyment (mean 3.9 on a 5-point Likert scale where 5 means strongly agree) was the most endorsed, followed by authority (mean 3.88), with participation (mean 3.57) the least supported.

Table 1: TSP factors: items that correlate >.5 (Pearson correlation, sig. 2-tailed)

Authoritarian
- Clearly explained material (.67)
- Direct teaching (.66)
- Orderly and well-organised process (.64)
- Handouts (.63)
- Lots of information from tutor (.63)
- Clear learning outcomes (.59)
- Strong control/leadership (.58)
- Detailed guidelines (.58)
- Building blocks to create understanding (.56)

Participatory
- Interesting debate (.65)
- Lots of debate and argument (.64)
- Lots of different ideas (.63)
- Lots of participation by others (.60)
- Material that stretches your thinking (.59)
- Everyone encouraged to participate (.57)
- Open ended discussion (.57)
- Interactive discussion (.55)

Enjoyable
- Humour (.72)
- Entertaining (.64)
- Bright people (.51)
- Emotionally safe environment (.51)
- Interesting topic (.51)

Findings

Firstly, the findings of the TSP analysis suggest that unitarist generalisations about student engagement, or about what works best for students in tutorials, need to be replaced with a more pluralist approach. Some students are much more comfortable than others about joining, or even playing a controlling role in, debate and argument. The respondents differed most strongly on how highly they rated 'lots of debate and argument' (standard deviation 1.04), 'lots of participation by you' (standard deviation 1.04), 'entertaining' (standard deviation 1.04) and 'bright people' (standard deviation 1.05).

This suggests that variations between groups, which experienced tutors often attribute to the efforts or abilities of the students, may be the result of a tendency of groups to self-regulate the amount of participation to a level that matches the preferred tutorial style of most members of the group. This could either be the result of the sum of each student responding with his/her preferred style, or the product of the interaction of the students within the group. This is an area worthy of future research.

Secondly, the TSP evidence suggests there may be many individuals who do not respond well to being required to observably participate. One of the most significant contributors to 'enjoyment' (ranked 6th overall) was an emotionally safe environment (mean 4.1), which could arguably be threatened by summative assessment. Within the 'authority' factor, subjects particularly endorsed obvious relevance (mean 4.4), clearly explained material (mean 4.3) and strong direction by tutor (mean 4.2), all of which could be jeopardized by a high level of participation. Within the 'participation' factor, interactive discussion was the most important, but it was only ranked eighth overall with a mean of 4.0.

Rather, the findings suggest that assessment of tutorial participation may only evaluate whether the student responds best to the particular tutorial environment, tutorial task, or expected participative relationships between themselves and tutors/peers. By rewarding participation, the tutor may simply be rewarding students who find the requirement to participate in particular ways empowering and challenging, and disadvantaging those who find it threatening or overwhelming.

Thirdly, the TSP findings suggest that while students have preferences, most are open to, or accepting of, a range of approaches; there are degrees of endorsement and preference.
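The item-level descriptive statistics quoted in this section (means and standard deviations on the 5-point scale) are computed per questionnaire item. A minimal sketch with invented responses for a single hypothetical item (the real data are not reproduced here):

```python
import numpy as np

# Invented 5-point Likert responses from ten students for one
# TSP item, e.g. 'lots of debate and argument' (illustrative only).
responses = np.array([5, 4, 2, 3, 5, 1, 4, 3, 2, 5], dtype=float)

mean = responses.mean()
# ddof=1 gives the sample standard deviation; the paper reports
# standard deviations of around 1.0 for the most divisive items.
sd = responses.std(ddof=1)
print(round(mean, 2), round(sd, 2))  # 3.4 1.43
```

A large standard deviation on an item, as in this toy example, is what marks it as one on which respondents differ most strongly.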

Inferences

The first inference is that tutors need to take preferred learning environments into account in the assumptions they make about effective tutorials, and to help learners be strategic, make choices and develop coping behaviours, such as writing notes about what they want to say before speaking in the tutorial. This reinforces the evidence that tutors are effective when they speak respectfully to students, have a pleasant tone of voice, display encouraging body language, speak slowly, allow time for questions and wait for students to ask questions (Fritschner, 2000, pp 355-8), and when the learning environment involves small classes and problem-based learning (Ahlfeldt et al., 2005, p. 5).

A second inference is that summative assessment of tutorial participation is questionable in a number of ways. Increased observable participation may not make the tutorial a better learning experience for all students; making individual students participate may not enhance their individual learning; summative assessment may fail to motivate students to develop the particular skills required for effective participation (eg comprehension of pre-reading, confidence, oral fluency); and measuring observable participation may not effectively measure a student's engagement with the subject and/or relevant preparation and presentation skills.

Furthermore, attempts to force 'participatory' behaviour through summative assessment can introduce the counter-productive element of assessment anxiety. Experiments (eg Entwistle in Edinburgh and Marton in Sweden) found that students who felt threatened and anxious 'adopted a surface approach, whereas if they were relaxed and interested and not anxious they adopted a deep approach to the task' (Heywood, 1989, p. 81). Thus enforcing observable student engagement is not going to promote interactive learning for all students.

Where assessed outcomes are not compatible with the student's preferred style, using summative assessment to pressure students to accommodate the institution's (or teacher's) preferred learning process inappropriately shifts the onus of accommodating individual differences from the teacher to the student. The outcomes being assessed are no longer skills that students can simply choose to acquire, but become the product of factors that are out of the students' control. This creates a danger that assessment will not encourage students to acquire intended skills, but encourage avoidant and protective behaviours in those who are not comfortable, thereby diverting learning-energy to developing these behaviours.

Where tutorial environments, processes or assessed outcomes are not compatible with the student's preferred style, those students who have to work counter to their preferred styles can end up in difficulties (Smith, 2002, p. 66). Attempts to force, for example, introverts to perform in front of a class or 'analytics' to work in groups (Sternberg & Grigorenko, 2001, pp 12-17) will retard their learning and can destructively lead to blaming them for failure to adapt, thereby placing great strain on a student. Formative assessment and constructive feedback require the tutor's role to be constructed not as assessor and judge, but as 'enabling students to learn' (Briggs, 2000, p. 21), including identifying 'appropriate strategies for [each] individual' (Smith, 2002, p. 66).

The TSP results thus challenge assumptions that underpin summative assessment of tutorial participation; they add to the evidence that it is not an effective motivator for those who find it threatening and that it may, indeed, be counter productive for the learning of these students. They also add to the evidence that assessment of tutorial participation is not valid as it does not measure what it purports to measure. Thus, while tutorial participation may well help to develop cognitive understanding and social and oral skills for some students, and may be encouraged through appropriate teaching behaviour, serious issues of ethics and validity are raised when it becomes a summatively assessed aspect of a course.

If the goal of assessing observed participation is not just to motivate the behaviour but also to assess engagement with the material delivered through tutorials, it is necessary to go beyond monitoring oral participation, to maximise variation in assessment techniques so that what ends up being measured is not just the fit between style and learning environment (Logan & Thomas, 2002, p. iii; Zhang & Sternberg, 2000, pp 486-7). For example, the student could be given a choice between having verbal participation assessed or keeping a written journal of responses to issues raised in tutorials. This must, however, be done in a way that enhances rather than detracts from equity through comparability of criteria and feedback.

Conclusions

Engagement is a pluralist and complex concept, even in the relatively simple context of tutorial participation. It can neither be enhanced nor measured through simplistic approaches to observable participation.

The TSP results suggest that variations in the levels of observable tutorial participation from year to year, between different groups within the same year, and between different students in the same group should be expected and accepted. There are valid and acceptable reasons for these differences that go beyond the common reductionist ones based on ability or effort, to include a combination of preferred learning or relationship styles and the flexibility and adaptability of the students.

The outcomes of developing and testing the TSP reinforce these concerns. They have raised questions about whether a learning situation is more effective for all those involved if all students contribute, whether summative assessment is the best way to make students acquire participation skills, which student attributes are actually evaluated when participation is assessed and what skills assessment of observable participation encourages students to acquire.

This trial of the TSP suggests that optimum learning for each individual requires the tutor to understand the relationship between the tutorial environment and process and each student's needs. Thus effective tutorials will allow students to engage in a variety of equally valued ways to maximise their level of satisfaction and quality of outcomes, not use assessment to try to force a particular learning style. The TSP could be effective in helping tutors identify which tutorial styles are most effective for any particular cohort and in making students self-aware.

The trial of the TSP has also suggested that assessing students for their willingness to participate or their style of participation may be a matter of merely rewarding learning or behavioural-style preferences or assessment conformity. To counter the danger that in assessing participation we are primarily rewarding our students for being like us, applying the TSP in classes could help to raise tutors' awareness that their students respond to tutorials in ways that differ from the tutor's preferences.

There has been no attempt to correlate preferred tutorial styles with measures of academic achievement, and this would be a fruitful area for further study. It is arguable that in some courses participation should be assessed because it is seen as relevant to course objectives and professional outcomes; where the intent is to value and reward students for presenting in a particular way and/or for their adaptability and maturity, explicit, transparent and valid summative assessment may seem justified, the adoption of a particular style being necessary for success in that subject or course. In such cases, however, it is important not to confuse the ability to participate in tutorials with the ability to demonstrate role-specific and contextualised oral skills, such as giving evidence in court, interacting in a professional capacity with a patient or student, or making a professional presentation to a client. Context could provide a focus, and designated roles might provide confidence and purpose for those who would not normally proactively participate. This too would be a fruitful area for further research.

Appendix A: Instrument for measuring tutorial style preferences (TSP)

In the table below please rate the things that make a TUTORIAL work well for you, where 1 = not at all important and 5 = very important (each item is rated on the 1-5 scale):

1. Strong direction by tutor
2. Interactive discussion
3. Lots of participation by others
4. Lots of participation by you
5. Broad exploration of topic
6. Lots of analysis
7. Following up interesting ideas even if not directly relevant
8. Detailed guidelines
9. A calm atmosphere
10. Bright people
11. Humour
12. Clear learning outcomes
13. Direct teaching
14. Easily understood packaged knowledge
15. Orderly and well-organised process
16. Strong control/leadership
17. Formally structured debates
18. Discussion based on readings
19. Working in small groups
20. Not forced to participate
21. Doing written exercises
22. Students presenting papers
23. Material that stretches your thinking
24. Students in control, tutor facilitates
25. Lots of debate and argument
26. Entertaining
27. Interesting debate
28. Everyone encouraged to participate
29. Lots of information from tutor
30. Handouts
31. Overheads to work from
32. Clearly explained material
33. Building blocks to create understanding
34. Obvious relevance to subject/course
35. Open ended discussion
36. Lots of different ideas
37. An emotionally safe environment
38. Interesting topic

Acknowledgments

Associate Professor Ritter and Dr Covic's research into students' learning preference styles has been supported with a Charles Sturt University Scholarship in Teaching grant.

References

Ahlfeldt, S., Mehta, S., & Sellnow, T. (2005). Measurement and analysis of student engagement in university classes where varying levels of PBL methods of instruction are in use. Higher Education Research and Development, 24(1), 5-20.

Barsky, A. E. & Wood, L. (2005). Conflict avoidance in a university context. Higher Education Research and Development, 24(3), 249-64.

Briggs, A. R. J. (2000). Promoting Learning style analysis among vocational students. Education and Training, 42(1), 16-23.

Boniecki, K. A. & Moore, S. (2003). Breaking the silence: Using a token economy to reinforce classroom participation. Teaching of Psychology, 30(3), 224-227.

Brown, S. and Knight, P. (1994). Assessing learners in higher education. London: Kogan Page.

Chapman, D.M. & Calhoun, J.G. (2006). Validation of learning style measures: Implications for medical education practice. Medical Education, 40, 576-83.

Entwistle, N., McCune, V., & Walker, P. (2001). Conceptions, styles, and approaches within higher education: Analytic abstractions and everyday experience. In R. J. Sternberg & L. Zhang (Eds.), Perspectives on thinking, learning and cognitive styles (pp 103-136). Mahwah, NJ: Lawrence Erlbaum Associates.

Felder, R. M. & Brent, R. (2005). Understanding student differences. Journal of Engineering Education, 94(1), 57-71.

Ferrari, J. R., Wesley, J. C., Wolfe, R. N., Erwin, C. N., Bamonto, S. E. & Beck, B. L. (1996). Psychometric properties of the revised Grasha-Reichman Student Learning Scales. Educational and Psychological Measurement, 56(1), 166-172.

Fritschner, L. M. (2000). Inside the undergraduate college classroom. The Journal of Higher Education, 71(3), 342-62.

Furnham, A. (1992). Personality and learning style: A study of three instruments. Personality and Individual Difference, 13(4), 429-438.

Garton, B. L., Spain, J. N., Lamberson, W. R., Spiers, D. E. (1999). Learning styles, teaching performance, and student achievement: A relational study. Journal of Agricultural Education, 40(3), 11-20.

Genovese, J. (2006). Age, sex and thinking style predict academic avoidance. Individual Differences Research, 4(2), 123-128.

Heywood, J. (1989). Assessment in higher education. Chichester: John Wiley and sons.

Hart, M. U., (1992). Subsistence knowing. In S. B. Merriam, (Ed.) (1995). Selected Writings on Philosophy and Adult Education (pp. 239-247). Malabar, Florida: Krieger Publishing Company.

Ho, S. (2002). Encouraging on-line participation? In A. Bunker & G. Swan (Eds.), Focusing on the student (pp. 143-152). Mount Lawley, WA: Edith Cowan University. http://www.ecu.edu.au/conferences/tlf/2002/pub/docs/Ho.pdf

Howard, J. R. and Henney, A. L. (1998). Student participation and instructor gender in the mixed-age college classroom. The Journal of Higher Education, 69(4), 384-405.

Koob, J. J. and Funk, J. (2002). Kolb's Learning Style Inventory: Issues of reliability and validity. Research on Social Work Practice, 12(2), 293-308.

Logan, K. & Thomas, P. (2002). Learning styles in distance education students learning to program. In J. Kuljis, L. Baldwin & R. Scobie (Eds.), Proceedings of the Psychology of Programming Interest Group 14, pp. 29-44. London: Brunel University.

Marmolejo, E. K., Wilder, D. A. & Bradley, L. (2004). A preliminary analysis of the effects of response cards on student performance and participation in an upper division university course. Journal of Applied Behavior Analysis, 37(3), 405-410.

Martin, B. (1993). Increasing student participation in tutorials. Overview, 1(2), 8-11.

Marston, W. (1928). Emotions of normal people. Oxford: Harcourt Brace.

Myers, S. A. (2004). The relationship between perceived instructor credibility and college student in-class and out-of-class communication. Communication Reports, 17(2), 129-137.

Nightingale, P., Wiata, I. T., Toohey, S., Ryan, G., Hughes, C. & Magin, D. (1996). Assessing Learning in Universities. Kensington, NSW: University of NSW.

Philbin, M., Meier, E., Huffman, S. & Boverie, P. (1995). A Survey of gender and learning styles. Sex Roles, 32(7/8), 485-494.

Price, L., Richardson, J.T.E. & Jelfs, A. (2007). Face-to-face versus online tutoring support in distance education. Studies in Higher Education, 32(1), 1-20.

Renzulli, J. S. & Dai, D. Y. (2001). Abilities, interests, and styles as aptitudes for learning: A person-situation interaction perspective. In R. J. Sternberg & L. Zhang (Eds.), Perspectives on thinking, learning and cognitive styles (pp. 23-46). Mahwah, NJ: Lawrence Erlbaum Associates.

Riding, R. and Raynor, S. (1999). Cognitive styles and learning strategies. London: David Fulton Publishers.

Riding, R. (2001). The nature and effects of cognitive style. In R. J. Sternberg & L. Zhang (Eds.), Perspectives on thinking, learning and cognitive styles (pp. 47-72). Mahwah, NJ: Lawrence Erlbaum Associates.

Ritter, L. (2007). Unfulfilled promises: How inventories, instruments and institutions subvert discourses of diversity and promote commonality. Teaching in Higher Education, 12(5&6), 569-579.

Sadler-Smith, E. (2001). Does the learning styles questionnaire measure style or process? A reply to Swailes and Senior (1999). International Journal of Selection and Assessment, 9(3), 207-214.

Sadler-Smith, E. (1999). Intuition-analysis style and approaches to studying. Educational Studies, 25(2), 159-173.

Salter, D. W., Evans, N. J. & Forney, D. S. (2006). A longitudinal study of learning style preferences on the Myers-Briggs Type Indicator and Learning Style Inventory. Journal of College Student Development, 47(2), 173-184.

Smith, J. (2002). Learning styles: Fashion fad or lever for change? The application of learning style theory to inclusive curriculum delivery. Innovations in Education and Teaching International, 39(1), 63-70.

Spiller, P. (2005). Teaching as a focussed conversation: The use of incentive-based preparation exercises. Innovations in Education and Teaching International, 42(4), 305-312.

Sternberg, R. J. (2001). Epilogue: Another mysterious affair at styles. In R. J. Sternberg & L. Zhang (Eds.), Perspectives on thinking, learning and cognitive styles (pp. 249-252). Mahwah, NJ: Lawrence Erlbaum Associates.

Sternberg, R. J. & Grigorenko, E. L. (2001). A capsule history of theory and research on styles. In R. J. Sternberg & L. Zhang (Eds.), Perspectives on thinking, learning and cognitive styles (pp. 1-22). Mahwah, NJ: Lawrence Erlbaum Associates.

Sweeney, J., O'Donoghue, T. & Whitehead, C. (2004). Traditional face-to-face and web-based tutorials: a study of university students' perspectives on the roles of tutorial participants. Teaching in Higher Education, 9(3), pp 311-323.

Trilling, B. & Hood, P., (2001). Learning, technology, and education reform in the Knowledge Age. In C. Paechter, R. Edwards, R. Harrison and P. Twining, (Eds.), Learning, space and identity (pp.7-30). London: Paul Chapman Publishing/Open University.

Tyler, L.E. (1978). Individuality. San-Francisco: Jossey-Bass.

Wambach, C. & Brothen, T. (1997). Teacher self-disclosure and student classroom participation revisited. Teaching of Psychology, 24(4), 262-263.

Zhang, L. (2006). Does student-teacher thinking style match/mismatch matter in students' achievement? Educational Psychology, 26(3), 395-409.

Zhang, L. & Sternberg, R. J. (2000). Are learning approaches and thinking styles related? A study in two Chinese populations. The Journal of Psychology, 134(5), 469-489.

Authors: Associate Professor Leonora Ritter, School of Social Sciences and Liberal Studies, Charles Sturt University, Panorama Avenue, Bathurst NSW 2795. Email: lritter@csu.edu.au

Associate Professor Ritter is Head of the School of Social Sciences and Liberal Studies at CSU. She received a NSW Minister for Education and Australian College of Educators' (ACE) Quality Teaching Award in 2001 and last year was awarded a Higher Education Research and Development Society of Australasia (HERDSA) Fellowship. She is now a mentor and assessor for the Fellowship program. She is also Deputy Chair of the University's Learning and Teaching Committee.

Dr Tanya Covic is a lecturer in the School of Psychology, University of Western Sydney, with a PhD from the University of Sydney where she commenced teaching in 1996. Dr Covic's research interests include teaching and learning in higher education. In her previous research she has explored students' time management skills, stress and coping, and university program choices.

Please cite as: Ritter, L. & Covic, T. (2009). Understanding and accommodating students with apparently mixed levels of engagement. In Teaching and learning for global graduates. Proceedings of the 18th Annual Teaching Learning Forum, 29-30 January 2009. Perth: Curtin University of Technology. http://otl.curtin.edu.au/tlf/tlf2009/refereed/ritter.html

Copyright 2009 Leonora Ritter and Tanya Covic. The authors assign to the TL Forum and not for profit educational institutions a non-exclusive licence to reproduce this article for personal use or for institutional teaching and learning purposes, in any format (including website mirrors), provided that the article is used and cited in accordance with the usual academic conventions.

