Teaching and Learning Forum 98

Variation in student learning: A basis for improving teaching and learning

J. H. F. (Erik) Meyer
School of Education
University of Cape Town
Against a background of accumulating research evidence, this paper presents a conceptual framework of how evidence of variation in student learning can be used to initiate and inform a variety of teacher-focused and learner-focused interventions aimed at improving the quality of teaching and learning.

Within this framework the attributes of some contemporary models of student learning, or aspects thereof, are discussed. Examples of manifestations of student learning to which such models, or discrete aspects thereof, are 'fitted' for interpretative or evaluative purposes are presented and examined in terms of practical utility.

An example is presented of how a discipline-specific model of student learning can be developed that is sensitive to individual differences in the manner in which students engage the content and context of learning. It is argued, in particular, that a sufficiently sensitive model can form a basis for (a) initiating a process of change in terms of assisting teachers to reconceptualise their teaching and make it more student-focused, (b) identifying and assisting students who are potentially 'at risk' by virtue of what have technically been referred to as dissonant forms of study orchestration.


This paper addresses the question of how students in higher education vary in their engagement of the content and context of learning. It is argued that such variation, when exhibited, constitutes a form of teaching and learning management information that can inform educational practice in a number of beneficial ways. The challenge is how to (a) solicit and interpret such information and (b) use it to benefit teaching and/or learning in process terms.

To lay the foundation for an introductory model of student learning it may be observed that, for most students, 'learning' represents a purposeful activity, the outcome of which has to be externalised for scrutiny in some formally recorded sense. Teachers often use such formally observed outcomes (such as examination results) as a basis for speculating why, or how, students differ in relation to one another given essentially similar exposures to, say, a course of study in some subject. Attributions may thus typically be made in terms of variation in other unrecorded observables such as effort, enthusiasm, interest, and so on. So, intuitively, most teachers are able to admit the idea that some of the variation in observed outcomes has a possible explanation in terms of other unobserved sources of variation. An immediate difficulty is that only a few such explanatory sources of variation (how hardworking a student is, for example) are amenable to direct observation, and even then only to a limited degree.

Sources of variation

Some sources of explanatory variation in student learning are presented in Table 1. These sources are generally common to most learning contexts. More precisely, general purpose models of student learning that can be applied within, or that are transportable across, different discipline contexts usually reflect such common sources of variation that are attributable to contrasting forms of motivation, intention, process, and so on.

Table 1: Some explanatory sources of variation in student learning

Observable (as operationalised in responses to questions like those quoted) and dimension(s) of variation:

Intention - 'What are you trying to do?'
    Seek understanding - 'make personal sense'
    Memorise in order to understand
    Memorise in order to reproduce
    Understand in order to memorise
    'Play the system'
    Seek high levels of achievement

Motivation - 'Why are you doing it?'
    Intrinsic interest in object of study
    Pass examinations with minimal effort
    Perceived future relevance or application
    Need to meet external demands or fulfil 'duty'
    Enjoyment of peer competition

Process - 'How are you going about it? What are you doing?'
    Explore subject beyond formal requirements
    'Interrogate' subject - ask questions of it
    Relate new knowledge to prior knowledge
    Repeated rehearsal or recitation
    Practise on previous examination papers
    Various forms of study 'methods'

Context - 'How do you perceive the learning environment? What does the learning environment require of you?'
    Perceived criteria for success - task demands
    Attributes of learning materials
    Teacher's intention(s) - signalling of 'cues'
    Freedom of choice

Conception - 'What do you think "learning" means?'
    Accumulating information
    Transforming information into knowledge
    Changing as a person

Sources of variation thus exist that need to be externalised by students themselves in order to become accessible to an outside observer. The process of externalisation may take place via written or spoken responses to appropriate stimuli and, if carried out in a cooperative spirit, can yield data about the students, and about the context of response (a course, say), that can inform a variety of actions.

The focus here is on such data expressed in numerical form, and as typically captured in terms of inventory responses. A simple example of how a single source of variation such as 'disorganised studying' may be operationalised, and how variation associated with this source may be exhibited, is presented in Figure 1 in the form of a frequency distribution. The context of response for the data in Figure 1 is a course in one of the professions allied to medicine. The respondents are a small group of first year students (n=34) who volunteered data as part of a pilot project on student learning.

The shape of the distribution in Figure 1 conveys the presence of inter-individual variation in the data that, even in this simple descriptive example, can serve an evaluative function. At face value at least one student identified as 'Case 34' is exhibiting a high level of response (a score of 24 on a scale of 5-25) relative to the rest of the group. Viewed in isolation, a general pattern of disorganised studying is not cause for concern, but it could represent one feature of a more problematic pattern of learning engagement. In contrast, the student identified in Figure 1 as 'Case 24' is exhibiting a low level of 'disorganised studying'. So, on one dimension of conscious reflection, these two students are highlighting at least one difference between themselves.

Figure 1: Frequency distribution of disorganised study methods
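The kind of tally behind Figure 1 is straightforward to reproduce. The Python sketch below uses invented scores (not the study's data) on the same 5-25 scale, and flags unusually high responses of the 'Case 34' kind; the threshold of 20 is an arbitrary illustration, not a criterion from the paper.

```python
# Sketch: tallying inventory scores for a single subscale such as
# 'disorganised studying', scored 5-25 as in Figure 1. Data are invented.
from collections import Counter

def frequency_distribution(scores, low=5, high=25):
    """Return a {score: count} tally over the full scale range."""
    counts = Counter(scores)
    return {s: counts.get(s, 0) for s in range(low, high + 1)}

def high_responses(scores, threshold=20):
    """1-based case numbers (like 'Case 34') of unusually high responses."""
    return [i + 1 for i, s in enumerate(scores) if s >= threshold]

# Hypothetical responses from a small group (n here is illustrative only)
sample = [8, 11, 9, 14, 7, 12, 24, 10, 13, 9]
dist = frequency_distribution(sample)
flagged = high_responses(sample)  # the 7th respondent scores 24
```

Even this minimal tally makes the evaluative point of Figure 1: a single extreme case stands out against the shape of the group distribution.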

In any modelling process there is an interest in relationships between sources of variation. For the same student sample as in Figure 1, consider the scatterplot presented in Figure 2. In this case responses to 'memorisation' and 'fear of failure' exhibit a moderate linear association (positive correlation).

Figure 2: Scatterplot of memorisation vs. fear of failure
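The linear association read off Figure 2 can be quantified as a Pearson correlation coefficient. The sketch below uses invented score vectors (not the study's data) simply to show the computation.

```python
# Sketch: Pearson correlation between two subscale score vectors, as in the
# Figure 2 scatterplot. The two vectors are invented, illustrative data.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

memorisation = [10, 12, 15, 9, 18, 20, 11, 16]
fear_of_failure = [8, 11, 14, 9, 16, 19, 10, 15]
r = pearson_r(memorisation, fear_of_failure)  # positive correlation
```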

Thus, even the most basic representations of interrelationships between observed data can convey potentially useful information about the 'real people' that the data represent. But there are also limitations in terms of the inferences that can be made from such data, especially where there is a concern for the learning well-being of students. For example, it is clear from Figure 2 that the student identified earlier as 'Case 34' is, once again, signalling a pattern of response that is potential cause for concern. Concern might also be expressed in respect of 'Case 2'.

But when should such concern be translated into action? For an individual student, 'fear of failure' can represent either a positive or negative motivational influence depending on what else it is temporally associated with. An undifferentiated process of 'memorisation' can similarly be regarded as contributing to either the simple storage and recall ('cramming and dumping'), or to the internal transformation ('understanding'), of information. The difficulty here is that there is insufficient modelling information available to determine the possible consequences of what 'Cases 2 and 34' have disclosed thus far. Theoretically, though, it could be speculated that a high level manifested combination of 'disorganised studying', 'memorisation' and 'fear of failure', as exhibited so far by 'Case 34', is sufficient cause for concern; albeit one informed by a limited range of diagnostics. How much further multivariate complexity to admit really depends on how sensitive the model needs to be to individual differences.

Referring again to Figure 2, it may be noted that 'Case 2' performed well in the final examination and that 'Case 34' performed poorly. So there is a question as to where the real difference of consequence between these two students lies in terms of what they are capable of declaring about themselves. A partial answer may be found in an examination of the scatterplot, presented in Figure 3, of two more sources of variation that represent learning pathologies. There is again a clear separation between the responses of 'Cases 24 and 34' (as in Figure 1) and, at this stage of disclosure, there is no doubt that the overall pattern of learning behaviour exhibited by 'Case 34', compared to 'Case 2', represents a valid distress signal.

Figure 3: Scatterplot of globetrotting vs. improvidence

A foundation model of student learning

Theoretically, an intention to 'seek understanding' supported by a congruent motive (such as an intrinsic interest in the subject) and congruent processes (such as relating ideas within, and across, subjects, and the critical use of evidence in support of argument and conclusion) has more explanatory power than any single source of variation viewed in isolation. Such a structural combination of intention, motive and process constitutes the basis of what may be referred to as a broader composite 'meaning' dimension of variation in learning.

Similarly, an intention to memorise, a motivation based on fear of failure, a lack of process (inability to integrate new information), and a narrow focus on the syllabus, constitutes what may be referred to as a broader composite 'reproductive' dimension of learning. On theoretical grounds it can thus be hypothesised that, although there are multiple observables here, the underlying conceptual model basically consists of just two more complex dimensions of variation.

The degree to which such a hypothetical two-dimensional model can be fitted to observed data in a strict statistical sense can also be determined. Given that there are linear relationships in the data (as shown, for example, in Figures 2 and 3) it is possible to statistically determine, via a process known as confirmatory factor analysis, and subject to certain distributional assumptions, what the underlying linear (covariance) structure in any chosen set of observed data would need to look like in this case in order to satisfy the conceptual specification of the model. ('Covariance' can be thought of as a more generalised form of 'correlation'.)

The conceptual specification, in essence, is that the observables must unambiguously define two linear structures that are (negatively) correlated with one another. The degree to which the resultant mathematically specified structure corresponds to the observed structure essentially determines how well the conceptual model fits the data, and therefore the degree of confidence that can be placed in the observed model for inferential purposes. Note that the distinction being made here is that the conceptual model represents the theory, while the observed model represents the responses of 'real people'. In the example under discussion the degree of fit is in fact 'very good' in statistical terms as determined by a variety of 'goodness of fit' indices (for example, Steiger-Lind RMSEA = 0.009 as per SEPATH module in Statistica/w 5.0).
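The confirmatory analysis reported above requires specialist software (SEPATH, in this case). The underlying idea, however, is that the correlation structure among the observables can be summarised by a small number of latent dimensions; that much can be illustrated with a simple exploratory eigendecomposition. Note that this is not the paper's confirmatory method, and the 4x4 correlation matrix below is invented for illustration, not taken from the study.

```python
# Illustrative only: summarising an invented correlation matrix among four
# observables (two 'meaning'-type, two 'reproducing'-type) by its leading
# eigenvalues. Two dimensions accounting for most of the variance is
# consistent with, though much weaker than, a confirmatory two-factor fit.
import numpy as np

R = np.array([
    [ 1.00,  0.60, -0.20, -0.25],   # use of evidence
    [ 0.60,  1.00, -0.15, -0.30],   # relating ideas
    [-0.20, -0.15,  1.00,  0.55],   # fear of failure
    [-0.25, -0.30,  0.55,  1.00],   # memorisation
])

eigenvalues, eigenvectors = np.linalg.eigh(R)      # ascending order
order = np.argsort(eigenvalues)[::-1]              # largest first
explained = eigenvalues[order] / eigenvalues.sum() # proportion of variance
two_dim_share = explained[:2].sum()                # share of first two dimensions
```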

The observed model (that is, the empirical manifestation of the conceptual model based on the observed data) is presented in Table 2; this is what a compact multivariate model 'looks like'. The observed model in this case literally communicates an abstract conceptual 'picture' within which all the students differ from one another in varying degrees. Such 'pictures' can serve as a basis for evaluation in their own right; for example, in a dynamic sense, successive 'pictures' can be compared over time within courses to obtain an overall impression of the impact of the course on students' learning (Meyer and Scrivener, 1995). In this case the 'picture' is a static one; interpretations of the two composite dimensions of variation in the 'picture' (the factors labelled F1 and F2) are unambiguously indicated by the magnitudes of the tabulated numbers (technically called the factor loadings) which can be thought of as determining the relative 'emphasis' of the corresponding source of variation within each of the dimensions.

Table 2: An observed two-dimensional model:
'meaning' (F1) and 'reproducing' (F2)

Source of variation    Illustrative item within each source of variation      F1    F2

use of evidence        I am usually cautious in reaching conclusions
                       unless they are well supported by evidence             77     .
relating ideas         I try to relate ideas in this course to other
                       ideas in this course, or to ideas in other courses     73     .
deep approach          I often find myself questioning things that I
                       hear in class or read in books                         70     .
intrinsic motivation   I find that parts of this course can often be
                       really exciting                                        42     .
fear of failure        I am scared that I might fail this course               .    71
memorisation           I often find I have to memorise things that I
                       don't really understand                                 .    69
syllabus boundness     I tend to read very little beyond what's
                       required for completing assignments                     .    54
disorganised study     My habit of putting off work leaves me with far
                       too much to do before tests or exams                    .    37

Note: All loadings multiplied by 100 and rounded. Loadings with an absolute value less than 20 are tabulated as a period.
Inter-factor correlation = -0.34. Subscales substantively sourced from the Approaches to Studying Inventory (Entwistle and Ramsden, 1983).

More technically stated, a conceptually validated observed model presents a structural framework within which all the students contributing to the analysis vary relative to one another; in this case all the students vary simultaneously within two dimensions that, in conceptual terms, are basically incompatible. Furthermore, in terms of what has been observed in this case, and by virtue of the degree of 'fit' of the conceptual model, each student can be located within the structure of the model via estimates of what are technically referred to as 'factor scores'. A student may thus, for example, exhibit a low factor score on the 'meaning' dimension (F1) and a high factor score on the 'reproducing' dimension (F2). This is, in fact, the case for the student identified as 'Case 34'. On theoretical grounds such a student, and others with similar patterns of response (who would thus constitute an 'individual-similarity' subgroup), would be expected to exhibit a qualitatively poor learning outcome compared to a student with an opposite score pattern. There is thus an introduction to the concept of 'risk' in learning behaviour and, from a teacher perspective, a possible interest concerning the distribution of 'risk' within a given context of response.
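Locating students via factor scores, and flagging the dissonant pattern just described, might be sketched as follows. The scores and cut-offs here are invented illustrations, not the paper's estimates or criteria.

```python
# Sketch: flagging the 'Case 34'-like pattern of a low 'meaning' (F1) score
# combined with a high 'reproducing' (F2) score. Factor scores are invented,
# standardised values; the cut-offs are arbitrary illustrations.

def at_risk(factor_scores, f1_cut=-0.5, f2_cut=0.5):
    """Return case labels whose F1 score is low and F2 score is high."""
    return [case for case, (f1, f2) in factor_scores.items()
            if f1 < f1_cut and f2 > f2_cut]

scores = {
    'Case 2':  (0.3, 1.1),    # high F2 alone: ambiguous, not flagged here
    'Case 24': (0.8, -1.2),   # high 'meaning', low 'reproducing'
    'Case 34': (-1.4, 1.6),   # the dissonant, potentially 'at risk' pattern
}
flagged = at_risk(scores)
```

Students flagged together in this way would constitute an 'individual-similarity' subgroup of the kind the paper describes.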

Assisting students

The point here is that an observed model can be used to identify students within a range of exhibited response patterns (from 'high risk' to 'low risk') either directly, as already indicated, or indirectly via categorisation procedures as discussed in Meyer (1998). In either case the process is economical when dealing with large numbers of students, and it can provide 'early warning' diagnostic information that can inform teaching and learning activity in a variety of senses. The contemplation of even 'sterile' anonymous evidence of variation in student learning can provoke teachers to speculate about the nature of their contribution to such variation.

More fundamentally, concerned teachers can differentially respond to the learning needs and problems of students as an integral part of academic practice. Responses generally need to address individual, or individual-similarity subgroup, needs in terms of (a) supportively assisting students to become aware of, and sensitive to the possible consequences of, their engagement of learning and, (b) the assumption of some (teacher) responsibility in assisting students to take control over any indicated process of change.

Teacher-based responses that focus on individual students can assume different forms that range from relatively benign learning conversations in which an informed interest is shown in how students are engaging learning (based on what students have disclosed), to interventions that are either implicitly or explicitly intended to assist students to alter 'high risk' patterns of learning engagement. Studies by Meyer and Kaschula (1994), and Meyer, Cliff and Dunne (1994) describe practical examples of how some such teacher-based interventions have been approached and implemented.

A discipline-specific model

A limitation of what has been discussed thus far, notwithstanding the comments made about individuals represented in Figures 1-3, is the lack of sensitivity of general purpose sources of variation in discipline-specific response contexts. The problem is that while the theory that generally underpins observed models is transportable across different contexts, an observed model such as the one presented in Table 2 usually fails, for inferential purposes, to adequately reflect the theory (or empirically reconstitute it) in disciplines like, for example, mathematics or medicine. Questions thus arise within these disciplines, and in terms specific to them, as to what a more sensitive model of student learning, and its constituent 'building blocks' (the sources of variation), might look like.

Two sources of variation specific to the diagnostic process in clinical medicine are represented in the scatterplot presented in Figure 4, which is based on the responses of fifth-year medical students (n=110). 'Premature closure' refers here to a diagnostic outcome that has been arrived at prematurely; for example, a diagnosis based on history-taking without adequately considering the findings of the physical examination of the patient. 'Difficulty with likelihoods', as the label suggests, refers to an inability to assess diagnostic information in terms of likelihoods.

There is a considerable degree of inter-individual variation represented in Figure 4 that, in this case, can be interpreted and responded to strictly within the discipline. Students represented in the top right hand region of Figure 4 are legitimate objects of professional concern and curiosity. Bluntly put, there is no point in pretending that the individual-similarity subgroups at opposite ends of the regression line are similar in this example. Given additional sources of variation that are also discipline-specific it becomes possible to construct discipline-specific model forms that, as Meyer and Cleary (1997) have illustrated, can facilitate the identification of students exhibiting potentially 'high risk' forms of clinical reasoning.

Figure 4: Scatterplot of difficulty with likelihoods vs. premature closure

Summary and conclusion

From a teacher perspective the benefits of 'fitting' a conceptual model to observed data can be simply stated: a validated observed model of student learning can capture dimensions of variation in contextualised learning behaviour in a (linear or non-linear) structural sense; it thus provides a framework (or 'picture') within which individual student responses representing 'real people' can (a) be located, that is, 'positioned' within the structure(s) of the model, and thus (b) interpreted (in either absolute theoretical terms within the model, or comparatively in relation to other students' responses within the model) for (c) evaluative, (d) diagnostic, (e) intervention and (f) inferential purposes. That is, the horizon of what a teacher might normally observe about students, and the corresponding locus of academic response, can both be extended by addressing student-based evidence of contextualised variation in learning behaviour.


References

Entwistle, N. and Ramsden, P. (1983). Understanding student learning. London: Croom Helm.

Cleary, E. G. and Meyer, J. H. F. (1997). The Conceptions and Experiences of Diagnosis Inventory. Department of Pathology, University of Adelaide.

Meyer, J. H. F. (1998). A medley of individual differences. In B. Dart and G. Boulton-Lewis (Eds), Teaching and Learning in Higher Education: From Theory to Practice. Camberwell: Australian Council for Educational Research, in press.

Meyer, J. H. F. and Cleary, E. G. (1997). Towards an 'interference' model of student learning in medicine. Symposium paper, Seventh European Conference for Research on Learning and Instruction, Athens, August 26-30.

Meyer, J. H. F., Cliff, A. and Dunne, T. T. (1994). Impressions of disadvantage. II - monitoring and assisting the student at risk. Higher Education, 27, 95-117.

Meyer, J. H. F. and Kaschula, W.A. (1994). Helping engineering students to learn better: The concept and creation of a learning 'hot seat'. In A. J. Smith (Ed.), Engineering Education: Increasing Student Participation. Sheffield: Sheffield Hallam University, 294-300.

Meyer, J. H. F. and Scrivener, K. (1995). A framework for evaluating and improving the quality of student learning. In G. Gibbs (Ed), Improving student learning through assessment and evaluation. Oxford: OCSD, 44-54.

Meyer, J. H. F. and Watson, R. M. (1991). Evaluating the Quality of Student Learning. II - study orchestration and the curriculum. Studies in Higher Education, 16, 251-275.

Please cite as: Meyer, J. H. F. (1998). Variation in student learning: A basis for improving teaching and learning. In Black, B. and Stanley, N. (Eds), Teaching and Learning in Changing Times, 214-222. Proceedings of the 7th Annual Teaching Learning Forum, The University of Western Australia, February 1998. Perth: UWA. http://lsn.curtin.edu.au/tlf/tlf1998/meyer.html
