
Category: Professional practice
Teaching and Learning Forum 2008 [ Refereed papers ]
Assessing tutorial participation and participation in assessing tutorials: A teaching intern's experience

Gillian Abel
School of Social and Cultural Studies
The University of Western Australia

This paper combines a discussion of the assessment of class participation in tutorials with some comment on my own experiences as a teaching intern. Utilising interviews with discipline based academic staff, I sought to gain an appreciation of the basis on which participation marks were allocated. This was done in light of charges that participation marks are overly subjective, a concern I had in relation to my own marking, and also that they are used as a 'fudge factor' or discretionary mark. In the process of this investigation I became aware of a noteworthy perception that academic developers and discipline based academics are at cross purposes. This disjuncture appears to emerge from the perception that academic development is based upon a deficit model and also that much of its advice is unworkable in a context where casualisation of teaching and increased bureaucratisation are having a negative impact. While I empathise with the latter, I can see areas where critical engagement with academic developers can be productive and beneficial for both discipline based academics and, of course, students. Although this paper is largely a reflection on my own limited professional practice, it raises some important issues regarding the dissemination of research by academic developers and its acceptance by discipline based academics.


Having, at the time of writing, recently completed a teaching internship year for beginning teachers at The University of Western Australia (UWA), I feel particularly aware of the contentions surrounding the assessment of student work. In this paper I focus on one particular area of assessment identified as problematic, that concerning student participation in tutorials. Some time ago, The Centre for the Advancement of Teaching and Learning (CATL, 1999) at UWA published a series of questions regarding assessing class participation in its newsletter Issues of Teaching and Learning. Readers were encouraged to ponder, amongst other questions, what is meant by participation and how it might best be assessed. In this paper I present my findings from a discussion of these issues with some members of the academic staff in the Discipline of Anthropology and Sociology at UWA, coupled with my own thoughts and experiences. The findings are not intended to be representative; rather I sought, in the tradition of my discipline, to shed some light on the lived reality of assessing class participation from the lecturers' perspectives.

As a result of the discussions held I have also gained an increased awareness of the tension which sometimes exists between discipline based academics and academic developers. I use these terms in the absence of better alternatives but recognise that the two roles cannot be so easily differentiated. While aware that discussion of this tension strays from the original focus of the project on assessing classroom participation, I feel it is important to recognise the way such a tension impacts upon collaboration between the two groups and ultimately on learning.


In this paper I take a qualitative approach to investigate the methods employed by discipline based academics when assessing class participation and, more broadly, their thoughts on this component of assessment. The qualitative approach is deemed particularly appropriate in that assessing participation is often viewed as an overly subjective method. By engaging in conversations, the study provided an opportunity for academics to elaborate on their individual positions in relation to assessing participation, eliciting information which may not have come through in a quantitative study. Five members of staff were interviewed for the study, although the work is also informed by less formal conversations between myself and other members of the discipline. The discussions focused on a particular question brought up in Issues of Teaching and Learning, that is, 'What criteria do you use to assess participation in tutorials?' All participants addressed this question but also tended to include broader meditations on the context in which this assessment was occurring, such as increasing student numbers, increasing use of sessional teaching staff and increased bureaucratisation. The discussions were supplemented by a survey of the unit outlines of all undergraduate units offered in the Discipline in the academic year 2007, and both are discussed with reference to the broader literature on the topic.


The results of this investigation are not intended to be representative; rather, in deference to the social sciences and Anthropology in particular, I hope to offer some 'depth' to the issue. I further recognise that garnering the students' viewpoint on the subject of assessing classroom participation would have enhanced the results. While this was considered beyond the scope of the project, it does offer a potential future area of study. Joughin (2007), who focuses on oral presentations in his discussion of 'variations in students' conceptions of common academic tasks', notes that there seems to be little recent work on oral assessment from a student point of view.

I also think that this is an appropriate point to flag my own limited experience in respect to both the academic and the teaching and learning aspects of this discussion. Tai Peseta, an Associate Lecturer in the Institute for Teaching and Learning at The University of Sydney, calls for 'writing in academic development which is experimental, vital and vulnerable' (Peseta, 2007). As a postgraduate student who has chosen in this instance to critique some aspects of the discipline I am working in, I can attest to feeling the latter most strongly. I hope that my interpretations of the situation do not cause offence and, at the risk of displaying my naivety, I hope that my 'newness' to the topic might prove productive.

Background and Context

Since 2001 an Outcomes Based Education (OBE) approach has been endorsed at UWA. This is a student centred approach which focuses on facilitating learning rather than upon teaching. This approach is important in the context of this investigation of the assessment of participation in tutorials due to the not infrequent accusations that this mode of assessment is overly discretionary and that it amounts to little more than a 'fudge factor' used to adjust marks allocated for written examination (CATL, 1999). For proponents of OBE, the introduction of criterion referenced assessment would serve to eliminate some of these concerns. However, as my discussions with discipline based academics show, there is an aversion to the introduction of what is seen as a 'culture of compliance' (Manathunga, 2007, p.29) characterised by neoliberal attributes such as rationality and accountability, amongst others. These concerns go beyond the university setting, as evidenced by UWA's decision to change the nomenclature to Student Learning Outcomes (SLO) in 2006 in order to distance its approach from controversy surrounding OBE in Western Australia's secondary schools (CATL, 2006a).

The SLO and assessment approach at UWA recognises differences between criterion referenced assessment and norm referenced assessment and offers a UWA centred hybrid model. This model, according to an overview distributed by the Centre for the Advancement of Teaching and Learning at UWA (CATL, 2006b), is reflective of current good practice and promotes a 'slow but steady progression towards increased accountability for awarding of marks.' The preceding statement suggests a 'softly softly' approach to the introduction of SLO at UWA; however, in my investigation of assessing participation, it soon became obvious that, despite the implication that there would be no dramatic changes, the message was received slightly differently. There remains a perception amongst the discipline based academics that I talked to that the implementation, aims and practice of a SLO approach signal, amongst other things, a reduction in spontaneity in teaching and learning coupled with an increased administrative workload. Others would argue that these perceptions are erroneous, but they are nonetheless real. Objections to an OBE or SLO approach were raised each time I suggested the implementation of criteria for the assessment of participation in tutorials.

Unlike Cooper (2005, p.128), who found that 'dissent from staff' was a barrier to introducing new tutorial activities related to assessment in the Law Faculty at Queensland University of Technology, my investigation did not uncover a sense of outrage at the attempted imposition of new ideas. Rather, I noted a sense of disillusionment with the university as an institution, and hence the academic developers employed to deliver its message. In the words of one of my participants who saw their own role as trying to ignite a passion for learning in students:

Learning is about lighting a fire rather than filling a container. Filling a container and ticking the boxes on what your outcomes are seems to me at odds with lighting a fire and also does sort of make you think, well they don't trust me with what I am doing.
Similarly the following was noted by another participant:
It seems like everything in the structure of the university is set up not to foster good committed teaching... people are... supposed to be trained and teach these techniques that are supposed to do it but the structure doesn't allow it.
The above quotation suggests an interest in new techniques but also seems to lament the way they are foisted upon teaching staff. As already noted, these statements were made with an air of disillusionment. It is probably worth noting at this point that the discipline in question has recently undergone quite substantial staff changes, which could arguably account for some of the sentiments expressed, although this was not an issue raised in the interviews. It is important to point out that the disillusionment stood in stark contrast to the enthusiasm with which these academics spoke about teaching, and about tutorials in particular. All participants displayed a particular keenness for tutorial group style teaching. This was manifested in such statements as 'The real learning is cemented in tutorials' and 'they are more satisfying than just about any other part of teaching'. So it appeared that I was not observing disillusionment with teaching itself but rather with perceived expectations of the discipline based academics which were seen to be coming from an institutional level, delivered by academic developers, and therefore at least partially beyond the discipline based academics' control.

The intern experience, 'betwixt and between'

My discussions with staff members compounded the feeling of being 'betwixt and between' which I had been experiencing since the beginning of my teaching internship. In the first few weeks I was cautioned by a senior member of the discipline staff, not one of those whom I interviewed, 'not to let that lot at CATL take over, your priority is your thesis'. This could be viewed as pragmatic advice, given that the time constraints on PhD students are well known, but the statement suggested that there was a gulf between the teaching and research components of my scholarship, not that the two were interconnected. This feeling was compounded on more than one occasion when I initiated discussion of ideas from the internship programme in the context of my teaching in the discipline, only to have them summarily dismissed, usually with the caveat that these sorts of things would be fine were there the time to contemplate them. Concurrently I felt under a certain pressure from the academic development side to conform to a particular style of teaching, one which essentially checked all the boxes in relation to, for example, running a good tutorial or giving a great lecture. My own experience was mirroring divisions others have encountered and in this respect, I feel fortunate to have discovered the work of Catherine Manathunga (2007) and others who call themselves the Challenging Academic Development Collective (CAD). CAD's work emphasises the many commonalities between the work and concerns of academic developers and discipline based academics, which they argue can be put to productive use. Manathunga is a historian who, in the context of short term contracts and funding restrictions, turned academic developer. She relates her position to the concept of 'unhomeliness' conceived of in the work of Homi Bhabha, noting that
In some senses, when we migrate from other disciplines, academic developers may experience this dislocating need to re-invent ourselves, especially if the casualisation of academic work and chronic restructuring have forced our 'travel' in search of work.
Manathunga goes on to note her own concerns with being on what she terms the 'front line of the quality agenda' (2007, p.30). She also believes, however, that there are positive results to be gained from the implementation of some aspects of academic development if discipline based academics and academic developers alike treat the pedagogical/andragogical literature with the same respect and rigorous critique they afford other academic work. It is only through this engagement, Manathunga proposes, that the benefits of the considerable amount of research done in academic development will be recognised.

It would appear that the perceived discrepancies between academic developers and discipline based academics are based on the assumption that academic development is centred on a deficit model and is therefore critical of those it seeks to engage with. One participant in this study commented, 'I am really against trying to say that there is such a thing as best practice', and that 'the more you set down criteria and attempt to routinise, the less to me that is good teaching'. The implication in both statements is that trying to fit academics to a prescribed way of teaching is unwelcome. Sharon Fraser (2006) of the Centre for Professional Development at Macquarie University in NSW asserts that academic developers and discipline based academics are from different 'discourse communities', and that gaps in understanding between the two about the meaning of common phrases such as 'quality teaching', 'innovative practice' and, arguably in my example, 'best practice' will continue to be the source of misconceptions unless a shared dialogue can be established.

Assessment of tutorial participation

In order to highlight some similarities between the work of the two apparently disparate groups, and to offer some potential for compromise, I now turn attention to the assessment of tutorial participation, which, as Armstrong and Boud (1983, p.34) note,
differs from the viva voce and most other assessment activities, in that it forms an integral part of the teacher/learning process...students are assessed in the same context in which they learn within the dynamics of group discussion and participation.
This of course places an increased demand on the teacher, who must not only lead the discussion but also make judgements on the performance of each learner. It is here that an argument for having assessment criteria to follow can be made. However, the academics whom I interviewed did not have explicit written criteria for the assessment of tutorial participation, although they did employ variations of tick sheets to record student participation and stressed that they emphasised in class what the assessment requirements were. These practices struck me as much closer to the concept of assessment criteria than the academics' remarks would have suggested, and prompted me to think that finding some middle ground was possible. UWA (1997) has produced a list of Minimum essentials for good practice in assessment; the following three, isolated from a list of twelve, relate most closely to the assessment of tutorial participation and could arguably be further developed in many of the units offered in Anthropology and Sociology. They state:
A4. That each unit co-ordinator provide to students a unit outline, which contains a written statement of the unit learning outcomes aligned with assessment tasks and explicit marking criteria for each assessment task, no later than the second week of the semester in which they are offered.

A8. That an assessor's comments on any piece of assessed work should indicate the strengths and the weaknesses of that work in relation to the specified learning outcomes.

A9. That there be a method of feedback and analysis of any assessment component which forms part of the assessment process to the students. Wherever practicable, this should also apply for formal examinations.

In line with these recommendations, I am supportive of the view that the more explicit we can make our expectations, the better for the students. At the same time, having spent my internship year working with two sessional lecturers whose contracts started around two to three weeks before the start of semester, I am also aware that the conditions for implementing such practices in one's teaching are far from ideal. As I noted earlier, there are structural issues at play here which mean that invoking notions of best practice, or good practice as it is labelled in this particular example, can be construed as a personal insult to those trying their best under difficult circumstances. In one interview, time constraints were noted as an obstruction to being more proactive in implementing some of the suggestions offered by academic developers. For example, when I suggested peer assessment as a component of a classroom participation mark, the following was offered:
I did it in my first year of teaching ... where everybody had to assess the presenter and ... it required a lot of energy and effort but it was very successful and enjoyable but I'd never do it again in a pink fit because I haven't got time or energy because of all these other pressures.
This is a valid concern, although I can also see that, once such criteria are in place, their benefits may include a higher standard of completed work from students who are clear about what is expected of them, as well as a more efficient assessment process.

Of the fourteen units offered in the Discipline of Anthropology and Sociology in the 2007 academic year, all awarded marks for student attendance and participation in tutorials. The marks allocated ranged from ten to thirty per cent of the overall unit mark. In the unit where the mark for tutorial participation stood at thirty per cent of the overall unit mark, the students were offered the following information:

You will be assessed according to how well you prepare for the tutorial session, your contributions to the discussion, the questions you pose, and your preparedness to engage thoughtfully and analytically with the topics at hand.
This constituted the most written information given to the students in regard to what was expected of them and how they would be assessed in the tutorial participation component of their marks. Armstrong and Boud (1983, p.43) argue that in assessing classroom participation, if it is not to become 'a subjective, impressionistic and unreliable measure', then '[c]lear and explicit criteria are necessary'; their argument supports the recommendations in the Minimum essentials for good practice mentioned earlier. Other unit outlines are less explicit; one, which allocates ten per cent of the overall unit mark to participation and attendance, notes only 'that this mark depends on participation in discussion'.

Despite this lack of documented criteria or outcomes, there was considerable agreement amongst the discipline based academics as to what constituted participation, evidenced by comments such as 'the challenge is to make it work well by structuring it in such a way that it invites or encourages informed oral delivery' and 'it's not just that they speak, it's that I can tell from what they have said that they've done the reading, that is very important to me.' One academic also stated:

Verbally I try to give a sense in the first tutorial that I expect students not just to retain the readings but that I'd like critical comments on the readings, where do they disagree, where is it not consistent with their experience. I value contestation, especially contestation of my own views, and I kind of make it clear that you are going to get more out of the unit by contesting, by calling my views into question as such. One is always graded on how well one defends one's stance.
Another stated that it is about:
The skills of engagement, like not talking over or dismissing one's fellows or not being... open to listening to other people... [valuing] the kind of contributions to the discussion that furthered it and that opened doors.
The theme of quality of discussion rather than merely quantity was present in all the interviews, as was recognition of the importance of tutorial participation.

Being able to articulate one's thoughts verbally was considered at least as important for the students as being able to do so in a written examination. This was expressed in comments such as the following:

I often will say to students on the first day look use this as an opportunity to learn how to speak about intellectual topics in a group setting because this is a very useful skill.

Speaking in a sort of serious venue is a very, very important skill and it is one which it is really hard to imagine people not needing, there may be some occupations that you don't but most, especially those I associate with arts students, you would need.

I think that tutorial participation and a separate mark for it is really important particularly in the Australian university system because it is the only place that we actually invite, encourage and assess oral performance.

If you are trying to evaluate students in all of their different aspects I think it is important to have that participation count as something... being able to articulate ideas orally is as important as articulating them in written form.

These comments correspond with the sentiments of Armstrong and Boud (1983, p.34), who note that 'There has been an increased recognition in recent years that the ability to communicate effectively in group situations is an important skill required by graduates.' Indeed, the participation component of the mark was highly valued amongst the academics despite the lack of, or limited, written articulation of this in the unit outlines.

Prosser and Trigwell (1999, p.11) argue that good teaching practice is, amongst other things, 'about teachers developing a coherent and well-articulated view of what they are trying to achieve and how they are planning to achieve that outcome'; similar sentiments are expressed by The Higher Education Academy (2006). While all of the academics I interviewed articulated their view of what participation in tutorials meant to them, and reported that this was verbally disseminated in tutorials, this did not always translate into written articulation of the ideas in the form of criteria. Sadler (1989, p.127), referring to teachers, uses the term 'guild knowledge' to describe knowledge built upon experience, but notes that this type of knowledge sometimes leaves the concept of the standards required inaccessible to the student. This was acknowledged by one person I interviewed, who noted, 'You know as I am talking to you I am thinking that being clear about what is expected in a tutorial is pretty important and I am not sure that that is something that always happens.' Again, this suggests that, although there is a perception of different goals between the discipline based academics and their academic development colleagues, the gap between the two is not insurmountable.

Thus, while the importance of oral participation is being communicated to the students verbally, and while there is a strong belief that the assessment of class participation is important in promoting essential skills, these views are not reflected or articulated in the form of written assessment criteria which could benefit the students and at the same time satisfy some of the demands of university administrators. It would appear that this is something which should not be too difficult to address. Setting out in writing why there are participation marks and what makes a good participant seems to me to be something which could begin at a discipline level, with the potential for greater refinement within particular units. In an example of why she feels that her role as an academic developer is important, Manathunga (2007) describes a particular project she worked on to produce a list of graduate attributes. In this instance Manathunga states that, despite her initial scepticism, by the end of the project she was able to see the benefits the list provided graduates applying for work. Despite concerns about accountability and surveillance as driving forces behind the desire to implement many changes, changes which are, it must be noted, increasingly necessary for universities to secure funding, Manathunga is also cognisant of the flow-on benefits to students in this particular example.

Suggestions for implementing assessment criteria for tutorial participation

Having established the importance of the participation component of the overall unit mark, and that there is already a strong sense of what is required from the students, I will now consider how this could be developed into criteria for adoption within the discipline. Biggs (2003, p.171) argues, in relation to criterion referenced assessment of extended prose, that 'it would be very valuable if staff in a department collectively clarified what they really are looking for under these, or other, headings.' Some units in Anthropology and Sociology at UWA do provide a list of criteria used in written assessment, and it can reasonably be asked whether something of similar structure could be devised for the assessment of tutorial participation. Based upon the discussions detailed above, it would appear that there is some degree of consensus regarding what constitutes participation and its value across all units and years of study in the Discipline of Anthropology and Sociology; therefore it would seem that the foundations are there. The model proposed by Armstrong and Boud (1983, p.38), devised in discussion with a group of students, offers a reasonable starting point. They propose three classifications of criteria with associated examples:
(1) Characteristics of an individual's contribution
      Cognitive: logic, objectivity, knowledge, creativity
      Expressive: clarity, fluency, conciseness
      Affective: enthusiasm, interest
(2) Contribution to the process of learning: recognition of the responses to others, constructive criticism, contribution to group climate, relevance.
(3) Inferred preparation: amount, consistency, timeliness.
These criteria seem to cover most of the expectations raised in my discussions with the discipline based academics.

A slightly different approach is taken by Tyler (2000) in his Class Participation Assessment Guidelines as can be seen from the following extract:

Class participation in this course: Class attendance is required and students are encouraged to contribute to class discussion. Participation is the key to a lively class. 15% of the course grade will depend upon contributions to our class sessions. Class participation provides the opportunity to practice speaking and persuasive skills, as well as the ability to listen. Comments that are vague, repetitive, unrelated to the current topic, disrespectful of others, or without sufficient foundation will be evaluated negatively. What matters is the quality of one's contributions to the class discussion, not the number of times one speaks.
Guidelines for evaluating participation
Outstanding Contributor: Contributions in class reflect exceptional preparation. Ideas offered are always substantive, provide one or more major insights as well as direction for the class. Challenges are well substantiated and persuasively presented. If this person were not a member of the class, the quality of discussion would be diminished markedly.
Good Contributor: Contributions in class reflect thorough preparation. Ideas offered are usually substantive, provide good insights and sometimes direction for the class. Challenges are well substantiated and often persuasive. If this person were not a member of the class, the quality of discussion would be diminished.
Adequate Contributor: Contributions in class reflect satisfactory preparation. Ideas offered are sometimes substantive, provide generally useful insights but seldom offer a new direction for the discussion. Challenges are sometimes presented, fairly well substantiated, and are sometimes persuasive. If this person were not a member of the class, the quality of discussion would be diminished somewhat.
Non-Participant: This person says little or nothing in class. Hence, there is not an adequate basis for evaluation. If this person were not a member of the class, the quality of discussion would not be changed.
Unsatisfactory Contributor: Contributions in class reflect inadequate preparation. Ideas offered are seldom substantive, provide few if any insights and never a constructive direction for the class. Integrative comments and effective challenges are absent. If this person were not a member of the class, valuable air-time would be saved.
Again, these offer a good starting point for establishing some of the expectations we have of student participation. At the same time, I would feel uncomfortable about the last sentence at each criterion level, which states whether or not the class discussion is diminished by the presence or absence of a particular student; in my adaptation of these criteria, the last sentence of each would therefore be omitted. The benefit of modifying criteria which have already been created is that individual teachers have no need to start from the beginning. As one of my interviewees pointed out in relation to workload:
You know if I was just teaching [not also engaged in research] then maybe I would but unless... someone can develop a package where I know exactly what to do and how to implement it...there aren't many models for this...write it down clearly enough so I can follow it.
This statement suggests that there is a potential use for some generic criteria which can then be modified for the individual units and also for making discipline based academic staff aware of what already exists in the form of various tried and tested models.

Of course, the implementation of a list of criteria for assessing classroom participation should not be seen as a panacea; as pointed out by one of my interviewees, merely stating the facts may not improve the final product:

you know McDonalds may list all the ingredients they put in their food now and such but doesn't mean they make better food than a locally owned place that cook fresh things and you know doesn't tell you at all what is in there. You know this is not a recipe for better classes necessarily.
I believe that this is a valid point: having a set of criteria which is seen merely to fill an administrative gap, with little other engagement with the concept, is of little benefit to either the students or the staff. Despite such reservations, I think there may be room for compromise and the adoption of criteria within the discipline concerned, something suggested in the following statement:
I guess I object less to explicitness than I do to uniformity I mean I do think there is some good aspects of the outcomes based model in terms of making things explicit and linking assessment to the aims more closely and it is something that I think I have improved ... but I am really against trying to say that there is something, such a thing as best practice.
The first sentence in this quotation is positive about the adoption of assessment criteria, but it is qualified by the perception of academic development as being largely based on a deficit model. In this respect I think it nicely highlights what I have been trying to get across in this paper: that those perceptions, whether erroneous or not, are preventing change.

Lyn McAlpine is an academic developer who argues in a recent opinion piece that there is a need for change, suggesting 'giving ownership to the disciplines - letting them focus on what they need and want to learn, acting as a collaborator throughout the process, and avoiding driving the decisions about what is important and how to assess it' (2006, p. 126). McAlpine's stance is controversial, but her personal view is that such an approach would strengthen her discipline, that is, academic development. Arguably, those I have referred to earlier as discipline based academics may need to take a similar look at their own positions in relation to academic development and their own teaching practice in order to break the stalemate. As Rowland (2007, p. 162) notes, 'if academics are to expect a critical engagement on the part of their students, one must expect no less of them as they struggle to understand their own professional practice of teaching. If higher education is a critical business for students, so must it be for their teachers.' This is a sentiment which I will endeavour to bear in mind should my teaching career progress beyond the teaching internship.

Conclusion

As someone new to teaching, and with very little practical experience, the work of the Challenging Academic Development Collective has given me a new perspective from which to view teaching and learning in that it sets out to recognise commonalities between the academic development endeavour and that of discipline based academics. It has helped to resolve the sense of feeling betwixt and between which has dominated my thoughts throughout the internship programme. I empathise with the concerns of the academics in my discipline over managerialism and seemingly ever increasing workloads. At the same time I recognise the value of engaging with research carried out in the name of academic development. Hopefully my focus on the assessment of tutorial participation demonstrates that there is potential for compromise and that straddling the gap may become less of an undertaking for future teaching interns.

References

Armstrong, M., & Boud, D. (1983). Assessing participation in discussion: An exploration of the issues. Studies in Higher Education, 8(1), 33-44.

Biggs, J. (2003). Teaching for quality learning at university (2nd ed.). Buckingham: SRHE and Open University Press.

Centre for the Advancement of Teaching and Learning (1999). Assessing class participation. Issues of Teaching and Learning (Vol. 4). http://www.catl.uwa.edu.au/publications/ITL/1999/4/assessing

Centre for the Advancement of Teaching and Learning (2006a). Implementing student outcomes at UWA. [viewed 15 Oct 2007] http://www.catl.uwa.edu.au/current_initiatives/obe2

Centre for the Advancement of Teaching and Learning (2006b). Student learning outcomes at the University of Western Australia. [viewed 17 Sep 2007] http://www.catl.uwa.edu.au/__data/page/77891/SLO_final.pdf

Cooper, D. (2005). Assessing what we have taught: The challenges faced with the assessment of oral presentation skills. Proceedings HERDSA 2005, University of Sydney, Australia. http://conference.herdsa.org.au/2005/pdf/refereed/paper_283.pdf

Fraser, S. P. (2006). Shaping the university curriculum through partnerships and critical conversations. International Journal for Academic Development, 11(1), 5-17.

Joughin, G. (2007). Student conceptions of oral presentations. Studies in Higher Education, 32(3), 323-336.

Manathunga, C. (2007). 'Unhomely' academic developer identities: More post-colonial explorations. International Journal for Academic Development, 12(1), 25-34.

McAlpine, L. (2006). Coming of age in a time of super-complexity (with apologies to both Mead and Barnett). International Journal for Academic Development, 11(2), 123-127.

Peseta, T. (2007). Troubling our desires for research and writing within the academic development project. International Journal for Academic Development, 12(1), 15-23.

Prosser, M., & Trigwell, K. (1999). Understanding learning and teaching: The experience in higher education. Buckingham: SRHE and Open University Press.

Rowland, S. (2007). Academic development: A site of creative doubt and contestation. International Journal for Academic Development, 12(1), 9-14.

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.

The Higher Education Academy (2006). Teaching, some very short introductions: assessment and feedback. [viewed 17 Sep 2007] http://www.c-sap.bham.ac.uk/resources/guides/assessment.htm

Tyler, J. (2000). Class participation assessment guidelines. [viewed 17 Sep 2007] http://www.brown.edu/Departments/Italian_Studies/dweb/pedagogy/particip-assessm.shtml

UWA. (1997). Minimum essentials for good practice in assessment. [viewed 17 Sep 2007] http://www.secretariat.uwa.edu.au/__data/page/20809/Min-Essen-Good-Pract.pdf

Author: Gillian Abel is a Postgraduate Student in the second year of her PhD looking at British migrant women in Western Australia and their social networks. Over the past academic year she has been a participant in the Postgraduate Teaching Internship Scheme run by the Centre for the Advancement of Teaching and Learning at UWA. Email: abelg01@student.uwa.edu.au

Please cite as: Abel, G. (2008). Assessing tutorial participation and participation in assessing tutorials: A teaching intern's experience. In Preparing for the graduate of 2015. Proceedings of the 17th Annual Teaching Learning Forum, 30-31 January 2008. Perth: Curtin University of Technology. http://otl.curtin.edu.au/tlf/tlf2008/refereed/abel.html

Copyright 2008 Gillian Abel. The author assigns to the TL Forum and not for profit educational institutions a non-exclusive licence to reproduce this article for personal use or for institutional teaching and learning purposes, in any format (including website mirrors), provided that the article is used and cited in accordance with the usual academic conventions.
