Teaching and Learning Forum 99

Assessment in the context of self directed learning

Blair McLeish and Jacqueline Shaw
School of Design
Curtin University of Technology
Peer assessment is a central feedback strategy for enabling students to become critical and reflective about their own and others' production of design work. Many students are reluctant to give themselves or anyone else a fail mark, and thus to indicate a need to strengthen areas of weakness. There is also the social dynamic of group assessment being manipulated by more assertive members of the group.

When observing these activities we reasoned that these problems could be overcome by acknowledging that assessment had two main objectives, and that these two objectives were being confused. A distinction had to be made between assessment as a feedback instrument and assessment as a judgement about the quality of students' work. Students could also be encouraged to judge work without compromise if they could examine the work and determine the grade anonymously.

This proceedings paper documents the strategy put in place to deal both with feedback on the design process and with assessment as a judgement of work. We will present our findings at the Forum and look forward to feedback that may help us refine this system.


In 1995, the School of Design embarked on a new assessment system (Shaw & Armytage, 1995). The system was a marked improvement on previous systems and was a collaboration between staff and students. It also maintained assessment consistency across all the disciplines and units within the School and gave students a written document to which they could refer in order to understand how grades were awarded.

Reason for change

The implementation of this system over the past three years has highlighted some fundamental problems. These are identified as follows.

The system made no distinction between the assessment of design work and the evaluation of learning. By assessment of work, we mean a process that uses measurement as a means of determining student levels of achievement. This includes both quantitative judgements (craft skills, level of execution, use of fundamental principles) and qualitative judgements (is the idea clever or innovative, does the solution have 'magic'?). By evaluation of learning, we mean a process that determines whether the student has an understanding of what has been taught and to what extent that understanding is being applied. For example, the assessment of a practical design project involved the allocation of a grade based on design proficiency criteria rather than learning criteria, although we may have been able to evaluate learning through a process of progressive assessment. It was presumed that by assessing design work we had been assessing how much a student had understood. Experience tells us this has not always been the case: some students can achieve a proficient result without fully understanding the principles involved.

Final feedback, which facilitates learning, has until now been intrinsically linked to the assessment process. As a result, students have appeared more interested in a reward system than in analysing their own progress or evaluating their performance.

It was observed that different units have different objectives and that the employment of one methodological approach to assessment cannot address these differences.

The adapted assessment system

In July 1998, at an in-house Teaching and Learning Workshop, some of the problems and possible solutions were discussed. It was agreed that we would trial an adaptation of the system that uses the six bandings (Fail, Pass, Credit, Distinction, High Distinction and High Distinction Plus), and criteria based on the School's own core abilities (Shaw & Armytage, 1995). In order to address some of the above problems we decided to use a specific methodological approach appropriate for each of the units taught and to also separate assessment from feedback. This paper covers only one of the methodological approaches trialed. Members of staff who teach units whose objectives differ from ours agreed to test and trial their own adaptations to the system.

Description of project based assessment with a common brief (Design Specialism & Design Minor)

Following is a description of the method trialed:

Common Brief: This is where students work individually on the same brief and the end results can be compared. Where there is a common brief, peer group assessment is used. This has saved time in the assessment process, leaving more time for feedback.

Assessment Procedure: Work is displayed and identified only by a number.

Students are supplied with several sheets of coloured sticky dots, each colour corresponding to a different marking grade, and a numerical list relating to each of the pieces of work.

The lecturer acts as arbiter and evaluates the standard of the work presented, establishing the distribution of the top end of the grade bandings, for example High Distinction Plus, High Distinction and Distinction. If the overall standard of work presented is deemed below High Distinction Plus, then the top grade will be a High Distinction or lower. This determines the number of each of the coloured dots each student can use in the assessment procedure.

The criteria, previously agreed to by the lecturer and students, are displayed and should be considered in the assessment procedure. These criteria are not weighted but are used as a guide for students.

Without conferring, the students individually and anonymously assess the work, choosing which pieces (if any) they believe deserve the assigned top-end grade bandings. The remaining grades of Credit, Pass and Fail have no restrictions, allowing the students to determine the lower end of the bell curve. Only the final piece is assessed at this point; i.e., idea generation and development are not included in the assessment procedure but are instead addressed in the feedback.

Assessment sheets are collected and checked by the lecturer, and from these sheets an average grade is calculated.
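The paper does not specify how the peer-assigned grade bands are converted into an average, so the following is only a hypothetical sketch: each band is mapped to a numeric index, the peers' marks for a piece are averaged, and the mean is rounded back to the nearest band. The numeric scale is an assumption for illustration.

```python
# Hypothetical sketch of averaging peer-assigned grade bands.
# The index-based numeric scale is an assumption; the paper does
# not state how the average grade is computed from the sheets.

BANDS = ["Fail", "Pass", "Credit", "Distinction",
         "High Distinction", "High Distinction Plus"]

def average_grade(peer_bands):
    """Convert each peer's band to its index, average the indices,
    and map the rounded mean back to a band name."""
    scores = [BANDS.index(b) for b in peer_bands]
    mean = sum(scores) / len(scores)
    return BANDS[round(mean)]

# Example: three peers award a Credit, one a Distinction.
print(average_grade(["Credit", "Credit", "Distinction", "Credit"]))  # Credit
```

Rounding to the nearest band keeps the final result inside the agreed six-band scheme rather than producing in-between grades.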

Development and final pieces will be collected for the purposes of feedback.

On receipt of feedback, students are asked to fill in a proforma designed to evaluate learning efficiency, which they must return to the lecturer in order to obtain their final grade.

Benefits of project based assessment with a common brief

From industry experience (Blair McLeish, six years as Art Director in advertising; Jacqueline Shaw, eight years as Creative Director in graphic design), we understand realistically how clients and our peers judge a design piece. We have come to realise that in industry, as in any competitive environment, when it comes to practical based projects as opposed to exercises or critical writing, there is usually only one winner. Whether winning means an award for the 'best' design or art direction, or simply winning the pitch (job), competitiveness plays an important part in design and its associated industries.

By limiting the distribution of Distinction, High Distinction and High Distinction Plus (and so controlling the upper end of the bell curve), students have to strive harder and be more competitive to 'win' these limited marks. In some cases, peer group assessment and averaged marks can distribute the highest mark to more than one piece. If this occurs, the lecturer decides whether the piece with the majority of these marks is the winner or whether there is no clear winner. This does happen in reality and is not deemed a flaw in the system. Controlling the upper end means that the majority of marks fall into the Credit banding, with some in the Pass and a few in the Fail banding.

The use of an anonymous assessment system eliminates peer group pressure from the more assertive students. Because the work is identified only by number, the personalities of the students are removed from the equation. This has empowered students to fail work that does not meet requirements.

The averaging of marks in a particular unit over a number of briefs or components has been introduced in an attempt to persuade students to distribute their efforts equally.

The separation of feedback from assessment makes it easier to concentrate on individual students who are struggling with their understanding of the material being delivered. Lecturers can now put more emphasis and energy into feedback and this in turn promotes student interest and involvement. A student who has struggled to achieve an adequate result is now rewarded for their efforts in the feedback.

Asking students to fill in a learning evaluation proforma in order to receive their final mark encourages them to be more reflective in their remarks and enables us to identify any individual problems in their learning.

Description of project based assessment with an individual brief (Design Specialism & Design Minor)

Following is a description of the method trialed:

Individual Brief: This is where students work individually on self generated briefs, which are sanctioned by the lecturer, or work individually on briefs chosen from a range provided by the lecturer. Self and lecturer assessment are used in the case of individual briefs.

Assessment Procedure: Student and lecturer agree on assessment criteria appropriate to their individual projects prior to final submission and fill out an assessment form that is then duplicated.

On submission the student completes their copy of the assessment sheet which is collected by a student representative and sealed in an envelope until the lecturer has completed their assessment.

The lecturer assesses the work separately and the average of the two assessments becomes the final grade for the project.
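For the individual brief, the two assessments carry equal weight. As a hypothetical sketch only (the paper does not give a numeric scale), the combination could work by mapping each band to an index, taking the equal-weight mean, and rounding back to a band:

```python
# Hypothetical sketch of combining the student's self assessment
# with the lecturer's assessment at equal weight. The index-based
# scale is an assumption, not taken from the paper.

BANDS = ["Fail", "Pass", "Credit", "Distinction",
         "High Distinction", "High Distinction Plus"]

def combined_grade(student_band, lecturer_band):
    """Equal-weight average of the two independent assessments,
    rounded to the nearer band."""
    mean = (BANDS.index(student_band) + BANDS.index(lecturer_band)) / 2
    return BANDS[round(mean)]

# Example: the student rates the work a Credit, the lecturer a
# High Distinction; the averaged grade falls at Distinction.
print(combined_grade("Credit", "High Distinction"))
```

Sealing the student's sheet until the lecturer has finished (as described above) ensures the two inputs to this average really are independent.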

The work is returned to the student with feedback in either written or audio format, together with a proforma designed to evaluate learning efficiency.

The student is required to reflect on the feedback and the project and to fill out the learning proforma sheet. The proforma is then submitted to their lecturer in order to obtain their final grade.

Benefits of project based assessment with an individual brief

The advantage of individual, self directed briefs is that they allow students to choose areas of personal interest within a subject and to learn to criticise their own work objectively through self assessment. This tends to shift the locus of motivation and helps to motivate the student intrinsically. With the freedom to choose their own project comes the responsibility of identifying personal learning objectives and outcomes. Combining the projects with self assessment forces the student to address their own objectives and outcomes more reflectively.

It also allows students to choose projects that may be more culturally relevant. In Design Specialism (Advertising), overseas students are encouraged to address social issues, products, services and markets that are culturally relevant to them. For example, a student for whom English is a second language is permitted to submit practical work in his or her own language. In such cases, the student is required to provide English translations and a written rationale that outlines the cultural relevance.

The results

We have trialed this adapted system over one semester only, so it is too early to draw any final conclusions. The results will be presented at the Forum after we have evaluated the findings and understood their relevance. We look forward to receiving comments and suggestions at the Forum that will help in establishing a design assessment system seen as fair and just by both students and staff.


References

Cole, P. and Chan, L. (1987). Feedback and correctives. In Teaching Principles and Practice. Prentice Hall, Australia.

Cole, P. and Chan, L. (1987). Assessment and evaluation. In Teaching Principles and Practice. Prentice Hall, Australia.

Latham, A. (1997). Learning through feedback. Educational Leadership, 54(8), 86.

Loacker, G., Cromwell, L. and O'Brien, K. (1986). Assessment in higher education: To serve the learner. In Assessment in American Higher Education. US Department of Education, Washington.

Schwartz, P. and Webb, G. (1993). Case Studies on Teaching in Higher Education. Kogan Page, London.

Shaw, J. and Green-Armytage, P. (1995). Feedback sheets as a form of assessment to support learning. In Summers, L. (Ed), A Focus on Learning, 234-238. Proceedings of the 4th Annual Teaching Learning Forum, Edith Cowan University, February 1995. Perth: Edith Cowan University. http://cleo.murdoch.edu.au/asu/pubs/tlf/tlf95/shaw234.html

Please cite as: McLeish, B. and Shaw, J. (1999). Assessment in the context of self directed learning. In K. Martin, N. Stanley and N. Davison (Eds), Teaching in the Disciplines/ Learning in Context, 267-271. Proceedings of the 8th Annual Teaching Learning Forum, The University of Western Australia, February 1999. Perth: UWA. http://lsn.curtin.edu.au/tlf/tlf1999/mcleish.html
