Teaching and Learning Forum 2000

How a centralised testing facility functions in a large university

Leith Sly
Centre for Educational Advancement
Curtin University of Technology
    Curtin University operates a centralised Computer Managed Learning (CML) Laboratory running a mainframe based testing system. The system was first implemented in the late 1980s, with Nursing and Human Biology the first two subject areas to pioneer the application of CML. It has continued to expand: in the first half of 1999, 8000 students sat 19,000 tests. This demonstration will be in two parts. Part One will describe the key components of the testing system as well as the reports and feedback that are available to both lecturers and students. It will draw special attention to research indicating the value of formative assessment. While test questions, students' results and all analyses are stored on the mainframe computers, students sit a paper version of their test under supervised conditions before entering their answers into the computer for final marking and feedback. Part Two will allow participants to experience the workings of the CML Laboratory for themselves. They will sit a short "fun" test under conditions that replicate those used by students during the semester.


Introduction

Curtin University of Technology runs a centralised testing laboratory using a mainframe based computer managed learning (CML) testing system. The system, utilised predominantly by first year classes, generated 19,000 tests in semester 1, 1999. CML systems are software packages with several common functions, including generating tests from banks of questions, marking the tests generated, analysing the results and keeping records of students' marks and progress. The tests generated using a CML system typically provide feedback relating both to student performance and the effectiveness of individual questions in the bank, so the information can be used for either formative or summative purposes.

Session outline

This workshop is in two parts. Part One will describe the key components of the testing system as well as the reports and feedback that are available to both lecturers and students. It will draw attention to research indicating the value of formative assessment. Part Two will allow participants to sit a test in the CML Lab under conditions similar to those experienced by students during the semester.

Part One

Key CML system components

The key components of the CML testing system in use at Curtin University are the testbanks of questions, the coursemaps from which tests are generated, and the reporting functions. The main reports identified in this session are those that give feedback on question performance: the "error summary" that students receive on test completion, and the testbank analysis provided to the lecturer.

The testbank

The testbank is divided into modules, which are subdivided into objectives. Each module must have at least one objective and can have up to nine. Questions are stored at the objective level. While the system handles multiple choice, true/false, short answer, calculation and assignment questions, most lecturers select multiple choice. Although advantages may exist for using questions which require written answers, i.e. constructed response items (Birenbaum & Tatsuoka, 1987), increasing student numbers and lecturer workloads have encouraged the use of multiple choice questions, which are marked by the system.
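
To make this structure concrete, the following is a minimal sketch in Python, purely illustrative: the actual system is mainframe based, and all class and field names here are assumptions, not the system's own.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Question:
        number: str           # unique ID; part of it encodes module and objective
        qtype: str            # "multiple choice", "true/false", "short answer", ...
        text: str
        options: List[str]    # empty for non multiple choice questions
        answer: str
        difficulty: int = 1   # assumed lecturer-assigned difficulty coding

    @dataclass
    class Objective:
        label: str
        questions: List[Question] = field(default_factory=list)

    @dataclass
    class Module:
        name: str
        objectives: List[Objective] = field(default_factory=list)  # one to nine

    @dataclass
    class Testbank:
        modules: List[Module] = field(default_factory=list)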

The coursemap

The CML system has a template, or coursemap, where the parameters used to generate any test are stored. A range of options is available depending on the specific parameters in the coursemap. A test may be generated using the same questions for all students, or set to select questions randomly from specific parts of the question bank. Individual questions can also be coded on a variable: for example, a test could be set to draw a specified number of questions from selected objectives but to take only those questions coded at a particular difficulty level. A test can also specify that certain questions are mandatory. The coursemap contains parameters that allow the designation of a category for a test; typically tests are coded as practice or assessed. Students can access both practice tests and assessed tests, but only the assessed tests contribute to the overall grade.
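
Continuing the sketch above, a hypothetical generate_test function illustrates how these coursemap parameters (mandatory questions, per-objective draws, an optional difficulty filter, a test category) might drive test generation; none of these names come from the actual system.

    import random

    def generate_test(coursemap, testbank):
        """Illustrative only: draw a test according to coursemap parameters.

        `coursemap` is assumed to hold a list of mandatory question numbers,
        a category ("practice" or "assessed") and, per selected objective,
        the number of questions to draw plus an optional difficulty code.
        """
        lookup = {q.number: q for m in testbank.modules
                  for o in m.objectives for q in o.questions}
        test = [lookup[n] for n in coursemap.mandatory]    # fixed questions first
        for rule in coursemap.rules:                       # one rule per objective
            pool = [q for q in rule.objective.questions
                    if q not in test
                    and (rule.difficulty is None or q.difficulty == rule.difficulty)]
            test.extend(random.sample(pool, min(rule.n_questions, len(pool))))
        return test    # coursemap.category decides whether marks count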

Reports

Feedback is available to both students and the lecturer. Students receive an "error summary" on completion of their test. In most cases this report gives students the correct answer to any question they have answered incorrectly. However, some lecturers do not allow students to see the correct answers, in which case the error summary merely identifies which questions were answered incorrectly.
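
As a sketch of this behaviour, still within the hypothetical model above, the error summary might be built as follows; the `show_answers` flag is an assumption standing in for the lecturer's choice.

    def error_summary(test, answers, show_answers=True):
        """Illustrative: feedback on completion, listing incorrect questions."""
        summary = []
        for q in test:
            if answers.get(q.number) != q.answer:
                entry = {"question": q.number}
                if show_answers:          # some lecturers withhold the answers
                    entry["correct_answer"] = q.answer
                summary.append(entry)
        return summary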

Because CML systems retain students' answers to questions, they are able to produce a report of the distribution of question responses and students' marks. This feedback can alert lecturers to problem areas in terms of both question quality and student performance. Each question in the testbank has a unique number, part of which identifies the module and objective in which the question is located. The report generated on question responses identifies the correct answer, the number of times the question was issued, and how many times students chose each option as the correct response. Lecturers can thus be made aware of common misconceptions held by students and address these before summative assessment. A further useful report is the "grades report", which gives a total score for each student on any or all tests chosen by the lecturer for investigation.
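
A question response report of this kind could be computed along the following lines. This is a sketch under the assumption that the system retains (student, question number, chosen option) records; it is not a description of the real report code.

    from collections import Counter

    def response_distribution(responses, question):
        """Illustrative: tally how often each option was chosen for a question."""
        counts = Counter(choice for _, qnum, choice in responses
                         if qnum == question.number)
        return {
            "question": question.number,
            "correct_answer": question.answer,
            "times_issued": sum(counts.values()),
            "option_counts": dict(counts),   # frequently chosen wrong options
        }                                    # flag misconceptions or bad questions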

Formative assessment

While the meaning of formative assessment is not strictly defined, feedback is consistently part of any definition. Black and Wiliam (1998) and Sadler (1998) specifically refer to the role of feedback in modifying or improving the learning process, while Rolfe and McPherson (1995) see it as allowing students to take responsibility for their own learning. In all cases, however, the aim of formative assessment is to improve a process at a stage where change can still be made, rather than to appraise a finished product.

Features which make CML a formative assessment tool

Practice tests, which do not contribute marks to final assessment, are one use of the CML system as a formative tool. These practice tests can be generated using the same parameters and with questions drawn from the same test banks as those used for the summative assessment.

The CML system also has the option to give students immediate feedback about their performance, which is the basis of its use for formative assessment. When the practice test and feedback facilities of CML are used, students can test their knowledge of a topic and be shown where they need to improve. Students receive feedback at the completion of CML tests and are encouraged by staff in the CML Laboratory to rework their test paper with the error summary before they leave the Laboratory. Some lecturers make a textbook or notes available for student reference. All students are able to have their test paper sent to their lecturer if they wish to get further explanation of any question.

Use of CML at Curtin

Lecturers use the CML system in a variety of ways. Some use it to encourage a self-directed learning approach, while others prescribe mandatory time frames for each test. Most use the CML tests as one component of the total unit assessment, with marks typically contributing about 20% towards the final unit mark. The number of assessed tests generated by the CML system for a particular unit varies between one and five. While some lecturers allow an optional practice test before the first assessed test, very few make this compulsory and most lecturers do not implement a practice test at all.

Research indicating the value of formative assessment

All six pairs of practice and accompanying assessed tests investigated in 1999 showed that students who completed a practice test scored higher marks on the subsequent assessed test. This effect was seen for students from a number of different subject disciplines. Students improved their own performance from the practice test to the assessed test, and in all except one study those who sat the practice test also performed significantly better on the assessed test than the non-practice group.

Previous investigations have found no link between ability, as measured by university entry mark, final exam mark in the current unit or exam mark in the prerequisite unit, and a student's choice to do an optional practice test. On average, 55% of students choose to sit an optional practice test. However, students who had used CML in previous units had a lower participation rate in optional practice tests, which creates something of a quandary. If the practice test does enhance performance, as these studies suggest, but prior CML exposure reduces participation in the practice test, then students may be disadvantaged because they miss the opportunity to obtain formative feedback on their performance. If either lecturers or students believe that the benefit of the practice test lies purely in familiarity with the CML system, with no regard for the value of formative feedback, then this situation may be detrimental to students' later performance.

Discussion

The practice test may be increasing later performance by alerting students to the content of the test, but it may also be that students are using the external feedback they received to generate their own internal feedback. This could allow them to modify their learning strategy or their approach to the subject matter. The non-practice group had no prior exposure to either the type of questions or the question content, and so had no opportunity to receive feedback. These findings suggest that the CML system, when used as a formative assessment tool, can enhance students' performance on later assessments. It appears to have this beneficial effect irrespective of student ability and subject discipline.

Part Two

The second part of this workshop is a short hands-on session where participants will be given an ID and instructions to use the CML system. They will sit a "fun" test under conditions similar to those encountered by students during the semester. This practical session will allow participants to directly experience the testing situation and the related benefits of immediate feedback.

Test procedure

Participants will follow procedures similar to those used by students during the semester. They will come to the CML Lab, be marked off the booking list and proceed to a terminal. Next, they select the appropriate subdirectory for their particular subject and identify themselves to the system using both their identification number and password (which will be supplied), and their test is randomly generated. A paper copy of the test is printed and completed under supervision in the CML Lab before answers are entered at the terminal. Participants are again required to identify themselves with both ID and password, and are then prompted for answers to their specific test. Before answers are finally marked, participants are able to review and change any answer; no feedback is given at this stage. This review and change process can be repeated as many times as required.
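
The answer entry and review cycle can be pictured as a simple loop. This console sketch is purely illustrative and in no way reproduces the mainframe interface.

    def enter_answers(test):
        """Illustrative: collect answers, then allow review until confirmed."""
        answers = {q.number: input(f"Answer for {q.number}: ") for q in test}
        # No feedback yet: the loop only lets participants review and change.
        while input("Change an answer? (y/n): ").strip().lower() == "y":
            qnum = input("Question number: ")
            if qnum in answers:
                answers[qnum] = input(f"New answer for {qnum}: ")
        return answers    # marking and the error summary follow confirmation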

When participants indicate they are satisfied with their answers, the test is marked and immediate feedback is given on incorrect responses. Participants are encouraged to print this feedback, which consists of the correct response to any question answered incorrectly. This "error summary" is used in conjunction with the printed test paper to examine errors. Participants may take their error summary from the Lab, but test papers are retained so as not to compromise the testbank.

References

Birenbaum, M., & Tatsuoka, K. K. (1987). Open-ended versus multiple-choice response formats: It does make a difference for diagnostic purposes. Applied Psychological Measurement, 11, 385-395.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7-74.

Rolfe, I., & McPherson, J. (1995). Formative assessment: How am I doing? The Lancet, 345(8953), 837-839.

Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education, 5(1), 77-83.

Please cite as: Sly, L. (2000). How a centralised testing facility functions in a large university. In A. Herrmann and M. M. Kulski (Eds.), Flexible Futures in Tertiary Teaching. Proceedings of the 9th Annual Teaching Learning Forum, 2-4 February 2000. Perth: Curtin University of Technology. http://lsn.curtin.edu.au/tlf/tlf2000/sly.html

