Assessment

Assessment Series No.6

A Briefing on Assessment of Portfolios


David Baume

LTSN Generic Centre A Briefing on Assessment of Portfolios November 2001

David Baume taught in higher education for 20 years. Increasingly fascinated by the ways in which students learn and staff both teach and learn to teach, David became a staff and educational developer. He is an ILT Accreditor, and co-ordinator of the accreditation work of the Staff and Educational Development Association (SEDA). He is researching the assessment of higher education teachers. David chaired SEDA from 1990 to 1995, and is a founding editor of the International Journal for Academic Development (IJAD). David is Director of Teaching Development of the Centre for Higher Education Practice at the Open University until February 2002.


Contents

Introduction: assessment and portfolios
Portfolios
Case study: the assessment of portfolios on a course in teaching in higher education
Features of a good portfolio assessment scheme
Issues in portfolio assessment
Improving portfolio assessment
Conclusion
Acknowledgements
References



Generic Centre Guides and Briefings

Welcome to the Learning and Teaching Support Network Generic Centre's series of Assessment Guides and Briefings. They aim to provide a series of overviews of important issues and practices in the field of assessment for the higher education community. The Assessment Guides are intended for colleagues with particular roles and for students, as their titles suggest. The Briefings are primarily intended for lecturers and other staff involved in supporting learning. The Assessment Series is a snapshot of a field in which development is likely to be rapid, and will be supplemented by specific case studies produced by the LTSN Subject Centres.

The series was developed by Brenda Smith and Richard Blackwell of the LTSN Generic Centre with the support of Professor Mantz Yorke. Experts in the field were commissioned for each title to ensure that the series would be authoritative. Authors were invited to approach the issue in their own way, and no attempt was made to impose a uniform template. The series editors are grateful to colleagues in LTSN Subject Centres and other senior colleagues who refereed the series, and of course to the authors for enabling its publication.

We hope that you will enjoy the Assessment Series and find it interesting and thought-provoking. We welcome your feedback and any suggestions you may have for future work in the area of assessment.

Professor Brenda Smith, Head, LTSN Generic Centre
Richard Blackwell, Senior Adviser, LTSN Generic Centre
Professor Mantz Yorke, Liverpool John Moores University
November 2001


Introduction: assessment and portfolios

Summary

This briefing considers some current issues in assessment in higher education and suggests how the use of portfolios can address some of these concerns. It raises issues about the use and assessment of portfolios and suggests some desirable features of a portfolio assessment scheme. A case study is presented of the assessment of portfolios on a course in teaching in higher education. Consideration is also given to issues that need to be addressed in developing and using portfolios. The final part shows how the investigation of portfolio assessment processes and results can be used to improve assessment.

Some current issues in assessment

Some issues currently affecting programme design and assessment:

- More and more courses are concerned with the development and assessment of knowledge and abilities for the real world as well as for the academic world. This move is partly influenced by the agenda to widen participation. Employers want to see what applicants can do as well as what they know.
- There is an increasing realisation that some conventional forms of assessment often test only a narrow range of knowledge and abilities.
- There is growing concern that a modular course structure may restrict the extent to which students integrate their learning across subjects (Rust, 2000).
- New teaching approaches such as problem-based learning may require more integrated approaches to assessment.

How can portfolios address these issues and needs?

- They can support the development, demonstration and valid assessment of a wide range of personal, professional and academic capabilities, both inside and outside a programme of study.
- They can provide evidence of work done and learning achieved.
- They can show reflection on and analysis of evidence and learning.
- They can support the integration of learning from different parts of the course and beyond.

For reasons including these, there is growing interest in the use of portfolios, whether paper-based or multimedia/on-line, across a wider range of subjects in courses of higher education.

(Portfolios have of course been an important element of education for the visual arts for decades.) Although this guide concentrates on student portfolios, many of the same issues apply in the use of portfolios in staff development.


Questions and concerns in the use of portfolios

If you are contemplating the introduction of portfolios, you may have questions and concerns. For example:

- What kinds of student work should be included in a portfolio?
- What exactly does 'portfolio' mean in a course of study?
- How closely should we specify the form, structure, size and content of a portfolio?
- What help do students need to assemble portfolios?
- Don't portfolios take a long time to assess?

Those already using portfolios may well have further questions:

- Can portfolios be assessed reliably?
- How can portfolio assessment be made more valid?
- How can we be sure that the portfolio represents the student's own work?

This guide will address these and related questions. The present state of the art and of knowledge does not allow all these questions to be answered as fully as I would wish. However, some answers are already reasonably clear; some issues are becoming clearer; and in a number of areas the first necessary steps towards answers can be identified. I hope very much that you will bring your own experiences and answers to the debate around portfolio assessment and thus help speed the development of understanding and practice in the uses of portfolios in higher education.

Some of the issues referred to here (concerning, for example, general principles of assessment, course design and student support) are dealt with in more detail in other guides in this series.

The assessment of a portfolio is inextricably related to its purpose, content and structure. This guide therefore continues with a brief consideration of the nature and purpose of portfolios. There follows a case study on the assessment of portfolios. This is based on work by Mantz Yorke and me on portfolio assessment on an Open University course on teaching in higher education. This case study is used to identify some of the major issues in portfolio assessment. These issues are considered in subsequent sections.

Overview of assessing portfolios

Here is an overview, shorn of discussion or qualification, of the guidance that follows on the assessment of portfolios:

- Make the portfolio a vehicle for learning as well as assessment.
- Make the portfolio structure simple.
- Make the portfolio assessment scheme simple.
- Describe the portfolio structure and the assessment scheme and requirements with the utmost clarity, to students and to assessors.
- Brief and train students in portfolio construction, and assessors in portfolio assessment.
- Review the validity and the reliability of your assessment after each round of assessments.
- Do not be downcast if the validity or reliability of your portfolio assessment turns out to be lower than you had hoped; rather, use your data on validity and reliability to make necessary changes to the teaching and assessment of your course.
- Plan to improve portfolio assessment on your course over a period of years.

Portfolios can be a rich addition to the array of learning and assessment methods used in higher education. Like any learning and assessment methods, they bring distinctive challenges and opportunities. This guide suggests how some of these opportunities can be achieved and challenges met.


Portfolios

What are portfolios for?

Filing

A student can collect evidence of their work and learning, and file this into a portfolio. This evidence may come from one course, or from more than one course within a programme. Evidence may also be included from work and life outside the course or programme. Even at this stage, students can benefit from guidance; as one student told me wryly, "I first filed all my evidence into one large box." Subsequent sorting was a major task for the student.

Learning

Whilst they are collecting the evidence, or after they have collected it, the student can analyse and review the evidence. Through this analysis, the student can make further sense of the work they have done, analysing and interrogating it. They can also argue what the evidence shows about what they have learned; what capabilities they have developed; perhaps how far they have moved towards attaining the learning outcomes of the course. Before the student presents a portfolio for assessment, they can offer sections of the portfolio for formative assessment, for feedback, from tutor or peers. In a student's words: "I was nervous about showing anyone else my portfolio, but once I got the courage it was really useful to see what others had done, and ask them why they had done it that way, and get new ideas." The student can use this feedback, and their own reflection and analysis, to identify gaps in their evidence and in their learning. They can plan how they will fill these gaps, if they wish or are advised to do so.

Assessment

The student can present a completed portfolio for assessment. The assessed portfolio may be the complete portfolio they have assembled, or a chosen sub-set of this.

Employment

Students can use their portfolio, at interview for further study or employment, to bring to life their qualifications, experience and learning.

What is a portfolio?

An essential feature of a portfolio is surely that it comprises a collection of items rather than a single piece of work. In this respect it can be distinguished from a dissertation or project report (although such items may be included in a portfolio). A portfolio may have the following elements and features:

Evidence

Each discipline and each field of work has its own characteristic forms of working records and products. Stereotypically, engineers produce analyses and designs; social scientists, reports and essays; scientists, lab reports. These characteristic types define appropriate evidence for a portfolio of work in that discipline or profession. Evidence for a synoptic portfolio that reviews achievement and learning throughout a student's programme of study will draw on work done on courses within the programme. Little or none of the evidence in the portfolio may have been produced especially for the portfolio. Evidence may have been produced, for example, during a work placement, during fieldwork or observation, or during another part of the programme. When a portfolio has been produced to show some form of academic or professional capability, much of the evidence may have been produced naturally in the course of that academic or professional work. The main requirement is that the evidence should be appropriate to the field of study or activity.

Labelling of evidence

Minimally, for each piece of evidence in a portfolio, we probably need to know:

- Its author(s); for pieces of work produced collaboratively, the portfolio creator should specify their role in the production
- Its date of production
- Its title or name
- If other than print, the technology (hardware and software) required to access it
- What it is, where this may not be immediately clear to the assessor

It may also be necessary to label within a piece of evidence, to bookmark particular sections or elements to which the claim or critical reflection will refer. Beyond this, it may be necessary to specify further information requirements about each piece of evidence. If the portfolio uses multimedia or hypertext, then more sophisticated labelling and metadata-tagging become necessary.
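The minimal labelling requirements listed above amount to a small metadata record per piece of evidence. As an illustration only (the field names and record shape are my own, not part of any portfolio scheme described here), they might be captured like this:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EvidenceLabel:
    """Minimal metadata for one piece of portfolio evidence (illustrative)."""
    authors: list                       # author(s) of the piece of evidence
    date_of_production: str             # e.g. "2001-05"
    title: str                          # its title or name
    role_in_production: Optional[str] = None   # needed when work was collaborative
    media_requirements: Optional[str] = None   # hardware/software needed, if not print
    description: Optional[str] = None   # what it is, if not clear to the assessor
    bookmarks: dict = field(default_factory=dict)  # label -> location, for the claim to cite

label = EvidenceLabel(
    authors=["A. Student", "B. Peer"],
    date_of_production="2001-05",
    title="Group report on a teaching observation",
    role_in_production="Wrote the analysis section",
)
```

A record like this also makes the later cross-referencing between claim and evidence straightforward: the claim can cite a title and a bookmark rather than "see somewhere in section 3".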

Structuring and signposting the portfolio

A portfolio can be a daunting object. Clear and explicit structure and signposting are vital, for the creator as well as for the assessor. At minimum, a portfolio needs a contents page. Beyond that, it can be structured in many ways, as considered below in 'Specifying and structuring portfolios'. If the portfolio takes the form of a web-site or CD-ROM, then a richer variety of ways of accessing the material may be possible. Ready links can be made between evidence and critical review, and from specific bookmarked points within these. Materials can be grouped under various headings for comparison and analysis. Hyperlinks can be made to relevant outside sources. This rich range of possibilities for a multimedia hypertext portfolio makes clear indexing and signposting even more important than they are for a paper-based portfolio.

Critical reflection

The critical reflection or commentary, very probably written especially for the portfolio, serves several functions. Item by item, it contextualises the evidence, saying how and where and why it was produced, if this is not already evident from within the evidence or from its labelling. The critical reflection makes sense of the evidence, for the student assembling the portfolio and for the assessor. It converts the tacit knowledge in the evidence into explicit knowledge related to the student and their learning (Nonaka, 1994). In their critical reflection, the student also stands back from the disparate details of the evidence. A typical student comment is: "I thought the work I had put into my evidence bank was pretty good, when I wrote it, most of it anyway. It was useful and a bit humbling to review it months later, and see what I still thought was good about it and what I now see could have been better. I could really see, and write about, how my understanding had developed during the course, and especially the work placement." The student draws the evidence together into a coherent tale of learning, of sense made, of new ideas developed, tested and sometimes discarded. They explain what the evidence shows about what they have learned, about their current state of capability and understanding in the subject. They may claim that the evidence shows that they have achieved the intended learning outcomes of the course. The critical reflection can thus be a piece of self-assessment, although summative self-assessment is not essential to the use of portfolios. The relationship between evidence and critical reflection should be intimate. The mapping between critical reflection and evidence may be one to one, one to many, many to one or many to many. A multimedia, particularly a hypertext, presentation can make reference back and forth between critical reflection and evidence particularly easy.
Assessment will generally start from the critical reflection rather than from the evidence. The assessor checks at each point whether the supporting evidence shows what the student claims that it shows.

Specifying and structuring portfolios

Portfolios can be structured in very many ways. For example, a portfolio can:

- Be organised round the learning outcomes being demonstrated.
- Be organised around required underpinning knowledge, professional values or other features of the qualification scheme.
- Assemble evidence from several courses, and thus be organised around these courses.
- If it describes work placements, be organised around the headings of the work done on these placements.
- Be organised, day by day or week by week, around a diary.
- Be organised around obvious and natural topic headings.

Who specifies the structure? Assessment will be easier if the assessors specify the structure. Some students, too, on their first portfolio, may prefer to work to a given template. However, even on a first portfolio, some students will prefer to develop their own structure. And students may feel greater ownership of their portfolio if they determine form as well as content. The minimum essential requirements are that the structure is clear and explicit and that links are unambiguous, with the overall aim that the portfolio is readily assessable.

So, finally, what is a portfolio? A portfolio, we can suggest after this discussion, is a structured collection comprising labelled evidence and critical reflection on that evidence. A portfolio is produced as a part of a process of learning. It is presented to show evidence of that learning. It may additionally comprise an explicit claim or demonstration that specified learning outcomes have been achieved.


Case study: the assessment of portfolios on a course in teaching in higher education


This case study is concerned with the assessment of portfolios on a course to train higher education teachers; the students on this course are themselves higher education teachers. More details will be found in Baume and Yorke (2002a, 2002b). The case study itself considers mainly validity and reliability issues, although other issues are raised and considered later. The course has an unusually complex assessment structure. However, this complexity serves to sharpen assessment issues, many of which are present in some form in any course. As you read the case study, test how it relates to a course that you know, and for which you feel that some form of portfolio assessment may be appropriate. The course is at postgraduate level, but the same issues apply for assessment in any discipline and at undergraduate level. Mantz Yorke and I studied the results of the assessment of 53 portfolios on a course in teaching in higher education (IET, 2001) developed in 1997-8 by the Centre for Higher Education Practice at The Open University. On this course, portfolios are presented and assessed against a framework of seven learning outcomes, underpinned by six professional values: the accreditation framework developed by the Staff and Educational Development Association (SEDA, 2001). The assessment is valid in terms of the outcomes of the course in that, for assessment, course participants strive explicitly to show that they have attained the stated outcomes (and underpinning professional values) of the course. Validity is further enhanced by the requirement for evidence, from the course participant's practice, of outcomes achieved, not just description or discussion. A course participant receives detailed guidance on preparing their portfolio, and feedback from their tutor on draft portfolio sections. The maximum size of the portfolio (the number of words in each claim, the number of sides of evidence in each section) is specified, and assessors are asked not to read beyond these limits.
Portfolios on this course contain evidence. This may include lesson plans, assessed student work, student feedback, or reports of observations of teaching. The evidence is labelled to show its authorship, date and title. The portfolio is clearly structured and signposted, in part to simplify the job of the assessor. In the critical reflection (on this course called a claim), the course participant makes the case that the evidence shows that he or she has achieved each course outcome, and has done so in a way that is underpinned by specified professional values. Assessment is undertaken against a very detailed framework, provided to participants; 75 assessment judgements are recorded for each portfolio. Each of the seven outcomes must be passed for the course as a whole to be passed. An outcome can be passed as long as there is no more than one marginal fail on one element of that outcome. Thorough written briefing and face-to-face training are provided for assessors (who are usually also tutors on the course). Before each round of assessment, each assessor first reads and assesses the same portfolio. Then they work together for a day to share their
judgements, and strive to reach agreement, on the judgement and on the principle informing their judgement. This process is called coordination. Two assessors then independently assess each portfolio. A third assessor, a course team member, resolves any disagreements. A complex assessment scheme, then, with thorough briefing and preparation of the assessors (and, not considered here, also of the course participants). How effective is all this effort? Ninety-two percent of the course outcomes were passed, which makes the course reasonably successful; but only 61% of the candidates achieved an overall agreed pass at their first attempt, which is disappointing for the course and also for the course participants. This latter fact is a consequence of the assessment structure, as discussed below. There was 87% agreement between the two assessors on whether a given candidate had passed a given course outcome, but only 60% agreement on whether a candidate had passed or failed the course. Again, the assessment structure has a big effect here. Let's focus on assessor agreement for a moment. Table 1, taken from Baume and Yorke (2002b), shows the number of discrepancies between the judgements of pairs of assessors. These data on discrepancies guide the course team to prioritise steps to improve the course, starting with the areas where discrepancy is at a maximum (shaded). This is considered further in section 6. What are the implications of this case study, for devising assessment schemes for portfolios, and subsequently for undertaking the assessment? What are the desired features or qualities of a good assessment scheme?

Table 1: Discrepancies are instances of disagreement of more than one grade between the two assessors. The table presents the discrepancy rates for judgements of values underpinning outcomes from the assessment of 53 portfolios: one row for each of the six underpinning values (how students learn; concern for student development; scholarship; colleagueship; equal opportunities; reflection) against each of the seven course outcomes. There are 16 blank cells because not all outcomes are required to be underpinned by all values. Mean discrepancy rates range from 6.2 to 11.2 across the underpinning values and from 6.8 to 10.7 across the course outcomes, with an overall mean of 8.2.
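The course's pass rule in this case study (every one of the seven outcomes must be passed, and an outcome tolerates at most one marginal fail on one of its elements) can be sketched as a small decision procedure. This is my illustration only, not the course's actual recording format; the grade labels, the grouping of element grades by outcome, and the assumption that an outright fail on any element fails the outcome are mine:

```python
# Assumed grade labels for one element of an outcome.
PASS, MARGINAL_FAIL, FAIL = "pass", "marginal fail", "fail"

def outcome_passed(element_grades):
    """An outcome is passed if no element outright fails and at most
    one element is a marginal fail (assumption based on the briefing)."""
    if FAIL in element_grades:
        return False
    return element_grades.count(MARGINAL_FAIL) <= 1

def course_passed(outcomes):
    """The course is passed only if every essential outcome is passed."""
    return all(outcome_passed(grades) for grades in outcomes.values())

portfolio = {
    "outcome 1": [PASS, PASS, MARGINAL_FAIL],           # passes: one marginal fail
    "outcome 2": [PASS, PASS, PASS],                    # passes
    "outcome 3": [PASS, MARGINAL_FAIL, MARGINAL_FAIL],  # fails: two marginal fails
}
```

Here `course_passed(portfolio)` is false even though most individual judgements were passes: a conjunction of essential outcomes gives each outcome a veto, which is how a high pass rate on individual outcomes can coexist with a much lower first-attempt pass rate on the course as a whole.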


Features of a good portfolio assessment scheme

Validity

A good, a valid, assessment scheme tests whether the student has achieved the goals of the course. These goals may be expressed as the intended learning outcomes of the course, or, for example, in terms of knowledge to be learned or professional or academic values to be demonstrated. Assessment in higher education may be less than valid for several reasons. The assessment may involve proxy tasks: most typically, asking the student to describe how they would do some particular task, rather than, as the learning outcome requires, actually doing it. Or, more broadly, the assessment method simply may not allow the learning outcome to be demonstrated, for example because of the practical limits to what can be achieved in a three-hour unseen examination. Portfolios can allow valid assessment. They can allow the collection of evidence of sustained pieces of academic, professional and personal work. The portfolio can be the student's response to what is probably the most valid possible assessment task: "Show that you have attained the learning outcomes of the course." (Baume, 2000)

There is another, more difficult, dimension to validity. Thus far I have considered the validity of the assessment method, and asked how far it tests the attainment of the goals of the course. Behind that, and much harder to access, lies the question of the validity of the goals themselves. How far does attainment of the prescribed course outcomes mean that a person is fit to join the profession or the discipline? I shall return to this later, when I consider the need to restrict the number of required learning outcomes in order to increase the reliability of assessment.

Reliability

Under a good assessment scheme, the assessment criteria are so well specified, and the assessors briefed, trained and in such close agreement over the meaning and application of these criteria, that different assessors agree closely on their assessment judgements on a given piece of work.
That is, the assessment scheme is reliable. Much assessment in higher education is discouragingly unreliable. The case study above shows that, with sustained effort, a good level of reliability of assessment on individual outcomes can be achieved. However, where a course has multiple essential outcomes, reliability almost inevitably falls. This is considered further below.

Fairness

Fairness is a profoundly subjective concept, and one that raises strong passions (and from a very early age!). Validity and reliability contribute to perceived fairness. So, often, does the principle that equal marks should reward equal effort. We cannot accommodate this latter dimension of fairness into our assessment practice, but we can specify maximum portfolio size as a partial and reasonable proxy. Portfolios can be described and perceived as fair in that they allow the student to present their own selection and their own analysis of their own work, undertaken over a period of time and with access to information and other resources, as in the
case study.

Value

A good assessment process surely allows students to produce work that they value, in which they take some professional or even personal pride. A portfolio is more likely to be valued by its producer than a single essay or exam script. Why? Because the portfolio comprises the student's own selection from their work. Because it includes a variety of their work. Because the student has stood back from these items of work, critiqued or reviewed them, and formed and expressed their own view of the value and meaning of their work. Participants in the course described in the case study, particularly those who have successfully completed the course, have said that they value their portfolio much more than, for example, examination scripts they have produced.

Efficiency

A good assessment process makes efficient use of both the student's and the assessor's time and effort. However, the search for efficiency in the use of portfolios for assessment has very different implications for students and for assessors. I shall consider these separately. Producing a portfolio can be a long job for the student: a hundred or more hours for the course described in the case study. However, it is hard to mark the border between work done in preparing the portfolio and work which would have been done anyway in planning classes and giving feedback to students. The student needs to collect or produce the evidence, and then annotate and label and organise it. They then need to review it critically, to discover and explain what the evidence shows about their learning and development. They may also be asked to relate the evidence to course outcomes. Producing a portfolio can be a small or very large part of a student's work. However, the necessary acts of production, selection, critical judgement and reflection are, I believe, profoundly educational and developmental. What of the time of the assessor? Assessing a large portfolio can also be a long job.
It may take longer than would reading a single piece of work of the same length, because of the need to cross-refer between critical reflection and evidence. However, a portfolio can be a major, or even the sole, final assessment vehicle for a student's work on a course, meriting considerable assessment time. This said, what can be done to make the assessment task manageable for the assessor? The first step is to specify, and insist on, upper size limits (you could specify the maximum number of pages of evidence, and the maximum number of words in the critical reflection). In any event, assessing a final portfolio may not take as long as the size of the portfolio may initially suggest. Why? The assessor may well not need to read every word of a portfolio, particularly the evidence. If the critical reflection sections refer to specific pieces of evidence, or to specific paragraphs within the evidence, then the assessor may be able to form a confident judgement without reading all of the evidence. The assessor may simply need to check that the evidence is what the portfolio author claims it to be, and that it proves what the portfolio author claims it proves. And if the portfolio contains work seen and commented on earlier in the course, a detailed second reading may not be necessary. Planning any assessment method means balancing the quality of the assessment judgement that can be made against the time and effort applied to the assessment.
Openness

A good assessment system has no secrets. Students and staff know and understand the course outcomes, the assessment criteria and the assessment process. Additionally, a good course process means that assessment brings few surprises to the students. The feedback they have received on their work during the course, on early drafts of materials for their portfolio, means that they have a clear view of how well they are doing. They have the time, opportunity and support to make any necessary improvement to the work that goes into their final portfolio.

Complete evidence, or a selection?

Should a portfolio contain all the evidence of a student's work, or just a selection? If we feel that the purpose of assessment is to allow the student to show the best work of which they are capable, then we should clearly allow them to enter their selection of the best. If we want a representative account of the full range of their work, then we want to have access to all their work. If we are concerned with professional accreditation, we may want to know that their performance has never dropped below some minimum acceptable standard. Or we may ask for a representative sample of work (representativeness perhaps judged by marks or grades awarded), together with critical commentary showing what has been learned from any poor performances and how that learning has been applied to improvement.

Other features

You may have other views about what makes an assessment process good: in your own institution, in your own subject, for your own course and your own approach to teaching. You may find it useful to note down your own list of qualities of a good assessment process, and review this list with colleagues and students. This process will help you to make the best use of this Guide and of other materials on assessment. I hope that this Guide will enable you to weigh up how well portfolio assessment can have these other qualities that you, your colleagues and your students value.

Conclusion

In suggesting in this section some of the features of a good portfolio assessment scheme, I may have given the impression that I see portfolios as the perfect assessment method. I do not hold this view. Assessment of complex skills and knowledge will remain a complex task. I hope I have done two things in this section: suggested some ways in which portfolio assessment can be good; and suggested some criteria against which all assessment methods should be judged.

Issues in portfolio assessment

Outcomes and reliability

Professional courses to be assessed by portfolio may well have some required learning outcomes. For example, just prior to landing, we should prefer to know that our pilot had passed all elements of flying training, not just take-off, level flight and talking to passengers.

The situation may differ in a course that is more academically oriented. We may feel that a student can properly graduate in the discipline through having shown attainment of a selection from a wide range of outcomes or capabilities. However, whether the course is primarily professional or academic or some combination of the two, when we plan the assessment scheme for a portfolio-based course, the same issue remains. How should we decide which outcomes are essential and which optional?

Within any professional or academic requirements, the arithmetic of assessment suggests that we should minimise the number of essential outcomes in order to maximise the reliability of the assessment. The more outcomes which must be passed for the course to be passed, as I shall show in a moment, the more magnified are any unreliabilities in assessment. And, from the student's perspective, each distinct essential outcome is an additional opportunity to fail. So the decision on what outcomes are essential and what are optional is made by balancing three sets of considerations:

1. Assessment reliability (with this number of outcomes, will our assessment be acceptably reliable?).
2. Professional or academic considerations (is this outcome essential for safe professional performance or for admission to the discipline?).
3. Student considerations (do there need to be this many ways for a student to fail the course?).

Let us consider these in more detail, for the course in the case study. As you read this discussion, consider how it relates to your own course.

The reliability of assessment with multiple essential outcomes

In the course described in the case study above, the pair of assessors agreed whether a particular outcome had been passed on 87% of occasions. Every one of the seven outcomes has to be passed for the course to be passed. How much agreement might we expect on whether a particular candidate had passed the course or not? If there were perfect correlation between a candidate's performance on each of the course outcomes, we would expect a similar degree of agreement on the candidate's overall performance: 87%. If on the other hand their performance on each outcome were essentially independent of their performance on each other outcome, then we would expect the agreement on overall pass-fail performance to be much lower, at only (0.87)^7 = 38 per cent. The conclusion is clear: to maximise the reliability of overall assessment, minimise the number of essential course outcomes. But maximising the reliability of assessment should not be the sole determinant of how many outcomes the course requires to be achieved for a pass. More important than reliability is validity, to which I now return.
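The arithmetic above can be sketched in a few lines. This is an illustrative calculation only: it assumes, as the text does, that the two assessors' per-outcome judgements are statistically independent, so the figure is a worst-case estimate rather than a prediction; the function name is my own.

```python
# Worst-case agreement between two assessors on an overall pass/fail
# decision, assuming agreement on each of n essential outcomes is
# independent and that all n outcomes must be passed.

def overall_agreement(per_outcome_agreement: float, n_outcomes: int) -> float:
    """Lower-bound probability that both assessors reach the same overall decision."""
    return per_outcome_agreement ** n_outcomes

# Case study figures: 87% per-outcome agreement, seven essential outcomes.
print(round(overall_agreement(0.87, 7), 2))  # 0.38, i.e. about 38 per cent

# Fewer essential outcomes means higher worst-case reliability:
print(round(overall_agreement(0.87, 4), 2))  # 0.57
```

As the text argues, the calculation shows why each additional essential outcome multiplies the opportunities for assessors to disagree about the overall result.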

Professional or academic considerations

The course on teaching in higher education has seven required outcomes, derived from the relevant professional accreditation scheme, the SEDA Associate Teacher qualification (SEDA 2001). These are, in summary, the abilities to:

1. Plan lessons;
2. Teach;
3. Assess student work;
4. Monitor and evaluate teaching;
5. Keep appropriate records;
6. Cope with the demands of the job; and
7. (a) Reflect on teaching, (b) analyse development needs and (c) plan continuing professional development.¹

¹ These subdivisions (a, b, c) were not present in the original presentation of the outcome.

Could we safely reduce the number of outcomes whilst retaining the validity of the outcomes as a description of a competent teacher?

These are all clearly desirable abilities for a teacher. Outcomes 1 to 3 are clearly essential. We might feel that outcome 2 requires outcome 1, and therefore that outcome 1 does not need to be specified separately; or we might feel that lesson planning is a vital and distinct skill. We might combine 4 with 7(a) into "Review teaching". We might decide that outcomes 5 and 6 are in different ways manifest in each of the other outcomes, and do not need to be assessed separately. We might decide that our definition of planning continuing professional development includes a requirement to analyse development needs.

I shall not resolve these debates here, but simply illustrate the sorts of debate which need to be undertaken to ensure that the outcomes of the course are the minimum essential number. (As you do this, watch for outcomes including the word "and" - these may indicate the compositing of several outcomes into one rather than a true reduction in the number of outcomes.) When you think you have completed this process, ask yourself and colleagues whether a student who has achieved these outcomes has met the professional or academic requirements that you want the course to develop and assure. Is your proposed set of learning outcomes, in this demanding sense, valid and complete?

Student considerations

Student considerations combine the previous two issues. Students want the course to minimise the number of ways in which they could fail it. They also want to know that passing the course means what it says: that they have met the appropriate professional or academic requirements.

How detailed a judgement?

How do you decide how many levels of judgement to set (if you have a choice)? Will you mark on a percentage scale, a letter grade scale (with or without pluses and minuses), a sixteen-point scale, a nine-point scale? The subject discipline has a great effect on how easy or difficult it is to produce criteria against which fine, or even coarse, gradations of performance can be judged reliably. For example, in the sciences, maths and engineering, it is typically rather less problematic to specify in advance an appropriate sequence of steps in an argument and a correct answer, and thus to use a fine-grained marking scale, than it is in the social sciences or arts. However, even accepting this, establishing grading criteria for work in any discipline can be difficult. There can be several reasons for this difficulty:

There may be insufficient agreement among assessors on what qualities differentiate between performance at one level and at another.

There may be several ways in which performance can be good, excellent, just acceptable, or whatever standard of performance we are trying to define.

Assessors may be able to agree on how to grade a particular piece of work without necessarily being able to articulate the criteria on which they made this judgement. (Whilst solving the problem of assessment, this does not solve the teaching problem, the problem of how to communicate criteria and standards to students other than by examples.)

In many disciplines, then, it may be difficult to define clearly the boundary between any two performance bands. The best use of assessor time is surely to put time and effort into defining the boundary between the two most important categories: pass and fail. This done, the essential qualities of passing and failing work can be articulated to students as they begin to develop their portfolios, and shared among assessors at assessment time. You may additionally decide to develop criteria for outstanding performance. These criteria can be less tightly specified than for pass/fail, but they should still be articulated for those students for whom outstanding performance is a goal. You may feel that the criteria for outstanding performance should include and expand on those for a pass.

Whose outcomes?

The learning outcomes for a course assessed by portfolio need not be specified wholly by the course. There is merit in allowing the students some say in the outcomes against which they will be assessed. This can be done through a process of negotiation and formal agreement of the outcomes towards which a student is working. (These negotiations should take place against an explicit set of meta-criteria for determining what are and are not acceptable outcomes.) This done, the process of working towards and assessment against these outcomes is very similar to that for course-specified outcomes. Some formal process of re-negotiating and re-confirming outcomes is desirable: learning is rarely the linear process of movement towards immovable goals that the use of learning outcomes can sometimes suggest.

Whose work?

A major argument expressed for assessment by examination is that it assures that the work assessed is wholly the work of the student. Don't portfolios allow inappropriate co-operation and thus a loss of certainty of authenticity? They certainly may. The word-processor, the world-wide web and optical character recognition technology - not to mention older technology such as libraries and other people - together mean that any work offered by a student for assessment that was produced except under conventional examination conditions may not be wholly, or at all, the work of the student. This problem can be addressed in several ways:

Students can be explicitly rewarded for making maximum appropriate referenced use in their portfolio of public sources, thus rewarding referencing rather than plagiarism.

The learning outcomes and the assessment of the course can require the application of theoretical or other ideas from the literature to particular topics of which each student has unique knowledge and experience, perhaps gained through a project or work placement.

Even where students have undertaken broadly the same work tasks and evidenced these in their portfolios, the requirement for critical reflection requires the student to find their own voice and make their own sense of the work they have done.

Plagiarism becomes progressively more difficult as the task becomes more particular to the individual student, to their own interests and experiences - another argument for assessment via portfolio.

The briefing of students and assessors

The aim of such briefing is to ensure as far as possible that students' performance reflects their capabilities in attaining the outcomes of the course, and is not reduced by any failure to understand assessment requirements. The case study above described a process for briefing assessors. The aim was to ensure that assessors assess student performance and capability as validly and reliably as possible, and are not distracted or confused by any failure to understand assessment requirements.

Here in a little more detail are some ways to brief students and assessors on the nature of the assessment task:

Tell students and assessors the intended learning outcomes of the course, the things which students will be required to be able to do to complete the course successfully.

Tell them the assessment process to be followed, how the judgements will be made.

Discuss the outcomes with the students and the assessors during the course, not once but several times. Do the same for the assessment criteria.

Show students and assessors portfolios from the course, with the assessment judgements made. This will need either the permission of the students who produced the work or some anonymising. Real examples are a necessary adjunct to more general guidance (Wolf 1995).

Invite students to assess and analyse their own emergent portfolios during the course. (I suggested earlier that the critical reflection components of a portfolio were in part self-assessments by the students of their own work and learning.)

Invite students to assess and give feedback on each other's emergent portfolios. (This will give them some sympathy with the task of the assessors. They will also learn much of value, about the topic of the course and about standards and the making of judgements.)

Give the students your feedback on sections of the portfolios. Invite the students to explore how your, their own and each other's feedback relate, how and why these different sets of feedback agree and disagree.

Doesn't all this give undue weight and time to the assessment of portfolios? I think not. Judgement and discussion about assessment involve consideration of what constitutes good work on the course, and hence in the discipline or profession being studied. This feels to me to be a very appropriate topic for discussion.

The process of judgement

How are portfolios actually assessed? Assessors will generally try to follow the assessment process specified, using the learning outcomes, the assessment criteria and any other guidance provided. However, assessors also carry their own standards. These may add to, amplify or otherwise alter those specified by the course. Two broad approaches to assessment can be identified: an outcomes-based or bottom-up approach, and a holistic or top-down approach. Working bottom-up, assessors make judgements on each element of assessment, then combine these into an overall judgement, using any rules for combining assessment judgements provided by the assessment scheme. Working top-down, assessors form an overall judgement - "2(i)", "clear pass", "C grade".

Some assessors iterate between these two approaches. They check whether the rules-based combination of individual judgements on elements of assessment adds up to their overall, holistic, judgement. If these match, then all is well. If not, the assessor may modify their holistic judgement to match the judgement that the bottom-up approach gives; or they may alter individual judgements on elements to match their holistic judgement; or make some combination of these two kinds of change. This is a much bigger topic than can be done justice to here. However, it suggests that a process of negotiation between element-based and holistic assessment could, over time, lead to two very different but compatible accounts of assessment criteria for the portfolio, one account holistic, the other based on the individual elements assessed.

The consequences of failure

Where the portfolio requires successful attainment of a number of discrete essential outcomes, attention is needed to the consequences of failing one or more outcomes. If the outcomes are discrete then we have in essence a profile system of assessment. Rather than passing or failing the course as a whole, we may take the view that a student has attained some outcomes and not others. The student can be told this, told what part(s) of their portfolio they need to resubmit, and given feedback as appropriate. It may be felt necessary to make this a formal resubmission for the next assessment point. Alternatively, if time and assessment regulations allow, students may be invited to undertake a rapid resubmission of just the failed outcomes. (Ease of resubmission is an argument in favour of portfolios being structured around course learning outcomes.) In the course described in the case study, resubmission rates and pass rates at resubmission are both high.
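The profile system just described can be sketched in a few lines. This is a hypothetical illustration, not the case-study course's actual procedure: the outcome names and the function are invented for the example.

```python
# A hypothetical sketch of profile-style assessment: each outcome is
# judged separately, and a student resubmits only the outcomes not yet
# achieved, rather than repeating the whole portfolio.

def profile_result(judgements):
    """Given {outcome: achieved?}, report the overall result and what to resubmit."""
    failed = [outcome for outcome, achieved in judgements.items() if not achieved]
    return {"passed": not failed, "resubmit": failed}

first_attempt = {
    "plan lessons": True,
    "teach": True,
    "assess student work": False,  # the only outcome needing resubmission
}
print(profile_result(first_attempt))
# {'passed': False, 'resubmit': ['assess student work']}
```

Structuring the judgement this way makes the feedback to the student, and the scope of any resubmission, fall straight out of the per-outcome record.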

Improving portfolio assessment


Detailed analysis of student and assessor performance on portfolio-assessed courses makes it possible to identify areas of difficulty, in the course and in the assessment. Analysis of what lies behind the difficulties then makes it possible to plan and make improvements.

Learning from student performance

A simple analysis of where students did well and less well, which learning outcomes they attained and did not attain, gives valuable though not complete information. Poor student performance on an assessment task may spring from several causes: misinterpretation of the assessment task; a poorly specified assessment task; poor learning and/or teaching of the associated concepts and learning outcomes; or the fact that this topic or outcome is inherently difficult. Further analysis will be needed to see which of these explanations holds, and still more exploration to see what steps can most efficiently be taken to address the situation. In the illustration from the course described in the case study, students were judged to have performed less well on outcome 7 than on others. Examination of assessors' detailed judgements quickly revealed why. The outcome is actually a composite of three outcomes. Some course participants were missing the second and third parts of the outcome, and thus failing the outcome and the course. Clearer signposting of this outcome and briefing of course participants and their tutors addressed the problem. Sometimes, the necessary steps are obvious when the data is available to be inspected.

Learning from assessor performance

The study of discrepancies between assessors can also be useful. There was maximum disagreement between the assessors on outcome 7, already discussed above as an area of high participant failure. Detailed study of disagreements between assessors shows them disagreeing about how explicit a needs analysis and a development plan should be in order to be graded as satisfactory. Assessors' comments (they are asked to provide notes for feedback to the candidate on outcomes not achieved) confirmed this impression. Again the main implication for action was obvious: clearer briefing of students and assessors. (Most of this briefing is achieved through the same document, a course assessment guide sent to course participants and assessors.) Table 1 shows substantial disagreement about an underpinning professional value, commitment to equal opportunities. We were disappointed, but hardly surprised: this is a problematic concept. However, we could identify steps to take. These included giving to course participants and to assessors printed materials on equality of opportunity prepared for a subsequent course.

You may see in this account a cycle of learning by the course team from experience, similar to the learning described by Kolb et al. (1974). A prerequisite for applying this cycle is the collection and analysis of detailed knowledge about the assessment of the course, the judgements made and, if possible, the reasons behind these judgements.
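The kind of per-outcome analysis described above can be sketched as follows. The judgement records here are invented purely for illustration; in practice the data would come from the course's assessment records.

```python
# Computing per-outcome agreement between two assessors, so that
# problem outcomes (like outcome 7 in the case study) stand out.
from collections import defaultdict

# (portfolio id, outcome, assessor 1 judgement, assessor 2 judgement)
judgements = [
    (1, "outcome 1", "pass", "pass"),
    (1, "outcome 7", "pass", "fail"),
    (2, "outcome 1", "pass", "pass"),
    (2, "outcome 7", "fail", "fail"),
    (3, "outcome 1", "fail", "fail"),
    (3, "outcome 7", "pass", "fail"),
]

agree, total = defaultdict(int), defaultdict(int)
for _, outcome, first, second in judgements:
    total[outcome] += 1
    agree[outcome] += first == second

for outcome in sorted(total):
    print(f"{outcome}: {agree[outcome] / total[outcome]:.0%} agreement")
# outcome 1: 100% agreement
# outcome 7: 33% agreement
```

A table of agreement rates like this points the course team at the outcomes where clearer criteria or briefing would pay off most.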

Conclusion

I have suggested that portfolios can be used effectively, in conjunction with other forms of student work, to prompt, support, integrate and then assess student learning. Good practice in assessing portfolios has many of the same prerequisites as good practice in any kind of assessment: chiefly, a clear account of what learning is to be assessed, and then a willingness to analyse student work and its assessment and to make informed improvements year on year to assessment practice. The case study illustrates many of the ideas and practices described elsewhere in this Briefing. Perhaps the single best reason for the use of portfolios is that students value them, as a tangible outcome from and demonstration of their learning.

Acknowledgements

I am grateful to colleagues in the Centre for Higher Education Practice (CeHEP) at The Open University, where the programme described in the case study was developed and where the research for the case study was undertaken, particularly to Carole Baume and Graham Gibbs; to colleagues in the Institute of Educational Technology (IET), where the programme now runs; to Student Services Planning Office for funding the research; to the Open University Learning and Teaching Innovations Committee for funding the Promoting Excellence in Portfolio Practice (PEPP) Project; to Mantz Yorke, visiting professor in CeHEP, for his generous research collaboration; and, for conversations about portfolios and their assessment to many colleagues including in particular Martin Coffey (CeHEP), Jo Tait (IET), Jo Mutlow (IET), and participants in the seminar on portfolios organised by the South East England Consortium in March 2001.

References

Baume, D. (2000) Dialogues: Assessment. Educational Developments, 1(3).

Baume, D. and Yorke, M. (2002) (1) Portfolio Assessment? Yes, but... In: G. Webb and P. Schwarz (eds.) Case Studies on Teaching in Higher Education. London: Kogan Page (in press).

Baume, D. and Yorke, M. (2002) (2) The reliability of assessment by portfolio on a course to develop and accredit teachers in higher education. Studies in Higher Education (in press).

Open University, Institute of Educational Technology. Available from: http://iet.open.ac.uk/courses/PGCTLHE

Kolb, D.A., Rubin, I.M. and McIntyre, J.M. (1974) Organizational Psychology An Experiential Approach. Englewood Cliffs, N.J.: Prentice Hall.

Nonaka, I. (1994) A Dynamic Theory of Organizational Knowledge Creation. Organisation Science. 5(1).

Rowntree, D. (1977) Assessing Students: How Shall We Know Them?. London: Harper & Row.

Rust, C. (2000) An opinion piece: A possible student-centred assessment solution to some of the current problems of modular degree programmes. Active Learning 1(2), 6-131.

SEDA (2001) Teacher Accreditation Scheme [online]. Available from: http://www.seda.demon.co.uk

Wolf, A. (1995) Competence-based assessment. Buckingham: Open University Press.


The Learning and Teaching Support Network Generic Centre


The Learning and Teaching Support Network (LTSN) is a network of 24 Subject Centres, based in higher education institutions throughout the UK, and a Generic Centre, based in York, offering generic information and expertise on learning and teaching issues that cross subject boundaries. It aims to promote high quality learning and teaching through the development and transfer of good practice in all subject disciplines, and to provide a one-stop shop of learning and teaching resources for the HE community.

The Generic Centre, in partnership with other organisations, will broker information and knowledge to facilitate a more co-ordinated approach to enhancing learning and teaching. It will:

Work with the Subject Centres to maximise the potential of the network;
Work in partnership to identify and respond to key priorities within the HE community;
Facilitate access to the development of information, expertise and resources to develop new understandings about learning and teaching.

The LTSN Generic Centre Assessment Series

Guides for:
Senior Managers
Heads of Department
Lecturers
Students

Briefings:
Assessment issues arising from key skills
Assessment of portfolios
Key concepts: formative and summative, criterion and norm-referenced assessment
Assessing disabled students
Self, peer and group assessment
Plagiarism
Work-based learning
Assessment of large groups

Published by Learning and Teaching Support Network (LTSN)

For more information, contact the Generic Centre at:
The Network Centre, Innovation Close, York Science Park, Heslington, York, YO10 5ZF
Tel: 01904 754555 Fax: 01904 754599
Email: gcenquiries@ltsn.ac.uk
www.ltsn.ac.uk/genericcentre
