Rauner 2021


Technical and Vocational Education and Training:

Issues, Concerns and Prospects 33

Felix Rauner

Measuring and Developing Professional Competences in COMET
Method Manual
Technical and Vocational Education and
Training: Issues, Concerns and Prospects

Volume 33

Series Editor
Rupert Maclean, RMIT University, Melbourne, Australia

Associate Editors
Felix Rauner, TVET Research Group, University of Bremen, Bremen, Germany
Karen Evans, Institute of Education, University of London, London, UK
Sharon M. McLennon, Newfoundland and Labrador Workforce Inno, Corner Brook,
Canada

Advisory Editors
David Atchoarena, Division for Education Strategies & Capacity Building,
UNESCO, Paris, France
András Benedek, Ministry of Employment and Labour, Budapest, Hungary
Paul Benteler, Stahlwerke Bremen, Bremen, Germany
Michel Carton, NORRAG c/o Graduate Institute of International and Development
Studies, Geneva, Switzerland
Chris Chinien, Workforce Development Consulting, Montreal, Canada
Claudio De Moura Castro, Faculdade Pitágoras, Belo Horizonte, Brazil
Michael Frearson, SQW Consulting, Cambridge, UK
Lavinia Gasperini, Natural Resources Management and Environment Department,
Food and Agriculture Organization, Rome, Italy
Philipp Grollmann, Federal Institute for Vocational Education and Training (BiBB),
Bonn, Germany
W. Norton Grubb, University of California, Berkeley, USA
Dennis R. Herschbach, University of Maryland, College Park, USA
Oriol Homs, Centre for European Investigation and Research in the Mediterranean
Region, Barcelona, Spain
Moo-Sub Kang, Korea Research Institute for Vocational Education and Training,
Seoul, Korea (Republic of)
Bonaventure W. Kerre, Moi University, Eldoret, Kenya
Günter Klein, German Aerospace Center, Bonn, Germany
Wilfried Kruse, Dortmund Technical University, Dortmund, Germany
Jon Lauglo, University of Oslo, Oslo, Norway
Alexander Leibovich, Institute for Vocational Education and Training Development,
Moscow, Russia
Robert Lerman, Urban Institute, Washington, USA
Naing Yee Mar, GIZ, Yangon, Myanmar
Munther Wassef Masri, National Centre for Human Resources Development,
Amman, Jordan
Phillip McKenzie, Australian Council for Educational Research, Melbourne,
Australia
Margarita Pavlova, Education University of Hong Kong, Hong Kong, China
Theo Raubsaet, Centre for Work, Training and Social Policy, Nijmegen, The
Netherlands
Barry Sheehan, Melbourne University, Melbourne, Australia
Madhu Singh, UNESCO Institute for Lifelong Learning, Hamburg, Germany
Jandhyala Tilak, National Institute of Educational Planning and Administration,
New Delhi, India
Pedro Daniel Weinberg, formerly Inter-American Centre for Knowledge Develop-
ment in Vocational Training (ILO/CINTERFOR), Montevideo, Uruguay
Adrian Ziderman, Bar-Ilan University, Ramat Gan, Israel

More information about this series at http://www.springer.com/series/6969


Felix Rauner

Measuring and Developing Professional Competences in COMET
Method Manual
Felix Rauner
University of Bremen
Bremen, Germany

ISSN 1871-3041 ISSN 2213-221X (electronic)


Technical and Vocational Education and Training: Issues, Concerns and Prospects
ISBN 978-981-16-0956-5 ISBN 978-981-16-0957-2 (eBook)
https://doi.org/10.1007/978-981-16-0957-2

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore
Pte Ltd. 2021
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by
similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface

In less than a decade, the methods of competence diagnostics in accordance with the
COMET test procedure have become an internationally established instrument for
quality assurance and quality development in vocational education and training. At
its core, the methodological instruments comprise the COMET competence and
measurement model as a basis for developing test and examination tasks and for
evaluating task solutions: the rating procedure. The insight that solving and working
on tasks in the working environment always involves a situational solution space as
well as extensive scope for creativity on a social level, both of which must be
exploited, was translated into the format of open, complex test tasks. The insight
that professional tasks must always be solved completely, if only for reasons of
occupational safety, health protection, environmental and social compatibility, and
not least because of the qualitative competition to which companies are exposed,
justifies the theory of the holistic solution of professional tasks.
After the psychometric evaluation of the COMET competence and measurement
model by Thomas Martens and Birgitt Erdwien had proved successful in 2009,
the COMET project developed into an international research and development
network with projects encompassing numerous industrial-technical, commercial
and personalised service occupations. In particular, the cooperation projects with
the COMET consortia in China, headed by Professor Zhao Zhiqun and in
South Africa, supported by the “Sector Education and Training Authority
merSETA” and a group of doctoral students, have contributed to expanding and
profiling internationally comparative vocational education research in intercultural
teaching and learning research.
Thanks to Professor Martin Fischer’s initiative, an interim report on COMET
research was presented at a conference at KIT in October 2013 under the
slogan “COMET under the microscope”. The documentation of presentations and
discussions by COMET experts in practice, educational administrations and voca-
tional training research and, above all, the exchange of experience with colleagues
who evaluated the COMET project from an overarching external vocational educa-
tion and training perspective, contributed to a conference result that had a lasting
effect on the further development of the COMET methodology (Fischer, Rauner, &
Zhao, 2015). Above all, the criticism that the COMET competence model only
covers conceptual and professional planning competence but not practical skills,
decisively contributed to the further development of the competence and measure-
ment model. Meanwhile, an extended measurement model has been developed as a
foundation for conducting competence-based examinations, including their
“practical” part.
This manual responds to a frequently expressed request for a summary of the
methods developed and tested in the COMET projects.
Such extensive work has only been possible with the participation of the large
number of colleagues who have contributed to the development of these methods.
The spectrum of the documented methods ranges from the development of test tasks
and the performance of pre-tests, cross-sectional studies and longitudinal studies to
the development and evaluation of scales for measuring professional and
organisational identity and the commitment based thereon, culminating in context
analyses and a procedure for measuring test motivation. Particular importance is
attached to the presentation and exemplary illustration of the methods for
psychometrically testing the competence and measurement model, as well as the
scales and models of context analysis.
In hardly any other field of vocational education and training research is the
participation of teachers and trainers in the research process as indispensable as in
competence diagnostics. This is one of the main findings of COMET research in
recent years.
I would, therefore, like to thank the numerous project groups that have so far
played a very decisive role in the implementation of projects in an increasing range
of occupations and specialist areas in initial vocational training, technical colleges
and higher technical schools, as well as tertiary vocational training courses. This
applies above all to the evaluation of the wide variety of solutions to the test tasks,
the didactic evaluation of the rating scales and the interpretation of the test results,
whereby the latter requires intimate knowledge of the respective teaching and
learning contexts.
My thanks also go to the Manufacturing, Engineering and Related Services
Sector Education and Training Authority (merSETA) in South Africa who supported
the translation of the handbook from German to English, and the Institute for Post-
School Studies at the University of the Western Cape, South Africa, under the
leadership of Prof Joy Papier who, together with Dr. Claudia Beck-Reinhardt,
managed the book translation project.
In the first part, the manual introduces the COMET competence and measurement
model in three introductory chapters. The fifth chapter describes the methods of
developing and evaluating test tasks. The sixth chapter provides a detailed insight
into the psychometric evaluation of the test instruments using practical examples.
Chapters 7 and 8 document the steps required for planning, conducting and
evaluating the tests.
Chapter 9 presents the contribution of COMET competence diagnostics to
teaching-learning research. Once the participation of teachers and trainers in the
“student” tests had led to new findings regarding the transfer of the professional
competence profiles of teachers/lecturers of vocational subjects (LbF [TPD]) to their
students, COMET competence diagnostics was also developed for LbF (TPD).
Chapter 10 shows which methods of competence diagnostics and development for
LbF (TPD) are available for the implementation of large-scale projects and for the
training and further education of LbF (TPD).
The concluding eleventh chapter deals with the issue of the application of
COMET instruments for the design, organisation and evaluation of VET processes,
which is regarded as crucial from the perspective of VET practice.
I hope that this methodological manual will provide a handy toolkit and thereby
give a powerful boost to quality assurance and development in vocational education
and training.
Furthermore, I would like to thank several colleagues for (co-)drafting specific
chapters: Joy Backhaus (Sect. 5.6.2), Thomas Martens (6.1 and 6.3), Johanna
Kalvelage and Yingyi Zhou (6.4), Rongxia Zhuang and Li Ji (6.5), Jürgen Lehberger
(10.7, 10.8 and Chap. 11) as well as Karin Gäumann-Felix and Daniel Hofer (11.3).
Additionally, I thank the many contributors who were involved in the realisation of
this book in various ways: Martin Ahrens, Nele Bachmann, Birgitt Erdwien, Jenny
Franke, Jenny Frenzel, Bernd Haasler, Ursel Hauschildt, Lars Heinemann, Dorothea
Piening and Zhiqun Zhao.

Bremen, Germany Felix Rauner
June 2021
Series Editor’s Introduction

This ground-breaking volume by Professor Felix Rauner, Measuring and Developing
Professional Competences in COMET: Method Manual, is the latest book to
be published in the long-standing Springer book series “Technical and Vocational
Education and Training”. It is the 33rd volume published to date in the TVET
book series.

This is an important book on an important topic and will no doubt be widely read
and respected. Through its eleven chapters, the volume comprehensively and
critically examines and evaluates key aspects of measuring and developing profes-
sional competencies (COMET). As Professor Rauner points out, in less than a
decade, the methods of competence diagnostics, in accordance with the COMET
test procedure, have become an internationally established instrument for quality
assurance and quality development in vocational education and training.
The book focuses particularly on examining what teachers and trainers can learn
from modelling and measuring vocational competence and vocational identity
development for the design and organisation of vocational training processes, and
on whether test and learning tasks are related to each other and what distinguishes
them from each other.
Professor Felix Rauner is very well qualified to write this important and timely
book since he is widely regarded and respected as being an outstanding, widely
influential researcher, author and opinion leader working in the area of education,
with particular reference to technical and vocational education and training (TVET).
Professor Rauner is based in Germany, having worked for many years at the Institut
Technik und Bildung, University of Bremen. He has published very widely in the
field of TVET, including co-authoring the widely used and highly respected
comprehensive (1103 page) Handbook of Technical and Vocational Education,
published in the Springer International Library of Technical and Vocational Educa-
tion and Training. That Handbook is published in both English and German.
In terms of the Springer Book Series in which this volume is published, the
various topics dealt with in the series are wide ranging and varied in coverage, with
an emphasis on cutting edge developments, best practices and education innovations
for development. More information about this book series is available at http://www.
springer.com/series/5888
We believe the book series (including this particular volume) makes a useful
contribution to knowledge sharing about technical and vocational education and
training (TVET). Any readers of this or other volumes in the series who have an idea
for writing their own book (or editing a book) on any aspect of TVET are
enthusiastically encouraged to approach the series editors, either directly or through
Springer, to publish their own volume in the series, since we are always willing to
assist prospective authors in shaping their manuscripts in ways that make them
suitable for publication in this series.

School of Education, RMIT University, A. O. Rupert Maclean
Melbourne, Australia
10 February 2021
Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 The Possibilities and Limitations of Large-Scale Competence
Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Modelling Professional Competence . . . . . . . . . . . . . . . . . . . . . 3
1.3 The Format of the Test Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Modelling and Measuring Professional Identity and Professional
Commitment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Modelling and Measuring the Competence of Teachers in
Vocational Subjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 The Quality Criteria for Professional Competence Diagnostics
and the Design of Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Professional Competence as a Subject of Competence
Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 Design Instead of Adaption . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 The Possibilities and Limitations of Large-Scale Competence
Diagnostics (LS–CD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 Implicit Professional Knowledge
(Tacit Knowledge) . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.2 Professional Competence (Employability) . . . . . . . . . 10
2.2.3 Craftsmanship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.4 Social and Key Competences . . . . . . . . . . . . . . . . . . 11
2.2.5 Abilities That Are Expressed in the Interactive
Progression of the Work . . . . . . . . . . . . . . . . . . . . . . 12
3 Categorial Framework for Modelling and Measuring Professional
Competence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1 The Occupation Type of Societal Work . . . . . . . . . . . . . . . . . . 16
3.1.1 Employability or Professional Competence . . . . . . . . . 18
3.1.2 Architecture of Parallel Educational Paths . . . . . . . . . 18
3.1.3 Professional Validity of Competence Diagnostics . . . . 19

3.2 The Design of Work and Technology: Implications for the
Modelling of Professional Competence . . . . . . . . . . . . . . . . . 20
3.2.1 Professional Work Tasks and Professional
Competence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Task Analyses: Identification of the Characteristic Professional
Work Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3.1 Professional Scientific Task Analyses Include . . . . . . . 26
3.3.2 Identifying Professional Work Tasks: Expert
Specialist Workshops (EFW) . . . . . . . . . . . . . . . . . . . 27
3.3.3 Professional Scientific Work Process Studies . . . . . . . 30
3.4 Guiding Principles and Objectives of Vocational Education
and Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4.1 Professional ‘Gestaltungskompetenz’ (the Ability to
Shape or Design One’s Professional Future) . . . . . . . . 39
3.4.2 Design-Oriented Vocational Education. . . . . . . . . . . . 40
3.4.3 Professional Competence . . . . . . . . . . . . . . . . . . . . . 40
3.5 Theories of Vocational Learning and Professional
Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.5.1 The Novice-Expert Paradigm: Competence
Development in Vocational Education . . . . . . . . . . . . 42
3.5.2 Work Process Knowledge . . . . . . . . . . . . . . . . . . . . . 45
3.5.3 Practical Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.5.4 Multiple Competence . . . . . . . . . . . . . . . . . . . . . . . . 54
4 The COMET Competence Model . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.1 Requirements for Competence Modelling . . . . . . . . . . . . . . . . . 61
4.2 The Levels of Professional Competence (Requirement
Dimension) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.2.1 Operationalisation of the Competence Criteria:
Development of the Measurement Model . . . . . . . . . . 68
4.3 Structure of the Content Dimension . . . . . . . . . . . . . . . . . . . . . 68
4.4 The Action Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.4.1 The Importance of Action Types . . . . . . . . . . . . . . . . 73
4.5 A Cross-Professional Structure of Vocational Competence . . . . . 74
4.6 Extending the Competence Model: Implementing Planned
Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.6.1 Operational Projects/Orders . . . . . . . . . . . . . . . . . . . . 77
4.6.2 The Expert Discussion . . . . . . . . . . . . . . . . . . . . . . . 78
4.6.3 Rater/Examiner Training for Assessing the Practical
Exam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.7 Identity and Commitment: A Dimension of Professional
Competence Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.7.1 Normative Fields of Reference for Commitment and
Work Morale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.7.2 Professional Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.7.3 Organisational versus Occupational Commitment . . . . 82
4.7.4 Construction of Scales to Capture Work-Related
Identity and Commitment Occupational Identity . . . . . 83
4.7.5 Organisational Identity . . . . . . . . . . . . . . . . . . . . . . . 84
4.7.6 Modelling the Connections Between Identity and
Commitment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.7.7 Occupational Commitment . . . . . . . . . . . . . . . . . . . . 85
4.7.8 Organisational Commitment . . . . . . . . . . . . . . . . . . . 85
4.7.9 Work Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.7.10 Example of an Analysis of the Measurement Model
(Performed by Johanna Kalvelage and Yingyi
Zhou, → 6.4) . . . . . . . . . . . . . . . . . . . . . . 86
4.7.11 International Comparisons . . . . . . . . . . . . . . . . . . . . . 87
5 Developing Open Test Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.1 Expert Specialist Workshops for Identifying Characteristic
Professional Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.1.1 The Preparation of Expert Specialist Workshops . . . . . 91
5.1.2 Implementation of the Workshop . . . . . . . . . . . . . . . . 92
5.1.3 External Validation . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.1.4 Evaluation of the Validation . . . . . . . . . . . . . . . . . . . 95
5.2 An Open Test Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.2.1 Representativeness . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.2.2 Authenticity/Reality Reference . . . . . . . . . . . . . . . . . 98
5.2.3 Difficulty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.2.4 The Description of the Situation . . . . . . . . . . . . . . . . 99
5.2.5 Standards and Rules to be Complied with . . . . . . . . . 99
5.3 Cross-Professional and Subject-Related Test Tasks . . . . . . . . . . 100
5.4 Test Arrangements for Related Vocational Training Courses
with Different Levels of Qualification . . . . . . . . . . . . . . . . . . . . 101
5.4.1 The S II Test Arrangement . . . . . . . . . . . . . . . . . . . . 102
5.4.2 The Post-SII Test Arrangement . . . . . . . . . . . . . . . . . 102
5.4.3 The Third Test Arrangement: Graduates of
Professionally Qualifying Bachelor Programmes
As Primary Test Groups . . . . . . . . . . . . . . . . . . . . . . 103
5.4.4 Validity of the Test Tasks for Different Training
Courses and Test Arrangements . . . . . . . . . . . . . . . . . 104
5.5 Description of the Solution Scopes . . . . . . . . . . . . . . . . . . . . . . 104
5.6 Evaluation and Choice of Test Tasks: The Pre-Test . . . . . . . . . . 105
5.6.1 Determining the Test Group(s) . . . . . . . . . . . . . . . . . 107
5.6.2 Training of Test Task Authors . . . . . . . . . . . . . . . . . . 107
5.6.3 Calculating the Finn Coefficient . . . . . . . . . . . . . . 113
5.6.4 Rating Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
5.6.5 Interviewing the Test Participants . . . . . . . . . . . . . . . 122
5.6.6 Selection and Revision of Test Tasks and Solution
Scopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
5.7 Test Quality Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.7.1 Objectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5.8 Difficulty Level: A Problematic Quality Criterion for Test
Tasks Intended to Measure Professional Competence . . . . . . . . 132
5.8.1 Standardised Test Tasks . . . . . . . . . . . . . . . . . . . . . . 132
5.8.2 Criteria-Oriented Test Tasks . . . . . . . . . . . . . . . . . . . 136
5.8.3 The Variation Coefficient V: A Benchmark for the
Homogeneity of the Task Solution . . . . . . . . . . . . . . . 145
5.8.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
6 Psychometric Evaluation of the Competence and Measurement
Model for Vocational Education and Training: COMET . . . . . . . . 147
6.1 What Makes it So Difficult to Measure Professional
Competence? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.1.1 Procedures Based on the Analysis of the Covariance
Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.1.2 Mixed Distribution Models . . . . . . . . . . . . . . . . . . . . 149
6.2 Ensuring the Interrater Reliability of the COMET Test
Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
6.2.1 Example: Securing Interrater Reliability (COMET
Vol. I, Sect. 4.2, Birgitt Erdwien, Bernd Haasler). . . . . 150
6.3 Latent Class Analysis of the COMET Competence and
Measurement Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
6.3.1 On the Connection between Test Behaviour and
Personal Characteristics . . . . . . . . . . . . . . . . . . . . . . 155
6.3.2 On the Relationship Between the Structure and
Modelling of Vocational Competences . . . . . . . . . . . . 156
6.3.3 Mathematical Properties of a Test Model . . . . . . . . . . 156
6.3.4 Characteristics of a Competence Model for
Vocational Education and Training . . . . . . . . . . . . . . 157
6.3.5 Dilemma: Dependent Versus Independent Test
Items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
6.3.6 The Search for a Bridge Between Theory and
Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
6.3.7 The Example COMET . . . . . . . . . . . . . . . . . . . . . . . 158
6.3.8 Empirical Procedure . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.3.9 On the Reliability of the COMET Rating
Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.3.10 On the Content Validity of the COMET Rating
Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
6.3.11 Population . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
6.3.12 Step 1: Determination of Interrater Reliability . . . . . . . 161
6.3.13 Step 2: Sorting of Task Solutions . . . . . . . . . . . . . . . 161
6.3.14 Step 3: Verification of the Homogeneity of the
Competence Criteria . . . . . . . . . . . . . . . . . . . . . . . 161
6.3.15 Step 4: Identification of Typical Competence
Profiles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
6.3.16 Distribution of Tasks Among the Competence
Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
6.3.17 Step 5: Longitudinal Analysis of Competence
Measurement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
6.3.18 Discussion of the Results . . . . . . . . . . . . . . . . . . . . . 169
6.3.19 Need for Research . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.3.20 Prospect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
6.4 Confirmatory Factor Analysis . . . . . . . . . . . . . . . . . . . . . . 171
6.4.1 The I-D Model (→ 4, Fig. 4.5) . . . . . . . . . . . . . . . 171
6.4.2 Original Scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.4.3 Confirmatory Factor Analysis for the Original
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.4.4 Explanations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.4.5 Modification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.4.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.4.7 Explorative Factor Analysis . . . . . . . . . . . . . . . . . 181
6.4.8 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
6.4.9 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
6.4.10 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6.4.11 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
6.4.12 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
6.4.13 Overall Discussion EFA . . . . . . . . . . . . . . . . . . . . . . 184
6.4.14 Considerations for Further Action . . . . . . . . . . . . . . . 184
6.5 Validity and Interrater Reliability in the Intercultural
Application of COMET Competence Diagnostics . . . . . . . . . . . 185
6.5.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.5.2 Preparation and Translation . . . . . . . . . . . . . . . . . . . . 186
6.5.3 Cultural Adaption . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.5.4 Rater Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.5.5 Analysis of the Reliability and Validity of the
Evaluation Item . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.5.6 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
7 Conducting Tests and Examinations . . . . . . . . . . . . . . . . . . . . . . . . 193
7.1 How Competence Diagnostics and Testing are Connected . . . . . 193
7.1.1 Reviews of Professional Competence: The New
Examination Practice . . . . . . . . . . . . . . . . . . . . . . . . 194
7.1.2 Context Reference: Work and Business Processes . . . . 197
7.1.3 Levelling of Test Results . . . . . . . . . . . . . . . . . . . . . 201
7.2 The Measurement of Professional Competence . . . . . . . . . . . . . 202
7.2.1 COMET as the Basis for a Competence-Based
Examination . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
7.2.2 Examination Format for the Extended Final
Examination (GAP) . . . . . . . . . . . . . . . . . . . . . . . . . 209
7.2.3 Procedure for the ‘Operational Order’ (OO)
Examination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
7.2.4 The Examination Result . . . . . . . . . . . . . . . . . . . . . . 216
7.3 Interrelationship Analyses Between Examinations and
Competence Diagnostics for Automotive Mechatronics
Technicians . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
7.3.1 Comparison of the Examination and (COMET) Test
Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
7.3.2 Results of the Statistical Influence Analysis . . . . . . . . 219
7.3.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
7.4 Measuring the Test Motivation . . . . . . . . . . . . . . . . . . . . . . . . . 225
7.4.1 Preliminary Study: The Time Scope of the Test as
Influence on Test Motivation . . . . . . . . . . . . . . . . . . . 225
7.4.2 Explorative Factor Analysis of the Relationship
Between Test Motivation and Test Performance . . . . . 230
7.4.3 Influence of Processing Time on Overall
Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
7.4.4 Results of the Comparative Study: Test and
Examination Motivation among Motor Vehicle
Trainees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
7.4.5 The Cultural Dimension of Test Motivation . . . . . . . . 244
7.4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7.5 Planning and Executing COMET Projects . . . . . . . . . . . . . . . . . 248
7.5.1 Research Design and Research Strategies . . . . . . . . . . 248
7.5.2 Defining the Project Design . . . . . . . . . . . . . . . . . . . . 257
7.5.3 Selecting and Developing the Test Items, the Test
Documents for the ‘Commitment’ Survey and
Performing the Context Analyses . . . . . . . . . . . . . . . . 264
7.5.4 Informing about the Objectives and the
Implementation of the Test . . . . . . . . . . . . . . . . . . . . 274
7.5.5 Research as a Cooperative Project between Science
and Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
7.5.6 Transfer Activities . . . . . . . . . . . . . . . . . . . . . . . . . . 282
8 Evaluating and Presenting the Test Results . . . . . . . . . . . . . . . . . . . 285
8.1 Classification of Individual Performance in Professional
Competence Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
8.1.1 Determination of the Scores for the Three
Competence Dimensions . . . . . . . . . . . . . . . . . . . . . . 286
8.1.2 Sub-Competences, Competence Dimensions and
Competence Levels . . . . . . . . . . . . . . . . . . . . . . . . . 287
Contents xvii

8.1.3 Classification of Individual Performance in Professional Competence Levels . . . . . . . . . . . . . . 287
8.2 Graphical Representation of the Test Results . . . . . . . . . . . . . . . 290
8.2.1 Competence Levels . . . . . . . . . . . . . . . . . . . . . . . . . 290
8.2.2 Differentiation according to Knowledge Levels . . . . . 292
8.2.3 Transfer of Competence Levels Differentiated
according to Knowledge Levels into a Grading
Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
8.3 Competence Development as a Competence Profile . . . . . . . . . . 297
8.3.1 Homogeneous versus Selective Competence
Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
8.4 Heterogeneity of Professional Competence Development . . . . . . 301
8.4.1 Heterogeneous Levels of Competence . . . . . . . . . . . . 302
8.4.2 Percentile Bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
8.4.3 The Heterogeneity Diagram . . . . . . . . . . . . . . . . . . . 307
8.4.4 The Causes of Heterogeneity . . . . . . . . . . . . . . . . . . . 309
8.5 Measuring Identity and Commitment . . . . . . . . . . . . . . . . . . . . 311
8.5.1 On the Construction of Scales . . . . . . . . . . . . . . . . . . 311
8.5.2 Calculating the Results . . . . . . . . . . . . . . . . . . . . . . . 312
8.5.3 ‘Commitment Lights’ . . . . . . . . . . . . . . . . . . . . . . . . 313
8.5.4 Commitment Progression . . . . . . . . . . . . . . . . . . . . . 313
8.5.5 Four-Field Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 315
8.5.6 Identity and Commitment Profiles . . . . . . . . . . . . . . . 320
8.6 Identity and Commitment as Determinants of Professional
Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
8.6.1 Professional and Organisational Identity as
Determinants of the Quality of Vocational
Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
8.6.2 Professional Identity . . . . . . . . . . . . . . . . . . . . . . . . . 323
8.6.3 Organisational Identity . . . . . . . . . . . . . . . . . . . . . . . 325
8.6.4 Professional Commitment . . . . . . . . . . . . . . . . . . . . . 327
8.6.5 Organisational Commitment . . . . . . . . . . . . . . . . . . . 328
9 The Contribution of COMET Competence Diagnostics to
Teaching and Learning Research . . . . . . . . . . . . . . . . . . . . . . . . . . 331
9.1 A New Dimension for Teaching-Learning Research in
Vocational Education and Training . . . . . . . . . . . . . . . . . . . . . 331
9.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
9.1.2 Teaching and Learning Research Based on
COMET Research Data . . . . . . . . . . . . . . . . . . . . . . 333
9.1.3 Competence Diagnostics . . . . . . . . . . . . . . . . . . . . . . 335
9.1.4 Teachers as Determinants of Professional
Competence Development . . . . . . . . . . . . . . . . . . . . . 336
9.1.5 Professional Competence Development and
Professional Identity/Professional Commitment . . . . . 338
9.1.6 Professional Competence Development: Context Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
9.2 Professional Competence and Work Ethic . . . . . . . . . . . . . . . . . 342
9.2.1 Introduction: From a Function-Oriented to a
Design-Oriented Vocational Training Concept . . . . . . 342
9.2.2 The Characteristics of Vocational Education and
Training (Chap. 3) . . . . . . . . . . . . . . . . . . . . . . . . . . 345
9.2.3 Competence Profiles for the Representation of
Competence Development and Professional Work
Ethic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
9.2.4 The Relationship Between the Level of Competence
and the Homogeneity of Competence
Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
9.2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
9.3 Professional Identity and Competence: An Inseparable
Link . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
9.3.1 Justification of the Hypothesis . . . . . . . . . . . . . . . . . . 354
9.3.2 Methodical Approach . . . . . . . . . . . . . . . . . . . . . . . . 356
9.3.3 Test Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
9.3.4 Conclusions and Perspectives . . . . . . . . . . . . . . . . . . 360
9.4 Training Qualities and Competence Development . . . . . . . . . . . 361
9.4.1 The Question . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
9.4.2 Methodical Approach . . . . . . . . . . . . . . . . . . . . . . . . 362
9.4.3 Results on the Relationship between Competence
and Training Quality . . . . . . . . . . . . . . . . . . . . . . . . . 363
9.4.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
9.5 The Training Potential of Vocational Schools . . . . . . . . . . . . . . 373
9.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
9.5.2 Methodical Approach . . . . . . . . . . . . . . . . . . . . . . . . 373
9.6 Teachers and Trainers Discover Their Competences:
A “Eureka” Effect and Its Consequences . . . . . . . . . . . . . . . . . 374
9.6.1 The Development of Test Items . . . . . . . . . . . . . . . . . 374
9.6.2 The Changed Understanding of the Subject Shapes
the Didactic Actions of Teachers . . . . . . . . . . . . . . . . 377
9.6.3 Context Analyses: The Subjective View of Learners
on the Importance of Learning Venues . . . . . . . . . . . . 378
9.6.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
9.6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
10 Measuring Professional Competence of Teachers of Professional
Disciplines (TPD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
10.1 Theoretical Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
10.2 Fields of Action and Occupation for Vocational School
Teachers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
10.2.1 Proposal for a Measurement Method by Oser, Curcio and Düggeli . . . . . . . . . . . . . . . . . . . . . . 393
10.2.2 Competence Profiles . . . . . . . . . . . . . . . . . . . . . . . . . 394
10.2.3 Validity of the Oser Test Procedure . . . . . . . . . . . . . . 394
10.2.4 The Action Fields for TPD . . . . . . . . . . . . . . . . . . . . 394
10.3 The “TPD” (Vocational School Teacher) Competence
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
10.3.1 The Requirements Dimension . . . . . . . . . . . . . . . . . . 400
10.3.2 The Contextual Dimension . . . . . . . . . . . . . . . . . . . . 402
10.3.3 The Behavioural Dimension . . . . . . . . . . . . . . . . . . . 402
10.4 The Measurement Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
10.4.1 Operationalisation of the Requirements Dimension
(Fig. 10.5) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
10.4.2 The Competence Dimensions . . . . . . . . . . . . . . . . . . 404
10.4.3 The Competence Levels . . . . . . . . . . . . . . . . . . . . . . 405
10.4.4 Operationalisation of Competence Components for
Teachers of Professional Disciplines (TPD) (Rating
Scale A) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
10.4.5 Vocational Competence . . . . . . . . . . . . . . . . . . . . . . 405
10.4.6 Vocational/Technical Didactics . . . . . . . . . . . . . . . . . 406
10.4.7 Technical Methodology (Forms of Teaching and
Learning) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
10.4.8 Sustainability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
10.4.9 Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
10.4.10 Teaching and Training Organisations . . . . . . . . . . . . . 408
10.4.11 Social Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . 408
10.4.12 Social-Cultural Embedment . . . . . . . . . . . . . . . . . . . . 409
10.4.13 Creativity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
10.5 Test Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
10.5.1 Test Tasks for Measuring Cognitive Dispositions
(Conceptual-Planning Competence) . . . . . . . . . . . . . . 410
10.5.2 Time Scope of the Test Tasks (for Large-Scale
Projects) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
10.6 State of Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
10.6.1 A Pilot Study with Student Teachers . . . . . . . . . . . . . 411
10.6.2 The Research Programme: Competence
Development of Teachers and Lecturers in
Vocational Education and Training in China . . . . . . . 412
10.6.3 Investigating the Link Between Measured Teacher
Competence and Quality of Teaching . . . . . . . . . . . . 418
10.7 Evaluation of Demonstration Lessons in the Second Phase of
Training Teachers with Professional Discipline (TPD): A Test
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
10.7.1 The Lesson Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
10.7.2 Class Observation . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
10.7.3 The Interview (Following the Demonstration
Lesson) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
10.7.4 Final Evaluation of the Examination Performance . . . . 420
10.8 Development and Evaluation of the Model “Social-
Communicative Competence of Teachers” . . . . . . . . . . . . . . . . 421
10.9 Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
10.9.1 Psychometric Evaluation of the Competence
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
10.9.2 Investigating the Link Between Measured Teacher
Competence and Quality of Teaching . . . . . . . . . . . . 422
11 The Didactic Quality of the Competence and Measurement
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
11.1 The Learning Field Concept Provides Vocational Education
and Training with an Original, Educational-Theoretical
Foundation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
11.1.1 Professional Action Fields as a Reference Point
for the Development of Learning Fields . . . . . . . . . . . 429
11.2 Designing Vocational Education Processes in Vocational
Schools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
11.2.1 Professional Knowledge . . . . . . . . . . . . . . . . . . . . . . 430
11.2.2 The Training Paradox . . . . . . . . . . . . . . . . . . . . . . . . 431
11.2.3 Designing Learning Tasks . . . . . . . . . . . . . . . . . . . . . 432
11.2.4 Designing Teaching-Learning Processes . . . . . . . . . . . 447
11.2.5 Dealing with Heterogeneity . . . . . . . . . . . . . . . . . . . . 459
11.2.6 Step 5: Evaluating the Task Solution (Self-
Assessment) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
11.2.7 Step 6: Reflection on Work and Learning
Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
11.2.8 Step 7: Presenting and Evaluating the Task Solution,
Work and Learning Process as Well as Learning
Outcomes (External Evaluation) . . . . . . . . . . . . . . . . 466
11.2.9 Step 8: Systematising and Generalising Learning
Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
11.3 COMET as a Didactic Concept in Nursing Training at Higher
Technical Colleges in Switzerland: Examples of Teaching and
Examinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
11.3.1 The Higher Vocational Nursing Schools in
Switzerland . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
11.3.2 COMET in the Context of the BZ-GS [Health and
Social Education Centre] . . . . . . . . . . . . . . . . . . . . . . 472
11.3.3 Example Lesson: Nursing Relatives and Palliative
Care . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
11.3.4 Example Lesson: CPR—Cardiopulmonary Resuscitation . . . . . . . . . . . . . . . . . . . . . . 477
11.3.5 Examinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
11.3.6 Suggestion for Lesson Preparation . . . . . . . . . . . . . . . 483
11.3.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485

Appendix A: The Four Developmental Areas . . . . . . . . . . . . . . . . . . . . . 487


Appendix B: Rating Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
Appendix C: Examples for Test Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . 497
Appendix D: Four-Field Matrix (Tables) . . . . . . . . . . . . . . . . . . . . . . . . 509
Appendix E: Correlation Values for the Correlation Between
Occupational Competences and I-C Averages . . . . . . . . . . . . . . . . . . . . . 511
List of References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
Subject Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
Chapter 1
Introduction

Methods for developing and measuring professional competence, professional identity and professional commitment have been developed and widely introduced in
COMET1 projects since 2006 (COMET vols. 1–5; Fischer, Rauner, & Zhao, 2015a,
2015b). Numerous publications have also reported on the methodological aspects of
this research and the VET practice based on it. In discussion both with the test
community and with the VET administrations responsible for quality assurance, and
not least with the many teachers and trainers involved in the COMET projects,
interest in a summary of the methods of the COMET project has meanwhile been
expressed. The methodical and methodological interest is directed not only at the
theoretical test aspects of vocational competence diagnostics, but also at the didactic
significance of the competence model on which the competence diagnostics process
is based: What can teachers and trainers learn from modelling and measuring
vocational competence and vocational identity development for the design and
organisation of vocational training processes? Are test and learning tasks related to
each other and what distinguishes them from each other?
In terms of test theory, the questions involved seem to have been underestimated
in the previous discussion on the methodology of competence diagnostics in voca-
tional education and training. As the paradigm of ‘right/wrong’ test tasks in the
search for innovative, practical and creative solutions to professional tasks in the
working world is only of very limited significance for specific aspects of profes-
sional task solutions, the standards-oriented test format loses its significance—
especially in the widespread form of multiple-choice testing. It is replaced by real-life tasks, whose variety of possible solutions is sometimes difficult to grasp,
even for experts. A heating engineer’s routine task of advising a customer in the
modernisation of his heating system, taking into account the variety of technical

1 Note on the spelling of KOMET/COMET: The spelling as COMET has been applied since the international COMET conference organised by the European Training Foundation (ETF) in 2010.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 1
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_1

possibilities, their environmental compatibility, their practical value and their invest-
ment and follow-up costs, and integrating this project into the operational work plan,
already shows that professional competence is characterised by exploiting the scope
for solutions and design in line with each specific situation, taking into account
competing criteria (and values). In general, this requires the development of com-
petence diagnostics guided by the realisation that professional specialists are
involved in the execution of large and small tasks in design and responsibility
processes that always involve the search for intelligent compromises. A bicycle mechanic, for example, is distinguished by the fact that he is able to find out, in conversation with a customer, which bicycle configuration might be the most suitable.
Thomas MARTENS and Jürgen ROST therefore appropriately classify the measurement of professional competence in the context of the competence-diagnostic discussion, stating: ‘In the COMET project, an ability model is examined.
The aim is to model how testees, whose solutions have different degrees of devel-
opment, can cope with open professional tasks’ (Martens & Rost, 2009, 98).
Whether and how it is possible to establish solid international comparative
competence diagnostics, both in terms of content and in accordance with psycho-
metric criteria, in a world with very different vocational training systems and, in
addition, with open test tasks, requires convincing answers. This is true if only because the abundance of problems to be solved seems to be an insurmountable hurdle (cf. Baethge, Achtenhagen, Babie, Baethge-Kinsky, & Weber, 2006).
The successful empirical review of the COMET test procedure (2007–2012)
resulted in an available competence and measurement model, which opens up access
to the complex world of occupations, occupational fields and the almost conspicuous
diversity of vocational training courses and systems for competence diagnostics.
However, this methodological manual deals not only with the fundamental questions
of modelling vocational competence and the psychometric evaluation of the
COMET test procedure, but also with the following topics:

1.1 The Possibilities and Limitations of Large-Scale Competence Diagnostics

A mere glance at the job descriptions of occupations shows that numerous profes-
sional competences can be measured with an acceptable amount of effort using
methods of competence diagnostics. Those professional skills that can be easily
recorded empirically and those that can only be recorded empirically with greater
effort are described in the first chapter. The third and sixth chapters describe how
methods of competence diagnostics can be used to improve the quality of tests.

1.2 Modelling Professional Competence

A methodological manual cannot avoid the simple question of what constitutes professional competence. The answer to this seemingly simple question is made
more difficult by the fact that vocational educational literature offers very different
answers. The need to take up this question from the perspective of modelling
professional competence in addition to competence diagnostics and the design of
examinations calls for a convincing, internationally compatible answer. The
COMET competency model is presented in the third chapter and the underlying
categorical framework in the second chapter.

1.3 The Format of the Test Tasks

The concept of the test tasks and their development procedure is one of the acid tests
that prove in practice whether the test procedure can be applied beyond a national
framework. Prior experience with the COMET project shows that the participation of
the countries involved in international comparison projects primarily depends on
whether the subject didactics or the subject teachers and trainers assess these as
representative and as valid for the respective occupation or training course, even if
they have not participated in the development of the test tasks. Chapter 5 is devoted
in detail to the complex questions of the COMET test arrangement.

1.4 Modelling and Measuring Professional Identity and Professional Commitment

A special feature of the COMET test procedure is the modelling and ‘measuring’ of
professional identity and professional commitment. In addition to measuring moti-
vation as a variable that is used to interpret the measured competence, this is a central
concern (objective) of vocational education and training. In vocational education, the
development of professional competence and professional identity is regarded as an
interdependent, indissoluble relationship (Blankertz, 1983). The expansion of the
competency and measurement model to include this aspect of professional development is
dealt with in Sect. 4.6.

1.5 Modelling and Measuring the Competence of Teachers in Vocational Subjects

After the participation of teachers and trainers in the ‘student’ tests had led to new
insights into the transfer of teachers’ professional competence profiles to their
students, the COMET competence diagnostics method was also developed and
tested for teachers of vocational subjects (LbF [TPD]). The eighth chapter describes
and explains the COMET competence and measurement model. The current state of
research shows that a toolkit is now available for large-scale projects as well as for the training and further education of LbF [TPD].

1.6 The Quality Criteria for Professional Competence Diagnostics and the Design of Tests

As the methods of competence diagnostics in vocational education and training quite obviously differ in essential points from those of general education, it must be
clarified how the quality criteria for competence diagnostics in vocational education
and training must be interpreted.
This book is designed in such a way that the hurried reader can skip in-depth
discussions of test-statistical procedures and methodological questions.
The COMET method manual is intended for
• Teachers, trainers and personnel developers who want to familiarise themselves
with the state of development and research in competence diagnostics in voca-
tional education and training.
• Students and scientists of vocational education, vocational fields and their didac-
tics as well as vocational education research, who want to focus on this
research area.
• Vocational training planners and members of vocational training management,
who are faced with the task of utilising the methodological competence measure-
ment and development instruments for quality assurance and development in
vocational education and training at the level of designing and organising voca-
tional learning processes.
Chapter 2
Professional Competence as a Subject
of Competence Diagnostics

2.1 Design Instead of Adaption

The determination of the objectives of vocational education and training has always
been characterised by the tension between the educational objectives aimed at the
development of the personality and qualification requirements of the world of work
and the (training) objectives derived from these. In the vocational education discussion, a large number of attempts to resolve this tension in the form of a holistic vocational training concept can be identified (Heid, 1999; Ott, 1998).
(in the broader sense) is often referred to as an example of holistic vocational
training. Richard Sennett has examined the social-historical and philosophical
roots of mastery in his book Handwerk and has attributed it a significance that
goes far beyond institutionalised craftsmanship by opposing mastery to the world of
fragmented skills (Sennett, 2008). The emphatic formula of ‘education in the
medium of occupation’ probably best represents the ever-new attempts to reconcile
education and qualification (Blankertz, 1972). With his deskilling thesis, Harry
Braverman classifies such attempts as idealistic misconceptions of reality in the
work environment, which is based on the principle of deskilling, at least in industrial
work with its processes of the progressive mechanisation of human labour—and
subject to the conditions of capitalist value realisation (Braverman, 1974). In the
sociological studies initiated by the Federal Institute for Vocational Education and
Training Research (BBF) in 1969 on changes in qualification requirements in
industrial technical work, the authors confirm this thesis or modify it to form the
so-called polarisation thesis, according to which the larger number of those deskilled
is contrasted by the smaller number of those more highly qualified: the winners of
rationalisation (Baethge et al., 1976; Kern & Schumann, 1970). This stance can
occasionally be found in more recent contributions to the discussion. Nico Hirtt sees
the Anglo-Saxon tradition of ‘competency-based education’ as an expression of the
‘marketisation of education’ and as a response to a technologically and economically induced ‘low skilled work force’ (Hirtt, 2011, 172).
Two strategies were set against the trend of the supposedly progressive
Taylorisation of vocational work.
1. The withdrawal of vocational education and training to the safe terrain of cross-
vocational education under the protective umbrella of state education policy
provides access to general and therefore also to ‘academic’ education (Kruse,
1976, 262; Grünewald, Degen, & Krick, 1979, 15).
2. The occupational research initiative around The Humanisation of Work was
supported by research and development programmes of the Federal Government
and the federal states (cf. in summary Brödner & Oehlke, 2008). These opposed
the progressive division of labour with a holistic concept of complete work
processes and justified this in terms of action theory, according to which a human action comprises a number of defined, successive steps, in order to once again view and design the work process as a unit of planning, execution and evaluation.
complete work process (cf. Hacker, 1973; Volpert, 2003). On closer inspection,
the concepts of humane and socially acceptable work design are based on
establishing refuges for humanisation goals in a world of work that seems to be
determined by technical and economic change.
It was only in the 1980s, in the course of a critical examination of economic and
technological determinism, that the foundations for the paradigm of design and the
need for design of ‘work and technology’ were laid and the guiding principle of the
empowerment to participate in shaping the world of work was formulated (Rauner,
1988). In this case, technology is no longer seen as a factor that determines
professional action (and thus also professional qualification requirements). Rather,
it is assumed that there is an interaction between technological development and the
design and organisation of work and education. The qualification of employees is no
longer defined by qualification requirements, but rather as a relatively independent
potential for innovations in the work process. Dieter Ganguin justifies this change of
perspective from an economic point of view: ‘If flat organisational structures,
cooperative management, teamwork and autonomous decisions are essential char-
acteristics of future work organisation, this must be both taught and trained.
Vocational training must therefore take a completely new approach [. . .]. The
basic pattern of a mature, responsible and socially active citizen must become the
guiding principle of all education’ (Ganguin, 1992, 33)1. The paradigm shift in
educational programmes and educational theory from one towards vocational train-
ing aimed at adapting to the world of work and at (co)shaping it marks a consistent
step towards modern vocational education. It is hard to understand today that
numerous sciences, research traditions and politics adhered to a

1 Member of an IBM working group on the development of an open architecture for integrated information systems in the manufacturing industry (1984/85).
technico-deterministic understanding of the world until the end of the 1980s (cf. Lutz, 1988, 16 ff.). Until the implementation of non-deterministic vocational
education and training, which is consistently oriented towards the guiding principle
of shaping competence, it was a path marked by a variety of diversions and
aberrations, which is still not mature enough to enable its effortless pursuit in
vocational education and training practice. A milestone can be seen in the agreement
reached by the Conference of Ministers of Education on vocational schools in 1991
and the claim formulated with regard to the general educational mandate of voca-
tional schools to enable trainees to take part in shaping the working world and
society in a socially and ecologically responsible manner (KMK, 1991). The resulting
discussion of the Vocational Education and Training Subcommittee of the Confer-
ence of Ministers of Education and Cultural Affairs soon led to the realisation that
this change in perspective from adaptive-oriented to design-oriented vocational
education and training requires a fundamental reform of the curriculum (Gravert
& Hüster, 2001, 89). With the far-reaching reform project concerning the introduc-
tion of framework curricula based on learning fields, which is still underestimated in
the debate around vocational education today and which intends to aim vocational
education and training at shaping competence, the paradigm shift to a
non-deterministic understanding of the world and the resulting guiding principle of
shaping competence was implemented in education planning. Now that the difficul-
ties of implementing such a fundamental reform in educational practice—a process
that continues to this day—have become evident, there is a certain risk that the
reform project, which can be classified as historic, will fail, nonetheless.
There is a great temptation to turn to new pedagogical concepts that offer a
convenient way out of the reform project ‘Learning-field-oriented curriculum devel-
opment’, which involves some significant effort. With the European Qualifications
Framework (EQF), the European Union offers an apparently handy toolkit for
committed teachers and educational planners to try out something new—this time
even something international. The development of a National Qualifications Frame-
work (NQF) opens up the possibility of being integrated in an international trend that
has its starting point in the development of a modular certification system (National
Vocational Qualifications, NVQ) in Great Britain. This development is considered
highly problematic from an educational perspective (Granville, 2003; Grollmann,
Spöttl, & Rauner, 2006; Hirtt, 2011; Young, 2007).
Another attractive invitation to turn away from pedagogically demanding guiding principles and projects seems to be the empirical turn in educational research and education policy. The success of the international PISA project clearly
fuels the regularly recurring educational-policy-inspired wishes to finally put the
pedagogical art of good education on a calculable basis, so that verifiable outputs can
also be offset against state inputs in the form of educational resources. With the PISA
project, empirical educational science suggests that the economic input and output
calculations can now also be applied to measuring pedagogical returns, promising an
empirically founded pedagogy that allows educational processes and systems to be
organised according to defined standards and with effective management instru-
ments (Klieme et al., 2003). The authors of the PISA 2000 study point out this risk
themselves and also address the scope of their project. ‘It cannot be overemphasised
that PISA has no intention of measuring the horizon of modern general education. It
is the strength of PISA in itself to refuse such fantasies of omnipotence ...’ (Baumert
et al., 2001, 21).
A comparable attempt at the education system’s technological renewal was
already pursued by the federal and state governments with the educational technol-
ogy reform project of the 1970s. The development and testing of computer-
supported forms of teaching and learning determined the fantasies and attempts to
substitute and program teaching work for more than a decade (cf. BLK 1973, 75).
For a while, it seemed possible to objectify educational processes and make them
technologically available. The Society for Programmed Instruction (GPI) and Cyber-
netic Pedagogy promised the liberation of educational systems and processes from
pedagogy as an art that could not be unified by educational policy up until now, but
which many educators somehow possess to varying degrees, and which had so far
eluded all attempts at rationalisation (Frank, 1969). However, the attempts, renewed with every new information technology, to place pedagogy on an educational-technology footing have lost their force: the ever faster succession of failed IT-supported educational reforms (most recently centred on the Internet) has contributed to the insight that the technological control of educational processes may have been a fixed, and also an expensive, idea from the very beginning (Heinze, 1972).
It is foreseeable that future attempts to control education systems through mea-
surable outputs and inputs will also fail, since the more important educational goals
and contents will evade ‘input/output didactics’ shaped by economic calculation
(Young, 2009).
The hastily concluded considerations of education experts to control educational
processes via standards, the success of which can also be measured in the form of a
large-scale assessment, reduce education to the measurable. This is where the affinity
to the educational technology reform project lies. The excessive expectations placed on large-scale competence assessment as a comprehensive pedagogical reform idea are also problematic because they obscure the fact that a realistic assessment of the pedagogical-didactic and education-policy potential of competence diagnostics can significantly enrich the actors’ toolbox.

2.2 The Possibilities and Limitations of Large-Scale Competence Diagnostics (LS-CD)

The differentiation of vocational skills according to qualifications and competences is of some importance for the examination of vocational aptitude and the recording of vocational competences.
An examination provides information about
Table 2.1 ‘Qualification’ versus ‘Competence’ (COMET Vol. I, 33)

Object-subject relations
  Qualifications: Qualifications are objectively given by the work tasks and processes and the resulting qualification requirements.
  Competences: Competences are sector-specific abilities and strategies in line with psychological performance dispositions; they are application-oriented.

Learning
  Qualifications: In the process of acquiring qualifications, the human being is a carrier medium for qualifications, a (human) resource that enables people to perform specific activities through training.
  Competences: The acquisition of competences is part of personality development and also includes the skills resulting from the educational goals.

Objectifiability
  Qualifications: Qualifications describe the not yet objectified/mechanised skills and abilities and define people as carriers of qualifications that are derived from the work processes.
  Competences: Professional competences are primarily aimed at the skills of professional specialists that are difficult or impossible to objectify and that go beyond current professional tasks, aiming to solve and process future tasks.

• Whether the skills defined in a job description and in the corresponding training
regulations are mastered in terms of qualification requirements,
• Whether the required competency is achieved.
This requires differentiation according to
• Abilities/qualifications that must be fully and safely mastered, for example because they are safety-relevant,
• Skills/qualifications that have to be mastered to a certain degree, and finally,
• Skills/qualifications that are not core qualifications and are therefore classified as
more or less desirable (Table 2.1).
An examination must include all qualifications and requirements relevant to
employability. Practical skills must necessarily be tested in real professional situa-
tions (situational testing). In contrast, cognitive dispositions in the form of action-
guiding, action-explanatory and action-reflecting knowledge of work processes can
be tested with standardised examination methods.
The COMET method of competence diagnostics, which identifies competence levels and competence profiles and carries out comparative competence surveys with the aim of comparing educational programmes and education systems, goes far beyond examination in the context of regulated vocational training programmes and the verification of ‘learning success’ in relation to the learning objectives defined in a specific curriculum.
In particular, international comparative LS-CD projects do not primarily define
the contextual validity of the competency survey in curricular terms, since it is an
essential goal of competence research to gain insights into the strengths and weak-
nesses of national educational structures (including curricula). In line with the
International WorldSkills (IWS), the contextual validity of the test tasks in the LS-CD projects in vocational training is based on professional validity (Hoey, 2009).
Various aspects of professional skills—in individual cases also significant ones—
are beyond the methods of measurement. Not infrequently, for example, the ‘Tacit
Knowledge’, or implicit knowledge (cf. Polanyi, 1966a; Neuweg, 1999; Fischer,
2000a, 2000b), constitutes important professional skills that can only be proven in a
practical examination. This requires an extended competence and measurement
model with a corresponding rating procedure (→ 4.6, 7.1).

2.2.1 Implicit Professional Knowledge (Tacit Knowledge)

Implicit skills can be observed, and their quality can be assessed in the performance
of professional activities and, above all, on the basis of work results. Although they
are largely beyond an explicit technical description and explanation, they are often of
central importance for professional ability and therefore also subject to examina-
tions. The rating procedures developed in the COMET project also allow the
determination of tacit skills.

2.2.2 Professional Competence (Employability)

As a rule, professional competence is determined using the more or less traditional forms of examination. In addition to the examination of professional knowledge, the
most important thing in an examination is to test the qualification requirements
defined in the job descriptions as practical skills in real professional work situations.
Examinations therefore include proof of sufficient practical experience during train-
ing. The qualifications defined for employability in the defined occupational profiles
are also examined. This is necessary for a professional examination practice to
facilitate the certification of professional competence, which is usually also
connected with the granting of authorisations.
With its methods of standardised assessment of professional competences, com-
petence diagnostics provides a set of instruments with which the requirements for the
test quality criteria can be met (Table 2.2).

2.2.3 Craftsmanship

Craftsmanship is an essential criterion of professional qualification for a large number of professions—not only in the arts and crafts (Sennett, 2008). Craftsmanship requires a high degree of practice based on a minimum of kinaesthetic intelligence (cf. Gardner, 2002). Not only dental technicians and goldsmiths but also
Table 2.2 Possibilities and limitations of measuring professional competencies

Measuring … is possible for:
• Cognitive domain-specific performance dispositions
• Competence levels of test groups based on individual test results (vocational and cross-occupational, independent of the forms and structures of educational programmes)
• Competence dimensions in the form of competence profiles
• The heterogeneity of the competence dimensions
• In combination with the data from the context surveys, this provides insights into a large number of control- and design-relevant interrelationships, among other things:
  – Education systems and programmes
  – Contents and forms of professional learning
  – Cooperation between learning locations and educational plans
  – Work organisation
  – School organisation
  – International comparisons

Measuring … can only be achieved with the appropriate effort for:
• Situated professional qualifications
• Implicit professional knowledge (tacit knowledge)
• Individual situated professional ability (professional competency)
• Craftsmanship
• Social competences (with limitations)
• Abilities that are expressed in the interactive form of the work (with limitations)
• Competences expressed in creative action (e.g. in the arts and crafts)

toolmakers and other industrial-technical professions belong to a class of professions in which craftsmanship is an essential part of professional ability.
Measuring craftsmanship is also possible; rating methods such as those commonly used in the field of gymnastics, for example, can be applied here.
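Rating procedures of this kind can be illustrated with a short sketch. The following Python function is a minimal illustration of gymnastics-style judging, not one of the COMET rating instruments; the function name and the 0-10 scale are assumptions made for the example. It aggregates the scores that several independent raters assign to one observed performance by discarding the highest and lowest rating and averaging the rest.

```python
def gymnastics_style_score(ratings):
    """Aggregate independent raters' scores for one performance.

    As in gymnastics judging, the single highest and single lowest
    rating are discarded and the remaining ratings are averaged, so
    that one overly strict or overly lenient rater cannot dominate
    the result.
    """
    if len(ratings) < 3:
        raise ValueError("need at least three raters to trim high and low")
    trimmed = sorted(ratings)[1:-1]  # drop one lowest and one highest rating
    return sum(trimmed) / len(trimmed)

# Five raters score a piece of work on an assumed 0-10 scale;
# the 7.0 and the 9.5 are discarded, leaving the average of [8.0, 8.0, 8.5].
print(gymnastics_style_score([7.0, 8.5, 8.0, 9.5, 8.0]))
```

Trimming the extremes in this way is one simple means of making an observation-based rating robust against individual outlier judges.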

2.2.4 Social and Key Competences

Social skills play a very important role in vocational work and thus also in vocational
education and training. It is controversial whether social skills can be measured as
‘key’ skills across all occupations. According to Jochen Gerstenmaier, research into
learning and expertise disproves the thesis that content knowledge is being devalued in favour of general skills such as ‘problem-solving’. Rather, it can be shown that the competence to solve problems is based on domain-specific knowledge (Gerstenmaier, 1999, 66; 2004, 154 ff.).
Grob and Maag Merki have made an interesting attempt to approach the empirical
survey of interdisciplinary competences. They measured interdisciplinary compe-
tences on the basis of a large number of scales (Grob & Maag Merki, 2001), not as
‘key competences’, but rather as competences that promote the professional execu-
tion of work tasks at a general level. It is indisputable in this context that, for
example, professional work necessarily involves cooperation with other specialists from the same community of practice and with experts from related fields, and that such cooperation thus represents a central dimension of professional competence. Within the framework of
the COMET project, the context survey provides information on the concepts of
professional cooperation among respondents.

2.2.5 Abilities That Are Expressed in the Interactive Progression of the Work

These abilities are based on the type of creative action—in contrast to the type of
purposeful action (cf. Brater, 1984). According to Brater, artistic action is the
prototype of this form of action. The results of this type of action can only be
anticipated to a limited extent in terms of planning and concept.
Especially in the area of secondary technical work (maintenance, troubleshooting,
etc.), ‘... the situation must be turned into opportunities, ideas must be born,
solutions must be found. Here it is not adherence to plans but originality that is
required’ (Brater, 1984, 67). The established forms of measuring professional
competence reach their limits here, which are given by the open form of working
processes. This applies in particular to occupations with a highly intersubjective
share, e.g. in the education and healthcare sector. The interactive aspect of profes-
sional work can, to a certain extent, be covered by the open structure of the LS-CD
test tasks or by a rating based on observations.
Chapter 3
Categorial Framework for Modelling
and Measuring Professional Competence

When it comes to identifying the requirements for competency modelling in vocational education and training, pedagogical discussion and vocational training
research are confronted with a wide variety of competency definitions and terms
whose significance and manageability for the design of vocational education and
training processes and the empirical recording of vocational competencies differ
widely. The proposals made in the vocational pedagogical discussion to develop
competence models based on learning goal taxonomies or on the concept of voca-
tional competence introduced by Heinrich Roth are critically evaluated in compe-
tence research.
Dieter Euler critically assesses vocational educational attempts to model profes-
sional competence:
1. ‘The works apply different competency models whose connectivity to existing
models remains open for in-company and school-based vocational training.
2. Developments remain partial with regard to a comprehensive concept of compe-
tence [. . .], i.e. only individual dimensions and facets of competence are taken up
and covered. There is a close focus on expertise, particularly in the developments
for the commercial sector. The alleged references to social competences (Winther
& Achtenhagen, 2008, 531) lack a sound theoretical foundation.
3. [...] However, it remains questionable whether these developments can be trans-
ferred into the standard practice of final examinations [...]’ (Euler, 2011, 60).
The Klieme Commission shares this critical assessment of abstract concepts of competence as the basis of competence modelling (Klieme et al., 2003). In this context, Tenorth emphasises that ‘cross-disciplinary
key competences’ such as social, personal and methodological competences, which
are often equated with the concept of competence, are not suitable either for the
establishment of educational standards or for competence modelling’ (Tenorth,
2009, 14).

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_3

Fig. 3.1 On the relationship between guiding principles of vocational education and the measure-
ment of vocational competence

The criticism of the guiding principle of vocational competence for vocational training as a connection between technical, personal and social competence, with
reference to Heinrich Roth (1971), must not be misunderstood as a criticism of this
ground-breaking guiding principle of vocational training. Instead, the criticism is
aimed at the attempts to use this guiding principle of vocational education and
training as a competence model in vocational education and training research. The
Klieme Commission’s expertise and the subsequent justification of the DFG Priority
Programme convincingly reasoned that the function of a competence model consists
in mediating between the guiding principles and objectives of a subject or learning
area and the development of test tasks (and learning tasks) (Fig. 3.1). In this respect,
competence models also have a didactic function. However, this does not mean that
didactic models are also suitable as reference systems for the development of test and
evaluation tasks. Conversely, according to the Klieme Commission, competence
models should also be characterised by a fundamental didactic function
(cf. Katzenmeyer et al., 2009). Educational goals and their classification,
e.g. according to Bloom’s learning goal taxonomy (cf. Anderson & Krathwohl,
2001; Brand, Hofmeister, & Tramm, 2005; Lehmann & Seeber, 2007, 26), can be
used for the explanatory framework if they prove to be internationally connectable.
Competency models, on the other hand, are not.
This results in further requirements for competency models and the measurement
models derived from them. These have to
• Include general educational goals and guiding principles which can be justified in
educational theory and which can be taught internationally, as well as domain-
specific educational goals of training systems and training courses,
• Be in line with the basic findings of teaching, learning, development and evalu-
ation research,
• Be tailored to learning areas that allow the content dimensions of the competency
models to be sufficiently concrete,
• Represent sufficiently concrete instructions for the development of test tasks
(Fig. 3.2).
These requirements imply the need for simple models. Simple models are those
with no more than three dimensions and no more than four to five competence levels.
Fig. 3.2 Justification framework for the COMET competence model

In addition, Weinert proposes that personality traits lying outside the definition of competence as a domain-specific cognitive performance disposition should either not be included in competency models or be modelled separately.
Additional requirements for competency models are listed as follows:
• The suitability of the competence model for developing test procedures and for
recording and evaluating learning outcomes in relation to educational monitoring,
school evaluation and feedback to teachers/trainers and students/trainees on
levels and development of competence,
• Its suitability for international comparative competence surveys.
For vocational training, the following requirements are also indispensable:
• The application of open complex (holistic) test tasks, since only these can serve to
map real professional tasks;
• The solution or processing of professional tasks usually require weighing up
between alternative solutions, which can be assigned to a solution space staked
out by defined requirement characteristics—‘staked out’, since the solution
spaces must in principle be open for unforeseeable solutions (COMET Vol. I);
• The recording of professional commitment and professional identity as a dimen-
sion of the professional development process that is equivalent to the develop-
ment of professional competence;
• A competence assessment that also allows comparisons across educational
programmes. This results in the demand for a competency model that regards
vocational education and training as one area of learning and which at the same
time enables vocational and occupational field-specific implementation;
• A distinction from the tradition of assessment as a form of outcomes verification,
as used to determine credit points for the skills defined in the national certification
systems. This is a form of verification of the qualifications defined in module descriptions, which has little to do with professional competence.
The standardisation of examinations and test tasks has a long tradition in voca-
tional education and training. With the help of ‘multiple choice’ tasks, an attempt
was made to rationalise the examination procedure while simultaneously introducing
quality criteria. The determination of the degree of difficulty and the selectivity of
test tasks has since been regarded as proof of good test quality (see e.g. Schelten,
1997). In a 1975 report commissioned by the Federal Institute for Vocational
Education and Training Research, Hermann Rademacker pointed out that this format
of examination tasks is fundamentally unsuitable for the examination of vocational
skills (Rademacker, 1975).
The justification framework for the modelling of vocational competence com-
prises the description and justification of the special features of vocational education
and training:
• Their guiding principles and goals,
• The basic theories of vocational learning and development,
• The concept of professional identity and commitment
• The vocational concept and the guiding principle of work design that is conducive
to learning.

3.1 The Occupation Type of Societal Work

In the discussion on the professional organisation of societal work in the 1990s, the
assessments of social historians and sociologists in particular consolidated into the
thesis of the erosion of professionalism.
As early as 1972, Giddens voiced the expectation that new forms of democratic participation would gradually emerge and that civil society would replace the working society. Kern and Sabel published a committed plea against professionally organised skilled work, arguing that the occupational form of work, especially in industrial production, leads to company demarcations,
which hinder the necessary flexibilisation in modern companies. The adherence to
the tradition of professional skilled work at best enables companies to reproduce
what already exists, but not to innovate. As a way out, they recommend the Japanese
model of company organisation development, which does without the occupational
form of work and thus also without a vocational education system (Kern & Sabel,
1994). Finally, biographical research has coined the term ‘patchwork biography’, seeking to draw attention to an erosion of professionally organised wage labour
(cf. Beck, 1993). Ulrich Beck and others argue in this context for a reflective
modernisation with which the division into instrumental economic action and
communicative political action can be overcome (Beck, Giddens, & Lash, 1996).
At the latest with the publication of Richard Sennett’s book The Corrosion of Character (Sennett, 1998), this discussion took a turn. Sennett deals with the
development of flexible working structures under conditions of specialisation and permanent re-engineering, embedded in a new neoliberal global economy. If flexibility
leads to the dissolution of professional work, then this is accompanied by the erosion
of professional ethics, with fears for the future, with the devaluation of experience
and therefore the loss of personal identity: ‘Our experience can no longer be cited in
dignity. Such beliefs endanger the self-image, they are a greater risk than that of the
gambler’ (Sennett, 1998, 129). He criticises the postmodern model of patchwork
biography and picks up on Max Weber’s concept of the profession and work ethic in
his criticism: If professional careers are sacrificed to this new flexibility, ‘there are
no longer any paths that people can follow in their professional lives. They must
move as if on foreign territory’ (ibid., 203). Sennett’s analysis can be interpreted as a justification for a modern form of professionalism. The withdrawal of function-oriented operative
organisational concepts and their overlapping by business process-oriented opera-
tional structures in modern companies are a decisive contribution to increasing
operational flexibility. This is not contradicted but rather met by the concept of a
modern professional life. In the reform dialogue on vocational education and
training at the turn of the century, the concept of a modern profession emerged
(KMK, 1996; Heidegger & Rauner, 1997). In 2007, Wolfgang Lempert initiated a
vocational education discussion on the question ‘Profession without a future?
Vocational education without professions?’ with reference to the work ‘Die
Berufsform der Gesellschaft’ (Kurtz, 2005) presented by Thomas Kurtz. The results
of the discussion, as summarised by Wolfgang Lempert, are in clear contradiction to
the sociologically based forecasts of the 1970s to 1990s.
That’s why there’s little point in pursuing abstract and general guesswork about the future
of the profession in our society. To question its future formally and generally is wrong.
Instead, we should ask about a future-oriented professional concept and the conditions for
its implementation. This is how I understand Meyer and Rauner: both assume that it will still
make sense in the future to structure the production and use of human working capacity
professionally (Lempert, 2007a, 462).
Open, dynamic bundling of employment-related potentials for action, which would have to
replace many conventional training occupations, [fulfil] fully the abstract, formal criteria
that Kurtz—in connection above all with Max Weber—emphasises as primary characteris-
tics of occupations (ibid., 463).

Under the heading ‘Perspectives of the rescue, regeneration and future consoli-
dation of an (also) professionally accentuated organisation of societal work’,
Lempert concludes
By including the academic professions, the nightmare vision of a total ‘disposal’ of the
professional principle would become absurd and be banished from the outset (ibid., 463).

The concept of open, dynamic careers and core occupations (Heidegger &
Rauner, 1997) has meanwhile also emerged in the European Vocational Education
and Training Dialogue as a guiding principle with an impact on vocational research
and development. One prominent example is the development of the ‘European’
profession of ‘motor vehicle mechatronics technician’ (Rauner & Spöttl, 2002).
3.1.1 Employability or Professional Competence

All vocational training is aimed at the employability of its trainees. An apprentice becomes employable once he/she has acquired the knowledge and skills defined in
the respective job description and is therefore in a position to practise the respective
profession in a qualified manner. During the examination of professional compe-
tence in the form of testing an apprentice’s professional knowledge (theoretical
examination) and professional ability (practical examination), the skills that must
be mastered are of particular importance: they must all be mastered safely and
without exception. This applies above all to safety and health-related tasks and, to
a certain extent, also to environment-related tasks.
In principle, each occupation must ultimately be learnt in practice (during the
work process) in order to achieve employability (Harold Garfinkel). Therefore, the
dual organisation of vocational education and training—learning a profession—is an
indispensable basic form of vocational learning. There are three forms of duality:
1. Single-phase—integrated—duality,
This form of dual organisation of vocational education and training has its
roots in the master craftsman apprenticeship. In Germany, it is regulated primarily
in the Vocational Training Act and is therefore the basic form of vocational
training in all sectors of the employment system.
2. Two-phase—alternating—duality,
This form of dual vocational training is widespread in academic education. A
course of study (first phase) is often followed by a regulated second phase of
academic professional training (e.g. with doctors, teachers, lawyers).
3. Informal duality.
All forms of vocational (university) education and training which are not
followed by a regulated second phase of vocational education and training have
in practice developed informal forms of familiarisation with a profession. For
example, university-trained engineers in Great Britain are certified as ‘chartered
engineers’ after a certain period of time and in compliance with defined
regulations.

3.1.2 Architecture of Parallel Educational Paths

However, the increased vocationalisation of higher education as a result of the Bologna reform entails risks for the quality of both academic and vocational
education and training. This is due to the one-dimensional systems of classification
of successive educational levels, which describe the structures of national education
systems (ISCED, ISCO, EQF). Common to all one-dimensional classification sys-
tems is that the lower (vocational) and upper (academic) levels are clearly defined by
the definition of higher education qualifications. Higher education is academic
education, which is subject to the constitutionally defined freedom of scientific
teaching and research. The entitlement to award the degrees bachelor, master, PhD
lies—internationally—with the universities. These include—in a more or less dif-
ferentiated way—qualifications and training courses in vocational education and
training. The barrier between vocational and academic education is high; it almost
hermetically separates the two worlds of education: academic-scientific and
executive-oriented vocational education.
All attempts to make this educational architecture more accessible have led to the
vocationalisation of academic and vocational education and training and therefore to
a development that impairs the quality of both educational traditions. In contrast, an
architecture of parallel educational pathways holds the potential for a new quality of
vertical permeability and the realisation of the equivalence of vocational and aca-
demic education. At the same time, the establishment of a continuous path of dual
education creates a new dynamic in the interaction between the education and
employment system. The idea is a concept of modern professionalism, a necessary
basis for the implementation of an architecture of parallel educational paths. Even if
the constitutional freedom of teaching and research protects universities from
aligning their teaching with the qualification requirements of the employment
system, it can be expected that the occupational profiles developed in the processes
of vocational training planning will trigger a new discussion on professionalisation
in academic education. This could also contribute to a significant reduction in the
proliferation of specialisation in degree programmes and to the participation of
organisations in the world of employment in the design and organisation of (dual)
vocational training courses at universities (Rauner, 2015a), modelled on the Voca-
tional Training Act.

3.1.3 Professional Validity of Competence Diagnostics

Assuming that the internationalisation processes cover not only the academic pro-
fessions, but also the professional organisation of work in the intermediary employ-
ment sector, then there is every reason to identify professional work as the reference
point for substantiating the validity of competence diagnostics in the field of
vocational education and training (→ 4.7).
The curricular validity of tests would limit their function in the investigation of
different forms of vocational education and training, including the quality of voca-
tional curricula. On the other hand, test tasks whose contextual validity is based on
reference to vocational work (vocational validity) make it possible to identify
strengths and weaknesses of various vocational training systems and arrangements.
In particular, it is possible to check whether trainees/students have a vocational work
concept (Bremer, 2006) as well as vocational qualification upon completion of their
vocational training—and not just technical and functional knowledge and skills, as is
taught in typical (university) forms of vocational training or in traditional basic
vocational training. The professional fields of action therefore apply to internationally comparative occupational competence diagnostics. The International WorldSkills (IWS) can be referred to as a reference system (cf. Hoey, 2009).
20 3 Categorial Framework for Modelling and Measuring Professional Competence

3.2 The Design of Work and Technology: Implications for the Modelling of Professional Competence

The identification of ‘important work situations’ for the development of vocational competences as a pivotal point for vocational training plans oriented towards
learning fields (KMK, 1996) can be based on fundamental theories of vocational
competence development and expertise research (Lave & Wenger, 1991; Röben,
2004). It therefore seems obvious to deal with the labour and occupational scientific
tradition of work (process) analysis and design. Both research traditions inevitably
transcend the postulate of purposeless science whenever they interpret themselves as
formative sciences (Corbett, Rasmussen, & Rauner, 1991). In summary, this is
shown by a list of the characteristics of work design by Ulich (Table 3.1).
Since the vocational-pedagogical and above all the vocational scientific discus-
sion is often based on theories, methods and research results from occupational
science in order to clarify the connections between working and learning and to
shape them from a pedagogical perspective, some conceptual clarifications will be
made below.

3.2.1 Professional Work Tasks and Professional Competence

A professional work task describes a specific task to be performed by an employee in relation to its results. This must relate to work contexts that allow employees to
understand and evaluate their function and significance for a higher-level operational
business process. The structuring and organisation of professional work according to
work tasks forms the basis of the concept of work mediating an understanding of
context (Laur-Ernst, 1990). Professional work tasks are always normative in two
respects. First of all, professional work tasks are embedded in a profession. These,
however, are developed in negotiation and research processes guided by interests
(cf. Schmidt, 1995). For this reason alone, the expression ‘objective’ qualification
requirements, from which job descriptions and training regulations could be derived,
is misleading. Furthermore, the design of work tasks results from competing con-
cepts of the organisation of social work. Here, the tradition of developing and testing
humane work design and work organisation can be continued. Emery and Emery,
Hackman, Oldham and Ulich in particular have dealt with the justification of
characteristics for a humane work design. Since it has been shown that humane
work design and ‘Human Centred Systems’ (Cooley, 1988) are competitive in the
implementation of computer-aided work systems or even create competitive

Table 3.1 Characteristics of task design based on Emery and Emery (1974), Hackman and Oldham (1976) and Ulich (1994, 61)

Holistic character
– Assumed effects: Employees recognise the importance and value of their work; employees receive feedback on their own work progress from the activity itself.
– Realisation by: tasks with planning, executing and controlling elements and the possibility of checking the results of one’s own activities for compliance with requirements.

Variety of requirements
– Assumed effects: Different skills, knowledge and abilities can be applied; one-sided demands can be avoided.
– Realisation by: tasks with different demands on body functions and sensory organs.

Possibilities for social interaction
– Assumed effects: Difficulties can be overcome together; mutual support helps to cope better with demands.
– Realisation by: tasks whose accomplishment suggests or presupposes cooperation.

Autonomy
– Assumed effects: Strengthens self-esteem and willingness to take responsibility; provides the experience of not being without influence and meaning.
– Realisation by: tasks with disposition and decision possibilities.

Opportunities for learning and development
– Assumed effects: General mental flexibility is maintained; vocational qualifications are maintained and further developed.
– Realisation by: problematic tasks for which existing qualifications must be used and extended or new qualifications acquired.

Time elasticity and stress-free adjustability
– Assumed effects: Counteracts inappropriate work consolidation; creates leeway for stress-free thinking and self-chosen interactions.
– Realisation by: creating time buffers when setting target times.

Sense of purpose
– Assumed effects: Makes employees feel involved in the creation of socially useful products; provides certainty that individual and social interests are in harmony.
– Realisation by: products whose social benefits are not questioned; products and production processes whose ecological harmlessness can be checked and guaranteed.

advantages, these concepts have found their way into operational organisational
development (Ganguin, 1992).
The identification of professional work tasks must therefore consider the norma-
tive aspects of professional development and work organisation, as well as both in
their context. Hacker comes to a similar conclusion in his analysis of diagnostic methods for expert knowledge:
As a preliminary consequence for the diagnosis of knowledge, it seems advisable to consider
a paradigm shift from [...] a reproducing to a (re-)constructing process of the task-related
performance prerequisites with individual and cooperative problem-solving and learning
offers for the experts (Hacker, 1986, 19).

Fig. 3.3 Professional work in the field of tension between work contexts and work practices
(Rauner, 2002a, 31)

Professional work tasks can be divided into subtasks. Subtasks are characterised
by the fact that their sense for the employee is not derived from the subtasks
themselves, but only from the context of the higher-level work tasks. If the subtasks
of a superordinate task are delegated to different persons who do not work together
in a working group, the employees lose sight of the working context. According to
this organisational model, the subtasks dissolve the work context not only
organisationally, but also in the subjective perception (as an understanding of
context) and in the subjective experience of the employees.
In this context, occupational science primarily deals with questions of order and
condition analysis, the division of human–machine functions and, above all, with
questions of stress and less with the aspect of professionally organised work as a
point of reference for educational processes. Therefore, a detailed subdivision of
work tasks into subtasks, work actions and occasionally beyond that in operations
can be quite appropriate when carrying out empirical work analyses. In VET
research, on the other hand, if tasks and work actions become context-free reference
points for the design of VET plans and processes—detached from the work context
(Fig. 3.3)—this induces decontextualised learning that stands in the way of teaching
VET competence aimed at understanding and shaping the world of employment (see
Connell, Sheridan, & Gardner, 2003).

Scientific interest in the working process is also directed towards the structure of
the complete working process. The vocational pedagogical and occupational scien-
tific interest in this occupational scientific concept is based on its normative inter-
pretation through design-oriented occupational science: Employees should learn to
plan, carry out and evaluate their work (cf. Table 3.1). Accordingly, a professional
activity that is based on performance alone is an incomplete work activity. As a
pedagogical category, however, the term ‘complete work activity’ is only suitable if
the meaning or content-related aspect of the work activity is not excluded. In this
context, Frieling refers to the limited range of standardised analytical methods as
developed by McCormick (1979), Frei and Ulich (1981), Volpert, Oesterreich, Gablenz-Kollakowicz, Krogoll, and Resch (1983) and other occupational scientists.
Although these instruments could be used as a structuring aid for recording essential
aspects of work activity (Frieling, 1995, 288), the abstract formulation of the items is
unsuitable for the analysis and evaluation of concrete work contents in their signif-
icance for the working persons (Lamnek, 1988). This critical assessment is of central
importance for the design of vocational curricula and vocational training processes.
A further source for the educational theoretical development of a vocational
competence concept is the work of the VDI on technology assessment and the
corresponding philosophical discussion on the ethics of technology. An essential
aspect of technology assessment is the technology impact assessment, which is
oriented towards policy advice (Ulrich, 1987). The concept of technology assess-
ment already has the potential to be expanded by technology genetics research and
the concept of technology design (Sachverständigenkommission Arbeit und
Technik, 1986). In its guideline on technology assessment, the VDI committee ‘Fundamentals of Technology Assessment’ states: ‘Technology assessment here means
the planned, systematic, organised procedure that [...] derives and elaborates options
for action and design [from the assessment of technical, economic, health, ecolog-
ical, human, social and other consequences of technology and possible alternatives]’
(VDI, 1991).
In this guideline developed by the VDI, technology is understood as an objecti-
fication of values and related interests. In this case, the quality of ‘responsible’
technical development is assessed with reference to the overriding criteria of per-
sonality development and the quality of social development. Six ‘values in technical
trading’ can be assigned to these superordinate values (Fig. 3.4).
These are
• Functionality (usability, effectiveness, technical efficiency),
• Economic efficiency (in line with individual economic profitability),
• Prosperity (in line with macroeconomic benefit),
• Security (for individuals and humanity),
• Health (well-being, health protection),
• Environmental quality (natural and cultural components) (ibid., 7 ff.).
The second root of a ‘technical education’, used to establish the connection
between the technically possible and socially desirable (Rauner, 1986), is the
discussion on technology philosophy, which gained momentum in parallel with

Fig. 3.4 Relationship between goals and values for petrol engines (VDI, 1991, 3–5)

‘work and technology’ research (Hastedt, 1991; Lenk & Ropohl, 1987; Meyer-
Abich, 1988). Heiner Hastedt in particular deals with the possibilities of technology
design in his research on ‘basic problems regarding the ethics of technology’
(Hastedt, 1991, 138), whereby he defines very similar evaluation and design cate-
gories as the VDI. Technology design implies not only interdisciplinarity, but new
forms of participation, according to the motto formulated by Walter Bungard and
Hans Lenk: ‘Technology is too important, now and in the future, to be left to the
technicians alone’ (Bungard & Lenk, 1988, 17).

3.3 Task Analyses: Identification of the Characteristic Professional Work Tasks

Vocational education and training is a form of education and qualification in the
world of employment as well as an intentional process of learning for the world of
employment, dependent on knowledge of the expertise and skills required in the
work process. Three questions need to be answered:
• What are the skills that enable ‘skilled’ workers to carry out their work
adequately?

• What other skills must they have in order to participate in the process of
operational organisational development—both within and beyond their own
area of responsibility?
• Which skills are developed in the work process itself or how should ‘learning’
and ‘qualifying’ work processes and work systems be designed?
In vocational education and training practice, some of these questions are rarely
asked because the teaching and learning content and the related educational and
qualification objectives are specified in organisational systems—the training regu-
lations with their framework training plans for in-company and (framework) curric-
ula for school-based vocational education and training. They represent the
occupational profiles in an operationalised form. However, since occupations are
now mostly traditional and fixed attributions of tasks for the organisation of social
work, embedded in the industrial-cultural development of regions and countries,
occupations represent general socio-economic and less the qualification require-
ments aimed at company organisational development. The increasingly rapid pace
of technological and operational innovations in industry, commerce and crafts
requires the examination of occupational profiles and occupational regulations
with regard to their topicality and prospectivity and to relate them to the reality of
work in their attributions of tasks. Here, the working reality is understood not only
as one which is empirically given, but also as one to be developed.
In the study of a professional field (occupational field science)1, vocational
scientific work studies therefore play a central role. The subject matter of profes-
sional scientific work studies is described in more detail below, and information is
provided on their methodological implementation.
The vocational sciences deal with the contents and forms of skilled work in
established and developing occupations and occupational fields, with vocational
learning processes for the world of employment and with implicit and explicit
learning in the work process. In the analysis, design and evaluation of vocational
training processes and work processes that promote learning, the link must be
established between the work processes of vocational working reality, the learning
and educational processes and the systems of vocational organisation (Fig. 3.5).
Figure 3.5 shows three correlations between the world of employment and
vocational education and training. The widespread idea that in a first step, the
means of vocational classification can be derived from the analysis of the reality
of work and that the contents and forms of vocational training processes result from
this in a linear connection is called qualification determinism. This deterministic
misunderstanding is as widespread in the everyday actions of vocational educators as
it is in vocational training planning and research. On closer inspection, this linear
relationship evaporates and gives way to a differentiated, non-deterministic concept
of correlations between the three poles of the outlined relationship. This is where the
studies and development tasks for design oriented vocational education and training

1 The more common term ‘professional science’ is used below.

Fig. 3.5 The relationship between professional work and education processes

can be found (→ 2.4). Figure 3.5 illustrates two widespread reductions and deficits in
the vocational education activities of teachers and trainers and in vocational educa-
tion research.

3.3.1 Professional Scientific Task Analyses Include

1. The identification of the work contexts and qualification requirements characteristic of a job description—the occupational profile.
2. The identification of work tasks covering the job description. A distinction must
be made between the core tasks and the industry- and application-specific pro-
fessional areas of responsibility.
3. The logical systematisation of developmental tasks according to criteria as suggested by the novice-expert paradigm.
4. The differentiation of the work task according to the following categories:
– Subject of skilled work,
– Methods, tools, organisation of skilled work,
– Requirements for the subjects and the forms of skilled work.
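Purely as an illustration, the outcome of such a task analysis can be captured in a simple record structure. The following sketch is hypothetical: the class names, field names and example tasks are our own and are not part of the EFW procedure or the COMET method.

```python
from dataclasses import dataclass, field

@dataclass
class WorkTask:
    """One characteristic professional work task (illustrative naming)."""
    name: str
    core_task: bool            # core task vs. industry-/application-specific task
    developmental_stage: int   # position in the novice-to-expert sequence (1 = beginner)
    subject: str               # subject of skilled work
    methods_tools: str         # methods, tools and organisation of skilled work
    requirements: list = field(default_factory=list)  # demands on the subjects/forms of skilled work

@dataclass
class OccupationalProfile:
    """An occupational profile: typically 15-20 work tasks for a 3-year occupation."""
    occupation: str
    tasks: list = field(default_factory=list)

    def ordered_for_curriculum(self):
        # Element 3 above: arrange the tasks along the novice-to-expert development logic.
        return sorted(self.tasks, key=lambda t: t.developmental_stage)

profile = OccupationalProfile("car mechatronics")
profile.tasks.append(WorkTask("diagnosing intermittent electrical faults", True, 4,
                              "vehicle electrical system", "signal measurement, circuit diagrams",
                              ["systematic fault-finding"]))
profile.tasks.append(WorkTask("standard service inspection", True, 1,
                              "vehicle as a whole", "inspection plan, diagnostic tester"))
print([t.name for t in profile.ordered_for_curriculum()])
```

The ordering step mirrors the idea that work tasks are arranged to support competence development from beginner to mastery, rather than by subject systematics.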
The professional work tasks do not arise from a process of aggregation of elemen-
tary, abstract basic skills and knowledge, as they are assumed to be the smallest units
in the microanalysis of work activities. Conversely, the higher-level, meaningful
work context and the profession with its potential for creating identity become the
starting point for identifying the work tasks that constitute the profession. For a
profession that requires about 3 years of training, experience has shown that between
15 and 20 professional tasks can be specified that meet the criteria for professional
working contexts. A differentiation of each of these work tasks into their subtasks
would make sense when specifying and operationalising the work tasks but is not
necessary as a starting point for the task analysis. Professional work tasks are only

those that can be formulated as action-oriented and as part of corporate value-creating processes. Professional work tasks have very different qualities in terms of
the required professional experience, the degree to which they can be routinised, and
the extent and level of theoretical knowledge required to master them. While some
professional tasks can only be mastered safely and effectively after many years of
professional experience, other tasks can be performed from the very beginning of a
career, without this necessarily meaning that these tasks have a lower priority with
regard to their qualification requirements. Since all vocational work tasks are
acquired in the course of learning a profession from beginner (novice) to mastery
level, it is obvious that the vocational work tasks are arranged as a starting point for
the development of vocational training plans in such a way that they support the
process of vocational training and qualification on the way to mastery or specialist.

3.3.2 Identifying Professional Work Tasks: Expert Specialist Workshops (EFW)

Expert specialist workshops are suitable for the identification of professional work
tasks. This procedure is based on the ‘Design A Curriculum’ (DACUM) concept
developed by Bob Norton at NCRVE2 (Ohio State University) in the 1980s and on a
task analysis procedure tested in the Leonardo project ‘Car-Mechatronic’. A two-day
workshop for experts is the core component of this process (→ 4.1).

Professional versus Experience-Based Description of Work Tasks

In this form of vocational scientific task analysis, the methodological challenge is to transform the context-related experiences of ‘expert specialists’ into a context-free
description of occupational tasks. Only if this is successful will the work tasks
identified with EFW form the basis for the development of test tasks. At the same
time, professional work is always tied to subjectivity and situativity and is therefore
unique. This means that the quality of professional work processes and tasks can
only be assessed in a context-related manner as a first step. This has far-reaching
consequences for EFW’s method. The workshop participants are experts in their
work. The source of their expertise, which they can contribute to the workshops, are
their reflected work experiences. The vast experience from research practice shows
that attempts to question the expert specialists as professional experts about the
characteristic professional work tasks—and not about their actual competence: their
subjective work experience—lead to failed analyses. The expert specialists would be
given the role of qualification researchers to provide information on findings that

2 National Center for Research in Vocational Education (at Ohio State University until 1988 and then at the University of California, Berkeley).

require a process of scientific reduction of empirical data and the associated content
analyses. Their real competence as experts of their work experience would fall by the
wayside.
When planning, implementing and evaluating the EFW, it must be considered
that teleological elements cannot be avoided in the description of professional tasks
and developmental processes from beginner to expert. Professional action always
includes dealing with and weighing the environmental and social compatibility of
professional task solutions (→ 3.2). It is critical to note in this context that the logical
approach to educational research has so far been developed mainly with reference to
developmental psychology or even merges into it. For vocational education and
training and for all forms of technical education, in which the technical competence
to be imparted is expressed in educational goals, the logical approach to educational
research largely misses its central subject: the educational contents. Therefore, in the
further logical approach to the research and design of vocational work and educa-
tional processes, it is important to clearly work out the specifics of these develop-
ment processes, for example, in comparison with general education (Table 3.2).

Participants: Expert Specialists

In order to be able to identify expert specialists, the following characteristics should be fulfilled:
• With their professional biography, their professional competence and their cur-
rent work tasks, expert specialists represent a background of experience and
knowledge that can be used to determine future-oriented working contexts. In
the last few years, the experts should have passed through several stages of their
careers, know various departments of the company and have been involved in
innovative projects.
• Expert specialists are not representatives who have been shaped by the given
professional structure, but rather embody innovative and forward-looking profes-
sional practice (prospectivity) for a particular professional field. Depending on
the occupational field, the industry structures must also be considered when
selecting experts so that a profession can be covered in its core and marginal
tasks.
• In practical EFW, 10–12 participants have emerged as a favourable group size.
Two-thirds of the participants should be at the skilled worker level and one-third
at the superior level. The representatives of the advanced skilled workers (fore-
men, master craftsmen and workshop managers) represent the work-oriented
management perspective in this process. Above all, they are the ones who can
assess the professional work tasks in relation to the company’s task organisation.
Table 3.2 Differentiation in developmental analysis and design of educational processes

General (formal) education
– Personality: Children and adolescents
– Subject of development: Development of cognitive, social and moral competence
– Development structures: Development steps of cognitive, social and moral competence
– Teaching/learning contents: Largely interchangeable
– Sciences to identify the development structure of teaching/learning content: General pedagogy, developmental psychology

Natural science education
– Personality: Pupils of natural science education
– Subject of development: Acquirement (development) of scientific theory competences
– Development structures: Stages of increasing specialist competence
– Teaching/learning contents: Scientific facts/learning content
– Sciences to identify the development structure of teaching/learning content: Natural sciences and their didactics

Vocational education
– Personality: Trainees (adolescents, adults)
– Subject of development: Acquirement/development of professional skills in specific professions
– Development structures: Stages of increasing professional competence
– Teaching/learning contents: Training regulations according to BBiG, professional curricula
– Sciences to identify the development structure of teaching/learning content: Vocational (field) science and its didactics, vocational pedagogy

Researcher and Moderator

The workshop is usually conducted by two researchers, at least one of whom has
relevant professional training—and if possible, also relevant work experience. The
‘second’ researcher acts as moderator and pays special attention to the methodical
approach and the realisation of a trusting and creative workshop atmosphere, which
enables all participating experts to contribute all their experience and competence to
the analysis process. The ‘first’ researcher leads the expert discussion, clarifies
technical contradictions and deepens the discussion through technical suggestions
and interventions.

3.3.3 Professional Scientific Work Process Studies


Goals and Structure of Work Process Studies

Work process studies can be used to gain insights into the skills incorporated into
practical professional work. In the tradition of didactics in vocational education and
training, this question is rather undervalued. The method is widely used to derive
specialist knowledge from objective scientific knowledge in a process of simplifi-
cation—of didactic reduction or transformation—in order to teach it to students or
trainees in specialist instruction (Schein, 1973). It is assumed that this ‘knowledge’
must have a connection to professional action. In this tradition, knowledge contents
in the form of ‘subject theory’ are regarded as objectively given facts whose
objectivity is based on the specialist sciences. However, the real importance of this
knowledge for practical professional action remains unclear. What we do know is
that this context-free knowledge can only be used as a basis for professional
competence when it is incorporated into concrete professional activities. Parts of
this context-free theory are safely transformed into work process knowledge in the
process of professional work. Founding vocational education and training on in-depth insight into work process knowledge marks a fundamental
change of perspective in vocational education and training practice: ‘If it is possible
to find access to what constitutes the practical skills, the incorporated knowledge of
vocational work, its findings will be invaluable and exert a lasting, if not revolu-
tionary influence in many areas—for example in curriculum and evaluation
research’ (Bergmann, 1995, 271). In this regard, vocational training and work
appear in a new light. There are interesting references to historical developments
in which, for example, the art of building was based not on engineering science but on the work process knowledge of the great master builders, which had developed in a process of work experience over centuries.
Work process studies are therefore an important instrument of qualification
research for the identification of professional knowledge and skills as a basis for
competence diagnostics.

The Steps of Occupational Scientific Work Process Studies (Table 3.3)

If a work process study is carried out with the aim of developing professionally valid
test tasks for competence diagnostics, then the identification of ‘important work
situations’ for professional competence development (KMK, 1999) is the focus of
research interest.
In addition to the criterion of qualitative representativeness or exemplariness of
the work process in terms of the purpose of the study, another selection criterion is
the clarity of the work process, which is given if it is possible for the researcher
qualified in vocational science to record the work situation in all essential objective
and subjective moments under the given operational framework conditions and
within the time available for examination. Finally, the work process should be
directly accessible to the researcher so that he can be present in the work situation.
The main criteria for the selection of the object of investigation are therefore
• Validity of content through qualitative representativeness and exemplariness,
• Manageability of the work process (limitation of the field of investigation while
maintaining its complexity of content),
• Accessibility of the work process (for an emphatically action-oriented process-
oriented research).
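To illustrate, the three selection criteria can be read as a conjunctive checklist: a candidate work process qualifies for study only if all of them hold. The sketch below is purely illustrative; the function and key names are our own shorthand, not established terminology.

```python
# The three selection criteria from the text, encoded as checklist keys.
CRITERIA = (
    "content_validity",   # qualitative representativeness and exemplariness
    "manageability",      # bounded field of investigation, complexity preserved
    "accessibility",      # the researcher can be present in the work situation
)

def eligible(candidate: dict) -> bool:
    """A candidate work process qualifies only if every criterion is met."""
    return all(candidate.get(criterion, False) for criterion in CRITERIA)

candidate = {"content_validity": True, "manageability": True, "accessibility": False}
print(eligible(candidate))  # rejected: the process cannot be observed directly
```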
The analysis of professional work processes requires professional competence on
the part of the researchers, which enables them to conduct expert talks and discus-
sions at the level of domain-specific technical language. This includes knowledge of
the work process and context to be analysed, i.e.,
• The technical issues: work object, work equipment and tools, and work processes,
• Specifications for the professional or operational work tasks in which the work
process to be examined is integrated,
• The instructions and documentation available for the execution of the
corresponding work tasks,
• The subject-systematic (theoretical) connections as far as these are of importance
for the competent working action.
This preparatory step can also include the practical handling of work objects and
tools to such an extent that the work process to be examined is technically clear for
the researcher and he can fully concentrate on the research of the concrete work
action of the actors and the specific and general competences expressed therein. The analysis and—if necessary—the appropriation of the objective side of the work process is therefore of particular importance, as it avoids the frequently encountered mystification of implicit abilities and intuitive competence in action (tacit knowledge, tacit skills) and the ‘arationality’ of competent action, in the terminology of Dreyfus and Dreyfus (1987). Since the researcher has the theoretical and—if necessary—also a certain practical professional competence for the work process to be analysed, an essential prerequisite is given for comparing the process-related working action and the actors’ interpretation of this action with the researcher’s own interpretations in the research situation and, if necessary, for feeding the differences back to the actors. This is an essential prerequisite for the validity of the test results.

Table 3.3 Steps of occupational scientific work process studies
• Selection of the work process.
• Analysis of the objective conditions shaping the work process.
• Definition and formulation of preliminary research questions and hypotheses.
• Preparation of the study: Approach to the research field.
• Implementation of the work process study.
• Evaluation of the study.

Definition and Formulation of Preliminary Research Questions and Hypotheses

In identifying and formulating the preliminary research questions and hypotheses, general questions such as the following are taken up:
• How do beginners, advanced and expert workers act in the work context under
investigation?
• Which professional, social and methodological competencies enable the actors to
act professionally and competently?
• How do theoretical and experience-based action intermesh and how is subjective
theory formed among the actors (work process knowledge)?
• To what extent and in what quality is competent working action required?
• To what extent and how do skilled workers use the tools available for their work?
• Which (de)qualifying effects can be identified in the work process and how are
they triggered and favoured?
In contrast to the social science research context, the focus here is not on
developing generalisable theories about occupational work structures, vocational
learning (learning theories) or company organisational development. The aim of the
occupational scientific analysis of work contexts is to determine the relationship
between work content and professional competence for a specific occupation or for
specific professional work contexts and processes.
Despite the professional competence conferred on the researcher by his occupational-scientific qualification and his professional preparation for the study, the researcher must assume that the actors’ knowledge of the work process can only be made accessible through the research process. It is therefore essential to avoid prematurely deducing subjective working behaviour and professional knowledge and competence from the objective conditions of the work process. The preliminary hypotheses are therefore kept open, so that they can be clarified, modified or even completely rejected and replaced by others as the investigation progresses. This process-oriented research aims at a dialogue consensus between researcher and actor(s) and includes the questions and hypotheses of the investigations (cf. Scheele, 1995).
3.3 Task Analyses: Identification of the Characteristic Professional Work Tasks 33

Preparation of the Study: Approach to the Research Field

After the work process to be examined has been selected and justified in occupational-scientific terms, analysed from the perspective of the objective side of work, and a suitable investigation setting (company, skilled worker, supervisor, etc.) has been chosen, the actors are informed about the project and their interest in it is aroused.
It should normally be in the interest of professionals and management to enable
and actively participate in vocational work studies, as they aim to improve the design
and organisation of work and vocational qualification.
When presenting the project, the researcher also points out his professional qualifications in the area of responsibility to be examined. This not only fosters the relationship of trust that such an inquiry always requires, but also defines the investigation situation as one ‘among experts’. The intention of the investigation, its form and the methodical procedure are presented to the participants. The situations to be examined are commented on from a technical perspective on the basis of the previous analysis of the objective side of the work, and the objective and interest of the investigation are justified. By emphasising the technical content of the investigation, the investigator becomes, to some extent, a participant in the research process. The aim is to guarantee or promote:
• The acceptance of the researcher by the study participants;
• The greatest possible scope for action and design in the investigation;
• Extensive identification of the parties involved with the investigation project;
• A definition of verbal and non-verbal communication at the level and in the quality of professional and work-related professionalism: those to be examined know what they can expect of the researchers in terms of content and that they can communicate without distorting their accustomed forms of expression;
• An emotional opening of the persons to be examined;
• A climate of trusting cooperation based on specialist professional collegiality.

Implementation of the Work Process Study

The empirical basis of the occupational scientific work study is provided by:
• The structure of the operational work process;
• The operational organisation of work processes and tasks;
• The persons involved in the study and their professional, technical and socio-occupational competences;
• The concrete work contexts to be analysed, in their technical and company-specific contents as well as their forms, represented by the concrete work action and the objective circumstances.
It is necessary for the researcher to grasp the (technical and methodological) content of the work context to such an extent that he can understand the significance of the concrete work steps for the processing of a work task under investigation and interpret them with regard to strategic, spontaneous, creative, ‘programmed’ and systematic/technical procedures, as well as evaluate incorrect and inappropriate procedures. This professional competence enables the researcher to sound out the work situation to the necessary depth. The researcher should therefore acquire any missing specialist skills before starting the study (Frieling, 1995, 285; Bergmann, 2006).

The Action-Oriented Expert Discussion

The key questions for the technical discussion are formulated in advance in a discussion guideline. Interviewing different groups of people naturally also requires different key questions. It is important to assign the main questions to the higher-level research questions: which questions and combinations of questions should be used to cover which aspects of the study? The main questions serve rather as a ‘checklist’: they allow the researcher to become deeply involved in the work process under investigation, since with their help he can always return to the reflective level of the more detached analyst. The researcher takes part in the work situation and, by means of appropriate impulses and questions, encourages the skilled worker to voice his thoughts about what he is doing at the moment. The interview is conducted according to the situation, very close to the work process.

Paraphrasing

If the researcher has the impression during the ‘technical discussion’ that an utterance remains on the surface of the work situation, is misleading, or is even incomprehensible to him, it makes sense to repeat the utterance interpretively, so that the actor has the opportunity to correct, deepen or simply clarify the previous utterance.
Example
Researcher: ‘I have now understood . . .’.
Dialogue partner: ‘Not quite, e.g. if I . . .’.

Enquire, Reflect and Clarify

Enquiries usually lead to a certain interruption of the work situation. During work situations, the researcher may also encounter technical situations in need of clarification, which he may understand in their content but not in relation to the work action of the skilled worker. If the researcher assumes that a specific work action is of particular importance for one of his research questions, this requires a more in-depth enquiry and, if necessary, a special expert discussion about the specific ‘case’ and its processing possibilities. Such conversational situations usually mean an interruption of the work action. An explicit interruption of the work situation is achieved through an intervention that the researcher initiates with remarks such as ‘Just a moment, isn’t what you are doing risky?’ or ‘Couldn’t we solve the problem this way?’ A technical intervention is appropriate:
• If the researcher cannot follow a work action even though he understands its technical side;
• If only an intervention can clarify why the skilled worker prefers one of several possible work steps;
• If it seems expedient to encourage the skilled worker to apply an alternative procedure or to play it through mentally.

Qualitative Experimentation

The experiment has a dual function in work studies. First of all, experimental testing is part of the repertoire of the theory- and experience-led working practice of skilled workers. One very typical example is fault isolation in technical systems, in which an experimental approach promises success. This is the case, for example, when the incremental elimination of possible error causes from a large number of candidates leads to the identification of the actual cause. This form of work action, the systematic, experimental isolation of the cause of an error, is underestimated in the relevant investigations. Skilled workers very often refer to their ‘empirical values’, which they frequently acquire, however, by experimenting in their work. This includes thought experiments.
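The incremental elimination of possible error causes described above can be expressed, purely as an illustration, in code. The candidate causes and the `is_cause` check below are hypothetical examples, not taken from the text:

```python
# Illustrative sketch: incremental fault isolation by eliminating
# candidate causes one by one until the actual cause is identified.

def isolate_fault(candidates, is_cause):
    """Test each candidate cause in turn; eliminate those that are
    ruled out and return the confirmed cause plus the causes that
    were never ruled out."""
    remaining = list(candidates)
    for candidate in candidates:
        if is_cause(candidate):
            return candidate, remaining
        remaining.remove(candidate)  # ruled out, eliminate it
    return None, remaining

# Hypothetical example: a technical system with four possible causes.
candidates = ["blown fuse", "loose connector", "faulty sensor", "broken relay"]
actual = "faulty sensor"
cause, remaining = isolate_fault(candidates, lambda c: c == actual)
```

The sketch captures only the elimination logic; in real diagnostic work each `is_cause` test is itself an experiment on the technical system.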
Another important aspect in the context of occupational scientific work studies is
the qualitative experiment. In contrast to the laboratory experiment, the researcher
creates a quasi-experimental situation in the form of explorative, heuristic experi-
mentation through his intervention in the process of action-oriented observation.
Often, detached observation and active experimentation are understood as two
opposite forms of researcher behaviour towards his ‘subject’. In reality, these two
knowledge-generating methods are mutually (dialectically) intertwined. The exper-
iment becomes significant only through precise observation and—conversely—
observation becomes significant through the systematic and systematising activities
of the researcher. There is a considerable need for development in advancing forms
of explorative and qualitative experiments for research into complex work situations.
Two forms of qualitative and explorative experiments are available.

Planned Explorative-Experimental Work Process Studies

In this case, the researcher influences the variation of two or more factors assumed to be relevant for a typical, real work context in such a way that their effects on the management of the work context under examination, or even only certain aspects of it, can be studied.
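The planned variation of two or more factors can be sketched as the enumeration of factor combinations to be observed. The following is a purely illustrative sketch; the factor names and values are hypothetical:

```python
# Illustrative sketch: enumerate all combinations of two work-context
# factors assumed to be relevant, as an observation plan for a planned
# explorative-experimental work process study.
from itertools import product

factors = {
    "tool": ["manual diagnosis", "computer self-diagnostics"],
    "time_pressure": ["low", "high"],
}

# Each combination defines one quasi-experimental situation to observe.
observation_plan = [dict(zip(factors, values)) for values in product(*factors.values())]
```

With two factors of two values each, the plan contains four quasi-experimental situations.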

Situative ad-hoc Experiments

The participating investigation of work processes, based as it is on action-oriented research discussions and action-oriented exploration, suggests using the explorative character of participant observation to redesign work situations in a quasi-experimental way. Prompts such as the following can be used to create such quasi-experimental situations ad hoc:
• ‘What would you do if you didn’t have computer self-diagnostics?’
• ‘Could you demonstrate how to work on the problem without the manual?’
Such interventions can also challenge ‘thought experiments’.

Documentation of the Research Process: Tape and Memory Records

The methods recommended in the relevant methodological manuals, especially in empirical social research, can be used to document the study. Tape and memory records are particularly important in this context. The shorter the time interval between the event and the creation of its record, the better the quality of the memory records. A two-column procedure is recommended for the memory record.

The Memory Records (Table 3.4)

Table 3.4 The memory record

Column 1 – Documentation of the work process: This column describes the work process in as much detail as possible, in all its dimensions and aspects. Written and other documents are documented in the appendix or noted as sources. If a tape recording is available, it is important to document in the memory record those facts which cannot be taken directly from the tape or video recording.

Column 2 – Documentation of the researcher’s actions and their justification/spontaneous impressions: In an action- and process-oriented work process study, the actions of the researcher are of particular importance. The actions of the researcher, synchronous with the actions of the actor, are documented in the memory record and briefly explained where this does not follow from the context of the action. In this column, spontaneous impressions are also noted, according to the motto: ‘What went through my mind’.
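The two-column procedure can also be represented as a simple data structure, purely as an illustration. The field names and sample entries below are assumptions for the sketch, not prescribed by the method:

```python
# Illustrative sketch: a memory record entry pairing the observed work
# process (column 1) with the researcher's synchronous actions and
# spontaneous impressions (column 2).
from dataclasses import dataclass

@dataclass
class MemoryRecordEntry:
    work_process: str                 # column 1: the work process in detail
    researcher_action: str            # column 2: the researcher's action
    spontaneous_impression: str = ""  # column 2: 'what went through my mind'

record = [
    MemoryRecordEntry(
        work_process="Skilled worker reads the error log of the machine control.",
        researcher_action="Asked why the log is consulted before the visual check.",
        spontaneous_impression="Routine seems to differ from the manual's procedure.",
    ),
]
```

Keeping both columns in one entry preserves the synchronicity between the observed action and the researcher's intervention that the method requires.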

Situation Film as Supplementary Documentation of the Working Reality

In contrast to the method of thick description, when creating cinematic documentation of the work contexts to be examined, it is not important to document a work situation as completely as possible. Attempting to do so would probably yield a barely manageable mismatch between the few ‘key scenes’ and an abundance of unusable recordings.
recordings. The concept of the situation film assumes that the researcher has a deep
understanding of the work situation and is therefore in a position to document ‘key
situations’ on film. This means that ‘recognisable’ situations are already selected
during the documentation process. This is feasible within the framework of profes-
sional scientific research, because the work context to be investigated is a field
familiar to the researcher. Nevertheless, in practice, a ‘yield’ of only about 20%
usable film material is assumed. In a series of research projects, the resulting
cinematic documentaries were didactically processed into ‘situation films’ for voca-
tional training. Unlike conventional educational films, the viewer is challenged to
interpret the documented situation himself or to develop a (or competing) interpre-
tation(s) in a discussion process between the participants and researchers. Therefore,
such situation films are particularly suitable for work process studies with actors
outside the concrete work situation. The situation film represents a quasi-work situation for the actors: the actor sees his own work situation through the film and projects the film onto it. This makes it possible to hold a technical discussion between
researcher and actor that is very close to the reality of the actor’s work. The
possibility of referring to cinematic scenes that have an affinity to their own work
situations significantly increases the actors’ willingness to express themselves and
their possibilities of expression. The more directly the working reality documented
in the situation film corresponds to that of the actors, the better this succeeds
(cf. Müller, 1995; Petermann, 1995).

Evaluation of the Study

Determining the direction of analysis. The direction in which the analysable material is to be evaluated is determined first. Here one needs to ask whether the object of the analysis is:
• The subjects of the study, their actions and competences,
• The object of work (e.g. a new tool) and the associated work method/procedure,
• The object of work and the organisation of work, or,
• The qualification requirements (and how they are reflected in the work process).

Qualitative Text and Material Analysis

Only then does the content analysis begin. The procedures proposed here go back to Mayring (1988) and can be distinguished into summarising, explicative and structuring analyses. The purpose of the summarising content analysis is to reduce the material in such a way that its essential meaning is retained while a condensed short text is created (Fig. 3.6).
Fig. 3.6 Processing steps of the interview material:
• Reformulation (paraphrasing): Text passages not relevant to the investigation, as well as redundant and repetitive passages, are deleted; the content-relevant passages are reformulated at the desired output level without changing the content of the statements.
• First reduction (generalising text passages with the same content): The cleaned text is structured into groups of corresponding text passages according to categories derived from the research objectives and the material. Within the groups, redundant texts are omitted, as are texts that do not share the same content.
• Second reduction (summary of the texts): Within the categorised text groups, identical and similar statements are bundled and summarised. Theoretical assumptions about the objective conditions of the work activity can be used as an aid, provided the meaning of the statements remains the same.
• Checking the categories and the scientific relevance of the compiled text: The text constructed and integrated in this way is compared with the categories formed at the beginning, and it is examined whether and, if necessary, how the categories need to be changed or further developed. The short text can now be examined with regard to its importance in vocational science.

In explicative content analysis, it is important, rather in reverse to summarising content analysis, to clarify the meaning of initially unclear text passages with the aid of memory protocols, other materials and theoretical assumptions. Qualitative content analyses in occupational science can be distinguished between:
• Explications along the skilled workers’ statements to determine the specific quality of the skills expressed in the work actions. The observed work actions themselves are used for the explication.
• An extended context analysis including all documents relevant to the work situation (company data, socio-cultural background, geographical data, data on technological innovations, etc.).
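The reduction steps of a summarising content analysis can be sketched as a simple text-processing pipeline, purely as an illustration. The sample statements and category labels below are hypothetical:

```python
# Illustrative sketch of a summarising content analysis:
# (1) drop irrelevant statements, (2) group by category,
# (3) bundle identical statements within each group.

def summarise(statements):
    """statements: list of (category, text) pairs; category None = irrelevant."""
    grouped = {}
    for category, text in statements:
        if category is None:          # first reduction: drop irrelevant passages
            continue
        bucket = grouped.setdefault(category, [])
        if text not in bucket:        # second reduction: bundle identical statements
            bucket.append(text)
    return grouped

statements = [
    ("fault diagnosis", "Checks the error log first."),
    (None, "Talks about the weather."),
    ("fault diagnosis", "Checks the error log first."),
    ("tool use", "Prefers the hand-held tester."),
]
summary = summarise(statements)
```

The sketch deliberately omits the interpretive work of paraphrasing and category revision, which cannot be automated in this way; it only shows the structural logic of the two reductions.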
Structuring content analyses aim at examining cross-sectional aspects and filter-
ing out the corresponding texts from the statement material (core statements).
Structuring analyses usually require hypothesis-driven and therefore theoretically
justified work process analyses. This results in the categorial framework for the
structuring content analysis.
All forms of content analysis already lead to an interpretation of the statements
thus obtained, either during the analysis or after presentation of the evaluated
material, taking into account the corresponding professional and subject-theoretical
contexts. In the final step of the work process studies, this ultimately leads to the
review, reformulation, clarification and further differentiation of hypotheses, test and
development tasks for the design and evaluation of vocational training processes.

3.4 Guiding Principles and Objectives of Vocational Education and Training

3.4.1 Professional ‘Gestaltungskompetenz’ (the Ability to Shape or Design One’s Professional Future)

In the world of employment, skilled workers are confronted with a more or less pronounced scope for creativity and alternative solutions when solving professional tasks. When weighing up alternative solutions, a ‘good’ compromise must always be found between the criteria of functionality, environmental and social compatibility, efficiency and sustainability, related in each case to the specific situation of the work and business processes to be designed. The ability to exploit this specific scope for solutions and design in everyday professional work is based on professional ‘shaping competence’ (cf. KMK, 1991). In 1996/1999, the KMK
formulated this guiding principle as an educational mandate for dual vocational
training and based the introduction of the learning field concept on it: ‘Vocational
schools and training companies fulfil a joint educational mandate in dual vocational
training. The vocational school is an independent place of learning... [It] aims at
basic and specialised vocational training and expands previously acquired general
education. In this way, it aims to enable people to fulfil their professional tasks and
to play a part in shaping the world of employment and society with social and
ecological responsibility’ (KMK 1999, 3; 8). The Alliance for Jobs, Vocational
Training and Competitiveness (1999, 54) assumes this educational mandate for the
‘structural further development of dual vocational training’.

3.4.2 Design-Oriented Vocational Education

The concepts for the design of work tasks and technology developed in technology assessment practice and occupational science research found their way into the establishment of design-oriented work and technology research in the mid-1980s, in which education and qualification were, from the very beginning, considered and taken into account as factors inseparably associated with this research (Sachverständigenkommission Arbeit und Technik, 1986, 1988).
The Enquete Commission of the German Bundestag, ‘Future Education Policy—
Education 2000’ included the concept of design-oriented vocational training in the
documentation of its recommendations: ‘If the humanity of our future society
depends decisively on whether it is possible to stop divisions and fragmentation
[...] then education must first and foremost help to develop the will to design [...] and
must strive for designability [...]’ (Deutscher Bundestag, 1990).
Professional Gestaltungskompetenz refers to the contents and scope of design in
the solution of professional tasks. The central pedagogical idea of the ‘ability to help
shape the world of employment’ presupposes the organisation of learning in the
process of vocational work in such a way that the work contexts to be mastered—the
work tasks—challenge this creative competence. This was already reflected in 1991
in an agreement of the KMK on the vocational school (KMK, 1991).
Here, vocational education and training with its educational mandate goes far beyond ‘pure’ academic education. In a historically evolved world, especially the world of employment, whose conditions essentially arise from the objectification of purposes and of the interests and needs contained in them, skilled workers are constantly challenged to weigh up various technical, ecological and social criteria when solving professional tasks. Every technology, for example, can therefore only be understood in the context of what is socially desirable and technically possible. Table 3.5 contrasts central categories of a vocational education and training oriented towards adaptation to the prevailing technology with one aimed at the design of work and technology.

3.4.3 Professional Competence

The guiding idea of professional competence, as defined, for example, in the Vocational Training Act, goes back to Heinrich Roth, who conceived of professional competence as a connection between material, personal and social competence (Roth, 1971).
Article 1 (3) of the Vocational Training Act reads ‘Vocational training must
impart the professional skills, knowledge and abilities (professional competence)
necessary for the exercise of a qualified professional activity in a changing working
environment in an organised training course’.
The basic commentary on the Vocational Training Act of Nehls and Lakies states:

Table 3.5 Comparison of characteristics of adaptation-oriented and design-oriented vocational education and training with regard to the analysis of vocational work tasks for curriculum development (Rauner, 2002a, 42 f)

Guiding educational principle
• Adaptation-oriented: The personality is qualified as a human resource for specific tasks; the qualification requirements are derived from organisational and technological innovations. Technology and work are predefined, and the quality requirements appear as dependent variables.
• Design-oriented: Ability to participate in shaping the working world; education as a prerequisite for an autonomous, self-confident and self-responsible personality; educational content and educational goals are regarded as simultaneously dependent and independent factors in relation to work and technology.

Goals of qualification research
• Adaptation-oriented: Identification of activity requirements for partial tasks defined in business management and technology and the resulting execution of actions and professional skills. Activities and activity structures are seen as given and thus as an independent variable in relation to qualification requirements.
• Design-oriented: Identification and description of professional work tasks for work contexts in the context of open, dynamic professionalism as a basis for task analysis. Deciphering the work- and work-process-related content of working and learning, with consideration of professional competence development.

Analysis strategies
• Adaptation-oriented: Complementary analysis: definition and identification of (residual) activities in human-machine interaction; ‘operation’ as key competence.
• Design-oriented: Optimisation of the tool character in human-machine interaction; improvement of the tutorial quality of computer-aided tools; logical task analysis; ‘shaping’ as key competence.

Analysis area
• Adaptation-oriented: (Remaining) activities in the context of operational functional areas; professional competence to act.
• Design-oriented: Professional work tasks as paradigmatic and developmental tasks, as a basis for logical developmental learning from beginner to reflected mastery.

Analysis dimensions
• Adaptation-oriented: Operational functions, processes and resulting work functions; work operations (activities); activity requirements; requirements for quality control; work and work processes in order processing in the context of operational functions.
• Design-oriented: Work contexts and their breakdown into the object of work; the methods, tools and organisation of work; and the demands on work from a social, subjective, company and customer perspective, as dimensions of working and learning in the context of company business processes.

Theoretical framework
• Adaptation-oriented: S-R (behavioural) theory; human resources development (HRD); deterministic planning and control concepts; function-oriented company organisation.
• Design-oriented: Humanistic personality theory; dialectics of education and qualification (vocational training theory); (partially) autonomous working groups and decentralised control concepts; business process-oriented company organisation.

Analysis methods
• Adaptation-oriented: Experimental analyses; quantitative methods of empirical social research; expertise research.
• Design-oriented: Situational experimentation; action-oriented expert discussions; vocational qualification research; methods of qualitative empirical social research.

The wider concept of competence is used in the debate on vocational education and training
policy. The general term competence initially refers to abilities, knowledge, attitudes and
values, the acquisition, development and use of which relate to a person’s entire lifetime.
Competence development is seen from the perspective of the subject, his or her abilities and
interests as well as his or her social responsibility. (...) Competence development should
create professional competence and a skill that enables working actions to be carried out
with extensive co-determination and participation in work and undertakings. Reflective
professional competence means the conscious, critical and responsible assessment and
evaluation of actions on the basis of experience and knowledge (Nehls & Lakies, 2006, 52).

The manifold attempts to enrich the category of professional competence developed by Heinrich Roth, which has found its way into the Vocational Training Act in one aspect or another, do not go beyond Roth’s explanations. In their totality, they have instead contributed to the confusion in the discussion around professional competence. Herwig Blankertz succeeded in developing the guiding idea of professional competence further; his concept of ‘education in the medium of the profession’ has found its way into the discussion on vocational education.
Professional competence includes the connection between knowledge and skills.
This results in its importance for the examination of professional competence in the
sense of examinations in accordance with the Vocational Training Act.

3.5 Theories of Vocational Learning and Professional Development

3.5.1 The Novice-Expert Paradigm: Competence Development in Vocational Education

The KMK agreement (1999) on the development of vocational curricula, the con-
tents of which are to be oriented towards ‘significant occupational work situations’
and company business processes, aims to replace the previous systematic structuring
and systematisation of vocational training plans with learning fields: ‘Didactic
reference points (for the design of vocational training processes) are situations
that are important for vocational training’ (ibid., 10). What is remarkable about this

agreement is the associated fundamental change of perspective in curriculum devel-


opment practice: The tradition of curricula structured according to subject systems is
to be replaced by one that emphasises the work and business processes characteristic
of a profession as a reference point for curriculum development. At the same time,
however, the processes formulated as objective requirements constitute a subject-
related quality of the curriculum. This is what the change of perspective mentioned
above depends on. The learning field concept is not based on a factually systematic
sequence of material, but on the thought of a meaningful connection between
important professional situations of action, which trainees should learn to cope
with better and better. If the factually systematic sequence of material no longer
forms the starting point for curriculum development, but rather the professional
requirements concretised in situations, then the subject of learning in the form of the
professionally competent actor also comes into focus. The principle ‘Actions should
promote a holistic understanding of professional reality, e.g. include technical,
safety, economic, legal, ecological and social aspects’ (ibid., 10) emphasises the
concept of holistic solutions for professional tasks.
With the emphasis on learning as a subjective construction process, the more
recent didactics discussion and teaching and learning research have more clearly
than ever emphasised the fundamental difference between instruction aimed at
knowledge and knowledge-acquiring learning.
The concepts of educational science implicitly taken up by the KMK together
with the learning field concept correspond to further theories of pedagogical impor-
tance that start with the development of competences. Vocational training courses
can be systematised not only technically but also as a development process from
beginner level (novices) to reflected mastery (experts) (cf. Benner, 1997; Dreyfus &
Dreyfus, 1987; Lave & Wenger, 1991; Rauner, 1999). In development theory, the
objective side—that is, the side that presents the subject with the requirements of
learning—always remains in place. This reflects the idea of development tasks
(Bremer, 2001; Gruschka, 1985; Havighurst, 1972) that are facing someone who
has not yet solved them: What someone cannot do at first—due to a lack of
developed competences—he or she learns in confrontation with the task that triggers
the appropriate competence development. Due to this basic developmental-
methodological pattern, the concept of development tasks is particularly suitable
for structuring vocational learning processes. Characteristic work tasks ‘paradig-
matic’ for professional work (Benner, 1997) are referred to when the work contexts
characteristic of a profession are simultaneously given a quality that promotes
professional competence development. Their identification first requires an analysis
of the objective conditions constituting a defined profession: the subject of profes-
sional work, the tools and methods and the (competing) requirements for profes-
sional work.

The reconstruction of work tasks that are important for professional competence
development (KMK, 1996) is most successful on the basis of ‘expert specialist
workshops’ (→ 5.1)3.
For the application of the methodological instruments of the expert specialist
workshops, this above all means that the respective work context in which the work
tasks are embedded must be consistently taken into account in the survey situation.
Both difficulties can be countered by professional scientific studies that address the
analysis of professional work processes and tasks in their situation (Lave & Wenger,
1991, 33; Becker, 2003; Kleiner, 2005).
The five levels of competence development identified by Hubert L. Dreyfus and
Stuart E. Dreyfus and the corresponding four learning areas arranged in development
theory (Fig. 3.7) have a hypothetical function for identifying thresholds and levels in
the development of vocational competence and identity as well as a didactic function
in the development of work- and design-oriented vocational training courses.
Development tasks and their functional equivalents are also of central importance for competence development in expertise research. Patricia Benner, for example, highlights the paradigmatic importance of development tasks for the gradual development of the professional competence of nurses4. For Benner, these development tasks refer to ‘paradigmatic work situations’: cases that challenge the skills of the nursing staff5.
It took almost two decades in Germany before the impetus given by the attempt to
justify competence development in vocational education and training in terms of
development theory was translated into didactic concepts. Over the last fifteen years,
extensive projects have been carried out to this end, both in educational theory and
empirical research. For the profession of car mechatronics, for example, a

3
In the practice of domain-specific qualification research, the expert-specialist workshops are
supplemented by management workshops and evaluating expert surveys, above all to increase the
prospective quality of the results.
4
Benner bases her domain-specific qualification research in the nursing field and its curriculum
development on the novice-expert paradigm developed by Dreyfus and Dreyfus (Benner, 1997;
Dreyfus & Dreyfus, 1987).
5
Theoretically and practically, there is a difference between Benner’s concept of ‘paradigmatic work situations’, which she identifies with methods of expertise research in reference to the novice-expert concept formulated by Dreyfus and Dreyfus, and Gruschka’s hypothesis-led studies of beginners (cf. Rauner & Bremer, 2004).
3.5 Theories of Vocational Learning and Professional Development 45

Fig. 3.7 Professional competence development ‘From beginner to expert’ (Rauner, 2002b, 325)

developmentally structured curriculum was developed in a Europe-wide pilot project (Rauner & Spöttl, 2002).
In the pilot project ‘Business and work process-oriented VET’, training courses
were also developed and tested for five core industrial occupations that are based on
development theory assumptions (see in detail Bremer & Jagla, 2000; Rauner,
Schön, Gerlach, & Reinhold, 2001).

3.5.2 Work Process Knowledge

Work process knowledge is regarded as a central knowledge category in the context of the reorientation of the didactics of vocational education and training towards work and work processes; it arises from reflected work experience; it is the knowledge incorporated into practical work. Work process knowledge is a form of knowledge that guides practical work; as context-related knowledge, it goes far beyond context-free theoretical knowledge.

Fig. 3.8 Work process knowledge as the connection between practical and theoretical knowledge as well as subjective and objective knowledge (Rauner, 2002b, 34)

The pilot projects ‘Decentralised Learning’ and ‘Learning at the Workplace’ (cf. Dehnbostel, 1994) already incorporated this development
by shifting training back into the work process. Since then, however, the vocational
educational discussion on ‘Learning at the Workplace’ has been characterised by the
fact that terms such as workplace, work process, professional action, professional
activity and work situation have not been used very clearly. The phrase ‘Learning at
the Workplace’ has now been largely displaced by the phrase ‘Learning in the Work
Process’. Despite all the vagueness of the terms that characterise the relevant
discussion, the shift to the concept of the work process takes into account the
structural change in the organisation of operational work and business processes:
The principle of function-oriented organisation is increasingly superimposed by that
of orientation towards operational business processes. This has sharpened awareness of the process character of work and of organisation as something that can only be developed in the course of operational implementation and organisational development.
Following the discussion on work process knowledge initiated by Wilfried Kruse (Kruse, 1986), this central category was identified and developed in numerous research projects as a fundamental form of knowledge for vocational learning (cf. Fischer, 2000a, 2000b).
In a first approximation, work process knowledge can be characterised as the
connection between practical and theoretical knowledge (Fig. 3.8). The development
of a scientific and pedagogical knowledge framework used to model vocational
competence suggests the introduction of distinctions which enable the differentiation
between three successive levels of knowledge in work process knowledge, based on Hacker: action-guiding, action-explaining and action-reflecting knowledge (Fig. 3.9).

Fig. 3.9 The three successive levels of work process knowledge
Action-guiding knowledge comprises the rules and regulations relevant for pro-
fessional action. It can therefore also be characterised as rule-based knowledge. The
traditional form of in-company instruction aims at rule-based knowledge, i.e. ‘know
that’.
The level of action-explaining knowledge is aimed at understanding the rules to be observed in a profession. Professionals at this level not only possess knowledge that guides their actions but also comprehend their professional tasks. They understand what they are doing and are able to act on their own responsibility on the basis of their insight into their professional tasks. This level of knowledge has a certain affinity to the concept of ‘know-how’.
Action-reflecting knowledge (know why) facilitates the exploitation of a smaller or greater scope for shaping professional work projects, allows for the most varied situation-based approaches and solution possibilities, also in dialogue with the client, and makes it possible to balance all relevant criteria in the process. At this (highest) level of work process knowledge, professionals are able to answer the question: Why like this and not in any other way?
Erpenbeck’s differentiation of knowledge into ‘explicitness’ and ‘value content’ also stems from a differentiation of the knowledge category required by the object of research. This
subdivision largely corresponds to the insight formulated in the discussion on
technical theory and didactics into the indissoluble connection between what is
technically possible and what is socially desirable (cf. Rauner, 1995). In this context,
‘value’ refers to technology as the process and the result of the objectification of
social purposes and the interests and needs incorporated therein. Erpenbeck uses this distinction between ‘pure’ knowledge and knowledge representing the expediency of social facts for a four-field matrix (Erpenbeck, 2001, 113), with which he demonstrates that explicit, pure knowledge, as it exists in the form of scientific
fact and legal knowledge, only contains very little knowledge relevant for compe-
tence development.
The differentiation of the category of practical knowledge as a dimension of the
work process enables domain-specific knowledge research, which provides more detailed information about work process knowledge and therefore also promises
results about the mediation of work process knowledge in or for professional work
processes. However, this only partly answers the overriding question of whether the
disintegration of validity resulting from the accelerating change in the working
world fundamentally devalues this knowledge as a point of reference for profes-
sional competence development. According to a popular thesis, technical compe-
tences are devalued by the disintegrating validity of professional knowledge. The
professional dimension is therefore virtually shifted to a meta-level at which it is
only important to have appropriate access to the expertise documented in convenient
media, knowledge stores and knowledge management systems. The situational
development of the ‘knowledge’ required for the specific work tasks—knowledge
management—is therefore essential6. Studies on the exponential increase in ‘objec-
tive knowledge’ seem to confirm this assumption.
Professional competence would then evaporate into a form of domain-specific
methodological competence. However, this thesis was refuted in the extensive
studies on the change in skilled work and qualification requirements, especially in
the field of diagnostic work. On the contrary, relevant vocational-educational studies
have confirmed the thesis that professional work process knowledge, which provides
the basis for professional expertise, has tended to increase in importance7.
To the extent that domain-specific qualification research succeeds in regaining
ground under the feet of empirical curriculum research, the diffuse formula of key
qualifications loses its placeholder function. At the same time, expertise and qual-
ification research supports the concept of vocational learning in the context of
important work situations and thus the guiding principle of a curriculum structured
according to learning fields. The orientation of vocational learning towards

6
The thesis of the de-specialisation of vocational education and training has been supported at the
latest since the flexibility debate in the 1970s. According to the central argument, vocational
education and training that takes account of accelerated technological change must above all strive to promote and maintain the necessary basic scientific and social understanding, and impart knowledge and skills specific to activities only on a secondary level (Kern/Schumann, quoted by Grünewald, Degen, & Krick, 1979, 115). Wilfried Kruse comes to very
similar conclusions in his assessment of qualification research in the 1970s: ‘The expansion of
qualification in the state school system and the extensive separation of vocational training from
direct production are expressions of the increase in general, more theoretical elements in the
change in the production of the working capacity of young workers’ (Kruse, quoted from
Grünewald et al., 1979, 121).
7
Cf. Drescher (1996), Becker (2003), Rauner and Spöttl (2002).
(vocational) work and business processes, in a design-oriented perspective, assumes an autonomy of working action beyond the one-dimensionality of scientific
rationality as it is characteristic for the subject-systematic curriculum (cf. Fischer &
Röben, 2004). According to this, the category of ‘subject’ knowledge is problematic
in that it refers to subject-systematic knowledge, the sources of which are based on
the specialist sciences. Professional action and design competence are not based on
(scientific) specialist knowledge, but on the knowledge of action and work processes
incorporated into practical professional work.

Tacit Knowledge (Polanyi, 1966b; Neuweg, 2000)

With the theory of implicit knowledge (Tacit Knowledge), Polanyi has drawn attention to a dimension of knowledge to which Neuweg attributes paradigmatic significance for professional ability. Since then, the concept of Tacit Knowledge has
been regarded as a key category for the development of the concept of professional
competence. This special weighting of implicit knowledge as the basis for competent
professional action can also be attributed to the fact that social science-based
attempts to approach the specificity of professional knowledge and skills had to
fail simply because the theoretical and empirical access to knowledge incorporated
in practical professional work is largely blocked (cf. Bergmann, 1995; Garfinkel, 1986).
Once formulated, the concept of Tacit Knowledge met with approval far beyond the discussion in knowledge psychology, especially in educational practice, where it was illustrated by numerous examples. It removed vocational training
practice and, to a certain extent, vocational training research from the requirement to
decipher and name the knowledge incorporated into practical vocational work. The
withdrawal of the surveyed experts to the position that ‘these are empirical values’
was and is often accepted as the last answer to the many unanswered questions about
qualification requirements. Georg Hans Neuweg has presented a differentiated
development of this knowledge concept and examined its didactic implications for
academic vocational education in German-speaking countries. With his comprehensive theory of implicit knowledge, Neuweg characterises the didactic concept of subject-systematic knowledge as a reference point for professional competence development as an ‘intellectualistic legend’. The widespread assumption in vocational education that subject-systematically structured knowledge represents a kind of shadow image of professional action which, in procedural terms, leads to professional ability is based on a fundamental category mistake (cf. Fischer, 2002; Neuweg, 2000).
Using his own experience in dealing with Ohm’s law as an example, Matthew
Crawford illustrates the difference between theoretical and practical knowledge and
the limited relevance of theoretical knowledge for action (Crawford, 2010, 215 f.).
Theo Wehner in particular pointed out the danger of mystifying professional
skills with the category of Tacit Knowledge. A large proportion of the implicit
knowledge could be explicated if qualification and knowledge research were to
improve its research methods. Similar to Garfinkel, Theo Wehner and Dick (2001)
see the challenge of qualification and knowledge research in identifying work
process knowledge and not hastily qualifying this knowledge as ‘tacit’.
Professional competence is therefore developed in a process of reflected practical experience (reflection-in-action). According to Schoen, professional competence development is based on the expansion of a repertoire of unique cases. In this context, one can at best speak of case-based systematic learning; competence development, on the other hand, cannot be founded on the systematics of a subject discipline.

3.5.3 Practical Knowledge

In the following, the category of practical knowledge will be examined in more detail, as it has so far hardly found its way into curriculum research. This is
particularly serious for vocational education and training, as it is directly related to
work experience, knowledge and skills. Here, we should refer to the current discus-
sion on the basis of a theory of social practices, such as that initiated by Andreas
Reckwitz from a sociological perspective. His reference to the implicit logic of
practice, as expressed, for example, in the artefacts of the working world and the
knowledge, interests and purposes objectified in them, is of interest to vocational
science and vocational education.
Central to the practical understanding of action is that, although action also contains
elements of intentionality [...], the status of intentionality, normativity and schemata are
fundamentally modified if one assumes that action within the framework of practices can
first and foremost be understood as knowledge-based activity, as an activity in which
practical knowledge, ability in the sense of ‘know-how’ and practical understanding are
used (Reckwitz, 2003, 291 f.).

In practice theory, practical knowledge according to Reckwitz includes:
1. ‘Knowledge in line with an interpretative understanding, i.e. a routine assignment
of meanings to objects, persons, etc.,
2. A methodical knowledge of script-like procedures, how to produce a series of
actions competently,
3. A motivational-emotional knowledge, an implicit sense of what one actually
wants, what it is about and what would be unthinkable’ (ibid., 292).
With this definition, Reckwitz omits a dimension of practical knowledge relevant
to vocational science and education. The materiality of practice, as emphasised by
Reckwitz, reduces technical artefacts to the dimension of the technical as a social
process, just as in established sociological research on technology. In curriculum
theory, an expanded concept of technology is required that includes the dimension of
knowledge about the technical itself.
In researching the paradigmatic work situations and tasks for nurses, Patricia
Benner attaches a constitutive importance to practical knowledge for professional
competence and takes up Schoen’s epistemological positions, which he founded in
Table 3.6 The six dimensions of practical knowledge (based on Benner, 1997; Rauner, 2004)
Dimensions of practical
knowledge
Sensitivity With increasing work experience, the ability to perceive and evaluate increasingly subtle, even the subtlest, differences in typical work situations develops.
Contextuality The increasing work experience of the members of the professional
practice groups leads to the development of comparable patterns of
action and evaluations as well as to intuitive communication possi-
bilities that go far beyond linguistic communication.
Situativity Work situations can only be adequately understood subjectively if
they are also understood in their genesis. Assumptions, attitudes and
expectations guided by experience lead to comprehensive awareness
and situational action and constitute an extraordinarily fine differen-
tiation of the action plans.
Paradigmaticity Professional work tasks have a paradigmatic quality in the sense of
‘development tasks’ if they raise new content-related problems in the
development process, which force us to question and newly establish
existing action concepts and well-coordinated behaviours.
Communicativity The subjective significance of the communicated facts is highly congruent within a practice community. The degree of professional
understanding is far higher than that of external communication; the
context-related language and communication can only be fully
understood by members of the practice community.
Perspectivity The management of unforeseeable work tasks on the basis of the
fundamentally incomplete knowledge (knowledge gap) is character-
istic for practical work process knowledge. This gives rise to a meta-
competence that enables us to deal with non-deterministic work
situations.

his ‘Epistemology of Practice’ (Schoen, 1983). She distinguishes six dimensions of practical knowledge (Benner, 1997). With reference to results of qualification research in industrial-technical domains, these dimensions of practical knowledge will be outlined below in order to further differentiate the category of work process knowledge (Table 3.6).
From the point of view of ethnomethodology (Garfinkel), practical knowledge
has its own quality, which results from the mode of its origin. Harold Garfinkel has defined ethnomethodology generally as the investigation of ‘the rational properties of indexical expressions and other practical actions as contingent ongoing accomplishments of organized artful practices of everyday life’ (Garfinkel, 1967, 11). This suggests an expanded or modified concept of competence able to do
justice to the complex dynamics of circulation between the two
ethnomethodologically basic concepts of ‘producing’ and ‘acquiring’ (in each case
from practice). The ‘methods’ that ethnomethodology uses to research what social
reality both creates and allows to understand cannot be presented without corre-
spondingly complex competence8.
The proximity to the theory of multiple intelligence founded by Gardner is
obvious. Both the debate on knowledge and competence and the departure from the concept of universal intelligence refer to the diversity of human abilities. In the
preface of his work ‘Frames of Mind: The Theory of Multiple Intelligences’,
Gardner formulates his central thesis: ‘If we want to grasp the entire complex of
human cognitions, I think we have to consider a much larger and more comprehen-
sive arsenal of competencies than we are used to. And we must not deny the
possibility that many and even most of these competences cannot be measured
with those standard verbal methods that are predominantly tailored to a mixture
of logical and linguistic skills’ (Gardner, 1991, 9).
Almost a decade before Gardner, Donald Schoen’s analysis of the problem-solving behaviour of different professions provided comparable insights into professional skills and cognitive requirements. Gardner’s analyses are
concerned with the psychological (cognitive) performance requirements for compe-
tent action (Professional Knowledge Systems). Schoen’s merit is to prove,
corresponding to the category of practical intelligence, the fundamental importance
of practical competence and professional artistry as an independent competence not
guided by theoretical (declarative) knowledge. At the same time, this leads him to a
critical evaluation of academic (disciplinary) knowledge as a cognitive prerequisite
for competent action. Schoen summarises his findings on practical competence in the
following insight:
I have become convinced that universities are not devoted to the production and distribution
of fundamental knowledge in general. They are institutions committed, for the most part, to
a particular epistemology, a view of knowledge that fosters selective inattention to practical
competence and professional artistry (Schoen, 1983, VII).

In this context, he quotes from a study of medical practice: ‘85% of the problems a doctor sees in his office are not in the book’. Schoen sees the deeper
cause for the inability of the education system to impart knowledge that forms the
basis of professional competence in disciplinary subject-systematic knowledge: ‘The
systematic knowledge base of a profession is thought to have four essential proper-
ties. It is specialized, firmly bounded, scientific and standardized. This last point is
particularly important, because it bears on the paradigmatic relationship which
holds, according to Technical Rationality, between a profession’s knowledge base
and its practice’ (ibid., 23).
He takes a critical look at the concept of didactic reduction that was developed in
the USA in connection with the term ‘Applied Academics’. Thus, for example, the
concept of ‘contextual learning’ in high schools is not interpreted as imparting

8
With the ethnomethodological research concept of ‘Studies of Work’, Harold Garfinkel has
established a research strand that can be made fruitful in many ways in vocational education and
training research. The theories of ‘Tacit Knowledge’ and ‘Studies of Work’ assume a multiple
concept of competence without it already unfolding in its dimensions.
practical knowledge and problem-solving competence, but rather as a form of learning for acquiring ‘academic knowledge’ (cf. also Grollmann, 2003). Theoretical
knowledge (Academic Knowledge) is then taught in an application-oriented manner.
Schoen notes critically: ‘This concept of ‘application’ leads to a view of professional
knowledge as a hierarchy in which ‘general principles’ occupy the highest level and
‘concrete problem solving’ the lowest’ (Schoen, 1983, 24).
This training and curriculum practice is in stark contradiction to the results of his
analyses of the thoughts and actions of ‘professionals’ (Schoen, 1983, 138 ff.).

Practical Terms and Practice Communities

The concepts of practical knowledge and reflection on and in action correspond to the concept of practical terms by Klaus Holzkamp (1985, 226 f.), according to which
the terms that people subjectively have are basically practical, in so far as their
aspects of meaning, their scope of meaning and the fields of meaning (as the sum of
the aspects of meaning and their linkage) are shaped by the respective development
processes. Pursuant to Schoen, it is therefore not important in training to teach and
learn scientifically defined terms. These represent only a fraction of the meaning of
practical terms and thus justify only very limited (professional) competence to act.
The relationship between theoretical-scientific and practical terminology will be
examined in more detail using the example of the category ‘electrical voltage’.
Electrophysically, electrical voltage is defined as follows:
A small body carrying the constant amount of electricity Q travels a distance S from a
starting point to an end point in an electric field. The field forces on the body perform a work
A12, which is proportional to the amount of electricity Q. The quotient A12/Q is therefore a
variable independent of Q and assigned to path S from 1 to 2. This is called electrical
voltage U between 1 and 2, in short U1,2 (see in detail Adolph, 1984, 107 ff.).
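Condensed into formula form, the quoted definition reads as follows (a sketch: the first equation restates the definition quoted above, while the integral notation with the field strength \(\vec{E}\) is standard electrophysical shorthand added here for illustration):

```latex
% Electrical voltage as work per unit charge along the path from 1 to 2
U_{1,2} \;=\; \frac{A_{12}}{Q}
\qquad\text{with}\qquad
A_{12} \;=\; \int_{1}^{2} \vec{F}\cdot\mathrm{d}\vec{s}
\;=\; Q\int_{1}^{2} \vec{E}\cdot\mathrm{d}\vec{s},
\quad\text{hence}\quad
U_{1,2} \;=\; \int_{1}^{2} \vec{E}\cdot\mathrm{d}\vec{s}.
```

Because the field force on the body is proportional to the amount of electricity (\(\vec{F} = Q\vec{E}\)), the quotient \(A_{12}/Q\) is indeed independent of \(Q\), which is precisely why it can be assigned to the path from 1 to 2 as a quantity in its own right.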

According to this, electric voltage is a field quantity that cannot be understood without insight into field theory. True to the pedagogical-didactical rules of the
systematic and scientific consolidation of professional experience or work-related
learning, it would be important to skilfully convey this definition. Didactic dexterity
is characterised by the use of forms of inductive learning such as experimental or
action-oriented learning (cf. e.g. Pätzold, 1995).
The scientific definition of electrical voltage serves to define the physical phe-
nomenon of electrical voltage as it can be experimentally reproduced. The real
technical and economic facts of electrical voltage, on the other hand, are something
completely different. The technical realisation of electrical voltage follows the
specifications (serviceability properties), which are defined for the unlimited variety
of different voltage sources and forms from the single cell to the 400-kV high-
voltage system. The immense variety of voltage forms and sources—and with them
the available forms of electrical voltage—in principle have an infinite number of
practical value properties and object meanings (in the sense of meaningful knowl-
edge). The technical facts of ‘electrical voltage’ make very different demands on
consumers, development engineers, specialists, teachers, nurses or economists. The action-relevant aspects of meaning and fields of meaning of the respective practical
terms of electrical voltage are manifold and at the same time highly relevant for
competent action (cf. in detail Rauner, 2004).
Didactical and professional research is faced with the task of determining the
preliminary understanding and subjective fields of meaning of beginners’ technical
terms and of opening up the professionally related fields of meaning of central
technical terms of experts. Only then can teaching and learning strategies be
developed that facilitate the gradual transformation of the fields of meaning and
structures of everyday terms and theories into professionally related fields of mean-
ing. However, the decisive factor here is that the action-guiding technical terms are
not categorically restricted but are retained in their scope as practical terms and are
constantly developed further.

3.5.4 Multiple Competence

According to Klieme and Hartig (2007, 17), the reference to ‘real life’ is regarded as
a key feature of the concept of competence. In this context, Andreas Gruschka
considers a concept of competence necessary that is not limited to individual actions:
‘Competences are not bound to a specific task content and a correspondingly
narrowly managed application, but allow for a variety of decisions. They certainly
have this in common with education, since in the acceptance and solution of such
open situations and tasks it is preferably updated as a progressive movement of the
subject’ (Gruschka, 2005, 16).
In this sense, Connell, Sheridan and Gardner (2003) make a fundamental contribution to the categorical differentiation between abilities, competencies and expertise, an important step towards establishing a theory of multiple competence.
The concept of multiple competence, based on Howard Gardner’s concept of
multiple intelligence, takes account of the state of competence and knowledge
research, according to which several relatively autonomous competences can be
distinguished in humans, and which can vary greatly among individuals—depending
on their professional socialisation and qualification.
The concept of multiple competence can be based on the results of expertise
research and vocational qualification research, which have shown that vocational
competences are domain-specific and, above all, that vocational-specific practical
knowledge has its own quality (Haasler, 2004; Rauner, 2004). According to this,
practical knowledge does not arise from theoretical knowledge as it exists in the
objectified form of subject-systematic knowledge in the system of sciences. It has its
own quality, which is based on its mode of origin.
In this context, Gardner points out that theories and concepts with which cross-
vocational (key) competences are assumed cannot be supported on the basis of his
theory. He exemplifies this with the term ‘critical thinking’: ‘I doubt whether this
critical thinking should be seen as a process of thinking in its own right. As I have
explained with reference to memory and other presumed horizontally operating


abilities, their existence becomes questionable upon detailed analysis. The various
functional areas are probably assigned their own forms of thinking and criticism.
Critical thinking is important for musicians, historians, systems biologists, chore-
ographers, programmers and literary critics. To analyse a fugue, however, a
fundamentally different way of thinking is required than to observe and classify
different biological species, to publish poems, to debug a computer program or to
choreograph and work on a new ballet. There is little reason to believe that the
practice of critical thinking in one domain could be identical to the corresponding
training in other fields [...] because each has its own objects, procedures and modes
of connection’ (Gardner, 2002, 130).
This determines a second essential feature of multiple competence. Modern intelligence research’s criticism of the one-dimensional concept of intelligence is comparable to the criticism of a reductionist concept of vocational competence limited to the subject-functional dimension and the associated definition of a cross-professional area of general vocational (key) competences.
The attempts of Howard Gardner’s research group to transfer the concept of
multiple intelligence to the empirical analysis of vocational competences make it
possible to define the concept of multiple competence more precisely, beyond the abstract definition as a domain-specific performance disposition (cf. Connell et al., 2003).
With multiple competence, two different aspects of professional competence can
be highlighted:
• Abilities can be conceptualised as functionally integrated intelligence profiles.
The development of specific abilities provides a space for competence develop-
ment (ibid., 140 f.). The concept of multiple intelligence and a model of multiple
competence based on it allow a realistic emphasis on the potentials for competence development offered by professional work on the one hand and on the intelligences belonging to the individuals on the other hand. These differ greatly not only from
individual to individual but also from occupation to occupation.
• The designation of the eight components of professional competence, which in
their interaction constitute the ability to solve holistic tasks, as multiple compe-
tence, emphasises the second aspect of a theory of professional competence
differentiating according to competence profiles—and not only according to
competence levels.
Vocational competence (development) is then a process of developing vocational skills that is determined on the one hand by the individual’s intelligence potential and on the other hand by the requirement structure of the holistic solution of vocational tasks.
Vocational action always takes place in working contexts, which can also sub-
jectively be seen and understood as such in their manifold meaning. Therefore, it is
necessary to supplement the concept of complete working action with requirement
criteria that result from the objective circumstances as well as the subjective
demands on the content and organisation of social work.
56 3 Categorial Framework for Modelling and Measuring Professional Competence

Fig. 3.10 The criteria of the complete (holistic) solution of professional tasks (COMET Vol III, 22)

The criteria for the complete (holistic) solution of professional tasks represent the
partial competences of professional competence. Their expression according to the
levels of work process knowledge can be included in the modelling of the require-
ment dimensions (Fig. 3.10).
Eight overriding demands are placed on the processing or solution of professional
work tasks, which can be differentiated according to the three levels of work process
knowledge. In each specific case, the professionals must determine whether all of these requirements, or only a subset, are relevant to the specific task.
These criteria can be described in more detail as follows (COMET Vol. III, p. 56)9:

Clarity/Presentation (K1)

The result of professional tasks is anticipated in the planning and preparation process
and documented and presented in such a way that the client (superior, customer) can
communicate and evaluate the proposed solutions. In this respect, the illustration and
presentation or the form of a task solution is a basic form of vocational work and
vocational learning. A central facet of professional communication is the ability to communicate clearly structured descriptions, drawings and sketches. The appropriateness of the presentation in relation to the respective facts is an expression of professional action.

9 For the operationalisation of the criteria in the form of a rating scale, see Appendix B.

Functionality (K2)

The functionality of a proposed solution for professional tasks is an obvious core criterion for their evaluation. Functionality refers to the instrumental expertise or the
context-free, professional knowledge and the technical skills. Proof of the function-
ality of a task solution is fundamental and decisive for all other requirements placed
on task solutions.

Sustainability (K3)

Finally, professional actions, procedures, work processes and work orders always
refer to a customer whose interest lies in the sustainability of the work result. In
production and service processes with a high degree of division of labour, the sustainability and utility-value aspects often evaporate in the execution of individual subtasks and in vocational training that is reduced to the action aspect. In
addition to direct use by the user, the avoidance of susceptibility to faults and the
consideration of aspects of easy maintenance and repair in industrial and technical
occupations are important for the sustainable solution of professional tasks. To what
extent a problem solution will remain in use in the long term and which expansion
options it will offer in future are also central evaluation aspects for the criterion of
sustainability and practical value orientation.

Efficiency/Effectiveness (K4)

In principle, professional work is subject to the aspect of economic efficiency. The context-related consideration of economic aspects in the solution of professional
tasks distinguishes the competent action of experts. In doing so, it is important to
continuously assess the economic efficiency of the work and to take into account the
various costs and impact factors. Costs incurred in the future (follow-up costs) must
also be included in decisions on the economic design of vocational work. For
decision-making purposes, the ratio of expenses to operating benefits is accounted
for. In addition, economically responsible action also distinguishes the level of social
assessment. Not all strategies that are coherent on a business management level are
also economically and socially acceptable.

Orientation on Business and Work Process (K5)

It comprises solution aspects that refer to the upstream and downstream work areas
in the company hierarchy (the hierarchical aspect of the business process) and to the
work areas in the process chain (the horizontal aspect of the business process).
Especially under the conditions of working with and on program-controlled work
systems in networked operational and inter-company organised work processes, this
aspect is of particular importance. The conscious and reflected perception and
execution of professional work tasks as part—and embedded in—operational busi-
ness processes are based on and promote contextual knowledge and understanding
as well as the awareness of quality and responsibility based on it.

Social Compatibility (K6)

It concerns above all the aspect of humane work design and organisation, health
protection and, if necessary, also the social aspects of professional work that go
beyond professional work contexts (e.g. the frequently different interests of clients,
customers and society). Aspects of occupational safety and accident prevention are
also taken into account, as well as possible consequences that a solution of profes-
sional tasks has on the social environment.

Environmental Compatibility (K7)

It has become a relevant criterion for almost all work processes. This is about more
than the aspect of general environmental awareness, namely, the professional and
technical requirements for professional work processes and their results, which can
be assigned to the criteria of environmental compatibility. The extent to which
environmentally compatible materials are used in solutions must be taken into
account, as well as the environmentally compatible design of work in coping with
the task at hand. Furthermore, energy-saving strategies and aspects of recycling and
reuse are aspects that must be taken into account for the environmental compatibility
of a solution.

Creativity (K8)

The creativity of a solution variant is an indicator that plays a major role in solving
professional tasks. This results from the highly varied scope for the solution of
professional tasks depending on the situation. The ‘creative solution’ criterion must
be interpreted and operationalised in a special way for each profession. Creativity is
a central aspect of professional competence in the design trade. In other professions,
the ‘creative solution’ criterion is relatively independent as a concept of professional
work and learning. The creativity of a solution variant also shows sensitivity to the
problem situation. Competent experts are looking for creative and unusual solutions
in their professional work that also serve the purpose of achieving goals.
It is implicitly assumed with professional competence that the professionally competent person is not only able to carry out professional actions completely, but is also able to classify and evaluate these actions in their professional and social significance; hence the relevance of the criteria outlined above.
For example, the legal regulation that came into force in 2009 prohibiting the use
of incandescent lamps—for reasons of efficient use of electrical energy—has a direct
impact on the design and operation of electrical lighting systems. In the implemen-
tation of heating systems, for example, the objective conditions include not only a
wide variety of heating technologies, but also the equally diverse controls for their
efficient use and design of heating systems in the specific application situations in
accordance with environmental, safety and health requirements. The objective
circumstances, together with the customers’ subjective requirements for practical
value, sustainability and aesthetic quality as well as the subjective interests of the
employees in a humane and socially acceptable work design and organisation, form
the solution space in which the specific solutions of professional work tasks can be
located. On the basis of the eight criteria shown, the dimension of requirements can
be determined in terms of content in the sense of a holistic action and design concept.
Completeness is required in the sense that, when solving professional tasks in all sectors of social work, none of these solution aspects may be overlooked. For
example, if the aspect of the technological solution level is over-estimated in a work
order and the aspect of financial feasibility or user-friendliness is underestimated or
forgotten, then this can mean the loss of a work order. If safety and environmental
aspects are overlooked in order processing and work design, this may even have
legal consequences.
If one relates the steps of the complete work action to the criteria of the holistic solution of vocational tasks, the concept of the complete work action yields the basic concept of the complete (holistic) task solution as a foundation for organising vocational education processes and for modelling vocational competence.
The objective of domain-specific qualification research is to determine which
qualification requirements and which content-related characteristics are included
with which weight in the processing and solution of professional tasks and how
the respective requirement profile can be described as a domain-specific qualification
and competence profile.
This can also form the basis for describing the scope for solving and shaping
professional tasks.
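The difference between describing test results by competence levels and by competence profiles, which runs through the COMET analyses, can be pictured with a small sketch. The following Python snippet is purely illustrative; the group scores are invented, not empirical COMET data:

```python
# Purely illustrative (invented scores, not COMET data): two groups can reach
# the same overall score, i.e. a comparable competence level, while weighting
# the eight partial competences K1..K8 very differently, i.e. showing
# different competence profiles.

CRITERIA = ["K1", "K2", "K3", "K4", "K5", "K6", "K7", "K8"]

group_a = dict(zip(CRITERIA, [9, 11, 8, 7, 6, 5, 5, 5]))  # strong functional core
group_b = dict(zip(CRITERIA, [6, 7, 7, 7, 7, 7, 8, 7]))   # evenly spread profile

def total(profile):
    """Overall score: the level-oriented view of the result."""
    return sum(profile.values())

def profile_shape(profile):
    """Relative weight of each partial competence: the profile-oriented view."""
    s = total(profile)
    return {k: round(v / s, 2) for k, v in profile.items()}

print(total(group_a), total(group_b))  # → 56 56
print(profile_shape(group_a)["K2"], profile_shape(group_b)["K2"])
```

Identical totals with clearly different shapes are exactly the case in which a purely level-oriented evaluation would miss the differences between the two groups.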
Chapter 4
The COMET Competence Model

4.1 Requirements for Competence Modelling

The outline of the explanatory framework suggests developing a competence structure model for vocational education and training that is open to the specific content
of the profession. Competence models must convey the connection between their
theoretical and normative justification as well as their empirical validation by
subject-didactic and learning psychology research. Competence development as an
object of competence diagnostics and research refers to questions, methods and
results that cannot escape the inextricable connection between intentionality
(normativity) and empirical-analytical factuality.
With the expertise of the Klieme Commission (Klieme et al., 2003) for the
development of educational standards and the establishment of the DFG priority
programme ‘Competence models for recording individual learning outcomes and for
balancing educational processes’ (Klieme & Leutner, 2006), vocational education
and training research was challenged to develop a competence model based on the
superimposed objectives and guiding principles of vocational education and training
which not only takes into account the characteristic features of vocational education
and training, but is also oriented to the criteria for the development of competence
models predicated by established educational research.
The report convincingly explained that the function of a competence model is to
mediate between the guiding principles and goals of a subject or learning area and
the development of test items (see Fig. 3.1).
The definition of competence by established competence research (cf. Klieme &
Leutner, 2006) can be guided by the interest in describing competence as narrowly
and precisely as possible with regard to the context of empirical questions, as this is
the only way to expect meaningful empirical research results. Klieme and Leutner
therefore define context-specific service arrangements for the DFG priority
programme, which functionally refer to situations and requirements in specific

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_4

domains. Competences are therefore learnable and conveyable performance dispositions. This concept of competence, now established in competence diagnostics, is suitable for
1. differentiation from general cognitive abilities (such as intelligence),
2. differentiation from motivation1,
3. the assessment of the adequacy for the objectives of study courses,
4. the modelling of competence and therefore the development of test tasks.
With restriction to the measurement of domain-specific cognitive performance
dispositions, a far-reaching distinction is made for the verification of vocational
qualification requirements, as they are defined, for example, in the occupational
profiles of regulatory instruments in Germany. If it is possible to develop a compe-
tence model which, on the one hand, maps the specific requirements of the ‘VET
learning area’—across all occupations—in the form of competence levels and
simultaneously represents a guide for selecting the occupation-specific content for
the construction of test tasks, then this would be a major step for the development of
competence research. At the same time, this requires clarification of the connections
between testing and competence diagnostics. The latter addresses the following
questions in particular:
1. Which competence levels do the test persons reach in their profession or in the
phases of their competence development from beginner to expert?
2. This is related to the question of the characteristic competence forms and com-
petence profiles—in line with the competence model—for individuals, educa-
tional pathways, forms and systems of education.
3. What is the degree of heterogeneity (between individuals, groups, etc.)?
4. Under what conditions does professional competence develop?
5. How does vocational identity and commitment develop under the various condi-
tions of vocational education and training?
It must therefore first be clarified whether the criterion of developing competence
models for a specific subject or learning area can be applied to vocational education
and training. Can vocational education and training be justified as one learning area?
Or should one define the individual professions as ‘subjects’ instead? As this
criterion is rather vaguely defined by the terms ‘subjects’ and ‘learning areas’,
there is much to suggest that a precise delimitation of the spectrum of learning areas was deliberately avoided. A brief glance at vocational education and training shows that
the large number of training occupations – in Germany this is already 350 within the
scope of the Vocational Training Act – would result in the development of
occupation-specific competence models. The training contents for such different
professions as stonemason and insurance broker make it seem hopeless at first to
develop a cross-occupational competence model. At the same time, it is immediately

1 The COMET project follows the path proposed by Weinert (2001) to separately capture motivation in the form of professional commitment (→ 4.7).
obvious that the development of hundreds of different competence models as the basis for a large-scale competence survey covering a significant proportion of
established occupations would be doomed to failure for practical reasons alone.
The knowledge gained by competence research on the basis of occupation-specific
competence models would also be largely worthless from an educational planning
and education policy perspective, as the basis for cross-occupational and system-
comparative competence research would not be given. In this regard, VET research
is obviously facing a dilemma, to which the COMET project offers a solution.
The following section will examine whether the COMET competence model in
the form of a framework model can also be applied to vocational education and
training as a characteristic learning area.
If one has a competence model based on education theory and empirically
validated, then this can be used to mediate between educative goals and the con-
struction and evaluation of concrete test tasks.
Competence models show a structure of competence dimensions (cf. Klieme &
Hartig, 2007). This can be used to describe the cognitive requirements that a learner
should have in order to – in the case of vocational education and training –
(conceptually) solve occupation-specific tasks and problems. Whether and to what
extent the competence dimensions and their components are interrelated can then be
examined empirically, for example by means of dimensionality analyses.
For vocational education and training, its guiding principles and objectives as well as the basic theories of vocational learning, as presented in the explanatory context, can be translated into a three-dimensional competence model (Fig. 4.1).
The COMET competence model distinguishes between
• the requirement dimension (competence levels),
• the content dimension and
• the action dimension.

4.2 The Levels of Professional Competence (Requirement Dimension)2

The requirement dimension reflects the successive levels of professional competence, which are defined on the basis of skills resulting from working on and solving
professional tasks (see Fig. 3.3, → 2.3). The objective and subjective requirements for
processing and solving professional tasks directly refer to professional skills.
The requirements dimension of the COMET model takes up the criteria of holistic
task solving (→ 3.5.4) and therefore enables the concrete description of empirically
ascertainable competences at different competence levels (Table 4.1): How, for

2 Cf. Rauner (2006), Rauner, Grollmann, and Martens (2007).

Fig. 4.1 The COMET competence model of vocational education and training

example, does a more or less highly competent skilled worker solve a professional
exist between the competence levels as well as the competence profiles of the test
groups that result from the recording of the eight competence components (→ 8.3).
The evaluation of the test results allows a criterion-oriented interpretation of the
quantitative test results (performance values).
The eight criteria (partial competences) of the competence level model with its
four competence levels serve as an interpretation framework (see Fig. 3.10). The
criteria-oriented interpretation of quantitative values includes a pragmatic justifica-
tion of rules, since quantitative limit values must also be defined for the transitions
between two competence levels and rules according to which a test participant is
assigned to a competence level (→ 8.1, 8.2). This distinguishes the COMET
diagnostic procedure from standards-oriented test procedures, which justify the
gradations between competence levels by the complexity or degree of difficulty of
the test tasks.
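The pragmatic assignment rules mentioned above can be pictured as a small decision procedure. The following Python sketch is purely hypothetical: the grouping of the partial competences K1 to K8 into three levels follows the competence model, while the threshold value and the scores are invented for illustration, not calibrated COMET limit values:

```python
# Illustrative sketch of a rule-based level assignment. The grouping of the
# partial competences K1..K8 into the three competence levels follows the
# model described in the text; the "criterion fulfilled" threshold (here 50%
# of an assumed maximum of 15 points per criterion) is a hypothetical
# placeholder, not a calibrated COMET limit value.

LEVELS = [
    ("functional competence", ["K1", "K2"]),
    ("procedural competence", ["K3", "K4", "K5"]),
    ("holistic shaping competence", ["K6", "K7", "K8"]),
]

MAX_SCORE = 15.0              # assumed: five rating items, each worth 0 to 3
THRESHOLD = 0.5 * MAX_SCORE   # hypothetical cut-off per criterion

def assign_level(scores):
    """Return the highest level whose criteria are all fulfilled, provided
    every lower level is fulfilled as well; otherwise the participant
    remains in the nominal-competence risk group."""
    reached = "nominal competence (risk group)"
    for name, criteria in LEVELS:
        if all(scores[k] >= THRESHOLD for k in criteria):
            reached = name
        else:
            break
    return reached

print(assign_level({"K1": 12, "K2": 10, "K3": 9, "K4": 8,
                    "K5": 11, "K6": 5, "K7": 4, "K8": 6}))  # → procedural competence
```

The hierarchical loop reflects the model's logic that procedural competence presupposes functional competence, and holistic shaping competence presupposes both.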
A level model implies that the competence levels represent a hierarchy of increasingly high-quality competences. In the COMET concept, the first level of competence is the lowest and the third level of competence is the highest level to be achieved. The competence levels that a trainee achieves or can achieve apply irrespective of the stage of his or her training.
This established competence model facilitates the qualitative and quantitative determination, on the basis of open test tasks, of the competence level to which a test

Table 4.1 Competence levels in scientific and industrial-technical vocational education and training

Bybee (1997):
I Nominal literacy: Some technical terms are known. However, the understanding of a situation is essentially limited to the level of naive theories. Slim and superficial knowledge.
II Functional literacy: In a narrow range of situations and activities, scientific vocabulary is used appropriately. The terms are not very well understood and connections remain incomprehensible.
III Conceptual and procedural literacy: Concepts, principles and their connections are understood as well as basic scientific ways of thinking and working.
IV Multidimensional literacy: At this level, an understanding of the nature of science, its history and its role in culture and society is achieved.

COMET 2008:
I Nominal competence/literacy: Superficial conceptual knowledge that does not guide action; the scope of the professional terms remains at the level of their colloquial meaning.
II Functional competence/literacy: Elementary specialist knowledge is the basis for technical-instrumental skills. ‘Professionalism’ is expressed as context-free expert knowledge and corresponding skills (know that).
III Procedural competence/literacy: Professional tasks are interpreted and processed in relation to company work processes and situations. Work process knowledge establishes professional ability to act (know-how).
IV Holistic shaping competence/literacy: Professional work tasks are completed in their respective complexity and, taking into account the diverging requirements, solved in the form of wise compromises.

PISA, basic scientific literacy:
I Nominal competence: Simple factual knowledge and the ability to draw conclusions does not go beyond everyday knowledge.
II Functional competence I: Everyday scientific knowledge justifies the ability to assess simple contexts on the basis of facts and simple rules.
III Functional competence II (scientific knowledge): Scientific concepts can be used to make predictions or give explanations.
IV Conceptual-procedural competence I: Elaborated scientific concepts can be used to make predictions and give explanations.
V Conceptual-procedural competence (models): Analysing scientific studies with regard to design and the tested assumptions, simply developing or applying conceptual models.

performance can be assigned, irrespective of the level of competence development in the course of several years of training. The cross-over test arrangement described
below also enables the measurement of professional competence development over
time. In this context, we speak of competence development stages pursuant to the novice-expert paradigm.
When dealing with concepts of competence measurement in empirical educa-
tional research, we encounter the term ‘literacy’. In the context of the PISA study, for
example, basic scientific education was interpreted as ‘literacy’3. In line with the
concept of studying literacy levels presented by Bybee (1997) and taken up in a
variety of ways, four corresponding competence levels can also be distinguished for
vocational education and training with reference to the developed explanatory
framework (Table 4.1).
The differentiation of Bybee’s concept of scientific education (literacy) was
conducted by the Science Expert Group (2001) on the basis of an analysis of the
test items. The functional and conceptual-procedural competence levels were
divided into two subgroups. It remains to be seen whether this will contribute to
an understanding of basic scientific education. The didactic explanatory value of the
Bybee concept certainly lies in the fact that a clear distinction is made between
functional and process-related competence or literacy. From a didactic perspective,
there is also an interesting parallel between the competence level of
multidimensional literacy (Bybee) and holistic shaping competence at COMET.
The emphasis on the processual aspect in the form of procedural competence in the PISA texts for the 2003 survey (‘Prozess und Prozeduren’ [process and procedures], ‘Wissen wie’ [knowing how], Prenzel et al., 2004, 19) establishes a further affinity between the models of professional and scientific competence.
At the latest since the introduction of the vocational education concepts of work process knowledge and business process orientation, process orientation has been considered a key category in the reorganisation and further development of regulatory instruments and the design of vocational training processes. In contrast, the theoretical distinction between declarative and procedural knowledge is considered to be of
rather limited value in curriculum research (Gruber & Renkl, 2000; Minnameier,
2001; Neuweg, 2000).
The operationalisation of competence levels by means of the eight requirement
criteria for the solution of professional tasks or the corresponding competence
components is based on the following reasoning.
The functionality of a task solution and its clear presentation must first be given
before the other solution criteria can unfold their significance. If economic effi-
ciency, practical value and sustainability as well as business and work process
orientation are taken into account when solving the test tasks, then the test subjects
have a professional work concept and thus the level of ‘procedural competence’—in
contrast to a merely limited technical-scholastic, functional understanding of the task
(Fig. 4.2).

3 In contrast to the didactics of general education, ‘literacy’ has not yet found its way into vocational education.

Fig. 4.2 Professional competence: levels, partial competences (criteria) and dimensions

Task solutions that can be assigned to this level of competence show that the competences which take priority from a professional and company perspective are present.
The third level of competence, one of holistic shaping competence, is defined by
skills that go beyond the perspective of company work and business processes and
refer to solution aspects that are also of social relevance. In this respect, this results in
a hierarchisation of the competence components or solution aspects as an extension
of the professional competence scope of the test persons in accordance with their
problem-solving horizon. Operational and company-related solution competences
are based on a purely functional competence.
Nominal competence is not part of vocational competence if, as here, the devel-
opment of vocational competence is introduced into modelling as a characteristic
criterion for the success of vocational education and training. Trainees who only
reach the level of nominal competence are assigned to the risk group. If one
considers the definition of the first level of competence (functional competence), it is highly likely that trainees who do not achieve this level will fail to achieve the training objective, i.e. being able, after completing their training, to independently carry out specialist professional tasks in accordance with the rules typical of the profession. They are only competent at the level of unskilled and semi-skilled
workers. This does not preclude them from developing into skilled workers in
professional practice on the basis of reflected work experience.

Table 4.2 Two examples: Rating scales for the sub-competences ‘functionality’ and ‘environmental compatibility’ (Appendix B)

Each item is rated on a four-step scale: the requirement is not fulfilled at all – rather not fulfilled – rather fulfilled – completely fulfilled.

Functionality/professionalism
• Is the solution working?
• Is the ‘state of the art’ taken into account?
• Is practical feasibility taken into account?
• Are the professional connections adequately represented and justified?
• Are the illustrations and explanations correct?

Environmental compatibility
• Are the relevant provisions of environmental protection taken into account and justified?
• Does the solution use materials that meet the criteria of environmental compatibility?
• To what extent does the solution take into account an environmentally sound work design?
• Does the proposed solution take into account and justify the aspects of recycling, reuse and sustainability?
• Are energy-saving aspects taken into account?

4.2.1 Operationalisation of the Competence Criteria: Development of the Measurement Model

The measurement of professional competence requires the operationalisation of the competence criteria (partial competences) in evaluation criteria (Table 4.2).
The raters evaluate the solutions to the open test tasks (→ 5.2) on the basis of five rating items for each sub-competence (Appendix B), whereby they can differentiate between four possible ratings (→ 6.1, 6.2, 6.3).
The authors of the test questions and (finally) the raters of the pre-test procedure determine which of the 40 rating items are not valid for a test item and do not apply (→ 5.6).
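This scoring step can be sketched in a few lines of Python. The four-step verbal scale and the exclusion of inapplicable rating items come from the text above; the numeric values (0 to 3 per item) and the rescaling to a 0 to 100 range are invented placeholders, not the official COMET scoring rules:

```python
# Hypothetical scoring sketch: the four-step verbal scale and the exclusion
# of rating items declared invalid for a test task follow the text above;
# the numeric values (0 to 3) and the rescaling to 0..100 are invented
# placeholders, not the official COMET scoring rules.

SCALE = {
    "not fulfilled at all": 0,
    "rather not fulfilled": 1,
    "rather fulfilled": 2,
    "completely fulfilled": 3,
}

def criterion_score(ratings):
    """Average the applicable item ratings of one partial competence
    (None marks an item that does not apply to the test task) and
    rescale the result to 0..100."""
    values = [SCALE[r] for r in ratings if r is not None]
    if not values:
        raise ValueError("no applicable rating items for this criterion")
    return 100.0 * sum(values) / (3 * len(values))

# K2 'functionality' with one of the five items declared inapplicable:
k2 = criterion_score(["completely fulfilled", "rather fulfilled",
                      "rather fulfilled", None, "rather not fulfilled"])
print(round(k2, 1))  # → 66.7 (8 of 12 attainable points)
```

Computed this way for each of the eight criteria, the scores yield the competence profile of a test participant that the level model then interprets.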

4.3 Structure of the Content Dimension

The content dimension of a VET competence model refers to the vocational fields of
action and learning as a basis for the construction of test tasks. In international
comparative competence diagnostics projects, it is important to identify content that
is considered characteristic of a subject or learning area in line with a ‘world
curriculum’ (PISA). This necessarily abstracts from the specific national or local
curricula. Deriving the test content from vocational training plans is therefore ruled
out for vocational education and training for several reasons.
1. One of the reasons for comparative large-scale competence diagnostics in VET is
that the test results can also be used to compare the weaknesses and strengths of
established VET programmes and systems with their specific curricula
(Hauschildt, Brown, Heinemann, & Wedekind, 2015, 363 f.). For the COMET
project, professional validity was therefore justified as a criterion for determining
the contents of test tasks. The test tasks for the respective occupational fields must prove to be valid. The professional groups, for example, manage to agree on job descriptions (job profiles) for their professions and, above all, on the project tasks for vocational skills competitions with surprising ease. For the representatives of the respective ‘Community of Practice’, it is almost self-evident what true mastery in their profession looks like.
2. Vocational curricula are geared to specific forms and systems of vocational
education and training. A comparative competence survey cannot therefore be
geared to a specific form of training, e.g. dual vocational training. The vocational curricula in countries with developed dual vocational training, such as
Switzerland, Denmark and Norway, would already be too different. Above all,
the relationship between the definition of higher (national) standards and their local structure in the form of concrete education plans is regulated very differently. In both Switzerland and Denmark, responsibility for the implementation of
ently. In both Switzerland and Denmark, responsibility for the implementation of
lean national vocational regulations in concrete vocational training plans lies with
the actors ‘on site’. The structure of vocational training courses in terms of
content and time is based on very different systemisation concepts. In addition
to the systematic structuring of vocational training courses, the timing of the
training content is largely pragmatic. Scientifically based vocational training
concepts are the exception. In Germany, for example, the introduction of the
learning field concept was a move away from framework curricula with a
systematic structure. However, an alternative systematisation structure for the
arrangement of learning fields or training content was not explicitly specified. The
reference to the ‘factually logical’ structure of the learning fields leaves open what
distinguishes them from a subject-systematic content structure.
For vocational education and training, the establishment of a validity criterion for the contents of training or for the corresponding test tasks is therefore of particular importance, since training for the same field of employment takes very different forms. School-based, vocational-school-based, in-company and dual forms of training compete with each other, nationally and internationally. It is indisputable that vocational training aims at employability. This includes the qualifications that enable students to pursue a profession. The terms ‘qualification’ and ‘competence’ are often used synonymously in colloquial language and in vocational education policy discussions. It was explained why the two categories must be distinguished in the scientific justification of testing and diagnostic procedures (→ 2.2). The degree to which different forms of training are capable of teaching cognitive performance dispositions at the level of employability within the framework of corresponding vocational training courses is the subject of competence diagnostics. This also applies to measuring competence development at defined points in time during vocational training. The decisive quality criterion for the content dimension of the competence model is therefore professional validity.
If the content dimension is described in the form of a model for systematising the
training content, which can claim general validity for vocational education and
training, then this has two advantages. Firstly, it meets the criterion of providing
for a procedure for identifying training content that allows vocational education and
training to be defined as a learning area. It was explained that the novice-expert
model makes it possible to structure the occupation-specific training contents
according to a learning area model (COMET Vol. I, Sect. 3.3). Franz Weinert
describes the novice-expert paradigm ‘as the most important empirical-analytical
approach to expertise research’ (Weinert, 1996, 148). The paradigmatic significance of the model rests, on the one hand, on development and learning theories such as
• The theory of situational learning and the ‘Community of Practice’ (Lave &
Wenger, 1991),
• The theory of ‘cognitive apprenticeship’ (Collins, Brown, & Newman, 1989),
• The developmental theory of Havighurst and its application in (vocational) education research (→ 3.5),
and, on the other hand, on expertise research, which is consistently based on the
novice-expert paradigm with its models of graduated competence development.
Developmental educational research has been considered a cornerstone of curriculum development and research since the 1970s (Aebli & Cramer, 1963; Bruner,
1977; Fischer, Jungeblut, & Römmermann, 1995; Lenzen & Blankertz, 1973). The
paradigm of developmental logic only emerged gradually during the scientific
support of model experiments (Blankertz, 1986; Bremer & Haasler, 2004; Girmes-
Stein & Steffen, 1982) and in the extensive empirical studies on the development of
competence in educational and nursing professions.
If one assumes that vocational education and training bases its legitimacy primarily on the fact that it challenges and promotes ‘growing’ into a profession, the development from beginner to expert, by giving learners the opportunity to develop their professional competence by solving professional tasks, then a development-theory-based model for structuring the content dimension of a vocational competence model suggests itself.
characteristic of a profession for beginners, advanced beginners, advanced and
expert persons in the field provides a cross-professional basis for the systematic
identification and selection of content for the construction of occupation-specific test
tasks.
The COMET competence model therefore features a content dimension based on
learning and development theory, the didactic implementation of which for the
professional and occupational field-related development of test tasks makes it
possible to implement a cross-occupational test concept in a job-specific manner.
This permits a comparison between the development and levels of competence of
learners in different occupations and different vocational training systems. At the same time, this concept of structuring training content offers the possibility of systematically measuring vocational competence at different stages of vocational training (Fig. 4.3).

Fig. 4.3 Assignment of test tasks to the VET learning areas as a basis for cross-over design (cf. COMET Vol. II, 27)
When applying and designing the content dimension of the competence model, a
distinction must be made between the stages of competence development (from
beginner to skilled worker) and professional competence at the end of vocational
training. If, on the basis of domain-specific qualification research, the gradual development of vocational aptitude is organised into qualifications that build on one another in terms of developmental logic, together with the corresponding fields of vocational action and learning, the result is a task structure that can serve as a basis for the development of test tasks.
Whenever one wants to examine competence development over the entire training period, it is necessary to identify the characteristic professional work tasks and to arrange them in line with development tasks. The simpler case arises when the level of vocational competence is to be measured towards the end of a vocational training course. In this case, the reference point for the content of test development is professional competence as it is available in the form of job profiles or job descriptions. In international comparative studies, it is advisable not to harmonise the formalised job descriptions laid down in standards or training regulations: formal regulations would take on disproportionate importance and would stand in the way of test development. According to the experience of the internationally
comparative COMET project (German and Chinese apprentices/students of electri-
cal engineering and electronics), the selection of appropriate (characteristic) test
tasks in terms of content is largely trouble-free. In addition to their common
professional subject, the implicit validity criterion applied by the content specialists involved is skilled work in electrical engineering and electronics. Agreement on the contents of competence development takes place, on the one hand, at the level of professional fields of action and, on the other hand, directly through the selection and development of test tasks.
The operationalisation of the content dimension includes the definition of the
concept of open complex test tasks. In the COMET project, two test tasks were
processed by each test person, with a maximum processing time of 120 min per test
task. The number of open test tasks, which are completed in a test time of around
240 min, must be decided by the content specialists on a job-specific basis. Two
criteria must be taken into account: the types of action to be distinguished in a
profession and the representativeness of the test tasks for the professional fields of
action.

4.4 The Action Dimension

Parallel to the vocational-pedagogical differentiation of the categories of vocational education and vocational competence, the guiding principle of ‘complete work action’ prevailed in the labour-science discussion and in the labour-science research aimed at the humanisation of working life. The many attempts to justify the concept of the complete working process scientifically obscure the insight that this category also has normative roots. The concept arises from the critical examination of Tayloristic work structures and from the interest in countering the deskilling of fragmented work processes with a labour-science design concept. Empirically, the concept of complete work action is based on a large number of ‘HdA’ (Humanisation of Working Life) and ‘Work and Technology’ projects, which demonstrated that non-Tayloristic forms of organising social work offer a competitive advantage under the conditions of international quality competition (Ganguin, 1992).
With reference to Hellpach (1922, 27), Tomaszewski (1981), Hacker (1986) and Volpert (1987), Ulich highlights five characteristics of ‘complete tasks’:
1. The independent setting of goals that can be embedded in overriding goals,
2. Independent preparation for action, in the sense of taking on planning functions,
3. Selection of means including the necessary interactions for adequate achievement
of objectives,
4. Implementation functions with process feedback for possible corrective action,
5. Control and feedback of results and the possibility to check the results of one’s
own actions for conformity with the set goals (Ulich, 1994, 168).
It is remarkable that Ulich emphasises the category of ‘complete tasks’ and thereby establishes a connection to work design as a central subject of labour-science research. The inclusion of the action dimension in the COMET competence model stands in the tradition of this labour-science task design, which always also considers the design of work tasks from the aspect of personal development. The programmatic significance that the concept of complete action (task design) has acquired in vocational education has one of its roots here. Another is its medium level of operationalisation: the differentiation of complete work and learning action into successive action steps. For the didactic actions of teachers and trainers, this scheme offers a certain degree of certainty. This action structure model has meanwhile also been used internationally, in connection with the introduction of the learning field concept, in the development of vocational curricula.
The action dimension was included in the COMET competence model and differentiated into six action steps with the intention of establishing the concept of the complete task and problem solution, which is formed by the criteria of the requirement and the action dimension. This further differentiates the competence model as a basis for the development of test and learning tasks and for the evaluation of task solutions.

4.4.1 The Importance of Action Types

The description of the action dimension must be qualified, since the steps of complete work action amount to a structure of rational didactic action that does justice above all to the action situations of beginners, and less to those at advanced and expert levels (cf. above all Dreyfus & Dreyfus, 1987). In this context, the vocational education discussion distinguishes between the rational and the creative-dialogical type of action (Brater, 1984). Both types of action are of fundamental significance in all occupations, each with a different weight. Professional tasks with a clearly defined goal, e.g. in the form of a specification for the solution of a technical task, are characterised by the fact that the precisely specified goal suggests a well-structured procedure: the purpose determines the procedure for solving the task. The concept of complete work action has a clear affinity to this rational type of action, which is particularly pronounced in specified work projects and processes in which the scope for action and design is limited. If there is room for manoeuvre in the phase of order formulation, it is already restricted or eliminated in work preparation by precisely specified work steps.
An open objective and a course of action that can be planned only to a limited extent are characteristic of the creative-dialogical type of action. The sequence of action steps emerges only in the work process itself. Educational processes, for example, are largely open. Teachers and educators take up the impulses, suggestions, questions and answers of the children/students. As subjects of the learning process, the learners participate in determining the course of the educational process. To a certain extent, a teacher anticipates the possible reactions of his students when planning a lesson: he mentally acts out the lesson with its different possible situations. However, the actual course of a lesson can be anticipated only to a very limited extent. Actions in diagnostic work processes are very similar, in personal services as well as in industrial-technical professions in which fault diagnostics play a special role. The creative-dialogical type of action is particularly pronounced in
artistic professions. A painter is guided by an idea of content when painting a picture.
However, the way in which a painting takes on its final form arises from a constant
dialogue between the artist and the resulting painting.
In professional work, both types of action overlap. If the design-dialogical form predominates in a professional activity, it is expedient to create open test tasks in the form of situation descriptions whose time frame keeps the possible courses and branches of action manageable and describable for the test persons.
These considerations must be taken into account when designing test tasks. For the
planning and conceptual analysis and processing of a technical task, two test tasks
are justifiable if the rational type of action prevails. The anticipation of an educa-
tional situation in the context of a test with processing times of approx. 120 min
seems rather unrealistic, since the pedagogical-didactic action of educators and
teachers can usually be anticipated only for shorter time cycles. A comparison with a chess player is instructive: the anticipation of possible moves rests on anticipating the other player’s behaviour. More than four or five moves ahead cannot be calculated, as the number of possible moves increases exponentially (which is why the actual play of chess players is mainly based on knowledge of typical positional patterns). We therefore propose increasing the number of open complex test tasks for
test tasks. This reduces the processing time to (max.) 60 min per test task. The form
of thinking through branched action processes should be retained, since it is a
characteristic of professional competence or professionalism in occupations with a
high proportion of design and dialogical forms of activity. At the same time, this test
form reaches its limits if the action situations to be anticipated are extended too far in
time. The rule to be observed in this context is therefore: Test tasks must remain
manageable in their alternative courses of action.
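The exponential growth invoked in the chess analogy is easy to make concrete. In the sketch below, the branching factor of three alternatives per step is an arbitrary assumption chosen purely for illustration:

```python
# Number of distinct courses of action after d steps when every
# situation branches into b alternative continuations: b ** d.
def action_sequences(b: int, d: int) -> int:
    return b ** d

for depth in (1, 2, 3, 4, 5):
    print(depth, action_sequences(3, depth))
# Even with only three alternatives per step, five steps of
# look-ahead already span 3**5 = 243 distinct courses of action.
```

This is the quantitative reason why test tasks with pronounced dialogical action forms must keep their anticipated action situations short: the tree of branches quickly exceeds what a test person can think through.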
Only on the basis of empirical studies will it be possible to define this rule more precisely. This includes studies that determine the limits that the contents of professional work impose on standardised competence diagnostics.

4.5 A Cross-Professional Structure of Vocational Competence

In order to take all occupational fields into account, the definitions of competence
levels must be sufficiently general or supplemented by differentiating references to
the different employment sectors. Thus, for example, in a cross-professional descrip-
tion of procedural competence, terms referring to ‘company work’ are avoided,
since, for example, activities in educational institutions, in the health sector or in
administration are rarely associated with the category of ‘company’. Comparable editorial corrections apply to the description of the competence level ‘holistic shaping competence’ (→ 4.2).

Fig. 4.4 Adjustment effort for the implementation of the competence model in different occupational fields (examined on the basis of 48 expert assessments)
At the level of the eight competence components assigned to the competence levels, the challenge is to define these components in such a way that they evoke, for users in the different occupational fields, a sufficiently concrete idea of the competences to be imparted. A content analysis of the explanatory descriptions, as part of an empirical review of the criteria for occupations in the education and health sectors and for commercial occupations, then reveals the need for adaptation.
If one differentiates the adjustment effort according to the sectors of industrial-technical, commercial-service and personal service occupations, it increases steadily in this order. If the professional effort required to adapt the criteria and items of the competence and measurement model is plotted on the vertical axis of a two-dimensional diagram, 100% corresponds to a completely new version of the criteria and items. On the horizontal axis, the notional distance between the training contents and objectives of a given occupational field and those of the electrical professions involved in the COMET project can be plotted. The greatest assumed distance in terms of content is to the professions in the education and health sector. Content specialists estimate the effort required to adapt the formulation of competence criteria and evaluation items at a maximum of 20% for personal service occupations (Fig. 4.4).
The ‘subject’ of training is a technical one for industrial-technical occupations and an economic one for commercial occupations. For pedagogical professions, on the other hand, it is about the development of personality. This mainly explains the differences in the description of competence criteria and items with which the competence levels are defined (Table 4.3).⁴

Table 4.3 Adaptation of the evaluation criterion ‘Orientation towards utility value/sustainability’ to different occupational fields (deviating criteria are highlighted in grey) (Appendix B)

Industrial-technical professions:
1. Is the solution highly practical for the customer?
2. How user-friendly is the solution for the immediate user/operator?
3. Is the aspect of avoiding susceptibility to malfunctions/unpredictability taken into account and justified in the solution?
4. Are aspects of long-term usability and expansion possibilities considered and justified in the solution (for example, creating a reusable template)?
5. Is the proposed solution easy to maintain and repair?

Commercial professions:
1.–4. As for the industrial-technical professions.
5. Is the solution adaptable/flexible (e.g. quick reactions to disturbance factors)?

Personal service professions:
1. What are the subjective benefits of the solution for patients, qualified medical employees and doctors?
2. What are the objective benefits of the solution for patients, qualified medical employees and doctors?
3. Is the aspect of avoiding susceptibility to malfunctions/unpredictability taken into account and justified in the solution?
4. Are aspects of long-term usability and expansion possibilities considered and justified in the solution (for example, creating a reusable template)?
5. Is the task solution aimed at long-term success (avoiding the revolving-door effect)?

4.6 Extending the Competence Model: Implementing Planned Content

The test-methodological argument for restricting competence diagnostics to the measurement of domain-specific cognitive dispositions no longer applies if the COMET test procedure is further developed into an examination procedure. The resources available for conducting examinations in accordance with the Vocational Training Act allow practical skills to be included in competence diagnostics as a subject of the practical examination (→ 7.1).

⁴ For processing the criteria and items assigned to the competence levels when assessing solutions to tasks in personal services, see appendix.

Table 4.4 Steps of the practical exam

4.6.1 Operational Projects/Orders

Established forms of practical examinations are company projects (for IT occupations), journeyman’s pieces (for some trades) or company contracts (e.g. for electronics occupations). The practical examination usually includes a planning phase.
In a first step, the project or operational order to be implemented is conceptually
planned before the planned content is practically implemented in a second step. This
can result in corrections to the planned procedure and the expected result (product).
The result of the project/order must be checked (quality control) and documented
before it is handed over to the customer/client (Table 4.4).
This is followed by an expert discussion in which the candidate has the opportunity to justify his ‘project’ (procedure and result).
A competence-based integrated examination includes the evaluation of the practical implementation of the project or the operational order as well as the documentation and justification of the work result and the planned procedure.
The documentation of the project comprises three points:
1. The task definition (e.g. an operational order) of the client/customer.
The order/project is described from the customer’s perspective or the utility-value perspective. Specifications in the sense of a specification sheet are avoided where they would already concern solution aspects. A central aspect of the examination is
that the examinee must translate the customer order (the situation description of
the client) into a specification. It may also turn out that individual customer
wishes are not feasible or individual requirements contradict one another: wish
A excludes the consideration of wish B.
2. Description and justification of the order’s planned solution.
3. Documentation of the project/order result (implementation of the planned con-
tent), quality control and quality evaluation.
A competence-oriented practical examination includes the assessment of professional competence to act. The solution and/or the result and the procedure for implementing the plan are therefore not only documented but also justified in detail (‘Why this way and not otherwise?’). This documentation (points 2 and 3) is evaluated using the COMET evaluation procedure (extended measurement model, see → 7.1).
In all cases in which the practical implementation of an order yields products whose quality cannot be determined from the documentation, these products must be included in the evaluation.
In all professions in which practical competence consists of communication and interaction skills (counselling, teaching and educating learners, clients, customers, etc.), a rating process based on observation is required. Professions with such competence profiles require the development of specific competence models. The competence and measurement model for measuring the competence of vocational teachers is an example for this sector of professional activity.
The first results of the psychometric evaluation of this competency and measure-
ment model are available (Zhao, 2015; Zhao, Zhang, & Rauner, 2016).

4.6.2 The Expert Discussion

A high value is attached to the expert discussion as a central component of the practical examination (BMBF, 2006b; → 7.2). In some examination regulations, the assessment of the practical examination is based exclusively on a (maximum) 30-minute expert discussion. This practice overburdens the expert discussion, as the entire evaluation of the practical examination depends on its course. The reliability and validity of such expert discussions are not very high, because expert discussions are dialogues whose course inevitably results from the situation and is decisively influenced by the experiences and expectations of the examiners.
An expert discussion conducted after the rating of the project documentation, as part of an extended rating procedure, can build on that documentation, including the justification of the project result and of the course of the project assessed with a standardised rating procedure (assessment of professional competence to act). The rating result shows which aspects of the solution were not, or were insufficiently, taken into account by the candidate. The expert discussion then has the function of checking whether the examinee knows more than is shown in his documentation and justification; it can serve to examine, for example, whether an examinee can justify specific solution aspects at a higher level of knowledge.

Example: In the documentation, an examinee (electronics technician) justified the decision for an overcurrent protection device with reference to a VDE 0100 regulation. The expert discussion served to clarify whether the examinee could explain the meaningfulness of this regulation for the specific case and justify it, where appropriate, in consideration of other possible solutions.

The expert discussion that takes place after the rating of the project documentation can draw on the rating results and clarify whether the examinee knows more than he has described and justified in his documentation. The result of the expert discussion then either confirms the rating result, i.e. the candidate has already documented and justified his project in line with his competence, or it leads to corrections for individual rating criteria.
For the application of the COMET rating procedure, a double rating is recommended, as is customary in examination practice: either two examiners independently evaluate the project result and then agree on a joint rating (for all items), or a team rating is carried out from the start. In both cases, this contributes to greater consistency in the assessment of examination results. Changing the composition of the examiner/rating teams is one form of implicit rater training.
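The gain in consistency from double rating can be monitored with a simple agreement measure. The following sketch is purely illustrative: the exact/adjacent-agreement statistic and the sample ratings are assumptions, standing in for the more elaborate interrater-reliability coefficients used in competence diagnostics; only the setting (two examiners independently rating the same items on a four-level scale before agreeing on a joint rating) follows the text.

```python
# Two examiners independently rate the same items on a 0-3 scale.
# Before they negotiate a joint rating, the share of items on which
# they gave the same (or a neighbouring) rating indicates how
# consistent their independent judgements are. Illustrative data only.

def agreement(rater_a, rater_b, tolerance=0):
    assert len(rater_a) == len(rater_b)
    hits = sum(1 for a, b in zip(rater_a, rater_b)
               if abs(a - b) <= tolerance)
    return hits / len(rater_a)

a = [2, 3, 1, 2, 0, 3, 2, 1]
b = [2, 2, 1, 2, 1, 3, 2, 0]
print(agreement(a, b))               # exact agreement: 0.625
print(agreement(a, b, tolerance=1))  # within one level: 1.0
```

Tracking such a statistic across changing examiner-team compositions would make the "implicit rating training" effect mentioned above visible over time.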

4.6.3 Rater/Examiner Training for Assessing the Practical Exam

Rater/examiner training follows the procedure of the COMET rater training (→ 4.5.5) and is based on selected documentation of company projects. In order to achieve a high degree of interrater reliability, it is advisable to make the practical examples as well as the reference values of training courses already conducted available nationwide.

4.7 Identity and Commitment: A Dimension of Professional Competence Development

The novice-expert paradigm describes how beginners become experts from the perspective of developing professional competence. It is to the credit of Herwig Blankertz and Andreas Gruschka that they introduced an extended understanding of development in their work on the developmental-logical structuring of vocational curricula. Vocational training is always about a coherent process of competence and identity development. Herwig Blankertz explained that, without the development of professional identity, no competence development would be conceivable (Blankertz, 1983, 139).
In this context, Walter Heinz points to another aspect of professional identity development, that of shaping one’s own biography: ‘In the industrialised service society, the gravitational point of professional socialisation processes shifts (...) from socialisation (in line with learning conventional social roles) to individualisation. For professional socialisation, this means that the internalisation of labour standards is gradually giving way to the formulation of subjective demands on work content and the active shaping of professional biographies’ (Heinz, 1995, 105).
In this situation, educationalists are not the only ones who point to the importance of vocational identity as a self-concept, and of vocational training as a form of education that protects trainees and employees from disappointed trust in companies’ care for their employees (see also Brown, Kirpal, & Rauner, 2007).
If the four levels of increasing work experience and the corresponding learning
areas (Fig. 3.7) are applied as described above, the successive learning areas can be
assigned levels of identity development. In the transition from vocational choice to
vocational training, the learning area of occupation-oriented work tasks corresponds to a hypothetical job description which, depending on the quality of occupation-oriented training, matches the reality of the profession to a greater or lesser extent. Notwithstanding this, there may already be strong identification with the
training occupation at the start of vocational training. This is particularly true for
trainees who have very strong career aspirations at an early stage. At the start of
the vocational training, professional identity is shaped by a job description that the novice has acquired through narratives, literature for children and young adults, the public media and, increasingly rarely, through experience, e.g. of parents’ professional work. In the best case, trainees have experience gained through work placements in the run-up to vocational training. In any case, the subjective occupational profiles (the pre-professional identity) are confronted with professional reality. In
vocational education and training with a developmental structure, the vocational
work tasks at the start of training give an idea of ‘what the chosen occupation is
mainly about’. Initially, the outlines of an experience-based occupational profile
emerge, one which develops into a mature subjective occupational profile as voca-
tional training progresses—and above all with increasing breadth and depth of
reflected work experience. Trainees gradually develop a reflected professional
identity that allows them to classify their professional role in the company’s business
processes and the company's organisational development processes. The development of experience-based professional identity goes hand-in-hand with the ability to view one's own work from a superordinate, context-related perspective: from the viewpoint of cooperation with specialists from other occupations, with managers at different management levels and with the customers placing the work orders. The dialectic between taking on the professional role and at the same time being able to reflect on it from a distance (role distance) unfolds its effect.
Professional identity development is based on four sources.
• Formal professional role identity is defined by training regulations and regulated
job descriptions. These are reflected in examination and training regulations.
• Informal professional roles represent the expectations of society: occupations
have a social image that trainees and professional specialists are aware of. To
what extent this shapes their role identity is the subject of vocational research.
4.7 Identity and Commitment: A Dimension of Professional Competence Development 81

• Both the formal and informal professional role identities are decisively influenced by the requirements and expectations of the community of practice and company managers and trainers. This is associated with the development of professional identity as a passive or active role identity. For example, early participation in the
processes of company organisational development with an emphatically business
process-oriented training concept will promote the development of an active role
identity.
• The interest in the content of professional tasks is a fundamental determinant for
the development of professional identity. If this interest is very pronounced, then
the other determinants of professional identity development lose importance.

4.7.1 Normative Fields of Reference for Commitment and Work Morale

In his article 'The Cultural Embedding of the European Market', Carlo Jäger (1989) explains the need to distinguish between work morale and professional ethics, as the two categories refer to different normative fields. In modern culture, work has lost the odium of a curse. With the emergence of wage labour, a normative field emerged on a global scale that was experienced and accepted as one of the driving forces behind the success story of industrial society. Since then, the central value of work has been supported in industrial culture by a wreath of different work ethics, which were later (in the twentieth century) critically described as secondary virtues (diligence, discipline, punctuality, etc.).
Industrialisation was accompanied by a large exodus of workers from agriculture.
Migration movements and flows reinforced the emergence of a labour market for work that anyone can perform (mass work). The development and rapid expansion of mass
production required mass training of the workforce.
Kliebard suggests that it was not only the consistently hierarchical and vertical division of labour but also performance-related wages that became characteristic of mass industrial work. Job satisfaction was to be ensured by rising wages, while the basic source of motivation was a performance-related work ethic.
Jäger, Bieri and Dürrenberger (1987, 75) understand working morale as ‘a
constitution of conscience that demands that the work—no matter whether laborious
or misunderstood in essence—be carried out in accordance with the contract,
obediently, promptly, precisely, punctually, etc.’. This confirms that scientific man-
agement, as formulated by Taylor, had also found its way into European industry.
For example, a manual from the Central Association of the German Electrical Industry explains the industrial electrical occupations regulated in 1972: 'The task of the communication device mechanic is to assemble modules and components, to assemble simple device parts and devices, and to wire and connect these according to samples and detailed instructions. He carries out simple tests of electrical components, assemblies and device parts with the corresponding measurements according to precise testing and measuring instructions. His area of responsibility also includes simple maintenance and repair tasks' (Zvei, 1973, 13). Until the 1970s, vocational training planning in Germany was clearly influenced by Taylorism and the normative field of work ethics.

4.7.2 Professional Ethics

With reference to a series of industrial sociological studies, Carlo Jäger shows how work ethics eroded in the second half of the twentieth century. He attributes this process to the wage explosion, combined with the fact that unskilled migrant workers were no longer available in unlimited numbers, and derives from it the thesis that a European labour market oriented solely to the normative field of work morale would inevitably result in mass unemployment and sluggish productivity development (Jäger, 1989, 566). Based on his theoretical and empirical studies, he concludes: 'Regardless of work ethics, there seems to be a normative field that emphasises the qualities of cooperation and communication rather than the character of deprivative duty in professional life. We call this normative field "professional ethics"' (ibid., 567).
In summary, Carlo Jäger comes to a result that is interesting for vocational education and vocational training research and that challenges them in their shaping tasks:
‘European culture, understood as a comprehensive normative field, developed a
new form of social differentiation and personal identity formation with professional
ethics at the end of the Middle Ages. The social system of the European labour
market, which has been crystallising for several decades, has so far hardly taken this
into account and instead referred to normative fields with their work ethics, which
have become significantly less important in the same period’ (ibid., 570).

4.7.3 Organisational versus Occupational Commitment

The erosion of work ethics is directly associated with the rise and fall of commitment
to organisations, as investigated by commitment research. Since the 1950s, various
forms of commitment have been empirically researched in management and
behavioural research (especially in the USA). Despite all the differences in the
theoretical location of the research approaches in different sciences, there is one
striking commonality.
The categorical distinction between work ethics and professional ethics corresponds to the distinction in commitment research between organisational and occupational commitment (Baruch, 1998; Cohen, 2007), i.e. between commitment to the company and commitment to the profession. In commitment research, meta-studies have shown that organisational commitment has been declining steadily since the 1970s. As this commitment is based on employees' emotional attachment to the company, this means that these ties are gradually weakening. The volatilisation of stable relations between companies and employees confronts commitment research with the erosion of its basic category and opens up a field of research for vocational training research: to elucidate the interactions between the development of vocational and organisational identity as well as professional ethics and work ethics (Heinz, 1995; Rauner, 2007a).
Here, one can speak of a paradox, 'as the flexibilisation of labour markets is not accompanied by a flexibilisation of professional work—by a departure from the occupational form of societal work—but, on the contrary, by an upgrading of the occupational form of societal work' (Kurtz, 2001).
The stronger commitment of employees to their profession also justifies their
willingness to perform and to take on responsibility in the sense of intrinsic moti-
vation and at the same time emancipates them from a deceptive emotional attach-
ment to a company that may not be able or willing to reciprocate this commitment
and loyalty.

4.7.4 Construction of Scales to Capture Work-Related Identity and Commitment

Occupational Identity

As occupational identity is tied to the respective profession, a cross-professional concept cannot base a scale for capturing it on assumptions about the specific content of such an identity. It is therefore not possible to determine to what extent a specific professional role has been assumed. This distinguishes the term occupational identity used here from those
frequently used ones, which aim more strongly at acquiring implicit or explicit
knowledge in addition to professional action competence, or being a member of a
certain profession, i.e. sharing a certain universe of thought and action. This also
excludes the more or less successful adoption of a profession-specific habitus that socialisation in a community of practice brings with it and that is often equated with professional identity.
This meta-level carries the risk of excluding essential aspects of growing into a
specific professional role. Therefore, the scale of occupational identity to be mea-
sured does not refer directly to processes of professional socialisation, but to the
subjective disposition to assume the professional role successfully. Martin Fischer
and Andreas Witzel rightly point out in this context that the term professional
identity should not be idealised, for example by deducing a lack of professional
competence from an untrained professional identity, for example due to a change of
occupation (Fischer & Witzel, 2008, 25). It also makes sense to capture subjective dispositions regarding the assumption of the professional role and general professional values, in so far as these can be ascertained independently of the respective qualification path. Which type of training organisation favours or hinders the development of such an occupational identity thus becomes an empirical question.

4.7.5 Organisational Identity

Organisational identity is defined as the emotional attachment of employees to a company. Maintaining and increasing this attachment is a central concern of management research, which sees it as the central cause of company-related commitment (organisational commitment). In a series of meta-studies in the last decades of the twentieth century (e.g. Randall, 1990; Cohen, 1991), the declining organisational commitment of employees was identified and interpreted as a crisis of commitment research. Baruch (1998), for example, aptly captured this development in his essay 'Rise and Fall of Organizational Commitment'. Contrary to this trend, Womack, Jones and Roos (1990) identified a high degree of organisational commitment in the Japanese automotive industry in the MIT study on lean production. As a central
feature of Japanese industrial culture, the industrial sociology literature highlighted
the firm and lifelong ties of employees to ‘their’ company, as far as they belong to
the core workforce, as the reason for their proverbially high performance. The occupational form of industrial work and the willingness to perform based on it are alien to this work culture. The qualification of specialists is embedded in the company's
organisational development and the processes of continuous improvement (Georg &
Sattel, 1992). The high motivation of the core workforce is also the result of the
structure of the Japanese employment system: the division into core and non-core
workforces. The members of the peripheral workforce have a low wage level and
socially insecure working conditions. This structure of the labour market is regarded
as a decisive determinant of the extraordinarily high motivation of the core work-
force in Japanese companies. The discussion about transferring the Japanese pro-
duction concept to other industrial cultures (cf. Kern & Sabel, 1994) soon dried up,
however, since the European culture of ‘humanising working life’ and introducing
leaner corporate structures geared to business processes soon proved to be just as
competitive and innovative as the Japanese ones. The introduction of broadly defined core occupations was seen in this context as a way for the European working world to support a professional motivation of employees based on occupational identity (Grollmann, Kruse, & Rauner, 2005).
Commitment research can also be used to develop a scale to capture
organisational identity, but it does not usually distinguish between organisational
identity and organisational commitment.

4.7.6 Modelling the Connections Between Identity and Commitment

Three questions are of professional and economic interest in this context:


1. How strong is the willingness to perform professionally?
2. Is professional motivation based on factors of intrinsic or extrinsic motivation?

3. To what extent do professional identity, emotional loyalty to the company and the
willingness not to (obediently) question predefined work tasks contribute to
professional motivation?
Comprehensive approaches from commitment research are available for the empirical recording of organisational and occupational commitment. These conceptualise commitment, mainly affectively, as a bond from which engagement in the work activity is expected. There are further attempts to conceptualise different forms of employee bonding empirically. However, approaches such as the Job Involvement Scale (Kanungo, 1982) mix precisely those reference fields of commitment which are to be kept as distinct as possible here.
Preliminary work in organisational psychology was used to determine
organisational commitment. Among the existing scales for measuring organisational
commitment, the generally accepted scale of Meyer and Allen (1991) was used,
among others.

4.7.7 Occupational Commitment

Occupational commitment is based on identification with the profession. Professional self-confidence and identity vary depending on the profession and the vocational training and therefore affect the degree of occupational commitment.

4.7.8 Organisational Commitment

Organisational commitment is based on identification with the company and the underlying emotional attachment to it: 'I am committed to the company'.

4.7.9 Work Ethics

Following Jäger, it makes sense to conceive the reference field of work ethics as an extrinsic work motivation that unquestioningly accepts external guidelines. A scale designed in this way should be limited to abstract work virtues; this is shown by the factor analyses carried out so far.
The term 'work ethics' is therefore used to describe a willingness to perform based on a more or less 'blind' execution of instructions. Following Carlo Jäger, it is an identification with work 'in itself', without consideration of concrete contents.
The scales used to record occupational identity, organisational identity, occupational commitment, organisational commitment and work ethics have been psychometrically evaluated several times. They have already been used in an international context across all professions (COMET vol. IV, 230). Qualitatively
comparable scales can be found for persons who have already completed their
vocational training, but not for trainees. This is one of the particular strengths of the
COMET competence model: the psychometrically evaluated recording of personal-
ity traits of trainees, which are of central importance in the context of competence
diagnostics. The original model included only one form of identity: occupational identity. In connection with the internationally comparative COMET projects, account had to be taken of the fact that in countries with an underdeveloped occupational form of societal work, emotional ties to companies must be given greater weight. This tradition is particularly pronounced in the core workforces of Japanese companies. Lifelong commitment to a company, together with a labour market divided into core and non-core workforces, is regarded as a prerequisite for the high motivation of the core workforces.
The psychometric evaluation of the extended identity commitment model con-
firmed a differentiation between both forms of identity among trainees and technical
college students in German-speaking countries (Kalvelage, Heinemann, Rauner, &
Zhou, 2015). A differentiation according to occupational and organisational com-
mitment proved useful in the development of a typology of occupations (→ 8.5.4).
These scales already contain the corrections proposed by the model verification based on two extensive confirmatory and exploratory factor analyses (→ 8.5.3). The recommendation to combine the scales for occupational and organisational commitment, on the grounds that both scales measure the same thing, was not adopted. The alternative was to revise the items so as to achieve the required selectivity. The argument in favour of this approach was that, in an analysis of the individual occupations, the two scales had already proved useful in the first version (cf. the four-field matrix on occupational and organisational commitment: Figs. 126 and 127).

4.7.10 Example of an Analysis of the Measurement Model (Performed by Johanna Kalvelage and Yingy Zhou, → 6.4)

In two extensive studies (A: n = 1121; B: n = 3030), the model extended by the
component ‘organisational commitment’ was evaluated using both a confirmatory
and an explorative factor analysis. For the psychometric evaluation of the identity–commitment model, this means defining the possible fields of identity and commitment research as precisely as possible so that they can be taken into account in the development and evaluation of the scales.
How are occupational identity and commitment as well as organisational com-
mitment and work ethics connected?
There are many interactions between occupational and organisational identity as
well as occupational commitment, organisational commitment and work ethics. In

the psychometric application, occupational commitment, organisational commit-


ment and work ethics are operationalised as latent constructs by means of indicators.
In this respect, COMET’s instruments enable interdisciplinary and internationally
comparative research. The relationships between the latent constructs can be
modelled empirically (→ 6.4). In this way, quantitative surveys can be used to
identify special features for trainees in different occupations and, if necessary, to
develop pedagogically sound interventions.
In this example, there is a pronounced correlation between occupational identity and occupational commitment (r = 0.65). All correlation values shown are highly significant, with an error probability of only 1%. Working with trainees and their teachers and trainers has increasingly
shown that another dimension rooted in personality dispositions could be relevant
for competence diagnostics: organisational identity. The differentiation between
organisational and occupational identity should make it possible to plan pedagogical
diagnostics more precisely and to intervene in different training occupations.
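The correlation reported above is a standard Pearson coefficient between two scale scores. As a minimal illustration of how such a value is obtained from raw item ratings (all data below are invented for illustration and do not come from the COMET studies; the published analyses model the constructs as latent variables):

```python
from math import sqrt

def scale_score(item_responses):
    """Mean of one respondent's item ratings (e.g. on a 1-5 Likert scale)."""
    return sum(item_responses) / len(item_responses)

def pearson_r(x, y):
    """Pearson correlation between two lists of scale scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented ratings of five trainees on two six-item scales
# (negatively worded items already reverse-scored).
identity_items = [
    [5, 4, 5, 4, 5, 2], [3, 3, 4, 2, 3, 3], [4, 4, 4, 5, 4, 1],
    [2, 2, 3, 2, 2, 4], [5, 5, 4, 5, 5, 1],
]
commitment_items = [
    [5, 5, 4, 4, 5, 4], [3, 2, 3, 3, 3, 3], [4, 5, 4, 4, 4, 4],
    [2, 3, 2, 2, 3, 2], [5, 4, 5, 5, 4, 5],
]

identity = [scale_score(r) for r in identity_items]
commitment = [scale_score(r) for r in commitment_items]
print(round(pearson_r(identity, commitment), 2))  # prints 0.99
```

The raw-score correlation here only illustrates the direction and strength of such a relationship; it is not a substitute for the factor-analytic modelling described in the text.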

4.7.11 International Comparisons

For international comparative surveys, it is necessary to evaluate the five scales (Tables 4.5, 4.6, 4.7, 4.8 and 4.9) and—if necessary—to change them so that comparisons are possible. For example, the categories 'organisational identity' and 'organisational commitment' are omitted when scholastic vocational training systems are included. In such a situation, the other scales also lose some of their significance.

Table 4.5 Occupational identity scale (Cronbach's α = 0.87)
• I like to tell others what profession I have/learn.
• I 'fit' into my profession.
• I would like to continue working in my profession.
• I'm proud of what I do.
• For me, the job is like a piece of 'home'.
• I'm not particularly interested in my profession. (recoded)

Table 4.6 Organisational identity scale (Cronbach's α = 0.90)
• For me, the company is like a piece of 'home'.
• I would like to stay with my company in the future—even if I have the opportunity to move elsewhere.
• I like to tell others about my company.
• I 'fit' into my company.
• The future of my company is close to my heart.
• I feel little connected to my company. (recoded)

Table 4.7 Occupational commitment scale (Cronbach's α = 0.82)
• I am interested in how my work contributes to the company as a whole.
• For me, my job means delivering quality.
• I am absorbed in my work.
• I know what the work I do has to do with my job.
• Sometimes I think about how my work can be changed so that it can be done better or of a higher quality.
• I would like to have a say in the contents of my work.

Table 4.8 Organisational commitment scale (Cronbach's α = 0.71)
• I try to deliver quality for my company.
• I want my work to contribute to operational success.
• I like to take responsibility in the company.
• Belonging to the company is more important to me than working in my profession.
• I am interested in the company suggestion scheme.
• The work in my company is so interesting that I often forget time.

Table 4.9 Work ethics scale (Cronbach's α = 0.69)
• I am motivated, no matter what activities I get assigned.
• I am reliable, no matter what activities I get assigned.
• I am always on time, even when work does not require it.
• I carry out work orders according to instructions, even if I do not understand them.
• Instructions that I consider to be wrong I will still carry out without contradiction.
• For me, work means carrying out professional activities according to precise instructions.
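The Cronbach's α values quoted for the scales are internal-consistency coefficients. A minimal sketch of the computation in plain Python (the response data are invented; items marked 'recoded' are assumed to have been reverse-scored beforehand):

```python
def variance(values):
    """Population variance of a list of numbers."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(responses):
    """responses: one row per respondent, one column per item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    The ratio is unaffected by using population vs sample variance,
    as long as the same divisor is used throughout."""
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # column-wise view
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return k / (k - 1) * (1 - item_var / total_var)

# Invented ratings (1-5) of four trainees on a six-item scale
data = [
    [5, 4, 5, 4, 5, 4],
    [3, 3, 4, 3, 3, 3],
    [2, 2, 2, 3, 2, 2],
    [4, 5, 4, 4, 4, 5],
]
print(round(cronbach_alpha(data), 2))  # prints 0.96
```

With real survey data, established implementations in common statistics packages should be preferred; the sketch only makes the reported coefficients interpretable.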

Fig. 4.5 Extended theoretical model on the relationship between commitment, identity and work ethics

In contrast to measuring vocational competence, international comparisons of identity and commitment with the previous scales are only possible between countries with a dual vocational training system (Fig. 4.5).
Chapter 5
Developing Open Test Tasks

5.1 Expert Specialist Workshops for Identifying Characteristic Professional Tasks

Characteristic professional work tasks are identified using the internationally established method of expert specialist workshops (→ 3.3.2).

5.1.1 The Preparation of Expert Specialist Workshops

Researchers acquire the most precise insights and knowledge possible about the
objective prerequisites and conditions constituting the field of activity to be analysed
on the basis of the state of the art in occupational scientific research, relevant
specialist publications on operational and technological innovations and other
sources. In industrial-technical specialist work, this includes the technical work
process-relevant expertise on technical systems, tools and working processes, as
well as the corresponding documentation and working documents. Similar require-
ments apply to commercial occupations and occupations in the health sector.
Additional work experience or relevant professional studies form the basis for a
checklist of questions that can be used if researchers feel that additional questions are
necessary.
The study of the objective side of the professional field of activity to be analysed should not lead to the formulation of differentiated hypotheses on the job description and the fields of activity, so as not to restrict the dialogue with the experts from the outset or steer their attention towards the framework imposed by such hypotheses. That would unacceptably limit the chances of high internal validity of the investigation in terms of consensus validity.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 91
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_5

→ Please name the most important stages (no more than five) of your
professional development as an “expert in skilled work”.
→ For each professional position, please provide three to four typical examples
of the tasks you have carried out in your professional practice.
→ Please note the professional stations and the examples of tasks on the prepared
overhead slide for the presentation of the results.
→ After 15-20 minutes, we will ask you to present your professional career in
plenary.

Fig. 5.1 Work assignment: individual professional career

5.1.2 Implementation of the Workshop

The workshop roughly follows the temporal and organisational scheme below.
Assignment 1: Individual Professional Career
The work assignment ‘individual professional career’ contains a list of the most
important stages of professional development, from training to expert level in skilled
work. To avoid too fine a breakdown of the career, the number of stations to be described is limited to a maximum of five. Participants whose professional development comprises more than five stations must combine several of them or select the most important ones. To each of the stations of professional development, participants should assign three to four examples of professional tasks they performed there (Fig. 5.1).
Assignment 2: ‘Challenging and Qualifying Professional Tasks’
After the participants have formulated their individual professional careers, they are
asked to mark the professional examples of tasks which they found particularly
challenging in their current professional practice and in the course of which they
have further qualified themselves.

Which of the professional examples of tasks you mentioned have particularly


challenged and qualified you for your current professional practice? Please
mark these examples of professional tasks.

This additional assignment can also be set during the presentation, so that the
participants can specify the particularly challenging and qualifying professional
tasks at the moderators’ request.
Assignment 3: Presentation of Individual Professional Careers
Participants are given the opportunity to present their professional careers on the
basis of the documents they have prepared.

What was the challenge in the professional examples of tasks you mentioned?
Did these tasks challenge your professional expertise and were you yourself
not yet sufficiently prepared for these tasks?
Difficult task: What was the difficult thing about the tasks? At what point
did you realise it was difficult? How did you overcome the difficulty? How
would you deal with such a difficulty today?
Insufficiently prepared: How come you had to take on a task for which you
were not yet sufficiently prepared? What did you find difficult about the task?
When did you realise that you were insufficiently prepared for the task? How
did you overcome these difficulties?

Assignment 4: Creation of Task Lists in Working Groups


The professional work tasks performed by all team members are compiled, discussed
and documented.
The professional work tasks completed by only a few team members are then
identified and briefly introduced by the team members concerned. Afterwards, the
team discusses and decides whether the mentioned professional tasks should be
included in the common list.
Finally, each team examines whether there are professional tasks that no one in
the team has worked on, but which are nevertheless typical for the respective
profession and which may shape the profession in the near future. Such work tasks
can also be included in the documentation.
After these first four work steps, the jointly identified characteristic professional
tasks are assigned to the four learning areas, alternating between small groups and
the entire team (Fig. 5.2).
This is followed by both an internal validation and an external validation of the
work tasks.
Assignment 5: Internal Validation
Internal validation is carried out by the participants of the expert workshops. After
the results of the workshop have been interpreted and evaluated and the work tasks
assigned to the learning areas, the work result is validated internally.
The questionnaire for internal validation (Fig. 5.3) contains the following items:
• Frequency: How often is the professional task performed?
• Significance: What is the significance of the professional task for the profession?
• Difficulty: What level of difficulty does the task have?
• Significance for one’s own professional development: What significance does the
professional task have for one’s own professional development?
For the items 'significance' and 'frequency', the future development is assessed first, i.e. whether the significance or frequency of the professional work task is likely to increase (↑) or decrease (↓) in the future.

Fig. 5.2 Systematisation of professional work tasks (Rauner, 1999, 438)

Fig. 5.3 Questionnaire for the validation of professional tasks. For each professional work task, it records:
• Frequency: evaluation (1–10) and development trend (↑ O ↓)
• Significance: evaluation (1–10) and development trend (↑ O ↓)
• Difficulty: evaluation (1–4)
• Significance for one's own professional development: evaluation (1–10)



5.1.3 External Validation

The subject matter of external validation is the result of the expert specialist workshops (ESWs): the characteristic
professional tasks identified for a profession and their assignment to the four learning
areas. As a rule, occupational scientists, employer and employee representatives as
well as trainers and teachers of vocational fields participate in external validation.
The aim of external validation is to check the professional work tasks outside the
operational context in which the expert specialist workshops are held. The possible
influence of company or industry-specific peculiarities on the description of the
skilled work can thus be uncovered and corrected if necessary. Therefore, the
participants in external validation should have sound knowledge of the profession
to be examined in companies of different sizes and in different industries and
regions.

5.1.4 Evaluation of the Validation

In a first step, averages are calculated from the participants' validation assessments of the individual occupational tasks for the categories 'significance' and 'frequency' as well as for the assigned development trends. The mean values are entered in a diagram whose axes correspond to the two criteria (Fig. 5.4). The averaged development trends can also be represented as vector arrows, which indicate the future development of professional tasks in terms of their significance and frequency.
This diagram can be used to determine the core area of a profession. In this
example, the core area is limited by a minimum frequency and significance of 40%.
Professional work tasks outside of this core area can be assigned to a specific
company or sector. The future development of the job description can be estimated
using the trend arrows. For example, professional work tasks that are not yet part of
the core area of the profession at the time of the analysis but will become more
important and frequent in the future can already be taken into account in the job
description.
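The evaluation steps described above can be sketched in a few lines of Python. The task names and ratings below are invented for illustration; the 40% threshold follows the example in the text:

```python
from statistics import mean

CORE_THRESHOLD = 40  # minimum mean significance and frequency in percent

def core_tasks(ratings):
    """Return the tasks whose mean significance AND frequency reach the threshold.

    ratings maps each task to the participants' significance/frequency
    assessments on a 0-100 % scale (invented example data below)."""
    core = []
    for task, r in ratings.items():
        if mean(r["significance"]) >= CORE_THRESHOLD and mean(r["frequency"]) >= CORE_THRESHOLD:
            core.append(task)
    return core

example = {
    "commissioning": {"significance": [80, 70, 90], "frequency": [60, 50, 70]},
    "fault finding": {"significance": [30, 40, 20], "frequency": [35, 25, 30]},
}
print(core_tasks(example))  # → ['commissioning']
```

The same per-task means can then be plotted against the two axes of Fig. 5.4, with the trend assessments drawn as vector arrows.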
The evaluation of the item ‘difficulty’ can be used to check the assignment of the
professional work tasks to the four learning areas.
Approximately the same number of tasks are planned for each of the four areas of
responsibility. Particular attention must be paid to ensuring that the tasks fit into the
logic of the regulatory scheme (Fig. 5.2) (cf. in detail Kleiner, Rauner, Reinhold, &
Röben, 2002).

Fig. 5.4 Evaluation of the items ‘Significance’ and ‘Frequency’

5.2 An Open Test Format

The obvious choice of reference points for the development of test tasks is
established occupations and professions. Another obvious choice is a pragmatic procedure
as established in the International World Skills (IWS). A relatively large
number of occupations are internationally established. This applies not only to the
craft and health professions such as carpenter, chef or nurse, but also to modern
professions such as electronics technician, computer scientist and numerous commercial professions. The internationalisation of economic development and the
emergence of a European labour market have led to greater harmonisation of
professional activities and occupations. Occupations with similar professional titles
therefore also include comparable fields of professional activity. It is therefore
advisable to compare occupations on the basis of their fields of action in international
comparisons.
The competence levels and the characteristic competence profile of test groups
are measured with the test format of the open complex test tasks (Fig. 5.5) (→ 8).

Fig. 5.5 Open test tasks for assessing process and shaping competence

These can represent local, regional, national and international courses and systems of
the same or formally different levels of qualification. Test items are therefore
developed according to the following criteria.
Test tasks are open to different task solutions and authentic to professional reality.
This is the prerequisite for recording the different competence levels and profiles.
The format of the open test tasks and the associated requirement to justify the task
solution in detail (experts should be able to understand, explain and take responsibility for their task solutions) increase the scope for solutions and make it possible for
relevant technical training courses at various qualification levels to participate in a
test. The prerequisite for the participation of training courses in a competence
diagnostics project is that the content validity of the test tasks is assessed as given.
As the open test tasks can be solved at different levels of knowledge and justification,
a wide range of courses at different qualification levels and with different training
organisation (dual, school-based) can take part in comparative competence surveys,
insofar as they pursue the goal of qualifying for the exercise of relevant professional
activities.

5.2.1 Representativeness

The criterion of the representativeness of the test tasks determines whether and to
what extent test tasks cover a profession’s fields of action. Professional competences,
as domain-specific cognitive performance dispositions, are open to application.
Qualifications examined in examination procedures, on the other hand, are
objectively given by the work tasks and processes and the resulting qualification
requirements. In an examination of professional qualifications, the qualification
requirements must be reviewed in full, if only for safety reasons. Nevertheless, the
two forms of professional skills overlap. The decision on the representativeness and
validity of test items for related programmes concerns both the vertical and horizontal
structure of the education system and the degree to which the curricula are scholastic
(academic) or dual (occupationally qualifying) in structure. In contrast to an examination,
competence diagnostics aims to record the competence levels and competence profiles of test groups (→ 8). A complete review of the professional qualification
requirements defined in the job descriptions is not necessary. In practice, the teachers
and trainers decide which and how many complex test tasks are required to cover
the fields of action characteristic of a profession and to record the competence levels
and profiles of the test participants.

5.2.2 Authenticity/Reality Reference

The test tasks represent authentic work situations. It is ensured that the
partial competences corresponding to the requirement criteria are challenged by their
complete solution (→ 4). This guarantees that not only partial competences such as
environmental compatibility or the functionality of a task solution are measured.
Restricting the complexity that professional tasks have in reality would limit or call into
question the content validity of the test tasks.

5.2.3 Difficulty

When assessing the difficulty of test and examination tasks, the degree of training
must always be taken into account: tasks designed for beginners are easier for
advanced learners and experts to solve than for beginners.
In principle, professional tasks are not solved correctly or incorrectly, but are
always more or less expedient. The criterion of correctness applies to partial
aspects of professional tasks if, for example, the relevant VDI [Association of
German Engineers] safety regulations and electrophysical laws are to be observed
when planning office lighting, and above all when installing it.

5.2.4 The Description of the Situation

Test tasks are always based on authentic, situation-specific descriptions. They
represent the reality of the professional working world. The situation is formulated
from a customer perspective in such a way that it directly or indirectly includes all
relevant requirements for the task solution. The situation description is not a
specification. Specifications are derived from the situation description by the test
persons and are therefore already part of the solution. The COMET test tasks are
therefore based on a criteria-oriented test format and not on a standards-oriented test
concept.

5.2.5 Standards and Rules to be Complied with

The standards and regulations to be observed when solving an occupational test task,
e.g. accident prevention, health protection and occupational safety as well as the
relevant VDI or DIN regulations are not specified in the situation description, since
the test task is used to check whether and to what extent the test persons are familiar
with the subject-related standards and rules and how they apply them in relation to
the situation.
The test authors base the development of the test tasks on examples from related
professions and on the general criteria for test task development (Table 5.1).

Table 5.1 Guidelines for the development of test tasks (Appendix C: Examples of test tasks)
The test tasks
• Entail an authentic problem of professional and company work practice,
• Define a profession-specific—rather large—scope for design and thus enable a multitude of
different solution variants of varying depth and width,
• Are open to design; i.e., there is no right or wrong solution, but requirement-related solution
variants,
• Require the consideration of aspects such as economic efficiency, practical value orientation and
environmental compatibility (see the concept of holistic task solution) in addition to technical-
instrumental competences,
• Require a typical professional approach to their solution. The solution of the tasks concentrates
on the planning-conceptual aspect and is documented using relevant forms of presentation,
• Can also include the practical solution if the test tasks are to be used to test concrete professional
skills,
• Challenge the test persons to solve, document and justify the tasks in the sense of
professionalism (at the respective development level) without excluding reduced solutions.

5.3 Cross-Professional and Subject-Related Test Tasks

There is often an interest in competence surveys on cross-occupational, subject-related fields of action. Welding, for example, is a component of a large number of
metalworking occupations. Since this is a central field of action in the metalworking
professions, there may be good reasons for making this or comparable professional
fields of action the subject of professional competence diagnostics.
The limit of dissolving professional working contexts, and thus also of the
understanding of context as an object of competence diagnostics, is exceeded if
professional fields of action are selected according to subject-systematic aspects
or if abstract subtasks become the content of test tasks within professional
fields of action (→ 3.2). This risk is always present in educational practice when
(university) vocational training courses are involved in competence diagnostics
studies in addition to (dual) vocational training courses. Figure 5.6 shows typical
competence profiles of higher education programmes in which subject-systematic
teaching predominates.
An essential criterion for the participation of training courses in comparative
studies is a vocational or occupationally qualifying training concept. If a higher
education programme claims to be professionally qualifying and there is an interest
in checking whether and to what extent students achieve professional competence,
then an essential prerequisite for a professional competence survey according to the
COMET competence model is given.
For the practical implementation of comparative COMET projects involving
(higher) academic vocational training programmes, it makes sense to develop the
test tasks first for SII and post-SII training programmes that clearly qualify for
vocational training, and then in a second step to check whether the test tasks are
assessed as representative and valid in content by the subject teachers/lecturers of
related (higher) academic training programmes.

Fig. 5.6 Competence profiles of college students (China) (Zhou, Rauner, & Zhao, 2015, 400; for
calculating and presenting competence profiles → 8). (In an earlier version of the competence
profiles, the three dimensions of functional, process-related and holistic shaping competence were
still indicated as competences; the terms KF, KP and KG thus correspond to the dimensions DF, DP
and DG.)

5.4 Test Arrangements for Related Vocational Training Courses with Different Levels of Qualification

The decision on the representativeness and validity of test tasks for related
programmes concerns both the vertical and horizontal structure of the education
system and the degree to which the work tasks of the training regulations are oriented
towards ‘subjects’ or lead to vocational qualifications.
Initial experience and research results are now available for the inclusion of
vertically consecutive courses of education from upper-secondary level to the level
of higher education vocational training courses.
These test arrangements are divided into primary and secondary (associated) test
groups (Table 5.2). Primary test groups represent training courses for and with which
the test tasks are developed. Typical examples of this are the COMET projects for
the training occupations of electronics technician, automotive mechatronics techni-
cian, industrial mechanic and other training occupations regulated by BBiG, related
vocational school and vocational training courses that are regulated according to the
model of alternating duality at SII level.
Once the set of test items has been developed and tested in a pre-test (see below),
it makes sense to check whether these test items can also be used to measure
vocational competences that are taught in courses building on initial vocational
training (associated test groups).
These are, for example, technical school programmes, further training to become
a master craftsman as well as relevant technical university programmes.
Whether such a test arrangement is possible depends solely on how the represen-
tativeness and validity of the content of the test tasks are evaluated by the teachers in
these courses. If the test tasks represent the main fields of action of the occupations

Table 5.2 Test arrangements for primary and associated test groups

Formal qualification level | Arrangement 1 | Arrangement 2 | Arrangement 3
Tertiary programmes at bachelor level | Associated test group 2 | Associated test group 1 | Primary test group
Post-SII: technical schools/master craftsman qualification | Associated test group 1 | Primary test group | Associated test group 1
Sec II: dual vocational training, vocational schools | Primary test group | Associated test group 1 | Associated test group 2

for which the training courses qualify and if the validity of the test tasks in terms of
content is assessed as appropriately high, then nothing stands in the way of partic-
ipation of this test group.
The degree of representativeness and validity of the content of the test tasks
determines the possibility and design of the test arrangement.
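For project tooling, the assignments in Table 5.2 can be encoded as a small lookup structure; the level keys and role labels below are illustrative assumptions, not COMET terminology:

```python
# Encoding of Table 5.2: the role each qualification level plays in the
# three test arrangements. Keys and labels are illustrative only.
TEST_ARRANGEMENTS = {
    1: {"sec_ii": "primary", "post_sec_ii": "associated 1", "bachelor": "associated 2"},
    2: {"sec_ii": "associated 1", "post_sec_ii": "primary", "bachelor": "associated 1"},
    3: {"sec_ii": "associated 2", "post_sec_ii": "associated 1", "bachelor": "primary"},
}

def role(arrangement, level):
    """Look up the role of a qualification level in a given test arrangement."""
    return TEST_ARRANGEMENTS[arrangement][level]

print(role(2, "post_sec_ii"))  # → primary
```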

5.4.1 The S II Test Arrangement

In addition to the primary test group, the S II test arrangement identifies two
associated test groups that are formally assigned to higher qualification levels.
In accordance with international and national qualification frameworks, technical
schools are ranked one qualification level higher and bachelor programmes two
qualification levels higher. A frequently asked question about this test arrangement is: are the
technical college students (and master craftsman students) systematically underchallenged by
the test tasks of the primary test group (here trainees in the second and third year
of training) and therefore unable to demonstrate their real competence?
In the case of closed test tasks (multiple-choice tasks), such a test arrangement
would not be possible, or only to a very limited extent, since norm-based test tasks
are always assigned to school levels, school years or a defined professional qualification level. A decisive allocation criterion is then the degree of difficulty of the test
tasks. The COMET test format is based on the concept of open and complex test
tasks. These are criteria-oriented test tasks throughout. This gives each test task a
solution scope (scope for design) that offers room for solutions of varying
quality and quantity (from simple to very professional). Even if a trainee in
the second or third year of training presents a task solution of comparable quality to
that of a technical college or university student, students have the opportunity to
justify their solutions with greater depth and subject range. The ‘range’ and ‘depth’ of
the justification are indicators of the level of work process knowledge that the test
persons incorporate into their task solutions. In test practice, this means that the
solution spaces of the test tasks developed for SII training courses are, on average,
exhausted to a higher degree by test participants from higher education courses.
Since the solution spaces also include the knowledge that guides and
reflects action, they are only rarely ‘exploited to the full’.

5.4.2 The Post-SII Test Arrangement

Formally, the post-SII test arrangement differs from the first and third test arrange-
ments in that the formal qualification differences to the subordinate and superior
courses of study each constitute only one level. The professional fields of action of
the post-SII graduates are the reference point for the development of the test tasks. A
certain difficulty in international comparative studies is that the same vocational
fields of action are taught in vocational training courses that are formally assigned to
different qualification levels. For example, the vertical range in the training of
nursing staff (paediatric, general and elderly care) extends from the ‘unskilled worker’
level through SII training courses to bachelor’s level. Training at all three levels
of qualification usually goes hand in hand with the development of level-related
professional fields of action. Whether and to what extent these differ in their content
and qualification requirements must be examined empirically in each case.
The typical vocational fields of action for technical college graduates are identified
first during the development of test tasks. Domain-specific qualification
research provides relevant research methods (Rauner, 2006; Röben, 2006). The educational plans of the post-SII educational programmes are of secondary importance, as
the ability to work is usually only achieved in a phase of familiarisation with the
profession following the relevant studies at a technical college. The reference point
for the content of COMET competence diagnostics is therefore the professional
competence, which is the focus of the curricula at the technical college, but which
can often only be achieved in the practical phase following the studies at the
technical college. The situation is different with higher technical schools such as
those established in Switzerland. They are organised in a dual manner, and their
content and objectives are therefore based on the training content and objectives
identified with the participation of organisations from the world of employment.
If no results of the relevant qualification research are available, it makes sense to
identify the characteristic fields of professional tasks and activities on the basis of
expert specialist workshops (Spöttl, 2006).

5.4.3 The Third Test Arrangement: Graduates of Professionally Qualifying Bachelor Programmes as Primary Test Groups

The test tasks are developed by the lecturers of the bachelor’s degree programmes at
universities. Here, too, the rule applies that the authors of the test tasks take the
professional fields of action as a basis, which are considered representative for the
graduates of the degree programmes. One difficulty with this test arrangement is the
very broadly designed courses of study, whose contents are based more on traditional
concepts of basic academic studies. The contrasting study programme concept is
based on a high degree of content specialisation and correspondingly ‘tailor-made’
university-based vocational training. For numerous professionally qualifying bachelor
degree programmes (subjects), there is a more or less pronounced correspondence in
content with vocational training programmes at SII and technical college level.
COMET projects based on this test
arrangement have not yet been conducted. The COMET project Nursing (Switzerland)
has a certain proximity to this test arrangement, since the dual course of study
at a higher technical college ends with an examination which is equivalent to a
bachelor’s degree (Gäumann-Felix & Hofer, 2015).

5.4.4 Validity of the Test Tasks for Different Training Courses and Test Arrangements

In competence diagnostics projects, it is more the rule than the exception that
different vocational training programmes, such as dual vocational training, vocational schools and technical colleges as well as professionally qualifying bachelor’s
programmes, take part in a test. The concept of open test tasks facilitates
this form of comparative competence surveys. The validity of the test tasks is
this form of comparative competence surveys. The validity of the test tasks is
determined in projects spanning different educational programmes with reference
to the higher-level occupational fields of action of the primary test population
(occupational validity). The test tasks developed in pre-test procedures are then
evaluated by the project groups of the educational programmes involved in the test
according to their validity for ‘their’ educational programmes (Fig. 5.7).
During the evaluation of the individual test tasks, the project groups (of the
participating training courses) evaluate the

• Professional authenticity (1–10),
• Representativeness for competence (1–10),
• Curricular validity (1–10).
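Aggregating these ratings across the members of a project group might look as follows; the data and the acceptance threshold of 8 are invented assumptions, not part of the COMET procedure:

```python
from statistics import mean

# Invented ratings (1-10) given by four project-group members for one test task.
ratings = {
    "professional authenticity": [9, 8, 9, 7],
    "representativeness for competence": [8, 8, 7, 8],
    "curricular validity": [6, 7, 6, 5],
}

THRESHOLD = 8.0  # hypothetical acceptance level per criterion

def acceptable(values, threshold=THRESHOLD):
    """A criterion is accepted if the mean rating reaches the threshold."""
    return mean(values) >= threshold

for criterion, values in ratings.items():
    print(f"{criterion}: mean={mean(values):.2f}, accepted={acceptable(values)}")
```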

5.5 Description of the Solution Scopes

The solution scope of a test task defines the possibilities of a task solution
under the basic conditions specified in the situation description. The wishes
and requirements of the client (customer) limit the (theoretical) scope for
design. In the context of a (higher) school learning situation one would
therefore rather speak of a scope for design, and in a test format related to the
context of company work orders of a solution space. A scope for
solutions and design can only illustrate the structures of possible solutions
in an exemplary manner. In this respect, it is also
open to unforeseeable solutions.

Fig. 5.7 The professional fields of action as reference point for determining the professional
validity of the test tasks (action-learning fields of the primary test groups; test tasks (professional
validity); action-learning fields of the secondary test groups)

The authors of the test tasks have an idea of the spectrum of possible solutions to
the test tasks. The theoretically possible solutions form an almost unlimited design
scope. It is therefore important to describe and illustrate the dimensions of possible
solutions when describing the solution space. The solution spaces for the test tasks
are a necessary prerequisite for rater training and for familiarisation with the rating of
task solutions. The solution space facilitates the task-specific interpretation of the
rating items, which are necessarily formulated at an abstraction level that allows their
application in the broadest possible spectrum of professions (Table 5.3).
The criteria for holistic task solving serve as a structuring scheme for the
description of solution scopes. The solution scopes sensitise the raters to the
spectrum of possible solutions. The solution space can indicate the potential of
competences that the respective test task contains in the form of its possible
solutions. Solution spaces are always incomplete. However, they are an essential
basis for rater training when it comes to developing common standards for evaluat-
ing the various solutions to test tasks and achieving a high level of interrater
reliability.

When dealing with the solution spaces within the framework of rating and
rating training, it must be avoided that solution spaces are misunderstood as
ideal-typical solutions.

The use of the solution space when evaluating task solutions is practiced
within the framework of rater training. Practice shows that, after rater training, the
raters only occasionally consult the solution spaces when evaluating solutions:
they are able to apply the rating items in a task-specific manner and call the solution
space to mind virtually automatically. This phenomenon finds its expression
in a correspondingly pronounced inter-rater reliability.

5.6 Evaluation and Choice of Test Tasks: The Pre-Test

The development of test tasks for a profession or a specialist area is carried out
according to a defined procedure (Fig. 5.8).

Table 5.3 Example of a solution scope for a test task

Solution space: Form-glued desktop (carpenter)
Criterion 1: Clarity/presentation
• Structuring of the planning documents:
– Production process,
– Topan material,
– Form gluing from veneer plywood,
– Choice of materials,
– Type of surface,
– Specification of the edge,
– Dimensions of the desk.
They should justify the selection, present it in sketches and discuss the advantages
and disadvantages.
• The notes should be appropriate for the addressee.
• The workflow should be clear and understandable for the workshop listed in the
appendix.
Criterion 2: Functionality • The dimensions of the desk must be selected so that
– ... The user suffers no ergonomic damage.
– ... There is sufficient space under the tabletop for the office
chair and the container.
– ... Rounding does not interfere with daily work processes.
• As no desk these days can do without a computer, the trainee could recommend
concealing the cable connections where possible.
• The surface must offer protection against scratches due to
the daily use of the tabletop.
Criterion 3: Sustainability • The dimensions of the worktop are not specified by the
customer. Here, the trainee has to adjust the dimensions of
the desk.
• The surface of the desk must be chosen so that it is resistant
and can be used for many years (easy to maintain and repair).
Criterion 4: Efficiency • Optimal workflow planning can improve desktop produc-
tion and thus bring greater economic benefits to operations.
• Planning the routes is also crucial, as the workshop can also
be used for other production processes if production is
optimised.
• By applying the correctly selected surface, it is possible to
ensure that the desk remains intact for a long time
(laminate vs. HPL).
• A lot of material and working time can be saved by
planning the optimum connection between the rounding and
the carrier plate.
Criterion 5: Orientation on business and work process
• The workflow must be planned in detail. As a basis, the trainee must accurately
plan the implementation of the rounding. He must decide how this is to be produced.
Criterion 6: Social compatibility
• The production of round elements is not necessarily everyday carpentry work.
Here it is important that the trainee uses the protective measures of the workshop
correctly during processing.
Criterion 7: Environmental compatibility
• By optimising the work process, the trainee is to save routes and material supplies.
• The best possible use of the materials saves money and resources.
• Panel materials are more sustainable than solid wood, as only a manageable
service life can be assumed.
• Water-based paints do away with solvents to a large extent. The residues of the
paints should be disposed of professionally after use.
Criterion 8: Creativity • Since the form of the desk is almost predetermined, the
trainee cannot become creative. He can apply his creativity
in the optimisation of planning processes.
• Apart from the occupational science-related aspects, there
are no limits to the trainee’s choice of measurements.
• The planning of cable outlets and their supply with elec-
tricity could also be considered.

5.6.1 Determining the Test Group(s)

The first step is to determine which test groups are to be involved in a COMET
project. As COMET tests are generally designed as international comparative tests or
it must be assumed that national projects will expand into international projects, the
educational and study programmes to be included in the tests are defined. Three test
arrangements are differentiated (→ 5.3). These result from the definition of the
primary test group. These can be (1) vocational training at upper-secondary level
(initial vocational training), (2) continuing vocational training at the level of techni-
cal school programmes and (3) higher education programmes that qualify for a
profession.
The primary test groups can be supplemented in an extended test arrangement by
courses with a lower or higher formal qualification level (secondary test groups).
The prerequisite is that the subject lecturers (teachers) classify the test tasks
as valid in content for the test groups to be involved.

5.6.2 Training of Test Task Authors

The authors of the test tasks are usually subject teachers/lecturers and trainers
(content specialists) who are qualified for the vocational training of the trainees
(students) to be examined. As a rule, a one-day training course is sufficient to qualify
these teachers/lecturers for the development of test tasks. The subject of the training
is an introduction to the COMET competence and measurement model as well as the
test procedure. The criteria for developing test tasks are explained using examples of
tasks from related COMET projects. The development of test tasks includes the
development of solution spaces. These are used for the task-specific interpretation of
the rating items by the raters of the task solutions.

Fig. 5.8 Procedure of the pre-test phase:
1. Definition of test cohort(s)
2. Introduction seminar for authors of test tasks
3. Identification of occupational fields of action
4. Development of 2–3 test tasks per occupational field of action including solution spaces
5. Didactical evaluation of draft test tasks and solution spaces by project steering group
6. Revision of test tasks and solution spaces
7. Pre-test with revised test tasks in a representative test cohort
8. Rater training and rating
9. Analysis of results
10. Choice of test tasks to be used in a main test

Identification of Professional Fields of Action

The development of the test tasks requires the identification of the professional fields
of action for the respective profession. For each professional field of action
(Table 5.4), two to three test tasks (drafts) including the solution spaces are
developed by the teams of authors (groups of two or three). It must be taken into
account whether the same fields of action apply to all the test groups to be involved
or whether specific technical characteristics of training courses have to be taken into
account (Fig. 5.9).
For such test arrangements, the common competences are covered by a set of test
tasks and the specific competences by supplementary test tasks.
In previous COMET test practice, especially against the background of the
requirements of international comparative projects, test tasks are developed
that aim at the end of the educational programmes. This serves to record the
competences defined in the job descriptions (job profiles), on the basis of which
employability or the training objective is described. Vocational (university)
education and training courses can also be included. Although such programmes
are not designed to teach vocational skills, it is possible to measure the degree to
which they succeed in imparting vocational skills to their pupils/students.

Table 5.4 Example of two professional fields of action

Logistics managers: (1) import/export orders, (2) procurement, (3) marketing/proposal preparation, (4) forwarding and logistic services, business processes/controlling
Car mechatronics: (1) service/maintenance, (2) repair, (3) conversion and retrofitting, (4) diagnostics

Fig. 5.9 Common and sector-specific fields of action in nursing training in Switzerland for the
areas of childcare, nursing care and care for the elderly
The authors’ intended ‘degree of difficulty’ results from the qualification requirements placed on the primary test group. The aim is that the (primary) test group
assesses the difficulty of the test tasks with values between 6.5 and 7.5 on a scale of 0 to
10. These values are determined in the pre-test. The ‘difficulty’ of open test tasks
according to the COMET test task format should not be confused with the degree of
difficulty of normative test tasks (→ 5.8).
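The pre-test check of the intended difficulty is a simple band test on the mean rating; the sketch below assumes ratings on the 0-10 scale mentioned above, with invented data:

```python
from statistics import mean

DIFFICULTY_BAND = (6.5, 7.5)  # target mean difficulty on the 0-10 scale

def difficulty_in_band(ratings, band=DIFFICULTY_BAND):
    """Return whether the mean difficulty rating lies in the target band, and the mean."""
    m = mean(ratings)
    return band[0] <= m <= band[1], m

ok, m = difficulty_in_band([6, 7, 8, 7, 7])  # invented pre-test ratings
print(ok, m)  # → True 7
```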

Didactical Evaluation and Revision of the Test Tasks

An essential step in the development of test tasks is the evaluation of the task drafts
and solution spaces by the coordinating project group and the test experts involved in
the project. As a rule, this results in initial revision instructions and a corresponding
revision of the task drafts. A detailed didactic evaluation of the test tasks and rating
scale (if modified for a new professional field) is part of the rater training (testing the
test tasks).

Rater Training and Rating

The test tasks (drafts) are tested on a sample of the primary test group. Each test
task should be completed and evaluated by 10 to 15 test persons. If the test group
is relatively homogeneous, the lower number of participants is sufficient. In the
case of more heterogeneous courses of education, the upper limit should be chosen.
The pre-test includes rater training immediately after the test. The project group
or the group of authors of the test tasks selects a task solution for each professional
field of activity—at least four sample solutions of medium difficulty. They form the
basis for rater training.

The Aims of Rater Training

Rater training has three objectives:

1. Above all, the raters should learn to apply the rating scale for the evaluation
   of task solutions with confidence and to develop professional and task-specific
   evaluation standards with the aid of the solution space of the test tasks. This
   goal is achieved when the rating values of the raters agree to a large degree
   (adjusted Finn coefficient > 0.7).
2. When applying the rating scale, the raters should also check the content validity
   (coherence) of the rating items in the event that the rating is also part of the
   task development for a new profession or a new occupational field.
3. This includes reviewing the authors’ proposals for the list of rating items to
   be applied.
The method of rater training is described below. Exact adherence to the methodological
procedure ensures that good to very good reliability values are achieved after
approximately one day of rater training.

The Trainers

The trainers conducting the rater training should have assisted in at least one rater
training. They must be familiar with the COMET test procedure and have exact
knowledge of the test tasks and their solution spaces for the respective project. It
has proven useful for the training to be carried out by teams of two, with one of
the trainers having relevant professional/technical and didactical competence.

The Participants in Rater Training

The participants in rater training must be teachers or trainers with a teacher/trainer
qualification for the respective profession or vocational training course and should
have several years of professional experience. The authors of the test tasks as well as
the authors of textbooks and experienced examiners bring good previous experience
into rater training. Specific knowledge of the COMET test procedure is not required.
The number of participants should not exceed 30. The number of raters to be
qualified for a COMET project is determined by the following parameters.
• The average rating time for a task solution (after training) is approx. 15 min.
• A double rating (two raters per task solution) is advisable in order to achieve a
  sufficiently high reliability.
Rarely do more than 150 participants take part in a pre-test rating session. The
rating can then be handled by a group of 8–10 raters.

Example For 600 test participants, each of whom solves a (complex) test task
(maximum processing time: 120 min), a double rating requires 300 h of rating
time.
• With a rating time of 10 h per rater, 30 raters are required for the rating;
  with a rating time of 15 h per rater, 20 raters.
• After approx. 4 h of rating (empirical value), each rater should take a half-
  hour break, as rating requires a high degree of concentration.
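The workload arithmetic of this example can be sketched in a few lines of Python.
The function names and the assumption of one task solution per participant are
illustrative, not part of the COMET specification:

```python
import math

def rating_workload_hours(n_participants, minutes_per_rating=15,
                          ratings_per_solution=2):
    """Total rating time in hours, assuming one task solution per
    participant, a double rating and approx. 15 min per rating."""
    return n_participants * ratings_per_solution * minutes_per_rating / 60

def raters_needed(total_hours, hours_per_rater):
    """Number of raters required if each contributes hours_per_rater."""
    return math.ceil(total_hours / hours_per_rater)

# The figures from the example: 600 participants -> 300 h of rating,
# i.e. 30 raters at 10 h each or 20 raters at 15 h each.
hours = rating_workload_hours(600)
```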

Organisation

Rating is done online. A joint one- or two-day rating schedule has proven to be the
best. Upon completion of the online rating, the rating results are available for
feedback to the test participants (via the responsible teachers/trainers).
Each participant is provided with a rater manual to prepare for rater training.

Contents of the Rater Manual


1. The COMET test procedure.
2. The test tasks with solution spaces.
3. The rating scale.
4. The selected solutions for the trial rating.
5. References.

Structure and procedure of rater training                  Time (in minutes)

1. Introduction to the rating procedure:                   Approx. 40
   • the COMET competence and measuring model,
   • the evaluation criteria and the evaluation
     procedure (rating).
2. First trial rating in plenary session.                  Approx. 60
   The participants evaluate (rate) the first solution     (approx. 70)
   example without consulting other participants.
   Individual evaluations are required.
   If rater training takes place within the framework of
   the pre-test, it also serves the didactic evaluation
   of the test tasks (drafts), the solution spaces and
   the evaluation criteria selected by the teams of
   authors, which may not be applicable to individual
   test tasks for reasons of content. For rating scales
   that have been adapted to a new professional field,
   participants are asked to check the content validity
   of the rating items (formulations, etc.) and—if
   necessary—to make suggestions for corrections. The
   solution space is used as a working basis.
3. Group rating.                                           Approx. 60
   Following individual rating, groups of five
   participants are formed, each of which carries out a
   group rating on the basis of their individual
   ratings. The following rules apply:
   (a) The rating items are called up in sequence by a
   member of the group who moderates the group rating.
   If the ratings (point values 0–3) do not differ by
   more than one value (e.g. 2, 2, 2, 3, 2), this can be
   considered a consensus. The mean value, rounded up or
   down, is recorded as the group result. Deviating
   evaluations can be explained briefly with the use of
   the solution space.
   (b) If evaluations differ by more than one value
   (e.g. 3, 3, 2, 1, 3), the participants with deviating
   evaluations justify their evaluation standards. The
   group agrees on a group value. The aim is to reach
   agreement on the content of the evaluation criteria.
   Here, too, it is important to use the solution space.
   (c) Group ratings are not primarily about calculating
   averages and overruling group members with deviating
   ratings, but about developing common evaluation
   standards.
   Individual ratings and group ratings are entered
   (figure) for presentation and discussion in plenary.
   Once all individual and group ratings are available
   (cf. rating table: Fig. 5.10), they are followed by:
   Evaluation of the first trial rating in plenary.        Approx. 15
   • The groups present their group results.               (depending on number
   • Difficulties in finding group values for individual   of participants)
     items.
   • Proposals for correcting formulations of items, if
     necessary.
   • If necessary, proposals for the inclusion of
     individual items (in deviation from the suggestion
     of the test authors).
   The reports of the groups are followed by a             Approx. 30
   discussion of noticeable evaluations. The projected
   tableau of all individual and group evaluations
   serves this purpose. Noticeable evaluations are
   • strong deviations from the mean value (across all
     items),
   • strong deviations in individual items of individual
     raters,
   • the results of the discussion on the evaluation
     standards in the groups,
   • rating items for which the ratings of the raters
     differ significantly (by more than one point
     value),
   • items that some of the participants do not consider
     relevant in terms of content.
4. Additional trial ratings.                               Second trial rating:
   The first trial rating is followed by (usually) 3–4     approx. 90
   further trial ratings. Experience has shown that the    Third trial rating:
   duration of the trial rating decreases considerably     approx. 60
   with each further example.                              Fourth trial rating:
   Specifics: If rater training serves to qualify raters   approx. 60
   for the implementation of COMET projects on the basis   Closing plenary
   of test tasks that have already been tested, the        session: approx. 60
   reference values of the rating from the project in
   which the test tasks were developed are also
   available for the plenary discussions of the rating
   results. In these cases, both the selected test tasks
   and their solution spaces, as well as the rating
   items to be applied, are already defined.
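The consensus rule of the group-rating step (a)/(b) can be expressed compactly.
This is an illustrative sketch, not part of the COMET instruments:

```python
def group_consensus(ratings):
    """Group-rating rule for one item: individual ratings use the
    point values 0-3. If they differ by at most one value, the mean,
    rounded to the nearest point value, is recorded as the group
    result; otherwise the group must discuss and agree on a value."""
    if max(ratings) - min(ratings) <= 1:
        return round(sum(ratings) / len(ratings))
    return None  # no consensus: the deviating standards are justified first

print(group_consensus([2, 2, 2, 3, 2]))  # consensus: 2
print(group_consensus([3, 3, 2, 1, 3]))  # None: discussion required
```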

5.6.3 Calculating the Finn Coefficient

Calculation of Interrater Reliability for the COMET Test Instruments

Different coefficients are available for calculating interrater reliability. Apart
from the research question, the choice of a suitable coefficient depends primarily
on two factors:
1. the number of raters,
2. the scale level.

Fig. 5.10 Example of a rating table from rater training for the profession ‘industrial mechanic’ (first trial rating)
This rating table shows the rating results of the first trial rating of twelve raters and three rating groups. The degree of agreement is therefore still very low.
In the course of rater training, the degree of agreement increases steadily and converges to values of Finn >0.75 (Fig. 5.11).

Fig. 5.11 Progress of the rater consensus (forwarding and logistics merchants)

In the case of COMET test instruments, we usually have more than two raters to
deal with during the pilot phase of the projects—especially during rater training. Up
to 40 raters will participate in the pilot phase. This means that they evaluate the same
task solutions. For more than two raters, the following three coefficients are suitable:

Fleiss’ and Conger’s Kappa

Kappa is (also) suitable if more than two raters evaluate a person or task and an
ordinal scale structure is assumed. Both are the case with the COMET instruments.
A distinction is made between the exact Kappa, also called ‘Conger’s Kappa’, and
Fleiss’ Kappa. In most cases, Conger’s Kappa is slightly higher, so a comparison
between Fleiss’ Kappa and Conger’s Kappa is recommended.

The Finn Coefficient

Finn coefficient (F_u):

    F_u = 1 − MS_W / ((1/12) · (N² − 1))

MS_W = mean square of the deviations of the observed values per item (within items)
N = number of measured values

Spearman–Brown formula for the rater group:

    F_ug = (n · F_u) / (1 + (n − 1) · F_u)

n = number of raters
Justification for the choice of the measure:
Asendorpf and Wallbott (1979) propose the Finn coefficient if the variance of the
mean values of the observation units is too small (as in our case).
To calculate the Finn coefficient correctly, a distinction must be made between a
‘two-way’ model and a ‘one-way’ model. The ‘one-way’ model assumes that only
the persons/tasks to be evaluated are selected at random. The ‘two-way’ model also
assumes that the raters are randomly selected. Since the latter is usually not the case,
the ‘one-way’ Finn coefficient is calculated for the COMET model. The advantage
of the Finn coefficient lies in the fact that it is suitable for calculation even if there is a
high degree of correspondence between the raters. In other words, it is sensitive to
small differences between persons.
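Under these definitions, the one-way Finn coefficient and the Spearman–Brown
step-up can be computed directly from a raters × items matrix of ratings. The
sketch below assumes the 0–3 point values mentioned above, i.e. N = 4 scale
values; the function names are illustrative:

```python
import numpy as np

def finn_one_way(ratings, n_scale_values=4):
    """One-way Finn coefficient for a raters x items matrix:
    Finn = 1 - MS_W / ((N**2 - 1) / 12), where MS_W is the mean
    within-item variance across raters and (N**2 - 1) / 12 is the
    variance expected by chance on a uniform N-point scale."""
    x = np.asarray(ratings, dtype=float)
    ms_w = x.var(axis=0, ddof=1).mean()   # within-item variance, averaged
    expected = (n_scale_values ** 2 - 1) / 12
    return 1 - ms_w / expected

def spearman_brown(finn_u, n_raters):
    """Step the single-rater value up to the group of n raters."""
    return n_raters * finn_u / (1 + (n_raters - 1) * finn_u)
```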

Intraclass Correlation Coefficient (ICC) for One-Way and Two-Way Models

The ICC is particularly popular due to its implementation in the SPSS statistics
software. As with the Finn coefficient, a distinction must be made here between a
‘one-way’ model and a ‘two-way’ model. As with the Finn coefficient, the one-way
model assumes that only the persons/tasks to be evaluated are randomly selected.
The reasoning is the same as for the selection of the Finn coefficient, so that for the
COMET instruments, the ICC is calculated for ‘one-way’ models. Another aspect to
consider when calculating the ICC correctly is whether the absolute or the relative
agreement of the raters is of interest. This also depends on how high the agreement
between the raters is, so it is advisable to calculate both the ‘agreement’
(absolute) and the ‘consistency’ (relative) version of the ICC. This aspect is of
interest because there may be a high degree of agreement between the raters across
all (averaged) items even though the ratings differ in some important respects.
If only the relative ICC is calculated, there is a risk that these differences
cannot be worked out. Accordingly, both the ‘agreement’ and the ‘consistency’
version of the ICC are considered below.
Accordingly, the following interrater coefficients are calculated for COMET
instruments:
1. Fleiss’ Kappa
2. Conger’s Kappa
3. The Finn coefficient (‘one-way’)
4. The ICC (‘one-way’) to check the consistency
5. The ICC (‘one-way’) to check the agreement.
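For the one-way model, the ICC can be computed from the between-target and
within-target mean squares; in this model the consistency and agreement versions
coincide, which is consistent with the identical ICC columns in Table 5.5. A
minimal sketch (the function name is illustrative):

```python
import numpy as np

def icc_one_way(ratings):
    """ICC(1) for a raters x targets matrix (one-way random model):
    (MS_between - MS_within) / (MS_between + (k - 1) * MS_within),
    with k raters and the targets (task solutions) as random units."""
    x = np.asarray(ratings, dtype=float)
    k, n = x.shape
    target_means = x.mean(axis=0)
    ms_b = k * ((target_means - x.mean()) ** 2).sum() / (n - 1)
    ms_w = ((x - target_means) ** 2).sum() / (n * (k - 1))
    return (ms_b - ms_w) / (ms_b + (k - 1) * ms_w)
```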

The following shows how these interrater coefficients differ from one another
using the example of electricians’ test tasks used in international comparative
COMET projects. The correct reading of the data was checked several times using
descriptive parameters.
The proximity of the unadjusted Finn coefficient (2009) to the (‘two-way’) Finn
coefficient with the statistical software ‘R’ is striking. However, the (‘two-way’) Finn
coefficient assumes a random selection of raters. This would mean that 14 out of
20 raters are randomly selected. The random allocation to the tasks is already given
by the (‘one-way’) calculation. The calculation (‘two-way’) increases the ‘degrees of
freedom’ and thus leads to a higher Finn coefficient. The table shows that the
calculation of the Finn coefficient for the reference values (2009) corresponds
exactly to the values of the comparative rating for the skylight control and the drying
space.
Results: The unadjusted Finn coefficient is the two-way Finn coefficient. This is
not suitable for the COMET test procedure, as the raters are not selected randomly
(Table 5.5).

Prospect

The evaluation for selecting a suitable interrater coefficient is based on more than
two raters evaluating a task. This is the case in rater training during the pilot
phase of COMET projects. In the actual test phase, however, the solutions of the
pupils, trainees and students are always evaluated by two independent raters, so
that further coefficients are available for calculating interrater agreement. The
benefits of these coefficients for the COMET instruments would still have to be
demonstrated.

5.6.4 Rating Results

All test tasks (drafts) are tested in the pre-test. The test results are used to
measure the competence (competence level and competence profile) of the test
participants; a distinction is made between the individual test tasks (Fig. 5.12).
The profile of a test task and the variation coefficient can be used to estimate
the potential requirements of the test task. If the task profiles for a profession
vary in their degree of homogeneity, it makes sense to strengthen the
sub-competencies under-represented in the situation descriptions of the test tasks
with corresponding requirement-related information, without including
specifications, because these would already be part of the solution.
The total point values (ATS) of the test tasks (drafts) for the carpenters show
consistently high values. Since four of the test tasks (A1, A2, A3 and A7) have a
homogeneous to very homogeneous (A2, A3) task profile, two conclusions are
obvious.

Table 5.5 Example for rater training

Rater training South Africa: Electrotechnology: 14 raters; 39 items; 4 tasks

                  Finn           Finn ‘one-way’  ICC            ICC          ICC          Fleiss’  Conger’s
                  (unadj./adj.)  (‘two-way’)     (unadj./adj.)  consistency  consensus    Kappa    Kappa
                  (Dr. Erdwien)                  (Dr. Erdwien)  (‘one-way’)  (‘one-way’)
Skylight control  0.72/0.84      0.634 (0.746)   0.70/0.80      0.117        0.117        0.081    0.092
Signals           0.55/0.70      0.400 (0.541)   0.55/0.65      0.054        0.054        0.020    0.035
Drying space      0.74/0.84      0.668 (0.766)   0.70/0.79      0.118        0.118        0.093    0.105
Pebble treatment  0.80/0.89      0.775 (0.859)   0.58/0.70      0.0771       0.0771       0.107    0.119

Comparative rating 2009 Hessen (18 raters, 39 items)
Skylight control  0.76/0.82      0.758 (0.815)   0.38/0.45      0.376        0.376
Drying space      0.67/0.73      0.668 (0.728)   0.32/0.36      0.311        0.311
Pebble treatment  0.74/0.80      0.543 (0.802)   0.50/0.57      0.105        0.105

Fig. 5.12 Profiles of test task drafts from the carpenters’ pre-test (Figs. 5.17 and 5.18)

1. The team of authors was able to formulate situation descriptions suitable for the
collection of homogeneous competence profiles. The task profiles show which
sub-competencies are not challenged by the situation descriptions. For Test 8, for
example, this concerns the sub-competencies K3, K5, K6, K7 and K8.
2. With values of 40 and above, the TS represents a rather easy level of difficulty,
   with the exception of task A4.
   The test results tend to have an objective level of difficulty which corresponds
   to the subjective assessment of the difficulty of the tasks by the test group and
   to the values of its self-assessment. For example, the values for ‘difficulty’
   and ‘self-assessment’ are 6, which reflects the objective level of difficulty of
   this task with its TS = 32.5.

Rating Results (pre-test)

In a first approximation, the total TS (of the pre-test participants) represents the
objective difficulty of a test item for the test population represented by the pre-test
group.
In a first approximation, the competence profiles of the test tasks represent the
competence profiles of the pre-test participants and, at the same time, the quality of
the test tasks. The variability coefficient V indicates whether a test item has the
potential to comprehensively test professional competence.
The authors of the test questions and the participating subject teachers decide—
taking into account all pre-test results—whether an inhomogeneous competence
profile of a test question is due to the competence of the test groups or to weaknesses
in the situation description of the test questions.
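The variation coefficient V mentioned here can be read as the coefficient of
variation (standard deviation divided by the mean) across the sub-competence
scores of a task profile; this reading, and the function name, are our assumption
for illustration. Low values indicate a homogeneous profile:

```python
import numpy as np

def variation_coefficient(profile):
    """Coefficient of variation across the sub-competence scores of a
    task profile (std / mean); lower values = more homogeneous."""
    p = np.asarray(profile, dtype=float)
    return p.std(ddof=1) / p.mean()
```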

Example: Trainees for Shipping and Logistics Services (SLS)

The pre-test results of SLS indicate a special feature. Although the criteria or
sub-competences environmental and social compatibility were applied in all test
tasks, the competence profiles show a pronounced competence gap in the trainees’
‘technical’ understanding (Fig. 5.13). However, both competence criteria
(sub-competences) are of fundamental importance for the SLS. This was confirmed
by the project group with reference to the job description and the relevant training
regulations.
It is remarkable here that the teachers, with their extended understanding of
professional competence, were able to identify very precisely the reduced profes-
sional understanding of their students when assessing the test results (rating): ‘The
test result is probably due to our own professional understanding’. However, this
had changed fundamentally with the rater training, according to the consistent
assessment of the pre-test experiences of the raters.
Reliability analyses (Erdwien & Martens, 2009, 70 f.)

Fig. 5.13 Competence profiles of trainees for shipping and logistics services (SLS)
(n = 6 left, n = 8 right)

Table 5.6 Reliability analyses for the eight criteria of the evaluation form
Criterion Rating items Alpha value
Clarity/presentation 1–5 0.88
Functionality 6–10 0.86
Sustainability 11–15 0.84
Efficiency/effectiveness 16–20 0.82
Orientation on business and work process 21–25 0.87
Social compatibility 26–30 0.84
Environmental compatibility 31–35 0.85
Creativity 36–40 0.90

As the aim is to retain the eight criteria in the further analyses, or to combine
them into the competence levels ‘functional competence’, ‘procedural competence’
and ‘shaping competence’, a reliability analysis was carried out, in addition to
the factor analysis, on each set of the five evaluation items belonging to one
criterion in order to check whether their joint further processing is appropriate.
The reliability analyses show the alpha values documented in Table 5.6.
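The alpha values in Table 5.6 are Cronbach’s alpha over the five rating items of
each criterion. A minimal sketch of the computation for a persons × items score
matrix (the function name is illustrative):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a persons x items matrix of item scores:
    alpha = k / (k - 1) * (1 - sum(item variances) / total variance)."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()      # per-item variances
    total_var = x.sum(axis=1).var(ddof=1)       # variance of sum scores
    return k / (k - 1) * (1 - item_var / total_var)
```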
If item 20, which does not meet the requirement of sufficient cell occupation, is
excluded from the ‘efficiency’ scale, the alpha value deteriorates slightly to 0.80.
By contrast, excluding item 35 from the ‘environmental compatibility’ scale would
slightly improve the alpha value to 0.86.
In a further step, it was examined which reliability values were achieved by the
competence levels ‘functional competence’, ‘processual competence’ and ‘shaping
competence’ on which the theoretical assumptions were based, and whether all
40 assessment items resulted in the overall construct ‘vocational competence’. The
relevant results are shown in Table 5.7.
Conclusion: Overall, the results of the reliability analyses show a very satisfactory
scale stability for each of the eight criteria for the closer determination of the

Table 5.7 Reliability analyses for the three assumed competence levels

Competence levels            Competence criteria (sub-comp.)            Alpha value
Functional competence (DF)   Clarity/presentation, functionality        0.93
Procedural competence (DP)   Sustainability, efficiency/effectiveness,  0.92
                             orientation on business and work process
Shaping competence (DG)      Social compatibility,                      0.93
                             environmental compatibility
Professional competence      All 40 rating items                        0.97

competence model’s competence levels. The reliabilities for the competence levels
based on education theory and for the overall construct of vocational competence
prove to be very high.

5.6.5 Interviewing the Test Participants

Four questions are presented to the pre-test participants for the evaluation of the
test tasks. There is also the opportunity for additional comments.
How do you assess . . .
1. the comprehensibility of the test tasks?
   0 . . . . . . . . . . 10
2. the difficulty of the task?
   0 . . . . . . . . . . 10
3. the practical relevance of the task?
   0 . . . . . . . . . . 10
4. How well have you solved the task?
   0 . . . . . . . . . . 10

Comprehensibility

When assessing the comprehensibility of test tasks, it must be noted that the
comprehensibility of a professional text also depends on the professional under-
standing and competence of the test participants. When evaluating the pre-test, the
project group must therefore assess whether the linguistic formulation or the com-
petence of the participants determines the degree of comprehensibility.

Difficulty of the Test Tasks

The level of difficulty of the test tasks is a criterion that is of secondary importance in
an open test format, as open test tasks allow the entire range from weak to very
elaborate task solutions. If the open test tasks are based on authentic descriptions of
the situation characteristic of the test population or the respective occupational field
of action, it can be assumed that the test tasks also have an appropriate level of
difficulty. This can be changed by the degree of complexity of the situation
descriptions.
Example: Assessment of comprehensibility and own competence (‘How well have I
solved the task?’) (Fig. 5.14).

Practical Relevance

When assessing the practical relevance of the test participants, it must be noted that
they should have relevant practical experience. If, for example, both trainees with
relevant practical experience and pupils/students of vocational school programmes
participate in a comparison project, a test group with practical experience should be
selected for the pre-test.

Fig. 5.14 Example: Student assessment of pre-test task 2: Training, guidance and counselling of
patients and relatives (COMET project Care professions/Switzerland)
Assessment of the degree of difficulty and the practical relevance of the test drafts (Fig. 5.15)

Fig. 5.15 Example: Assessment of students’ pre-test tasks, test 2: Training, guidance and counsel-
ling of patients and relatives (COMET project Care professions/Switzerland)

5.6.6 Selection and Revision of Test Tasks and Solution Spaces

Only the combination of a subjective evaluation of the test tasks by the test
participants and the assessment of their own competence as well as the objective
test results provide a sufficient basis for the selection of suitable test tasks and, if
necessary, their revision.
The following shows how the appropriate test tasks are selected on the basis of
the pre-test results and according to which criteria they are finally corrected, if
necessary.
When evaluating the pre-test results, particular attention must be paid to any
contradictions between the subjective assessment of the trainees (e.g. with regard to
their own competence) and the objective test results (Figs. 5.16 and 5.17).
For example, the results for ‘shipping clerks’ show that they consistently assess
the degree of difficulty of the tasks as very low (Fig. 5.16). Their objective test
results give a clearly different picture: the competence profile is highly one-sided,
and the overall score is rather low. A completely different picture emerges from the
pre-test of the carpenters (Fig. 5.17). They also rate the level of difficulty of
their test tasks as low. This corresponds to the high overall score that the pre-test
participants achieve, as well as to a considerably more homogeneous competence
profile. In this case, it is necessary to increase the complexity of the situation
descriptions. This also significantly raises the level of the test requirements for
carpenters.
For a summary of the results and the proposal for the revision of the carpenters’
test tasks, see Fig. 5.18.
The difficulty of the test task can be increased by including further and higher
requirements in the situation description. It should always be borne in mind that this

Fig. 5.16 Evaluation results pre-test 2013, profession: Shipping and logistics services

Fig. 5.17 Evaluation results pre-test 2013, profession: carpenter

Comprehensibility: completely given except for Task 2 and Task 8, i.e. no revisions
are necessary at this point
Difficulty: all tasks tend to be too easy (a value of 5.0 is too low)
Practical relevance: given for all tasks, for Task 4 only to a limited extent
V coefficient good: A1, A2, A3, A7
→ Potential test tasks – after revision: A1, A3, A6, A7, A8

Fig. 5.18 Summary of the results and proposal of the scientific support for the selection and
revision of the test tasks, profession: carpenter

must be an authentic requirements situation for the test group (test population). This
is the only way to ensure the validity of the content of the test tasks. This can most
likely be achieved by teachers and trainers who already have rating experience
(e.g. by participating in the pre-test).
It is absolutely necessary to use the competency model as a basis.

5.7 Test Quality Criteria

In recent decades, especially since the establishment of the PISA project, empirical
educational research has developed and internationally established methods of
competence diagnostics, especially in mathematics and the natural sciences, which
now have high quality standards (measured by the test quality criteria). The method
of competence diagnostics for vocational education and training must be measured
against this. The special features of vocational education and training explained in
Chaps. 2 and 3 require the established quality criteria to be applied and interpreted
with these features in mind. If this differentiation is ignored and the established
test methods of the PISA and TIMSS projects are simply transferred to vocational
training, there is a risk that precise measurement results are presented which do
not match the object of measurement: professional competence. Robert STERNBERG and Helena
GRIGORENKO therefore also warn against misunderstandings in the
conceptualisation and implementation of ‘Studies of expert performance’: ‘Expertise
theorists have argued about what it is that makes someone an expert (. . . .). How
expertise is acquired, for example, through deliberate practice or skilled appren-
ticeship. They have failed to consider fully the role of expertise in the development
and maintenance of expertise, and indeed, few expertise theorists have used any tests
of abilities in their research’ (Sternberg & Grigorenko, 2003, VII).
This is a sobering balance which shows the height of the hurdle for the develop-
ment of competence diagnostics for vocational education and training that needs to
be overcome.
In his ‘Epistemology of Practice’, Donald SCHOEN unfolds the characteristics of
professional competence as an opposite pole of social knowledge to theoretical and
scientific knowledge. In contrast to abstract, context and purposeless scientific
knowledge, professional competence means ‘a way of functioning in situations of
indeterminacy and value conflict, but the multiplicity of conflicting views poses a
predicament for the practitioner, who must choose among multiple approaches to
practice or devise his own way of combining them’ (Schoen, 1983, 17).
The peculiarities of vocational work and vocational learning developed in
Chaps. 2 and 3, as summarised once again here from a different perspective, make
special demands on the quality criteria of competence diagnostics in vocational
education and training.
When traditional test quality criteria are applied to the measurement of profes-
sional competence in the relevant methodological manuals of empirical social
research, the quality criteria for test procedures—in the tradition of experimental
scientific research—are generally listed in the following order: objectivity, reliability
and validity.

5.7.1 Objectivity

It specifies the degree to which a test result is measured independently of the
tester. The tests are differentiated according to the objectivity of their
execution and evaluation.

Objectivity of Implementation

A high degree of implementation objectivity is achieved through standardised
information for the test participants about the goals, the procedure and the
evaluation of the tests. A regulation on the feedback of test results to the test
participants has proven to be particularly important. Since the COMET tests are
often carried out close to the final examinations, and the test participants regard
the examination as highly important for their professional future, the participants’
interest in a competence test depends on receiving the test results as soon as
possible.

Objectivity of Evaluation

The objectivity of the COMET test procedure is given by the rating procedure.
However, this only applies if it can be ensured that the raters do not evaluate the tests
of ‘their’ pupils/students. Evaluation objectivity requires not only the anonymisation
of the test documents, but also a sufficiently large sample of test participants.

Reliability (Dependability)

The reliability of a test indicates the degree of accuracy with which profes-
sional competence is measured. This represents a special challenge for the
COMET test procedure, as professional competence can only be measured
with open, complex test tasks.

The consequence is that as many different task solutions have to be evaluated as there are persons taking part in the test. A further complication is that the great heterogeneity of the competence characteristics—especially in international comparative projects—places additional demands on the rating. Sufficiently high values of interrater reliability are regularly achieved with the help of tried and tested rater training.
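What such an interrater check involves can be sketched in a few lines; the scores below and the use of a plain Pearson correlation are invented for illustration and are not the COMET rating statistics themselves.

```python
# Illustrative sketch (not the COMET procedure): interrater reliability as the
# Pearson correlation between the total scores two trained raters assign to
# the same set of task solutions. All numbers are invented for demonstration.

def pearson(x, y):
    """Pearson correlation coefficient between two equally long score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

rater_1 = [42.0, 55.5, 31.0, 60.0, 48.5, 29.0]   # scores given by rater 1
rater_2 = [40.5, 57.0, 33.5, 58.0, 50.0, 27.5]   # scores given by rater 2

r = pearson(rater_1, rater_2)
print(f"interrater correlation r = {r:.2f}")
```

In practice, agreement would be computed across all raters and all rating items of the competence model, not just two score lists.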
128 5 Developing Open Test Tasks

Validity (Significance)

Validity specifies how accurately a test measures what it is intended to measure. A distinction is made between a large number of validities, above all content validity, construct validity, criterion validity and ecological validity.
Whether a test is highly objective and reliable is irrelevant if it does not fulfil the
criterion of validity in terms of content. Why this criterion is often problematic from
a psychometric perspective, despite its importance, is immediately obvious, since the
contents of the test tasks, and above all the quality of the task solutions, can only be
assessed by experts qualified for the respective domain. What constitutes the pro-
fessional competence of a nurse or an industrial mechanic can only be assessed by
experts in this profession. BORTZ and DÖRING therefore avoid answering the
question of what is important in a professional situation in their method manual:
‘Strictly speaking, content validity is therefore not a test quality criterion, but a goal
that should be considered in test design’ (Bortz & Döring, 2003, 199).
Why the authors use the subjunctive ‘should’ is not clear from the argumentation.
Of course, there is no way around compliance with the criterion of validity of
content. Therefore, it ‘must’ (not ‘should’) be ensured, as otherwise a test would
lose its significance. Descriptions of validity of content with designations such as
‘face validity’ refer to the tension between this central and at the same time unwieldy
quality criterion and psychometric procedure. However, only the domain experts
should be able to ultimately decide to what extent a certain characteristic can be
assessed as obvious, empirically founded or logically comprehensible.

Validity of Content (Face Validity and Ecological Validity)

If the contents of the test tasks comprehensively capture the construct to be measured (professional competence) in its essential moments, then content validity is given. This is best achieved when a test task directly represents the characteristic to be measured. In vocational education and training, this is always the case when a test task corresponds to a real vocational task.

It is therefore always about a high degree of authenticity. In the COMET test tasks, the reference point in terms of content is the professional fields of action. In
this respect, professional work tasks and professional fields of action are regarded as
external criteria for the development of complex open test tasks. The validity of the
content can also be recorded numerically, e.g. by a group of professional experts
(teachers/trainers) assessing the degree of validity of a test item’s content for the
population to be tested after rater training. The question to the experts is: To what
degree does the test task (the description of the situation) represent one of the central
professional fields of action of a professional skilled worker?
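The numerical recording described here might look as follows in a minimal sketch; the task names, the 1–10 rating scale and the acceptance threshold are assumptions for illustration, not values prescribed by the COMET procedure.

```python
# Hypothetical sketch: a group of domain experts rates, on an assumed 1-10
# scale, to what degree each test task represents a central professional
# field of action. Tasks whose mean rating falls below an assumed threshold
# are flagged for revision. All data are invented.

expert_ratings = {
    "task_A": [8, 9, 7, 8],
    "task_B": [5, 4, 6, 5],
    "task_C": [9, 8, 9, 10],
}

THRESHOLD = 7.0  # assumed minimum mean rating for acceptable content validity

for task, ratings in expert_ratings.items():
    mean = sum(ratings) / len(ratings)
    verdict = "acceptable" if mean >= THRESHOLD else "needs revision"
    print(f"{task}: mean expert rating {mean:.1f} -> {verdict}")
```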

A special feature of determining the validity of open, complex test tasks is that
these tasks can always be based on a different level of knowledge. If test subjects of
different formal qualification levels (e.g. skilled worker and technician levels)
participate in a test, it is necessary to name the primary test group for which the
test tasks were developed.
A special case applies if a comparative study involves test participants from both dual and school-based vocational and technical programmes. While in dual vocational training courses vocational competence is to be attained by the end of training, school-based vocational training courses are always followed by a phase of familiarisation with the profession. If school-based vocational training providers have an interest in knowing to what extent their pupils/students attain employability, participation in comparative projects is justified.
As the validity of a test task’s content is always assessed in relation to authentic
situations in the respective profession (professional validity) and not in relation to a
curriculum, the various forms of vocational training courses can participate in the
COMET projects if the representatives of the training courses want to find out to
what degree it is possible to convey vocational competence to the pupils/students in
a school-based training course.
For example, the results of a test for apprentices in the profession of industrial
mechanic, in which a test group of students in a dual Master’s programme for
mechanical engineers also participated, show that the students rated these test
tasks as valid in content. They justified this with the fact that, as future managers,
they were also responsible for the quality control of such tasks. The head of the study
pointed out that the students all had the professional qualifications of a master
craftsman, technician or a comparable profession. The aim of this course of study
would be to convey a holistic professional competence profile at the level of senior
executives.
A continuing education programme with the goal of developing management and junior
executives must ultimately enable them to consider the respective overall system in every
solution development (Heeg, 2015, 123; Fig. 5.19).

In order to conduct a COMET test in which this course would be the primary test
group, it would be important to adapt the competence and measurement model to the
qualification profile of managers (ibid., 123 f.).
Professional work tasks are not ‘given’ values from which test tasks can be
derived, but they are regarded as the reference points for the development of test
tasks. However, this plausible assumption proves to be a challenge for the test
developers. Professional work tasks are the result of different company organisation
and organisational development traditions. If one wants to grasp the specific quality
of a professional task and the competence incorporated in it, then this presupposes
regarding professional work processes as an expression of work structuring and
work organisation. Vocational qualifications and competences therefore result not
(only) from objectively given qualification requirements, but from work structuring
processes. This also includes the design of human–machine interaction, for example
in network- and computer-controlled work systems.

Fig. 5.19 Average competence profile of all test participants in the MScPT (industrial management course, full time); n = 18, Ø TS = 55.97, Ø V = 0.23

A professional work task describes, in relation to results, specific work to be performed by an employee. It should refer to work contexts that allow employees to understand, execute and evaluate its function and significance for a higher-level operational business process.

This determines the degree of competence development. For vocational education and training, it is therefore a question of developing and psychometrically evaluating a skills model (not a difficulty model) ‘which can be used to model how test persons, whose solutions have different levels of responsibility, can cope with open occupational tasks’ (Martens & Rost, 2009, 98).

Criterion Validity

Criterion validity is measured by the degree to which the result of a test to measure a latent characteristic or construct such as professional competence corresponds to the results of a corresponding test or examination procedure.

For vocational education and training, the determination of criterion validity is indeed appropriate with reference to the established examination procedures under the Vocational Training Act. However, it should not be forgotten that this examination practice has been critically evaluated for decades—especially with regard to its inadequate validity (cf. Rademacker, 1975). It therefore makes more sense, conversely, to evaluate the criterion validity of the established examination practice, as far as the nationally standardised examination parts are concerned, with reference to the large-scale competence diagnostics of the COMET test procedure. The latter is based on a competence and measurement model which, in turn, is grounded in educational theory and vocational education. The coordinators of the COMET model test for electronics technicians therefore come to the following conclusion: ‘The further development of examinations in industrial occupations will receive new impetus [on the basis of the COMET research results]’¹.
The reference point of competence diagnostics is ultimately the professional
ability in the professional fields of action. Therefore, tests for further research can
only represent a possible external validity criterion if they also refer to the compe-
tences incorporated in the professional fields of action, for example in the form of
company commissions.
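A criterion-validity check of this kind could be sketched as follows, under the assumption that scores from an examination based on company commissions serve as the external criterion; all values and the choice of a rank correlation are invented for demonstration.

```python
# Illustrative sketch: criterion validity as the rank correlation between
# COMET total scores and an assumed external criterion (scores from a
# company-commission based examination). Invented data throughout.

def ranks(values):
    """Rank values from 1..n (average ranks for ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

comet_scores = [34.5, 52.0, 41.0, 60.5, 28.0, 47.5]  # invented COMET totals
exam_scores  = [55.0, 71.0, 60.0, 82.0, 49.0, 68.0]  # invented external criterion

print(f"rank correlation = {spearman(comet_scores, exam_scores):.2f}")
```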

Construct Validity

Construct validity is given if the result of a test procedure precisely and comprehensibly reflects the scope of the construct (e.g. professional competence) to be measured. The degree of construct validity can be derived either theoretically or empirically.

Construct validity is particularly important in the psychometric evaluation of test procedures, since no objectifiable values can be given for content validity and no suitable external criterion is available for measuring competence in vocational education and training. Construct validity can be determined by examining hypotheses derived from the target construct: in this case, the COMET competence model.
The psychometric evaluation of the COMET competence and measurement
model aims at construct validity (Martens et al., 2011, 109 ff.; Erdwien & Martens,
2009, 62 ff.; Martens & Rost, 2009). The central object of the first COMET pilot
project was the psychometric examination of the competence model with the aim of
developing it into a measurement model (Martens & Rost, 2009, 95). In COMET
Vol. III, Martens and others present the method of an evaluation procedure with
which the construct validity of the COMET test procedure was checked (Martens
et al., 2011, 109–126).

¹ From the protocol of the project coordinators of 2.12.2010 (COMET Vol. III, 233). The project coordinators have long-term experience as examiners in carrying out examinations according to BBiG (skilled worker, journeyman and master craftsman examinations).

5.8 Difficulty Level: A Problematic Quality Criterion for Test Tasks Intended to Measure Professional Competence

5.8.1 Standardised Test Tasks

The introduction of intermediate examinations in dual vocational training in the early 1970s led to a considerable increase in the time and effort required for examinations. The introduction of so-called ‘programmed tests’ (multiple-choice tasks) was intended to provide a remedy, as this test format allows an economical test procedure and a high degree of fulfilment of the test quality criteria.
Two points of criticism are highlighted in particular:
(1) In individual cases, the proportion of correctly guessed answers could be far too high. This could allow candidates who have not in fact attained the competency to pass the exam. The advocates of multiple-choice tasks rightly point out that the randomly corrected score X′ is easy to determine².

X′ = X_R − (X − X_R) / (m − 1)
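Read as a computation, the correction is straightforward; the numbers below (a 40-point test with four response options per item and 25 correct answers) are invented for illustration.

```python
# Direct transcription of the guessing-correction formula X' = X_R - (X - X_R)/(m - 1):
# the raw score X_R is reduced by the number of wrong answers (X - X_R),
# divided by the number of response options m minus 1. Example numbers invented.

def corrected_score(x_r, x_total, m):
    """Randomly corrected score X' = X_R - (X - X_R) / (m - 1)."""
    return x_r - (x_total - x_r) / (m - 1)

print(corrected_score(x_r=25, x_total=40, m=4))  # 25 - 15/3 = 20.0
```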

(2) The proportion of professional skills that can be verified by means of selection responses is very limited. This second point of criticism is the more serious one.
The design criteria (difficulty index P, selectivity index T and an equal distribution of the solutions among the distractors, i.e. the incorrect response options) are intended to ensure that vocational skills can be validly tested with multiple-choice tasks. In the following, we will examine whether this goal can be achieved.
The difficulty index P of an examination task is determined from the relationship

P = 100 · N_R / N

N_R stands for the number of participants who solved the task correctly and N for the total number of participants.
The selectivity index T results from the relationship.

T = (R_o − R_u) / N · 100

² The achieved number of points X_R (raw value) is reduced by the difference between the total number of points X and X_R, divided by the number of selection answers m reduced by 1.

R_o stands for the number of examination participants from the upper half who have correctly completed a task, R_u for the number of participants in the lower half who also solved this task correctly, and N is the total number of examination participants.
The upper and lower groups are formed by sorting the participants according to their overall examination results in increasing order of points and dividing them into an upper and a lower half of equal size. The level of difficulty and the selectivity index of the multiple-choice examination tasks are directly related, as shown in Fig. 5.20.
Maximum selectivity is achieved at a difficulty index of 50 (medium difficulty). Conversely, the selectivity index is T = 0 when the difficulty index is P = 0 or P = 100, i.e. when the task is solved either by all examination participants or by none. Since such tasks are not suitable for distinguishing between ‘good’, ‘less good’ and ‘bad’ examination participants, i.e. they are not valid in the sense of discriminatory validity, they are considered unsuitable examination tasks according to this examination concept.
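Both indices can be computed directly from exam data; the participants, their task results and their overall scores below are invented for demonstration.

```python
# Sketch of the two indices defined above. `correct` maps each participant
# to 1 (task solved) or 0, and `totals` holds the overall exam scores used
# to split the group into an upper and a lower half. Data invented.

def difficulty_index(correct):
    """P = 100 * N_R / N."""
    return 100 * sum(correct.values()) / len(correct)

def selectivity_index(correct, totals):
    """T = (R_o - R_u) / N * 100, halves formed by overall exam score."""
    ranked = sorted(correct, key=lambda p: totals[p])      # ascending totals
    n = len(ranked)
    lower, upper = ranked[: n // 2], ranked[n // 2:]
    r_u = sum(correct[p] for p in lower)                   # correct in lower half
    r_o = sum(correct[p] for p in upper)                   # correct in upper half
    return (r_o - r_u) / n * 100

correct = {"p1": 1, "p2": 0, "p3": 1, "p4": 1, "p5": 0, "p6": 1}
totals  = {"p1": 80, "p2": 35, "p3": 60, "p4": 90, "p5": 40, "p6": 55}

print(difficulty_index(correct))            # 4 of 6 solved -> 66.7
print(selectivity_index(correct, totals))   # (3 - 1) / 6 * 100 -> 33.3
```

With the formula as reconstructed here, T reaches its maximum of 50 when everyone in the upper half and no one in the lower half solves the task.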
In addition to the ideal line ‘1’, curves ‘2’ and ‘3’ show a course that can be achieved empirically. This is due to the fact that, in the practical application of multiple-choice tasks, there are always candidates in the lower group who are able to solve individual difficult tasks and, conversely, members of the upper group who are occasionally unable to solve even easy tasks.

Fig. 5.20 Dependence of selectivity on difficulty index (Lüdtke, 1974)
The standard work on ‘Forschungsmethoden und Evaluation in den Sozial- und
Humanwissenschaften’ [Research methods and evaluation in social and human
sciences] by Bortz and Döring (2003) states: ‘For very easy and very difficult
items one will have to accept [...] losses in selectivity. Items with medium difficulties
now have the highest selectivity’ (ibid., 219).
Their conclusion is: ‘In principle, the greatest possible selectivity is desirable’ (ibid., 219). And these high selectivity values are achieved if the test tasks are constructed in such a way that they ‘ideally’ lie at a medium degree of difficulty (P = 50) or have a degree of difficulty between 30 and 70, or between 20 and 80 (ibid., 218).
For examination tasks that are more difficult or easier, the selectivity index would
be too low to distinguish between ‘good’ and ‘weak’ examination participants.
SCHELTEN therefore comes to the conclusion that test tasks that fall outside the
framework thus defined ‘must be completely revised or replaced by new ones’
(Schelten, 1997, 135). It is therefore not important in this form of standardised test
questions to check whether a participant has a specific professional ability—in this
case, it would depend on the validity of the test question—but to construct test
questions in such a way that the given bandwidth of the degree of difficulty and a
correspondingly high selectivity value are achieved. These values can be achieved,
for example, by adjusting the distractors (the wrong response specifications) for
multiple-choice tasks. This principle of test construction applies to classical test
theory as well as to probabilistic test methods.

In a study commissioned in 1975 by the Federal Institute for Vocational Education and Training (BBF), today’s BIBB, Hermann Rademacker drew attention to the fact that professional skills cannot be tested with standardised test tasks. He illustrated this with an example from pilot training. At the end of the training at a ‘pilot school’, it was regularly checked whether the prospective pilots were able to interpret the display of the artificial horizon correctly. The test task was: ‘For the following display of the artificial horizon, please indicate the flight status of your aircraft!’ Correct answer: ‘Descending in a left turn’ (Rademacker, 1975).
All participants in pilot training regularly solved this task correctly. This is not surprising at all, as reading the artificial horizon is practised on a large number of training flights as well as in the flight simulator.
The instructors (experienced pilots) were (always) very satisfied with this test
result. All student pilots had demonstrated an essential aspect of professional
competence (as pilots).
The psychometric evaluation of the established test procedure came to the conclusion that this task should be removed from the test or reformulated, as it would not meet the quality criteria of the relevant test theory in its present form: the degree of difficulty and the selectivity index would lie outside the permissible limits. The task was changed in such a way that a higher degree of difficulty, and therefore also a sufficiently high selectivity value, was achieved. The reworded task was: ‘Please draw into the illustration (an empty circle symbolising the artificial horizon) the display of the artificial horizon that indicates you are flying a left turn with your plane while ascending.’
A sufficiently large proportion of the prospective pilots now solved the task
incorrectly, although they had all demonstrated the safe and error-free handling of
the artificial horizon during their ‘training flights’ and in the flight simulator.
This example shows that standardised tests are unsuitable for testing professional
competence, as the level of difficulty of the test tasks does not result from the
complexity of the task to be tested, but from the manipulation (e.g. by skilful
formulations) of the wrong answer options. When checking occupational skills, especially those that are safety-relevant, the use of standardised test tasks is not only unsuitable but also impermissible, since the content validity of the test or examination tasks is not given. For example, it is essential that qualified electricians have reliably mastered the VDE safety regulations for the installation of electrical systems. An examination practice that does not verify this involves incalculable risks, as a successful examination also entails the authorisation to install electrical systems.
The examination of professional competence therefore necessarily requires valid
forms of testing and examination in terms of content (see Rademacker, 1975).
If it is not about individual items but about tests with a large item pool, for
example in multiple-choice tests, probabilistic modelling with the help of the Item
Response Theory allows precise statements to be made about the selectivity of entire
tests. At the Chamber of Industry and Commerce’s final examinations, for example,
there was a lack of reliability, particularly in the lower part of the results, which is
however the decisive factor for passing or failing the final examination (Klotz &
Winther, 2012) (Fig. 5.21).
The lack of reliability in the lower range of personal competence is due to the use of only a few very difficult or very easy items. A higher selectivity for the respective degrees of difficulty could be achieved by using a correspondingly higher number of items in these areas. It is also possible to select items individually for each test person from a sufficiently large item pool, with the level of difficulty adapted to the personal competence estimated from the results of the previous items (‘adaptive testing’).
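The idea of ‘adaptive testing’ can be illustrated with a toy simulation; the item pool, the matching rule and the crude up/down ability update are simplifications invented for demonstration (operational adaptive tests estimate ability with item response models).

```python
# Toy sketch of adaptive item selection: after each response, the next item
# is the unused one whose difficulty is closest to the running ability
# estimate. Pool, update rule and simulated test person are all invented.

item_pool = {f"item_{i}": d for i, d in enumerate([10, 25, 40, 50, 60, 75, 90])}

def pick_item(pool, ability):
    """Choose the unused item whose difficulty is closest to the ability estimate."""
    return min(pool, key=lambda name: abs(pool[name] - ability))

def run_adaptive_test(responses, start_ability=50, step=10):
    """responses: callable item_name -> True/False. Returns final ability estimate."""
    pool = dict(item_pool)
    ability = start_ability
    while pool:
        item = pick_item(pool, ability)
        solved = responses(item)
        ability += step if solved else -step   # deliberately crude update
        del pool[item]
    return ability

# A simulated person who solves every item easier than difficulty 55; the
# estimate then oscillates around that threshold.
final = run_adaptive_test(lambda item: item_pool[item] < 55)
print(final)
```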

Fig. 5.21 Capability-specific reliability sum for all test items (Klotz & Winther, 2012, 9)

However, this requires the use of a large number of individual items, and such items cannot depict work process knowledge and contextual understanding: the solution of complex work tasks is split into knowledge of individual activity steps. In this way, not even specialist knowledge can be tested in a content-valid manner. The low confidence of employers in the professional relevance of this type of examination (Weiß, 2011), as well as the trend in modern examination practice to rely on open exemplary tasks in the form of company assignments, are clear arguments against the use of multiple-choice examinations in vocational education and training.
Conclusion The use of standardised test tasks leads to problems with selectivity. If the degree of difficulty of individual tasks is adjusted accordingly, the task thus optimised fails to capture essential contents of professional competence. The optimisation of entire test batteries allows good selectivity values, even if individualised or differentiated according to difficulty ranges. However, this requires dividing tasks into a large number of individual items, which in turn leads to a survey of specialist knowledge, but not to the measurement of professional competence. Here again, a fundamentally problematic test procedure is not improved by optimisation.

5.8.2 Criteria-Oriented Test Tasks

Criteria-oriented test tasks must first of all be valid in terms of content. The criterion
validity of a test is therefore measured by whether the test result correctly predicts the
subsequent behaviour (see above). If, for example, the ability to solve a professional
task in a conceptual-planning manner is tested with open, valid test questions, it is assumed that the test person can also solve this task practically—in the working world or within the framework of a practical examination. Test items that meet this criterion are referred to as criteria-oriented test items. In this respect, criterion validity concretises the overriding criterion of content validity. The test-theoretical literature points out that content validity cannot be determined numerically and that one is therefore ultimately dependent on the technical or didactic competence of the test task developers to assess a test task's content validity.
In the COMET projects carried out and planned so far, vocational competence is
measured towards the end of vocational training—with reference to the occupational
competence to be imparted.
The content dimension of the competence model differentiates between professional fields of action for beginners, advanced beginners, advanced learners and experts. This is the basis for the development of test tasks with which professional development can be recorded (→ 5.1, 5.2).

Difficulty of Tasks in a Cross-Professional Comparison

As COMET projects for different occupations are based on the same competence model and the same principles for designing test tasks (see this chapter), it seems reasonable to assume that test results can also be compared across occupations. If one compares the test results of the model tests (COMET) for the occupations electronics technician (industrial engineering and energy and building technology), industrial engineer and car mechatronic (Fig. 5.22), clear differences in the competence of the occupation-related test groups become apparent.

Fig. 5.22 Competence level of trainees in electronics, industrial engineering and car mechatronics
(COMET projects in Hesse)

Table 5.8 Evaluation of the difficulty of the test tasks for electronics technicians, industrial engineers and car mechatronics by vocational teachers in the fields of electrical engineering and metal technology on a scale from 1 to 10

Test tasks for          | Teachers of electrical engineering | Teachers of metal technology | Teachers of automotive engineering | Σ   | Variance
Electronics technicians | 7.4                                | 6.1                          | 5.0                                | 7.1 | 2.4
Industrial mechanics    | 6.0                                | 6.0                          | 5.8                                | 6.1 | 1.3
Car mechatronics        | 5.0                                | 5.9                          | 5.9                                | 5.4 | 1.7

On the basis of ‘all teachers evaluate all tasks according to their degree of difficulty’, a clear weighting emerges for the ‘degree of difficulty’ of the test tasks. The highest level of difficulty is found in the test tasks for electronics technicians (7.1). The test tasks for industrial engineers have a lower value (6.1), and the test tasks for car mechatronics are classified as significantly less difficult. The relatively high variance is striking, as the ratings vary considerably. This applies both to the evaluation of each individual test task and to the evaluation of test tasks outside the teacher's own occupational field.
A comparison of the levels of difficulty of the individual test tasks as assessed by the teachers with the objective test results, (a) on the basis of all test participants and (b) on the basis of comparable test groups, reveals further conspicuous features. The test results on the one hand and the teachers' assessment of the levels of difficulty of the test tasks on the other are relatively far apart (Table 5.9).

Industrial engineers achieve significantly higher test values than, for example, electronics technicians. This also applies if the test groups of all three occupations are compared with each other on the basis of comparison groups (with comparable previous training). This finding prompted the vocational school teachers involved in these COMET projects to evaluate the difficulty of all test tasks across all three occupations (Table 5.8).
While the total point values (TS) for the individual test tasks in the respective
groups vary by a maximum of 5.5 points, i.e. are relatively close together, the range
of the teachers’ assessment of the level of difficulty is considerably wider.
At the same time, the teachers’ assessments of the difficulty and the actual
number of points achieved by trainees (e.g. T6) differ.
The teachers involved attributed this, among other things, to the fact that the ‘difficulty’ was primarily assessed from the limited perspective of subject-systematic criteria. The concept of ‘holistic task solving’, however, allows the scope for design to be exploited and thus a higher number of points to be achieved, especially for complex tasks.
The inclusion of other professions such as industrial tradesmen, medical specialists and carpenters in the comparisons of the difficulty of the test tasks results in further differentiations (Fig. 5.23).
The very different levels of competence initially confirm the thesis that a cross-
professional comparison must take into account what is to be compared. The
‘difficulty’ of a test or an examination results from the qualification requirements
in the professional fields of action as defined in the job descriptions (the job profiles).
These also represent the profession-specific qualification level. For example, at the
end of their training, the qualification level of industrial tradesmen is rated as equally
high or even higher than that of graduates of relevant bachelor’s degree programmes,
Table 5.9 Total point values (TS) for all test tasks, with the teachers' difficulty ratings

Professions: Electronics technicians, Industrial mechanics, Car mechatronics

Test tasks             | T1   T2   T3   T4   T5   T6   T7   T8   T9   T10  T11  T12  T13  T14
TS                     | 23.1 22.1 26.6 27.1 35.5 40.3 36.3 41.8 34.5 35.8 38.7 32.8 33.2 37.7
Teacher of the subject |
Electrical engineering | 6.5  7.1  7.6  8.5  5.8  6.4  6.8  5.4  5.4  6.8  4.4  6.2  4.3  3.9
Metal technology       | 4.8  6.0  7.0  6.8  5.8  6.2  6.8  5.3  5.8  6.4  6.4  7.0  5.6  4.4
Automotive engineering | 6.0  7.6  7.4  6.6  6.2  7.4  7.2  5.4  8.2  5.7  5.1  6.6  5.1  3.9

Fig. 5.23 Competence level of apprentices in industrial mechanics, industrial tradesmen, medical
assistants and carpenters, COMET projects in NRW

which are formally rated higher according to European standards. Training practice shows that occupations predominantly chosen by high-school graduates have a higher qualification level than occupations predominantly chosen by secondary school students (Fig. 5.24).
The prior schooling of trainees in an occupation therefore also reflects its level of requirements (qualification level). This is expressed in the saying that industrial clerk is a typical occupation for high-school graduates. From this perspective, the training occupations can be distinguished according to their ‘difficulty’. The wording ‘degree of difficulty’ is avoided here, as calculating a degree of difficulty for professions would not do justice to the complexity of the occupational concept.
Howard Gardner has pointed out that each profession has its own quality: ‘Take a
journey through the world in spirit and look at all the roles and professions that have
been respected in different times and cultures. Think of hunters, fishermen, farmers,
shamans (...) sportsmen, artists (...) parents and scientists (...). If we want to grasp
the whole complex of human cognition, I think we must consider a far more
comprehensive arsenal of competencies than usual’ (Gardner, 1991, 9). With the
concept of multiple intelligence, Gardner tries to meet the variety of different
abilities, which also find their expression in the competence profiles of different
professions. The various profiles of intelligence and skills expressed in the

Fig. 5.24 Competence level of apprentices for industrial clerks and electronics technicians for
energy and building technology

occupations are superimposed in the practice of vocational education and training by the development of occupations with different levels of qualification. For example, the two-year assistant occupations (e.g. nursing assistants) are regarded as occupations with lower qualification requirements and thus also as less difficult to learn.
Subjectively, the level of difficulty of learning the occupation of industrial clerk (IC) will probably be experienced by a high-school graduate as just as appropriate as learning a two-year assistant occupation is by a trainee with a weak secondary school leaving certificate. According to the Vocational Training Act, vocational training follows full-time compulsory schooling; a differentiation of occupations according to previous schooling is not envisaged. The ‘two-year occupations’ are an exception. They are considered ‘theory-reduced’. In Switzerland, the few trainees in such occupations receive a certificate that enables them to continue their training in a related ‘fully fledged’ occupation. The German BBiG does not provide for this differentiation. Informally, these occupations are considered ‘training occupations’.

Test-Theoretical Problems for Vocational Education

1. Competence diagnostics (and testing) generally relate to established training occupations.
In countries with a dual vocational training system, the job profiles are ‘classified’ on
the basis of statutory regulations. Different levels of requirements can at best be seen
in the length of training. Four years of training (as is the case in Switzerland and
Austria, for example) is considered an indication of a ‘more difficult’ profession than
a profession with a three-year training period. The reference point for the develop-
ment of test tasks, with which the employability is examined, is the competences
defined in the job description.
In international comparisons such as the International World Skills (IWS), the
professional experts of the participating countries agree on the fields of action relevant
to professional practice and the criteria of professional competence for the respective
profession. These are the basis for the formulation of complex competitive tasks
(Hoey, 2009, 283 ff.). A similar procedure was developed for the international
comparative COMET projects. On this basis, the professional capacity can be checked
at the end of the vocational training. The possibility of measuring competence during
the course of vocational training (for beginners, advanced beginners, advanced
learners and experts) is also possible in principle. However, difficulties always arise
when vocational training courses are included which differ in the content structure of
the curricula/training regulations. This is the case, for example, if the development of
competences during a training course structured according to learning fields is to be
compared with a training course structured according to a subject system.
This difference is irrelevant for the verification of employability at the end of
training. Competence diagnostics makes it possible to compare programmes with
different curricular structures if they pursue the goal of promoting the trainees/
students on their way to professional competence.
2. The ‘level of difficulty’ of the test items is of secondary importance for open test tasks.
The decisive criterion for the quality of open test tasks is their professional
validity and therefore their authenticity and their representativeness for a profes-
sion’s fields of action.
The competence level of a test participant therefore does not depend on the level
of difficulty of the test tasks, but firstly on the ability to solve the open (complex) test
tasks completely and secondly on the professional justification of the solutions of a
test task. A distinction is made between the level of action-guiding, action-declarative and action-reflecting work process knowledge (→ 3.5.4).
This difficulty component is also not a characteristic of the test item, but an ability
characteristic. The test task therefore always contains the request to provide detailed
and comprehensive reasons for the task’s solution.
This concept of difficulty is realistic because the test tasks can be solved not only
at different competence levels, but also at different levels of knowledge.
At the first level of knowledge, it is only important that the future specialists
(completely) solve or process the tasks assigned to them on the basis of the rules and
regulations to be observed. In companies with a flat organisational structure, in
which a high degree of responsibility and quality assurance is shifted to the directly
value-adding work processes, it can be assumed that the specialists can understand
and explain what they are doing.
Equally typical are situations in which, for example, journeymen from an SUC
company advise their customers on the modernisation of a bathroom or heating system
(at the level of action-reflecting knowledge) in such a way that they have the oppor-
tunity to make a well-founded choice between alternative modernisation variants.

Test situations are also conceivable in which technical college or university students from relevant fields document their specialist knowledge beyond initial
vocational training when explaining their proposed solution. In an ability-based test
concept such as COMET, the level of the test result does not therefore depend on the
‘degree of difficulty’ and the selectivity of test items, but on the test subjects’ ability
to solve the complex, open task solution more or less completely and to justify it
comprehensively.
The ‘degree of difficulty’ of a test task for a test population results solely from the
validity of the situation description and the guiding idea for vocational training:
the ability to participate in shaping the working world and society in a socially and ecologically responsible manner (KMK, 1991, 1999). The guiding principle of
design competence places high demands on vocational education and training. The
COMET test procedure allows the measurement of the degree to which this guiding
principle is implemented in vocational training practice.
3. Inter-occupational comparisons of the ‘level of difficulty’ of test tasks
Assuming that the procedures described and justified in this chapter are used as a
basis for the development of the COMET test tasks, the test results represent (in the
case of a representative test) the competence level and competence profile of the test
population as well as the heterogeneity of the competence level of the test partici-
pants in and between the test groups. This shows whether and at what level the
prospective skilled workers have attained professional competence. This result says
something about how difficult this profession is to learn. For example, the result of
the second main test of a longitudinal study of nursing school students (Switzerland)
shows a very high proportion of students who achieve the highest level of compe-
tence compared to other occupations (Fig. 5.25).

Fig. 5.25 Competence distribution of nursing occupations Switzerland, second main test 2014
(On differentiating competence levels according to knowledge level → 8.2.2)

Students and lecturers had the opportunity to reflect on the weaknesses of their
training identified in the first main test—1 year earlier—and to introduce more forms
of learning according to the learning field concept.
The context analysis, the project results, the longitudinal study and the results of
the feedback discussions with this project group (the coordinators of the VET centres
involved in the test) were the basis for the interpretation of the test result.
1. The test tasks are based on an authentic, valid and representative description of
the situation (result of the pre-test).
2. The test tasks are classified as adequate and demanding by the lecturers/subject
teachers and the students.
3. The high level of competence that is achieved with the three-year dual technical
college course is an expression of the high quality of this course (Gäumann-Felix
& Hofer, 2015; Rauner, Piening, & Bachmann, 2015d).
This also means that the very high proportion of test participants (representative of the test population) who achieve a high or very high level of competence cannot be interpreted as an indicator of a low level of difficulty of the test tasks. In a
capability-based test concept, the test authors must agree on the formulation of the
solution space and the raters on the rating criteria for the rating items.
When formulating the solution spaces, it is important to define the space of
possible task solutions in relation to all relevant solution criteria. The authors of
the test tasks and the solution spaces are oriented to their picture of the primary test
population to be tested. If, for example, the authors (teachers) teach both trainees and
technical college students and if the primary test population is not explicitly defined,
a requirement level can subjectively arise that is at the level of the technical college
rather than that of the dual courses of education—or vice versa. Therefore, it is
necessary to accurately define the primary test population. Difficulties arise in
international comparative projects if, for the same professional activities (e.g. for
nursing professionals) in the participating countries, training is provided in upper-
secondary (dual and vocational schools), post-secondary (technical colleges) or
tertiary education programmes.
In these cases, inaccuracies can only be avoided by carrying out the rater training
on the basis of the solution examples and the reference values of the rater training for
the primary test group. Only then is there certainty that the raters apply the same
standards in interpreting the rating items in relation to the individual test items.
If this prerequisite is met, then different courses of education at different quali-
fication levels which qualify (are to qualify) for the same or a comparable profes-
sional activity can be compared with each other.
Cross-professional comparisons of competence levels as a yardstick for the
difficulty of the test tasks, on the other hand, are only possible to a very limited
extent.

The empirical data available on this subject show that assessment standards,
which are characterised by different vocational training traditions, often lead to a
wide divergence in the assessments of the raters at the beginning of rating training.
The rater training allows a final evaluation of the task solutions on a high level of
agreement.

5.8.3 The Variation Coefficient V: A Benchmark for the Homogeneity of the Task Solution

The quality of a COMET test task is proven in its potential to measure the degree of
completeness and homogeneity of professional competence.
To quantify more or less complete task solutions, the variation coefficient V is
determined.

V = Standard Deviation (C1 .. C8) / Mean (C1 .. C8)

V is a benchmark for the degree of homogeneity of the task solutions. It is calculated by dividing the standard deviation of the eight competence values by the mean value of the competence values 1–8, based on the sub-competences valid
the mean value of the competence values 1–8, based on the sub-competences valid
for a test task. If a situation description (of a test task) contains the potential for a
homogeneous task solution, then it is suitable for measuring competence profiles: the
ability for a complete task solution. This prerequisite was met for the test tasks used
to measure the competence profiles in Fig. 5.26.

Fig. 5.26 Example of a homogeneous and inhomogeneous competence profile of two commercial
professions
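The calculation of V defined above can be sketched in a few lines. The two competence profiles are invented illustrative values, not COMET data, and the population form of the standard deviation is used here (the choice between population and sample form is an assumption of this sketch):

```python
from statistics import mean, pstdev

def variation_coefficient(scores):
    """Coefficient V: standard deviation of the eight sub-competence
    scores (C1..C8) divided by their mean. A small V indicates a
    homogeneous (i.e. complete) task solution."""
    return pstdev(scores) / mean(scores)

# Illustrative competence profiles (C1..C8), not real COMET data
homogeneous = [2.1, 2.0, 2.2, 1.9, 2.0, 2.1, 2.0, 1.9]
inhomogeneous = [3.0, 0.4, 2.8, 0.5, 2.9, 0.6, 2.7, 0.3]

print(round(variation_coefficient(homogeneous), 2))
print(round(variation_coefficient(inhomogeneous), 2))
```

A profile with nearly equal sub-competence values yields a V close to zero, while a strongly unbalanced profile yields a much larger V.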

5.8.4 Conclusion

A summary of the state of the scientific discussion on the ‘degree of difficulty’ of test tasks in COMET projects yields the following findings.
1. The COMET procedure of competence diagnostics is not a difficulty level test
procedure, but an ability-based test procedure.
2. The competence levels of test groups of different professions are an expression of
• job-specific requirement levels
• different (scholastic) backgrounds
• different training periods (2-, 3-, 4-year courses)
• the more or less successful implementation of the guiding principle ‘shaping
competence’ (the learning field concept).
Chapter 6
Psychometric Evaluation of the Competence and Measurement Model for Vocational Education and Training: COMET

6.1 What Makes it So Difficult to Measure Professional Competence?

The complexity of professional tasks usually requires a holistic solution. Specifically, this means that different competences must interact successfully to constitute
the professional competence together. One possible description of these compe-
tences is the eight criteria of the complete task solution (clarity/presentation, func-
tionality, sustainability, economic efficiency, work and business process orientation,
social compatibility, environmental compatibility and creativity), which have frequently been described above (→ 4).
The most important question for the measurement of professional competences is
how the interaction of these eight criteria can be described mathematically and
transferred into a suitable measurement model (Suppes & Zinnes, 1963). Mathematically, the interaction of several criteria (→ 6.3) must be described as a possible
interaction of a higher order. If, for example, four criteria are involved, then this
means that all possible interactions of these four criteria must also be considered
(cf. Martens & Rost, 2009).
The theoretical requirement of a holistic task solution therefore determines the
search direction for the description of a suitable measurement model. The first step
is to check which higher-order interactions can be identified. Such complex
models can be systematically simplified if theoretically and mathematically feasi-
ble. The simplification then facilitates the representation and interpretation of such
models. Thus, if an empirically tested measurement model can represent higher-
order interactions, which can subsequently not be empirically identified, then the
simpler measurement model, which excludes exactly these interactions, can also
be used.
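For illustration, the interaction terms to be considered in the four-criteria case mentioned above can be enumerated as every non-empty subset of the criteria. This is a sketch for counting terms, not part of the COMET procedure itself:

```python
from itertools import combinations

def interaction_terms(criteria):
    """All main effects and higher-order interactions:
    every non-empty subset of the given criteria."""
    terms = []
    for order in range(1, len(criteria) + 1):
        terms.extend(combinations(criteria, order))
    return terms

terms = interaction_terms(["C1", "C2", "C3", "C4"])
# 4 main effects + 6 two-way + 4 three-way + 1 four-way interaction
print(len(terms))  # → 15
```

With four criteria, a model allowing all interactions therefore has to accommodate fifteen terms, eleven of which are interactions of second or higher order.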
Unfortunately, this search direction cannot be reversed. It is not possible to
conclude from the fit of a model that such higher-order interactions can be excluded

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 147
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_6

a priori. This statement can be supported by two main arguments. There is always
a—possibly very small—chance that a better measurement model can be found.
Moreover, it is not possible to define clear criteria for the fit of measurement models.
These criteria are also subject to normative assumptions which, however, cannot be
discussed here (cf. Bozdogan, 1987). Therefore, if the theoretical assumptions
require it, at least an active search should be made for a model that can mathemat-
ically represent the ordered higher-order interactions.
If higher-order interactions can actually be assumed, then this would mean that
simpler models pretend comparability, for example, of the persons investigated,
which is not given. Higher-order interactions result in qualitatively different capa-
bility profiles that are no longer directly comparable (Erdwien & Martens, 2009). In
summary, it can be said that the interaction of the eight criteria of the complete task
solution determines the search direction for the measurement model to be identified:
from simple to complex.
A further argument that specifies this search direction results from the rough
allocation of the eight criteria to three levels of professional competence (Rauner,
Grollmann, & Martens, 2007). This allocation gives rise to two possible develop-
ment paths for professional expertise: a gradual development in which the three
levels develop in succession and a simultaneous development with a more or less
continuous increase in all three areas. If both development paths are to be allowed,
this also has implications for the selection of a suitable measurement model. In
particular, the idea of the successive training of competence levels gives rise to
different qualitative competence profiles, which in turn can only be described by
allowing higher-order interactions.
It must be noted at this point that there can be no clear identification of a
measurement model. Ultimately, one has to decide on a suitable model with a
transparent presentation of the corresponding selection criteria. It is often necessary
to balance the contradictory forms of selection criteria against each other.
In the following, we will therefore examine which statistical approaches and
measurement models are statistically suitable to meet the requirements
discussed here.

6.1.1 Procedures Based on the Analysis of the Covariance Matrix

Factor analyses and many related procedures are usually based on an analysis of the
covariance matrix, i.e. on a correlative relationship of two variables each. Histori-
cally, the continuing popularity of these methods is mainly due to the fact that they
could be calculated easily and without the help of computers. However, this math-
ematical simplicity is achieved with a severely limited informative value of the
resulting models. The data structure based on the covariance matrix, the bivariate
network of relationships, excludes higher-order interactions between the variables a
priori. The corresponding models are therefore only conditionally suitable for the

analysis of professional competences. Factor analyses are—in relation to the covariance matrix—the attempt to bundle variance components and thus present them
more simply. The structural statements from these simplification processes also
strongly depend on the characteristics of the respective sample. Even minor changes
in the composition of the sample can change the factor structure, for example. This is a mathematical necessity: changes in the covariance matrix must lead directly to a change in the higher-level factors of the factor analysis. Such direct dependence
requires careful sampling. An unstable factor structure can already be expected if, for
example, part of the sample drawn is not taken into account. For example, if certain
differences in the sample are systematically reduced, this can lead to an apparent
increase in the resulting number of factors, because a general factor can combine less
variance. This is an important difference to the family of mixed distribution
methods—these are largely structure-invariant to changes in the sample.
The empirical results of the procedures based on a covariance matrix must
therefore be very carefully interpreted with regard to the measurement of vocational
competences; this particularly concerns Sects. 6.4, 6.5 and 6.6. The disregard for higher-order interactions and the sample dependence of the results limit the informative value of the results.
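The sample dependence described here can be illustrated with synthetic data: restricting the sample (here, keeping only the cases below the median of one variable) directly changes the covariance on which any factor solution is built. All values are simulated for illustration; nothing here is COMET data:

```python
import random

def cov(a, b):
    """Sample covariance of two equally long lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

random.seed(0)
x = [random.gauss(0, 1) for _ in range(200)]
y = [0.8 * xi + random.gauss(0, 0.6) for xi in x]  # correlated variable

full = cov(x, y)

# Systematically reduce differences in the sample: keep only the
# cases with x below the median (range restriction)
cut = sorted(x)[len(x) // 2]
sub = [(xi, yi) for xi, yi in zip(x, y) if xi < cut]
restricted = cov([p[0] for p in sub], [p[1] for p in sub])

print(round(full, 3), round(restricted, 3))
```

The covariance in the restricted subsample is markedly smaller than in the full sample, which is the mechanism behind the apparent change of factor structures under changed sample composition.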

6.1.2 Mixed Distribution Models

Mixed distribution models separate the population to be analysed into homogeneous subpopulations, whereby methods can be distinguished in which the persons in a
subpopulation can be further differentiated (the Mixed Rasch Model) or it is simply
assumed that persons assigned to a common population also have the same personal
characteristics (the Latent Class Analysis). These methods generally meet the theo-
retical measurement requirements resulting from a holistic task solution. The sepa-
ration into subpopulations mathematically corresponds to a simultaneous
consideration of all interactions of higher order. Some disadvantages of these pro-
cedures must of course be specified:
1. Persons assigned to different subpopulations may no longer be directly compa-
rable. This would restrict some practical applications, such as selection decisions,
since the profiles can no longer be reduced to a single criterion. Persons can only
be directly compared if the competence profiles of the assigned subpopulations
run in parallel and without overlaps.
2. In addition, the statistical determination of the most suitable parameters is
challenging. While the selection of most model parameters can follow practical
considerations, the number of subpopulations in particular can lead to an
improved fit between model and data. This relationship naturally applies to all
measurement models: if the number of model parameters is increased, the data fit
must improve. In this case, the use of so-called FIT indices such as the AIC (see
Bozdogan, 1987) only helps to a limited extent, because the use for additionally

invested parameters is of course a normative setting. A possible way out is the use
of bootstrapping (Efron & Tibshirani, 1994; von Davier, 1997; von Davier &
Rost, 1996), in which a test distribution is generated using artificial data sets
assuming the validity of the model. Thus, the fit of the empirical data set can be
roughly estimated in comparison with the artificial data sets under model validity.
Ultimately, however, one should not rely on a criterion for the quality of the
model fit. Careful consideration of several criteria with reference to the theoretical
foundations certainly makes sense. For example, the mean class affiliation prob-
ability is a good measure of the stability of a solution, even if this measure can
only be used to compare different solutions with each other to a limited extent.
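The bootstrapping idea referred to above can be sketched with a deliberately simple stand-in model: a single solution probability for dichotomous data. The data are invented, and the one-parameter model is only a placeholder for the actual measurement model:

```python
import math
import random

random.seed(1)

def loglik(data, p):
    """Log-likelihood of dichotomous data under a one-parameter model."""
    return sum(math.log(p if x == 1 else 1 - p) for x in data)

# Invented empirical data (1 = task solved), not COMET data
data = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
p_hat = sum(data) / len(data)
empirical_fit = loglik(data, p_hat)

# Generate artificial data sets assuming the fitted model is valid,
# refit the model and collect the resulting fit statistics
fits = []
for _ in range(500):
    sim = [1 if random.random() < p_hat else 0 for _ in data]
    p_sim = min(max(sum(sim) / len(sim), 1e-6), 1 - 1e-6)  # avoid log(0)
    fits.append(loglik(sim, p_sim))

# Rough estimate: share of artificial data sets that fit no better
# than the empirical data set under model validity
share = sum(f <= empirical_fit for f in fits) / len(fits)
print(round(share, 2))
```

If the empirical fit statistic falls far into the tail of the simulated distribution, the model fit is questionable; as the text notes, this only gives a rough estimate, not a strict criterion.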

6.2 Ensuring the Interrater Reliability of the COMET Test Procedure

Reliability is the degree of measurement accuracy of a test procedure. If a test procedure is based on a rating procedure involving a large number of raters,
interrater reliability is regarded as an indicator of the reliability of the test
procedure.

The determination of reliability in the measurement of occupational competence with open complex test tasks (as in the COMET project) is particularly relevant. If it
can be proven that a very high interrater reliability is also achieved for international
comparative surveys (here Germany, China, South Africa), then this is an important
indicator for the quality of the test procedure (see Zhuang & Ji, 2015). The basis for
achieving high reliability in the COMET project is a tried and tested concept for rater
training (Haasler & Erdwien, 2009; Martens et al., 2011) and the development of
solution scopes for the test tasks by the authors of the test tasks (cf. Martens et al.,
2011, 104 ff.).

6.2.1 Example: Securing Interrater Reliability (COMET Vol. I, Sect. 4.2, Birgitt Erdwien, Bernd Haasler)

In order to put the proof of interrater reliability on a solid basis, a sample was drawn
beforehand from the test person solutions of the main test, which was submitted to all
18 raters for evaluation. For each of the four test tasks, two test person solutions were included in the rating. Each rater from the team was therefore
confronted with 8 solution variants of test persons, which had to be evaluated and
assessed. The advance rating therefore consisted of a database of 144 individual
ratings.

As a result of the response format for the evaluation of the test tasks and the
number of raters, the Finn coefficient was chosen as a quality measure for the
evaluation of the interrater reliability. Strictly speaking, this is a measure that
requires an interval scale level and for its use, the data must meet the requirements
for calculating variance analyses. Although the available data are ordinally scaled
rating data, a rating scale can be treated as an interval scale, provided that the
characteristic values are equidistant, and the numbering of the different characteristic
values therefore differs equidistantly. In addition, Bortz and Döring (2002, 180 f.)
point out that the mathematical requirements for the analysis of variance say nothing
about the scale properties of the data. This means that parametric procedures can be
used even if the data are not exactly interval-scaled, provided that the other pre-
requisites for carrying out the procedure are met.
The criteria—(a) independence of observations, (b) normal distribution of obser-
vations and (c) homogeneity of variances—are the main prerequisites for the feasi-
bility of a variance analysis. The violation of the criterion of independence leads to
serious consequences, whereas the analysis of variance is robust against a violation
of the normal distribution or variance homogeneity criterion.
In the present case, the observations are independent: vocational schools are only
the functional units in which all students can be tested together. However, practical
vocational skills are primarily developed in training companies due to the dual
organisation of training, so that independence is ensured due to the allocation of pupils
to different training companies. In addition, the vocational students of the various
vocational school classes solve the tasks assigned to them individually, whereby all
four test tasks are distributed equally among the students within a respective vocational
school class, which prevents attempts at cooperative work or ‘copying’. The raters, too, evaluate the task solutions strictly independently of each other; at no time during the assessment process do they confer with one another.
An explorative data analysis also showed that 33 of the 40 items meet the criterion
of variance homogeneity. The seven non-variance homogenous items were neverthe-
less included in the main analysis; however, when the overall data set was available,
they were subjected to another critical examination with regard to their variance
homogeneity before further, constructive analyses were carried out. The interrater
reliabilities were calculated both including and excluding these seven items.
To test empirical data for the presence of a normal distribution, various graphical
(e.g. histograms, P-P plots) as well as statistical (e.g. Shapiro–Wilk test,
Kolmogorov–Smirnov test) methods can be used. In this study, so-called P-P plots
were generated. These represent the expected cumulative frequencies as a function
of the actual cumulative frequencies. The normal distribution probability was cal-
culated using the Blom method. Figure 6.1 gives an example of the P-P plots
generated in this way.
While the P-P plots indicate the existence of normally distributed data, this could
not be proven by statistical methods. Due to the robustness of a variance analysis
against the violation of the criterion of the normal distribution, the use of the Finn
coefficient for the interrater reliability calculation was, however, not discarded.
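A P-P plot of this kind can be constructed, for example, as follows. Blom’s plotting positions (i - 3/8)/(n + 1/4) serve as the expected cumulative proportions, plotted against the normal CDF of the standardised sample values; the data are invented, and the assignment of the two axes follows one common convention rather than the exact SPSS output:

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function via erf."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def pp_points(sample):
    """Coordinate pairs of a normal P-P plot: Blom plotting positions
    (i - 3/8)/(n + 1/4) for the sorted sample against the normal CDF
    of the standardised values."""
    n = len(sample)
    m = sum(sample) / n
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    pairs = []
    for i, x in enumerate(sorted(sample), start=1):
        expected = (i - 3 / 8) / (n + 1 / 4)  # Blom plotting position
        observed = normal_cdf((x - m) / s)    # theoretical cumulative probability
        pairs.append((observed, expected))
    return pairs

# Invented rating values for illustration
ratings = [1.2, 0.8, 1.9, 1.4, 1.1, 0.6, 1.7, 1.3, 1.0, 1.5]
for obs, exp in pp_points(ratings):
    print(f"{obs:.3f}  {exp:.3f}")
```

If the sample is approximately normally distributed, the point pairs lie close to the diagonal.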
Fig. 6.1 P-P plot for the evaluation of the item ‘Is the solution functional?’

With regard to the use of the Finn coefficient, it should be noted that it is generally considered to be a lenient measure. It carries the risk that the reliability it indicates may positively distort the degree of actual agreement between the raters. Therefore, the
calculation of the intraclass correlation (ICC) as a stricter valuation method had to be
considered to prove interrater reliability. However, here again it proves to be a
problem that only a small variance of the item mean values leads to the fact that
‘no or no significant reliability can be measured with the ICC for a measuring
instrument’ (Wirtz & Caspar, 2002, 161). Although a lower ICC value is considered
acceptable in this case, it is difficult to determine a clear anchor point of the boundary
between satisfactory and inadequate interrater reliability. Compared to the ICC, the
Finn coefficient is ‘apparently independent of the variance of item averages’
(Asendorpf & Wallbott, 1979, 245). The variances of the item averages in the
available data prove to be small to very small. They range between 0.00 and 1.02.
The mean dispersion is 0.37. Therefore, the Finn coefficient is useful here as a
reliability measure.
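In one common formulation (following Finn, 1970), the coefficient relates the observed within-subject rating variance to the variance expected if raters assigned the k scale categories at random, (k² - 1)/12. A minimal sketch with invented ratings, not COMET data:

```python
def finn_coefficient(ratings, k):
    """Finn coefficient: 1 minus the ratio of the mean within-subject
    rating variance to the variance expected under random use of the
    k scale categories, (k**2 - 1) / 12. `ratings` holds one list of
    rater scores per subject."""
    chance_var = (k ** 2 - 1) / 12
    within = []
    for scores in ratings:
        m = sum(scores) / len(scores)
        within.append(sum((s - m) ** 2 for s in scores) / (len(scores) - 1))
    return 1 - (sum(within) / len(within)) / chance_var

# Invented ratings of three task solutions by four raters (0..3 scale)
ratings = [
    [2, 2, 3, 2],
    [1, 1, 1, 2],
    [3, 3, 3, 3],
]
print(round(finn_coefficient(ratings, k=4), 3))  # → 0.867
```

Perfect agreement yields a value of 1.0; the smaller the within-subject disagreement relative to the chance variance, the closer the coefficient approaches 1.0.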
Several models are available for calculating reliability with the Finn coefficient.
Since, for the purpose of reliability determination,
1. each subject is assessed on the basis of 40 items, and
2. each rater of the entire rater team makes these assessments, i.e. the raters are not randomly selected from a larger rater population,

a two-factor model of reliability measurement (rater fixed) is selected (cf. Shrout &
Fleiss, 1979). Furthermore, the decision must be made whether an unadjusted or an
adjusted reliability should be used as a measure. The unadjusted reliability reflects
the degree of agreement between the raters, while the adjusted reliability does not
take average differences between the raters into account as a source of error and thus
does not take the personal frame of reference of the raters into account. According to
Wirtz and Caspar (2002) (related to the ICC) and Shrout and Fleiss (1979), a
decision criterion for the use of the unadjusted or adjusted reliability calculation
lies in the properties of the rater test. Since all raters are to assess all subjects and the
reliability statement is to apply exclusively to the raters belonging to the sample, an
adjusted value can be used as a reliability measure in this case.
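The adjusted (consistency) variant described here corresponds to ICC(3,1) in the notation of Shrout and Fleiss (1979): a two-way model with fixed raters in which mean differences between raters are not counted as error. A sketch with invented data:

```python
def icc_consistency(table):
    """ICC(3,1), Shrout & Fleiss (1979): (BMS - EMS) / (BMS + (k-1)*EMS)
    from a two-way subjects-by-raters ANOVA decomposition.
    table[i][j] is the rating of subject i by rater j."""
    n, k = len(table), len(table[0])
    g = sum(sum(row) for row in table) / (n * k)
    row_means = [sum(row) / k for row in table]
    col_means = [sum(table[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - g) ** 2 for m in row_means)
    ss_cols = n * sum((m - g) ** 2 for m in col_means)
    ss_total = sum((x - g) ** 2 for row in table for x in row)
    bms = ss_rows / (n - 1)                                      # between subjects
    ems = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))   # residual error
    return (bms - ems) / (bms + (k - 1) * ems)

# Invented ratings: four subjects, three fixed raters
table = [
    [2, 3, 2],
    [1, 2, 1],
    [3, 3, 3],
    [0, 1, 1],
]
print(round(icc_consistency(table), 3))  # → 0.881
```

Because rater mean differences are excluded from the error term, raters who differ only by a constant offset still yield a consistency ICC of 1.0.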
A reliable assessment can be assumed if the differences between the subjects
(here the pupils) are relatively large and the variance between the observers with
respect to the subjects is relatively small. The Finn coefficient can accept values
between 0 and 1.0. A value of 0.0 expresses that there is no correlation between rater
assessments, while a value of 1.0 arises when the raters have both equal mean values
and equal variances. The closer the value is to 1.0, the higher the reliability of the
assessments. For the Finn coefficient, values from 0.5 to 0.7 can be described as satisfactory and values above 0.7 as good. Because of the coefficient’s comparatively low stringency, only Finn values indicating high interrater reliability are considered acceptable in the present study; i.e., only Finn values of at least 0.7 are considered sufficient.
The following table shows the results of the reliability calculations for the eight test person solutions that were given to the 18 raters for evaluation following the rater training; the values are displayed both including and excluding the seven non-variance-homogeneous items (Table 6.1).

Table 6.1 Interrater reliabilities in the advance sample of the population

Subject code   Test task           Finn_just            Finn_just
                                   (all 40 items)       (excluding the seven non-
                                                        variance-homogeneous items)
H0102          Skylight control    0.70                 0.71
H0265          Skylight control    0.76                 0.75
H0225          Signals             0.80                 0.79
H0282          Signals             0.78                 0.78
H0176          Drying space        0.74                 0.74
H0234          Drying space        0.71                 0.71
H0336          Pebble treatment    0.87                 0.86
H0047          Pebble treatment    0.73                 0.73

This shows that all Finn coefficients range within high reliability; i.e., the target criterion of 0.7 defined for this study is achieved or exceeded. Even if the seven non-variance-homogeneous items are excluded from the calculations, the results are largely stable. All in all, the interrater reliabilities can be described as satisfactory.

6.3 Latent Class Analysis of the COMET Competence and Measurement Model

Thomas Martens

The psychometric review of a competence model aims to examine whether and how
it has been possible to translate the theoretically and normatively based competence
model into a measurement model. Thomas Martens and Jürgen Rost point to the
complexity of the psychometric evaluation of a competence and measurement model
for vocational education and training and to the state of psychometric evaluation
practice, which is particularly challenged here. They describe the ‘approximation
process’ between the theoretical creation of models and their gradual verification
with various measurement models (Martens & Rost, 2009, 96 ff; Rost, 2004b):
• Competence models as psychometric models of one-dimensional competence
levels.
• Competency models as psychometric models of cognitive sub-competences.
• Competence models as psychometric models of level-related processing patterns.
The central object of the first COMET model experiment was the psychometric
examination of the competence model with the aim of developing it into a measure-
ment model (ibid., 95). In COMET Vol. III, MARTENS and others present the
method of an evaluation procedure with which the construct validity of the COMET
test procedure was checked (Martens et al., 2011, Ch. 4.3, 109–126) (Fig. 6.2). The
competence and measurement model based on educational theory and normative
pedagogy has all the required psychometric quality characteristics. In the discussion
about the external validity of the test procedure, it is also a question of whether the
modelling of professional competence has been successful, and whether the profes-
sional skills and knowledge of experts can be adequately represented in their
development from beginner to expert with the competence model. For this purpose,
a comprehensive justification framework was developed in the COMET project, which has proven to be highly compatible in the international comparative studies conducted to date.
To answer the question of how professional competences can be measured, some
fundamental questions of test theory and measurement theory must first be
discussed.
6.3 Latent Class Analysis of the COMET Competence and Measurement Model 155

Criteria       COMET Vol. I (2009)        COMET Vol. II (2009)        COMET Vol. III (2011)

Objectivity    HAASLER, ERDWIEN: Ch. 5    HEINEMANN et al.: Ch. 2     MARTENS et al.: Ch. 4.1
               HAASLER et al.: Ch. 4      RAUNER, MAURER: Ch. 3       MAURER et al.: Ch. 5
                                          PIENING, MAURER: Ch. 4

Reliability    MARTENS, ROST: Ch. 3.5     ERDWIEN, MARTENS: Ch. 3.2   MARTENS et al.: Ch. 4.2
               HAASLER, ERDWIEN:
               Ch. 5.1, 5.2

Validity       RAUNER et al.: Ch. 1, 2    ERDWIEN, MARTENS: Ch. 3.2   RAUNER et al.: Ch. 1, 2
               MARTENS, ROST: Ch. 3.5     RAUNER, MAURER: Ch. 3.1     HEINEMANN et al.: Ch. 3
                                                                      MARTENS et al.: Ch. 4.3

Fig. 6.2 Source references for the quality of the COMET test procedure (COMET Volumes I to IV)

6.3.1 On the Connection Between Test Behaviour and Personal Characteristics

Rost (2004a) suggests that the subject matter of test theory is the inference from
test behaviour to the personal characteristic (Fig. 6.3).
In many practical applications of test theory—especially in psychological test
practice—it is assumed that a test behaviour can be translated directly into an
empirical relative; for example, ‘test task solved’ and ‘test task not solved’ is
converted into ‘1’ and ‘0’ and is then subsequently used as an estimator of a person’s
ability, for example in the dichotomous Rasch model (Rasch, 1960; see Rost, 1999).
However, even this simple relation raises a whole series of questions. Such a
structure implies, for example, the property intended by the test designer that there
can only be two possible outcomes of a test action, i.e. ‘solved’ or ‘not solved’. This
logic, for example, does not map the intermediate steps towards the result of the
action. How the result of the step came about is simply ignored, for example whether
there could be alternative steps leading to an equivalent result.
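For illustration, the dichotomous case described above can be made concrete with a small sketch (our own illustrative code, not part of any COMET implementation; the grid search is a deliberately crude stand-in for a proper maximum likelihood routine):

```python
import math

def rasch_p(theta, b):
    """P(task solved = 1 | ability theta, item difficulty b) in the
    dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_likelihood(theta, responses, difficulties):
    """Log-likelihood of a 0/1 response pattern for a person with ability theta."""
    ll = 0.0
    for x, b in zip(responses, difficulties):
        p = rasch_p(theta, b)
        ll += math.log(p) if x == 1 else math.log(1.0 - p)
    return ll

def estimate_theta(responses, difficulties, grid=None):
    """Crude ability estimate: maximise the log-likelihood over a grid."""
    grid = grid or [i / 10 for i in range(-40, 41)]
    return max(grid, key=lambda t: log_likelihood(t, responses, difficulties))
```

For the pattern 'solved, solved, not solved' on items of difficulty -1, 0 and 1, the estimate lands near 0.8. Note that a pattern with every item solved has no finite maximum likelihood estimate and would simply run to the edge of the grid, one of the practical complications of this seemingly simple coding.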
From the VET perspective and VET research, such dichotomisation of the test
behaviour into ‘correct’ and ‘wrong’ represents a strong restriction of the validity of
the content. The division into individual and independent steps, which can then be
regarded as ‘solved’ or ‘not solved’ in the sense of a single test action, also seems
hardly possible in many contexts of vocational education and training.
156 6 Psychometric Evaluation of the Competence and Measurement Model for. . .

Fig. 6.3 Connection between personal characteristic and test behaviour according to Rost (2004a)

6.3.2 On the Relationship Between the Structure and Modelling of Vocational Competences

This introductory consideration should have made it clear that the inference from
test behaviour to characteristics of professional competence is by no means trivial.
Basically, this is a fitting problem between empiricism and theory (see also
COMET Volume I: Martens & Rost, 2009). Steyer and Eyd (2003, 285) explain:
‘Measurement models therefore have the task of explaining the logical structure of
theoretical quantities and their combination with empirical quantities.’
In particular, therefore, this is a matter of
(a) Theoretical assumptions relating to a structure of professional competence;
(b) Mathematical relationships that can be described by a measurement model.
Theoretical model (a) and measurement model (b) should be linked in such a way
that one structure can be transferred to the other. According to Suppes and Zinnes
(1963), this should be as isomorphic and unique as possible.
A number of desirable properties can be added to both the theoretical model and
the measurement model.

6.3.3 Mathematical Properties of a Test Model

The desirable properties of a test model cannot be discussed in detail here (see, for
example, von Davier & Carstensen, 2007). The characteristic that is particularly
controversial from the perspective of vocational education and training is the idea of
items that are independent of one another and measure the same personal
characteristic. Conceptually, this is described in classical test theory (CTT) as
'essential tau equivalence' and in probabilistic test theory (PTT) as 'local stochastic
independence' (cf. Rost, 1999). This property of a test or a test model makes it
possible to exchange test questions or test items, or to shorten tests at some cost in
reliability. This test

property is also the basis for the development of adaptive tests, in which selected test
items are presented according to personality and test progress.
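The item selection logic of such adaptive tests can be illustrated minimally: under the Rasch model, the most informative next item is the one whose difficulty lies closest to the current ability estimate (a sketch under this standard assumption; the function and variable names are our own):

```python
def next_item(theta_hat, item_bank, administered):
    """Pick the unanswered item whose difficulty is closest to the current
    ability estimate -- under the Rasch model this is the item with maximal
    Fisher information. `item_bank` holds item difficulties; `administered`
    holds the indices of items already presented."""
    candidates = [(i, b) for i, b in enumerate(item_bank) if i not in administered]
    return min(candidates, key=lambda ib: abs(ib[1] - theta_hat))[0]
```

After each response the ability estimate is updated and the selection repeated, which is why adaptive testing depends so heavily on the local independence property discussed above.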

6.3.4 Characteristics of a Competence Model for Vocational Education and Training

Vocational education and training focuses on the competence to act (cf. COMET
Vol. I, 28). This implies above all competencies for actions in work processes.
Optimal processing usually requires various consecutive steps, which then result
in a work result. This also means that there are usually several different strands of
action that can lead to an equivalent result. Such steps are no longer generally
independent of each other, but build on each other systematically. The artificial
isolation of these steps will therefore generally severely restrict the validity of the
content of the vocational competence model.

6.3.5 Dilemma: Dependent Versus Independent Test Items

This therefore poses a dilemma: on the one hand, the test-pragmatic requirement for
a measurement model that contains independent test items and, on the other hand,
the content-theoretical VET requirement that the individual processing steps of a test
should build on one another.
Before this section outlines possible solutions to this dilemma, we would like to
point out the consequences if there is no convergence between the theoretical model
and measurement model in professional competence measurement.
If the demand for independent test items or test questions is maintained on the
side of the measurement model, this would mean that only a few professional
competences could be tested. It is then, of course, up to the respective domain
experts to ascertain whether this residual proportion of professional competences
to be measured is sufficient for an appropriate validity of content. In many cases, the
answer must certainly be ‘no’.
If, on the other hand, a non-measurable specificity of professional competence
were derived from theoretical content requirements, this would also have
far-reaching consequences. It would remain confined to expert assessments relating,
for example, to observations of work or the evaluation of work products. In
particular, the further formal processing of these assessments is then not assured.
For example, individual indicators of the expert assessment could not be meaning-
fully linked to an overall value. In particular, the test subjects could no longer be
compared with each other without a suitable linking rule for the indicators. Only the
ranking of the characteristics of individual indicators could be compared with each

other—and also only under the assumption that a reliable expert assessment is
available.
An insufficient formalisation of the competence measurement would also contra-
dict the objectivity of implementation and test fairness. There would be hardly any
possibility to regard the expert assessment thus obtained as independent of the
situational conditions. For example, both a work sample and a work product
would be inseparably linked to the respective practical working conditions. Such
confusion with the examination situation could be mitigated by standardising the test
situation. But even in this case, the test fairness can be violated if the conditions in
the training company and those in the test situation resemble each other to varying
degrees.

6.3.6 The Search for a Bridge Between Theory and Measurement

The aim is therefore to identify a suitable fit that can build a bridge between the
theoretical requirements of competency assessment and the desired requirements of
a measurement model.
Basically, two ways of approach can be distinguished:
1. In psychometric modelling, item dependencies can be taken into account by
modelling so-called testlets. Monseur, Baye, Lafontaine and Quittre (2011)
describe three ways to model such item dependencies: as partial credits, as
fixed effects or as random effects.
2. The other alternative would be to place the theoretical expert model on a better
empirical basis. This can be promoted in particular by the following measures:
(a) freeing product development largely from situational influences,
(b) linking the rating criteria to a theoretical model of professional competence,
(c) ensuring interrater reliability through appropriate measures,
(d) mapping the theoretically derived rating criteria using a suitable psychometric
model.

6.3.7 The Example COMET

The empirical procedure within the framework of the COMET project will be
outlined here as an example for the above-mentioned item 2. One focus of the
following presentation will be on the steps of the empirical approach, which are
more closely related to psychometric modelling.

6.3.8 Empirical Procedure

Five evaluation steps were carried out in the COMET measurement procedure:
1. Determination of the interrater reliability of task solutions.
2. Sorting of task solutions.
3. Verification of the homogeneity of the competence criteria (scale analysis).
4. Identification of typical competence profiles.
5. Longitudinal analysis of competence development.
The first step was to check whether the interrater reliability of the individual items
is sufficient for further data processing.
In a second step, the data matrix was restructured. The assignment of the
individual task solutions to the test persons and to the measuring points was resolved
so that all task solutions could be analysed together. This procedure seems justified
inasmuch as it may be possible for two different tasks to be performed by one test
person at different competence levels.
In the third step, the five items that can be assigned to a competence criterion are
then checked for homogeneity in line with the Rasch model, i.e. whether these
ratings actually measure the same latent dimension.
In the fourth step, the person parameters determined with the Rasch model, and
which correspond to the competence criteria, are calculated in a joint analysis. The
aim is to identify typical person profiles that correspond to the assumptions of the
COMET measurement model.
In the fifth step, the data record was returned to its original order. This means that
the four task solutions of the first two measuring points are again assigned to a test
person in order to be able to analyse longitudinal developments.
The decisive evaluation step in the empirical approach of the COMET project is
the rating of the open solutions by specialists—usually vocational schoolteachers
and trainers. The open solutions are evaluated by means of questions, five of which
form one of the eight criteria of the COMET model (→ 4).
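How the rating items relate to the eight criteria can be sketched as follows (an illustrative sketch: the criterion order in the list and the aggregation by simple means are our assumptions, not a description of the actual COMET evaluation software):

```python
# The eight COMET criteria; each is scored by five rating items on a 0-3 scale.
CRITERIA = ["clarity", "functionality", "sustainability", "economic efficiency",
            "business process orientation", "social compatibility",
            "environmental compatibility", "creativity"]

def criterion_scores(item_ratings):
    """Collapse 40 rating items (five per criterion, assumed to be given in
    criterion order) into one mean score per criterion."""
    assert len(item_ratings) == 5 * len(CRITERIA)
    scores = {}
    for k, name in enumerate(CRITERIA):
        block = item_ratings[5 * k: 5 * (k + 1)]
        scores[name] = sum(block) / 5
    return scores
```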
Before the actual empirical results of the COMET project are presented, two key
aspects of test quality are to be discussed in more detail: the reliability of the rating
procedure and the validity of the derived rating dimensions in terms of content.

6.3.9 On the Reliability of the COMET Rating Procedure

In order to determine the reliability, two raters evaluate the same task solution.
Securing interrater reliability is an absolute prerequisite for further data calculation
pursuant to a measurement model. Without sufficient measurement accuracy (reli-
ability), the data thus obtained would vary randomly and would no longer be
meaningful. To ensure interrater reliability, the following measures, systematically

applied during the COMET project (see COMET Volumes I–IV) can be
recommended:
• The rating schemes should be formulated as precisely as possible.
• Actual case studies should be used for training.
• Rater training should be accompanied by a continuous review of interrater
reliability until a sufficient target criterion is reached.
• The rater training should work with real task solutions.
• The composition of the rater teams should be systematically varied within the
training and also in the subsequent investigation.
• A third rating for systematically selected solutions can further increase measure-
ment accuracy.

6.3.10 On the Content Validity of the COMET Rating Dimensions

The transformation of the open solutions into the criteria of the COMET measure-
ment procedure is the most important link in the measurement chain; therefore, it
must be critically discussed at this point whether the validity of the content can be
guaranteed here. Does it really measure what needs to be measured? The most
important measure to ensure the validity of the content is to have the rating carried
out by experts. These experts ensure that the abstraction of domain-specific solution
knowledge is incorporated into the target criteria. The direct involvement of domain
experts inherently and directly supports the validity of the content in the COMET
measurement procedure.
This means in particular that the rating of open task solutions must also be carried
out permanently by domain experts. As long as this is the case, the validity of the
rating procedure in terms of content is also assured in the long term.
The validity of the open tasks in terms of content and the universal applicability
of the eight criteria to the different domains of VET must be discussed elsewhere
(see COMET Volumes I-IV).

6.3.11 Population

The basis for the following calculations is a data set obtained in Hesse with
electronics technicians in industrial engineering (industry) and electronics techni-
cians for energy and building technology (crafts), which was collected at two
measuring points in the second and third year of training; 178 pupils at the first
and 212 pupils at the second measurement point completed the open tasks. In total,
1560 task solutions have been developed and rated accordingly (cf. COMET Vol-
umes I-IV).

6.3.12 Step 1: Determination of Interrater Reliability

The ‘Finn coefficient’ was used as a reliability measure (see Asendorpf & Wallbott,
1979). With a value range of 0.0 to 1.0, values of 0.5 to 0.7 can be interpreted as
satisfactory and values of 0.7 and above as good reliabilities. The test for this
sample showed coefficients between 0.71 and 0.86. This means that the reliability
can be classified as consistently good (→ 6.2).
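One common formulation of the Finn coefficient for two raters can be sketched in Python (our reading of the coefficient; the exact variant used in the COMET analyses may differ): the observed disagreement variance is compared with the variance expected if ratings were drawn uniformly at random from the k scale categories.

```python
def finn_coefficient(ratings_a, ratings_b, k=4):
    """Finn's r for two raters on a discrete scale with k categories
    (here 0-3, so k = 4): 1 minus the ratio of the observed within-object
    disagreement variance to the variance expected by chance."""
    assert len(ratings_a) == len(ratings_b)
    chance_var = (k ** 2 - 1) / 12.0   # variance of a uniform random rating
    ms_within = sum((a - b) ** 2 / 2.0
                    for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
    return 1.0 - ms_within / chance_var
```

Perfect agreement yields 1.0; the coefficient falls as the raters' judgements diverge relative to chance-level scatter.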

6.3.13 Step 2: Sorting of Task Solutions

After reviewing the interrater reliability, the data matrix was restructured. The
assignment of the individual task solutions both to the test subjects and to the events
was completely resolved, so that all task solutions could be analysed together in a
vertical data matrix. The analysis units in the procedure described below are no
longer the test subjects, but the tasks.

6.3.14 Step 3: Verification of the Homogeneity


of the Competence Criteria

The raw values from the ratings of the evaluation items were then processed further,
as already described by Erdwien and Martens (2009) in COMET Vol. II. Each
subject was judged by two raters—in a few cases also by three raters. Although
the interrater reliability proved satisfactory (see Step 1), there were, of course,
divergent assessments. Therefore, the rater judgments were averaged before further
processing of the data. Mean values of 0 to 3 were calculated according to the rating
scale and rounded as follows for the following analyses (see Table 6.2).
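The averaging and rounding of the two rater judgements can be transcribed directly from Table 6.2 (illustrative code; the function name is our own):

```python
def combine_and_round(rating_1, rating_2):
    """Average two raters' judgements on the 0-3 scale and round following
    Table 6.2: 0-0.499 -> 0, 0.500-1.499 -> 1, 1.500-2.499 -> 2, 2.500-3 -> 3."""
    mean = (rating_1 + rating_2) / 2.0
    if mean < 0.5:
        return 0
    if mean < 1.5:
        return 1
    if mean < 2.5:
        return 2
    return 3
```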
Table 6.2 Rounding of the rating assessments

Rounding range   Rounded value
0–0.499          0
0.500–1.499      1
1.500–2.499      2
2.500–3          3

The following analyses first examined whether the evaluation items for a criterion
are really homogeneous, i.e. whether they form a scale that measures the same
underlying dimension. In particular, it was checked whether the evaluation items of
a criterion are so similar that they can be summarised to a scale value. The eight
criteria (clarity, functionality, sustainability, economic efficiency, business process
orientation, social compatibility, environmental compatibility and creativity) were
each calculated using the ordinary Rasch model. On this basis, the reliability
(according to Rasch) and the Q indices (cf. Rost & von Davier, 1994) were
determined as item fit measures. As the strictest model test, this solution was
additionally contrasted with the mixed Rasch model for two subpopulations.
Information criteria were used to directly compare whether the mixed Rasch model
with two subpopulations was better suited to the data. The 'Consistent Akaike's
Information Criterion' (CAIC) was primarily used for this (see Bozdogan, 1987;
Bozdogan & Ramirez, 1988). The fit was better for the simple Rasch model than for
the mixed Rasch model. This ensures that the respective criterion can be
represented by a latent parameter.
For individual criteria, however, questions (items) had to be excluded from the
further analysis due to insufficient homogeneity, as measured by the Q indices (see
Table 6.3). After excluding the respective items, the resulting solutions of the
simple Rasch model were inconspicuous and matched the data well. An exception
is the criterion 'environmental compatibility'—here an item was removed because of
an 'overfit'; i.e., the item characteristic curve was too steep for the simple Rasch
model. After eliminating this item, the 1-class solution for environmental compati-
bility was also inconspicuous, even though the resulting reliability was somewhat
lower. An overview of the results can be found in the following table.

Table 6.3 Overview of the scale analyses

                                           Reliability            Items        Number of categories
                                           (according to Rasch)   eliminated   in the LCA
Clarity/presentation                       0.88                   0            9
Functionality                              0.85                   1            9
Sustainability                             0.84                   0            9
Efficiency                                 0.80                   0            7
Orientation on business and work process   0.87                   1            9
Social compatibility                       0.77                   1            9
Environmental compatibility                0.73                   1 (a)        8
Creativity                                 0.86                   1            9

(a) A rating item was removed here because of an 'overfit'. The corresponding
explanations can be found in the text.
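The CAIC-based comparison between the simple and the mixed Rasch model can be sketched with the standard formula CAIC = -2 ln L + p (ln n + 1) (Bozdogan, 1987); the code is our own illustration, and any concrete figures fed into it are invented:

```python
import math

def caic(log_likelihood, n_params, n_obs):
    """Consistent Akaike Information Criterion:
    CAIC = -2 ln L + p (ln n + 1); lower values indicate the better model."""
    return -2.0 * log_likelihood + n_params * (math.log(n_obs) + 1.0)

def better_model(models, n_obs):
    """Choose the model with the smallest CAIC from a list of
    (name, log-likelihood, number of parameters) tuples."""
    return min(models, key=lambda m: caic(m[1], m[2], n_obs))[0]
```

Because the penalty grows by ln n + 1 per additional parameter, the mixed model's extra parameters must buy a substantial likelihood gain before it is preferred over the simple Rasch model.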
Thus, for all eight criteria, a satisfactory solution can be identified using the
simple Rasch model. This allows each task solution with exactly one value per
criterion to be included in the further analyses. The corresponding items thus each
form a scale for recording one of the eight criteria of vocational competence.
It should be noted that five items of a rating each relate to the same task solution.
Even if this fact has been taken into account in detail in the rating training courses, it
cannot be completely ruled out that the homogeneity of the items will be
overestimated by this procedure. This could lead to an overestimation of the actual
correlations, especially in the further calculation of correlations between the scales.
At the same time, the average of the rating assessments leads to an underestimation
of homogeneity. The exact effect of the rating procedures on homogeneity would
have to be checked with further data simulations. These are the additional reasons

why the following statistical methods concentrate on mixed distribution models and
thus above all on qualitative differences of the profiles.

6.3.15 Step 4: Identification of Typical Competence Profiles

The latent class analysis method was used to further identify typical competence
patterns in vocational education and training. The analytical procedure is based on
that of Martens and Rost (1998). The results of the above-mentioned ordinal Rasch
analyses of the competence criteria are included in the latent class analysis in the
form of rounded theta parameters. Here, the theta parameter represents the task-
specific ability to fulfil the respective criterion well.
This two-step procedure has the particular advantage that the data basis for the
subsequent latent class analysis becomes more robust against possible distortions of
individual evaluation items.
Latent-class analysis is a method that identifies typical profiles of task solutions.
For this purpose, the entirety of the task solutions is broken down into precisely
defined subgroups. Each subgroup (class) defined in this way has exactly one
characteristic profile for the eight criteria. The challenge of this procedure is in
particular to determine the correct number of subgroups into which the whole is
broken down. With each additional subgroup, the fit of the measurement model to
the data must become more accurate. On the other hand, there is the fundamental
requirement that a measurement model should be as ‘simple’ as possible, i.e. as few
subgroups as possible should be identified. The two demands for 'data fit' (= more
subgroups) and 'simplicity' (= fewer subgroups) must be carefully weighed against
each other. Information criteria such as CAIC (cf. Bozdogan, 1987) are often used
for these weighing processes. However, the penalty function implemented in CAIC
for additional model parameters is arbitrary and differs from other similar informa-
tion criteria. To determine the correct number of subgroups, two further criteria were
therefore used: the use of bootstrapping (see, for example, von Davier, 1997) and the
consideration of the average allocation probabilities to the subgroups. Bootstrapping
creates a customised test distribution using synthetic samples. The actual sample
should not differ significantly from artificially generated samples. The medium
assignment probabilities indicate how well the criteria profiles of the task solutions
can be assigned to the subgroups. The mean allocation probabilities to the subgroups
should not fall below 0.8. Low assignment probabilities would mean that the profiles
cannot be uniquely assigned to the subgroups. The model with the most subgroups,
which simultaneously has a non-significant bootstrap result (P(X > Z) = 0.234 for
the test statistic Pearson X²) and provides average allocation probabilities to the subgroups
between 0.81 and 0.95, has 10 groups. Although the information criteria indicate that
models with a smaller number of groups could be better suited to the data—CAIC,
for example, refers to a solution with four subgroups—this of course only applies if
additional model parameters are disproportionately penalised.
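The core of the latent class analysis described above can be sketched for the simplest case of binary items (an illustrative sketch only: the COMET analyses used ordinal criterion scores and dedicated software, and the choice of an EM algorithm with a fixed number of iterations is ours):

```python
import numpy as np

def lca_em(X, n_classes, n_iter=300, seed=0):
    """Latent class analysis for binary items, fitted with the EM algorithm.
    X is an (n_solutions x n_items) 0/1 matrix. Returns the class sizes pi,
    the conditional item probabilities p, and the posterior assignment
    probabilities of every solution to every class."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)           # latent class sizes
    p = rng.uniform(0.25, 0.75, size=(n_classes, m))   # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior P(class | response pattern) for every solution
        like = np.stack([(p[c] ** X * (1 - p[c]) ** (1 - X)).prod(axis=1)
                         for c in range(n_classes)], axis=1)
        post = like * pi
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate class sizes and item probabilities
        pi = post.mean(axis=0)
        p = (post.T @ X) / post.sum(axis=0)[:, None]
    return pi, p, post

def mean_assignment_probability(post):
    """Average of each solution's highest class membership probability --
    the quantity that, as stated in the text, should not fall below 0.8."""
    return post.max(axis=1).mean()
```

The mean of each solution's highest posterior probability corresponds to the average assignment probabilities reported in Table 6.4; values near 1 mean that the criteria profiles can be assigned to the subgroups almost unambiguously.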

Table 6.4 Characteristic values of the identified subgroups


Subgroup Size of subgroup (in percent) Average assignment probability to the subgroups
1 19 0.844
2 15.8 0.888
3 13.9 0.816
4 12.5 0.811
5 9.3 0.899
6 8.0 0.949
7 6.7 0.808
8 6.1 0.830
9 5.2 0.856
10 3.4 0.920

Fig. 6.4 Competency patterns of subgroups 1–10: General overview

The number of subgroups identified by latent class analysis, weighing ‘fit’ and
‘simplicity’, is 10 (Table 6.4). Each of the 10 groups has a typical competence profile
(Fig. 6.4). The measurement model of latent class analysis assumes that all solutions
assigned to a specific subgroup have exactly the same (latent) competence profile.
Only the level of assignment probabilities varies between the task solutions of a

subgroup. When interpreting the lower graphics, it must also be taken into account
that these are typical competence patterns for the solution of tasks and not the
competence patterns of persons. Our sample therefore includes people whose solu-
tions to tasks are assigned to different subgroups and thus different competence
profiles. Such intraindividual differences in competence can be caused by personal
characteristics, such as learning effects or fatigue, or task characteristics, such as
systematic task differences. In particular, systematic differences in tasks would have
to be very carefully taken into account when interpreting the results.
In order to present this result more clearly, the following sub-graphics of this
overall graphic are presented with a systematic selection of the subgroups.
Figure 6.5 shows the essentially parallel competence patterns of subgroups 1, 2,
3, 5, 6 and 10; 69.4% of the tasks were solved with one of these competence
profiles. These subgroups differ almost exclusively in their profile heights.
Subgroup 6, with a share of 8% of all task solutions, has the lowest competence
profile; i.e., according to the ratings, the corresponding tasks were processed with
particularly low competence. Conversely, the particularly good task solutions were
assigned to subgroup 10 with a share of 3.4%. When interpreting the graphics, it
must be taken into account that these are rounded theta parameters, so the absolute
height differences cannot be interpreted directly.

Fig. 6.5 Competency patterns of subgroups 1, 2, 3, 5, 6 and 10



Fig. 6.6 Competency patterns of subgroups 4 and 7

Figure 6.6 again shows two almost parallel competence profile curves. These two
parallel subgroups 4 and 7 have a higher level of descriptiveness/presentation and
functionality compared with the competence profiles considered in Fig. 6.5. Such a
competence pattern has been theoretically expected and corresponds to the level of
‘functional competence’. The corresponding task solutions were therefore carried
out with a disproportionately pronounced functional competence.
Figure 6.7 shows the two remaining subgroups 8 and 9 together with the
subgroup 1 already shown above; like subgroup 1, the two subgroups 8 and 9 also
show an average competence profile. However, subgroup 9 shows a slight drop in
the criteria clarity/presentation and functionality compared to subgroup 1, while
subgroup 8 does not appear to have a very different profile from subgroup 1. The
validity of the contents of these two profiles must be checked by further analyses, for
example by distributing the subgroups to the task sets.
The competence patterns of most subgroups therefore run in parallel. In contrast
to this, subgroups 4 and 7 show a competence profile that corresponds to the level
‘functional competence’; a drop can be identified in the levels ‘procedural compe-
tence’ and ‘holistic shaping competence’. For subgroups 8 and 9, it is not immedi-
ately clear which theoretically expected competence profiles these patterns could
correspond to.

Fig. 6.7 Competence patterns of subgroups 8, 9 and 1

6.3.16 Distribution of Tasks Among the Competence Models

In the following, it was examined whether the four different tasks were also equally
distributed among the identified competence patterns. First of all, the different levels
of difficulty of the tasks are striking (Fig. 6.8). This can be seen directly in the
proportionate ratio of subgroups 6 (the lowest competence profile) and subgroup
5 (the second highest competence profile). Task 2 (skylight control) is relatively
speaking the easiest, followed by task 3 (drying space) and task 4 (pebble treatment)
and finally by the most difficult task 1 (signals).
It can be noted that there do not seem to be any task-specific, qualitative patterns.
In particular, the subgroups 4, 7, 8 and 9, which represent qualitative deviations from
the ‘standard pattern’, appear to be more or less equally distributed among the tasks.
Only for subgroup 4, there is a slightly reduced percentage for task 3 (drying space).
The distribution of competence patterns among the tasks can therefore serve as
proof that the identified patterns or subgroups are not specific competence profiles of
individual tasks.
This means that the solutions of the four test tasks are very similar in terms of the
proficiency criteria in the different subgroups. The implementation of the compe-
tence model presented here has proved its worth insofar as there is no task that shows
a completely independent solution in the form of an ‘own’ subpopulation.

Fig. 6.8 Distribution of tasks among the competence models

6.3.17 Step 5: Longitudinal Analysis of Competence Measurement

In Step 5, the task solutions were again assigned to the individual persons and lined
up in time. For an initial evaluation, the frequencies of the subgroups identified in
Step 4 were examined. The first two tasks were solved at the first measurement time and
the other two tasks (w3 and w4) at the second measurement time. For the interpre-
tation, it must be taken into account that each solution position is a mixture of tasks,
since the assignment of the tasks to the persons has been systematically rotated. Since only
test persons were considered who solved tasks at both times, the data basis for the
following graphics consists of 151 pupils. In particular, the first solved task
(w1) should be compared with the first solved task one year later (w3).
This shows a significant increase, especially for the largest subgroup 1. In
addition, the two subgroups with the highest competence profiles (subgroups
10 and 5) show an almost equal number at least in comparison with the measurement
points w1 and w3. This can be understood as a first indication of the validity of the
tasks in terms of content and curricula: At least for some of the tasks, a higher
competence profile can be shown.
At the same time, subgroup 6, which has the lowest competence profile, shows
that a fatigue effect occurs within a measurement for part of the sample (increases in
frequencies from both w1 to w2 and from w3 to w4). Furthermore, a kind of
reactance effect can be observed, which manifests itself as an increase of this
subgroup between the two measurement points, especially from w2 to w4. The

proportion of these task solutions increases, especially in the last task. Probably
some of the pupils simply no longer wanted to work on their tasks in a motivated
way. This interpretation is also supported by the fact that subgroup 2 with the second
lowest competence profile has a relatively uniform share over time (see Fig. 6.9).
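The regrouping of task solutions by person and measurement point described at the beginning of this step can be sketched as follows (the tuple layout of the data matrix is our own simplification; `subgroup` stands for the latent class a solution was assigned to):

```python
def to_longitudinal(solutions):
    """Regroup rated task solutions by person and order them as w1..w4
    (two tasks per measurement point). `solutions` is a list of
    (person_id, measurement_point, task_order, subgroup) tuples, with
    measurement_point in {1, 2} and task_order in {1, 2}."""
    persons = {}
    for pid, point, order, subgroup in solutions:
        persons.setdefault(pid, {})[f"w{2 * (point - 1) + order}"] = subgroup
    # keep only persons tested at both measurement points (w1..w4 complete)
    return {pid: seq for pid, seq in persons.items()
            if all(f"w{i}" in seq for i in range(1, 5))}
```

This mirrors the restriction above to the 151 pupils who solved tasks at both measurement points, and makes comparisons such as w1 versus w3 straightforward.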

6.3.18 Discussion of the Results

Overall, 89% of the task solutions correspond to the assumptions made in the
COMET model for vocational education and training (subgroups 1, 2, 3, 4, 5, 6,
7, 10). Only subgroups 8 and 9 show theoretically unexpected profile progressions.
Subgroups 4 and 7 in particular confirm that there are qualitatively different task
solutions which are characterised by a higher level of functional competence and
correspondingly lower levels of conceptual/procedural competence and holistic
shaping competence.
The question remains as to why no significant qualitative profile differences could
be identified with regard to the distinction made for theoretical reasons between
conceptual/process-related competence and holistic shaping competence. In the
sense of the theoretical competence model, at least one subgroup should have been
found in which the conceptual/procedural competence (sustainability, economic
efficiency, business process orientation) is higher than the holistic shaping compe-
tence (social compatibility, environmental compatibility, creativity).
Various explanations are conceivable: as already indicated above, the fact that the
ratings referred to one and the same task solution could tend to lead to certain
characteristics of the task solution ‘outshining’ other characteristics and to a uniform
assessment of the various criteria (halo effect). Furthermore, it could be explained by
the fact that the subjects studied are at the beginning of their careers. At least a
general halo effect can be excluded, because then subgroups 4 and 7 would not have
been identified either.

6.3.19 Need for Research

Further research is therefore required, supplementing the available data with a sample of test persons who are already more advanced in their professional lives and thus have a higher level of professional expertise. With such a sample, further qualitative level differences might then be found.
The longitudinal analyses provide initial indications of the content and curricular validity of the VET competence model presented here. However, it turns out that the designed tasks harbour a certain potential for displeasure. This potential can be identified, as shown, with mixed distribution analyses; but when the tasks are used in a non-scientific environment, this potential ‘displeasure’ must of course be taken into
account. However, in order to carry out a more detailed analysis of individual development patterns over time, the sample size would have to be increased and the number of subgroups to be examined reduced.

Fig. 6.9 Development of the competence patterns of the individual subgroups (mc1 to mc10) over time

6.3.20 Prospect

The measurement procedure within the framework of the COMET project demonstrates a possible approach for resolving at least part of the contradiction between measurement model and competence model in vocational education and training. The combination of open tasks and subsequent ratings shown here could of course also be implemented under other framework conditions, for example by computer-supported holistic simulation of tasks, whose solutions could then be assessed similarly by raters. Various options could be found for calculating the corresponding rating assessments, based on the respective theoretical model (for an overview, see Martens & Rost, 2009). Especially where the theoretical competency model predicts qualitative profile differences, measurement models should be used that can map them accordingly, such as mixed distribution models like the latent class analysis used here or the mixed Rasch model (Rost, 1999).
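The principle behind such mixed distribution models can be illustrated with a minimal latent class analysis. The following sketch (pure Python, synthetic yes/no response data; all names are illustrative and unrelated to the COMET software) fits a two-class mixture of independent Bernoulli items with the EM algorithm and recovers two qualitatively different response profiles:

```python
# Minimal two-class latent class analysis (mixture of independent Bernoulli
# items) fitted with the EM algorithm on synthetic response patterns.
# Illustrative sketch only, not the model estimated in the COMET study.

def bernoulli_likelihood(row, probs):
    """Likelihood of one response pattern under one class profile."""
    lik = 1.0
    for x, p in zip(row, probs):
        lik *= p if x == 1 else (1.0 - p)
    return lik

def fit_two_class_lca(data, p1, p2, pi=0.5, n_iter=100, eps=1e-6):
    """EM estimation of the mixing weight and the two class-specific profiles."""
    for _ in range(n_iter):
        # E-step: responsibility of class 1 for each respondent
        resp = []
        for row in data:
            a = pi * bernoulli_likelihood(row, p1)
            b = (1.0 - pi) * bernoulli_likelihood(row, p2)
            resp.append(a / (a + b))
        # M-step: re-estimate mixing weight and item probabilities
        n1 = sum(resp)
        pi = n1 / len(data)
        for j in range(len(p1)):
            s1 = sum(r * row[j] for r, row in zip(resp, data))
            s2 = sum((1.0 - r) * row[j] for r, row in zip(resp, data))
            p1[j] = min(max(s1 / n1, eps), 1.0 - eps)  # clip away from 0/1
            p2[j] = min(max(s2 / (len(data) - n1), eps), 1.0 - eps)
    return pi, p1, p2

# Two clearly separated response profiles: EM recovers them as latent classes.
data = [[1, 1, 0, 0]] * 3 + [[0, 0, 1, 1]] * 3
pi_hat, prof1, prof2 = fit_two_class_lca(
    data, [0.6, 0.6, 0.4, 0.4], [0.4, 0.4, 0.6, 0.6])
```

With well-separated patterns, the estimated class profiles converge to the generating profiles; real latent class software additionally handles model selection and ordinal ratings.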
It should not be overlooked at this point that the ‘fit’ between measurement model and theoretical model demanded in the introduction means that other criteria, such as a particularly efficient measurement of professional competence, cannot be given priority. Adequate consideration of both theoretical and measurement-methodological requirements means that methods that can solve this fitting problem, such as the COMET measurement method, are relatively time-consuming to implement.
It must also be emphasised that measurement methods which meet the formulated requirements are not suitable for all practical applications. For example, the use of mixed distribution models prevents the profile levels from being compared directly with each other. This means that no simple application for selection processes is possible. This is also directly related to the fact that it is not possible to identify common parameters for an entire population, which makes use in large-scale studies very difficult.

6.4 Confirmatory Factor Analysis


6.4.1 The I-D Model (→ 4, Fig. 4.5)

The identity and commitment model of vocational education and training is evaluated on the basis of two samples (A = 1124, B = 3014) using the methods of an exploratory and a confirmatory factor analysis. In this model, an occupational and an organisational identity can be distinguished from each other. However, it is also assumed that the two are related to each other: in both cases, it is a matter of an identity that is shaped by vocational training and work. According to this model, the development of occupational identity and of identification with the profession in the course of vocational training and professional work goes hand in hand with the development of occupational and organisational commitment. Following Carlo Jäger, work morale is defined as the unquestioned willingness to carry out the instructions of superiors (Jäger, 1989).

6.4.2 Original Scales

Vocational Identity
NRW Saxony
BE1 SkBE1 I like to tell others what profession I have/am learning.
BE2 SkBE2 I ‘fit’ my profession.
BE3r SkBE3r I am not particularly interested in my profession. (reverse-coded)
BE4 SkBE4 I am proud of my profession.
BE5 SkBE5 I would like to continue working in my profession in the future.
BE6 SkBE6 The job is like a bit of ‘home’ for me.
Cronbach’s Alpha NRW: 0.843 (n = 1124)
Cronbach’s Alpha Saxony: 0.871 (n = 3014).

In both data sets, the alpha would increase if the reverse-coded item (Sk)BE3r were removed from the scale: in NRW, the alpha increases to 0.859 and in Saxony to 0.889. The corrected item-scale correlation is 0.425 in NRW and 0.447 in Saxony. It remains open whether the reasons for this effect lie in the reverse coding, with which the trainees may deal differently than with items coded in the usual direction, or whether interest or lack of interest alone does not yet say anything about identification with the profession.
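The two statistics used in this scale analysis, Cronbach's alpha and the alpha obtained after deleting an item, can be sketched in a few lines. The data below are an illustrative toy example (not the NRW or Saxony data) in which a miscoded reverse item depresses alpha, mirroring the effect of (Sk)BE3r:

```python
# Sketch of Cronbach's alpha and 'alpha if item deleted' on toy data.

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per item, equal length (one score per person)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1.0 - sum(variance(it) for it in items) / variance(totals))

def alpha_if_deleted(items, idx):
    """Alpha of the scale after removing the item with index idx."""
    return cronbach_alpha([it for i, it in enumerate(items) if i != idx])

# Two parallel items plus one item that was not recoded and runs in the
# opposite direction: alpha collapses, and deleting the item restores it.
items = [[1, 2, 3, 4], [2, 3, 4, 5], [4, 3, 2, 1]]
a_full = cronbach_alpha(items)        # heavily depressed by the third item
a_del = alpha_if_deleted(items, 2)    # recovers once the item is removed
```

The comparison of `a_full` and `a_del` reproduces, in exaggerated form, the pattern discussed above: removing a badly fitting reverse-coded item raises alpha.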
Vocational Commitment
NRW Saxony
ID1 SkID1 I am interested in how my work contributes to the company as a whole.
ID2 SkID2 For me, my job means delivering quality.
ID3 SkID3 I know what the work I do has to do with my job.
ID4 SkID4 Sometimes I think about how my work can be changed so that it can be done
better or at a higher quality.
ID5 SkID5 I would like to have a say in the contents of my work.
ID7 SkID7 I am absorbed in my work.
Cronbach’s Alpha NRW: 0.767 (Note: Only data on 627 trainees for item ID7 were available.)
Cronbach’s Alpha Saxony: 0.820 (n ¼ 2985)

Organisational Identity
NRW Saxony
OC1 SkOC1 The company is like a bit of ‘home’ for me.
OC2 SkOC2 I would like to remain in my company in the future—even if I have the
opportunity to move elsewhere.
OC3 SkOC3 I like to tell others about my company.
OC4r SkOC4r I don’t feel very connected to my company.
OC5 SkOC5 I ‘fit’ my company.
OC6 SkOC6 The future of my business is close to my heart.
Cronbach’s Alpha NRW: 0.869 (n ¼ 1121)
Cronbach’s Alpha Saxony: 0.899 (n ¼ 3030)

Organisational Commitment
NRW Saxony
BetID1 SkBetID1 I like to take responsibility in the company.
BetID2 SkBetID2 I want my work to contribute to operational success.
BetID3 SkBetID3 I am interested in the company suggestion scheme.
BetID4 SkBetID4 The work in my company is so interesting that I often forget time.
BetID5 SkBetID5 I try to deliver quality for my company.
BetID6 SkBetID6 Being part of the company is more important to me than working in my
profession.
Cronbach’s Alpha NRW: 0.702 (n ¼ 1077)
Cronbach’s Alpha Saxony: 0.704 (n ¼ 2990)

The item (Sk)BetID6 fits rather badly with the rest of the scale. However, this is to be expected given the content of the item: here, two concepts are compared with each other. What is asked is the value attached to belonging to the company, not the sense of belonging itself. That makes this item difficult to understand. The corrected item-scale correlation is only 0.198 for NRW and 0.201 for Saxony. When this item is excluded from the scale formation, the alpha increases to 0.732 for NRW and to 0.737 for Saxony. The alpha therefore remains at a rather low level even when the conspicuous item is excluded.
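The corrected item-scale correlation reported here is the Pearson correlation of an item with the sum of the remaining items of the scale (the item itself is removed from the total). A minimal sketch with toy data, not the study data:

```python
# Sketch of the corrected item-total (item-scale) correlation.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def corrected_item_total(items, idx):
    """Correlate item idx with the total score of all OTHER items."""
    rest = [sum(vals) for vals in
            zip(*(it for i, it in enumerate(items) if i != idx))]
    return pearson_r(items[idx], rest)

# Toy data: the third item tracks the other two only loosely.
items = [[1, 2, 3, 4], [2, 3, 4, 5], [1, 1, 4, 4]]
r_item3 = corrected_item_total(items, 2)
```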
Work Morale
NRW Saxony
AM1 SkAM1 I am motivated, no matter what activities I get assigned.
AM2 SkAM3 I am always on time, even if the work does not require it.
AM3 SkAM2 I am reliable, no matter what activities I get assigned.
Cronbach’s Alpha NRW: 0.592 (n ¼ 1145)
Cronbach’s Alpha Saxony: 0.687 (n ¼ 3037)

Overall, the work morale scale should be revised, as the alpha is at a very low level. The shortness of the scale (only three items) has to be taken into account, which additionally depresses the alpha level, but the low corrected item-scale correlations of, e.g., AM1 (NRW) 0.401 and AM2 (NRW) 0.331(!) also indicate an ambiguity of the scale in terms of content.

6.4.3 Confirmatory Factor Analysis for the Original Model

By means of confirmatory factor analysis in Mplus 7, the model briefly described above is checked with regard to its fit to the empirically collected data. This is again done using the data sets from NRW 2013 and Saxony 2014.
NRW (Original Model)
The model was specified in such a way that the latent factors are formed on the basis of the original scales. Regressions of the commitment factors and work morale on the identity factors were added, and correlations between the identity factors as well as between the commitment factors and work morale were allowed. The model comes to the following result (cf. Fig. 6.10).
The factor loads show that these are in an acceptable range overall. Only some
items have low loads below 0.5 (be3r, am2, betid6). The residual variance of these
items is particularly high. The main problem is the item betid6, which, with a charge
of 0.214, actually does not fit the scale. The correlation between the identity factors
is quite high at r ¼ 0.616, in line with expectations. The regressions of the
commitment factors confirm the model. At β ¼ 0.565, the regression of vocational
commitment to vocational identity is higher than the regressions of work moral and
occupational commitment to this factor. Likewise, the regression of the
organisational commitment to the organisational identity with β ¼ 0.64 is higher
than the regressions of the occupational commitment and work morale to this form of
identity. The problem with the model seems to be that the work morale shows very
different regressions in the forms of identity (β ¼ 0.423 for professional identity and
β ¼ 0.194 for organisational identity). Furthermore, the very high correlation
between the two forms of engagement seems problematic. These factors are almost
identical to a correlation of r ¼ 0.938(!). These problems are also reflected in the
statistical evaluation of the model.

MODEL FIT Information

Number of Free Parameters: 91
Loglikelihood: H0 Value 36028.260; H1 Value 35125.421
Information Criteria: Akaike (AIC) 72238.519; Bayesian (BIC) 72696.809; Sample-Size Adjusted BIC 72407.766 (n* = (n + 2)/24)
Chi-square test of model fit: Value 1805.677; Degrees of freedom 314; P-value 0.0000
Chi-square test of model fit for the baseline model: Value 12319.722; Degrees of freedom 351; P-value 0.0000
RMSEA (root mean square error of approximation): Estimate 0.065; 90% C.I. 0.062–0.068; Probability RMSEA ≤ 0.05: 0.000
CFI/TLI: CFI 0.875; TLI 0.861
SRMR (standardised root mean square residual): Value 0.052

6.4.4 Explanations

Loglikelihood: H0 means that the specified model is valid; H1 means that an unrestricted model, in which all mean values, variances, etc. are independent of each other, is valid. The absolute figures of the loglikelihood are not easily interpretable or comparable. The statistical check is then carried out using the chi-square test.
AIC/BIC/adjusted BIC: Should be as small as possible, but serves only for
descriptive analysis.
Chi-square test: Tests the null hypothesis that the covariance matrix in the population is equal to the covariance matrix implied by the model. If the test becomes significant, this means that the model does not fit the data. This is the case in our analysis.
RMSEA: Should be less than 0.05.
CFI/TLI: Should be above 0.95, better still above 0.97.
Chi-square test for the baseline model: Tests the so-called ‘baseline model’ for its fit to the data. The baseline model assumes that no valid predictions (regressions) exist between any variables of the data set. The baseline model is also called the null model.
SRMR: Should be below 0.05.
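The three descriptive fit indices can be reproduced from the chi-square values reported above using the standard maximum likelihood formulas. In the following sketch, n = 1124 is an assumption for the NRW estimation sample:

```python
# Sketch, not Mplus output: RMSEA, CFI and TLI computed from the reported
# chi-square statistics with the standard formulas (n = 1124 assumed).
from math import sqrt

def fit_indices(chi2, df, chi2_base, df_base, n):
    d = max(chi2 - df, 0.0)                 # noncentrality of the target model
    d_base = max(chi2_base - df_base, 0.0)  # noncentrality of the baseline model
    rmsea = sqrt(d / (df * (n - 1)))
    cfi = 1.0 - d / max(d_base, d)          # guard against division issues
    tli = ((chi2_base / df_base) - (chi2 / df)) / ((chi2_base / df_base) - 1.0)
    return rmsea, cfi, tli

# NRW original model: chi2 = 1805.677 (df = 314), baseline 12319.722 (df = 351)
rmsea, cfi, tli = fit_indices(1805.677, 314, 12319.722, 351, 1124)
```

Up to rounding, this reproduces the reported values RMSEA 0.065, CFI 0.875 and TLI 0.861.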
The results of the model specified here are not good. However, the values are not far from those of a good model. We therefore want to check what the model looks like when the problematic item betid6 is excluded.

6.4.5 Modification

BetID6 is excluded from the analysis. All other items are retained. See Fig. 6.11 for
the result of the model.
Compared with the first model, which takes all items into account, the latent factor of organisational commitment becomes somewhat clearer. However, apart from minor changes in regression weights and correlation coefficients, there are no major changes.

Fig. 6.10 Factor Analysis NRW (original model)


Fig. 6.11 Factor analysis NRW (first modification)

The statistical indices improve only marginally. The model cannot really be optimised by deleting problematic items. A look at the CFA of the original model based on the data from Saxony should provide further insights.

MODEL FIT Information

Number of Free Parameters: 88
Loglikelihood: H0 Value 34592.199; H1 Value 33751.369
Information Criteria: Akaike (AIC) 69360.399; Bayesian (BIC) 69803.580; Sample-Size Adjusted BIC 69524.066 (n* = (n + 2)/24)
Chi-square test of model fit: Value 1681.661; Degrees of freedom 289; P-value 0.0000
Chi-square test of model fit for the baseline model: Value 12151.190; Degrees of freedom 325; P-value 0.0000
RMSEA (root mean square error of approximation): Estimate 0.065; 90% C.I. 0.062–0.068; Probability RMSEA ≤ 0.05: 0.000
CFI/TLI: CFI 0.882; TLI 0.868
SRMR (standardised root mean square residual): Value 0.050

6.4.6 Discussion

It remains open why model and data do not fit well together. The partially low loadings already described above provide initial information; these were already noticeable in the reliability analysis. It is therefore necessary to further sharpen the content-related fit of the items to the scales as well as the similarities between the items of a scale. Furthermore, one correlation in this model is at the limit of what is admissible: the two forms of commitment correlate at r = 0.938! This means that they measure almost the same thing. It is therefore important to work on the commitment scales. However, it remains to be seen at this stage whether a distinction between two forms of commitment should be made in terms of content.
Saxony (Original Model)
The original model was checked again on a second data set in order to verify the results and findings initially obtained. In the first calculation, all items were taken over, and no changes were made.
As in NRW, the factor loadings are largely within an acceptable range. Really critical are only the items SkBetID6 (0.182!) and, in part, SkBE3r (0.477). Overall, the loadings are somewhat higher than in NRW. Again, it is striking that the regression of organisational commitment on organisational identity has the greatest predictive power for organisational identity (β = 0.59), and the same applies to the predictive power of occupational commitment for occupational identity (β = 0.677). The correlation between the two forms of identity is r = 0.637, and the correlation between the two forms of commitment even exceeds 1 (r = 1.068). The original model was modified as a result. It is also striking that there are very high correlations between work morale and the two forms of commitment: r = 0.708 with vocational commitment and r = 0.818 with organisational commitment (Fig. 6.12).
Initially, Mplus tells us that the model, as already indicated above, cannot be retained in this form:

WARNING: The latent variable covariance matrix (PSI) is not positive definite. This could indicate a negative variance/residual variance for a latent variable, a correlation greater or equal to one between two latent variables, or a linear dependency among more than two latent variables. Check the TECH4 output for more information. Problem involving variable BTE.

The statistical key figures are as follows:

MODEL FIT Information

Number of Free Parameters: 91
Loglikelihood: H0 Value 96043.612; H1 Value 93298.023
Information Criteria: Akaike (AIC) 192269.224; Bayesian (BIC) 192817.218; Sample-Size Adjusted BIC 192528.075 (n* = (n + 2)/24)
Chi-square test of model fit: Value 5491.179; Degrees of freedom 314; P-value 0.0000
Chi-square test of model fit for the baseline model: Value 42619.050; Degrees of freedom 351; P-value 0.0000
RMSEA (root mean square error of approximation): Estimate 0.074; 90% C.I. 0.072–0.075; Probability RMSEA ≤ 0.05: 0.000
CFI/TLI: CFI 0.878; TLI 0.863
SRMR (standardised root mean square residual): Value 0.056

As in the check against the NRW data, the statistical parameters indicate that model and data do not match well. The correlation greater than one in this model invalidates it.
180 6 Psychometric Evaluation of the Competence and Measurement Model for. . .

Fig. 6.12 Factor analysis Saxony (original model)



6.4.7 Exploratory Factor Analysis

Since the originally assumed model does not fit the data of either data set well, we use an exploratory factor analysis (principal component analysis) to determine what model is suggested to us by the available data.
NRW
First EFA: All variables taken into account.
The analysis is carried out with the SPSS program, and all items are first included in the analysis. No assumptions are made. The number of factors included in the model is determined according to the Kaiser criterion (eigenvalue greater than 1). The solution is also rotated using the Varimax procedure: by continuous rotation, an attempt is made to assign each item as clearly as possible to only one factor, avoiding medium-high loadings where possible. The factors, however, remain independent of each other.

6.4.8 Results

The principal component analysis results in a solution with 5 factors, which looks like this:

Factor 1 Factor 2 Factor 3 Factor 4 Factor 5


OC1 BE1 BetID1 AM1 BetID6
OC2 BE2 BetID2 AM2 (BetID3)
OC3 BE3r ID1 AM3
OC4r BE4 ID2 BetID5
OC5 BE5 ID3
OC6 BE6 ID4
(BetID4) ID7 ID5
(BetID3)
(BetID4)
Value: 9.133 2.365 1.706 1.325 1.003

With these 5 factors, 57.52% of the total variance can be explained. The items in parentheses could not be uniquely assigned but load similarly high on at least two factors.
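Both reported quantities follow directly from the eigenvalues listed in the table. Since the items are standardised in a principal component analysis, the total variance equals the number of items; 27 is assumed here, derived from the scales above (six BE, six ID, six OC and six BetID items plus three AM items):

```python
# Kaiser criterion and variance explained, from the eigenvalues above.
eigenvalues = [9.133, 2.365, 1.706, 1.325, 1.003]   # retained eigenvalues
n_items = 27                                        # 6 BE + 6 ID + 6 OC + 6 BetID + 3 AM

n_factors = sum(1 for v in eigenvalues if v > 1.0)  # Kaiser: eigenvalue > 1
share = sum(eigenvalues) / n_items                  # share of total variance
```

The sum of the five retained eigenvalues, 15.532, divided by 27 reproduces the reported 57.52% up to rounding of the listed eigenvalues.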

6.4.9 Discussion

The forms of identity emerge very clearly in the analysis. Factor 1 corresponds to organisational identity and factor 2 to vocational identity. Work morale can also be recovered (factor 4), although the item BetID5, actually intended for organisational commitment, is assigned to it. The original separation between two forms of commitment is not found in the data. Factor 3 forms a kind of general commitment factor, which consists mainly of items of vocational commitment and partly of items of organisational commitment. The reliability analysis yields an alpha of 0.787 if BetID3 and BetID4 are omitted. Factor 5 consists only of items from the area of organisational commitment, but these are precisely the items that already attracted negative attention in the previous analyses. BetID3 refers to the company suggestion scheme, which is not necessarily relevant for all trainees. BetID6 asks for an assessment of the importance of belonging to the company. In terms of content, factor 5 seems to be a kind of ‘residual factor’ on which the items load that caused the trainees problems during processing. BetID6 and, if possible, BetID3 should be excluded from further analysis and revised.
Second EFA: Exclusion of BetID6
A new exploratory factor analysis (EFA) excluding the variable BetID6, which was identified as fitting neither its originally intended scale ‘organisational commitment’ nor any other scale, should provide information about a factor structure in which a ‘residual factor’ is avoided. The inclusion criterion is again the Kaiser criterion.

Factor 1 Factor 2 Factor 3 Factor 4


OC1 BE1 BetID1 AM1
OC2 BE2 BetID2 AM2
OC3 BE3r ID1 AM3
OC4r BE4 ID4 BetID5
OC5 BE5 ID5 (ID2)
OC6 BE6 (BetID3) (ID3)
(BetID3) ID7 (BetID4)
(BetID4) (ID2)
(ID3)
Value: 9.072 2.239 1.694 1.299
Overall, the model explains 55.017% of the variance.

6.4.10 Discussion

When the item BetID6 is excluded, the fifth factor disappears, which suggests that it was in fact a kind of residual factor resulting from the difficulty of the item BetID6. The EFA carried out here paints a similar picture to the first EFA with regard to the identity factors (factors 1 and 2). Factor 2 has the same structure as in the first EFA, and factor 1 is now only supplemented by the partial loading of the item BetID3, which no longer loads on a residual factor. Work morale (factor 4) also shows a similar structure to the first EFA, with additional items from the commitment range loading on it. The commitment factor (factor 3) has shrunk overall. In this analysis, it turns out that the items BetID1 and BetID2 as well as the items ID1, ID4 and ID5 make up the core of this factor. The other commitment items are now spread over various factors. They do not seem to fit commitment concretely enough on their own, but instead blur the boundaries with identity and work morale (with the exception of ID7 and BetID5, which do not strictly fit commitment but complement vocational identity and work morale).

6.4.11 Discussion

With a third EFA, excluding the items BetID6 and BetID3, the proportion of explained total variance increases significantly, whereas the eigenvalues of the factors become only slightly smaller.
Saxony
An exploratory factor analysis is also carried out on the basis of the Saxon data in order to check whether similar patterns arise in this data set.
First EFA: All Variables Taken Into Account
The first EFA contains all variables and comes to the following result after the
Varimax rotation:

Factor 1 Factor 2 Factor 3 Factor 4 Factor 5


SkOC1 SkBE1 SkBetId1 SkAM1 SkBetID6
SkOC2 SkBE2 SkBetID2 SkAM2 (SkBE3r)
SkOC3 SkBE4 SkBetID3 SkAM3
SkOC4r SkBE5 SkID1 SkBetID5
SkOC5 SkBE6 SkID2 (SkID3)
SkOC6 SkID7 SkID4
SkBetID4 (SkBE3r) SkID5
(SkID3)
Value: 10.425 2.332 1.698 1.205 1.012

The model explains a total of 61.75% of the total variance.

6.4.12 Discussion

This first exploratory factor analysis also arrives at a solution with 5 factors. Again, it is striking that not two commitment factors arise, as assumed in the original model, but a global commitment factor (factor 3) and a kind of residual factor (factor 5), on which load exactly those items that already caused difficulties in the reliability analysis. Unlike in the solution from NRW, the item SkBetID3 finds its place in factor 3; instead, the item SkBE3r becomes blurred, with loadings on factors 2 and 5. In the solution for NRW, this item loaded on the vocational identity scale intended for it. This reverse-coded item seems to cause the trainees greater difficulties here, or does not have the same significance for their vocational identity as in NRW.
Two further model corrections were analysed: the exclusion of SkBE3r and SkID3. These analyses support the conclusion that the scales of organisational commitment and work morale in particular should be revised.

6.4.13 Overall Discussion EFA

Both the analysis of the data from NRW and of the data from Saxony yield a pattern that can best be described by a 4-factor solution. A clear distinction is made between vocational and organisational identity, while there is no separation between vocational and organisational commitment; these forms blur into each other. Work morale is found in both data sets in a form similar to the one originally assumed, although it is supplemented, sometimes more strongly and sometimes less strongly, by further items from the commitment range.
The evaluations carried out here suggest that the separation between vocational and organisational commitment should be reconsidered and, at the same time, that some items should be assigned differently. Thus, item (Sk)ID7 appears consistently as part of vocational identity and item (Sk)BetID5 as part of work morale. The other items, some of which have changed their affiliation to a scale, require vocational-pedagogical justification. It should also be examined whether the separation between the forms of commitment should be abandoned. It remains open whether there is really just one type of commitment, as the factor analysis consistently suggests, or whether the scales provided so far are not suitable for validly measuring the construct and the problem therefore lies in the construction of the scales.

6.4.14 Considerations for Further Action

The analyses carried out show that the separation between organisational and vocational commitment could not be confirmed on the basis of the item structure analysed. This may indicate that both concepts are in fact one concept and can be measured within one construct. Equally, the result may mean that the separation of content is justified but the items are not able to measure the constructs. It must therefore be justified in terms of vocational education whether a separation of the two concepts can be assumed, what exactly the core points of the concepts are, and at which points they differ from one another. Against this background, new items can then be found and scales developed (→ 4.7: Tables 4.7 and 4.8).
Other results relate to work morale. So far, the scale contains only three items with only mediocre reliability values. In the EFA, one or more items are additionally assigned to the scale. It should therefore also be examined what exactly constitutes the core of work morale and how it can be more clearly distinguished from commitment. Then, further items can be found and the existing items modified (Table 4.9).

6.5 Validity and Interrater Reliability in the Intercultural Application of COMET Competence Diagnostics

The COMET project is an internationally comparative research project on vocational education and training that was initiated in Germany. On the basis of the three-dimensional model of vocational competences, the level of development of trainees
and students is diagnosed with regard to their occupational competences, their
occupational commitment and their occupational identity (COMET I, II, III). The
aim is to carry out comparative studies based on the evaluation of measurement
results in various (higher) schools, regions and countries and to achieve a high
degree of reliability and validity in a country comparison.

6.5.1 Method

Guillemin, Bombardier and Beaton (1993) point out that existing measuring instruments must always be adapted to existing cultural and linguistic differences and similarities when used in other cultures and in a different language (Table 6.5).

Table 6.5 Different situations of intercultural adaptation (based on Guillemin et al., 1993)

Target group vs. source measurement                               Translation      Cultural adaptation
No difference in language, culture and country                    Not necessary    Not necessary
Same country and same language, but different culture             Not necessary    Necessary
  (e.g. a group that immigrated to the country of the
  source measurement a long time ago)
Different country and different culture, but same language        Not necessary    Necessary
Same country, but different language and culture                  Necessary        Necessary
  (e.g. new immigrants in the country of the source measurement)
Different country, language and culture                           Necessary        Necessary

China and Germany show great differences in terms of the vocational training system, language and culture. For this reason, the intercultural adaptation of the COMET measuring instruments and evaluation items to vocational training in China involved not only translation but also cultural adaptation. The overall process included preparation, translation, cultural adaptation and performance evaluation. In addition, appropriate counselling training was conducted in China to ensure the reliability and validity of the evaluation of task solutions for the open test tasks.
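The decision logic of Table 6.5 can be stated compactly. The function below is purely illustrative (the name and parameters are ours, not part of the COMET instruments): translation is needed whenever the language differs, and cultural adaptation whenever culture or country differs between target group and source measurement.

```python
# Sketch of the decision logic in Table 6.5 (Guillemin et al., 1993).
def adaptation_needs(same_language, same_culture, same_country):
    needs_translation = not same_language
    needs_cultural_adaptation = (not same_culture) or (not same_country)
    return needs_translation, needs_cultural_adaptation

# The China-Germany case: different country, language and culture,
# so both translation and cultural adaptation are necessary.
china_case = adaptation_needs(False, False, False)
```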

6.5.2 Preparation and Translation

The Chinese working group received the German version of the measuring concept, the measuring instruments and the evaluation items (COMET Vols. I–IV) from the I:BB of the University of Bremen and worked out the implementation concept for the competence measurement in China together with the German colleagues. During translation, the working group used the two-way (back-)translation method, in which a text is translated into the foreign language and then translated back into the source language, in order to achieve high semantic equivalence and to maintain the content and meaning of the tasks of the original instruments. The translation work was carried out jointly by the developers of the measuring instrument and concept at the University of Bremen and by the scientific staff of Beijing Normal University and the Beijing Academy of Education.

6.5.3 Cultural Adaptation

In order to ensure equivalence of content on the basis of the semantic equivalence achieved through translation, the Chinese project group had contents such as test tasks and questionnaires checked and adapted by scientific staff, vocational school teachers and practitioners from companies, among other things by organising workshops. The aim was that the formulation of the measurement concept should correspond to Chinese expressive customs and be adapted to the special features of vocational training in China and to the Chinese examinees, so that the raters and test participants can precisely understand the meaning of measurement tasks, questionnaires and evaluation items. The adaptation of the context questionnaires and evaluation items took place mainly in the initial phase of the project, while the adaptation of the open test tasks was carried out before each test.
First, the formulation of the 40 evaluation items was discussed and determined. Since many further and advanced training measures on curriculum development and implementation in work process systems had already been carried out for teachers/lecturers in several previous innovative projects, such as the ‘Project to improve the qualifications of teachers at vocational training centres’ of the City of Beijing, the teachers/lecturers were able to quickly understand and accept the basic idea of the COMET measurement concept, the measurement model and the 40 assessment items in the workshop. An agreement was also reached quickly regarding the formulation of the evaluation items.
Furthermore, the workshop participants interpreted the tasks in the context questionnaire in order to understand the measurement intention. For this purpose, the formulations were adapted to Chinese usage. For example, ‘occupation’ was replaced by ‘subject/training course’ (zhuanye) and ‘training company’ by ‘practical training company’. Some questions that do not correspond to Chinese circumstances were deleted. To ensure the international comparability of the results, the original numbering of the test items was retained. For example, questions 6 and 8 were deleted from the German context questionnaire and simply skipped in the Chinese questionnaire (Rauner, Zhao, & Ji, 2010).
Before each test, the Chinese working group brought together experienced teachers
and company practitioners to check and adapt the open test tasks and solution
scopes proposed by the German side, above all from the perspective of occupational
validity rather than didactic validity. It turned out that the teachers were able to
agree quite easily on the content validity of the test tasks, owing to their acceptance
of the measurement model for recording professional competence. For example, in
the course of measuring the vocational competence of trainees in electrical
engineering subjects in Beijing, the experts (teachers and practical experts)
examined and discussed four test tasks proposed by Germany and quickly reached
agreement. Three of the four tasks did not need to be changed and could be adopted
directly as test tasks in China; the corresponding solution scopes did not have to be
adapted either. Only one test task was discussed at length: at one point, its situation
description went beyond the scope of electrical engineering, so that the trainees
would also be required to work on the task from an interdisciplinary perspective. At
first the experts disagreed as to whether this interdisciplinary element should be
retained in the task. After carefully interpreting the task with reference to the
COMET competency model, however, a consensus emerged that the task was
consistent with the model, and it was finally agreed to include it as a test task
(Rauner, Zhao, & Ji, 2010). Teachers and practical experts thus tested and adapted
the test tasks on the basis of the COMET competence model and professional
practice in China. The same procedure was followed in the subsequent tests for
trainees in the automotive service sector.
The practice of culturally adapting the COMET concept, the test tasks and the
context survey shows that the basic idea of the work-process-systematic curriculum
and of the COMET competence model is accepted by the Chinese teachers involved
in the project. It also shows that the COMET competency model offers a sound
scientific basis for corresponding educational goals and guiding principles for
international standards in vocational education and training, so that an international
comparison can take place.
188 6 Psychometric Evaluation of the Competence and Measurement Model for. . .

6.5.4 Rater Training

In order to ensure the quality of the assessment and of the country comparison in
competence measurement, a training concept for raters to increase interrater
reliability was developed in the COMET project. The Chinese project group
organised rater training courses for all tests in which the teachers were involved
in the evaluation work. Very high interrater reliability was achieved with this rater
training. The process of rater training includes:
• Presentation of the COMET model for vocational competence, the measuring
procedure and the evaluation procedure;
• Explanation of the eight evaluation criteria and 40 evaluation items for the
evaluation of the solution examples;
• Rating exercise using real examples, i.e., solutions of the trainees were selected as
case studies for each task and evaluated by the raters for the exercise. Each
exercise included three parts: the individual assessment, a group rating and a
plenary discussion. For the process of rater training and the test in electrotechnical
subjects, see COMET Vol. III, 4.2, Tables 4.5 and 4.6.
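The item structure explained in this training sequence—40 items in eight criteria of five items each, which in turn form the three competence levels—can be sketched as a small aggregation routine. The following Python sketch is illustrative only: the item-to-criterion mapping follows Table 6.12, while the function name and the assumption that each item yields a simple numeric score are assumptions, not part of the COMET scoring specification:

```python
import numpy as np

# Eight criteria with five items each (cf. Table 6.12, items 1-40; 0-based here).
CRITERIA = {
    "clarity/presentation": range(0, 5),
    "functionality": range(5, 10),
    "sustainability": range(10, 15),
    "efficiency": range(15, 20),
    "work and business process orientation": range(20, 25),
    "social compatibility": range(25, 30),
    "environmental compatibility": range(30, 35),
    "creativity": range(35, 40),
}
# Competence levels as item bands: functional 1-10, procedural 11-25, shaping 26-40.
LEVELS = {
    "functional": range(0, 10),
    "procedural": range(10, 25),
    "shaping": range(25, 40),
}

def aggregate(item_scores):
    """Aggregate the 40 item ratings of one solution into criterion and level means."""
    x = np.asarray(item_scores, dtype=float)
    assert x.shape == (40,), "expects exactly 40 item scores"
    criteria = {name: x[list(idx)].mean() for name, idx in CRITERIA.items()}
    levels = {name: x[list(idx)].mean() for name, idx in LEVELS.items()}
    return criteria, levels
```

A solution rated by several raters would simply be aggregated once per rater, which is the form in which the group and plenary comparisons described above are made.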

6.5.5 Analysis of the Reliability and Validity of the Evaluation Items

Based on the aforementioned processes and on the data from three tests, the
structural validity of the evaluation items was evaluated by means of an explorative
factor analysis and the reliability of the evaluation items by means of the internal
consistency coefficient. The three tests involved 831 trainees in electrical engineer-
ing subjects in 2009, 779 trainees in automotive engineering in 2011 and 724 trainees
in automotive engineering in 2012 (Zhuang & Zhao, 2012, 46 ff.).

6.5.6 Results

Effect of Rater Training on Increasing Interrater Reliability

The Chinese working group organised a very extensive rater training and came to the
following results:
• In the first trial rating, the assessments differed widely: the raters relied on their
subjective teaching experience and not predominantly on the solution scopes.
After the importance of the solution scopes as an interpretation aid for the rating
had been discussed in plenary, the degree of agreement increased sharply in the
next trial rating.
• Individual independent evaluation, agreement in the group rating, and reports
and discussion in the plenum represent a very effective approach to rater
training. In the group rating and the plenary discussion in particular, the
evaluations of the individual raters were presented to the groups and the plenum
in tabular form; the evaluation results of the German raters were also shown as
reference values. During the discussion, each rater reflected on his or her own
assessment in the light of the evaluations of other experienced raters, the groups
and the plenary discussion. In the groups and in the plenary session, the main
focus was on those evaluation items on which the ratings deviated most, in order
to agree on common evaluation standards and evaluations. To this end, those
raters whose assessments differed significantly from the others were asked to
explain and justify their assessment, so that the gap between the assessments of
the different raters could be reduced (discursive validity).
• In the evaluation of the first two solutions, the agreement of the raters increased
significantly. In the evaluation of the third solution, the agreement of the raters
had already reached a high level. The interrater reliability of the raters (Finnjust)
was 0.8 and higher. After the discussion on the fourth solution, the raters had
internalised the evaluation points even more deeply and mastered the evaluation
standards, so that the interrater reliability of the raters remained at a high level.
See also Tables 6.6, 6.7 and 6.8.
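The Finn coefficient used throughout Tables 6.6, 6.7 and 6.8 relates observed rater disagreement to the disagreement expected under chance. As an illustration, the following Python sketch implements the basic Finn (1970) formulation; the COMET-specific variants Finnunjust and Finnjust involve an additional adjustment that is not reproduced here:

```python
import numpy as np

def finn_coefficient(ratings, k):
    """Finn's r for an (n_targets x n_raters) matrix of scores on a 1..k scale.

    r = 1 - MS_within / sigma2_chance, where MS_within is the mean within-target
    variance of the raters' scores and sigma2_chance = (k**2 - 1) / 12 is the
    variance of a uniform distribution over the k rating categories.
    """
    x = np.asarray(ratings, dtype=float)
    ms_within = x.var(axis=1, ddof=1).mean()   # observed rater disagreement
    sigma2_chance = (k ** 2 - 1) / 12.0        # disagreement if raters guessed uniformly
    return 1.0 - ms_within / sigma2_chance
```

Perfect agreement yields r = 1; following the note to Table 6.8, values above 0.5 count as satisfactory and values above 0.7 as good.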

Table 6.6 Interrater reliability: Rater training for the preparation of competence measurement in
electrical engineering subjects (2009)
(Rating sessions from Day 1 to Day 3, morning and afternoon)
Pb-Code   Task                Finnunjust
H0282     Signals             0.41   0.82
H0225     Signals             0.54   0.79
H0176     Drying space        0.80   0.84
H0234     Drying space        0.75   0.80
H0265     Skylight control    0.84   0.82
H0102     Skylight control    0.82   0.83
H0336     Pebble treatment    0.86   0.85
H0047     Pebble treatment    0.79   0.79

Table 6.7 Interrater reliability: Rater training for the preparation of competence measurement in
the field of automotive engineering (2011)
Case study (answer sheets)                Number of raters   Finnjust
Chinese trainees on oil consumption       29                 0.70
Chinese teachers on winter check          30                 0.76
Chinese trainees on winter check          30                 0.85
German trainees on winter check           30                 0.77
Table 6.8 Interrater reliability: Rater training of the competence measurement of trainees in the
field of automotive engineering (2012)
Case study                    Number of raters   Finnjust
Window lifter 1               25                 0.64
Window lifter 2               25                 0.79
Liquified petroleum gas       25                 0.84
Classic car 1                 25                 0.78
Classic car 2                 25                 0.80
Glowing MIL                   25                 0.84
Misfuelling                   25                 0.83
Note: The interrater reliability (Finnjust) is satisfactory at a value >0.5 and good at a value >0.7

Analysis of the Structural Validity of the Evaluation Items

The COMET competence model assumes that competence at a higher level includes
competence at a lower level. Here, the factor analysis is carried out at the level of
functional competence (assessment points 1–10), procedural competence (assess-
ment points 11–25) and shaping competence (assessment points 26–40)
(cf. Table 6.9).
From the result of the factor analysis, it can be deduced that the 10 evaluation
items under 'functional competence' can be combined into one factor and the
15 evaluation items under 'shaping competence' into two factors (the 10 evaluation
items under 'social compatibility' and 'environmental compatibility' forming one
factor and the 5 evaluation items under 'creativity' the other). This indicates good
structural validity. The 15 evaluation items under 'procedural competence' can be
grouped into three factors: the five evaluation items of each of the three criteria
'sustainability', 'economic efficiency' and 'business and work process orientation'
essentially combine into one factor each. Overall, the COMET evaluation items
show a good structure and essentially correspond to the theoretical hypothesis
(Table 6.10).

Table 6.9 Factor analysis of the three levels of professional competence

                        Functional competence   Procedural competence   Shaping competence
                        (assessment points      (assessment points      (assessment points
                        1–10)                   11–25)                  26–40)
KMO value               0.957                   0.910                   0.934
χ² value of the         14,088.433**            10,648.921**            26,649.283**
Bartlett test for
sphericity
Number of common        1                       3                       2
factors extracted
Share of explained      70.724%                 63.176%                 61.972%
total variance
Note: ** means p < 0.01
Table 6.10 Factor analysis for the 15 evaluation items under ‘procedural competence’
Components
1 2 3
WM11 0.386 0.001 0.501
WM12 0.382 0.056 0.752
WM13 0.2 0.106 0.781
WM14 0.8 0.012 0.232
WM15 0.81 0.062 0.204
WM16 0.623 0.304 0.105
WM17 0.133 0.866 0.043
WM18 0.177 0.83 0.111
WM19 0.091 0.529 0.521
WM20 0.384 0.713 0.241
WM21 0.708 0.351 0.236
WM22 0.728 0.308 0.251
WM23 0.434 0.394 0.359
WM24 0.642 0.328 0.343
WM25 0.135 0.312 0.665
Extraction method: principal component analysis
Rotation method: orthogonal rotation with Kaiser normalisation
Rotation converged after 5 iterations
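The KMO value and Bartlett's χ² test of sphericity reported in Table 6.9 follow standard formulas based on the item correlation matrix. The following numpy sketch illustrates those formulas; the original analysis was presumably run with standard statistics software, and the data and names here are invented:

```python
import numpy as np

def kmo_and_bartlett(data):
    """KMO sampling adequacy and Bartlett's sphericity test for an (n x p) matrix."""
    x = np.asarray(data, dtype=float)
    n, p = x.shape
    r = np.corrcoef(x, rowvar=False)           # item correlation matrix R
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                         # partial correlations between items
    off = ~np.eye(p, dtype=bool)               # off-diagonal mask
    r2 = (r[off] ** 2).sum()
    p2 = (partial[off] ** 2).sum()
    kmo = r2 / (r2 + p2)
    # Bartlett: chi2 = -(n - 1 - (2p + 5) / 6) * ln|R|, df = p(p - 1) / 2
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(r))
    df = p * (p - 1) // 2
    return kmo, chi2, df
```

A KMO value close to 1 and a significant Bartlett χ² (as in Table 6.9) indicate that the item correlations are substantial enough for a factor analysis to be meaningful.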

Analysis of the Reliability of the Evaluation Items

Reliability was analysed for 'vocational competence' overall, for the three
competence levels 'functional competence', 'procedural competence' and 'shaping
competence', and for the eight criteria. It was found that Cronbach's α for the
overall reliability of 'vocational competence' and for the three competence levels
is above 0.9, and that the α coefficient for all eight criteria is above 0.8. Overall,
the measurement model thus has a high internal consistency (see also Tables 6.11
and 6.12).
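Cronbach's α, the internal consistency coefficient used here, follows the standard formula α = k/(k−1) · (1 − Σ item variances / variance of the sum score). A short illustrative Python sketch (not the original analysis code):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_persons x k_items) score matrix."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()    # sum of single-item variances
    total_var = x.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

For the scale 'functionality' in Table 6.12, for example, the matrix would hold each trainee's ratings on items 6–10; α close to 1 means the five items measure the criterion consistently.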

Table 6.11 Reliability analysis for the three assumed competence levels

                 Vocational      Functional      Procedural      Shaping
                 competence      competence      competence      competence
α coefficient    0.956a          0.953           0.907           0.924

a Item 20 was deleted from the evaluation scale of the 2009 test and item 3 in the 2011 and 2012
tests, which is why the overall reliability of vocational competence is the calculation result
without evaluation item 3. Calculation without evaluation item 20 results in α = 0.971
Table 6.12 Reliability analysis of 8 competence dimensions


Competence level Competence dimensions Alpha value
Functional competence Clarity/presentation (item 1–5) 0.910
Functionality (item 6–10) 0.902
Procedural competence Sustainability (item 11–15) 0.899
Efficiency (item 16–20) 0.821
Orientation on work and business process (item 21–25) 0.917
Shaping competence Social compatibility (item 26–30) 0.837
Environmental compatibility (item 31–35) 0.828
Creativity (item 36–40) 0.907

Discussion

On the basis of the analysis described, the following can be determined:


1. The intercultural adaptation of the measurement model of COMET’s test tasks
and evaluation items and the rater training in China were very successful. This
result supports the assumption that the COMET competence model has a good
scientific basis, complies with the rules and training objectives of VET and
provides a good basis for a country comparison.
2. The practice of rater training in China shows that the succession of individual
independent evaluation, group rating and collective discussion in plenary is an
effective training concept. The combination of individual, independent evaluation
and reflection on the basis of reference values can effectively promote under-
standing in rating. The solution scope can help the raters effectively both in
evaluation and in reaching agreement among themselves. The Chinese raters
were able to accept and discuss different opinions and to agree on common
evaluation standards. After the evaluation of only two solutions, high interrater reliability
had already been achieved. The evaluation items of the COMET competence
measurement have a good structural validity and a high internal consistency.
Chapter 7
Conducting Tests and Examinations

7.1 How Competence Diagnostics and Testing are Connected

When measuring professional competence development in the form of competence
levels and competence profiles, competence is recorded as a domain-specific cognitive
disposition and not as professional action competence. What is measured is the
ability to solve professional tasks through planning and concepts. This means that—
so far—the aspect of the practical execution of a conceptual-planning task relevant
for the examination of professional action competence remains unconsidered. By
defining competence as a cognitive potential, the aspect of the practical implementation
of tasks solved in planning and conceptual terms is excluded for research-pragmatic
reasons (time scale, costs, etc.). However, these restrictions do not apply
to the performance of examinations in accordance with the regulations of the
Vocational Training Act. For example, several days (or even weeks, e.g. in the
case of industrial clerks and of craft trades that have retained the tradition of the
journeyman's piece) are available for the part of the examination dealing with the
‘company assignment’ or ‘company project work’ as central elements of a modern
examination and the examination boards have the appropriate personnel and time
resources for the performance and evaluation of the examinations.
If competence is measured using the COMET method with the aid of computers—instead
of the traditional paper-and-pencil test—then for many occupations
which do not focus on manual skills, it is possible to measure professional skills in
the form of computer-assisted competence diagnostics. This is especially true when
programme-controlled work systems shape the content and form of the work process
in a profession. For occupations in the commercial and administrative employment
sector, for example, this situation is just as frequent as for IT occupations. This also
applies increasingly to industrial-technical and artistic professions, in which the
professional activity is shaped by the computer-, net- and media-supported working
environment. A typical example is the car mechatronics engineer (Schreier, 2000).

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_7


As a result, the traditional separation between planning, conceptual and executive
activities is losing importance for these occupations and the relevant professional
fields of action.
For professions and professional fields of activity, for which—in contrast—a
fundamental distinction is required between planning and execution of work orders
and for which only the execution of the planned project allows information about the
level of competence and the competence profile, the question arises as to the
applicability of the COMET measurement procedure for the examination of profes-
sional action competence. The size of the step from the planned solution of a task to
its practical implementation must therefore be examined. This includes the question
of whether and to what extent the professional competence measured according to
the COMET test procedure can also be regarded as an indicator of professional
action competence. Regardless of the individual occupational circumstances, 'final
examinations' are about checking the qualification requirements laid down in the job
descriptions and training regulations. Competence diagnostics, on the other hand,
measures cognitive disposition. As the categories ‘qualification’ and ‘competence’
are often used synonymously, it was obvious to specify them when establishing the
COMET competency model (Table 2.1).
This categorical differentiation also means that the two forms of testing voca-
tional skills overlap to a greater or lesser extent and that guiding ideas and forms of
competence diagnostics can diffuse into examination practice. On the basis of these
considerations and circumstances, it will therefore be examined in the following
whether the examination parts 'company assignment' and 'company project work'
and the format of 'holistic' and 'complex' tasks can be designed as forms of
examination of professional competence using the methods of competence
diagnostics according to the COMET test procedure.

7.1.1 Reviews of Professional Competence: The New Examination Practice

Answering this question requires an analysis of examination forms, which were
initially developed and introduced for technical draftsmen and IT occupations
(Borch & Weißmann, 2002; Petersen & Wehmeyer, 2001), and then for the
reorganised electrical and metal occupations and numerous other occupations.
These examples will be used to investigate the similarities and differences between
modern tests and COMET competence diagnostics and whether the COMET com-
petence model can be used to design tests.
According to § 38 of the BBiG, the function of the examination is ‘to determine
whether the candidate has acquired the professional action competence. The exam-
inee must prove that he/she has the necessary professional skills, the necessary
professional knowledge and abilities and is familiar with the teaching material
Fig. 7.1 Examination structure for IT occupations

which is essential for vocational training and which is to be taught in vocational


school lessons’ (Fig. 7.1).
The examination practice, which was initially introduced for IT occupations,
provides for central examination parts that represent typical work and business
processes, namely,
• An ‘in-house project work’ (for IT occupations) or an in-house assignment (for
other occupations),
• Two ‘holistic tasks’ or a comparable examination format.

In the project work, current topics from the operational activities of the respective field of
application or specialist area of the candidate are to be taken up, which should also be
usable for the company if possible (Borch & Schwarz, 1999, 24).

The term ‘holistic task’ is intended to express that the purpose of this form of
examination is to test the understanding and knowledge of context as well as the
ability to solve professional tasks in full. This form of examination, introduced in the
1990s, has already met with a predominantly positive response from both trainees
and companies at the very first attempt (Fig. 7.2).
The majority of the trainees rated the examination part of company project work
as practice-oriented—on average 70%. Trainees regard the degree of difficulty of
these tasks as appropriate: approximately 30% consider this part of the examination
too difficult, while only a small minority of 5% felt under-challenged by this form
of examination.

Fig. 7.2 Evaluation of the in-company project work by the trainees (Petersen & Wehmeyer, 2001)
In contrast to the positive assessment of the practical relevance and the degree of
difficulty of company projects, the trainees had considerable reservations about the
assessment of their examination performance. Only slightly more than one in three
trainees consider the evaluation of the performance of the project work to be fair.
This is mainly due to the fact that between 40 and 60% of the trainees saw
insufficient agreement between the training and examination contents. This disagree-
ment was seen as particularly large in the implementation and evaluation of the
company’s project work. Approximately 50% of trainees stated that the assessment
was not objective (Petersen & Wehmeyer, 2001, 300 f.). The differentiated assess-
ment of this form of examination by trainees provided important indications for the
further development of this form of open examination tasks (Fig. 7.3).

Characteristics of ‘Measuring’ and ‘Testing’ Professional Skills

A comparison of this ‘holistic’ evaluation concept with the COMET test procedure
reveals striking similarities at the concept level.
Fig. 7.3 Evaluation of the assessment of the company project work in the final examination by the
trainees (Petersen & Wehmeyer, 2001)

7.1.2 Context Reference: Work and Business Processes

In 1984, Horst Kern and Michael Schumann published their research on
rationalisation in industrial production in the much-discussed book Das Ende der
Arbeitsteilung? ('The End of the Division of Labour?') (Kern & Schumann, 1984,
738). Behind this question is a thesis that has since found its way into 'work and
technology' research—as well as into vocational training research—and which was
confirmed in the MIT study The Machine that Changed the World (Womack, Jones,
& Roos, 1990). Two findings in particular
have since led to far-reaching effects in the practice and organisation of business
processes and the qualification of specialists for the directly value-adding work
processes:
1. The reduction of the horizontal and vertical division of labour strengthens the
productivity and competitiveness of enterprises, and consequently,
2. The implementation of lean, process-oriented business concepts depends on
vocational training that allows tasks and responsibility to be transferred to directly
value-adding work processes (Fig. 7.4).
The concept of process- and design-oriented vocational training is based on five
basic pedagogical principles and guiding ideas that take account of technological
and economic change as well as the normative pedagogical models of vocational
education aimed at understanding, the ability to reflect, personality development and
participation. The categories of ‘process’ and ‘process orientation’ replace those of
Fig. 7.4 From a function- to a business process-oriented organisational structure

‘function’ and ‘function orientation’ and stand for a variety of process-related issues
that shape vocational work and training.
The complexity of this guiding principle posed considerable difficulties for VET
planning and research in its implementation in training and examination regulations.
This became apparent when formulating and establishing a competency model as the
basis for structuring the examinations. The 'extended final examination' with parts 1
and 2 replaced the traditional intermediate examination; part 1 of the final
examination—taken half-way through training—has since then been included in the
assessment of the final examination with a weighting of 30–40% (Fig. 7.5).
Part 1 of the final examination is a vocational examination that is tailored to the
various training occupations and refers to the qualifications taught in the first
18 months of dual vocational training. As an overriding reference, 'the ability to
plan, carry out and control independently and to act in the overall operational
context' is emphasised; this ability must be demonstrated at a level appropriate to
the stage of training reached. The 'complex work tasks' examination format is used
to focus on the qualifications required for the planning, creation and commissioning
of technical systems or subsystems. 'In addition to assembly, wiring and connection,
this also includes functional testing as well as the search for and elimination of
faults. Included are specific qualifications for assessing the safety of electrical
systems and equipment' (ibid., 7).
The candidate receives a list of subtasks for the analysis and execution of the task
(see Fig. 7.6), and the evaluation of the examination performance is based on this
task list: it is checked whether the sub-competences shown in the task list are
'fully', 'partially' or 'not' achieved. The situation description of this form of
complex work task differs only slightly from the situation description (scenario) of
the COMET test tasks. It is striking that when explaining the concept of
Fig. 7.5 Structure of the final examination of industrial electrical professions (BMBF, 2006b, 6)

the ‘complex work task’, for the processing of which—as with COMET test tasks—
120 min is provided, a gradual reduction of the ‘typical’ (complex) work tasks is
made. In a first step, this takes the form of reducing complexity by means of
restrictive partial contracts (BMBF, 28). In order to implement this concept of
reduced professional competence, an ‘examination hardware’ is specified (ibid.,
29). With an abstract test hardware ‘automation technology’, the typical character-
istics of the operational work processes evaporate. Questions about the usefulness
and practical value of a technical solution, its environmental and social compatibility
and the question of the creativity of the task solution no longer arise, or only to a very
limited extent.
For the construction of the 'complex work task with reduced complexity' in this
example, defined faults are to be built in 'to ensure that all examinees meet the
same basic requirements for troubleshooting. In addition, the evaluators can carry
out the evaluation more easily' (ibid., 30). Similar requirements are proposed for
other parts of the examination (e.g. planning the wiring and program changes). The
intention is to break the 'complex work task' down into numerous subtasks.
The solution scope is thus limited mainly to the criterion, or partial competence, of
functionality. The essential elements of professional competence, as laid down in
the training regulations and the COMET competence model, are no longer in view.
The operationalisation of the open situation description leads to an examination
format guided by subtasks, which does not make it possible to check the professional
competence defined in the training regulations on the basis of the assessment of
the examination results.
The BMBF’s handbook points out this problem: ‘Unfortunately, it is possible to
agree on a procedure for an evaluation that is highly objective and reliable, but still
Fig. 7.6 Work task (BMBF, 2006b, 31 f.): ‘Your supervisor instructs you to plan the changes,
carry out the changes in the control cabinet and test them at a pilot plant’

does not cover what is to be covered! It’s about the validity of a test. In this respect
one must be aware that one can record and evaluate very different performances of
an examinee. A central question always remains whether the examination perfor-
mance recorded is an appropriate indicator of what is to be examined. [. . .]’. It was
shown that examination requirements with a high affinity to the requirement dimen-
sion of the COMET competence model are defined in training regulations. When
implementing the training objectives and examination requirements in a manageable
examination procedure, the question arises as to the quality criteria of the new
examination practice.
In its evaluation report on the introduction of IT occupations, BIBB points to a
serious problem in the implementation of the new form of examination: 'However,
the examination practice is quite different. In the first intermediate examination,
sixty tasks were set instead of four—a blatant disregard for the training regulations.
The ‘holistic tasks’ are also subdivided and partly programmed [multiple choice
tasks]—but in no case holistic. To date, neither the Federal Government nor the
Federal States, as supervisory bodies over the Chamber of Industry and Commerce,
have prevented the unlawful examination practice’ (Borch & Weißmann, 2002, 21).
These conclusions, derived from the evaluation results for the implementation of
the IT reorganisation procedure (Petersen & Wehmeyer, 2001), were evidently taken
into account in the reorganisation procedures of the following years. The decision
not to dissolve the 'holistic' tasks into a multitude of subtasks (following Aristotle:
'The whole is more than the sum of its parts') brought with it a new problem: the
evaluation of the very different solutions to 'holistic tasks', or of the quality of the
work results of the wide range of different 'operational tasks'.
According to the authors of the IHK NRW handbook Der Umgang mit dem
Varianten-Modell of 4 February 2010, a ‘still unsolved basic problem of this form
of examination’ is that ‘the examination board must arrive at a substantiated
statement at the end of the examination on the basis of a written work assignment’
(IHK NRW, 2010, 6).
With the COMET test procedure, a task format and a rating procedure were
developed with which all quality criteria—both a high degree of substantive validity
and an equally high degree of objectivity and reliability—can be met. It therefore
suggests itself to investigate whether the COMET test procedure can be applied to
examinations, and thus whether the 'quality' of the new examinations can be increased.

7.1.3 Levelling of Test Results

With the fragmented division of the complex task (the whole) into a structure of
subtasks, the solution scope of the task, as initially opened up by the situation
description, is clearly limited. As a consequence, this leads to a levelling of the
assessment of examination performance. High-performing test participants have no
possibility to make full use of the solution scope given by the open situation
description, since the structuring of the task solution is precisely specified. Weak
trainees receive far-reaching support in solving the task through the question-guided
presentation of the task. There is a risk that the objectively given heterogeneity of the
competence characteristics of the subjects will be reduced (Fig. 7.7).
Since task-specific assessment criteria are applied in established examination
practice, the competence development in the form of competence profiles and
competence levels can no longer be compared across tasks. In addition, task-specific
evaluation items make it more difficult to qualify the examiners and to achieve a
sufficiently high degree of examiner-independent comparability of examination
results. The COMET measuring method offers a solution. In addition to a
standardised measurement model (evaluation grid), the development of task-specific
solution spaces is a prerequisite (see above). These have the function of enabling the
raters (examiners) to interpret the rating items in a task-specific manner. After a rater
Fig. 7.7 Levelling effect of the tests of 453 test participants (COMET electronic technicians) (TS:
total point value COMET test group)

training, the raters are able to evaluate the entire range of highly diverse solutions on
the basis of the standardised rating scale even without using the solution scopes.
The application of the COMET evaluation procedure to the evaluation of
examination performance in solving holistic work tasks simplifies examination
procedures and quite decisively increases the reliability, accuracy and thus the
comparability of examination results. At the same time, there is no need to divide
the holistic tasks into a multitude of subtasks.
The consequence of introducing an objective examination procedure is that the
heterogeneity of professional competences is reliably and objectively recorded.

7.2 The Measurement of Professional Competence

For some training occupations, the training companies are free to choose between the
‘operational order’ version of the examination and the standardised form of the
‘practical task’. Practical tasks are created nationwide and are a form of simulation of
real work processes. The relevant 'implementation guides' rarely emphasise that
these are two fundamentally different types of examination. The
operational order is characterised by its embedment in the social context of a
company, the specific competitive situation as well as the uncertainties of
unforeseeable events. Simultaneously, this complicates the realisation of a
comparable evaluation of operational orders. It is precisely the central characteristic of
these orders that they are each singular events. This strengthens their authenticity.
However, this form of examination poses the challenge of developing selection and
evaluation criteria that ensure the comparability of this element of the examination.
Using the training regulations for the profession of electronics technician for
automation technology as an example, the BMBF has presented an implementation
guide that makes it easier for companies to decide for or against this examination element.
Initially, reference is made to the qualification requirements to be assessed:
‘The candidate should demonstrate an ability to
1. Evaluate technical documents, determine technical parameters, plan and coor-
dinate work processes, plan materials and tools,
2. Assemble, dismantle, wire, connect and configure subsystems, comply with safety,
accident prevention and environmental regulations,
3. Assess the safety of electrical systems and equipment, check electrical protective
measures,
4. Analyse electrical systems and check functions, locate and eliminate faults, adjust
and measure operating values,
5. Commission, hand over and explain products, document order execution, pre-
pare technical documents, including test reports’ (BMBF, 2006a, 2006b, 9).
The examination forms ‘operational project’ (IT occupations) and ‘operational
order’ (electrical occupations) are de facto identical examination forms, even if an
attempt is made, with extensive justifications, to construct an artificial distinction
between technical competence and process competence. The latter is assigned to the
operational order in the examination regulations for electrical occupations. ‘This
[. . .] should be a concrete order from the trainee’s field of application. What is
required is not a ‘project’ specially designed for the examination, but an original
professional action in everyday business life. However, the operational order must
be structured in such a way that the process-relevant qualifications required from
the candidate can be addressed and ascertained for evaluation using practice-
related documents in a reflecting technical discussion’ (BMBF, 2006a, 2006b, 6).
Arguments in favour of this form of examination say that this is about evaluating
process competence in the context of business processes and quality management in
the operational overall context. The evaluation encompasses the qualities with which
the coordination and decisions determining professional action can be perceived in
the work processes (ibid., 4). This is meant to express that the examination is not a
question of checking 'professional competence'. A large number of implementation
guides and practical examples have since been used in an attempt to convey the
separate assessment of process competence and professional competence to
examiners and examination boards. This has not succeeded, because centuries of
examination practice (from the journeyman's project to the operational order) speak
against this regulation, and there is no other vocational-pedagogical justification for
this concept.
For example, a recommendation for companies and examiners of North Rhine
Westphalia’s Chamber of Industry and Commerce responds to the question of
whether ‘technical questions’ are prohibited in the technical discussion following
the operational order by stating: ‘The central focus of the new examinations for the
operational order is on both the ‘processability’ and ‘technical character’, or
‘process competence’ and technical competence’. Although so-called ‘technical
questions’ are by no means prohibited during a technical discussion, they should
directly relate to the operational order’ (IHK NRW, 2010, 14 f.).

Fig. 7.8 Evaluation—operational order (BMBF, 2006a, 2006b, 12)

The work performed during the processing of an operational order is evaluated in
accordance with the following scheme (Fig. 7.8).
A comprehensive set of instruments was submitted for the evaluation of exam-
ination performance, which—in its basic structure—is presented here in simplified
form (Fig. 7.9).
There are major differences in the differentiation of the qualifications to be
demonstrated. They range from a few higher-level qualifications to 50 or more
individual items (ibid., 21).
A procedure to simplify the evaluation was developed by PAL. The individual
items assigned to the four higher-level qualifications Q1 to Q4 were combined in this
procedure to form four items (Fig. 7.8, Column 2).
This is followed by a determination of the degree to which this summarising
criterion is met, again differentiated into three levels. The examiners give the
‘comprehensive’ criterion a score between 8 and 10, the ‘incomplete’ evaluation a
score between 5 and 7 and the ‘inadequate’ evaluation a score between 0 and 4.
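This banded scoring can be made concrete with a short sketch. The code below is illustrative only: the function names, the validation logic and the example ratings are not part of the PAL procedure, which only defines the three verbal grades and their point bands.

```python
# Minimal sketch of the PAL scoring scheme described above. Each of the
# four summarised criteria (Q1-Q4) is judged verbally; the verbal grade
# determines a point band, and the examiners choose a concrete point
# value within that band. Names below are illustrative, not part of PAL.

POINT_BANDS = {
    "comprehensive": range(8, 11),  # 8-10 points
    "incomplete": range(5, 8),      # 5-7 points
    "inadequate": range(0, 5),      # 0-4 points
}

def check_score(grade: str, points: int) -> bool:
    """Check that a point value lies in the band for the verbal grade."""
    return points in POINT_BANDS[grade]

def total_score(ratings: dict) -> int:
    """Sum the points awarded for Q1-Q4, validating each against its band."""
    total = 0
    for criterion, (grade, points) in ratings.items():
        if not check_score(grade, points):
            raise ValueError(f"{criterion}: {points} points outside band for '{grade}'")
        total += points
    return total

# Hypothetical example: one candidate's ratings on the four criteria.
ratings = {
    "Q1": ("comprehensive", 9),
    "Q2": ("incomplete", 6),
    "Q3": ("comprehensive", 8),
    "Q4": ("inadequate", 3),
}
print(total_score(ratings))  # 26
```
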

Qualification areas and qualifications to be demonstrated (each rated via the PAL
procedure as 'comprehensive' = 8-10 points, 'incomplete' = 5-7 points or
'inadequate' = 0-4 points):

Q1: Accepting orders, selecting solutions (PAL criterion: handling solution variants)
• Analysing work orders
• Procuring information, clarifying technical and organisational interfaces
• Evaluating and selecting solution variants from technical, economical and
ecological perspectives

Q2: Planning work processes (PAL criterion: structuring the overall order according
to subtasks)
• Planning and coordinating order processes
• Defining subtasks, compiling planning documents
• Considering work processes and responsibilities at the site of operation

Q3: Executing the order and examinations (PAL criterion: handling obstacles and
faults)
• Executing orders
• Testing and documenting function and safety, observing standards and
specifications for the quality and safety of systems
• Systematically searching for and eliminating errors and defects

Q4: Order conclusion and evaluation, product hand-over (PAL criterion:
documenting and explaining work result)
• Handing over products, providing specialist information, preparing acceptance
reports
• Documenting and evaluating work results and services, invoicing services
• Documenting system data and documents

Fig. 7.9 Evaluation form for 'operational orders' (BMBF, 2006a, 2006b, 15 ff.)

This procedure limits the possibility of achieving a sufficient level of objective
and reliable evaluation of the examination performance.
The decisive weaknesses of this examination procedure do not lie in the form of an
'operational order', but
1. In the incomprehensible attempt not to use this traditional form of practical
examination (e.g. the 'journeyman's project') officially to examine professional
competence, but to restrict it to a supposedly distinguishable 'process competence'
instead. This represents a fundamental difference from the successful examination
concept for IT occupations. It is impossible to convey why professional competence
should not also be examined on the basis of an operational order.

2. In the structure of the operational orders and the evaluation of their solution
according to the concept of complete learning and working. The basic theory
underlying the holistic solution of occupational tasks for the modelling of
vocational action and shaping competence therefore remains unconsidered. The
theory of complete action abstracts from the contents of professional work
processes, with the consequence that the individual competences to be taught in
modern vocational education and training are partly ignored. Professional com-
petence should enable people to solve professional tasks comprehensively. In this
case, ‘comprehensively’ also refers to the concept of complete action, but above
all to the theory of holistic problem solution. And the latter concerns the require-
ments dimension of professional competence and therefore its development as a
competence level and competence profile.
It is therefore evident that the further development of this examination concept
should be based on the regulations for IT occupations (operational project).
However, the criticism expressed in the evaluation studies that 'artificial projects',
far removed from real operational practice, are very frequently developed solely for
examination purposes must be taken seriously.
the term ‘operational project’ also includes an element of prospectivity and therefore
points beyond existing practice. In contrast, operational ‘routine orders’ are oriented
to existing practice. All too easily, this could create a central idea of adaptation
qualification: qualifying for that which exists. However, this would contradict the
fundamental change of perspective introduced in VET in the 1980s with the concept
of ‘Shaping Competence’. It is therefore a question of examining professional
competence—differentiated according to the progressive competence levels of func-
tional, processual and holistic shaping competence.
A comparison of the qualification grid Q1 to Q4 with the COMET competence
model shows that the very differentiated qualification requirements Q1, Q2 and Q4
are reflected in the modelling of the requirements and action dimensions of the
COMET competence model. The high affinity between modern training regulations
and the COMET competence and measurement model was documented in detail in
COMET Volume II.
In this respect, it remains to be examined whether and, if so, how Q3 ‘Executing
orders’ can be integrated into the COMET test procedure.
The following procedure seems suitable here:
1. The items of the rating procedure are suitable throughout not only for the
evaluation of the conceptual-planning solution of a test item, but also for the
evaluation of the execution of the operational orders. In examination practice, the
work result (product) is transferred to the client.
2. As documentation and explanation of the order (result) as well as, for example, by
explanations on how the result was handled and managed (e.g. in the form of user
training).
3. In the form of an expert discussion (30 min), during which the candidate has the
opportunity to justify the solution of their task.

Table 7.1 Items on the sub-competence 'Implementing the plan'

Implementing the plan: was it possible ...
(1) ... to implement the plan in practice?
(2) ... to react appropriately to obstacles and faults?
(3) ... to verify the solution's customer friendliness?
(4) ... to identify and, if necessary, to remedy the errors?
(5) ... to arrange the handover to the client in a customer-oriented manner?

4. In this regard, there is a parallel to the COMET test procedure. As is the case
during the technical discussion and during handover of their work results, the
trainees are asked to give comprehensive and detailed reasons for their proposed
solutions.
5. As the execution of an operational order (Q3) includes the examination of the
functional capability, and the complete implementation of all standards and
regulations (safety, health, environmental compatibility, etc.), the qualification
requirements encompass the identification of faults and defects as well as their
elimination. COMET competence diagnostics does not provide this implementation
aspect, as this has so far been limited to measuring the conceptual-planning
solution/processing of tasks. It therefore makes sense to supplement the COMET
competence and measurement model with this partial competence of
‘implementing the plan’ (cf. Table 7.1).
It is a good idea to integrate these assessment criteria into an appropriately
modified assessment scale. This applies in particular to the sub-competences ‘clar-
ity/presentation’ and ‘functionality/professional solutions’ (Table 7.2).

7.2.1 COMET as the Basis for a Competence-Based Examination

The relevant examination regulations provide for an examination structure
according to which, for example, in part A (operational order/operational project),
vocational skills are examined on the basis of an operational order selected as an
example, while part B (two complex tasks of max. 2 × 120 min) is aimed at
vocational work process knowledge. The duration of the intermediate examination
or part 1 of the examination is a maximum of 10 h for the complex work task and a
maximum of 120 min for a written examination part.
According to this examination arrangement, comprehensive examination of the
qualification requirements defined in the training regulations is possible neither at
the level of knowledge nor at the level of ability.

Table 7.2 Adaptation of the assessment criteria to the rating or evaluation of operational orders/
projects
The requirement is: in no way met / not met / partly met / fully met
(1) clarity/presentation
Is the presentation form of the solution suitable for
discussing it with the client?
Is the solution presented appropriately for professionals?
Was it possible to verify the solution’s customer
friendliness?
Is the documentation and presentation technically
appropriate?
Was it possible to arrange the handover to the client in
customer-oriented manner?
(2) functionality/professional solutions
Was the ‘state of the art’ taken into account during
planning?
Was it possible to react appropriately to obstacles and
faults?
Was it possible to implement the plan in practice?
Was it possible to identify and, if necessary, to remedy the
errors?
Are the solution of the assignment and the procedure
adequately justified?

The operational work assignment or operational project work, as well as the
holistic work assignment, represent competences or qualification requirements of
the vocational fields of action.
The ability to work is assessed based on examination tasks which are developed
or selected according to the criteria of representativeness and exemplarity.
In the case of safety-relevant professional competences (e.g. mastery of the
relevant safety regulations for electrical installations), it makes sense to establish a
concept of competence diagnostics to accompany training. The vocational fields of
action and learning are suitable for the temporal structuring of an extended
examination (cf. BMWi, 2005, p. 46). At the same time, the great advantage of
such competence diagnostics accompanying training is an extended examination
with a strong feedback structure. And this is particularly important in the
development of vocational competence (cf. BMWi, 2005, pp. 9 and 46; Hattie &
Yates, 2015, 61 ff.). This
would be the first time that continuous training guidance based on the recording of
vocational competence development would be regulated in a binding manner. Such a
procedure—involving the examination and testing practice of vocational schools—
would not only strengthen the quality of training but also significantly reduce the
burden on the time-consuming final examination process. This would make it easier
to justify a one-off final examination to measure the level and profile of competence
on the basis of characteristic and representative test or examination questions.

The objectivity, reliability and at the same time the validity of the content of the
examination can be realised on a high level based on the COMET competence and
measurement model, with the prerequisite of ensuring that the examiners are
instructed in the evaluation of examination results.
Comparability of the examination would be ensured by complex and authentic
(content-valid) examination tasks developed in accordance with the COMET
competence model, and by a standardised rating procedure meeting high quality
criteria for the examination procedure.

Examination Variant of the Examination Structure (Taking into Account
the COMET Examination Procedure)

The testing concept is largely based on Recommendation 158 of 12.12.2013 of the
BIBB Board Committee (BAnz AT 13.1.2014 S1; hereinafter referred to as 'E 158').

7.2.2 Examination Format for the Extended Final Examination (GAP)

The E 158 is used for this purpose: ‘Part 1 of the CAP can therefore only deal with
competences which are already part of the professional competence to be consid-
ered in the final examination’ (cf. E 158, p. 11). This recommendation suggests that
the same examination format should be used for Parts 1 and 2 of the CAP.

The Operational Order

'The operational order is proposed by the company' (E 158, p. 20). It is based on a
customer-oriented situational description. Specifications in the sense of a
requirement specification, as well as question-guided subtasks, are to be avoided, since these
already represent essential elements of the solution. With the ‘translation’ of the
operational order formulated from the customer’s perspective into a specification
(requirement specification), a part of the solution would already be given. A clear
distinction must therefore be made between the work order specified (and to be
applied for) by the company and the processing of the order (planning, execution,
verification of the result) by the candidate in the examination procedure.
An appropriate evaluation of the work result as well as of the work and procedure
can only be carried out if the candidate has the possibility (1) to document the order
planning process as well as its execution and procedure and (2) to justify it
comprehensively and in detail, weighing alternatives against each other.
Table 7.3 Extended final examination (BBiG): Verification of professional competence (according
to COMET quality standards) (Example: electronics technician)

E 158 talks about the ‘execution of a complex task typical for a profession’: ‘The
work/procedure and the result of the work are evaluated’ (p. 20). Regarding
the operational order, it says: ‘The work and procedure are evaluated. The results
of the work can also be included in the evaluation’. However, this is only possible if
the candidate not only documents, but also justifies, the results of their work and
procedure. In the test model, Part 1 and Part 2 of the CAP therefore each comprise
an operational order (or alternatively a 'practical task') with the following
examination parts (Table 7.3).

I • Conceptual-planning solution/processing of the order including its justification
(approx. 3–4 h under supervision).
• Subsequent rating of the conceptual-planning solution by the examiners.
II • Practical implementation of the plan and quality control (approx. 18–30 h).
• Preparation of the documentation and (if necessary) justification of the deviation
from the plan (approx. 8 h).
III • Expert discussion (including presentation) (max. 0.5 h).
• Final team rating and determination of the examination result.

Assessment of the Examination Result Part A (Operational Order/Practical Task)

The planning of the operational order (OO)/practical task (PT) as well as the
justification of the proposed solution and the planned procedure are evaluated by
the examination board on the basis of the standardised COMET rating scale
(Appendix B) in the form of a team rating.

Fig. 7.10 Evaluation of the planning and justification of an operational order/practical task

Expert Discussion

The expert discussion takes place on the basis of the preliminary assessment result
and the documentation of the OO/PT. The examiners are therefore able to check
whether the candidate ‘knows and can do more’ than the preliminary evaluation
result shows. The competence profile determined in the rating procedure and the
documentation of the OO/PT form the basis for the expert discussion (Fig. 7.10).
The preliminary evaluation result shows on which points the expert discussion
should concentrate. The weaknesses of the solution and the procedure identified in
the rating procedure are questioned again in the expert discussion. The subject of the
expert discussion is also the deviations between planning and execution of the OO or
PT. After the expert discussion, the examiners supplement their assessment with
regard to the criteria of the implementation of the plan on the basis of the
corresponding positions on the rating scale. In addition, they can correct ratings
from the perspective of the skills shown.

Practical Task

Variant 2 of the practical examination—a 'practical task'—can be retained in
modified form for companies or candidates for whom the concept of operational
orders cannot be implemented. For these cases, documents (e.g. drawings, construc-
tion kits) are made available, making it possible to develop ‘practical tasks’ in the
form of situation descriptions (from a customer perspective), taking into account the
criteria resulting from the COMET competence model. The rating procedure is then
identical to that for the operational orders.

Holistic Tasks

E 158 states: 'The selected examination instrument(s) for an examination segment
must enable the candidate to demonstrate performance that meets the requirements by
means of coherent tasks’ (p. 16). With the help of the holistic tasks, competence can be
captured/checked at a high level of validity and reliability. The COMET test procedure
can be applied here without restriction. Holistic examination tasks are only possible if
they are not broken down into subtasks. These tasks are solved through concepts and
plans, and the proposed solutions are—if possible—to be explained comprehensively
and in a differentiated manner. The COMET rating procedure enables an objective,
valid and reliable evaluation of the examination performance.

Evaluation of the Task Solutions by the Examiners (Dual or Team Rating)

The OO and, if applicable, the PT are evaluated using evaluation sheet A, and
the complex tasks are evaluated using evaluation sheet B (Appendix B). The

Table 7.4 Explanation of the rating scale

0 = Unmet: If no or no valid information is provided for an item relevant to the
examination task.
1 = Rather not met: If the information on a criterion is only very general and not
situation-specific and practice-related; if this information is based on factual
knowledge but has not really been understood.
2 = Partly met: If it is evident that the candidate/test participant can name and
justify the specific aspects of the solution without, however, weighing them against
each other: e.g. if a technologically high-quality solution is proposed without
paying attention to both utility value and costs.
3 = Fully met: If a solution aspect is not only well justified from a technical point
of view, but is weighed up against alternative possible solutions and if the candidate
weighs between competing solution criteria according to the situation: e.g.
environmental compatibility versus utility value.

non-applicable evaluation criteria are deleted for each task or OO/PT. The
examiners evaluate each of the remaining criteria according to the gradations shown
in Table 7.4.
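Applied to the evaluation sheets, a rater assigns each applicable item one of these four values, and non-applicable items are skipped. The following sketch illustrates this; the grouping of items by sub-competence and the simple averaging shown here are assumptions for illustration, not the official COMET aggregation.

```python
# Illustrative sketch of applying the 0-3 rating scale to the items of
# one sub-competence. Non-applicable items are marked None and excluded,
# as described for the evaluation sheets. Averaging is an assumption.

SCALE = {0: "unmet", 1: "rather not met", 2: "partly met", 3: "fully met"}

def sub_competence_score(item_ratings):
    """Average the applicable item ratings (0-3) of one sub-competence."""
    applicable = [r for r in item_ratings if r is not None]
    if not applicable:
        raise ValueError("no applicable items rated")
    for r in applicable:
        if r not in SCALE:
            raise ValueError(f"rating {r} outside the 0-3 scale")
    return sum(applicable) / len(applicable)

# Five items of e.g. 'functionality'; the third item is non-applicable.
ratings = [3, 2, None, 2, 3]
print(sub_competence_score(ratings))  # 2.5
```
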

7.2.3 Procedure for the 'Operational Order' (OO) Examination

Application Process

The training company formulates an operational order according to the following
criteria:
• The OO is assigned to a vocational field of action or an examination area that
comprises several vocational fields of action.
• The OO must be at the level of employability (professional competence pursuant
to the training regulations).
• The description of the OO contains a description of the situation for which a
professional solution has to be developed (plan and justify) as well as
implemented and checked (control).
• When describing the situation, care must be taken to ensure that the criteria of the
complete task solution (Table 7.5) are applied.
The training companies and instructors are familiar with evaluation sheets A and B
(Annex 2). This is sensible in itself, because this evaluation concept represents an
important didactic instrument for vocational training: the evaluation sheets are
suitable for the self-evaluation of competence development in a form adapted to the
projects and learning tasks.

Table 7.5 Brief description of the criteria for a complete task solution (industrial-technical
professions)

Functionality: The criterion refers to instrumental professional competence and
therefore to context-free expert knowledge. The ability to solve a task functionally
is fundamental for all other demands placed on the solution of professional tasks.

Clarity/Presentation: The result of professional tasks is anticipated in the planning
and preparation process and documented and presented in such a way that the
client (supervisor, customer) can communicate and evaluate the proposed
solutions. It is therefore a basic form of vocational work and learning.

Sustainability/Utility value orientation: Professional work processes and orders
always refer to 'customers' whose interest is a high utility value as well as the
sustainability of the task solution. In work processes with a high division of labour,
the utility value and sustainability aspects of solving professional tasks often
evaporate in the minds of employees. Vocational education and training counteract
this with the guiding principle of sustainable problem solving.

Economy/Effectiveness: In principle, professional work is subject to the aspect of
economic efficiency. The context-related consideration of economic aspects in the
solution of professional tasks distinguishes the competent action of experts.

Business and work process orientation: It comprises solution aspects that refer to
the upstream and downstream work areas in the operational hierarchy (the
hierarchical aspect of the business process) and to the upstream and downstream
work areas in the process chain (the horizontal aspect of the business process).

Social acceptability: The criterion primarily concerns the aspect of humane work
design and organisation, health protection and, where appropriate, the social
aspects of occupational work which extend beyond the occupational work context.

Environmental compatibility: A relevant criterion for almost all work processes,
which is not about general environmental awareness, but about the occupational
and subject-specific environmental requirements for occupational work processes
and their results.

Creativity: An indicator that plays a major role in solving professional problems.
This also reflects the very different scope for design that the solution of
professional tasks offers depending on the situation.

The assessment criteria (rating items) that are not relevant from the company’s
perspective are marked when applying for an OO.
• The solution space to be specified by the client must define the cornerstones for
possible solutions. It is not an ideal solution proposal: solution variants must be
possible which, in line with the situation description, have as high a utility value
as possible.
• The situation/order description also includes an overview of the technical and
business management options available in operation that are necessary for the
performance of the OO. This includes information on ordering and procurement
procedures.
• The applicant estimates the duration of the OO.

Approval of the Application

The examination board approves the application, taking into account
• The job description,
• The criterion of professional competence (complexity of the order and the
concept of a complete (holistic) task solution),
• The solution space, correcting it if necessary with reference to the competence
model,
• The duration of the project.

Procedure for the OO Examination (Fig. 7.11)

Part 1: The OO is solved at the conceptual-planning level and the solution is
justified in detail. This part of the examination lasts 3–4 h and takes place under
supervision.
The candidate can use the Internet and the relevant work documents provided by
the company.
Part 2: On the basis of the justified solution variant and the procedure, the candidate
prepares the implementation of the plan (orders and other preparatory measures).
The available time varies, as it very much depends on the type of OO. This is
followed by the implementation of the plan into practice as well as quality control
and documentation of the solution. Simultaneously, the team rating of the reasoned
solution proposed by the candidate is carried out in accordance with the COMET
evaluation procedure.
Part 3: Expert discussion.
The expert discussion is initiated by a short presentation of the results of the work
with the main focuses: implementation of and, if applicable, any deviations from
the plan, assessment of the quality of the work result and the procedure.
The subsequent expert discussion includes questions and discussions:

Fig. 7.11 Examination procedure for the operational order (OO)



• On the basis of the rating result,
• Deviations from the plan,
• Alternative solutions.
Finally, the examination board supplements and corrects (if necessary) its ratings
and determines the examination result for the OO: the level of competence achieved,
the overall score or the examination mark for this part of the examination.

7.2.4 The Examination Result

Each candidate receives a printout of their competence profile, a brief description
of their level of competence and the total score (TS) of their task solutions
(Fig. 7.12). The homogeneity of the competence profile is given as the coefficient
of variation V (Table 7.6).

Total Score (TS)

The total score results from the addition of the individual values for the eight
sub-competences. It is a rough indication of the level of competence achieved.
Candidates can compare their TS with that of their examination group/class to see
where their performance is compared to that of the other trainees. An accurate TS
takes into account the degree of homogeneity of the competence profile (Fig. 7.13).
This example shows that, taking into account the competence profiles, the same
raw TS results in two different (corrected) TS(k). This means that the level of
competence of the two candidates is different. Therefore, two candidates with the
same raw TS of 42 can reach two different competence levels.
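The quantities reported here can be sketched in a few lines of Python. The profile values and helper names are illustrative assumptions; the TS(k) correction itself is not reproduced because its formula is not given here, and the boundary handling between the printed V bands of Table 7.6 is likewise an assumption:

```python
# Sketch of the TS and homogeneity reporting: TS is the sum of the eight
# sub-competence scores; V is the coefficient of variation, classified
# verbally following Table 7.6. Profile values are hypothetical, and the
# band boundaries (e.g. at exactly 0.15) are an assumption.
from statistics import mean, pstdev

def total_score(sub_scores):
    """TS: sum of the eight sub-competence scores."""
    assert len(sub_scores) == 8
    return sum(sub_scores)

def coefficient_of_variation(sub_scores):
    """V: standard deviation divided by the mean of the profile."""
    return pstdev(sub_scores) / mean(sub_scores)

def homogeneity(v):
    """Verbal classification of V following Table 7.6."""
    if v < 0.15:
        return "very homogeneous"
    if v <= 0.25:
        return "homogeneous"
    if v <= 0.35:
        return "less homogeneous"
    if v <= 0.5:
        return "inhomogeneous"
    return "very inhomogeneous"

profile = [6, 5, 6, 5, 5, 6, 4, 5]      # hypothetical sub-competence scores
ts = total_score(profile)               # raw TS = 42
v = coefficient_of_variation(profile)
print(ts, round(v, 2), homogeneity(v))
```

For the profile above this prints `42 0.13 very homogeneous`: the same raw TS of 42 would be corrected differently for a less homogeneous profile.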

Conclusion

The application of the COMET examination concept for the implementation of examinations in vocational education and training offers several advantages.
1. The COMET competence and measurement model provides an interdisciplinary
procedure for the selection and formulation of holistic examination tasks and
operational orders. The content dimension of the competency model must be
concretised in each case by the vocational fields of action that are relevant for the
contextual description of employability.
2. On this basis, examinations following the ground-breaking examination concept of the operational order also become comparable on a supra-regional and interdisciplinary basis.
3. The introduction of a scientifically based examination format such as this would
therefore considerably simplify the communication between all those involved in
vocational education and training.
7.2 The Measurement of Professional Competence 217

Fig. 7.12 Documentation of the examination performance

Table 7.6 Degrees of homogeneity of competence profiles measured as coefficient of variation V

V < 0.15       Very homogeneous
V = 0.16–0.25  Homogeneous
V = 0.26–0.35  Less homogeneous
V = 0.36–0.5   Inhomogeneous
V > 0.5        Very inhomogeneous

Fig. 7.13 Correction of the raw TS values (comparison of the competence characteristics of two
commercial occupations)

4. This would also solve the integrated review of in-company and school-based
training, as the COMET competence model represents vocational training as a
whole. At the same time, the specific contributions of the learning locations to
achieving and verifying employability can be identified.
5. The examination results based on the COMET competence model also reveal the
strengths and weaknesses of the training performance of the learning locations.
The examination results therefore provide a good basis for educational guidance
and for quality assurance and quality development in vocational education and
training.
6. The evaluation of examination performance based on the COMET competence and measurement model leads to the development of common evaluation standards. This examination practice should prove to be a form of informal rater training for examiners and should be supported by introducing the examiners to the new examination practice.
7. A high degree of interrater reliability (consistency of examiners in the assessment of examination performance) can be achieved by dual or team rating, a procedure tried and tested in COMET projects.
The significance of the examination results under this examination concept is considerably higher than that of conventional examinations. Not only a score is reported, but also
• the level of competence achieved and
• the competence profile.
In addition, this examination form satisfies the established quality criteria of
competence diagnostics.

7.3 Interrelationship Analyses Between Examinations and Competence Diagnostics for Automotive Mechatronics Technicians

7.3.1 Comparison of the Examination and (COMET) Test Results

In the context of the feasibility study ‘COMET testing’ (implications of the COMET test procedure for achieving a higher quality (validity, objectivity and reliability) of final examinations according to the Vocational Training Act), which was carried out in cooperation with the COMET project (NRW) and the Chamber of Industry and Commerce (NRW), it made sense to carry out a case study in which the examination results of 96 candidates (automotive mechatronics technicians) were compared with their results in the COMET test.
A correlation-statistical context analysis was performed, which made it possible
to map the relationship between two variables.
To investigate the interrelationships, differentiated scores are available both from
the final examinations and from the COMET test.
The following final examination values were used for a differentiated analysis:
• Mean value from the practical examination part
• Mean value from the written part of the examination
• The total examination score.
The COMET scores can be divided into the following areas according to the competence model:
• score at the competence level of functional competence (FC),
• score at the competence level of procedural competence (PC),
• score at the competence level of (holistic) shaping competence (DC) and
• the total score (TS).

7.3.2 Results of the Statistical Influence Analysis

Correlations can be used to map relationships between two characteristics, with the correlation coefficient ‘r’ quantifying the strength of the relationship. Correlations from r > 0.2 to r < 0.4 are considered weak. Mean correlations are 0.4 < r < 0.6. A strong correlation is indicated from r > 0.6 (cf. Brosius, 2004).
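A minimal sketch of this kind of correlation analysis, assuming hypothetical score lists and helper names (the strength bands follow the thresholds quoted above):

```python
# Sketch: Pearson correlation between two score lists, classified with the
# strength bands quoted in the text (cf. Brosius, 2004). The example
# scores are illustrative, not study data.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def strength(r):
    """Verbal strength bands for |r| as used in the text."""
    r = abs(r)
    if r < 0.2:
        return "negligible"
    if r < 0.4:
        return "weak"
    if r < 0.6:
        return "mean"
    return "strong"

exam = [68, 74, 55, 81, 60, 77]    # hypothetical chamber examination scores
comet = [30, 35, 22, 38, 29, 33]   # hypothetical COMET total scores
r = pearson_r(exam, comet)
print(round(r, 2), strength(r))
```

In a real analysis, a significance level (p value) would be computed alongside r, as in the results reported below.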
For the data of the automotive mechatronics technicians, the two test instruments
were first examined separately. It is evident that the elements of the chamber
examination—practical part, written part and overall assessment—are strongly
interrelated and therefore coherent. The areas of the COMET test are also closely
related and therefore measure the same construct. Each test forms a coherent unit in

its own right. A comparison of the two tests revealed only a weak correlation of r = 0.25 (p < 0.05).
The differentiated context analyses of individual elements of the two tests yielded the following results:1
The score from the practical examination correlates
• strongly with the score from the written examination (r = 0.63; p < 0.01),
• not with the COMET total score (TS) (r = 0.17; not significant),
• weakly with functional competence (FC: r = 0.25; p < 0.05; PC: r = 0.2),
• not with the competence level of procedural competence (r = 0.17; not significant) and
• not with the competence level of shaping competence (DC): r = 0.06 (not significant).
Although the result from the written part of the final examination correlates with the COMET values only weakly, the correlations are nevertheless significant:
• TS (r = 0.29; p < 0.01),
• functional competence (FC) (r = 0.31; p < 0.01),
• procedural competence (PC) (r = 0.28; p < 0.01).
The correlation between the score of the written examination and the score for the COMET shaping competence (DC) is very weak and statistically insignificant; the calculated correlation coefficient r may represent a random relationship.
The overall result of the final examination correlates at a low level with the COMET values:
• TS (r = 0.25; p < 0.05),
• FC (r = 0.29; p < 0.01) and
• PC (r = 0.24; p < 0.05).
There is no demonstrable link between the overall result of the final examination and the (holistic) shaping competence identified in the COMET competence model (r = 0.12, not significant).

Interpretation of the Results

Overall Result (Final Examination): Total Score (COMET)

The weak positive correlation between the overall score of the final examination and
the COMET test result indicates that higher scores in the examination tend to be
accompanied by higher scores in the COMET test (see Fig. 7.14).

1 All calculations without extreme values, i.e. written part (chamber) > 0, practical part (chamber) > 0, overall mark (chamber) > 5, functional competence > 0, procedural competence > 0, shaping competence > 0 and TS > 5.

Fig. 7.14 Relationship between examination result (chamber examination) and total score COMET, without extreme values (TS > 5 and overall score > 5; r = 0.25, p < 0.05; R² = 0.09)

The final examination of the chamber as a whole can therefore be partially represented by the COMET examination. However, only 9% of the variation in one test’s values can be explained by the other.

Practical Part of the Examination

It can first be assumed here that this part of the examination hardly correlates with the values of competence diagnostics, as COMET is limited to measuring conceptual-planning competence. The correlation values with the TS (r = 0.17; not significant), with the competence level of functional competence (FC) (r = 0.21; p < 0.05) and with procedural competence (PC) (r = 0.17; not significant) show that there is almost no correlation. Due to the divergence in content between the two forms of examination (practical chamber examination and written COMET examination), an interpretation of the results only partly adds value. In the practical examinations of the chambers, the trainees are asked to implement the theoretical problem solution, while the COMET test asks them to formulate the solution in writing. This is where translation errors can occur. The purely cognitive solution of a problem does not necessarily mean that the actions are performed according to the calculated procedure.

Fig. 7.15 Relationship between the written examination (chamber examination) and functional competence (COMET), r = 0.31, p < 0.01, R² = 0.1 (without extreme values)

Written Examination

As expected, the most pronounced correlations with the COMET test are found for this part of the examination. This applies predominantly to functional competence (FC), with r = 0.31 (p < 0.01). A high score in the written part of the final examination therefore goes hand in hand with a high score in the functional competence area of the COMET test.2 This weak but still significant correlation is shown in Fig. 7.15.
The correlation with procedural competence (PC) is somewhat weaker, with r = 0.28 (Fig. 7.16).
In contrast, with r = 0.17 (not significant), there is no correlation with shaping competence (DC) (Fig. 7.17).

2 Conversely, an upstream COMET test at the level of functional competence would be good preparation for achieving high scores in the written part of the final examination.

Fig. 7.16 Relationship between the written examination (chamber examination) and procedural competence (COMET), r = 0.28, p < 0.01, R² = 0.08 (without extreme values)

Fig. 7.17 Relationship between the written examination (chamber examination) and shaping competence (COMET), r = 0.17, not significant, R² = 0.02

7.3.3 Conclusion

The higher the score in the written examination, the higher the score for functional and procedural competence, and vice versa. Higher scores in the written examination, however, do not indicate holistic shaping competence; this is not covered by the examination.
This context analysis therefore confirms the findings of the empirical surveys cited: the objectives of process-oriented and competence-oriented vocational education and training anchored in the training regulations are not covered in examination practice. With the COMET test format, this deficit can be eliminated for the written part of the examinations. Applying the COMET examination procedure to the entire examination requires a supplemented competence model.
The application of the COMET examination concept for implementing examinations in vocational education and training has a number of advantages.
1. The COMET competence and measurement model is an interdisciplinary procedure for selecting and formulating holistic examination tasks and operational orders. The content dimension of the competency model must be concretised in each case by the vocational fields of action that are relevant for the description of the content of employability.
2. On this basis, examinations following the ground-breaking examination concept of the operational order would also be comparable on a supra-regional and cross-occupational basis.
3. This would introduce a scientifically based examination strategy that would greatly facilitate the understanding between all those involved in VET.
4. The integrated review of in-company and school-based training would thus be
solved, since the COMET competence model represents vocational training as a
whole. At the same time, the specific contributions of the learning locations to
achieving and verifying employability can be identified.
5. The examination results based on the COMET competence model also reveal the
strengths and weaknesses of the training performance of the learning locations.
The examination results thus provide a good basis for training guidance and
quality assurance in training.
6. The evaluation of examination performance on the basis of the COMET competence and measurement model leads to the development of common evaluation standards. This examination practice should prove to be a form of informal rater training for examiners. This should be supported by introducing the examiners to the new examination practice.
7. A high degree of interrater reliability (consistency of examiners in the assessment of examination performance) can be achieved by dual rating (two examiners), a procedure tried and tested in COMET projects.
8. The informative value of the examination results according to this examination concept is significantly higher than that of conventional examinations. It identifies not only the score but also the level of competence achieved and the competence profile.
In addition, this examination form satisfies the established quality criteria of
competence diagnostics.

7.4 Measuring the Test Motivation

The influence of test motivation on test behaviour and test results is the subject of differing and sometimes contradictory research findings. As part of the first PISA test (2000), an additional experimental study was conducted in Germany in order to gain insights into the test motivation of different groups of pupils (distinguished by school type) and into the influence of various external incentives on test motivation. The overall result of the experiment is that the experimental groups did not differ in their willingness to make an effort (Baumert et al.: PISA 2000, 60). The results of the study also suggest that external incentives can be dispensed with, as their effects are negligible. Differences in test motivation between ‘Hauptschule’ pupils (general school in Germany offering lower secondary education) and ‘Gymnasium’ pupils (high school in Germany offering higher secondary education) could not be established in this study. Regardless of the type of school, the effort was relatively high throughout. The various incentives had no significant influence on the test results (ibid., p. 27 ff.).
In the first COMET project, a motivation test was therefore not included. How-
ever, test practice then suggested that the test motivation should be considered as a
context variable in the second test as part of the longitudinal study.

7.4.1 Preliminary Study: The Time Scope of the Test as an Influence on Test Motivation

Based on the experience of examination practice in dual vocational training and in final examinations at technical colleges, a longitudinal study with a cross-over design was selected for the COMET Electronics Engineer project (Bremen, Hesse 2007–2009) (Fig. 7.18). Under this design, each test participant had to solve two complex test items at each of the two test times.
Fig. 7.18 Cross-over design for the use of test items in longitudinal section (COMET Vol. I, 144 f)

The test comprised a total of four complex test items (COMET Vol. I, 144 f.). Each test participant had to solve two test items, with a maximum of 120 min available for each. After the first test of the one-year longitudinal study, the observations of the teachers involved already indicated that the motivation to work on the test items played a greater role than had initially been assumed. It was therefore examined how the test time of two times 120 min was used by the test participants and what the proportion of test refusers was. This provided initial indications of the participants’ test motivation and of how to record it.
In a pre-test, it was first examined whether there was a systematic drop in
performance when processing the second complex test item. In the evaluation of
the test results, a distinction was made between trainees in their second and third year
of training and students from technical colleges.
The experiences of the first phase of the study already suggested that the motivation to complete the test items played a greater role than assumed among the various groups of trainees. On the one hand, reports by teachers indicated that test motivation among vocational school students varies. On the other hand, some of the trainees only partially used the test time of 2 × 120 min; some did not seriously complete the test items, and they form the group of test refusers.3
Based on these findings, the test motivation and test behaviour were recorded at
the second test time (March/April 2009), whereby the formulation of the questions is
based on PISA test practice (Kunter et al., 2002). In addition, the teachers conducting
the tests answered questions on test motivation in the classroom and on the working
atmosphere; the results can be used for comparison at class level. In addition, the
comparison of processing time and test result of the first and second test items allows
conclusions to be drawn about the course of motivation during the test.

Results of the Survey on Test Motivation (cf. COMET Vol. III, Sect. 7.7)

The general interest of trainees and students in the COMET test items varies greatly.
More than half of the electronics engineers found the first test item interesting to very
interesting (55%). This figure is even higher for electronics technicians specialising
in energy and building technology (61%) and for students of technical colleges
(60%).
Overall, all test groups worked on the first task in a concentrated (73%) and careful (65%) manner. On the other hand, it is noticeable that almost every second electronics technician for industrial engineering states that they were less motivated to work on the second task than on the first; this is stated by only every fifth electronics technician specialising in energy and building services engineering and by every fifth technical college student (Fig. 7.19).

3 A refuser is a participant who has achieved a total score of less than five points or who has completed both test items together in less than 60 min. In Hesse in 2009, 24 participants were identified as refusers according to this definition, of which ten were E-EG trainees (7%) and fourteen E-B trainees (6%).

Fig. 7.19 Frequency distribution according to test groups: ‘I am (a) less motivated, (b) similarly motivated, (c) more motivated to work on the second test item than on the first test item’

Comparison of the Results of the First and Second Test Item

In this context, by comparing the test results of the two test phases (2 × 120 min for
two test items), it is possible to examine whether and for which test groups there are
significant differences in the test results between the first and second test item. If the
test result of a test group is worse for the second test item, this can be interpreted as
an indication of decreasing test motivation.
This effect is not present for all test groups. In the case of the apprentice
electronics technician specialising in energy and building services engineering,
there is no difference between the results of the first and the second test item. This
may be because this group has a rather low overall test level. The electronic
technician trainees for industrial engineering achieve a significantly better result in
the first task than in the second: 15% achieve the highest level of competence in the
first test item and only 6% in the second test item (cf. Fig. 7.20). This also
corresponds to the lower motivation of this test group for the second test item
described above. In the case of the first task, the risk group is only 10%; in the
case of the second task, this figure rises to 23% (cf. Fig. 7.21). Here, too, a test for

Fig. 7.20 Competence level distribution of the group of electronic technicians for industrial
engineering (Hesse), comparison of the results based on the first and second test item 2009

Fig. 7.21 Competence level distribution of the group of technical college students (Hesse), comparison of results based on the first and second test item 2009

mean value differences4 shows that the average total score for the first task is
significantly higher than for the second task.
This points to a considerable loss of motivation, especially among weaker students. Higher-performing students improve from the first to the second test item; the majority, however, do slightly worse than in the first task. Figure 7.22 illustrates this effect: each cross in the diagram represents a trainee, the axes show

4 t-test for dependent samples.

Fig. 7.22 Scatter diagram to compare the results for the first and second task (Hesse, electronics technician for industrial engineering, 2009, n = 297)

the total scores achieved for each of the two tasks completed, and the horizontal and vertical lines in the diagram show the total score (26.5) for both tasks. The diagonal line divides the graphic into two parts. The top part (A) shows the trainees who did better in the second task than in the first, and the bottom part (B) shows those who did worse in the second. Part B contains significantly more participants (63%).
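The diagonal comparison described for Fig. 7.22 amounts to counting score pairs below the identity line; a minimal sketch with hypothetical score pairs:

```python
# Sketch of the Fig. 7.22 comparison: for each trainee, compare the total
# scores of the first and second test item and compute the share below the
# diagonal (worse on the second item). The score pairs are hypothetical.
pairs = [(30, 25), (20, 22), (28, 21), (35, 30), (15, 18)]  # (task 1, task 2)

worse_on_second = sum(1 for t1, t2 in pairs if t2 < t1)
share = worse_on_second / len(pairs)
print(f"{share:.0%} of trainees below the diagonal")  # here: 60%
```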

Comparison of the Processing Time for the First and Second Test Item

The recording of the processing time also allows an assessment of the extent to
which the motivation decreases in the course of the test. For the first test item, the test
participants worked an average of 100 min and for the second test item only 83 min.
This can be interpreted as fatigue or decreasing motivation.
However, a shorter processing time cannot be attributed exclusively to a lack of motivation on the part of the test participants from the start of the test. It must also be considered that an excessively demanding task can lead to a participant ending the test early. This, however, is contradicted by the fact that there is only a small correlation between the test result and the processing time.
This pre-test on the relationship between test motivation, processing time and test results led to the decision to reduce the test scope to one test item per participant. Only in subsequent projects was test motivation included more comprehensively in the context analysis as a determinant of competence development.

7.4.2 Explorative Factor Analysis of the Relationship Between Test Motivation and Test Performance

Capturing the Test Motivation

When recording test motivation, a distinction is made between primary and secondary motivational aspects.
The primary motivational aspects are
• the assessment of the occupational relevance of the test items. It is assumed that for test participants with a developed professional identity, the occupational relevance of the test items has a motivating effect on their processing.
• the benefit of the test items. The evaluation of the benefit of the test items results from the test participants’ assessment that participation in the test has a positive effect on training.
• the interest in task processing, which represents another primary motivational aspect. This aspect is based, on the one hand, on the two other primary motivational aspects and, on the other hand, on interest in the content of the tasks.
The secondary motivational aspects are
• commitment,
• concentration,
• diligence and
• task-solving effort
(cf. the test motivational model in Fig. 7.23).
The primary motivational aspects represent the evaluation of the test items as relevant for vocational education and training, without this already being associated with a willingness to make an effort in processing them. If, for example, a test is conducted just before a final examination, this may lead to a lack of interest in the test, as it is perceived as a disruption of exam preparation. The test motivation is then impaired without affecting the evaluation of the occupational relevance of the test items or their basic benefit for vocational training. Interest in the test items or in the test thus results from their occupational relevance and, at the same time, from the benefit of the test for the training and, where applicable, for examination preparation.
The secondary motivational aspects result from the primary motivational dimension. The four secondary motivational aspects represent different facets of the willingness to make an effort.
The recording of the processing time for solving the test items can be regarded as a dimension of test motivation, as shown by the study cited above. At the same time, it is immediately evident that the processing time is also an indicator of the competence level. The test results show that higher-performing test participants use the available test time (120 min) to a greater extent than lower-performing participants. The processing time is therefore an indicator of both the competence level and the test motivation.

Questionnaire for Recording Test Motivation

Dear trainees,

We would like to hear from you how you assess the test task you have worked on. For this
purpose we would like to ask you for some information. Then please place this sheet in
the envelope provided for your task solution.

Thank you very much for your cooperation!

How long did you work on the test task?

less than ½ hour
½–1 hour
1–1½ hours
1½–2 hours

Response scale for the following statements: fully disagree – rather disagree – undecided – rather agree – fully agree

The processing of the test task was very interesting.
These types of test items are very useful.
The test task has a lot to do with my job.
I worked on the test task with a lot of concentration.
I worked on the test task very diligently.
I put a lot of effort into processing the test task.

For things that are very important to you personally, you make a special effort and give your best (e.g. sports, hobbies, ...).
In comparison, how much effort did you put into the test task?
(Please mark with a cross!)

1 2 3 4 5 6 7 8 9 10
minimum effort – maximum effort

Fig. 7.23 Item structure and test motivation



The Test Motivational Model: Data Structure of Motivational Variables in the COMET Test Procedure

In the previous COMET projects, test motivation was analysed on the basis of the individual items. An exploratory factor analysis was carried out to check whether connections between the observable motivational aspects can be explained by superordinate dimensions. This makes it possible to uncover non-observable (latent) dimensions that are superordinate to the observable items.

Sample

The data were collected during a COMET test of second- and third-year nursing students from a total of six locations in Switzerland (Aarau, Basel, Bern, Lucerne, Solothurn, Zurich/Winterthur). A total of N = 477 persons took part in the survey, 87% of whom were female (n = 417).
The items used in the motivation questionnaire were therefore subjected to an explorative factor analysis, which is intended to reveal the underlying structure of the items. Due to the correlative character of the items, it is assumed that possible factors also correlate with each other. Accordingly, a direct (oblique) rotation is used for the factor analysis, which allows for a possible correlation between the factors.
As a result, two factors or motivational dimensions can be extracted. The factor loads are listed in the following table (Table 7.8).
Factor 1 consists of the items commitment, diligence, concentration and effort. The statements on interest, meaningfulness and occupational relevance load on factor 2. With regard to the content of the items (cf. Fig. 7.24), factor 2 can be interpreted as meaningfulness (primary motivation). This factor describes the benefit for the professional future that the test participants identify in the test items and links an interest

Table 7.7 Test instruments for the first and second test time of the first COMET project (COMET Vol. II, 41)

Testing instrument                                                              Use from test time
Open test items                                                                 t1 (2008)
Context questionnaire                                                           t1 (2008)
Questionnaire on test motivation for trainees                                   t2 (2009)
Teacher questionnaire on the test motivation of trainees                        t2 (2009)
Rater questionnaire on the weighting of competence criteria                     t2 (2009)
Test of basic cognitive ability (subtest ‘figure analogies’ of the cognitive
ability test (CAT))                                                             t2 (2009)

Table 7.8 Results of the factor loads of the motivational items on the extracted factors (data: nursing staff Switzerland 2014, N = 477)

Item                    M     SD    rit   Factor 1  Factor 2
Commitment              2.48  1.00  0.78  0.89      0.01
Diligence               2.44  0.95  0.74  0.87      0.01
Concentration           2.58  1.01  0.71  0.82      0.03
Effort                  4.86  2.22  0.67  0.74      0.04
Interest                2.44  1.04  0.65  0.03      0.83
Meaningfulness          2.37  1.02  0.60  0.02      0.77
Occupational reference  2.99  1.11  0.36  0.02      0.49

Comments: Factor loads > 0.30 are marked bold; Bartlett test: χ² = 1709.19 (df = 21), p < 0.001; Kaiser-Meyer-Olkin (KMO) measure = 0.86. N = sample size; M = mean value; SD = standard deviation; rit = selectivity.

Fig. 7.24 Results of the explorative factor analysis (data: nursing staff Switzerland 2014, N = 477). r = correlation coefficient; a = factor loading

with them. Factor 1 describes the behaviour when processing the test items and is referred to as investment (secondary motivation). The term investment refers to the motivational resources deployed during processing.
The two factors have a medium positive correlation (r = 0.66) and together explain 62% of the overall variance.
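The item parameters reported in Table 7.8 (M, SD and the selectivity rit) can be sketched as follows, assuming rit is computed as the corrected item-total correlation; the response data and helper names are hypothetical:

```python
# Sketch: descriptive item parameters as reported in Table 7.8 (M, SD and
# rit). rit is computed here as the corrected item-total correlation,
# i.e. each item correlated with the sum of the remaining items; this
# computation is an assumption. Responses are hypothetical 5-point ratings.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def item_parameters(items):
    """items: dict of item name -> responses; returns name -> (M, SD, rit)."""
    out = {}
    persons = range(len(next(iter(items.values()))))
    for name, vals in items.items():
        m = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
        rest = [sum(items[o][p] for o in items if o != name) for p in persons]
        out[name] = (m, sd, pearson(vals, rest))
    return out

responses = {
    "interest":  [1, 2, 3, 4, 5],
    "diligence": [2, 2, 3, 5, 5],
    "effort":    [1, 3, 3, 4, 4],
}
for name, (m, sd, rit) in item_parameters(responses).items():
    print(f"{name:10s} M={m:.2f} SD={sd:.2f} rit={rit:.2f}")
```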

The Connection between Test Motivation and Test Performance

The studies cited above already indicated a connection between test motivation and test performance. With regard to the content of the two factors, it is reasonable to assume that meaningfulness functions as the primary motivational dimension and investment as the secondary one, as people who consider the test meaningful presumably also invest more in processing it. The mediator analysis described below was carried out in order to further investigate the relationship between the two extracted motivational dimensions and their relationship to test performance.

Question and Hypothesis

Based on the results of the factor analysis described above, it was examined whether the two motivational dimensions contribute to explaining the COMET test performance (measured as the total score (TS)).
The following hypothesis was formulated for the statistical investigation:
The motivation factors meaningfulness and investment causally explain the competence performance in the COMET test procedure. As a mediator, the investment factor mediates the connection between the meaningfulness factor and the performance measure (TS) in the COMET test procedure.

Method

In order to investigate the hypothesis statistically, a mediator analysis was carried out with meaningfulness as the independent variable, investment as the mediator variable and TS as the dependent variable. The analysis followed the four steps usual for a mediator analysis (Preacher & Hayes, 2004), in which several linear regression analyses were calculated. In step 1, the regression of TS on meaningfulness was examined. In the second step, the regression of investment on meaningfulness was examined, and in the third step, the regression of the total score on investment. In the last step, a multiple regression of the total score on meaningfulness and investment was examined.
The Sobel test was also carried out to check the statistical significance of any mediation effect found (Preacher & Hayes, 2004).

Outcomes

The results of the mediator analysis are shown in Fig. 7.25. The analysis showed that
with a corrected R2 = 0.043, the variable meaningfulness can explain 4.3% of the
variance in the total score. Even though this share of variance is small, the regression
model from Step 1 is significant with F = 22.13(1;475), p < 0.001. In this
model, the relationship between meaningfulness and total score is significant and
positive (b1 = 4.314.70; p < 0.001). The results of the regression from Step 2
show that with F = 210.48(1;477) and p < 0.001, the regression is significant. This
model explains 30.5% of the variance of investment. The relationship between
meaningfulness and investment is positive and significant (b2 = 0.60; p < 0.001).
The regression of the third analysis step showed a significant result with F = 32.53
(1;475) and p < 0.001. The model explains 6.4% of the variance of the total score.
The relationship between investment and total score is likewise significant and
positive (b3 = 4.79; p < 0.001).
Fig. 7.25 Mediator analysis to determine the relationship between primary and secondary moti-
vation and performance in the COMET test procedure (data: nursing staff Switzerland 2014;
N = 477). bi = regression coefficient; * p < 0.05; ** p < 0.01

The regression from Step 4 indicates that a significant share of 6.8% of the total
variance of the total score can be explained by the two variables meaningfulness and
investment (F = 18.28(2;474), p < 0.001). The relationship between investment and
total score remains positive and significant (b4 = 3.72; p < 0.001), while the
relationship between meaningfulness and total score is no longer significant
(b = 2.12; p = 0.051). The Durbin-Watson value of this model is 0.74, so a strong
positive first-order autocorrelation of the residuals must be assumed (Brosius,
2013).
The Sobel test was significant in this study (Sobel test statistic = 5.31, p < 0.001).

Discussion

The results of the mediator analysis indicate that investment completely mediates
the connection between meaningfulness and performance. The result of the Sobel
test confirms this. This means that people who see more benefit in the COMET
test procedure invest more and thereby achieve better performance. Meaningfulness
can therefore be confirmed as primary motivation and investment as secondary
motivation. This confirms the assumption made in earlier COMET studies
that more motivated people also achieve better test results. However, the analysis
clarifies that the willingness to invest in the test item depends on how strongly the
benefit of testing is perceived. In future, therefore, test participants should be made
aware of the benefit of the COMET test procedure before the test is carried out, in
order to avoid poor test performance resulting from a low perceived benefit.
The overall model's relatively low explained variance of 6.8% indicates that a
large part of the variance of the performance remains unexplained by the present
model. This suggests that, in addition to the motivational components, numerous
other factors influence the test performance. This is immediately obvious,
since achievement depends on knowledge, skills and abilities. A further indication
for this assumption is provided by the result of the Durbin-Watson test, which points
to a strong positive first-order autocorrelation of the residuals. This could be an
indication that important explanatory variables are missing from the calculated model.
A consequence of strong autocorrelation of residuals may be that the true standard
errors are underestimated, which in turn distorts the results of the significance
tests (Brosius, 2013). The available results must therefore be interpreted with
caution.
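The Durbin-Watson check referred to above can be reproduced directly from the residuals. The following is a generic sketch on simulated AR(1) residuals (not the study data), using the statistic's definition as the ratio of squared successive residual differences to squared residuals:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated AR(1) residuals with strong positive first-order
# autocorrelation (rho = 0.6); the values are invented for illustration.
n, rho = 477, 0.6
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.normal()

# Durbin-Watson statistic: values near 2 indicate no first-order
# autocorrelation, values well below 2 positive autocorrelation.
dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
print(f"Durbin-Watson = {dw:.2f}")
```

For an AR(1) process the statistic is approximately 2(1 − rho), so residuals of this kind yield a value well below 2, comparable to the 0.74 reported for the study model.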

7.4.3 Influence of Processing Time on Overall Performance

Given the relatively low share of performance variance explained by the two
motivational factors in the mediator analysis, the variable processing time could
increase the explained variance. Based on the hypothesis that a comprehensive,
reflected task solution with detailed justifications (corresponding to the COMET
task) inevitably requires a longer processing time, the processing time is still
recorded even after the COMET test was shortened by one task, so that this aspect
can be investigated further as an indicator of test motivation.
The processing time for the test, recorded on a four-point scale, is distributed as
follows in this sample:

How long did you work on the test item?

                       Number   Percentage
(1) Less than 1/2 h        16          3.4
(2) 1/2–1 h                83         17.4
(3) 1–1 1/2 h             127         26.6
(4) 1 1/2–2 h             236         49.5
Missing                    15          3.1
Total                     477          100

It is apparent that almost half of the trainees used the full processing time.
Correlating the processing time with the total score yielded a significant result of
r = 0.53. This means that there is a medium-strong positive correlation
between the processing time of the COMET test and the total score achieved.
The question of the interaction between the motivational aspects of meaningful-
ness and investment, the processing time and the total score will now be examined in
greater depth. The research interest here is particularly aimed at investigating the
influence of processing time on the performance of the trainees in addition to the two
motivational dimensions.
For this purpose, the factors investment and meaningfulness as well as the
processing time are examined in a hierarchical regression analysis with the total
score as the dependent variable. The result shows a significant additional explained
variance of 22% through the variable processing time.

Table 7.9 Summary of the hierarchical regression analysis for predicting the variable 'Total score'
(n = 462)

Variable            B      SE     β      R2      Corrected R2    p
Step 1
Meaningfulness      2.47   1.12   0.12   0.077   0.073           0.028
Investment          3.62   1.03   0.19                           0.001
Step 2
Meaningfulness      1.30   0.99   0.06   0.296   0.291           0.187
Investment          2.69   0.91   0.14                           0.003
Processing time    19.02   1.60   0.48                           <0.001

Note: dependent variable: total score; the variable processing time has been dichotomised:
(0) = 0–1 h, (1) = 1–2 h
The mean value of the performance (total score) in the present sample is M = 46.9
(SD = 16.5).
Table 7.9 shows the share of performance variance explained by the influencing
factors, based on the corrected R2 values. The factors meaningfulness and
investment explain only a very small share of the variance of the total score
(R2 = 0.07). Nevertheless, the regression model from Step 1 is statistically
significant with F(2, 459) = 19.22; p < 0.001. The inclusion of the
dummy-coded variable processing time in the model of Step 2 leads to a significant
increase in the explained variance of 22%. However, the meaningfulness factor loses
importance and no longer makes a significant contribution in this model.
The regression model in Step 2 is also statistically significant (F(2, 459) = 64.09; p < 0.001).
If the investment factor is increased by one unit while controlling for the factors
meaningfulness and processing time, the total score increases by 2.7 points.
Consequently, an increase in secondary motivational aspects (investment
factor) leads to higher performance values. For the variable processing time,
Model 2 can be read as follows: controlling for the factors meaningfulness and
investment, the trainees who invested 1–2 h in completing the test item achieved
an overall score 19 points higher than trainees who spent less than 1 h.
The duration of processing has thus proven to be an important influencing
factor on the performance value, in interaction with the investment factor.

Examples: Test Motivation of Nursing Students (Switzerland)

The test motivation of nursing students is above average and varies between
locations (training centres) in terms of the criterion of ‘commitment’ between 6.1
and 7.5 (on a scale of 1–10) (Fig. 7.26).

Fig. 7.26 Criterion ‘commitment’—test motivation of nursing students

This shows that the test participants' willingness to commit at the individual
training centres shifted over the course of a year.
If one combines the values of the primary and secondary motivational aspects
into a motivation profile, then four special features are highlighted (Fig. 7.27).
At many locations, students rate the occupational relevance of the test items as
high to very high.
It is therefore surprising that the benefit of the test is sometimes estimated to be
significantly lower. If one assumes that these students relate the benefit of the test
(the processing of the test items) to their educational situation, then this supports the
thesis that the students clearly distinguish between their education and the occupa-
tion to be learned. A clarification of this difference can be based in a first step on the
evaluation of the context data on training quality (see below).
The interest in the test seems to be fuelled by its professional relevance and
benefits: the values for the interest aspect of motivation therefore frequently lie
between the assessments of these two other primary motivation aspects.
The secondary motivational aspects evidently represent one and the same, largely
consistent motivational dimension (see below).
In the course of a year, there has been a significant increase in test motivation at
Training Centre A. In contrast, the test motivation at another training centre (L) has
dropped. Using the context data, the teachers were able to clarify this development in
a feedback workshop. Here, for example, conflicts within study programmes or
identification with the test procedure as the cause of the different test motivation
of the participants were discussed (Evaluation Workshop of Nursing Training
Switzerland, January 2015).

Fig. 7.27 Test motivation at the various locations, first main test

Representation of Factor Values in the form of a Matrix

Based on the factor analysis described above, the test motivation results can be
summarised in a factor matrix (Fig. 7.28). The factor values indicate whether the test
group achieved above- or below-average scores in relation to the sample examined.
Values >0 represent above-average and values <0 below-average motivation levels.
Figure 7.28 shows that the test participants of the training centres E and C 2013 have
an above-average test motivation—in relation to both motivation dimensions.

Fig. 7.28 Factor matrix nursing training Switzerland 2013

Fig. 7.29 Factor matrix nursing training Switzerland 2014

Complementary to this, the test participants of the training centres F and B show
below-average test motivation in 2013.
For the participating training centres, it is interesting to see how their position in
the factor matrix changed from the first to the second test time—in the course of one
year (Fig. 7.29). This shows that the heterogeneity of the test motivation has
decreased slightly overall and that the test motivation has changed significantly on
some occasions.
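Constructing such a factor matrix amounts to comparing group-mean factor scores against the sample average. The following is a minimal sketch with invented training-centre labels and simulated scores, assuming the factor scores are standardised across the whole sample:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical standardised factor scores (mean 0, SD 1 across the sample);
# the centre labels A-C and group sizes are invented for illustration.
centres = np.repeat(["A", "B", "C"], [150, 160, 167])
scores = {
    "meaningfulness": rng.normal(0.0, 1.0, centres.size),
    "investment": rng.normal(0.0, 1.0, centres.size),
}

# Group means per training centre: values > 0 mark above-average,
# values < 0 below-average motivation relative to the whole sample.
for centre in ["A", "B", "C"]:
    mask = centres == centre
    means = {name: vals[mask].mean() for name, vals in scores.items()}
    print(f"Centre {centre}: "
          + ", ".join(f"{name} {m:+.2f}" for name, m in means.items()))
```

Each centre's pair of group means places it in one of the four quadrants of the factor matrix, exactly as in Figs. 7.28 and 7.29.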

Example: Test Motivation of Electronics Engineers

Within the scope of the first main test of the electronic engineers for industrial
engineering (E-B) as well as the electronic engineers for energy and building
technology (E-EG) (NRW), the test motivation was also recorded. If one compares
the test motivation of the two test groups with each other, there are clear differences
in the level of motivation (Figs. 7.30, 7.31 and 7.32).

Fig. 7.30 Comparison of E-B and E-EG according to interest and evaluation of benefit. Legend:
2 = rather agree/3 = undecided/4 = rather disagree (categories 1 = fully agree and 5 = fully
disagree did not occur)

Fig. 7.31 Comparison of E-B and E-EG according to care and concentration. Legend: 2 = rather
agree/3 = undecided/4 = rather disagree (categories 1 = fully agree and 5 = fully disagree did not
occur)
Deviating from the widespread thesis that there is no significant correlation
between competence levels and test motivation, these two test groups (E-B:
N = 170; E-EG: N = 192) show a pronounced correlation between test motivation
and competence development. This connection is particularly clear for the
E-EG in the case of the item 'Interest in task processing' (Fig. 7.30). The test motivation data
were available when interpreting the COMET test results. In the context of the
feedback workshops with the occupation-related project groups also mentioned in
this text, comparisons were made repeatedly between motivation during examinations
on the one hand and participation in COMET tests on the other. Experienced
examiners justified their assumption that the examination motivation of the
trainees was significantly higher than the test motivation of the COMET test
procedures. After all, a good examination result would also benefit a professional
career, while participation in a COMET test would at best provide insights into the
state of competence development.

Fig. 7.32 Comparison of E-B and E-EG according to commitment and effort in solving the test
items. Legend: 2 = rather agree/3 = undecided/4 = rather disagree (categories 1 = fully agree and
5 = fully disagree did not occur)

Differences Between (COMET) Test and Exam Motivation

As mentioned above, relationships between motivation and test performance were
investigated within the COMET project.
Based on a sample of N = 34 to 35 examination participants (motor vehicle
mechatronics technicians) who had also taken the COMET test for motor vehicle
mechatronics technicians with a total of N = 400 trainees, the test motivation could
be compared with the examination motivation.

7.4.4 Results of the Comparative Study: Test and Examination
Motivation among Motor Vehicle Trainees

Figure 7.33 shows a striking difference between test and examination motivation.
The importance of the primary motivational aspects of occupational relevance
and benefit is rated (clearly) higher for the COMET test procedure than for
examinations. The large difference for the primary motivation criterion of occupational
relevance is particularly striking: the occupational relevance is assessed significantly
higher for the COMET test, with an average value of MV = 4.2, than for the
examinations (MV = 3.2). The evaluation of the tests under the aspect of benefit is
indifferent, with a mean value of MV = 3.0. For the motor vehicle trainees, the final
examination has neither a high occupational relevance nor a high benefit (for the
training and the concrete occupational activity).

Fig. 7.33 Primary motivation aspects in the comparison of examinations and COMET tests

Fig. 7.34 Secondary motivational aspects in the comparison of examinations and COMET tests
The values for the secondary motivational aspects are complementary. In the case
of examinations, the willingness of the trainees to make an effort is significantly
higher than in the case of tests performed as part of competence diagnostics
(Fig. 7.34).
Overall, this results in a complementary motivation structure for COMET tests
and final examinations. The primary motivation factors (excluding interest) are rated
higher for the COMET test procedure, accompanied by a slightly above-average
motivation regarding the secondary motivational factors. In other words, the test
motivation results from the assessment of the test participants that the test items have
high occupational relevance and are therefore highly beneficial for training.
The motivation profile for examinations is completely different. The secondary
motivation factors are rated highly: on average between MV = 4.2 and 4.5. These high
motivation values for the indicators care, concentration, effort and commitment
stand in clear contradiction to the indifferent ratings of the primary motivation
factors. The insight of the trainees that a good examination result is beneficial for
their professional career evidently accounts for the high examination motivation,
which is not diminished by the fact that the occupational relevance and the benefit of
the examination are assessed indifferently.
This case study in a highly sought-after training occupation supports the interest
of organisations in the working world in an application of the COMET competence
and measurement model for the procedures of final examinations.

7.4.5 The Cultural Dimension of Test Motivation

The example of the COMET project South Africa (electronics technician) shows that
the national labour market and training structures—the cultural context—are a
decisive determinant of the test and training motivation of the trainees. This example
confirms the thesis particularly impressively, as all the courses involved achieved
only a low level of competence in an international comparison (Germany, China,
South Africa) (COMET RSA Study 2013). At the same time, the test participants
were very highly motivated. In summary, the project report states:
The test takers were highly motivated and interested in the test items. Still, the test results are
often below the level of functional competence. Professional and holistic shaping compe-
tence have rarely been reached. On the other hand, the South African learners were very
motivated to take the test and are very committed to their learning in general, as well
(ibid., 44).

Both the occupational orientation and the benefit of the test items were highly
rated by the test participants (Figs. 7.35 and 7.36).
When asked about their interest in processing the test items, 53% of the respon-
dents stated that they were very highly interested and 34% were highly interested.
Only 10% stated that they were less interested in the test items and a further 10%
were not at all interested.
The effort with which the test items were processed was correspondingly high
(Fig. 7.37).
These high values of the test motivation correlate with the identification with the
training companies (Fig. 7.38).

Fig. 7.35 Evaluation of the occupational relevance of the test items (COMET RSA, 2013)

Fig. 7.36 Evaluation of the benefit of the test items (COMET RSA, 2013)

The high youth unemployment rate in South Africa is regarded as the decisive
reason for the high organisational identity and the very high training motivation of
young people who have a training contract. This is also transferred to the test
motivation.

Fig. 7.37 Evaluation of the effort spent on the test items (COMET RSA, 2013)

Fig. 7.38 Identification with the companies providing in-company vocational training

7.4.6 Conclusion

In this research field of competence diagnostics, fundamental research is still in its
infancy. In a first step, the motivational model presented in this report has proven to
be a structure that differentiates the questions, methods and results. Primary and
secondary motivational aspects could be distinguished and related to each other, and
their influence on the test performance could be determined.

The in-depth analysis of the individual motivational aspects makes it clear that the
connections between the individual motivational items can be explained by two
underlying dimensions: meaningfulness and investment.
It has been shown that the test performance of the trainees is determined by three
factors: rating the meaningfulness (primary motivation) of the test items as high
usually leads to committed processing of the test items (secondary motivation/
investment). Both motivational dimensions have a significant influence on the test
performance; highly motivated test participants therefore achieve better results.
However, the share of variance in test performance explained by the motivational
dimensions is not exhaustive. A further significant, strong influencing factor on the
test performance was the duration of test processing.
The results discussed and the research on test motivation also open up the
possibility of investigating the job-specific differences in test motivation.
A comparison of the motivation regarding the processing of COMET test items
and examination tasks produced a contrasting motivation picture. While the trainees'
assessment of the primary motivation aspects was (significantly) higher for the
COMET task, their assessment of the secondary motivation aspects was significantly
higher for the examination tasks.
The results suggest that the fact that examinations have an influence on professional
career, while COMET testing has no such influence, works as an external incentive.
This extrinsically motivates trainees in the examination situation, while no such
motivation occurs in the course of the COMET test procedure. The extrinsic
motivation is expressed in the result that trainees invest heavily in the completion
of the examination task, although they tend to assess the occupational relevance and
the benefit of the examination task rather indifferently and do not have an excessive
interest in the examination task. Accordingly, the inadequate inclusion of the
COMET test in the feedback structure at the vocational school poses a major
problem. This can also be a reason for the low test motivation. The effects of good
feedback on the ‘learning success’ of pupils/trainees/students by teachers, trainers or
lecturers, e.g. based on tests or other forms of evaluation of competence develop-
ment, are unanimously rated as very high in empirical educational research. Feed-
back in the form of learning and training counselling is rightly regarded as the
linchpin of a good learning culture (Weinert, 1996).
Although COMET test participants receive individual feedback on their test
results, they are aware that these results are not relevant for their grades. Nevertheless,
there was consistent interest in feedback on the test results, which expresses a
fundamental interest in feedback.
used in its potential for training and learning counselling and is not systematically
integrated into a new feedback culture, test motivation will remain low for some of
the test participants. This effect is reinforced by the fact that the trainees are fixated
on the two selective examination times (Part 1 and Part 2 of the examination) and
give significantly less weight to all other forms of assessment of their training
success. This also applies universally to all forms of school performance measure-
ment. Part 1 of the examination takes place after 18 months and part 2 at the end of
the training period—after three to three and a half years.

It can be assumed that the interest in this form of competence diagnostics will
increase if the test participants experience competence diagnostics as an evaluation
instrument which is of high diagnostic importance for the evaluation and therefore
also for the improvement of training quality.
In individual cases, however, the COMET competence survey is currently even
experienced as a disruption of regular education. The processing of test items
involves considerable effort if it is carried out in a concentrated and committed
manner. The vast majority of trainees quite obviously regard this as an additional
performance to be provided. The impact of this setting on the test result is closely
related to the timing of the test. A case study in which a group of trainees in their
fourth year of training were tested just before the final examination (Part 2) shows
that the motivation to take the test at this point was so low that the overall results
were significantly worse than could be expected at this advanced stage of training.
This is confirmed by the information on the motivation of the trainees and is caused
by the trainees concentrating on passing the examination well. A test that is not part
of the examination preparation or the examination itself is therefore perceived in
individual cases as a disruption of training.
In addition to these aspects, the hypothesis also arose that context-related factors
such as the school climate have an influence on test motivation. This hypothesis
should be investigated in follow-up examinations. An investigation of the relation-
ship between the development of occupational and organisational identity, work-
related commitment and test motivation is also still pending.
A new field of research in this context is the recording of test motivation and
training commitment under the conditions of different national training traditions
and cultures. For example, the unusually high training and test motivation of
South African trainees and students could be attributed to the fact that they belonged
to the minority of young people who could expect to be employed in the training
companies once they had successfully completed their training. A comparatively
high level of test motivation was also measured among Chinese trainees and students
at higher vocational schools. The Chinese COMET consortium’s hypothesis that this
can (also) be traced back to a social norm according to which ‘official’ surveys
express a positive view of the situation has not yet been empirically confirmed.
The approach to recording test motivation presented here and the test results
show that research into test motivation is a prerequisite for the analysis of test
results in competence diagnostics projects in vocational education and training.

7.5 Planning and Executing COMET Projects

7.5.1 Research Design and Research Strategies

Competence diagnostics in occupational training—a field of vocational education
and training research—cannot be easily classified in the systematics of research
methods and research designs. Elements of hypothesis-driven and action research
are as much elements of this field of research as are the methods of (quasi-)
experimental and grounded theory research, large-scale and casuistic research
designs, and international comparative research.

Hypothesis-Driven versus Discovering Research

For competence research, hypothesis-driven research is just as central a research
method as discovering research. This applies above all to the investigation of vocational
teaching and learning structures. The COMET competence model, for example, is a
hypothetical model whose justification is based on the theory of complete action, the
novice-expert paradigm and the model of the development of cognitive partial
competences (multiple competence). A central concern of competence research is
therefore the empirical examination of the hypothetical competence model (Martens
& Rost, 2009, 89 ff.) as a key concept for competence research in vocational
education and training. Key concepts provide access to a broad spectrum of both
hypothesis-based and discovering research projects (Glaser & Strauss, 1967, 38;
Fischer, Rauner, & Zhao, 2015a, 2015b).
The modelling of the requirements dimension of the COMET competence model
was based on extensive data from a project on the development and evaluation of
vocational competences based on the concept of development tasks (Havighurst,
1972). The evaluation method applied in this project is based on the theory of
professional development hermeneutics (Bremer, 2001, 276 f.), applied by the
evaluators (teachers, subject didactics) to develop their subjective criteria or criteria
grids for the interpretation of the task solutions. The great diversity of the resulting
evaluation concepts did not permit a valid and reliable evaluation of the tests, but
they were characterised by a considerable creativity, which ultimately facilitated the
systematisation of the evaluation criteria applied in sum and their consolidation into
a model of a complete (holistic) task solution (Rauner, Grollmann, & Martens,
2007). The validity of the eight competence criteria (partial competences) identi-
fied is based on the plausibility of the concept of the complete (holistic) solution of
professional tasks as an a priori setting. Furthermore, the broad international
application of the COMET Competence Model and the dialogue between the
actors of the international COMET research network can be regarded as a process
of communicative validation of the COMET Competence Model (cf. Lechler,
1982). The subject of the psychometric examination of the competence model
(the hypothesis) was the design of this concept into a measurement model (Martens
& Rost, 2009, 95 ff.).
In empirical social and educational research, the discussion on the weighting of
quantitative and qualitative research methods continues. Particularly in vocational
training research, reference is made to the special importance of the content of
teaching and learning and that this is not covered by empirical social research
methods in the design and evaluation of vocational training processes. A further
argument for the increased use of qualitative methods is based on the argument of
openness to the questions and development tasks in educational research, which is
restricted by hypothesis-led research: 'If the researcher presupposes modelling,
there is a danger that his hypotheses do not do justice to the subject under study.
A further danger must be seen in the fact that, with the advancing development of a
research field, the hypotheses and questions become increasingly specific, move
away from the object in funnel-like manner and lose more and more relevance for
the everyday life of the investigated subjects’ (Dörner, 1983, cited in Flick, 1995,
151). The converse position is argued just as emphatically: the renunciation of
explicit hypotheses poses the danger that research does not really produce
'something new' (ibid., 151).
The COMET project, with its widely ramified research network, shows that the two
research traditions, i.e. hypothesis-driven and discovering (open) research, are not
opposing, mutually debilitating research concepts, but that each has specific
potentials that support the other.

Example: Discovering the Phenomenon of Stagnation in Competence
Development (Rauner, Piening, & Zhou, 2014)

The research design of the COMET projects in more than 15 industrial-technical,


commercial and personal service occupations (Hesse, NRW) is based on two cross-
sectional studies and one longitudinal study with two test periods separated by 1 year
and the participation of apprentices/technical school students from the last and
penultimate year of apprenticeship/study. The test items are developed to measure
professional competence at the end of training. The teams of authors of the test items
must consider the fact that trainees/technical school students in the penultimate year
of training should in principle also be able to solve these tasks.
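The combination of two cross-sections and one longitudinal section described above can be illustrated with a small sketch. The record structure, names and scores are hypothetical; only the logic of the two comparisons follows the text:

```python
from collections import defaultdict

# Hypothetical test records: (person_id, test_date, year_of_training, score).
# A cross-section compares different cohorts at the same test date; the
# longitudinal section follows the same persons across both test dates.
records = [
    ("p1", 2011, 2, 18.0), ("p1", 2012, 3, 24.5),  # tested twice -> longitudinal
    ("p2", 2011, 3, 19.0),                          # tested once  -> cross-section only
    ("p3", 2011, 2, 15.5), ("p3", 2012, 3, 21.0),
    ("p4", 2012, 3, 20.0),
]

def cross_section(records, test_date):
    """Group scores by year of training for one test date."""
    groups = defaultdict(list)
    for person, date, year, score in records:
        if date == test_date:
            groups[year].append(score)
    return dict(groups)

def longitudinal(records):
    """Keep only persons who took part at both test dates."""
    by_person = defaultdict(list)
    for person, date, year, score in records:
        by_person[person].append((date, year, score))
    return {p: sorted(v) for p, v in by_person.items() if len(v) == 2}

print(cross_section(records, 2011))   # {2: [18.0, 15.5], 3: [19.0]}
print(sorted(longitudinal(records)))  # ['p1', 'p3']
```

The same data set thus supports both evaluations: the cross-sectional comparison of training years at one test date and the longitudinal comparison of the same test persons one year later.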
In the first COMET project, to the great surprise of the teachers/trainers and
researchers involved, stagnation in competence development was already measured
during the first testing time on the basis of the cross-sectional analysis. The second-
and third-year trainees had almost identical levels of competence (Fig. 7.39).
In the search for an explanation, an effect was brought into play that is based on
the experiences of teachers. According to this, the motivation of trainees decreases
significantly in the second half of their training, after they have passed the
intermediate examination (Part 1 of the final examination). Only towards the end of
the training, during preparations for the final examination (Part 2 of the final
examination), does training motivation increase again. This effect has become known
in the scientific literature under the name ‘Konstanzer Wanne’ (the Lake Constance
Dip) (Müller-Fohrbroth, 1973, 108). Teachers often refer to this
effect as a ‘sluggish phase’. Trainers report that trainees in the second half of their
training are often already treated as prospective skilled workers—the ‘new
employees’—who are tasked with performing routine work. The training aspect is
often relegated to the background. This could also be one of the causes for the
stagnation of competence development. As plausible as these explanatory attempts
may seem at first, they could not, however, be proven on the basis of the
empirical data.
7.5 Planning and Executing COMET Projects 251

Fig. 7.39 Comparison of the competence profiles of the second and third training years using the
example of the training occupation of electronics technician for industrial engineering (Interim
Report COMET Vol. I, 27)
The phenomenon of stagnation was again measured in the follow-up projects
Automotive Mechatronics Technician and Industrial Mechanic (Hesse) as well as in
the eight occupation-related subprojects of the COMET project NRW. It could
therefore be assumed that stagnation is a fundamental structural characteristic of
dual vocational training (Fig. 7.40).
In this example, the ex post facto experiment (Kerlinger, 1964) on which the
comparative measurement is based defines the training period as the independent
variable: the second and third years of training. The dependent variable is the
competence development in the form of the distribution of the test participants
among the four competence levels or the competence profiles of the comparison
groups. The formation of the two comparison groups ensures a systematic variation
of the independent variable, the training period, as well as the measurement of the
dependent variable, the competence development.
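The variable structure of this design can be operationalised in a few lines; the participants below are invented, while the four level names follow the COMET model (nominal, functional, processual and holistic shaping competence):

```python
from collections import Counter

# Hypothetical illustration of the ex post facto design: the training year is
# the independent variable, the distribution of the test participants over the
# four COMET competence levels is the dependent variable.
LEVELS = ["nominal", "functional", "processual", "holistic shaping"]

participants = [
    (2, "functional"), (2, "functional"), (2, "processual"), (2, "nominal"),
    (3, "functional"), (3, "processual"), (3, "processual"), (3, "nominal"),
]

def level_distribution(participants, year):
    """Percentage of a training-year cohort at each competence level."""
    cohort = [level for y, level in participants if y == year]
    counts = Counter(cohort)
    return {level: 100.0 * counts[level] / len(cohort) for level in LEVELS}

print(level_distribution(participants, 2))
print(level_distribution(participants, 3))
```

Nearly identical distributions for the two cohorts, as printed here, are exactly the stagnation pattern described in the text.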
The formation of the two comparison groups amounts to a dissolution of class
structures and therefore fulfils a central condition of experimental research: the
control of disturbance variables (Campbell & Stanley, 1963). What matters most in
the classroom, namely the teacher and the classroom-specific learning environment
that the teacher decisively shapes, is therefore not available as an indicator for
explaining more or less successful learning.
The stagnation hypothesis was reinforced by the comparison of the competence
profiles of 205 trainees (industrial mechanics) with those of 102 vocational school
students (of a comparable subject).

Fig. 7.40 Average competence profiles of industrial mechanics 2011, by training year

Fig. 7.41 Comparison of average profiles of trainees and students at vocational schools

A comparison of the competence profiles of the test groups (Fig. 7.41) shows that
the competence profiles of the trainees and the technical college students largely
coincide. At V = 0.18, the competence profile of the trainees is slightly more
homogeneous than that of the students at V = 0.25. The values of the trainees are
those of the second test time.
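The homogeneity measure V used here can be read as a coefficient of variation of the criterion scores of a profile; a minimal sketch under that assumption, with invented eight-criterion profiles (it does not reproduce the reported values of 0.18 and 0.25):

```python
from statistics import mean, pstdev

def profile_homogeneity(profile):
    """Coefficient of variation V = standard deviation / mean of the
    criterion scores. A lower V indicates a more homogeneous profile."""
    return pstdev(profile) / mean(profile)

# Hypothetical eight-criterion competence profiles:
trainees = [20, 18, 22, 20, 19, 21, 20, 20]   # flat, homogeneous profile
students = [24, 12, 22, 14, 20, 10, 18, 16]   # jagged, heterogeneous profile

print(round(profile_homogeneity(trainees), 2))  # 0.06
print(round(profile_homogeneity(students), 2))  # 0.27
```

The flatter the profile across the competence criteria, the smaller V; comparing two groups' V values therefore compares the homogeneity of their profiles, independently of their absolute level.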
This result indicates that the transition from dual vocational education and
training to vocational school studies may also lead to stagnation in competence

development. When interpreting the test results of the technical college students, it
was taken into account that all participating technical colleges are located in voca-
tional schools that had also taken part in the COMET project and that the teachers
generally teach both the trainees and the technical college students. The thesis that
the phenomenon of stagnation can be traced back to examination practice can be
ruled out for this comparison, as the forms of examination differ between technical
colleges and dual training programmes.

Looking for an Explanation for the Stagnation Hypothesis: Longitudinal Studies

In the meantime, a wealth of results from numerous COMET projects from the
international research network is available, which make it possible to clarify the
phenomenon of stagnation in competence development and condense it into a
hypothesis.

Industrial Mechanic (Hesse)

At the second test time (longitudinal section), 69% (!) of the test subjects in the
COMET project Industrial Mechanics (Hesse) reached the highest level of compe-
tence. In the previous year, as second-year trainees, only 38% of these test persons
had reached the highest competence level. Over the same period (1 year), the
proportion of risk students in this test group fell from 13% to only 5% (Figs. 7.41
and 7.42). A very high increase in competence from the second to the third year of
training was measured. The seemingly insurmountable hurdle for competence
development in the second half of the training period had, in this case, lost its
significance (Fig. 7.43).

Fig. 7.42 Comparison of the competence levels of industrial mechanics in the longitudinal average
(second year 2011 and third year 2012)

Fig. 7.43 Comparison of competence level distribution of industrial mechanics in their second
(n = 71) and third year of training 2012 (n = 133) in cross-section
If one compares the competence characteristics and development of second- and
third-year trainees at the second test point (cross-sectional study 2009), it becomes
very clear that the phenomenon of stagnation in competence development has
completely evaporated. In most follow-up projects with two testing periods of
1 year apart, the phenomenon of stagnation proved to be an obstacle in established
VET practice, which was quite obviously overcome by the introduction of the
COMET method of quality assurance and quality development.
If competence development is depicted in the form of competence profiles, it
becomes clear that the increase in competence affects all sub-competences
(Fig. 7.44).
The phenomenon of stagnation in competence development took a new turn with
the evaluation of data from longitudinal studies. The assumption that the established
examination structure in dual vocational education and training with its two exam-
ination times caused a stagnation in competence development in the second half of
training (the ‘resting on one’s laurels’ effect: that is, resting on the success of the
passed intermediate examination) could not be empirically confirmed.
Another factor that has a demonstrably high influence on competence develop-
ment came to the fore: the competence of the teachers/lecturers.
Due to the willingness of a larger group of Chinese vocational schoolteachers in
electronics and automotive mechatronics to participate in the COMET tests, it was
possible to systematically compare the competence profiles of these teachers and
lecturers with those of their pupils and students. The 2009 study already showed that
the competence profiles of a group of teachers specialising in electrical engineering/
electronics were very similar to those of their students at the Vocational Colleges in
Beijing (COMET Vol. III, 160 f.).

Fig. 7.44 Longitudinal comparison of competence level distribution in industrial mechanics in
their second year of training 2011 (n = 107) and third year of training 2012 (n = 132) according to
competence criteria. This shows the mean value for the individual criteria, whereby this can assume
a value between 0 and 30 based on the calculation of the competence criteria (cf. COMET Vol. III,
53 f.)

On a much larger data basis, the hypothesis of the transfer of the problem-solving
patterns of teachers/lecturers to their students was investigated in the COMET
project Auto Service (China). For the first time, this study confirms that teachers/
lecturers subconsciously transfer their problem-solving patterns and the technical
understanding they incorporate to their students (Fig. 7.45).
The project consortium attributed the fact that the trainees (and their teachers) at
the skilled worker schools in Guangzhou (GZ) have both a higher and more
homogeneous level of competence than the students (and their lecturers) at the
vocational universities to the fact that the skilled worker schools in Guangzhou
were involved in a pilot project aimed at introducing vocational training oriented
towards learning fields (Zhao & Zhuang, 2013).
Based on this insight, the phenomenon of stagnation in competence development
can now be condensed into a well-founded hypothesis. For the competence
development of trainees/students, this means that whenever their teachers/trainers
have a more or less inhomogeneous understanding of the subject matter or of
problem-solving, this pattern in fact also limits the competence development of their
pupils/students in the institutionalised forms of training, and vice versa: ‘Competent
teachers have competent pupils’ (Rauner, 2015a, 432; Zhao, 2015; Zhou, Rauner, &
Zhao, 2015, 396 ff.).

Fig. 7.45 Transfer of vocational problem-solving patterns from teachers to their pupils (Rauner,
2015a, Fig. 3.6)

In the division of tasks between training companies and vocational schools, it is
above all up to the teachers to convey the work process knowledge that explains and
reflects action. The test subjects can only achieve a high level of competence if they
are able to justify their task solutions in detail. In this case, the decisive aspect is the
knowledge acquired in vocational schools: the knowledge guiding action as well as
the knowledge explaining and reflecting action.
For the teachers in the Industrial Mechanic Hesse project (see above) and in all
COMET projects in which the competence level of the trainees or students has
increased from the first to the second test time, this can be attributed to the fact that
the teachers have acquired an extended professional technical understanding and a
problem-solving competence based thereon through rater training, rater practice
and the examination of the competence profiles of their trainees at the first test time.
This has changed their didactic actions and enabled them to consistently support
trainees in their competence development. The phenomenon of stagnation in com-
petence development had lost its significance.

7.5.2 Defining the Project Design

Test practice determines whether one or more occupations should be included in a
COMET project.
Large-scale projects are generally only carried out for one profession or one
occupational field of activity, such as the COMET project ‘Nursing training at higher
technical colleges in Switzerland’ (Gäumann-Felix & Hofer, 2015) or the EU project
‘COMCARE’ (Fischer, Hauschildt, Heinemann, & Schumacher, 2015). Projects
with a large number of professions, such as the COMET NRW project with its
eight professions, would as a large-scale project require such a large organisational
and management effort, as well as such extensive scientific support, that the form of
a coordinated pilot study is more appropriate. The test results are then not
representative but rather characteristic (Rauner et al., 2015e).
It must also be determined whether a test arrangement spanning several qualification
levels is chosen, with the participation of (e.g.) dual, technical college and tertiary
programmes, or whether only one form of training should be selected. In testing
practice, the responsible project management agencies usually opt for a project
design that spans qualification levels. In this case, the project groups of the
programmes must clarify which programmes form the primary test group. The test
items must then be developed for this qualification level. For the secondary test
groups, the content validity of the test items must be evaluated (Martens et al., 2011,
93 ff.).
If an internationally comparative project is conducted, then a variety of questions
and tasks arise for project planning:
• The identification of the test items to be used in all national projects
• Adapting, where appropriate, the solution areas to national specificities
• A very careful translation of the rating scale, as any change in content would call
into question the comparability of the test results.

Agreement on Project Objectives

The agreement on the project objectives is a key factor in determining the project
design.
Possible project goals are
• Testing and introduction of the COMET methodology as an instrument of quality
assurance and development at the level of educational processes.
• Carrying out a pilot project to initiate innovations in curriculum development and
the didactics of vocational education and training (e.g. the introduction of the
learning field concept or new forms of examination).
• To design the COMET project as a research and development project for the
further development of COMET methods and for the scientific qualification of
multipliers.

• Gain knowledge for policy guidance and vocational training planning (e.g. for the
development and modernisation of occupations, vocational education and train-
ing, the improvement of cooperation between learning locations and the modern-
isation of framework curricula).
• Testing international cooperation in vocational education and training (e.g. in the
development of European job profiles and international standards for vocational
education and training).

The Establishment of a Project Organisation

The linchpin for the success of the project is the formation of the professionally
related project groups (teachers, trainers, theorists). It is their task to develop and test
the test items (including the solution areas) in cooperation with the scientific support,
to qualify as advisors, to carry out the rating and to implement the knowledge gained
in this process in their didactic actions and, if necessary, in curriculum development
and in the design of tests and examinations.
Together with the scientific support, the steering group takes over the steering of
the project within the framework of the specifications set by project planning. The
steering group also has the task of transferring the project results. Performing this
task is a crucial factor in determining whether pilot projects lead to organisational
development and a new quality of VET practice—or whether the results of pilot
projects disappear again under pressure from the established structures of established
VET programmes. Model experiment research shows that model experiments often
fail due to the transfer of model experiment results (Rauner, 2004).

Determining the Participants (Sample)

When the test item development is complete (pre-test), the next step is the final
determination of the test participants.
Representative test groups take part in large-scale competence diagnostics projects.
These represent populations of trainees/students at the level of
• Training occupations and study subjects,
• Forms of vocational education and training, such as dual vocational training,
vocational schools and vocational colleges,
• Regions and states, or
• International comparative projects.
Two criteria are decisive for the participation of the programmes. Both are evaluated
by the teachers/trainers/lecturers of the courses to be involved:
1. The professional validity of the test items (Table 7.10).
The question to be answered is: ‘How highly do you rate the validity of the test
items in terms of content in relation to the occupation to be learned, on a scale of
0-10?’

Table 7.10 Characteristics of the Occupational Validity scale

Professional validity:  Very low   Low    Medium   High   Very high
                        1-2        3-4    5-6      7-8    9-10
                        Not suitable               Suitable

The content validity of the test items for an educational programme is given if the
teachers/lecturers of an educational programme rate the professional validity at least
as ‘high’. In this assessment, the curricular validity of the test items is initially
excluded.
If the test items are classified as valid in terms of content for an educational
programme, then the prerequisite for participation in the test is given.
2. The curriculum validity of the test items (Table 7.11).
The question is: ‘How highly do you rate the curricular validity of the test items
for your educational programme?’ Or: ‘Do the test items suit the curriculum/framework
curriculum of the educational programme?’
The question of the curricular validity of the test items has two functions.
1. It examines whether and to what extent the curricula aim to impart vocational
decision-making competence.
In the case of a (higher) school education programme, it is to be expected that
even if the teachers/lecturers rate the professional validity highly (the assess-
ment yardstick being the later occupation), the assessment of the curricular
validity will be lower, as the teachers/lecturers must take into account the fact
that a phase of familiarisation with the occupation is still required following the
(higher) school education programme.
The participation of degree programmes with a high proportion of practical
experience, on the other hand, is generally possible. This has been demonstrated
by the participation of ‘professionally qualifying’ higher education programmes
in COMET projects (Heeg, 2015). As a rule, the interest lies in finding out to what
extent (higher) scholastic training courses can convey employability to the stu-
dents (Heeg, 2015; Fischer, Piening, Heinemann, Hauschildt, & Frenzel, 2015;
Zhou, Rauner, & Zhao, 2015).
2. If the teachers/trainers assess the curricular validity of the test items as low or very
low, then—according to the quality criterion of fairness—it is not fair to involve
this test group in the test. Participation in such a test can only be justified if
trainees/students show great interest in this type of test item despite (very) weak
test results (e.g. in a pre-test) or if there are serious educational policy reasons to
assess the quality of national forms of education/training, e.g. in an international
comparison.

Table 7.11 Characteristics of the Curricular Validity scale

Curricular validity:  Very low   Low    Medium   High   Very high
                      1-2        3-4    5-6      7-8    9-10
                      Not suitable               Suitable
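The two selection rules above, content validity at least ‘high’ and curricular validity not ‘low’ or ‘very low’, can be sketched as a small decision helper. The scale mapping assumes the 1-10 ratings of Tables 7.10 and 7.11; the function names are illustrative:

```python
# Hypothetical helper for the two participation rules described in the text:
# (1) content (professional) validity must be rated at least 'high';
# (2) fairness: curricular validity must not be 'low' or 'very low'.
CATEGORIES = [(2, "very low"), (4, "low"), (6, "medium"), (8, "high"), (10, "very high")]

def category(rating):
    """Map a 1-10 rating to its verbal category."""
    for upper, name in CATEGORIES:
        if rating <= upper:
            return name
    raise ValueError("rating must be between 1 and 10")

def may_participate(professional_rating, curricular_rating):
    """Rule (1): professional validity at least 'high' (>= 7).
    Rule (2): curricular validity at least 'medium' (>= 5)."""
    return professional_rating >= 7 and curricular_rating >= 5

print(category(8), may_participate(8, 5), may_participate(8, 4))  # high True False
```

The exceptions named in the text (strong learner interest despite weak pre-test results, or overriding educational policy reasons) would of course be decided outside such a mechanical rule.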

Table 7.12 Assessment (in %) of German and Chinese teachers/raters on (a) ‘To what extent can
the four test items be used to record the training objective (cognitive potential for action) identified
in the occupational profile’ and on (b) ‘The content of the items corresponds to … % of the
framework curriculum for the 2nd/3rd year of training’. Survey 2009 (COMET Vol. III, 93)

Respondents                        (a) ‘Professional’ validity   (b) Learning field (curricular) validity
Teacher/rater Germany (n = 26)     72%                           2nd year: 47%; 3rd year: 63%
Teacher/rater Peking (n = 32)      78%                           2nd year: 39%; 3rd year: 57%

First Example: COMET Electronics Technician (China)

The project consortium of the Chinese COMET project ‘Electronics Technicians’
unreservedly decided to participate in the comparative study, although the curricular
validity of the test items for the second-year trainees/students was rated at only 39%
(Table 7.12).
When selecting the test groups, it is also a question of whether a representative
study or a pilot project should be carried out.

Representativeness

In vocational education and training, statistical representativeness can be achieved in
occupations with high training figures. Projects based on total surveys are ideal for
occupations with low and very low numbers of trainees.
The question of whether representativeness should be sought in a competence
diagnostics project (large-scale project) depends on the objectives pursued with the
project.
In the foreground are usually questions such as
• What distinguishes the quality of national VET systems in specific occupations
and training pathways?
• How do different forms of initial vocational training affect the development of
vocational competence and identity?
• What influence do the countries’ vocational classification systems have on the
quality of vocational education and training?
• How do training and examination regulations affect professional competence
development?
• How do local institutions and actors make full use of their leeway in the
qualification of skilled workers?
Comparative studies of occupations that are representative of occupational fields
and employment sectors reveal a wide range of issues of interest for quality
assurance and quality development: education policy, education planning and voca-
tional training practice.

Example 2: Representativeness in the PISA Project (Prenzel et al., 2004, Sect. 2.4)

The test population of the PISA project (2003) in Germany was 884,358 fifteen-
year-olds. The sample size was 4660 test participants. In a two-stage procedure, the
schools to be involved were first identified as the primary sample unit. In a second
step, 25 fifteen-year-olds per school were randomly selected. Statistical representa-
tiveness could not be achieved according to this procedure. However, this two-stage
procedure for determining the sample is sufficient to achieve an approximate repre-
sentativeness. The random selection of test participants in the schools is decisive.
When selecting the number of schools (primary, higher secondary, lower secondary
and vocational), this procedure takes into account the distribution of fifteen-year-
olds among the school types. Given the small number of school locations in the
city states and smaller federal states, however, it is hardly possible to map the
regions with their specific social structures.
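The two-stage procedure can be sketched as follows. School names and sizes are invented, and the helper is a simplified illustration of the sampling logic, not the actual PISA sampling design (which additionally weights by school type):

```python
import random

# Sketch of a two-stage sample: draw schools first (primary sampling units),
# then up to 25 fifteen-year-olds per selected school.
random.seed(1)
schools = {
    f"school_{i:02d}": [f"pupil_{i:02d}_{j:03d}" for j in range(random.randint(20, 120))]
    for i in range(40)
}

def two_stage_sample(schools, n_schools, per_school=25):
    selected = random.sample(sorted(schools), n_schools)  # stage 1: schools
    pupils = []
    for name in selected:
        candidates = schools[name]
        # stage 2: random pupils within each selected school
        pupils += random.sample(candidates, min(per_school, len(candidates)))
    return selected, pupils

selected, pupils = two_stage_sample(schools, n_schools=10)
print(len(selected), len(pupils))
```

The sketch also makes the limitation discussed in the text visible: with at most 25 randomly drawn pupils per school, whole classes are never captured, so class-level effects cannot be analysed.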
The random formation of representative test groups at the selected schools
contributes to the representativeness of the test results insofar as it relates to the
population of 15-year-old pupils. However, it is not possible to clarify the hetero-
geneity of competence development within and especially between classes, as these
cannot be represented by the few randomly selected pupils. The learning climate of
the school classes is abstracted away under this test design, which leads to a
considerable restriction of the informative value of the test results for the participat-
ing schools and teachers.

Example 3: Automotive Mechatronics Technician (NRW)

This problem can be illustrated using the example of the COMET project Automotive
Mechatronics Technician (NRW), as the professional competence development in
and between the subject classes—including the same school locations—is recorded.
A comparison of a group of second- and third-year trainees in a given sample of
trainees shows the phenomenon of stagnation of competence development (at the
first test time): the competence development of the two test groups does not differ,
or differs only very marginally, from one another (Fig. 7.46). A comparison of the
same test subjects grouped by their subject classes reveals an impressively high
degree of heterogeneity between the classes (Fig. 7.47).
This example illustrates that the learning climate of the classes has a very large
influence on competence development.
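Grouping the same test scores by subject class, as in the class comparison just described, can be sketched as follows; classes and scores are invented:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical class-level view: per-class mean scores expose the
# between-class heterogeneity that a class-blind sample hides.
scores = [
    ("class_a", 14.0), ("class_a", 16.5), ("class_a", 15.0),
    ("class_b", 24.0), ("class_b", 22.5), ("class_b", 26.0),
    ("class_c", 19.0), ("class_c", 18.0), ("class_c", 20.5),
]

def class_means(scores):
    by_class = defaultdict(list)
    for cls, score in scores:
        by_class[cls].append(score)
    return {cls: mean(vals) for cls, vals in by_class.items()}

means = class_means(scores)
spread = max(means.values()) - min(means.values())
print(means)   # per-class averages
print(spread)  # gap between the strongest and the weakest class
```

Averaged over all nine scores, the three classes look unremarkable; only the per-class grouping shows the large gap between the strongest and the weakest class.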
The category of representativeness must therefore be conceptualised in such a
way as to capture the situation-specific peculiarities that characterise real learning. In
the sense of situated learning, a school class represents a class-specific learning
climate that has a decisive influence on the competence development of the students
to be trained. Therefore, in the comparative analysis of competence development and
262 7 Conducting Tests and Examinations

Fig. 7.46 Average competence profile per training year (automotive mechatronics technicians
NRW)

Fig. 7.47 Class comparison: highest vs. lowest TS (automotive mechatronics technicians NRW)

the learning climate, the subject classes play a fundamental role in learning
research.
The new research field of competence diagnostics in vocational education and
training requires the investigation of problems and questions on the basis of well-
founded hypotheses. The formation of randomised comparison groups is also of
some importance. In the randomisation technique, situational ‘interference variables’
are usually neutralised (Bortz & Döring, 2002, 58). This, however, severely limits
the potential of educational research. Competence diagnostics requires the scientific
examination of situated learning, the singularity of vocational learning processes, as

is the case, for example, in the subject instruction of vocational school, technical
college and specialised school classes. While the clarification of fundamental laws of
vocational learning must be abstracted from the specific conditions of learning
situations in classes, the clarification of class-specific effects on the analysis of
learning situations and learning environments shaped by teachers and trainers is
important. In the first case, the question of the representativeness of the sample of
test persons to be involved in an investigation must be clarified. In the second case,
abstracting from the learning climate of the classes would prevent the clarification of
essential factors for the competence development of the learners or classes.
In competence diagnostics in vocational education and training, the samples of
the occupations involved often represent both forms of representativeness.

Example 4: Realisation of Representativeness and Situatedness

In the COMET project ‘Electronics Technician (Hesse)’, a sample of trainees/
apprentices and technical college students from the profession of electronics
technician or the subject of electronics was selected:
• Electronics technician specialising in energy and building technology (crafts).
• Electronics technician for industrial engineering.
Both occupations comprise a training period of 42 months. Table 7.13 shows the
number and proportion of trainees in the two selected occupations in the German
federal states (Länder) participating in the project.
The state’s vocational schools selected by the Hessian Ministry of Education
(HKM) for the COMET project train electronics technicians for the two training
occupations in industry and the crafts.
locations in the metropolitan milieu (e.g. Kassel, Frankfurt) as well as vocational
schools were selected to represent the more rural area (e.g. Melsungen and
Dillenburg). This selection of school locations made it possible to examine addi-
tional location-specific factors influencing vocational competence development.
Table 7.13 Distribution of trainees in the selected training programmes among the Länder
involved in the project (Federal Ministry of Education and Research (2006), Vocational Training
Report 2006 Part II and Annex; Federal Statistical Office 2006)

Training occupation                                 Trainees (nationwide) 2005   Trainees 2005 in Hesse   Trainees 2005 in Bremen
Electronics technician FR energy and                22,325                       1,619 (7.3%)             175 (0.8%)
building technology
Electronics technician for industrial               14,165                       1,012 (7.1%)             234 (1.7%)
engineering

The Federal State of Bremen participated in this COMET project as part of a
comparative study. Unlike in Hesse, where a sample of test persons had to be drawn
for research-economic reasons, a total survey was carried out in Bremen. The
training of apprentices as electronics technicians is carried out at three vocational
schools.
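The regional shares in Table 7.13 follow directly from the absolute trainee numbers; a quick arithmetic check with the values from the table:

```python
# Regional trainees as a percentage of the nationwide total per occupation
# (all figures taken from Table 7.13).
def share(regional, nationwide):
    """Percentage share, rounded to one decimal place."""
    return round(100.0 * regional / nationwide, 1)

print(share(1619, 22325))  # Hesse, energy and building technology: 7.3
print(share(175, 22325))   # Bremen, energy and building technology: 0.8
print(share(1012, 14165))  # Hesse, industrial engineering: 7.1
print(share(234, 14165))   # Bremen, industrial engineering: 1.7
```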
The number of test participants at the first test time was n = 697, of which
371 were trainees from Hesse and 255 from the state of Bremen, as well as
65 students at technical colleges (Hesse).
The sample at the second test time (1 year later), as the basis for a new cross-
sectional and longitudinal study, had the same size. The trainees in the second year
of training at the first test time, who formed the sample for the third year of training at
the second test time, took part in the longitudinal study.
The samples of all educational programmes involved in the COMET project
Electronics Technicians (Hesse) are representative in two ways. They represent
both the test population and the class population, as all participating VET schools
involved all second- and third-year classes of electronics technicians as well as the
technical school classes in the survey.
This form of representativeness of the project also made it possible to analyse the
class-related effects of competence development. In this respect, there is also a high
degree of situational representativeness, which formed the basis for the inclusion of
the COMET project in the schools’ internal quality assurance and development
procedures (Table 7.14).

7.5.3 Selecting and Developing the Test Items, the Test Documents
for the ‘Commitment’ Survey and Performing the Context Analyses

• The scientific and didactic orientation of teachers and lecturers as well as trainers
often represents a considerable barrier to the development of test items in
accordance with the COMET competence model. One successful way for the
development of test items is the participation of one or more experienced
teachers/trainers who have already gained previous experience in the develop-
ment of test items in a COMET project of a related profession.
• The test items are not developed based on an existing curriculum, but in relation
to the fields of action of professional practice (following the practice of
WorldSkills International).
• Nevertheless, once the drafted test items have been tested and revised, it is
necessary to assess the curricular validity of the test items for all educational
programmes involved in a project.
• Modular structures in training and examination practice cannot be considered in
the development of test items because they contradict the requirements of the
COMET competence model (concept of (holistic) task solution).
• Conceptualisation of the context analysis.

Table 7.14 Overview of the number of test participants at the first recording time (Haasler,
Heinemann, Rauner, Grollmann, & Martens, 2009, Section 4.7)

                                                    Electronics technician       Electronics technician FR
                                                    for industrial               energy and building
                                                    engineering (industry)       technology (handicraft)
                                                    2nd year     3rd year        2nd year     3rd year      Total
Hesse
  Heinrich Emanuel Merck School, Darmstadt          50           22              -            -             72
  Commercial schools of the Lahn-Dill               16           18              6            13            53
  district, Dillenburg
  Werner von Siemens School, Frankfurt              32           17              15           29            93
  am Main
  Ludwig Geissler School, Hanau                     32           27              -            -             59
  Oskar von Miller School, Kassel                   17           26              13           18            74
  Radko-Stöckl-School, Melsungen                    -            -               11           9             20
  Hesse total                                                                                               371
Bremen
  Technical Training Centre Mitte (TBZ),            22           30              27           41            120
  Bremen
  Schulzentrum Sek. II Vegesack - vocational        28           16              0            8             52
  schools for metalworking and electrical
  engineering, Bremen
  Commercial educational institutions,              10           13              31           29            83
  Bremerhaven
  Bremen total                                                                                              255
Total                                               207          169             103          147           626

The first step in the conceptualisation of the context analyses is to define the interest
in knowledge and the development goals in relation to the test groups involved in a
COMET project, as well as the educational programmes and systems which repre-
sent the test groups.
The COMET instruments are used to measure
1. The pedagogical-didactic potential of vocational training programmes and sys-
tems for the qualification of skilled workers on the basis of individual competence
characteristics and development
2. The commitment, identification with the occupation to be learnt and, in dual
training programmes, emotional attachment to the training companies. The
conceptualisation of recording the willingness to learn must—in contrast to
recording the vocational competence—take into account the structures of the
vocational training programmes and systems. When developing and applying the
scales for occupational and organisational identity and for occupational and
organisational commitment, for example, it must be considered whether and in
what form learning in the work process (the company as a learning location) is
integrated into vocational learning.
The context surveys provide data for the analysis and interpretation of the test
results. This research approach goes far beyond the established descriptive
analysis approaches in competence diagnostics. With the introduction of the
multidimensional ‘heterogeneity model’, the PISA project has succeeded in
partially overcoming the educational policy requirement for a descriptive project
design (PISA, 2003; Prenzel et al., 2004, 377 ff.).

Survey of Trainees/Students

To interview trainees/students, a standardised survey was developed in the COMET project, which is modified in the individual projects depending on the particularities of the test populations, the vocational training programmes and the knowledge and development objectives of the project consortia.
The survey includes
• The personal characteristics of the learners (Table 7.15).
• The characteristics of the training companies and in-company training.
• The characteristics of the school as a learning location.
In test practice, it must be decided whether methods for the direct recording of prior knowledge and general cognitive abilities should be provided, or whether recording the prior school education and the quality of vocational preparation training suffices.

Characteristics of In-Company Training

Four classes of instruments can be distinguished which have so far been used to
determine the context characteristics of in-company learning:
• Instruments that focus on the objective conditions of the quality of in-company
learning within the framework of dual vocational training;

Table 7.15 Personal characteristics of learners

Socioeconomic background:
– Language at home
– Parental support
– Parents’ school-leaving qualifications

School level and pre-vocational learning career:
– School-leaving qualification
– Pre-vocational measures
– Final grades in German, mathematics and English

Training motivation:
– Desired occupation
– Relative importance of the occupation in relation to the training company
– Further information on career guidance

Table 7.16 Context characteristics of in-company vocational training

General company characteristics:
– No. of employees
– No. of trainees
– Sector
– Position of the company (branch, independent)

Work-process orientation of the training:
– Learning locations (training workshop, company work process)
– Organisation of training (full-time trainers, part-time trainers)

Company training situation:
– Work climate
– Social inclusion
– Measures to promote transparency
– Integration into the expert culture
– Complexity of the tasks
– Variety of tasks
– Autonomy
– Suitability of requirement and ability level
– Meaningfulness of the tasks

• Instruments and items of (work) psychological provenance developed in connection with the question of the ‘general’ learning content of work processes (Butler et al., 2004; Skule & Reichborn, 2002);
• A group of instruments which can be described as ‘industry-oriented’ and which
are used for internal quality assurance of training processes or the accreditation of
companies and training programmes (National Automotive Technicians Educa-
tion, 1996; Ripper, Weisschuh, & Daimler Chrysler, 1999).
• Instruments from teaching and learning research for commercial training
(MIZEBA) (Zimmermann, Wild, & Müller, 1999).
Apart from their origin and intended use, the instruments differ in terms of
recording subjective assessments of learners on the one hand and objective indica-
tors on the other. Firstly, it can be assumed that the general characteristics of the
training company already influence the structure of the training. Experience shows,
for example, that the breadth of everyday professional requirements varies with the
size of the company. Within the framework of COMET, it was decided to use the MIZEBA instrument (Wosnitza & Eugster, 2001), which has in the meantime also been validated for the commercial-technical sector. The different scales recorded by the instrument are listed in Table 7.16.

Characteristics of Vocational Schools in Dual Vocational Education and Training

The characteristics of the vocational schools involved are recorded by means of assessments and surveys of trainees. The scales used were validated in various
projects on school quality by a working group of the German Institute for Interna-
tional Educational Research (DIPF) (Gerecht, Steinert, Klieme, & Döbrich, 2007).
However, not the entire inventory was used, but only those items that correspond to
the special tasks in vocational schools. For example, in contrast to general schools,
parental work plays a minor role in vocational schools. In contrast, deviant behav-
iour and pupil orientation are also important determinants of school quality in
vocational schools. In addition to items from general school quality research, some
items were added to genuine vocational-educational quality dimensions such as
learning location cooperation and practical relevance of teaching (Pätzold, Drees,
& Thiele, 1998; Pätzold & Walden, 1995).
The hypothesis to be tested with regard to vocational competence development as
a function of the school learning environment would be that a school environment
characterised by high pedagogical quality and high work process orientation would
be conducive to a coherent competence development process with regard to all concepts of vocational competence (functionality; process functionality; utility value
and customer orientation; social relevance). The first step would be to determine the
impact and strength of individual conditions on the course of vocational competence
development. Moreover, the replacement of a ‘school learning concept’ by a ‘voca-
tional learning concept’ can be examined and differentiated over the course of
training (Bremer, 2004, 114 f.).

Context Analyses of the Scientific Support

An online procedure is recommended for carrying out the context analysis, as this
considerably shortens the time between the implementation of the test and the
feedback of the test results. With this form of quantitative context analysis,
generalisable phenomena and insights can also be gained in large-scale projects.
The clarification of the learning and training situation of classes also requires
qualitative methods and, above all, dialogue between teachers and trainers about
‘their’ test results and the analysis results concerning them. A collegial interpretation
of class-specific training situations can therefore provide insight into the quality of
vocational training processes that goes far beyond the quantitative results. The
innovation potential in vocational education and training practice ultimately depends
on this.

Quality Diagrams

Eight scales were formed from the total items available for the survey of test
participants (Table 7.17).
The scales are arranged in a quality diagram in such a way that the three central
scales for evaluating in-company vocational training are assigned to the upper half of
the diagram:
• Business process orientation of training quality.
• Training quality.
• Training support (by the trainers).

Table 7.17 Scales of the context survey

Scale                                         Cronbach’s alpha
Business process orientation                  α = 0.64
Training quality                              α = 0.76
Training support                              α = 0.62
School climate                                α = 0.67
Teaching quality                              α = 0.64
Teacher assessment                            α = 0.86
Learning location cooperation (structure)     α = 0.70
Learning location cooperation (contents)      α = 0.87
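The reliability coefficients reported for these scales are Cronbach’s alpha values. For readers who wish to reproduce such coefficients from their own survey data, a minimal sketch (the Likert responses here are invented for illustration):

```python
from statistics import variance

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of per-respondent item-score lists."""
    k = len(responses[0])                      # number of items in the scale
    items = list(zip(*responses))              # transpose to per-item columns
    sum_item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum_item_var / total_var)

# Invented four-point Likert responses of five trainees to a four-item scale
responses = [
    [4, 3, 4, 4],
    [2, 2, 3, 2],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 3, 4],
]
print(round(cronbach_alpha(responses), 2))  # → 0.92
```

Values around 0.6 and above, as in the table, are commonly regarded as acceptable internal consistency for short context scales.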

The business process orientation of the training is regarded as a quality criterion, the significance of which is primarily justified from a business and professional
pedagogical perspective. If trainees are able to integrate their activities and assign-
ments, which they carry out both independently and in a team or under supervision,
into the company’s business processes, this contributes to the development of a
sense of responsibility and quality. This in turn enables companies to assign quality
control tasks to trained specialists and to establish flat organisational structures and
participatory forms of organisational development. The result is an increase in labour
productivity.
The vocational-educational aspect of business process orientation in training is
that the trainees experience and learn what significance their work has for the
company and for the customers. This justifies a deeper understanding of their
activities as well as an increase in professional identity and the associated
motivation.
The training quality scale comprises the items relevant to training success. The
correlation with the values of competence development, the development of occu-
pational and organisational identity and motivation shows which items are particu-
larly suitable for these scales.
Training support primarily comprises the support provided by the trainers, but also the experience of cooperating with other trainees and specialists in everyday company practice. The quality of training support radiates to all other elements of in-company training quality. In addition, it influences attitudes towards school-based learning: if training support is experienced as inadequate, the interviewees usually perceive the vocational school and its teachers as compensating for the weaknesses of in-company training.
Complementary to these three quality scales for in-company training, the following scales are used to record the training quality at the vocational school:
• Learning climate at the school.
• Teaching quality.
• Teachers.
The learning climate at the vocational school is—complementary to the learning climate at the workplace—a fundamental prerequisite for more or less successful learning at school. The items were selected to facilitate the evaluation of the great heterogeneity with which the trainees, vocational school pupils and technical college students experience the learning climate of the different vocational institutions.
In dual vocational training, the scholastic learning climate is generally assessed by trainees in competition with the training quality of in-company vocational training. Their experience of cooperation with other trainees and specialists as well as their relationship with their teachers is included in their assessments of the quality of the scholastic learning climate (Table 7.18).

Table 7.18 Items on the vocational school as learning location and on the scholastic self-concept (cf. Piening, Frenzel, Heinemann, & Rauner, 2014 for details)

Didactic climate (α = 0.71):
– Classmates frequently disturb the lessons. (recoded)
– Classmates have little consideration for other students.
– What we do in class, I usually find interesting.
– Students often bunk school.
– I feel comfortable at my school.

Teacher evaluation (α = 0.84):
– Our teachers consider the interests of the students in their lessons.
– Our teachers make the lessons interesting.
– Our teachers take us students seriously.
– Our teachers have a good overview of company reality.
– Our teachers cooperate with instructors and master craftsmen from our company.
– Our teachers are well versed in the subject.

Didactic quality (α = 0.83):
– Our teachers also take care of individual students.
– Our teachers coordinate the planning and execution of lessons with each other.
– Our teachers can also teach difficult topics in an understandable way.
– Our teachers give us the opportunity to solve problems independently and give us advice.
Teachers are the key factor for the quality of scholastic learning. This is shown by the empirical data of the COMET projects, in accord with the state of the art in relevant research. One strength of the scales is that they include items with which the practical competence of the teachers can be evaluated, for example: ‘Our teachers have a good overview of company reality’.
particularly important, as passing the exam and therefore achieving employability
are the yardsticks by which learners assess their teachers and trainers. As the
trainees/students clearly distinguish between the occupation-related and the sub-
ject-related competence of their teachers, this is taken into account when selecting
the items.
Two scales for learning location cooperation are assigned to both learning locations in the quality diagram. They are therefore arranged in the middle of the diagram. A distinction is made between a scale based on the content and another based on the structure of the learning location cooperation. Both scales make it possible to illustrate very clearly how well the two learning locations are coordinated with each other (Table 7.19).

Table 7.19 Items for the differentiated evaluation of learning location cooperation

Learning location cooperation (structure) (α = 0.64):
– My training company and the vocational school coordinate training with each other.
– Joint projects are carried out between our company and the vocational school.
– The training company is satisfied with the school’s work.
– My company attaches great importance to attending vocational school.

Learning location cooperation (contents) (α = 0.89):
– Learning at vocational school and in the company is well matched.
– Vocational school lessons are oriented towards company practice.
– The lessons at vocational school help me to solve the tasks and problems of the work at the company.
– I can apply the content I learn in vocational school to my work.
– The work that I carry out in the company is also dealt with at vocational school.

The Benefit of the Quality Diagram for Presenting and Interpreting the Results of the Context Analyses

The quality diagram simplifies the illustration of training quality so that 57 individual
items can be reduced to eight quality criteria. This increases the informative value of
the analysis results in several respects.

Capturing the Quality of Training at a Glance

This presentation form for context analyses illustrates the strengths and weaknesses
of the vocational training quality at a glance (in this case, from the pupil’s perspec-
tive). The selected examples are taken from the project ‘Recruitment and training
organisation’ (Piening et al., 2014). The size of the area in the diagram indicates the
level of training quality and the homogeneity of the quality characteristics indicates
the quality of learning location cooperation and the preference that the trainees have
for the learning locations (Fig. 7.48).
In-company training is rated more positively than school-based training. Learning location cooperation is the central weakness in this example. The cause of this quality deficit appears to lie primarily with the school as a learning location.
272 7 Conducting Tests and Examinations

Fig. 7.48 Example of the quality profile of process mechanics (Saxony)

The quality values are derived from the scales’ mean values. The advantage of this form of presentation is that the quality diagrams can also be compared with the evaluation values for the individual items.

Illustration of Heterogeneity

This presentation form for context analyses also makes it possible to illustrate the degree to which quality profiles are homogeneous or inhomogeneous and to quantify this with a coefficient of variation. For example, the Qualified Groom has not only a slightly above-average quality profile, but also a relatively homogeneous one. In contrast, the quality profile of the Specialist Warehouse Clerk is very inhomogeneous. Here, the overall very low quality of training is likely to be caused primarily by the low level of training support (Fig. 7.49).
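A minimal sketch of how such a quality profile can be condensed into a mean quality value and a coefficient of variation (the scale names follow Table 7.17; the scale means are invented for illustration):

```python
import statistics

# Invented scale means (five-point Likert) of a hypothetical quality profile
profile = {
    "Business process orientation": 3.8,
    "Training quality": 3.6,
    "Training support": 3.9,
    "School climate": 3.2,
    "Teaching quality": 3.1,
    "Teacher assessment": 3.4,
    "Learning location cooperation (structure)": 2.4,
    "Learning location cooperation (contents)": 2.2,
}

values = list(profile.values())
mean_quality = statistics.mean(values)
cv = statistics.stdev(values) / mean_quality  # sample SD relative to the mean
print(f"mean quality = {mean_quality:.2f}, coefficient of variation = {cv:.2f}")
```

A homogeneous profile such as that of the Qualified Groom yields a small coefficient of variation; an inhomogeneous profile such as that of the Specialist Warehouse Clerk yields a large one.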

Data Protection and Coding of the Personal Data of the Test Persons

For a neutral rating of the solutions, the assignment of respondents to their solutions of the test items must be anonymised. The task solutions, initially identified by name, were therefore coded centrally before being forwarded to the evaluators for rating. Consequently, a rater can determine neither the creator of a solution variant nor the training year, training occupation or attended vocational school. All information that could consciously or subconsciously influence a rater’s evaluation was withheld.
In this design, the personal assignment of the data to the subjects can only be
carried out centrally by the scientific supervisor who leads the investigation. The
personal code of each subject is required both for the second survey in the longitudinal section and for identification in the written survey on the context characteristics.

Fig. 7.49 Quality profiles of the training occupations Qualified Groom and Specialist Warehouse Clerk
The data of the two measurement points at which the test items were used are
anonymously compiled by Scientific Support with the help of a code word. The
collected individual data of the subjects will not be passed on to third parties. All
pupils who took part in the survey receive individual feedback on their personal level
of competence.
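The central coding procedure described here can be sketched as follows; the class and method names are illustrative assumptions, not part of the COMET instruments:

```python
import secrets

class CentralCodingOffice:
    """Central pseudonymisation: only the scientific supervisor keeps the key."""

    def __init__(self):
        # code -> personal record; this mapping is never passed on to raters
        self._key = {}

    def register(self, name, school, occupation, training_year):
        code = secrets.token_hex(4)  # random 8-character code, e.g. 'a3f09c21'
        self._key[code] = {"name": name, "school": school,
                           "occupation": occupation, "year": training_year}
        return code

    def package_for_rating(self, code, solution_text):
        # Raters receive only the code and the solution - no name, school,
        # occupation or training year that could bias the rating
        return {"code": code, "solution": solution_text}

    def link_measurement_points(self, code):
        # Only the central office can match a subject's data from the first
        # and second recording times (longitudinal section)
        return self._key[code]

office = CentralCodingOffice()
code = office.register("Jane Doe", "TBZ Bremen", "electronics technician", 2)
package = office.package_for_rating(code, "...task solution...")
```

The same code word links a subject’s two measurement points and the context survey without revealing any identity to the raters.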

The use of test items in cross-over design results in a problematic situation from a
professional pedagogical perspective: after the first use of the test items, the sub-
jects—quite rightly—expect feedback on their results. As all test items are used
again at the second recording time in order to confront each subject with all four test
items in the portfolio, the items cannot be intensively discussed and didactically used
in vocational school lessons for methodological reasons.
Participation in the study was left up to the vocational school students. However,
in the cover letter of the study to the subjects, it was pointed out that the test is part of
the training in order to substantiate the serious character of the survey (see cover
letter).

7.5.4 Informing about the Objectives and the Implementation of the Test

When informing test participants about the objectives and implementation of the
tests, a distinction must be made between the initial performance of a COMET test
and the repeated participation of test groups in a test, e.g. in a longitudinal study with
two test times.
As a rule, when participating in a COMET test for the first time, the competence
level of the corresponding population is examined. The ‘initial situation’ is measured
when it is intended to also use the COMET test procedure as an instrument for
quality development.
All test participants are informed in writing about the objectives, the procedure
and the evaluation of the test.
In order to ensure the objectivity of implementation, it is essential to avoid
teachers/trainers challenging their trainees/students to a special effort through a
motivating speech or other motivating activities (competitive situation). Equally
problematic are derogatory remarks with which the test is presented, for example,
as an annoying duty that has nothing to do with the actual training.
If a test group takes part in a COMET test for the second time (in a longitudinal study), it should also be informed in writing about the test. It must be explained what a longitudinal study is; that each individual test participant, each of the classes involved, an education centre or even the test participants in a region learn whether and how the quality of the training has changed and how it can be improved; why—if necessary—a new context survey is carried out; and how and when the test participants are informed of the test results (see the example of a letter to the test participants).
Test Items for Electronics Engineers
Dear Trainees,
As part of your vocational training, we would like to confront you today with two
typical tasks for electronics technicians.
Participation in the test is part of your training. As a prospective electronics technician, we
ask you to take a serious look at the tasks and work out solutions. The tasks are not
class tests and are not early examinations for skilled workers or journeymen.
However, the results of the individual student are included and considered in the
performance assessment for vocational education.
The results of the tests serve to improve the school-based and in-company
vocational training for electronics technicians. On the one hand, they are discussed
at the vocational school and, on the other hand, they are made known to the training
companies.
Our binding commitment: Each student receives individual feedback on the status
of his/her professional competence development.
The tasks are deliberately formulated openly. This gives you the opportunity to
put everything on paper that you consider important in connection with solving the
tasks. Use this leeway.
The test consists of two tasks clearly separated by a break. You have 2 h to
process each task. At the end of the processing time, please place this coloured task
sheet and all papers you wish to submit for solution in the prepared DIN A4
envelopes. Please return the sealed envelopes to your teacher.
Thank you very much for your effort and support (Fig. 7.50).


Fig. 7.50 Letter to the test participants



Example: COMET Project Electronics Technician (Hesse)

In this project, the tests were carried out at the vocational school as learning location.
The subjects were confronted with the test items as part of the regular vocational
school curriculum. The first recording time at which the test items were applied
encompassed a period of 5 h. The test began at 08:00 hours with the instruction of the
test participants by the teacher, and the test ended at 13:00 hours. In order to achieve
a uniform approach of the subjects, the teachers were informed beforehand about the
exact course of the test by means of written instructions:
• Each student received a total of two test items for processing. Each task was
scheduled to take a maximum of 2 h to complete.
• The publication and processing of the two tasks were strictly separated in time.
There was a half-hour break between processing task 1 and task 2.
• One week before the test was due to take place, students were informed by their
teachers that a test was taking place and within what framework it was taking
place. A written instruction for the teachers was also prepared for this preliminary
information.
The results of the competence measurement in vocational education and training
suggest that the test results will also differ considerably due to the great heteroge-
neity of the trainees’ previous schooling and other personality traits. This is
reinforced by the different attractiveness of the occupations (their identification
potential) and the large differences in the commitment of the trainees. It was
therefore necessary to decide whether the survey of the trainees’ test motivation
was required in order to obtain additional data for the interpretation of the test results
if these differed greatly between different groups of pupils.

Time Schedule of the Project and the Problem of Test Duration

As a rule, either one-off tests or longitudinal examinations with two test points are
carried out. The decision for the temporal course of a project is directly related to the
project objectives. Longitudinal studies have the greater research and development
potential. The timing of COMET projects also depends on whether the transfer of
project results and experience is already systematically integrated into a project with
two test points. This particularly concerns the qualification of multipliers for the
implementation of COMET projects.

Test Scope

So far, two variants of the temporal scope of the test have been tested.
Variant 1 Each test participant works on a complex test item. The maximum
processing time is 120 minutes. The COMET test is not only a measurement
according to the performance principle, in which the processing time is included in
the test result as a performance factor. How well the test participants make use of the
solution space offered by each test item and how detailed and with what depth the
task solutions are justified are also examined. Each test participant is given sufficient
time to do this. As a rule, the time of 120 minutes is not exhausted, and test
participants rarely need more than 120 minutes. This is expressly made possible
because it is intended to record the level of competence and knowledge at which test
participants can solve, present and justify the tasks.
Variant 2 Each test participant works on two test items from two different profes-
sional fields. This test variant is based on the practice of final vocational examina-
tions, in which two ‘holistic examination tasks’ must usually also be solved in a
comparable time frame.
With this variant, it must be considered that, in the second test item, some of the
test participants may experience a slight drop in test motivation (COMET Vol. III,
194 ff.). This can be avoided by making it clear to the test persons that each of the
two test items measures vocational competence in a different field of action. A break
can be provided between the two test items—but organised in such a way that the test participants cannot communicate with each other about the test items.
The advantage of the test procedure with two complex test items (on two different
fields of action) is that the test results more accurately represent the professional
competence of the test participants (comparable to an examination). If, on the other
hand, the focus is on competence diagnostics to record the quality of training courses
and systems, then one complex test item per participant is sufficient to record the
competence development of training courses at a sufficiently high level of reliability.
The following requirements must be taken into account when carrying out the
tests.
It is necessary to regulate the access of test participants to the tools needed to
solve the test items. The authors of the test items must expressly specify these
regulations. Separate rules need to be developed for computer-based testing. Also,
additional criteria for the design of test items apply to computer-aided test
procedures.
A variant of the provision of ‘aids’ is that these are made available to each test
participant as an appendix to the test items. Within the framework of the pre-tests, it
is possible to test different forms of provision of aids. The decisive criterion for the
scope and form of the aids is their practical relevance: What do specialists of the
respective profession usually resort to when solving or processing professional
tasks?
The implementation of the tests should be prepared in such a way that test
participants need not pose any questions during the test. In addition, the established
rules of objectivity of implementation apply.

Online Rating

The processed test items (item solutions) are forwarded to the raters for dual online
rating in anonymised form. The online rating enables prompt feedback to the test
participants. The feedback documents contain the competence profile of the test
participant for the fields of action that were processed with the test items. The
responsible teachers/trainers explain the feedback documents to ‘their’ trainees—
comparable to the return of tests or exams. The feedback document also contains the
competence profile of the respective test group, so that each test participant can
classify his/her test result accordingly.
As the competence profiles of the individual test participants in a test group (e.g. a
vocational school or technical college class or a test group of a training company) are
generally similar to the competence profile of the overall group, the competence
profiles also show which partial competences in the training process were imparted
at which level of ‘success’.
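The aggregation of a dual rating into a criterion-based competence profile can be sketched as follows: the two raters’ scores per rating item are averaged, and the item averages are then grouped into criteria. The item codes, their grouping and the scores are invented for illustration and do not reproduce the actual COMET rating inventory:

```python
from statistics import mean

# Invented example: two raters independently score the same anonymised task
# solution on a four-point scale (0-3)
rater_a = {"K1.1": 2, "K1.2": 3, "K2.1": 1, "K2.2": 2}
rater_b = {"K1.1": 3, "K1.2": 3, "K2.1": 2, "K2.2": 2}

# Illustrative grouping of rating items into criteria
criteria = {"K1": ["K1.1", "K1.2"], "K2": ["K2.1", "K2.2"]}

# Average the two raters per item, then average the items of each criterion
profile = {
    crit: mean(mean([rater_a[item], rater_b[item]]) for item in items)
    for crit, items in criteria.items()
}
print(profile)
```

The resulting criterion scores form the competence profile that is fed back to the test participant and, aggregated over a class, to the responsible teachers and trainers.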
The responsible teachers and trainers have the opportunity to reflect on the
strengths and weaknesses of the training with reference to competence profiles and
to take appropriate quality development measures.

7.5.5 Research as a Cooperative Project between Science and Practice

In competence diagnostics in vocational education and training, the participation of
teachers and trainers is a prerequisite for the preparation, implementation and
evaluation of the research process.
1. Development of test items.
In all COMET projects, the test items are developed by teachers/trainers. The
scientific basis is formed by the COMET competence and measurement model
and the pre-test.
2. The development of the test items includes the development of the solution
spaces. Only the occupation-related project groups have the competence to
carry out these development tasks.
3. If rater training is included in the pre-test, then the rater groups determine for each
test item which of the 40 rating items are not relevant for the rating.
4. Rater training and rater practice go hand-in-hand with the acquisition of a
domain-specific understanding of the subject, which is reflected in the didactic
actions of the teachers/trainers.
7.5 Planning and Executing COMET Projects 279

Interpretation of Test Results in the Context of Feedback Workshops

In the feedback workshops, the researchers present their test results on the compe-
tence development of the test participants, structured according to the learning
groups involved (classes, companies, school locations), together with their interpre-
tations based on the analysis data. The focus is on the following questions:
• What competence levels do the test groups have?
• What are the differences between the different types of competence, differentiated
according to class, school location, training company, region and country?
• What is the degree of heterogeneity of the competence values within and between
the test groups?
• Which interpretations of the test results do the results of the context analysis
permit?
Teachers and trainers should then be given the opportunity to discuss the test
results, and the interpretations offered by the scientific support team, on the basis of
their own teaching and training experience. In this case, it is particularly important to
identify the causes of the differences in test results between the test groups involved
(e.g. the classes).
The same procedure is used for the research results on the survey of vocational
and organisational identity as well as vocational and organisational commitment and
on the assessment of training quality by the test participants.
One of the guiding principles of the feedback workshops is: Learning from each
other.
In a final step, teachers/trainers report on the application of the COMET compe-
tence and measurement model as a didactic tool for the design, organisation and
evaluation of vocational training processes. Further points include the exchange of
experience on the changed learning behaviour of trainees, the application of the
rating scale for new forms of evaluation of teaching/training projects and the
handling of obstacles in the introduction of COMET-based forms of vocational
teaching and learning.

Basis of the Identified Examples of Good and Best Practice

One aim of the feedback events is to use the test and discussion results to justify and
agree didactic and training organisational measures and to define a time frame in
which these can be implemented and their success verified.
Three overarching questions must be considered (see also Chap. 9):
1. How can the COMET competence model be translated into didactic action?
2. How can the development of competences be checked using a self-evaluation
concept based on the rating procedure?
3. Which competences are not covered by the COMET test procedure and how can
these also be promoted?
280 7 Conducting Tests and Examinations

The primary benchmark for the justification and evaluation of innovations is the
competence of the test participants to solve professional tasks completely and at a
high level of knowledge. The competence profiles of the test participants are useful
for identifying the problem-solving patterns of the test participants. At the same
time, they represent the expertise of their teachers and trainers (Zhou et al., 2015,
401 ff.).
Example The importance of feedback from test results—above all in the form of
competence profiles—is very clearly demonstrated in a German-Chinese follow-up
project in automotive mechatronics (Zhou et al., 2015). Students from industrial and
comprehensive colleges in several provinces as well as trainees and master students
from technical colleges took part in the Chinese subproject. The latter had partici-
pated in a pilot project for the introduction of the learning field concept and in this
context also dealt with the concept of the holistic solution of professional tasks on
the basis of competence profiles of the electronics engineer project. This is clearly
reflected in the test results (Fig. 7.51).

Fig. 7.51 Competence distribution of the test groups China (Industrial Colleges, Comprehensive
Colleges, Technician Colleges), Hesse, NRW (second- and third-year trainees)

Of the Hessian trainees and master students, 64% reached the second and third
competence levels, of which 46% reached the second and 18% the third. Of the
formally somewhat more highly qualified (senior) trainees and master craftsmen
studying at the skilled worker schools in China, 61% reached the second and third
competence levels, of which 43% reached the second and 18% the third. The share of
the risk group (those who do not advance beyond nominal competence) is 12% for
the Hessian trainees and only 5.4% for the Chinese trainees.
Of the NRW test group, slightly more than one-third of trainees (38%) reached
the second and third competence levels, of which 30% reached the second and 8%
the third.
The weakness of Chinese higher education (IC, CC) is evident in the very small
proportion of students who achieve the highest level of competence. This is only
1.7% of IC students and 7.4% of CC students.
The high degree of functional competence among Chinese CC students (65%)
and IC students (52%) is an indicator of a study with a subject-systematic orienta-
tion—and therefore with hardly any practical orientation. Only 22% of CC students
and 28% of IC students therefore have a professional work concept (procedural
competence).
At 38.24%, the proportion of risk students in the NRW test group is significantly
higher than in the comparable groups.
If one compares the best classes of the Hesse and China trainees, it becomes
apparent that the class of Hesse trainees has a TS = 46.7 and is therefore above the
competence level of the best Chinese class with a TS = 42.3, but that the Chinese
class has a much more homogeneous competence profile (V = 0.20 versus V = 0.28)
(Fig. 7.52).
The skilled worker and master craftsman students in all three test groups,
• Trainees (SII) (Intermediate Workers)
• Master students (Post-SII) (Senior Workers)
• Technicians,
show similar competence profiles.

Fig. 7.52 Comparison of the competence profiles of the two best test groups (classes) in China and
in Germany
The differences in competence levels cannot be explained based on the context
data. This primarily applies to the difference in competence level between trainees
and master students. In a feedback workshop with the teachers/lecturers, it was
reported that the trainees were familiarised with the concept of complete task
solution and the self-evaluation of the learning outcomes using the COMET evalu-
ation scale as part of a pilot project. According to the teachers/lecturers, this is not a
‘training to the test’, as the aim of this model was to introduce the learning field
concept into training on the basis of the COMET competence and measurement
model. The trainees were not aware of the test items.

7.5.6 Transfer Activities

One of the most important steps in project planning is the development of a transfer
concept. In this regard, two transfer methods compete with each other: a quasi-
experimental project design and innovation projects with an integrated transfer
concept.
In a quasi-experimental project design, the focus of the project activities lies on
the verification of a pedagogical-didactic model by scientific support. A consistent
control of the framework conditions during the course of the project is the prereq-
uisite for securing the project result. Based on this result, the project-implementing
agency then decides on the implementation of the test model—e.g. in the form of
legal regulations, decrees and other administrative provisions.
The alternative transfer variant is to set up a pilot project from the outset as an
innovation project with a transfer component. This variant is widespread. Soon after
they were established—in the tradition of experimental research—the model test
programmes of the Bund-Länder Commission (BLK) and the so-called economic
model tests (controlled by BIBB) were increasingly defined as innovation projects
(Deitmer et al., 2004). However, the implementation of sustainable transfer methods
is considered a weakness of this model experiment variant (Rauner, 2004).
For the introduction of competence diagnostics in vocational education and
training as a method of quality assurance and quality development, there are
numerous possibilities for a successful transfer of project results. The challenge
here is to implement a multiple transfer concept. This requires close coordination of
the project activities between the actors involved in the innovation project in
vocational training practice, the steering and support systems and the scientists
involved.
A key function for the transfer of the COMET methodology is played by the final
examinations of vocational training programmes (Rauner, 2015a). Final
examinations are regarded as the 'secret curricula' and, in vocational education and
training, they require cooperation between experts in vocational training planning
and practice, thereby strengthening cooperation between learning locations.

Documentation and Publication of Project Results

The documentation and publication of project results is a necessary, if not sufficient,
prerequisite for the success of projects. With a view to sustainability, the documen-
tation and publication of the project results is designed to be specific to the
addressees. It includes in particular
1. feedback of the test results to the test participants so that they can be reflected
upon in dialogue with their teachers;
2. the preparation of the analysis results for feedback events of scientific support
with the project groups; this form of feedback is also a step of an in-depth analysis
of the test results, which is only possible with the involvement of the contextual
knowledge of teachers and trainers. Comprehensive documentation of the test
results at the level of the test groups involved (e.g. classes) is a prerequisite for
reflection of the test results in the local teacher/trainer teams.
3. For the experimental model provider, a differentiation can be made between a
report summarising the main results and recommendations for the decision-
makers and a supplementary data report for the experts in education administra-
tion and those interested in science.
4. In this young field of vocational training research, the publication of new findings
in specialist journals is a prerequisite above all for the qualification of young
researchers and the expansion of research activities.
Chapter 8
Evaluating and Presenting the Test Results

8.1 Classification of Individual Performance in Professional Competence Levels

The individual test performance is determined on the basis of open, complex test
tasks with a processing time of up to 120 min. Each solution is evaluated by two
teachers independently of one another. Each of the eight criteria is operationalised by
five items. The evaluators must first decide which of the eight criteria and which of
the 40 items can be applied to the respective task. Test practice has shown that the
test items were formulated in such a way that almost all forty items are used. It was
specified that
• all eight criteria of the competence model's requirement dimensions are captured
by the one or two test task(s) to be processed by the test participants,
• at least two of the five items with which each competence criterion is
operationalised must be used. Items that are insignificant for a test item are
deleted by the raters.
For each item, the raters indicate the degree to which it has been fulfilled
(Table 8.1).
This interval-scaled rating uses the numerical gradations 0–3. Four scale levels
were chosen in order to exclude neutral answers. The contextual justification for this
is the allocation of the three successive levels of work process knowledge to the
numerical gradations. Table 8.2 exemplifies how the evaluations of the raters are
converted into scores.
The average value of 2.4 is obtained by dividing the sum of the item averages for
the functionality criterion (12.0) by the number of items (5). The average values are
rounded to one decimal place and multiplied by a factor of 10. The example thus
yields a point value of PF1 = 24.0.
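The conversion from item ratings to a criterion score can be sketched as follows (a minimal illustration; the function name and list representation are assumptions, not part of the COMET software):

```python
def criterion_score(rater1, rater2):
    """Two raters rate the five items of one criterion on the 0-3 scale.

    The item-wise averages of both ratings are averaged, rounded to one
    decimal place and multiplied by 10 to obtain the criterion score.
    """
    item_means = [(a + b) / 2 for a, b in zip(rater1, rater2)]
    mean = sum(item_means) / len(item_means)   # Table 8.2: 12.0 / 5 = 2.4
    return round(mean, 1) * 10                 # -> PF1 = 24.0

# Worked example from Table 8.2 (Functionality criterion):
print(criterion_score([3, 2, 2, 3, 2], [3, 2, 3, 2, 2]))  # 24.0
```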

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 285
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_8

Table 8.1 Requirement and corresponding score per item

Requirement:  fully met   partly met   not met   in no way met
Score:        3           2            1         0

Table 8.2 Example for determining the scores for the Functionality criterion

              Rater 1   Rater 2   Average value
Criterion 1   3.0       3.0       3.0
Criterion 2   2.0       2.0       2.0
Criterion 3   2.0       3.0       2.5
Criterion 4   3.0       2.0       2.5
Criterion 5   2.0       2.0       2.0
∑                                 12.0

Table 8.3 Calculation of the scores for the three competence dimensions

Competence dimension          Competence criteria   Scores (criteria)   Scores (dimension)
Holistic shaping              8 PG1                 11                  PG 12.0
competence (DG)               7 PG2                 13
                              6 PG3                 12
Procedural competence (DP)    5 PK1                 18                  PP 17.3
                              4 PK2                 16
                              3 PK3                 18
Functional competence (DF)    2 PF1                 24                  PF 22.5
                              1 PF2                 21
Professional competence                                                 TS: 51.8

8.1.1 Determination of the Scores for the Three Competence Dimensions

The scores for the competence dimensions are determined as arithmetic averages of
the competence criteria defining a competence dimension. The scores are rounded to
one decimal place (Table 8.3).
If each test participant processes two test items, the average of the results of the
two test items is used to calculate the results for each test participant.
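The aggregation in Table 8.3 can be sketched in the same way (an illustrative helper; the variable names follow the text, the hypothetical second item result is invented):

```python
def dimension_score(criterion_scores):
    """Arithmetic mean of the criterion scores, rounded to one decimal."""
    return round(sum(criterion_scores) / len(criterion_scores), 1)

# Values from Table 8.3:
pf = dimension_score([24, 21])        # functional competence   PF = 22.5
pp = dimension_score([18, 16, 18])    # procedural competence   PP = 17.3
pg = dimension_score([11, 13, 12])    # shaping competence      PG = 12.0
ts = round(pf + pp + pg, 1)           # professional competence TS = 51.8

# If a participant processes two test items, the final result is the
# average of both item results (second item result here is hypothetical):
ts_total = round((ts + 48.2) / 2, 1)  # -> 50.0
```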
The data on the individual test subjects can be aggregated according to various
criteria such as subject classes, vocational school locations, federal states, states and
characteristics of the test participants.

8.1.2 Sub-Competences, Competence Dimensions and Competence Levels

The eight competence criteria represent sub-competences, which make up the
professional competence. In publications on the COMET competence model, the
sub-competences are often referred to as vocational competence criteria if the focus
is on competence development as a whole.
The basis for the rating is the evaluation of the items with which the partial
competences are operationalised.
The psychometric evaluation of the competence model shows (→ 6) that a
distinction must be made between the three competence levels of functional, proce-
dural and holistic shaping competence as well as the competence dimensions of the
same name, DF, DP and DD. The competence profiles show the characteristics of
both the eight sub-competences and the three competence dimensions.
This dimensionality of the competence profiles illustrates the strengths and
weaknesses of the more or less homogeneous competence development of test
persons or test groups at the level of sub-competences and competence dimensions.
This means that the level of holistic shaping competence includes functional and
procedural competence. A test participant who reaches the level of procedural
competence will then also have functional competence.
The following definition applies to the diagrams showing the distribution of
competence levels. CF (functional competence) or CP (procedural competence)
indicates whether a participant or what proportion of a test group has reached this
level (but no higher). However, this also means that all participants who reach the
second and third competence levels also have the sub-competences of the lower
competence levels.

8.1.3 Classification of Individual Performance


in Professional Competence Levels

According to the specifications described above, the value 'fully met' corresponds to
a score of 22.5 (with a maximum of 30 points). In examination practice, a (sub-)
examination is considered passed if at least 50% of the attainable points are
achieved, which here refers to the 22.5 points of the uppermost result interval.
According to this 50% rule, the minimum score for reaching a competence level is
therefore set at 11.3.
When assigning a sub-result to one of the three competence levels (competence
characteristics), the dual character of vocational competences, as successive and
interrelated competence levels on the one hand and as independently existing
competence characteristics on the other (cf. COMET Vol. I, Sect. 4.3), is considered:
both aspects of competence are taken into account when defining the criteria for
assigning individual test results to a competence level. In this respect, the definition
of the minimum scores for the competence levels is criterion-based.

Fig. 8.1 Example for 17 test persons and their test results
The allocation of individual performances to competence levels follows a
criterion-oriented interpretation of the task solutions. This is based on the compe-
tence model (→ 4.2).
The competence level of a test person is determined in accordance with the
criteria of the operations he or she masters (Table 8.3).
Two evaluations are determined to assess the test results for individual
participants:
1. The number of points achieved,
2. The level of competence.
Within the framework of the test evaluation, it was examined whether the
competence level of functional competence can be assumed as basic competence
for the other competence levels. The results in Sect. 4.2 indicate that both interpre-
tations—successive and interrelated competence levels as well as independently
existing competence dimensions—are justified. As the results of the Latent Class
Analysis show, it is appropriate to assume an increasing difficulty of the competence
dimensions (→ 4.2). At the same time, the analyses have also shown that the
dimensions are independent of each other in terms of content, which justifies
describing them as different competence dimensions.
The COMET consortium has provided the following definitions for the assign-
ment of trainee performance to a competence level (Fig. 8.1).

Table 8.4 Rules for the compensation of missing functional competence scores

Achieved score for 'Functional       Minimum score required for CP + CD to reach
competence' (PF)                     competence level 1 (∑(PP + PD))
> 11.2                               –
10.3–11.2                            3
9.3–10.2                             6
8.3–9.2                              9
≤ 8.2                                No further compensation possible

Competence Level 0: Nominal Competence

The test persons who only reach this competence level do not yet have professional
competence. This applies whenever the conditions for reaching the first level of
competence are not met. In Fig. 8.1, this applies to test persons 13 to 17.
Competence Level 1: Functional Competence
In order to achieve this level of competence, the following conditions apply:
1. A test person will reach competence level 1 if his/her score for functional
competence is higher than 11.2 and if the conditions for reaching competence
level 2 are not met.
2. If the score for functional competence is less than or equal to 11.2, the missing
scores of up to 8.3 can be compensated by scores achieved in the other two
competence dimensions ‘Procedural competence’ and ‘Holistic shaping compe-
tence’. Table 8.4 shows the applicable rules.
Table 8.4 also shows that, with a score of 8.2 or less for functional competence,
competence level 1 can no longer be reached in any case.
In Fig. 8.1, test persons 2, 5 and 6 as well as 8 to 12 reach the first competence
level.
Competence Level 2: Procedural Competence
In order to achieve this level of competence, the following conditions apply:
1. A test person reaches competence level 2 if both the score for functional compe-
tence and the score for procedural competence are greater than 11.2 and if the
conditions for reaching competence level 3 are not met.
2. If the score for procedural competence is less than or equal to 11.2, the missing
points—analogous to condition 2 for reaching the first competence level—can be
compensated by scores achieved in the competence dimension ‘Holistic shaping
competence’. Table 3.3 shows the applicable rules.
3. Table 8.5 shows that, with a score of 8.2 or less for procedural competence,
competence level 2 can no longer be reached in any case.
In Fig. 8.1, test persons 1, 3, 4 and 7 reach the second competence level.

Table 8.5 Rules for the compensation of missing procedural competence scores

Achieved score for 'Procedural       Minimum score at CD to achieve
competence' (PP)                     competence level 2 (PD)
> 11.2                               –
10.3–11.2                            3
9.3–10.2                             6
8.3–9.2                              9
≤ 8.2                                No further compensation possible

Competence Level 3: Holistic Shaping Competence

The third competence level has been reached by those who have achieved more than
11.2 points each for all three dimensions of competence. In Fig. 8.1, no test person
meets this requirement.
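The assignment rules above, including the compensation in Tables 8.4 and 8.5, can be sketched as follows (an illustrative implementation under the printed thresholds, not the official COMET scoring code; function names are assumptions):

```python
def _passes(score, compensation):
    """True if a dimension score exceeds 11.2, possibly via compensation."""
    if score > 11.2:
        return True
    if score >= 10.3:
        return compensation >= 3
    if score >= 9.3:
        return compensation >= 6
    if score >= 8.3:
        return compensation >= 9
    return False                     # 8.2 or less: no compensation possible

def competence_level(pf, pp, pd):
    """0 = nominal, 1 = functional, 2 = procedural, 3 = holistic shaping."""
    if pf > 11.2 and pp > 11.2 and pd > 11.2:
        return 3                     # all three dimensions above 11.2
    if _passes(pf, pp + pd) and _passes(pp, pd):
        return 2                     # PF and PP met (PP possibly compensated by PD)
    if _passes(pf, pp + pd):
        return 1                     # only functional competence reached
    return 0                         # nominal competence

print(competence_level(22.5, 17.3, 12.0))   # 3
print(competence_level(10.5, 9.0, 5.0))     # 1 (PF compensated, PP not)
```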

8.2 Graphical Representation of the Test Results

The test results can be displayed in different ways. The representation in Fig. 8.1
summarises the results of a school class.
In addition, a network diagram is created for each individual test participant
(Fig. 8.8). This representation, which includes not only the three competence
dimensions but also the eight competence criteria, emphasises the multidimensional
character of the competence model.

8.2.1 Competence Levels

The COMET competence model is based on a concept of competence dimensions
that considers competence levels to be relatively independent competence dimen-
sions. The representation of the test results in the form of network diagrams pays
special attention to this concept. The eight competence components allow a complete
description of professional competence profiles.
According to Schecker and Parchmann, the definition of a hierarchy of compe-
tence levels has its origins in the grading interest of educational planning. In contrast,
it should be borne in mind that the concept of competence development facilitates
the qualitative description of abilities which are not necessarily to be classified in an
ordinal scale (Schecker & Parchmann, 2006, 51). Therefore, in the COMET com-
petence model, functional competence (first vocational competence level) is not a
subordinate or inferior competence, but rather a competence which, on the one hand,
has the significance of a basic competence and, on the other hand, also has the
functions of a relatively independent quantity in a multidimensional space of
multiple competence. The lack of functional abilities or their inadequate

Fig. 8.2 The graphical representation of competence level distribution in a test group of vocational
school students, n = 27

development can be compensated neither by procedural competences (second com-
petence level) nor by abilities assigned to the third competence level. The degree to
which competence levels represent successive and interrelated competences and the
degree to which they are dimensions of multiple competence require empirical
testing.
On the basis of the survey data of the first test date (April 2008), the measurement
and evaluation procedure was psychometrically analysed (COMET Vol. II, Erdwien
& Martens, 2009). This confirmed, among other things, that competence levels
represent successive and interrelated skills:
In compliance with the theoretical model according to which higher performance in
‘procedural competence’ can only be achieved if ‘functional competence’ is sufficiently
developed and ‘shaping competence’ continues to be developed to a greater extent only if the
competence levels ‘functional competence’ and ‘procedural competence’ are sufficiently
developed, all types show a tendency towards a pronounced drop in the assessment ratings.
(COMET Vol. II, 80).

The results of the analysis also show that the competence levels and components
are relatively independent (COMET Vol. II, 67 ff.). This dual structure of compe-
tence components, both as competence levels and as dimensions of competence
profiles, considerably expands the possibility of evaluating test results. In this case, it
should be borne in mind that both forms of representation, the competence levels and
the competence profile representation of vocational competence, represent two
complementary evaluation perspectives, each of which in itself is subject to abridge-
ments. For practical handling of the illustrated test results, it is therefore advisable to
interpret both forms of representation in relation to individual results in context.
This is exemplified by the results of a test group of students from technical
colleges (Fig. 8.2).

Fig. 8.3 Comparison of competence level distribution in several test groups (results from 2009)

Fig. 8.2 shows that, with 41%, the first competence level is the most pronounced;
30% achieve the second and only 26% the third competence level. This could be
misread as meaning that only 41% of the test persons reach the first competence
level. Read correctly, the diagram states: 96% of the test persons reach the first
competence level, 56% of them the second and 26% the highest competence level.
The risk group comprises 4%.
This means that, as a rule, all test persons who reach the second and third
competence levels also have functional competence—as a basic competence—
which can be assumed for the second and third competence levels. This also applies
to the relationship between the second and third levels of competence. The third
competence level includes the skills of the first and second competence levels.
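The reading rule can be made explicit with the shares from Fig. 8.2 (a small illustration; note that the printed shares of 41/30/26/4 sum to 101 due to rounding, which is why the text reports 96% rather than 97% for the first level):

```python
# Share of participants whose *highest* level is the given one (Fig. 8.2):
exclusive = {"level 1": 41, "level 2": 30, "level 3": 26}   # risk group: 4

# Share of participants who *reach* a level = cumulative sum from the top:
reaches_3 = exclusive["level 3"]                # 26
reaches_2 = exclusive["level 2"] + reaches_3    # 56
reaches_1 = exclusive["level 1"] + reaches_2    # 97 (reported as 96 after rounding)
print(reaches_1, reaches_2, reaches_3)          # 97 56 26
```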
For the comparative representation of a larger number of test groups, a form of
representation is suitable in which the distribution of competences at competence
levels is represented as shares of 100% in a bar (Fig. 8.3).
This form of representation serves to clearly illustrate the differences between test
groups with regard to the competence levels achieved.

8.2.2 Differentiation according to Knowledge Levels

In the COMET measurement model and in the rating procedure, the levels of work
process knowledge are reflected in the evaluation of the solution aspects on the basis
of items that are rated according to a four-stage interval scale (0–3) (Table 8.6).

Table 8.6 Assignment of the interval scale to the levels of work process knowledge

                                    Fully met   Partly met   Not met     In no way met
Interval scale 0–3                  3           2            1           0
Levels of work process knowledge    Know Why    Know How     Know That   –

Fig. 8.4 Distribution of overall scores for nominal, functional, procedural and holistic shaping
competence

The assignment of the scale values 1–3 to the levels of work process knowledge is
based on a pragmatic justification. An item is evaluated as 'fully met' if the
respective solution aspect is not only considered but also 'justified in detail'. Each
test task therefore states: 'Justify your solution in full and in detail'. If this is
completely successful, then this corresponds to the level of action-reflecting or
'Know why' knowledge. An item is not (yet) met if the underlying rules for the
complete solution of a task were considered but could not be justified. This
corresponds to the level of 'Know that' or the value '1'. An item is 'partly met'
if the corresponding solution aspect could be justified in principle, but without
adequately taking the situational context into account.
The definition of the three successive and interrelated competence levels on
which the COMET measurement model is based leads, in test practice, to relatively
large, overlapping score intervals for the overall scores (TS).
This means that it is possible for subjects with a higher overall score to be placed
at a lower level of competence. This happens whenever they reach this level at a
higher level of knowledge.
Figure 8.4 shows that, for example, a TS of 45 can mean that a test person/test
group has achieved both the competence level Procedural competence (high) and the
competence level shaping competence (low). This differentiating form of evaluation
and representation of the competence characteristics depicts the reality of vocational
education and training much more validly and accurately than a score on a

Table 8.7 Differentiation of competence levels according to knowledge levels

Competence level               5% Quantile   Lower third   Mean value   Upper third   95% Quantile
Nominal competence             6.2           10.7          13.1         14.9          20.8
Functional competence          17.2          22.3 (25.0)   24.7         26.8          32.9 (34.5)
Procedural competence          29.2          33.4 (35.0)   38.3         40.0 (41.0)   51.1
Holistic shaping competence    40.0          47.7          53.1         55.9          71.5

(Values in parentheses: model-based limit values; cf. text.)

continuous competence scale which defines competence levels in accordance with


quantitative level differences.
The two forms of representation can be summarised for representations aimed at
the clear hierarchisation of the test persons. This requires the introduction of an index
indicating whether a certain level of competence is associated with a relatively high,
low or average total score. The introduction of such an additional index makes it
possible to classify the test persons more precisely according to the work process
knowledge incorporated in their competence (Table 8.7).
A percentile band is calculated for each competence level to determine the
thresholds for differentiation in accordance with the three levels of work process
knowledge: high, medium and low. The 33rd and 66th percentiles are determined for
each percentile band. The resulting three equally large subdivisions of the respective
competence levels represent the three successive and interrelated levels of work
process knowledge (Fig. 8.5).
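The tertile subdivision within a competence level can be sketched with the standard library (the score data and function name below are illustrative assumptions; the actual thresholds are those in Table 8.7):

```python
import statistics

def knowledge_band(level_scores, value):
    """Place a total score in the low/medium/high band of its level.

    The 33rd and 66th percentiles of the level's score distribution serve
    as thresholds, corresponding to the three successive levels of work
    process knowledge.
    """
    percentiles = statistics.quantiles(level_scores, n=100)  # cut points 1..99
    p33, p66 = percentiles[32], percentiles[65]
    if value <= p33:
        return "low"       # action-leading knowledge (Know That)
    if value <= p66:
        return "medium"    # action-explaining knowledge (Know How)
    return "high"          # action-reflecting knowledge (Know Why)

# Invented score distribution of one competence level:
scores = [30, 32, 33, 35, 36, 38, 40, 42, 44, 47, 50]
print(knowledge_band(scores, 34), knowledge_band(scores, 38),
      knowledge_band(scores, 45))   # low medium high
```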
Figure 8.5 shows that the percentiles of the three competence levels (5%, 33%,
50%, 66%, 95%) linearly increase from the first (CF) to the third (CD) competence
level. If one applies the model of a linear increase in percentile values from the first
to the third competence level to the determination of the differentiation according to
the three levels of work process knowledge incorporated in the vocational compe-
tences, then the result comprises the limit values given in Table 8.7. The values show
great similarities between the empirical and the model-based values for the
standardised differentiation according to the action-leading (Know That), action-
explaining (Know How) and action-reflecting knowledge (Know Why).
Depending on the overall score achieved, the results within the competence levels
can be differentiated into ‘low’, ‘medium’ and ‘high’, so that the evaluation can be
further refined. This differentiation corresponds to the three levels of occupational
work process knowledge (Fig. 8.6):

Work process knowledge          Level
Action-leading knowledge        Know that
Action-explaining knowledge     Know how
Action-reflecting knowledge     Know why

Fig. 8.5 Standardised subdivision of competence levels into ‘low’, ‘medium’ and ‘high’

Fig. 8.6 Example: Competence levels differentiated according to low/medium/high (NRW


carpenters)

8.2.3 Transfer of Competence Levels Differentiated according to Knowledge Levels

In many countries, test or examination performance is graded on a scale that usually
ranges from '1' (very good) to '6' (unsatisfactory).
• 1 – very good
• 2 – good
• 3 – satisfactory
• 4 – adequate
• 5 – inadequate
• 6 – unsatisfactory
This grading scale is also used in the examination practice of vocational education and training. In this case, an examination performance that does not reach the grade ‘adequate’ is assessed as a failed examination.
Figure 8.7 shows the allocation of competence/knowledge levels to grades.
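Figure 8.7 contains the authoritative allocation. Purely as a hypothetical illustration of how such a lookup can be coded, the concrete grade assignments below are invented rather than read off Fig. 8.7:

```python
# Hypothetical allocation of (competence level, knowledge level) to grades.
# The concrete numbers are an assumption for illustration only; the
# authoritative allocation is the one shown in Fig. 8.7.

GRADE_TABLE = {
    ("shaping", "high"): 1,      # CD with action-reflecting knowledge
    ("shaping", "medium"): 2,
    ("shaping", "low"): 2,
    ("procedural", "high"): 3,   # CP
    ("procedural", "medium"): 3,
    ("procedural", "low"): 4,
    ("functional", "high"): 4,   # CF
    ("functional", "medium"): 4,
    ("functional", "low"): 5,
}

def grade(competence_level, knowledge_level=None):
    if competence_level == "nominal":  # risk group: below CF, i.e. failed
        return 5
    return GRADE_TABLE[(competence_level, knowledge_level)]
```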
For the evaluation of vocational competence, however, marks or scores are less
significant. A grade assigned to a competence does not say anything about the competence profile of an examinee or about which professional tasks can be assigned to him or her for independent processing. The same applies to the selection of appropriate further training courses to close gaps in the competence profile.
Figure 7.12 (→ 7.2.4) shows one way in which examination performance can be presented in a test certificate.

Fig. 8.7 Allocation of competence levels to grades



8.3 Competence Development as a Competence Profile

The representation of the same test results in the form of a network diagram
(Fig. 8.8) realistically illustrates the characteristics of all eight competence compo-
nents, but not the effect of the successive and interrelated competence levels. This is
why the pointers for the development of the competence levels also appear here as
independent of each other. In contrast, the length of the pointers here represents the
extent of the competence development in the form of scores. The average score for
functional competence (CF) is 14.6 points. The average value of procedural compe-
tence (CP) is 11.7 points and that of shaping competence (CD) is 9.2 points. These
values do not contradict the other forms of representation of occupational
competence.
Modelling the requirements dimension of the COMET competence model
(→ 4.2) is based on
1. the concept of the complete solution of professional tasks,
2. differentiation according to the three levels of successive and interrelated work
process knowledge: action-leading, action-explaining and action-reflecting.
In the scaling model for vocational education and training, the difficulties of the individual test items are not assigned to positions on a single competence scale. Instead, the probable score values of the competence dimensions functional, procedural and shaping competence form three overlapping distributions. For example, the average functional competence can be defined as a value between 0.33 and 0.66 on the CF scale (functional competence). Accordingly, a
medium to high ‘functional competence’ value can correspond to a low to medium
‘procedural competence’ value. Based on more than 7000 test results, the distribu-
tion for the test groups was determined in accordance with test levels and the total
score. This distribution forms the basis for the definition of low, medium and high competence development of the respective competence level.

Fig. 8.8 Average competence profile of a test group of vocational school students (type ‘Vocational education and training’), n = 27

When representing nominal competence (risk group), the differentiation of knowledge levels is omitted,
as this level is below the first competence level of functional competence. This
differentiating form of evaluation and representation of the competence characteris-
tics depicts the reality of vocational education and training much more validly and
accurately than a score on a continuous competence scale which defines competence
levels in accordance with quantitative level differences.
This means that in the development of each of the eight competence criteria, as
well as the functional, procedural and holistic shaping competence, a distinction can
be made between the three levels of work process knowledge. A test task can be
solved, for example, at the level of shaping competence, by
• taking into account all relevant solution criteria.
If the complete task solution is based on the level of action-leading knowledge,
then this means that the test person knows all relevant rules and can apply them to the
task at hand without being able to explain them technically.
• also being able to explain the complete task solution technically—with reference
to the relevant professional work process knowledge.
This knowledge is the basis for ‘understanding what you do’ and therefore also
for a sense of responsibility and quality as well as a higher level of shaping
competence.
• justifying the complete task solution in relation to the situation and weighing the
solution criteria against each other as well as selecting and justifying the best
possible alternative solution professionally.
This level of knowledge constitutes the highest level of shaping competence.
These three successive and interrelated knowledge levels can therefore be distin-
guished for each of the three competence levels.
In psychology, stage models serve to describe qualitative differences in develop-
ment processes. This is exemplified by the levels of moral and intelligence devel-
opment of children and adolescents (Kohlberg, 1969; Piaget, 1973).
In curriculum theories, a qualitative distinction is made between successive and
interrelated knowledge levels. In Natural Science Didactics, Bybee (1997) intro-
duced a didactically founded level concept that has found its way into the PISA
project and into the COMET competence model. The competence levels identified in
the COMET competence model are based on a qualitative justification of competence development. Since competence development in vocational work and vocational learning can also be represented as multidimensional competence profiles in accordance with the concept of the complete solution of vocational tasks, with which the quality of the competence development achieved can be quantified, a continuous competence scale that defines levels merely by quantitative differences is ruled out for the definition and scaling of vocational competence. Professional competence is measured as the ability to exploit the specific scope for solutions or design. The resulting format of

open, complex test tasks requires a capability-based measurement model and a corresponding rating procedure.
Professional competence cannot be measured with a large number of more or less
difficult test items that can be solved correctly (or incorrectly). In the professional
world, it depends on the understanding of the context. For example, electronics
technicians in energy and building technology do not have to deal with correct or
incorrect lighting systems, just as cabinetmakers do not produce ‘correct’ furniture.
It is always (and inevitably) a matter of exhausting the respective solution spaces: of
searching for adequate and good compromises when weighing up all relevant
solution criteria.

8.3.1 Homogeneous versus Selective Competence Profiles

The representation form of the competence profiles using network diagrams contains
two items of information: the three competence levels and the eight competence
components.
For this purpose, a measure of the homogeneity or variance of the competence profiles can be determined independently of the level of the total score: the coefficient of
variation. It is quite possible that the test persons solve a test task at a relatively low
level, but nevertheless with equal consideration of all eight solution aspects. The
coefficient of variation V indicates whether the competence profile is more balanced
or imbalanced. A low coefficient of variation stands for relatively high homogeneity
of the competence profile, while high values stand for low homogeneity (Fig. 8.9).
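The coefficient of variation V of a profile can be computed directly from the eight criterion scores. In the sketch below, the criterion names are paraphrased and the score values invented for illustration:

```python
from statistics import mean, pstdev

def coefficient_of_variation(profile):
    """V = standard deviation / mean of the eight criterion scores.
    Low values indicate a balanced (homogeneous) profile, high values
    an imbalanced (selective) one."""
    scores = list(profile.values())
    return pstdev(scores) / mean(scores)

# Invented example profiles over paraphrased COMET criteria:
balanced = dict(clarity=10, functionality=10, utility=9, economy=10,
                work_process=9, social=10, environment=9, creativity=10)
selective = dict(clarity=18, functionality=17, utility=12, economy=8,
                 work_process=6, social=2, environment=1, creativity=4)
```

Here the balanced profile yields a V of about 0.05, the selective one about 0.72.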
An analysis of the network diagrams shows that the criteria ‘environmental’ and
‘social compatibility’ in particular have the lowest significance in vocational train-
ing. In the world of work, however, this would have a considerable impact as,
depending on the task at hand, violations of environmental and social compatibility
regulations can have far-reaching consequences. In the demand for the professional
execution of a work order, ‘professional’ is often associated with the categories
‘specialist’ or ‘technical’ in the context of scholastic learning. In in-company
vocational training, on the other hand, the ‘professional’ category refers to ‘skilled’
work. If the vocational school succeeds in designing vocational education and
training from a work and business process-related perspective, this also means a
change of perspective in the technical understanding (Bauer, 2006), as corresponds
to the COMET concept of the complete solution of vocational tasks. In contrast, if
‘professional’ is associated with specialist science, we lose sight of work and
business processes. The consequence is that vocational education and training moves away from its function of imparting vocational competence (cf. KMK, 1996).
Conclusion The COMET test procedure enables the representation of competence not only in the form of competence levels, but also in the form of competence profiles. This is mainly due to the didactic quality of the test procedure. Teachers/trainers as well as trainees and students can read directly from the competence profiles which of the eight sub-competences in an educational programme or learning group have been developed and which have been neglected. At the aggregation level of local, regional and national education systems, or in an international comparison of vocational education and training systems, competence profiles can also be used to draw conclusions about the quality of training regulations, training plans and training courses as well as the strengths and weaknesses of vocational education and training systems.

Fig. 8.9 Differentiation of the competence profiles according to the total score (ATS) and the coefficient of variation: (a) E-B, class no. 7, n = 26; (b) E-B, class no. 5, n = 18; (c) E-B, class no. 24, n = 19; (d) E-B, class no. 23, n = 17 (results COMET Electronics Engineers (Hesse) 2009)

Although the competence profiles obtained in the COMET projects in a large number of very different occupations show that the change of perspective from a subject-systematic, technically structured VET to a VET concept based on work and business processes has largely been implemented in educational programmes and in the theoretical discussion, the broad implementation of this central idea in VET practice has still to be achieved.
An analysis of the test results—especially the competence profiles of the test
participants and the test groups—together with the teachers/trainers and lecturers

involved in the project is an essential element of quality development. This is mainly due to the extended (holistic) technical understanding that the teachers acquire in the COMET projects.

8.4 Heterogeneity of Professional Competence Development

The differences in the vocational school performance of trainees and students are a
well-known phenomenon. It is not uncommon for high-school graduates and young
people without a lower secondary school leaving certificate to learn the same
profession. This tends to occur more frequently in craft trades where, for example, a high-school graduate wants to continue his/her training as a master craftsman after passing the journeyman’s examination in order to assume responsibility in the parental company and to be able to act as a trainer. The heterogeneity of performance in these vocational school classes is therefore very high. In
occupations under the auspices of the IHK (German Chamber of Industry and
Commerce), the proportion of trainees with university entrance qualifications has
risen significantly in recent years, in line with the motto: ‘First learn a “real” profession before tackling the jungle of new degree courses’. In 2013, for example, 30% of trainees in IHK occupations in their first year of training had a higher education entrance qualification (cf. Report on Vocational Education and Training 2014, 28 f.).
The heterogeneous performance structure has less of an impact on trainees in the
training companies. Companies have the opportunity to select applicants who meet
their requirements. This informal selection leads, for example, to the fact that the
majority of apprentices in occupations such as media designer, industrial clerk and
IT occupations are high-school graduates (‘high-school graduate occupations’). In
these occupations, this form of informal selection also tends to reduce the heteroge-
neity of the performance structure in the classes of vocational schools.
In countries with school-based VET systems, a distinction is generally made
between two to three successive and interrelated school-based programmes: voca-
tional colleges, technical colleges, higher technical colleges and, more recently,
so-called ‘vocationally qualifying bachelor’s degree programmes’ based thereon.
If the admission requirements for the vocational track or the academic track for
higher education programmes are controlled by selection and admission regulations,
this reduces the spread of competence development among pupils and students.
The phenomenon of heterogeneity in vocational education and training, the extent
of which has so far been underestimated, will be presented below and analysed on
the basis of empirical results, whereby the degree of heterogeneity measured in
previous COMET projects is shown first. This is followed by an interpretation of the
causes for the heterogeneous performance structure in vocational education and
training programmes and considerations on ‘dealing with heterogeneity’.

8.4.1 Heterogeneous Levels of Competence

The heterogeneous performance structure becomes particularly clear when the classes participating in a COMET project (of a profession or occupational group)
are differentiated into the proportion of pupils at risk (nominal competence) and the
proportion of those who achieve the highest level of competence. This was demon-
strated, for example, in the COMET Electronic Technicians project (NRW), in
which nine classes of electronic technicians for industrial engineering (E-B) and
twelve classes of electronic technicians specialising in energy and building technol-
ogy (E-EG) took part in 2013/14.
The proportion of trainees in the E-B classes who reached the highest level of competence ranges from 48% to 0%. In the E-EG classes, the dispersion from 44% to 0% is similarly large. This is complemented by the pronounced variance in the proportion of trainees assigned to the risk group (nominal competence): this value varies between 13% and 64% for the E-B classes and even between 17% and 79% for the E-EG classes (Figs. 8.10 and 8.11).
Another aspect of heterogeneity is the differences in the distribution of test
subjects among competence levels measured in related occupations (Fig. 8.12).
The two test groups of the two commercial occupations do not differ with regard
to their prior education: the proportion of pupils with a university entrance qualifi-
cation is roughly the same in both occupations at around 70%. Nevertheless, the
distribution of competences among the competence levels in the two occupations
differs greatly; 70% of INK trainees, but only 8% of SPKA trainees, reach the third
competence level. 32% of the SPKA, but only 7% of the INK are risk students.

8.4.2 Percentile Bands

The standard method for representing heterogeneity in the COMET project is the use
of percentile bands (Fig. 8.13).
The differences and dispersion in competence levels, determined by scores,
between test subjects or test groups formed according to different characteristics
such as occupations, states, age and prior schooling, provide information on the
degree of heterogeneity assumed in vocational education and training. The percentile
bands also used in the PISA studies can be used as a form of representation.
Representation using percentile bands makes it possible to graphically bundle
three different types of information on different groups (school locations, sectors,
years of training, educational programmes and education systems) (Fig. 8.14). The
centre mark (CM) shows the average value of the groups. Differences in average
performance become visible by comparing the different averages.
Whether these differences are significant can be seen from the grey area around
the average value on the bands, the confidence interval. With a 95% certainty, this is
where the ‘true’ average value lies, i.e. the projection from the respective group to

Fig. 8.10 Percentage of trainee electronics technicians in the risk group (NRW project 2013/2014). The underlying values (risk-group share; average total score, ATS):

Electronics technicians (E-B)        Risk group   ATS
class 1 (n = 23, 3rd year)               13 %     42.2
class 7 (n = 26, 2nd year)               15 %     23.4
class 4 (n = 24, 2nd year)               17 %     40.1
class 5 (n = 23, 3rd year)               22 %     37.9
3rd year (n = 76)                        30 %     36.3
Total (n = 170)                          31 %     34.4
2nd year (n = 94)                        31 %     32.9
class 8 (n = 16, 3rd year)               38 %     34.2
class 13 (n = 14, 2nd year)              43 %     29.3
class 15 (n = 12, 2nd year)              50 %     25.8
class 18 (n = 15, 2nd year)              60 %     23.4
class 16 (n = 14, 3rd year)              64 %     26.2

Electronics technicians (E-EG)       Risk group   ATS
class 3 (n = 29, 3rd year)               17 %     37.2
class 9 (n = 16, 2nd year)               25 %     32.9
class 2 (n = 27, 2nd year)               26 %     41.3
class 11 (n = 12, 3rd year)              42 %     31.9
2nd year (n = 102)                       44 %     33.4
class 6 (n = 15, 2nd year)               47 %     37.2
Total (n = 205)                          49 %     31.3
class 10 (n = 13, 3rd year)              54 %     32.4
3rd year (n = 103)                       54 %     29.4
class 14 (n = 19, 2nd year)              58 %     28.3
class 19 (n = 13, 2nd year)              62 %     23.4
class 12 (n = 12, 2nd year)              67 %     30.1
class 17 (n = 12, 3rd year)              75 %     25.3
class 20 (n = 18, 3rd year)              78 %     21.6
class 21 (n = 19, 3rd year)              79 %     19.2

Fig. 8.11 Proportion of trainee electronics technicians at the level of holistic shaping competence. The underlying values (share at this level; average total score):

Electronics technicians (E-B)        Share    Total score
class 5 (n = 23, 3rd year)             48 %      37.9
class 4 (n = 24, 2nd year)             46 %      40.1
class 1 (n = 24, 3rd year)             39 %      42.1
3rd year (n = 76)                      34 %      36.3
class 8 (n = 16, 3rd year)             25 %      34.2
Total (n = 170)                        26 %      34.4
2nd year (n = 94)                      21 %      32.9
class 7 (n = 26, 2nd year)             15 %      35.0
class 13 (n = 15, 2nd year)            13 %      29.3
class 16 (n = 17, 3rd year)            12 %      26.2
class 18 (n = 15, 2nd year)             7 %      23.4
class 15 (n = 12, 2nd year)             0 %      25.8

Electronics technicians (E-EG)       Share    Total score
class 2 (n = 27, 2nd year)             44 %      41.3
class 11 (n = 12, 3rd year)            42 %      31.9
class 6 (n = 15, 2nd year)             40 %      37.2
class 3 (n = 29, 3rd year)             38 %      37.2
2nd year apprentices (n = 102)         26 %      33.4
Total (n = 205)                        24 %      31.3
class 10 (n = 13, 3rd year)            23 %      32.4
3rd year apprentices (n = 103)         22 %      29.4
class 12 (n = 12, 2nd year)            17 %      30.1
class 17 (n = 12, 3rd year)            17 %      25.3
class 14 (n = 19, 2nd year)            16 %      28.3
class 9 (n = 16, 2nd year)             13 %      32.9
class 20 (n = 18, 3rd year)            11 %      21.6
class 19 (n = 13, 2nd year)             8 %      23.4
class 21 (n = 19, 3rd year)             0 %      19.2

Fig. 8.12 Comparison of the distribution of competences of industrial clerks (INK) and shipping
clerks (SPKA) (COMET NRW, 2013)

Fig. 8.13 Example of a percentile band

Fig. 8.14 Sample percentile band (surveys from 2009)



the population. Accordingly, differences between two groups are significant and
most likely not accidental if the average of one band is outside the grey area of
another.
The third important piece of information in the percentile bands concerns the spread of the results, i.e. the distance in performance between weaker and better test results. The white areas represent the values for 25–50% or 50–75% of a group. This range includes the
values for half of the test participants grouped around the average value. Finally, the
outer grey areas contain those cases which form the lower (10–25%) or upper
(75–90%) range. The best and weakest 10% of the results are not captured by
the bands so as not to distort their width by individual outliers. The white part of
the bands (including the grey confidence interval) therefore indicates the range of the
average 50% of the test results. The entire band shows the range of results of 80% of
the participants. The 10% best or worst results are to the right or left of the band.
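The three pieces of information bundled in a percentile band can be computed directly from a group's scores. A minimal sketch, using linear interpolation between ranks and a normal-approximation 95% confidence interval for the mean:

```python
from statistics import mean, stdev

def percentile(sorted_scores, p):
    # linear interpolation between ranks, as in common statistics packages
    k = (len(sorted_scores) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(sorted_scores) - 1)
    return sorted_scores[lo] + (k - lo) * (sorted_scores[hi] - sorted_scores[lo])

def percentile_band(scores):
    """Return the 10/25/50/75/90 percentile band, the mean (centre mark)
    and an approximate 95% confidence interval of the mean."""
    s = sorted(scores)
    m = mean(s)
    half_ci = 1.96 * stdev(s) / len(s) ** 0.5
    return {"band": [percentile(s, p) for p in (10, 25, 50, 75, 90)],
            "mean": m,
            "ci": (m - half_ci, m + half_ci)}
```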
To avoid major distortions in the representation of class results, test groups with
less than 15 participants are not included in the representation of percentile bands in
this report.
The total scores (TS) and the variation range of the percentile bands can be
represented in the form of learning times and learning time differences (LTD). As
vocational training with a duration of 3–3.5 years corresponds to a maximum of
approximately 70 points, a training duration of 1 year roughly corresponds to a score
of 20 points.
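Under this calibration (about 70 points for 3–3.5 training years, i.e. roughly 20 points per year), a score spread converts into a learning time difference as follows; the helper name is ours:

```python
POINTS_PER_TRAINING_YEAR = 20  # rough calibration stated above

def learning_time_difference(score_a, score_b):
    """Express the spread between two total scores in training years."""
    return abs(score_a - score_b) / POINTS_PER_TRAINING_YEAR
```

A within-group spread of 40 points thus corresponds to a learning time difference of 2 years.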
The following represents characteristic percentile bands for different vocational
education and training programmes.
For larger samples, the percentile bands are extended to the 5th and 95th percentiles.
Example The spread of the competences of the second- and third-year test groups of electronic technicians in industrial engineering (E-B) and in building and energy engineering (E-EG), and of the full-time and part-time technical college students in electronics (F-VZ and F-TZ), is striking in several respects and was something the responsible project consortium did not expect in this form. This mainly concerns
1. the extraordinarily wide range of variation (spread) of the competences of the test
participants in the classes. This often amounts to 40 and more points and therefore
corresponds to a learning time difference of two and more years.
2. the large differences in competence levels between the classes. Despite comparable prior training of E-B and E-EG trainees, the levels of competence differ, in some cases considerably, from one another. The E-B class with the weakest performance differs from that with the highest performance by a learning time difference of almost 1 year.
3. The formal difference between the qualification levels for initial vocational
education and training and the level of vocational schooling apparently has hardly
any influence on the competence levels measured (Fig. 8.15).

Fig. 8.15 Percentile bands for professional competence across test groups at class level for trainees
(results from 2009)

8.4.3 The Heterogeneity Diagram

The evaluation of all empirical values available so far on the spread of the compe-
tence of test persons and test groups results in values from 0 to a maximum of 80.
Theoretically, values of up to 90 are conceivable. In fact, however, values above
80 were only measured very rarely. This also applies to the value ‘0’. If the empirical
values of the spread are plotted as a learning time difference of 0–3 years on the
vertical line and the corresponding average values of the test groups on the horizon-
tal line, this results in test-group-specific patterns for the heterogeneity of the
competence development as well as a characteristic function, with which the depen-
dence of the learning time difference on competence levels can be described.
y = a_i − (a_i/b²)(x − b)²  (a₁ = 2.5; a₂ = 1.5; a₃ = 0.5 for the respective maximum LTD; b = position of the maximum achievable LTD).
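Reading the print formula as y = a_i − (a_i/b²)(x − b)², a downward parabola that is zero at x = 0 and reaches its maximum a_i at x = b, the function can be evaluated as below. Treating b as the score at which the maximum LTD occurs, with 40 as a plausible default matching the peak of the heterogeneity diagram, is our assumption:

```python
# Heterogeneity function: learning time difference (LTD, in years) as a
# function of a test group's average total score x. The parabolic form is
# a reconstruction of the garbled print formula.

A_MAX_LTD = {"high": 2.5, "medium": 1.5, "low": 0.5}  # a_i per heterogeneity level

def ltd(x, level="medium", b=40):
    a = A_MAX_LTD[level]
    return a - (a / b**2) * (x - b) ** 2
```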

Fig. 8.16 Heterogeneity diagram of various occupations (shipping clerks (SPKA), industrial clerks
(IK), electronics technicians in China)

According to the heterogeneity diagram, it can be expected that, with increasing competence levels up to the value of TS = 40, the spread of the competence development, and therefore the learning time difference in the learning groups/educational programmes, will increase in accordance with the function on which the heterogeneity diagram is based. If a higher competence level is reached, the spread of the learning time difference, or the variation range of the competence characteristics, decreases again.
According to the previous test results of the regional, national and international
projects of the COMET network on approximately ten occupations, three levels of
heterogeneity (high, medium and low) can be distinguished.
The educational courses in the Beijing region (n = 800) participating in the
international comparison project ‘Electronics Technicians’ are characterised, for
example, by a low level of heterogeneity at an overall low level of competence. In
comparison, the heterogeneity level of industrial clerks (IK trainees) is at a medium
level and that of shipping clerks (SPKA trainees) at a high level.
A comparison of the results of the COMET motor vehicle project (China, Hesse, North Rhine-Westphalia) likewise reveals educational course-specific patterns of heterogeneity (Fig. 8.16).

8.4.4 The Causes of Heterogeneity

The different forms of evaluation and representation of more or less heterogeneous competence characteristics convey an impression of the complexity of the problem.
The main determinants of heterogeneous competence—with reference to the state of
research in the COMET project—will be compiled below. Only when the phenomenon of heterogeneity can be analysed and understood in its situational circumstances and genesis can teachers and trainers respond to it with targeted didactic action. The primary aim here is not to avoid heterogeneous performance as such, but to develop didactic concepts with which heterogeneous performance structures in learning groups can also be understood as a source of learning opportunities for all.

Prior Schooling

The COMET projects confirm that prior schooling has a considerable influence on
the development of heterogeneous performance structures. The composition of
learning groups in VET programmes is a crucial determinant of heterogeneity.
This finding applies to large-scale studies (cf. Heinemann, Maurer, & Rauner,
2011, 150 ff.). On the other hand, this does not permit the conclusion that, for
example, a high proportion of trainees with a lower secondary school leaving
certificate in a class determines the level of competence or that a high degree of
heterogeneity in prior schooling in a class causes a correspondingly high degree of
heterogeneity in the level of competence. As could be shown in this report, classes
with a comparable structure of trainees in the same occupation and at the same
location can achieve very different levels of competence.

Selection and Admission Rules for Vocational Education and Training as Determinants of the Degree of Heterogeneous Performance Structures

The more comprehensively and tightly regulated the admission requirements for
vocational training programmes, the more homogeneous the development of com-
petence structures. Characteristic examples are the vocational programmes of the
upper secondary level, post-secondary and tertiary vocational programmes of the
Chinese vocational training system. In China, admission to the general branch of
upper secondary education is decided by a nationwide test procedure. Students who
do not pass this test are referred to vocational programmes. Access to ‘higher
vocational education’ (at universities) is also regulated by a selection procedure.
These admission and access regulations contribute to the fact that the heterogeneity
of competence development in Chinese vocational education and training at all
qualification levels is low to medium.

The Teacher as an Influencing Factor

According to the results of the COMET projects available to date, the teacher plays a
decisive role in the competence development of trainees and students. Apart from the
curricular structures specific to educational programmes, teachers/trainers are the most important factor influencing professional competence development. This can be
predominantly seen in the fact that the competence level of learning groups can be
very different despite having the same prior schooling and the same training
programmes, without this having an effect on the spread of competence develop-
ment—insofar as this does not result from the competence level (see above). This
result points to the particular challenge teachers and trainers face in dealing with
heterogeneity.

The Heterogeneity Diagram

The degree of heterogeneity depends on the competence level of the learning groups.
In learning groups with a low competence level (TS ≤ 40), the degree of heterogeneity tends to increase—irrespective of whether the learning groups are homogeneous or heterogeneous in composition. In learning groups with a high competence level (TS > 40), the degree of heterogeneity decreases again with a further increase in
the competence level in accordance with the function underlying the heterogeneity
diagram.

Learning Venue Cooperation

A further determinant of the degree of heterogeneity of competences in the learning groups (classes) is the quality of learning venue cooperation and the quality of in-company and school-based vocational training. With increasing differences in the quality of training at the two learning venues, the quality of learning venue cooperation decreases and the heterogeneity of competences increases.
Conclusion The recording of the heterogeneity of competence development in
vocational education and training programmes is an essential prerequisite for the
development and implementation of didactic concepts for dealing with heterogene-
ity. A strategy that is primarily aimed at restricting the range of variation of
competence characteristics could lead to a drop in the qualification level of the test
groups (see heterogeneity diagram).
Up to the intermediate competence level, heterogeneity in the learning groups
tends to increase. Attempts to counteract the heterogeneity in the learning groups by giving special support to trainees with learning difficulties should therefore be combined with measures to provide individual support for high-performing learners (cf. Piening & Rauner, 2015g). The COMET test results on the heterogeneity of vocational competences also show that one-sidedness in the professional

understanding of teachers and lecturers not only leads to inhomogeneous competence profiles among learners, but also tends to impair development towards a higher level of competence.

8.5 Measuring Identity and Commitment

The research interest in the commitment of trainees and employees mainly consists
in identifying, as clearly as possible, the different fields of reference to which commitment relates. As depicted in the model description (→ 4.7), three main factors can be taken into account here: emotional attachment to the organisation (organisational identity), identification with the occupation (professional identity) and an abstract willingness to perform that is detached from concrete work content (work ethics). Some fundamental assumptions and research questions are reflected in the design logic of the instruments.
It cannot be assumed that such ‘types’ normally occur in pure form; rather, the various forms of commitment interact with each other. Positive experiences in the company influence professional commitment and work ethics, while, on the other hand, it appears difficult to maintain professional commitment in the face of disappointments in relation to one’s own organisation. Relationships such as these can be analysed with the help of the instruments.
In addition, these types do not exhaust the possible fields of reference of com-
mitment—the relationship to individual colleagues, teams, certain activities etc. can
also play a major role and must be surveyed separately.
It is of particular interest for vocational education research whether the process of
developing professional identity leads to shifts in the dominant field of motivational
reference. The development of professional identity depends on the subjective willingness to develop it. Commitment can be generated from the affiliation to the occupation or the enterprise or even from the work as such. It can be shown that these different normative fields of reference of commitment in turn have repercussions on the development of competence and identity (→ 8.6).

8.5.1 On the Construction of Scales

Since professional identity is related to the respective occupation, a scale for capturing it within an inter-occupational concept cannot be based on assumptions about the respective specific content of such an identity. The extent to
which a particular professional role has been assumed is therefore not examined.
This distinguishes the concept of professional identity used here from others, which
aim more to acquire implicit or explicit knowledge in addition to professional
competence in order to be a member of a certain occupation and therefore to share
a certain universe of thoughts and actions. The scale of professional identity focuses
on those cognitive and emotional dispositions that correspond to a development
from novice to expert in a subject and lead to professional capacity to act. Three
aspects were identified for this purpose: the interest in placing one’s own activities in
the overall vocational or operational context (orientation), the interest in helping to
shape work and technology (design) and the interest in high-quality performance of
one’s own work (quality).
On the one hand, the step to this meta-level involves the risk of not including essential aspects of growing into a specific professional role. Therefore, the appropriate scale for such studies should at most be applied in a complementary manner. It does not refer directly to professional socialisation processes, but to the subjective disposition to take on the professional role successfully. On the other hand, this risk is offset by various advantages. In addition to the possibility of making cross-occupational comparisons with regard to this disposition, such a concept of professional identity also escapes justified criticism directed at more conventional formulations. The assumption of a professional role can take place in different contexts and in different ways. In this context, Martin Fischer and Andreas Witzel rightly point out that the term ‘professional identity’ should not be idealised, for example by inferring a lack of professional competence from an undeveloped professional identity, for example as a result of career changes (Fischer & Witzel,
2008, 25). It makes sense to emphasise subjective dispositions for assuming the
professional role and general occupational values in so far as these can be
ascertained independently of the respective qualification path. Which type of train-
ing organisation favours or hampers the development of such a professional identity
therefore becomes an empirical question. As described (→ 4.7), a variety of
approaches exist within the framework of commitment research for the empirical
capture of professional and organisational commitments.
In this context, three questions are of interest for vocational and business educa-
tion studies.
• How pronounced is the professional willingness to perform?
• Is professional motivation based on factors of intrinsic or extrinsic motivation?
• To what extent do professional identity, emotional affiliation to the company and
the willingness to (obediently) perform given tasks contribute to a professional
willingness to perform?

8.5.2 Calculating the Results

I-C split scales are required to calculate the I-C diagrams. This form of representation
is suitable for a differentiation according to occupations.
Like the competence levels, the commitment scales are also differentiated into low, medium and high (commitment split scales). To determine the limit values, the 33rd and 66th percentiles of each commitment scale were determined on the basis of the Bremerhaven study (Heinemann, Maurer, & Rauner, 2009) (n = 1560). In the context of the ‘Saxony study’ (Rauner, Frenzel, Piening, & Bachmann, 2016) (n = 3300), the I-C model was extended by a further scale for organisational commitment (Table 8.8).

Table 8.8 Low, medium and high limits

                              Low      Medium     High
Organisational identity       0–12     12.1–17    17.1–24
Professional identity         0–14     14.1–19    19.1–24
Professional commitment       0–16     16.1–19    19.1–24
Work ethics                   0–14     14.1–18    18.1–24
Organisational commitment     0–14.4   14.5–17    17.1–24

For extensive projects such as the Saxony study involving more than 3000 trainees, project-specific limit values can also be calculated and applied (Table 8.9).

Table 8.9 Saxony Study 2015: Limit values for commitment differentiated according to low, medium and high

                              Low      Medium     High
Professional identity         0–16     16.1–20    20.1–24
Professional commitment       0–17     17.1–20    20.1–24
Organisational identity       0–12     12.1–18    18.1–24
Organisational commitment     0–14.4   14.5–17    17.1–24
Work ethics                   0–18     18.1–22    22.1–24
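The derivation of split-scale limits from the 33rd and 66th percentiles can be sketched in a few lines of Python; the function name and the sample scores are purely illustrative and not taken from the COMET data sets:

```python
import statistics

def split_scale_limits(scores, scale_max=24):
    """Derive low/medium/high cut-offs of a commitment scale from its
    33rd and 66th percentiles (tertile boundaries)."""
    lower, upper = statistics.quantiles(scores, n=3)  # two tertile cut points
    return {"low": (0, lower), "medium": (lower, upper), "high": (upper, scale_max)}

# illustrative scale values on the 0-24 range (invented data)
scores = [8, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22]
limits = split_scale_limits(scores)
```

Applied to a large sample such as the Bremerhaven study (n = 1560), this percentile logic yields limit values of the kind listed in Tables 8.8 and 8.9.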

8.5.3 ‘Commitment Lights’

The commitment split scales are required to calculate the commitment lights. As a
rule, the lights are calculated per occupation and presented in a cross-occupational
diagram (Fig. 8.17).
Diagrams displaying all commitment scales of a profession can also be used
(Fig. 8.18).
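Using the limit values of Table 8.8, the assignment of a mean scale value to a ‘light’ category can be sketched as follows; the dictionary layout and function name are illustrative:

```python
# upper bounds of the 'low' and 'medium' bands per scale (Table 8.8)
LIMITS = {
    "organisational identity": (12, 17),
    "professional identity": (14, 19),
    "professional commitment": (16, 19),
    "work ethics": (14, 18),
    "organisational commitment": (14.4, 17),
}

def commitment_light(scale, value):
    """Map a mean scale value (0-24) to its low/medium/high category."""
    low_max, mid_max = LIMITS[scale]
    if value <= low_max:
        return "low"
    if value <= mid_max:
        return "medium"
    return "high"
```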

8.5.4 Commitment Progression

Several line diagrams are generated for the commitment progressions. The average
scale values of the total sample are displayed in a line diagram. In five further
diagrams, the average values of the second and third training years are shown for
each commitment scale to enable a comparison in commitment between these two
training sections.
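The year-wise averages behind these line diagrams amount to a simple group-by over the sample; the records below are invented for illustration:

```python
from statistics import mean

# invented sample: (occupation, training year, professional-identity score)
records = [
    ("industrial mechanic", 1, 18.0), ("industrial mechanic", 2, 14.5),
    ("industrial mechanic", 3, 17.5), ("plant mechanic", 1, 16.0),
    ("plant mechanic", 2, 14.0), ("plant mechanic", 3, 12.5),
]

def progression(records, occupation):
    """Average scale value per training year for one occupation."""
    years = sorted({y for occ, y, _ in records if occ == occupation})
    return [mean(s for occ, y2, s in records if occ == occupation and y2 == y)
            for y in years]
```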
Fig. 8.17 Example of a commitment light, here for professional commitment, COMET NRW 2014

Fig. 8.18 Exemplary representation of all commitment scales of a professional group

Example: The progression of professional identity in industrial-technical occupations

In the occupational group of the industrial-technical industry, industrial mechanics stand out in comparison with other trainees due to their heterogeneous development of identity over the years of training. Industrial mechanics’ identification with the profession drops surprisingly sharply in the second year of training and then rises sharply again in the third. All other occupations show a slight to sharp (plant mechanics) drop in occupational identity with increasing duration of training (Fig. 8.19).

Fig. 8.19 Progression of professional identity, group 1, industrial-technical industry (Rauner, Frenzel, Piening, & Bachmann, 2016)

The progression of professional commitment shows a certain similarity with the progression of professional identity for the same occupational group. It is evident that after a drop in professional commitment in the second year of training, professional commitment in three of these occupations rises again in the third year of training. Only the process mechanics show an increasing development of professional commitment over the duration of their training (Fig. 8.20).

8.5.5 Four-Field Matrices

The z-standardised commitment scales are required for the four-field matrices and
enable a comparison of the different professions. The average value of the total
sample is 0, and the standard deviation is 1. Values >0 therefore mean that the
subgroup with this value has an above-average result. Conversely, values <0 mean
that the subgroup with this value has a below-average result. The further away from
0 the value is, the greater the deviation from the average.
A total of two matrices are created, one for identities (professional and
organisational identity) and one for commitments (professional and organisational
commitment). Work ethics are not taken into account in this evaluation. The
z-standardised average values of these scales are calculated for each occupation.
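A minimal sketch of the z-standardisation and of the assignment of an occupation’s pair of z-values to the four fields (the quadrant labels follow Fig. 8.21; the function names are illustrative):

```python
from statistics import mean, pstdev

def z_scores(values):
    """Standardise a list of scale values: mean 0, standard deviation 1."""
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

def quadrant(z_professional, z_organisational):
    """Assign a pair of z-standardised averages to one of the four fields."""
    if z_professional >= 0 and z_organisational >= 0:
        return "I: consistently high"
    if z_professional >= 0:
        return "II: professional"
    if z_organisational >= 0:
        return "III: organisational"
    return "IV: weakly/non-developed"
```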
Fig. 8.20 Course of occupational commitment, group 1, industrial-technical industry (ibid., Fig. 5.22)

The results are then presented in a diagram across all occupations (cf. Figs. 8.21 and 8.22).

The Identification Potential of Professions: A Professional Typology

The formation of a four-field matrix in which the two axes represent the professional
and organisational identity results in the emergence of four fields to which occupa-
tions with different identity profiles can be assigned (Fig. 8.22).
This allows a distinction between occupations in which identity is predominantly professional and those in which it is predominantly organisational. This form of typological distinction also allows the identification of the type of those who identify with both work and company as a whole, as opposed to non-identifiers.
The work-oriented trainees/employees have both a professional and an organisational identity, although it remains unclear which of the two identities is the primary and dominant one. The graphic representation of the work-oriented trainees shows the quantitative characteristics of both, i.e. the work-related identity potential of the occupations or of the corresponding training relationships.
Fig. 8.21 Work-related identities of trainees from both locations by occupation: (I) consistently high identity, (II) professional identity, (III) organisational identity, (IV) weakly/non-developed identity (ibid., Fig. 7.26) (see table in Annex 5)

Consistently High Identity: Work Orientation

These trainees identify equally with their profession and with their company. Here,
occupations with a high potential for identification go hand in hand with training
conditions that promote close ties to the company.

Professional Identity: Occupational Orientation

The occupation-oriented trainees/employees represent occupations with a pronounced professional identification potential.

Fig. 8.22 Work-related commitment of trainees from both locations by occupation: (I) consistently committed, (II) professionally committed, (III) organisationally committed, (IV) weakly/non-committed (ibid., Fig. 7.27) (see table in Annex 5)

Organisational Identity: Organisational Orientation

The organisationally oriented trainees/employees primarily identify with their company. They represent training relationships with a high organisational identification potential, which is expressed in a pronounced emotional attachment to the company.

Weak/No Occupational Identity: Employment and Job Orientation

These apprentices/employees are characterised by an instrumental attitude towards the profession and their company. They are accustomed to carrying out the detailed tasks assigned to them as instructed by their superiors. As a rule, the low level of identification goes hand in hand with an underdeveloped sense of quality and responsibility.

Professional and Organisational Commitment: A Professional Typology

The four-field matrix can also be applied to professional and organisational commitment. Here, too, the matrix shows differences (Fig. 8.22). For example, the occupation of carpenter can be assigned to the professional type of commitment and the occupation of specialist for driving operations to the organisational type of commitment.

Consistently Committed

Those who are consistently committed strongly identify with their profession and
their company and have an underlying professional and organisational commitment.

Professionally Committed

The professionally committed type has a highly developed professional identity, with work ethics as a driving force for commitment to work. The organisational commitment resulting from identification with the company plays a subordinate role. A realistic assessment of the dynamics of flexible labour markets also distinguishes this type of professionally committed person. Switching to another company is one of the accepted processes of professional work.

Organisationally Committed

The type of the organisationally committed is characterised by an underdeveloped professional identity—usually triggered by occupations with a low identification
potential—and by a strong emotional attachment to the company. A change of
company is experienced as an emotional burden by this type of organisationally
committed person. This goes hand in hand with a certain misjudgement of the reality
of flexible labour markets.

The Weakly/Non-Committed

The non-committed type has neither a distinct professional identity nor an emotional attachment to the company. In this case, the willingness to perform is based on tightly controlled work implementation (according to detailed instructions) and on the risks associated with an insufficient willingness to perform, in particular the risk of losing one’s job and a low wage level. While the company cannot directly change the identification potential of these occupations, it does have a large number of opportunities to strengthen organisational commitment.

8.5.6 Identity and Commitment Profiles

Based on an extensive survey of more than 3000 Saxon trainees from more than
70 occupations, the identification of the trainees with their apprenticeship occupa-
tions and companies and their willingness to perform were also examined. Identity
and commitment profiles were determined for different occupational groups. For
these forms of representation, the values were standardised in order to make them
directly comparable. Standardisation leads to a uniform average of zero and a
uniform standard deviation of the scale values of one. A positive average for an occupational sample therefore means that the corresponding characteristic is more pronounced in this group than in the overall sample of Saxon trainees. No other form of representation shows the
attractiveness of their occupations from the trainees’ perspective so clearly
(Fig. 8.23).
The I-C network diagrams illustrate both the level and the profile of the I-C
expression.
The mechatronics engineers’ I-C profile lies slightly above the average of the total sample in almost all identity and commitment dimensions (with the exception of organisational identity). In contrast, the I-C network diagram of the plant mechanics, although also balanced, is significantly smaller, as their values are below average in all five identity and commitment dimensions. The context data show the reasons for these weaknesses in industrial education.
If the values are based on representative surveys, then the I-C profiles do not only
display the attractiveness of the occupations for trainees. The experts from BIBB and
the social partners involved in career development therefore have quantitative and
qualitative data on the identification potential (IP) of occupations at their disposal for
the first time. The IP values indicate whether the career developers, under the
guidance of the responsible BIBB department, have succeeded in developing occu-
pations with which the trainees identify. If the IP values are below average, these
occupations do not have the potential to develop professional competence, motiva-
tion, responsibility and quality awareness (Fig. 8.24).

Fig. 8.23 I-C network diagrams of plant mechanics (n = 37) and mechatronics engineers (n = 108) (ibid., Fig. 7.15)

Fig. 8.24 I-C network diagrams of glaziers (n = 62) and auto-mechatronics engineers (n = 114) (ibid., Fig. 7.17)

Fig. 8.25 I-C network diagrams of warehouse logistics specialists (n = 21) and warehouse specialists (n = 36) (ibid., Fig. 7.23)

High-quality company training can compensate for the deficit of an excessively low IP to a certain degree. The I-C profile of the auto-mechatronics engineer shows slightly above-average IP values, and these obviously also contribute to above-average commitment.

Two Content-Related Occupations with Different Identification Potentials (Fig. 8.25)

The occupation of Warehouse Logistics Specialist is a three-year apprenticeship and the occupation of Warehouse Specialist is a two-year apprenticeship. The I-C diagrams confirm the hypothesis that training time is an indicator for the development of professional and organisational identity and the commitment based thereon. The two-year occupations differ particularly markedly in their IP values from the fully fledged occupations with a three- to three-and-a-half-year training period.
An almost homogeneous overall picture can be seen in the group of commercial professions. However, no clear profile can be discerned for any of the evaluable occupations. A conspicuous organisational orientation is indicated for the real estate agents. While in-company training is the most pronounced among retail managers, this finding cannot be confirmed for hotel managers. In this profession, however, the trainees are characterised by a higher level of professional commitment (Fig. 8.26).

Fig. 8.26 I-C profiles, group 6, commercial professions (ibid., Fig. 7.24)

The high levels of the identity and commitment dimensions are clearly evident in the real estate agents, indicated by a complete profile with high values above the overall average (Fig. 8.27). In contrast to real estate agents, the I-C profile of office clerks is slightly below average.

Fig. 8.27 I-C network diagrams of office clerks (n = 315) and real estate agents (n = 57) (ibid., Fig. 7.25)

8.6 Identity and Commitment as Determinants of Professional Development

8.6.1 Professional and Organisational Identity as Determinants of the Quality of Vocational Training

The model concept of the connection between identity and commitment in voca-
tional education and training (→ 4.7, 6.4) requires empirical examination so that
trainers and teachers can rely on it in their didactic actions.
A striking—but expected—aspect is that there is a clear connection between
identities and commitments. As a rule, identification with the occupation is also
transferred to identification with the training company. The same applies to the
relationship between professional and organisational commitment. However, the
form of the four-field matrix shows that the differentiation between both forms of
identity and commitment is necessary to capture the characteristic identity-
commitment profiles.
The degree to which the two identity and commitment scales correspond with
each other can be seen in Fig. 8.28.
There are clear correlations between the identities and commitments. In the weakest case, vocational identity can ‘only’ be explained to an extent of 32.5% by the values of organisational identity; the strongest connection is between organisational and professional commitment, where the values of professional commitment explain 47.6% of the values of organisational commitment.
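The percentages quoted here are shares of explained variance, i.e. squared correlations; a minimal sketch with invented scale values:

```python
from statistics import mean

def r_squared(x, y):
    """Share of variance in y statistically 'explained' by x
    (the square of Pearson's correlation coefficient)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov * cov / (var_x * var_y)
```

Applied to the professional- and organisational-commitment scales of a sample, this kind of computation yields figures such as the 47.6% reported above.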
How the factors of the identity-commitment model can be explained by the
simultaneous effect of several context factors in reality is described below. Based
on the survey data, models were created that allow the various commitment aspects
to be explained by the context factors1.

8.6.2 Professional Identity

The identification with the learnt profession represents a motivational basis for getting involved in one’s profession and for acting competently. A high professional identity can
also serve to overcome disadvantages related to one’s job such as poor pay and shift
work. In the rationale for the identity-commitment model, it is assumed that the
connection with the learnt occupation can be promoted by both learning venues of
the dual system. While the reputation of the profession can help to create a strong
sense of self-esteem within vocational education and training programmes at

1
The following sections are based on the multiple regressions calculated with the SPSS statistics
software.
324 8 Evaluating and Presenting the Test Results

Fig. 8.28 Strongest and weakest correlations between the commitment scales

vocational school, the activities belonging to the occupational profile are another
influencing factor. Depending on the profession and values, either the breadth of the
profession or its speciality can help to create a bond with the profession. The level of
the activities could also lead to a highly developed professional identity for the
trainees after a comparison with their individual expectations.
Based on the available data, professional identity is, as expected, determined by
the in-company characteristics of the quality of training (Fig. 8.29). Identification
with the profession is most strongly promoted by independence and autonomous
work. Similarly, training across the entire breadth of the profession, including
appreciation of the trainee by employees, and the transfer of responsible tasks
contribute to a high level of professional identity. Other factors that explain profes-
sional identity were established in the working and school climate and in the
cooperation between learning venues in terms of content. Both the assessments of
the working and school climate confirm the assumption that a professional reputation
or the significance of the profession in the organisational hierarchy of companies
contributes to an identification with the profession. The characteristics of the aspects
‘in-company training quality’ (e.g. training level, independent working and learning,
etc.), ‘working atmosphere’, ‘school atmosphere’ and ‘content-related learning
venue cooperation’ together account for 32% of the variability in the values for
professional identity.
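The multiple regressions reported in these sections were calculated with SPSS; purely as an illustration of the underlying computation, the R² of an ordinary-least-squares model can be obtained in plain Python via the normal equations (all data and predictor values below are invented):

```python
from statistics import mean

def multiple_r_squared(X, y):
    """Fit y = b0 + b1*x1 + ... by ordinary least squares (Gaussian
    elimination on the normal equations) and return R^2."""
    n, k = len(X), len(X[0])
    A = [[1.0] + list(row) for row in X]          # add intercept column
    # normal equations: (A^T A) b = A^T y
    ata = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(k + 1)]
           for p in range(k + 1)]
    aty = [sum(A[i][p] * y[i] for i in range(n)) for p in range(k + 1)]
    # forward elimination with partial pivoting
    for col in range(k + 1):
        pivot = max(range(col, k + 1), key=lambda r: abs(ata[r][col]))
        ata[col], ata[pivot] = ata[pivot], ata[col]
        aty[col], aty[pivot] = aty[pivot], aty[col]
        for r in range(col + 1, k + 1):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k + 1):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # back substitution
    b = [0.0] * (k + 1)
    for r in range(k, -1, -1):
        b[r] = (aty[r] - sum(ata[r][c] * b[c] for c in range(r + 1, k + 1))) / ata[r][r]
    pred = [sum(bc * ai for bc, ai in zip(b, row)) for row in A]
    my = mean(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# invented context-factor scores (two predictors) and outcome values
X = [[1, 2], [2, 1], [3, 4], [4, 3], [5, 6]]
y = [9, 8, 19, 18, 29]   # exactly y = 1 + 2*x1 + 3*x2
r2 = multiple_r_squared(X, y)
```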
A look at the results of the six professional groups involved shows that the fit of
the model for explaining professional identity is the greatest in the industrial-
technical industry, in which the context factors can even explain 46% of the variance
of the values of professional identity. For commercial occupations, however, other
factors seem to have an influence on the development of professional identity. For
example, the global model of Saxon trainees in the commercial sector can explain
only 24% of the variance in the values of professional identity, leaving 76% of the
variance unexplained (not shown).

Fig. 8.29 Observed values on the Y-axis, predicted by the determinants on the X-axis; determinants of professional identity, R² = 32.1% (this is the corrected coefficient of determination R², which takes into account the number of variables included in the model equation and combines the requirements of model fit and parsimony), predictors: quality of training (independence, scope of the profession and demands of the tasks), working and school atmosphere, content-related learning venue cooperation. (The graphs in the following sections show the extent to which the given characteristics would predict the commitment range and what value was actually given by the trainees. The greater the deviations from the calculated straight line, the less accurately the characteristics (e.g. work climate, training quality, etc.) predict the corresponding commitment range.)
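The ‘corrected’ (adjusted) coefficient of determination referred to here is conventionally computed from R², the number of predictors p and the sample size n as:

```latex
\bar{R}^2 = 1 - \left(1 - R^2\right)\,\frac{n-1}{n-p-1}
```

It penalises the inclusion of additional predictors, so that models are rewarded for parsimony as well as fit.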

8.6.3 Organisational Identity

The identification of a trainee with his/her company is created by the emotional attachment that develops during the training period and can be seen as a dimension
of competence development. It can be assumed that organisational identity is
predominantly developed by operational factors. However, the development of
organisational identity can also be supported by the public image of the company,
as well as scholastic factors. The empirical examination of the model shows,
however, that the emotional attachment to the company providing in-company
training (organisational identity) primarily develops through the process of
in-company vocational training.
Organisational identity is explained by the company factors ‘company climate’,
‘training support’, ‘activities across the entire breadth of the profession’, ‘demands
and level of tasks’, as well as ‘independent processing of tasks’. Overall, these
factors can predict 53% of the values of organisational identity.

Fig. 8.30 Observed values on the Y-axis, predicted by the determinants on the X-axis; determinants of organisational identity, R² (corrected) = 53.3%, predictors: company climate, training support, training quality (breadth of profession, independence and demand/level of tasks), teacher evaluation

The inclusion of the scholastic factor ‘teacher evaluation’ contributes only slightly to the prediction of the
organisational identity and has the opposite effect; i.e., the higher the teacher
evaluation, the less pronounced the organisational identity. This surprising result
can be traced back to the fact that trainees may credit their teachers with a high level of expertise, but with a significantly lower level of profession-related competence, which relates to the reality of vocational work processes. This means that in the assessment
of training quality, the imparting of theoretical and practical knowledge is experi-
enced as competing points of reference in training: the appreciation of one of the two
poles reduces the appreciation of the opposite pole (Fig. 8.30).
The most important factor in predicting organisational identity is the working
atmosphere. A similarly high importance is attached to the factor of training support
(above all by the trainers). It is therefore the professional and social interactions
between the members of an organisation that lead to an emotional bond with the
company.
This model for explaining the variance of the values for organisational identity applies particularly to the industrial-technical craft trades and the commercial professions. For these professional groups, the model explains 57% and 58% respectively, leaving a comparatively small proportion of unexplained variance. With
explanatory values of 48%, this model offers the worst R² (coefficient of determination) in the general trade sector.
8.6 Identity and Commitment as Determinants of Professional Development 327

8.6.4 Professional Commitment

The motivation to get involved in one’s profession is created by an identification with the profession. Trainees who feel emotionally connected to their profession will
also become more involved in their profession—at least hypothetically. It is assumed
that it is not only company characteristics such as the working atmosphere or the
quality of training that lead to greater involvement in their profession. Appreciation
from the teachers could also help to promote the professional commitment of the
trainees (Fig. 8.31).
For the trainees of all professions surveyed in the study, there is a (global) model
that can explain 47% of the variability in the values for professional commitment. The
largest explanatory contribution is provided by the context characteristic of ‘inde-
pendent processing of work orders’. The further ranking of the influence on profes-
sional commitment includes the factors ‘business process orientation’, ‘working
climate’, ‘demand and level of work orders assigned’, ‘learning venue cooperation’,
‘school climate’ and ‘teacher evaluation’. Consequently, certain characteristics from

Fig. 8.31 Observed values on the Y-axis, predicted by the determinants on the X-axis; determi-
nants of professional commitment, R2 (corrected) ¼ 47.2%, predictors: training quality (indepen-
dence), business process orientation, working atmosphere, training quality (demands and level of
tasks), learning venue cooperation in terms of content, school atmosphere and teacher evaluation
328 8 Evaluating and Presenting the Test Results

both learning venues can be used to predict professional commitment. No contribu-


tion is made by the characteristic of training support.
If the ‘global model’ of the Saxon trainees is applied to the individual profes-
sional groups to determine professional commitment, as much as 57% of the
variance in the industrial-technical sector is explained. This model is least suited to explaining the variance in the values of professional commitment in the commercial sector, where only 42% of the variance is explained.

8.6.5 Organisational Commitment

Organisational commitment as an expression of the emotional attachment to the company is based on identification with the company and is expressed in the willingness to perform: ‘I am committed to the company’. It therefore seems obvious that it is mainly in-company characteristics that lead trainees to build up a bond with the company. Based on the data of the overall sample, a theory-driven model was tested that explains 50% of the variation in the values for
organisational commitment. The determinants of the organisational commitment of
the Saxon trainees were identified as ‘independence’ in dealing with operational
problems, the ‘working atmosphere’, ‘training support’ from skilled personnel and
‘business process orientation’. These four characteristics together account for more
than half of the values of organisational commitment (Fig. 8.32).
The highest explanatory contribution is provided by the characteristic of ‘inde-
pendence in task processing’. Trainees therefore develop organisational commitment
if they can weigh the best possible solutions for a particular task against each other or
if they are given room to manoeuvre in carrying out the work. The working
atmosphere can be identified as an equally important explanatory factor for
organisational commitment. A trusting atmosphere in the company is therefore an
important characteristic in the development of organisational commitment.
The contact with the trainers (training support) and the integration of the tasks into
the overall company (business process orientation) contribute almost equally to the
explanation of organisational commitment.
This model for explaining organisational commitment was also applied to the six
professional groups. It turned out that, in the professional group of industrial-technical craft trades, as many as 58% of the values of organisational commitment can be explained, but in the general industrial occupational group (specialist warehouse clerk, specialist employee for bathing establishments, etc.), the model explains only 45% of the values of organisational commitment.
Conclusion With the methods of measuring the identity and commitment of
trainees and skilled workers, it is possible to empirically record and shape profes-
sional development as a connection between identity and competence. For the
didactic actions of teachers and trainers, this means not only paying attention to the transfer of professional knowledge and skills—at the highest possible level. It is equally important for the development of personality that vocational training also proves to be an ‘education in the medium of the profession’ (Blankertz). This presupposes that the trainees acquire interrelated knowledge so that they learn to integrate their training activities into the company’s business processes. This is the only way to develop a sense of responsibility and quality.

Fig. 8.32 Determinants of organisational commitment, R² (corrected) = 50.1%, predictors: training quality (independence), working atmosphere, training support and business process orientation
For occupational research, the identification potential of the training professions
is an essential object of research. The results of this research are crucial for the
modernisation and development of professions, since a considerable proportion of
professions have a very low identification potential (Piening and Rauner,
2015a, b, c, d, e, f). This affects the quality of training in these professions.
Measuring professional and organisational identity and the commitment based thereon is among the instruments of quality assurance and quality development that can be used to enhance the attractiveness of dual vocational training.
Chapter 9
The Contribution of COMET Competence
Diagnostics to Teaching and Learning
Research

9.1 A New Dimension for Teaching-Learning Research in Vocational Education and Training

9.1.1 Introduction

With teaching/learning research, pedagogy—on its way into the system of disciplin-
ary sciences—has gained access to the methods of established research traditions.
Since then, teaching and learning research (cf. Straka, Meyer-Siever, & Rosendahl,
2006) has been shaped by both experimental and quasi-experimental research and
the reduction of the category of education to that of “learning” as a form of
behavioural change (cf. Skowronek, 1969).
For example, didactic research based on conceptual change research has pro-
duced results that have received much attention in educational practice. This mainly
applies to didactics in the natural sciences (cf. Carey, 1985; Posner, Strike, Hewson,
& Gertzog, 1982; Vosniadou, 2008). Waldemar Bauer has presented an analysis on
the significance of conceptual change research as a field of vocational training
research (cf. Bauer, 2013). He concludes that research into professional knowledge
during the process of professional competence development is of fundamental
importance for the didactic actions of teachers: “If TVET teachers or trainers
would have some knowledge with regard to the intuitive concepts of their students,
they would better estimate learning barriers and problems of knowledge acquisition”
(Bauer, 2013, 227 f.). It can be assumed that scientifically founded didactics of
vocational education and training that differentiate between occupations and occu-
pational fields are impossible without profession-specific knowledge research that
clarifies how learners acquire the knowledge of work processes incorporated into
vocational work in the course of their development.
A new field of teaching and learning research was developed in the Collaborative
Research Centre (CRC) 186 “Statuspassagen und Risikolagen im Lebensverlauf”

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 331
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_9

(Status Passages and Risk Situations in Life Courses). For example, Martin Fischer
and Andreas Witzel show how the qualitative secondary analysis method can be
used to investigate the relationship between professional competence (development)
and professional identity (cf. Fischer & Witzel, 2008). They confirm the thesis that
skilled workers bundle their knowledge through processes of self-organisation
(cf. Heinz, 2002). The differentiation between different forms of professional iden-
tity and between professional and organisational identity leads to the realisation that
organisational identification contributes to limited learning behaviour (cf. Fischer &
Witzel, 2008, 42). This form of teaching-learning research predominantly contrib-
utes to the justification of hypotheses, which can only be tested if both professional
competence and the forms of professional and organisational identity can be empir-
ically recorded (→ 7.1, 7.5). The methods developed in CRC 186 for extended
teaching and learning research have also succeeded in further developing the
vocational socialisation research represented above all by Wolfgang Lempert
(1995, 2000) (cf. Heinz, 2002).
The attempts to transfer methods of competence diagnostics to the measurement
of vocational competence based on test methods successfully applied in PISA
research (cf. Baethge, Achtenhagen, Babic, Baethge-Kinsky, & Weber, 2006;
Nickolaus, Gschwendtner, & Abele, 2009) have enriched teaching and learning
research in vocational education and training to the extent that a wide range of
experience has been gained on the limited scope of standard-oriented test methods in
vocational education and training. This is particularly evident at the level of testing
instruments, which ignore a central characteristic element of professional activity:
The solution of professional tasks is always confronted with a solution space defined
by the context of operational business processes, which facilitates a decision
between a—usually large—variety of possible solutions. All methods of empirically
recording occupational competence on the basis of standard-oriented test tasks are,
therefore, ruled out, as their solutions can only be evaluated as right or wrong
(cf. Rademacker, 1975). The basis for competence diagnostics in vocational educa-
tion and training are complex and open test tasks (cf. Haasler, Heinemann, Rauner,
Grollmann, & Martens, 2009, 103 ff.). In this context, Martens and Rost point out
that a fundamental distinction must be made between a difficulty model and a
capability model (cf. Martens & Rost, 2009).

Measuring Professional and Organisational Identity and the Commitment Based Thereon

Competence diagnostics also includes the recording of identity development in the course of vocational training and the commitment based thereon in the working and
learning process, with a distinction made between a professional and an
organisational dimension of identity and commitment. Based on a five-dimensional
identity-commitment (I-C) model, a measurement model was developed and psy-
chometrically evaluated that makes it possible to conduct extensive surveys (→ 5.4,
7.5, 7.6). This set of instruments facilitates the identification of the essential processes in the development of vocational identity and their respective fields of reference over the course of the training period, and makes it possible to relate them to the respective levels and dimensions of professional competence. The surveys on in-company and
school-based training situations also make it possible to record key factors in the
development of professional competence and professional identity and, therefore,
provide not only competence-diagnostic data but also data for the direct measure-
ment of training quality.
The central idea of design-oriented vocational education and training is not only
an expression of normative educational goals, but it very realistically depicts the
reality of the professional world: Professional tasks must always be completely
solved, taking into account all relevant criteria, as otherwise the result would be
more or less damaging to the employees and the companies. In professional work,
therefore, it is always important to make full use of the given solution space by
weighing the solution aspects against each other in a context-specific manner (the
concept of holistic problem solving). Vocational education and training, therefore,
encompass a value-based education which, in its significance for personality devel-
opment, goes far beyond education aimed solely at mere comprehension.
Teaching and learning research for vocational education and training, therefore,
requires an appropriate categorical justification framework as a prerequisite for a
VET competence model. Only on this basis is it possible to establish teaching-
learning research that produces practice-relevant and scientifically sound results. In
this respect, the research work carried out within the framework of the COMET
research network (cf. Fischer et al., 2015a) can be classified as extended teaching-
learning research.
The following depicts a systematisation of the questions, methods and results of this research, which can draw on the now very extensive data records of the (international) COMET research network.

9.1.2 Teaching and Learning Research Based on COMET Research Data

COMET competence diagnostics opens up a new dimension in empirical teaching and learning research. The basis is formed by data from four fields of research into
professional development:
• Data on professional competence (development)
• Data on the development of professional and organisational identity, work-related
commitment and (test) motivation
• Contextual data to capture the framework conditions for vocational teaching and
learning—in particular the subjective assessment of vocational learning by
learners and teachers
• Data on the competence of teachers/trainers

Fig. 9.1 Interrelations and correlations: competence, identity and context

Table 9.1 Research correlations

1. Professional competence (learners) – Teacher competence
2. Professional competence (learners) – Professional/organisational identity/commitment
3. Professional competence (learners) – Context data (training quality)
4. Teacher/trainer competence – Context data (training quality)
5. Teacher/trainer competence – Professional/organisational identity/commitment
6. Professional/organisational identity/commitment – Context data (training quality)

A first insight into this field of empirical teaching-learning research is provided by the existing publications, research reports and data reports.
Figure 9.1 illustrates a field in teaching-learning research, which is further
differentiated in Table 9.1.
If one considers the diversity of context variables and the factors and items with
which models and scales were formed in COMET competence diagnostics, it can be
seen that a variety of questions can be generated that can be empirically investigated
using the available methods and data. Decisive prerequisites for the quality of this
research are:
• The scientifically sound categories and models—if one looks at the multitude of
competence definitions used in VET research, it is not surprising that attempts to
model professional competence fall far short of the requirements formulated by the Klieme Commission (cf. Klieme et al., 2003)
• A competence model that comes close to the requirements justified by the
Klieme Commission and the initiators of the DFG priority programme “Competence models for recording individual learning outcomes and for balancing
educational processes” (cf. Klieme & Leutner, 2006)
• The psychometric evaluation of the models for measuring professional compe-
tence (cf. Martens, 2015) and on professional/organisational identity and profes-
sional/organisational commitment (! 6.4)
• A test format that meets the requirements of vocational education and training
(Chap. 5)

9.1.3 Competence Diagnostics

Competence diagnostics are based on a set of methodological instruments that, for the first time, facilitates the measurement of professional competence and professional competence development across the entire range of occupations at a high level of validity and reliability in terms of content (→ 5.7). The concept of open test items
opens up the possibility for a new quality of internationally comparative teaching-
learning research as well as comparative research that spans quality levels. It enables
vocational education and training planning and policy to:
• Use evidence-based data to design and organise the transition from initial voca-
tional training to higher education
• Compare dual education programmes (initial training) with the variants of dual
education programmes in higher education
• Review the empirical evidence of sound models and objectives of vocational
education and training programmes and forms of education and, therefore, place
the overall quality of vocational education and training on a solid footing
(cf. Fischer et al., 2015a)
A directly linked field of teaching-learning research is the implementation of a
didactics of vocational education and training, which can be based on a scientifically
and normatively based as well as a psychometrically evaluated competence model.
Competence diagnostics—with the participation of teachers, trainers and sub-
ject leaders—and the class- and learner-specific analyses of the survey data are
already a new central element of quality assurance. The unanimous opinion of the
project coordinators involved in the COMET projects (teachers/teacher trainers) is
that the COMET competence model makes it possible to operationalise the
learning field concept in a detailed, occupation-specific manner. This lends a new
quality to the didactics of vocational education and training. Thomas Scholz, the
project coordinator of the COMET project “Industrial Mechanic Hesse”
summarised the evaluation of the project as follows: “By measuring competence
and establishing competence development, COMET closes a large gap in the

learning field concept” (Scholz, 2015, 155; cf. the contributions in Chap. 2
“Erfassen beruflicher Kompetenz: Erwartungen, Ziele und Erfahrungen der
Berufsbildungspraxis” (Capturing vocational competence: expectations, goals
and experiences of vocational training practice) in: Fischer et al., 2015a and
Katzenmeyer et al., 2009).
The feasibility study on the application of COMET methods for the design of
vocational examinations (cf. Rauner, 2015b) and the first pilot studies carried out on
this subject (cf. Gäumann-Felix & Hofer, 2015) open up a field of teaching-learning
research with considerable potential for developing the quality of vocational educa-
tion and training. This assessment is based on the insight that examinations as the
“secret curriculum” determine the quality of vocational education and training like
no other element of control and organisation.
This is directly related to developing new concepts for improving cooperation
between learning locations (cf. Rauner, 2017, Ch. 4.3).
The verification of the hypothesis that a cross-venue competence model and the
examinations based thereon have the potential to improve cooperation between
learning venues poses a challenge for vocational education and training research
and planning, as a confirmation of this hypothesis could “solve”—or at least
alleviate—a problem that has been pending for decades. A comparison of dual-
cooperative vocational education and training in Switzerland with the dual voca-
tional education and training variant in Germany is possible on the basis of the data
from the COMET projects on nursing training in Switzerland and the European
“COMCARE” project (cf. Fischer, 2013; Fischer, Hauschildt, Heinemann, &
Schumacher, 2015; Gäumann-Felix & Hofer, 2015).

9.1.4 Teachers as Determinants of Professional Competence Development

Hattie cautiously estimates that around 20–30% of the variance in students’ learning outcomes is due to the influence of teachers (cf. Hattie & Yates, 2015, 99).
According to Hattie, this figure is much higher in numerous analyses. It is remark-
able that Hattie, in his book “Visible Learning and the Science of How We Learn”,
begins with a chapter entitled “Learning within classrooms”, since teaching-learning
research usually takes the opposite path in its attempts to clarify the laws of teaching
and learning. Experimental and quasi-experimental research designs exclude the “class effects” as interference variables (→ 7.5.1). However, this excludes the
factors that characterise “learning in classrooms”—the class-specific learning envi-
ronment. This does not rule out that there are additional effects on competence development that can only be clarified with the methods of experimental research.
In the COMET project, these include the phenomenon of stagnation in competence
development during the second half of vocational training and empirical evidence of
the transfer of problem-solving patterns from teachers/lecturers to their pupils/students.

Fig. 9.2 COMET NRW 2013, Forwarding and logistics clerks, Task 3: Planning a warehouse logistics project (TS = 43.33; V = 0.54)
After the COMET project was able to demonstrate that teachers—usually sub-
consciously—transfer their technical understanding or problem-solving pattern to
their learners at an almost 1:1 ratio, a new field is opening up for teaching-learning research. Identifying and researching the competence profiles of teachers/
trainers holds potential knowledge that is of far-reaching significance for the com-
petence development of learners. However, VET practice and research will rarely
have the opportunity to measure the competence levels and profiles of teachers/
trainers. Here, it is possible to indirectly record teacher competence on the basis of
the cited knowledge regarding the transfer of teacher competence to their pupils/
students. The competence development of the pupils indicates the competence of
their teachers.
In this context, the participation of teachers in rater training and in the evaluation
of task solutions for the open test tasks has proven to be a very effective form of
acquiring a developed (holistic) competence model. Figure 9.2 shows the compe-
tence profile of a group of trainees, which simultaneously represents the professional
understanding of their teachers.

The remarkable aspect of this result is that it was identified by the teachers whose own competence profile was the cause of the one-sided competence profile of their pupils. The one-day rater training had enabled them to identify this competence profile of their pupils—and, therefore, also their own.
progressed, the new understanding of the subject among the teachers involved in the
evaluation also stabilised in the classroom. This example points to a new field of
teaching-learning research.
The results available so far on the connection between teaching and learning
initially only show that the new quality of research methods leads to a considerable
expansion of teaching-learning research. Furthermore, a differentiation of the obser-
vation and rating procedures, with which the didactic actions of teachers can be
validly and reliably recorded (Sects. 10.7 and 10.8), holds new possibilities for
teaching-learning research on the connection between vocational competence devel-
opment and teacher competence.

9.1.5 Professional Competence Development and Professional Identity/Professional Commitment

The Development of Professional Identity

The development of professional identity, an established subject of vocational training research, is strongly stimulated by competence diagnostics. Herwig
Blankertz (1983) postulated the inextricable connection between the development
of vocational competence and identity in vocational education and training research
and in the didactics of vocational education and training. The development of
two-year occupations with a very low identification potential—as can now be
measured—including the impact this has on professional competence development,
could be avoided. The actors in career development should use the empirical data
from identity and commitment research to further develop the two-year (semi-
skilled) professions into full-fledged professions. After all, there is a lot at stake
for the companies. In professions with a low identification potential, skilled workers
are trained who—if they have an underdeveloped occupational identity—are at best
suitable to carry out simple tasks in accordance with the detailed instructions of their
superiors. Without a developed professional identity, skilled workers develop only a
very limited sense of responsibility and quality.
The expanded COMET competence/measurement model, which includes the
dimensions of identity and commitment, can now serve to develop a broad field of
vocational training research. The research results already available show that the
attractiveness of this teaching-learning research field is based on the interest of
companies in establishing productive work structures and, therefore, increasing the
competitiveness of companies in the international competition regarding quality.
This requires skilled workers who are trained and employed in professions with a
high identification potential.

Trainees are interested in attractive professions, in other words, those with which
they can identify. This can be linked to a variety of hypotheses and questions
examined in studies on the vocational socialisation of trainees (cf. Brown, Kirpal,
& Rauner, 2007; Heinz, 1995; Lempert, 2000).
One of the guiding principles here is to go beyond the extensive research
activities on professional socialisation and the development of professional identity.
The strength of the previous research is impressively demonstrated by the richness and quality of its findings. This is made clear by the discovery of social development tendencies, which
are reflected in changed structures of professional socialisation. For instance, Walter
Heinz identifies a fundamental structural change “from the practice of work ethics
through external control within the framework of regulated employment relation-
ships to the self-regulation of vocational learning processes within the framework of
risky employment processes” (Heinz, 2012, 324). And Baethge assumes a
structural weakening of occupation-based work to draw conclusions for vocational
training. Wolfgang Lempert develops methods to sensitise students preparing for the teaching profession at vocational schools to the processes of vocational socialisation. This
requires an examination of individual professional biographies and training courses
(cf. Lempert, 2006, Chap. 7).
What social science research on vocational socialisation and the development of
vocational identity has in common is that vocational training research, which is
concerned with career development and the design of training regulations and which,
in this context, deals with the identification potential of professions and their
attractiveness for trainees, plays a subordinate role. This is the central focus of
research and development tasks that deal with the connection between professional
and organisational identity and the related professional commitment.
Anglo-Saxon commitment research is oriented towards the interest in determin-
ing how organisational commitment can be increased (cf. Heinemann & Rauner, 2008).
The survey of more than 4000 trainees shows that this form of vocational research
has the potential to address development-oriented issues to increase the attractive-
ness of training professions in view of the shortage of skilled workers in the
intermediate employment sector (cf. Rauner et al., 2015b). This includes questions
regarding
• The content and quality profiles of the professions
• The sense in differentiating occupational profiles according to specialisations and
training priorities
• The development of design criteria for occupational profiles and training
regulations
• The establishment of vocational fields as a basis for vocational specialisations as
study subjects for vocational schoolteachers
The empirical data on the development of professional competence and profes-
sional identity/professional commitment collected in numerous COMET projects
make it possible to examine how the development of competence and identity
correlate with one another on a job-specific basis, which confirms the correlation postulated by Blankertz. The investigative instrumentation
now makes it possible to examine the identification potential of a large number of
professions. The studies carried out so far suggest that not only occupation-specific
and comparative studies should be carried out, but also longitudinal studies, which
can be used to examine the development of professional competence, professional
identity and professional commitment in professions that require training. The first
research results in this field of teaching-learning research already confirm the
expectation that there is an essential source for career development here.

9.1.6 Professional Competence Development: Context Data

The data of the context surveys are available for the interpretation of the test results
on competence development, offering the opportunity to identify indicators of
competence development.
This field of research is primarily concerned with the elucidation of the funda-
mental theoretical contexts of learning. There is, therefore, a clear preference for
experimental and quasi-experimental research designs in the practice of teaching-
learning research.
If, for example, the influence of test motivation on the results of competence
diagnostics is investigated, the overall sample is usually used as the basis. Both the
class-specific and the occupation-specific characteristics of the test motivation are
then not taken into account. The competence of the test participants appears exclu-
sively as an individual quantity and not—which would actually be required—as a
quantity which is decisively shaped by the learning environments of the classes and
their teachers. If both categories were correlated on the basis of individual data, the
decisive class effect would be missing, which explains 30–60% of the dependence of
competence development.
If, on the other hand, the average competence development of the classes (of a
profession) and the average motivation of these classes are correlated, then this can
help to show whether and to what extent the two values are interrelated.
Large-scale analyses are nevertheless necessary whenever fundamental rules can
be investigated. This was shown using the example of the transfer of problem-
solving patterns from teachers/lecturers.

Teacher Competence: Context Data

In measuring teacher competence (vocational training), research is still in its infancy
(cf. Rauner, 2015a; Zhao, 2015). The first broad investigation was carried out by a
research group around Zhiqun Zhao. The psychometric evaluation of the compe-
tence model showed good values (→ 10.6). Initial attempts to extend the method of
competence diagnostics for teachers from measuring cognitive dispositions to mea-
suring teacher competence in class have shown that the introduction of an

observation model has led to surprisingly high levels of interrater reliability among
teacher trainers (→ 10.7, 10.8).
For class observation, in which several teachers/lecturers always take part, this is
a prerequisite for teachers/lecturers to be able to assess at a high level of agreement.
This result suggests that the COMET competence and measurement model should be
introduced with a corresponding extension to include the observation model for the
second state examination.
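Interrater reliability of this kind is commonly quantified with agreement coefficients such as Cohen's kappa. The sketch below uses invented ratings from two raters on a three-step scale (the COMET rating procedure uses its own scales and more raters; this is purely illustrative):

```python
from collections import Counter

# Hypothetical ratings by two raters for ten observed lessons (scale 0/1/2)
rater_a = [2, 1, 2, 0, 1, 2, 2, 1, 0, 2]
rater_b = [2, 1, 2, 0, 1, 2, 1, 1, 0, 2]

n = len(rater_a)
# Observed proportion of identical ratings
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, from each rater's marginal distribution
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a.keys() | counts_b.keys()) / n ** 2

# Cohen's kappa: agreement beyond chance, normalised by its maximum
kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```

Values above roughly 0.8 are conventionally read as very good agreement, which is the order of magnitude the text describes for the trained teacher raters.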

Perspectives

The new examination/test approach would then comprise:


• Evaluating cognitive dispositions based on lesson preparation
• Evaluating teaching on the basis of the extended rating procedure
• The technical discussion on the basis of the rating results and the final assessment
of the lessons on the basis of the rating scale
It is expected that the introduction of a COMET-based examination procedure for
teacher training graduates would have an impact on the forms and content of teacher
training. This assumption is based on the experience of the COMET projects.
Teachers who had participated in one-day rater training were able to evaluate the
test subjects’ task solutions with a high degree of agreement—on the basis of the
COMET competence model. The new understanding of the subject then became embedded—often subconsciously—in their everyday teaching. At a later second test time, it
was possible to measure a significant increase in the competence of the trainees
almost across the board. The phenomenon of stagnation of competence development
had, therefore, also disappeared.

Professional Identity, Professional Commitment and Context Data

The modelling of the I-C model and the measurement methods based thereon are
already instruments that make it possible to examine this second dimension of career
development in detail. In this case, the decisive context variable is the profession.
The occupation-specific values—for example in the form of I-C profiles—show the
identification potential of professions. The identification potentials are indicators for
the attractiveness of the professions. In connection with the I-C profiles, the experts
involved in career development, therefore, have evidence-based data on the strengths
and weaknesses of the occupational profiles (from the perspective of the trainees).
The question remains as to how the identification potential of professions is
influenced by the conditions of the respective training companies and whether
there are existing regional peculiarities.
Initial analyses of the relationship between professional profiles and training
quality show to what extent the development of professional identity and
professional commitment in the individual professions depends on the quality of
training and whether there are quality patterns typical of the profession.
The range of relevant questions for teaching-learning research (occupational and curriculum development, quality assurance and development, teaching and learning forms that promote the development of professional identity, etc.) and the new questions and methods of socialisation research make this field of research suitable for further development.
This field of research also has a component of international comparison, as the
professional forms of work (and, therefore, the training professions) are of different
significance in the national vocational training systems.

9.2 Professional Competence and Work Ethic

9.2.1 Introduction: From a Function-Oriented to a Design-Oriented Vocational Training Concept

In the mid-1980s, the ability to (co-)design work and technology in social, ecological
and economic responsibility was developed as a new guiding principle for voca-
tional education and training (cf. Expert Commission for Work and Technology
1988, Chap. III, 4.; Rauner, 1988) and tested in numerous pilot projects as the basis
for the organisation and design of vocational education and training processes.1
Horst Kern and Michael Schumann supported this development with their book
project “Das Ende der Arbeitsteilung?” (“The End of the Division of Labour?”), in
which they summarised the results of the industrial sociological qualification
research of the Sociological Research Institute Göttingen (SOFI) (cf. Kern &
Schumann, 1984). The Enquete Commission of the German Bundestag (11th par-
liamentary term) “Future Education Policy—Education 2000” takes up these trends
in research and, in its final report, repeatedly emphasises the “change of perspective
from an overly narrow orientation towards adaptation to an active participation in
shaping the future society [...] and the world of work as a central education policy
orientation” (1990, 5, 20, 28). In doing so, it takes up the essential points of the
expert hearing on the “structural change of work and occupation and its relationship
to education and training with special consideration of the flexibility aspect”
(15.02.1989) and of the expert opinion on “design-oriented vocational training”
submitted by the ITB (15.02.1989). In this regard, “Design competence” is also expressly

1
These primarily include the dual model experiment (for both learning locations) “Business and
work-oriented, dual-cooperative training in selected industrial professions with optional advanced
technical college entrance qualification (GAB)” (cf. Bremer & Jagla, 2000), the dual model
experiment “Design-oriented vocational training with the learning-venue network of SMEs and
vocational schools in the Wilhelmshaven region (GOLO)” (cf. Howe & Heermeier, 1999) and the
model experiment “Innovation at the Vocational School” (NRW) (cf. Heidegger, Adolph, & Laske,
1997).
9.2 Professional Competence and Work Ethic 343

Characteristics                                     Japan    USA   Europe
Production hours per vehicle                         16.8   25.1     36.1
Assembly errors per vehicle                            60     82       92
Number of improvements recommended by employees       154      1        1
Reflection on work experience (hrs.)                380.3   46.4    173.3

Fig. 9.3 Four characteristics of lean production, including learning within the work process (ibid.,
82)

required for technical education (ibid., 30). The Commission proposes that the
Vocational Training Act should include a corresponding educational mandate.2
Only with the “International Motor Vehicle Program” (IMVP), the largest
research project on industrial development ever carried out by MIT (Massachusetts
Institute of Technology), has the change from scientific management (Taylorism) to
“lean production”—to an organisation of entrepreneurial processes oriented towards
operational business processes—been impressively demonstrated using the example
of the international automotive industry. The drama of this structural change is
evident above all in one research result: the labour productivity of the Japanese
automobile companies (which had developed and introduced the concept of lean
production) was twice as high as that of the US and European automobile indus-
tries (cf. Womack, Jones, & Roos, 1990, 9). The central factors of lean production
are above all the transfer of competences and responsibility to the directly value-
adding work processes and the reduction of horizontal and vertical division of
labour, combined with the introduction of flat company hierarchies (Fig. 9.3).
This study was of fundamental importance for vocational education and training.
It marks with great clarity the replacement of industrial vocational training, which is
characterised by scientific management (scientifically founded and practically tested
by W. Taylor), by vocational training which is oriented towards the guiding principle
of co-designing the world of work. Lean production is characterised by a shift of
competences, responsibility and quality assurance tasks to directly value-adding
work processes and a clear reduction in the horizontal and vertical division of labour.
Selected research results from the study presented by MIT in 1990 show the
great challenge for vocational education and training that is oriented towards helping
to shape the world of work. This particularly applies to the very large number of
suggestions for improvement made by employees and the reflection of work expe-
rience as a central element of learning in the work process (Fig. 9.3).
Vocational training research identified a close connection between the modern-
isation of companies through the introduction of business process-oriented

2
Anchoring this recommendation in the Vocational Training Act (BBiG) failed because of the
cultural sovereignty of the federal states. Therefore, the BBiG is also assigned to the “Economic
Constitution”.

organisational concepts and a significant upgrading of vocational training
(cf. Dybowsky, Haase, & Rauner, 1993; Ganguin, 1993; Rauner, 2000, 49).
The recommendation of the Enquete Commission for Education 2000 (of the
German Bundestag) for a fundamental change of perspective from adaptation-
oriented to design-oriented vocational education and training was given a strong
impetus by the MIT study. The new mission statement for vocational education and
training was bindingly anchored in the framework agreement of the KMK on
vocational schools (KMK, 1991) as well as in the handbook on the development
of framework curricula (KMK initially in 1996): the ability “to help shape the world
of work and society in a socially and ecologically responsible way”.3 The tradition of
vocational education and training with a systemic structure was replaced by an
educational concept structured according to fields of learning. With a nationwide
pilot programme comprising “New learning concepts in dual vocational training”
(from 1 October 1998 to 30 September 2003), an attempt was made to implement the
new learning field concept with the participation of 21 pilot schools in 14 federal
states with different emphases (cf. programme executing agency (ITB, ISB):
Deitmer et al., 2004; Zöller & Gerds, 2003). Karin Przygodda and Waldemar
Bauer come to a sobering conclusion in their evaluation of the pilot programme:
“If curriculum development is not supported by the establishment of vocational
qualification research, and if vocational training practice is not provided with
suitable methods and instruments for implementing the curricula in work process-
oriented learning situations, the learning fields reform initiative will come to noth-
ing” (Przygodda & Bauer, 2004, 76 f.).
With the methods of vocational competence diagnostics (COMET), a methodo-
logical instrument has been available for a decade that has also been assigned
didactic quality by VET practice, and which now facilitates understanding the entire
scope of the learning field concept and its implementation in VET practice. The
project coordinators for the NRW COMET sub-project “Forwarding and logistics
clerks” (SLK) report on their experiences at the conference “COMET put to the test”
(Fisher et al., 2015a). Among other things, they explain: “What is new for us is that
with the COMET concept we now have a tool with which, for the first time, we can
very precisely determine at which level we teach which competences and where
corrections in didactic action are necessary. The great surprise: We—the project
group—did not expect that so many new insights would challenge us so early in the
project. Whether and how we will translate this into a new quality of vocational
learning will become apparent in the course of the pilot project—this time, however,
on the basis of very precisely recorded competence levels and profiles of our pupils”
(Stegemann, von Eerde, & Piening, 2015, 136).

3
The KMK later extended this central idea to include the aspect of “economic” responsibility
(cf. KMK, 2000, 8).

9.2.2 The Characteristics of Vocational Education and Training (Chap. 3)

The modernisation of a heating system for a residential building installed in the
1980s is an everyday task for sanitation, heating and air-conditioning companies. A
new challenge in each case is posed by the local characteristics of the buildings and
the wishes of the clients. These wishes are not always realistic, as customers are
rarely familiar with the latest heating technologies and the possibilities of reducing
heating output through building refurbishment measures. Advising the customer
and, after detailed consultation, translating the formulated customer’s wishes into a
specification and into realistic planning and implementation of the system
includes—in addition to observing the functionality of the plant, the utility value,
the costs and consequential costs as well as the work planning—strict adherence to
environmental standards and safety or occupational health and safety regulations.
This naturally requires creativity when it comes to combining electrical and thermal
energy (combined heat and power) or integrating hot water and heating.
A professional solution can only be expected when all aspects relevant to the
modernisation of a heating system have been taken into account. It is not possible to
reduce the complexity of occupational tasks, as occasionally proposed by examina-
tion experts (cf. e.g. Griffin, Gillis, & Calvitto, 2007)—unless the skilled workers
ignore the risks of incomplete, inferior or unauthorised solutions and, therefore, also
endanger their health or damage the reputation of their company.
The concept of holistic problem-solving illustrated with this example applies to
the design and organisation of almost all professions, occupational fields and study
programmes (cf. Fisher et al., 2015a, 17 ff., Chap. 5).

Employability

It can be assumed that, contrary to the thesis of “deprofessionalisation”, the
occupational form of social work has changed and will change in its content, but not in its
fundamental significance for employees and companies (cf. Kurtz, 2001; Lempert,
2007b; Dengler & Matthes, 2015; Rauner, 2017, 1.1–1.10). For vocational education
and training, this means that it is measured by the degree to which and the quality
with which it communicates the competence profiles defined in the job descriptions.
On this depends whether the trainee can be granted the right to practise the
profession he or she has learnt.
characterising a profession are concerned—and this is the rule rather than the
exception—they must be undoubtedly (and safely) mastered.

The Contents of Vocational Training: Work Process Knowledge

The contents of vocational education and training represent purpose-related
circumstances. The world of work and, beyond that, the utility values produced by human
beings arise from the objectification of purposes, intrinsic interests and needs.
Vocational work and vocational training also mean the search for compromises in
consideration of diverging goals and criteria in the solution or processing of voca-
tional tasks. Purpose-free vocational education and training is, therefore, a contradiction in
terms. The widespread didactic tradition of applied knowledge (the derivation of the
contents of vocational learning from theoretical scientific knowledge) has been
shown in many ways to be a fundamental error (cf. above all Schoen, 1983; Heritage,
1984, 298 ff.; Schein, 1973, 39, 43; Hacker, 1996; Boreham, Samurçay, & Fischer,
2002; Böhle & Rose, 1992; Fischer, 2000a, 2000b; Rauner, 2004).

Shaping Competence

There is no right or wrong working or living world, but a process of developmental
change—a design process—in which any given scope for social and ecological
responsibility must be exploited (see above).

Professional Identity and Work Ethic

Motivation in the form of vocational commitment is of particular importance for
vocational education and training and vocational training research. It arises from the
inextricable connection between the development of professional identity and pro-
fessional competence (cf. Blankertz, 1983) and establishes a professional sense of
quality and responsibility—a fundamental prerequisite for the introduction of flat
corporate structures with a shift of competences and responsibility into the directly
value-creating work and business processes.

The Training Paradox

“Learning within the work process” poses a riddle: How does one acquire profes-
sional skills without first acquiring the corresponding theoretical knowledge? The
answer: Beginners in a profession become experts by doing what they want to learn.
Trainers support learners by confronting them with work situations that present them
with a challenge, with which they have to cope.

Fig. 9.4 Competence profiles with different competence levels and degrees of homogeneity

9.2.3 Competence Profiles for the Representation of Competence Development and Professional Work Ethic

The competence profiles represent the problem-solving patterns—and, therefore, the
technical understanding—of the trainees as well as the structure of value-related
decisions in the vocational work processes incorporated in them.
For school as the learning location, the solution space expands into a scope,
which points beyond the solution space given by the general conditions of a specific
order situation of a company. (cf. Dewey, 1916, 316 ff.; Rauner, 1988). In this
context, special importance is attached to action-explaining and action-reflecting
knowledge, which go beyond action-leading knowledge.
When the competence profiles of the test participants are analysed, it does not
suffice to look at the degree of homogeneity alone; the level of competence and
knowledge achieved must also be considered (Fig. 9.4).

Examples

In these examples, both the trainees of the medical assistants (MFA) and the
industrial clerks (INK) achieve a comparably high competence level of TS = 54.9
and TS = 46.5, respectively, in contrast to the shipping clerks (SPKA). Their
competence profiles are (very) homogeneous, with V = 0.18 for the MFAs and
V = 0.19 for the INKs. The still high competence level of the SPKAs (TS = 44.0)
is, however, significantly less homogeneous, with V = 0.31. A look at the
characteristics of the competence profiles reveals the cause of the inhomogeneity
of the SPKA competence profile: the sub-competences K6 (environmental
compatibility), K7 (social compatibility) and K8 (creativity) are underdeveloped
compared to the other sub-competences. The competence profiles of the INKs and
MFAs show that these trainees are not only able to solve professional tasks in their
full complexity, but also to consider the value of all (!) solution criteria at a high
level of knowledge.
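
The homogeneity measure used here can be illustrated with a short sketch. It assumes, as a simplification, that a competence profile is given as eight sub-competence scores K1–K8 and that V is the coefficient of variation (standard deviation divided by mean) of these eight values; the scores below are hypothetical, not COMET data.

```python
from statistics import mean, pstdev

def profile_homogeneity(profile):
    """Coefficient of variation V of a competence profile.

    A low V indicates a homogeneous profile: all eight solution
    criteria are considered to a similar degree.
    """
    scores = list(profile.values())
    return pstdev(scores) / mean(scores)

# Hypothetical profile with underdeveloped K6-K8 (cf. the SPKA pattern):
spka_like = {"K1": 55, "K2": 52, "K3": 50, "K4": 48,
             "K5": 47, "K6": 30, "K7": 32, "K8": 28}
# Hypothetical balanced profile at the same overall level:
balanced = {"K1": 46, "K2": 44, "K3": 43, "K4": 45,
            "K5": 42, "K6": 41, "K7": 40, "K8": 41}

print(round(profile_homogeneity(spka_like), 2))  # markedly higher V ...
print(round(profile_homogeneity(balanced), 2))   # ... than the balanced profile
```

Both toy profiles have the same mean score; only the neglect of K6–K8 in the first one drives V up, which is exactly the pattern described for the SPKA profile.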

The Professional Understanding and Problem-Solving Patterns of Teachers as Determinants of the Homogeneity of Their Students’/Trainees’ Competence Profiles

All four INK classes taking part in the test achieve (in the second main test)
significantly higher and more homogeneous levels of competence than the test
participants in the industrial and technical occupations.
This also refutes the widespread prejudice that commercial vocational training
requires subject-specific (semi-academic) training rather than training based on
learning fields. Now the opposite has come to light! Only the MFA trainees and
the nursing students at the Swiss colleges of higher education have demonstrated
such a consistent and successful implementation of the learning field concept as in
the sub-projects MFA and INK in the COMET NRW project. This also indicates that
these project groups have succeeded in consistently implementing the criteria for
(test) task development on the basis of the pre-test results. This becomes clear when
comparing the competence profiles of the INK pre-test participants (tasks 1, 4, 5, 6)
with the competence profiles of the same (revised) tasks in the main test (Fig. 9.5).
Within the framework of competence diagnostics in vocational training, the
pre-test has the function of testing the drafts for test items developed by the
vocational project groups (usually teachers) and of selecting and revising items
which are used in the test. In a one-day training session, the developers of the test
items acquire the ability to evaluate the item solutions of the test participants (rater
training). The review of interrater reliability—the degree to which their assessment is
consistent—shows that this is regularly successful.
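
How consistently the trained raters score can be checked with a simple agreement measure. The sketch below uses hypothetical ratings and plain pairwise exact agreement; it is meant as an illustration only, not as the reliability statistic applied in the COMET projects.

```python
from itertools import combinations

def mean_pairwise_agreement(ratings):
    """ratings: dict rater -> list of item scores.

    Returns the mean share of items on which a pair of raters
    assigns the identical score (averaged over all rater pairs).
    """
    pairs = list(combinations(ratings.values(), 2))
    agree = [sum(a == b for a, b in zip(r1, r2)) / len(r1)
             for r1, r2 in pairs]
    return sum(agree) / len(agree)

# Hypothetical ratings by three trained raters on ten rating items:
ratings = {
    "rater_1": [3, 2, 2, 1, 0, 3, 2, 1, 2, 3],
    "rater_2": [3, 2, 1, 1, 0, 3, 2, 1, 2, 3],
    "rater_3": [3, 2, 2, 1, 0, 3, 2, 2, 2, 3],
}
print(round(mean_pairwise_agreement(ratings), 2))
```

A value close to 1 would indicate near-perfect agreement; in practice, tolerance-based or chance-corrected coefficients are preferable to exact agreement.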
The drafts of the test items represent the technical understanding and problem-
solving patterns of the teachers at the beginning of a project (prior to rater training).
Above all, the task profiles show whether the eight solution criteria were taken into
account by the development teams in the situation descriptions of the test tasks and,
if so, with what significance. The competence profiles of the test items, therefore,
represent not only the competence characteristics of the pre-test participants, but also
those of the teachers (developers of the test items) (Fig. 9.5).

Fig. 9.5 Comparison of the competence profiles of the selected test items in the pre-test and in the
main test

9.2.4 The Relationship Between the Level of Competence and the Homogeneity of Competence Development

The psychometric evaluation of the COMET competence model confirms the
hypothesis of successive and interrelated competence levels (cf. Erdwien & Martens,
2009, 80).
On the basis of this finding, the hypothesis can be substantiated that the homo-
geneity of competence development (competence profiles) also increases with
increasing competence levels. Whether this hypothesis can be empirically confirmed
was examined on the basis of data from the COMET projects comprising Hesse
electronics technicians and NRW auto-mechatronics engineers.
The correlation between the two variables was calculated on the basis of the mean
competence level (as the total score) and the mean coefficient of variation (V ) as a
measure of the homogeneity of the competence profiles of the participating classes
(Figs. 9.6 and 9.7).
There is a high negative correlation of r = −0.64 for both surveys (E-B, E-EG
Hessen and NRW) and a very high correlation of r = −0.84 for the 20 NRW AME
classes. It can, therefore, be shown that the level of competence correlates strongly
with the degree of homogeneity of the competence profiles. At the same time, this
means that there is a close connection between the development of professional
competence and professional work ethic.

Fig. 9.6 Correlation (r = −0.64) between V and TS in 28 E-B and E-EG classes of the (Hesse)
Electronics Technicians project

Fig. 9.7 Correlation between V and TS in 20 NRW AME classes (r = −0.84)
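
The reported correlation can be reproduced in principle with a few lines of code. The class-level values below are hypothetical illustrations (the project data are not reproduced here); the sketch only shows how a Pearson correlation between the class mean total score (TS) and the mean coefficient of variation (V) is computed. A negative r means: the higher the competence level, the lower V, i.e. the more homogeneous the profiles.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equally long series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical class-level results: mean total score TS and mean V per class.
ts = [30.1, 35.4, 41.0, 44.8, 50.2, 54.9]
v = [0.41, 0.36, 0.33, 0.28, 0.24, 0.19]

print(round(pearson_r(ts, v), 2))  # strongly negative for this toy data
```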

Competence Profiles and Professional Work Ethic

The reflected balancing of the criteria relevant for the solution or processing of a
professional task is always associated with value decisions: Sustainability, function-
ality, environmental and social compatibility must be balanced against each other in
relation to the situation. Professionals who plan and carry out their work tasks
competently are, therefore, inevitably involved in the responsible balancing of
values. Professional competence and professional work ethic are, therefore, inextri-
cably linked. Matthew Crawford explained this thesis with an example from his
motorcycle repair shop: “Rattling pistons (on a motorcycle) can indeed sound like
too much valve clearance, which is why a good mechanic must always be attentive
and keep in mind the possibility that he is pursuing the wrong hypothesis. This is an
ethical virtue” (Crawford, 2016, 132). In general, he comes to the conclusion: “In
contrast to the assessment of cognitive psychologists (or better said: outside the
scope of their discipline defined by them) this cognitive competence—to ponder
one’s own way of thinking—seems to spring from a moral quality” (ibid., 131).
The COMET competence model and the competence profiles recorded on this
basis illustrate what this insight means in detail.
The consideration of the highest possible functionality and at the same time an
equally high utility value in the task solution—within a given cost framework—as
well as the inclusion of the regulations for environmental and social compatibility
refer to the complex structure of responsibilities that professional specialists cannot
avoid. This includes informative consulting for customers all the way to dealing with
conflicts with the client, when specialists are confronted with the unrealistic or
irresponsible requirements of their customers.
Professional work ethic forms with the development of professional competence
and professional identity and leads to a certain tension between professional and
organisational identity. For example, a sense of professional responsibility—as an
expression of work ethic—can conflict with the company’s business interests and,
therefore, with its own organisational identity when it comes to carrying out a
business assignment. Professional competence at the level of action-reflecting
knowledge is, therefore, an essential prerequisite for responsible professional action
and, therefore, for professional shaping competence.
In this context, Helmut Heid rightly criticises the mediation of abstract, econom-
ically desirable, moral values—detached from the content of vocational education
and training: In the “personal and qualificational statements about the increasing
importance of moral components of desired qualifications [...]—provided that the
values are ‘purified’ or separated from their contents in prevailing debates—the
predominating abstracts include willingness to perform, sense of responsibility,
ability to adapt, criticise and cooperate and other ‘virtues’ apostrophised as simple
‘key qualifications’” (Heid, 2006, 40 f.). If one “cleanses” the values to be taken into
account when solving professional tasks, such as utility value, environmental and
social compatibility, etc., then one splits professional competence into pure profes-
sional competence and a moral education abstracted from the real work processes, as
it is justifiably established, for example, in Ethics as a school subject. The didactics
of moral action in profession and economy and the relevant research were based on
the moral psychology of Kohlberg (1996). With their categories and models
abstracting from the content of VET, they contribute to widening the distance
from the central idea of vocational education and training oriented towards learning
fields.
Alexander Lenger suggests the opposite approach—but for higher education. The
study of economics should be guided by the insights gained in the training of social
workers. He refers to Becker-Lenz and Müller-Hermann (2013), who advocate the
position that ethical competence is an integral part of technical competence
(cf. Lenger, 2016, 168).
If one follows the proposals for the vocationalisation of higher education as set
out in the Bologna reform,4 it would be consistent to regulate “higher vocational
education” in an amended Vocational Training Act. This would have the advantage
that the job descriptions and curricula would be developed with the significant
participation of experts from the social partners and the responsible federal minis-
tries. This highlights a previously rarely clarified weakness in the understanding of
science. In the chapter on “University and the Scientific System” written by Jürgen
Klüver in the Encyclopaedia of Educational Science, the basic principles of scientific
research and teaching are presented with rare clarity. The text, written in 1995, also
shows what remains of the scientific ideal of the 1990s.
“The special aspect of university when compared to all other educational institu-
tions is that it is fundamentally concerned with generating science. University
didactics must therefore be above all science didactics. [...] The educational function
of the university clearly recedes in the classical self-understanding of the university
[...] and can only be understood [...] in its specific form if it is understood as largely
determined by the respective scientific discipline.” Klüver then refers to the connec-
tion between research and teaching: “To this end, the student is introduced to the
most important sub-areas, basic concepts, procedures, theories and assured results of
this discipline” (Klüver, 1995, 78, 84).5
The German Research Foundation (DFG) and the German Science Council
(Wissenschaftsrat), as the entities responsible for implementing the Excellence

4
The USA has a long tradition of professionalising higher education. It is primarily fed by the
“College for All” policy (cf. Wyman, 2015).
5
A way out of the difficulty of integrating vocational and higher education into a modern educa-
tional architecture and thereby solving the tiresome problem of permeability is offered by an
architecture of “parallel educational paths” with an independent and consistent dual educational
path “from apprentice to master to master (professional)” for the qualification of specialists and
managers and a parallel “excellent” academic-scientific educational path, which could again devote
itself to its actual task: the increase of scientific knowledge in the structure of the further differen-
tiating scientific disciplines. (cf. Rauner, 2017, 2018a)

Initiative, base their explanations and instructions on “good scientific practice” on
the memorandum presented by the DFG in 1998, which was updated during the
preparation of the Excellence Initiative. The updates do not concern the self-image
of science in research and teaching, but rather improvements in the system of self-
regulation, for example, in the supervision of young scientists and the prevention of
misconduct by individual scientists.
The system of self-regulation of higher education institutions is based on the
freedom of science enshrined in the Constitution and means that only science itself
can guarantee good scientific practice. The promotion of young scientists is of
particular importance for the development and maintenance of science.
Further key statements of the DFG memorandum clearly show that the HEIs as
potential applicants for the Excellence Initiative are reminded of their central task:
“to foster and develop the sciences [...] through research, teaching and study”. HEIs
are, therefore, “comprehensively legitimised, but also obliged, to shape their internal
order in such a way that science can be shaped according to its immanent values and
standards” (DFG, 1998, 15). In its case law, the Federal Constitutional Court has
repeatedly confirmed the self-image of science documented in the DFG Memoran-
dum on “Securing Good Scientific Practice”. It makes clear in its case law that the
Constitution—Article 5 (3) of the Basic Law—protects “what is to be regarded as a
serious attempt to establish the truth”.
Furthermore, business education is confronted with the phenomenon that eco-
nomic science and business education often do not differentiate between science,
business and economic policy. Hans-Carl Jongebloed, for example, laments an
understanding of pure economics: “The ‘postulate of freedom of value’ and the
economic principle assumed to be ethically neutral in this respect still therefore
characterise the interests of knowledge and action in obtaining economic results
today”. It is obvious that the “principle of pure economic efficiency should be
oriented to the standards of moral principles” (2006, VIII f.). As Jongebloed cannot
differentiate between gaining economic knowledge, achieving vocational compe-
tence and economic action with the category of “obtaining economic results”, he
becomes entangled in irreconcilable contradictions. Julian Nida-Rümelin’s interest-
ing attempt to philosophically establish a “humane economy” is very similar. In his
critical examination of the category of “economic rationality”, he rarely differenti-
ates between economic knowledge and economic action. For example, he explains:
“Market fundamentalists from economics and philosophy lay the conceptual founda-
tions of a policy that assigns the market the role of the fundamental framework of
order [...]. In the crisis of the global financial markets in 2008 ff. it becomes clear
that it is a mistake to assume that the necessary rules will emerge in the market itself
[...]. The assumption of market radicals [is based] on a fundamental error of
thought” (Nida-Rümelin, 2011, 25). What remains open is what the fundamental
error of thought refers to—economic theories or economic action or both. Only on
the basis of a differentiation between theoretical (context-free) scientific knowledge
that students of economics acquire in their disciplinary studies and context-related
professional action knowledge—the basis for professional action—can professional
ethics be integrated into business education and didactics, as shown by the COMET
NRW project with its two commercial professions. This would also eliminate the
widespread practice of integrating business ethics into the university curriculum of
economics. Like the ethics of technology, business ethics is a subject of applied
philosophy (cf. Pies, 2016). In project studies, it is of course a good idea to interrelate
the findings and orientations of business ethics, economics and other subjects.
In vocational education and training, it could be possible to orient vocational
learning towards learning fields and vocational design competence on the basis of
vocational specialisations and vocational didactics as well as vocational scientific
research.

9.2.5 Conclusion

The COMET test procedure enables the representation of test results in the form of
competence profiles.
The eight-dimensional competence profiles, therefore, indicate the level at which
professional competence with its sub-competences is achieved and the technical
problem-solving patterns of the test participants. The competence profiles show
whether they have reached employability. The competence profiles represent
the weighting of the values incorporated in the eight sub-competences and, therefore,
the expression of the vocational competence and work ethic of the skilled workers.
The problem-solving patterns of the test participants are, therefore, also the patterns
of their professional ethics. The coefficient of variation V can be used to quantify the
degree of homogeneity of competence profiles and, therefore, also the extent of
professional ethics. This provides competence diagnostics with an instrument to very
precisely and vividly examine the extent to which vocational education and training
succeeds in implementing the guiding principle of vocational education and training:
“the ability to help shape the world of work in a socially, ecologically and
economically responsible manner”.

9.3 Professional Identity and Competence: An Inseparable Link

9.3.1 Justification of the Hypothesis

Different research traditions have dealt with professional identity, professional
commitment, organisational commitment and the emotional attachment of
employees to their company. Since Wolfgang Lempert’s extensive research
work on the vocational socialisation of trainees and skilled workers, the develop-
ment of vocational identity has been an object of professional and vocational training
research (cf. Hoff, Lappe, & Lempert, 1991; Lempert, 2006). The Collaborative

Research Centre “Status Passages” of the University of Bremen had a relevant
research focus (cf. Heinz, 2006), and the professional form of social work and its
significance for the development of professional and work ethics were the subject of
empirical studies by a research group led by the Swiss sociologist Carlo Jäger
(cf. Jäger, 1989).
Organisational and management research—especially in the USA (→ 4.7)—has a
long tradition of research on organisational commitment. Aron Cohen subsequently
developed the “multiple commitment model” (cf. Cohen, 2007). As a guest
researcher at the Institute of Technology and Education (ITB), he participated in
the development of an “Identity-Commitment (I-C) Model”, which can be based on
both vocational pedagogical and organisational sociological research.
Herwig Blankertz (1983) looked at the connection between the development of
professional identity and professional competence from a vocational pedagogical-
didactic perspective and established the thesis of a necessary connection between the
two dimensions of vocational development. Since then, vocational education
research has been challenged to clarify this connection.
In his introductory lecture to the symposium “Secondary level II didactics and
identity development in adolescence” (eighth Congress of the DGfEW), Herwig
Blankertz explained that the connection between professional identity and compe-
tence was (also) investigated in the NRW project on collegiate school: “The drama
of dual qualifying educational programmes consists in the fact that competence
development here is regulated by sense structures that demand a change of perspec-
tive from the pupils: they must anticipate a specific professional role and identify
with it—as otherwise competence development would be impossible” (Blankertz,
1983, 139). Competence, however, is again dependent on the learner being reflected
in the tasks and, therefore, forming his or her own identity (ibid., 140). Andreas
Gruschka, who reported on the dual qualifying educational course for educators in
this symposium, justified the questions for the evaluation study on competence
development and subject-specific identity formation: “[We investigate] the contri-
bution of a lesson that seeks to combine vocational and science-propaedeutic
learning to competence development and identity formation [of educators]”
(Gruschka, 1983, 144).
When planning the study, however, he had to recognise that “there was no theory
and conception of social-professional or pedagogical competence development” that
would have been available for the evaluation study (ibid., 144). The great merit of
this experimental model lies in the justification of the hypothesis of the interrelation
of professional competence and identity development and in a first attempt to
empirically test this hypothesis, which is very fundamental for vocational education
and training.
Only the COMET project facilitates methods and research results which make it
possible to examine in detail the connections between professional competence
development and the development of vocational identity, vocational and
organisational commitments as well as the connection between professional com-
petence development and work ethic.
356 9 The Contribution of COMET Competence Diagnostics to Teaching and Learning. . .

Table 9.2 Participation in the context survey by profession, COMET NRW second test time

Profession                                                  Number of participants
Auto-mechatronics engineer                                  240
Industrial mechanic                                         n/a (insufficient number of participants)
Electronics technician for industrial engineering           64
Electronics technician for energy and building technology   128
Medical assistant                                           140
Carpenter                                                   62
Forwarding and logistics clerks                             89
Industrial clerks                                           63

9.3.2 Methodical Approach

On the basis of the data from the COMET NRW project (COMET Data Report
2015), it was examined how the I-C dimensions correlate with professional
competence development.
More than 1000 trainees took part in the COMET NRW project (at the second test
time). The majority of test participants also took part in the context survey (Table 9.2).
The total scores (TS) of the competence measurement and the data from the I-C
survey of trainees from eight professions were selected as measurement variables.
The following data were available:
• The TS mean values for the vocational school classes of the eight training
occupations involved in the COMET NRW project
• The individual assessments of the trainees on the dimensions of the I-C model
The calculation of the correlation values is profession-specific. A cross-
professional calculation of the correlation coefficients between professional compe-
tence and the dimensions of the I-C model is not possible because the competence
characteristics are profession-specific characteristics. The training professions differ
in their level of demand, with the indicator primarily being the previous schooling of
the trainees. An equally important differentiation between professions are the dif-
ferent skills that are required for successful training, such as linguistic,
mathematical-scientific or creative-artistic skills (cf. Gardner, 1999). It is, therefore,
not possible to determine a cross-professional level of difficulty for test items. For
the calculation of the correlation coefficients, this means determining the profession-
specific correlations. In a second step, the results can then be compared with each
other across different professions.
If the class-specific competence and identity/commitment values are correlated,
the points in a scatter diagram map the values of the competence characteristics (TS)
as the independent variables and the values of the dimensions of the I-C model as the
dependent variables. Each point of the scatter diagram represents the two dimensions
of a (school) class to be correlated and, therefore, also its specific learning environ-
ment. A database based on the competence and identity values of all test participants
would not allow the calculation of correlations, as the decisive determinant of
competence development, the learning environment of the specific classes, could
then not be taken into account.
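The profession-specific correlation step described above can be sketched as follows: each scatter point pairs a class mean total score (TS) with the class mean of one I-C dimension, and a Pearson coefficient is computed per profession. All class values below are invented for illustration:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient of two equally long value lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# One scatter point per vocational school class of one profession:
# (class mean TS, class mean professional identity on a 1-5 scale)
ts_means = [25.6, 31.2, 38.4, 44.0, 54.9]  # invented class means
pi_means = [2.8, 3.0, 3.3, 3.4, 3.9]       # invented I-C means
print(round(pearson_r(ts_means, pi_means), 2))
```

Because the class, not the individual, carries the learning environment, the lists hold one entry per class rather than one per test participant.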

9.3.3 Test Results

The correlation values confirm the hypothesis that the development of profes-
sional competence and identity (as well as professional commitment) are more
or less closely interwoven.

Based on the class mean values, there are very significant medium to very high
correlations between professional competence development and professional and
organisational identity and the willingness to perform based thereon.
This connects to a number of other sub-results. Professions with below-average
identification potential are characterised by underdeveloped professional identity
and competence development and correspondingly low professional and
organisational commitment (Table 9.3).

Table 9.3 Correlations between professional competence measured as average total
scores (TS) of the classes involved and the corresponding I-C averages

              PI     PC     OI     OC     WE     r: PI - PC
Auto Mec.     0.10   0.47   0.10   0.22   0.52   0.67
EIE           0.49   0.34   0.81   0.38   0.29   0.83
EEB           0.28   0.51   0.47   0.67   0.42   0.82
MSA           0.67   0.46   0.53   0.41   0.41   0.19
J             0.74   0.79   0.60   0.77   0.42   0.76
FLSC          0.31   0.52   0.60   0.56   0.80   0.91

Shading in the original distinguishes medium, high and very high correlations.
The differentiations in correlation levels were made according to Cohen (1988).
PI professional identity, PC professional commitment, OI organisational identity, OC
organisational commitment, WE work ethic
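The correlation levels marked in Table 9.3 follow Cohen (1988). A small classifier makes the banding explicit; note that the cut-off values 0.3, 0.5 and 0.7 are an assumption based on common conventions, as the source states only the band names:

```python
def correlation_band(r, cuts=(0.3, 0.5, 0.7)):
    """Classify |r| into the bands used in Table 9.3.
    The cut-offs are assumed conventional values, not given in the source."""
    magnitude = abs(r)
    if magnitude >= cuts[2]:
        return "very high"
    if magnitude >= cuts[1]:
        return "high"
    if magnitude >= cuts[0]:
        return "medium"
    return "low"

for r in (0.83, 0.52, 0.34, 0.10):
    print(r, correlation_band(r))
```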

Fig. 9.8 Correlation of professional commitment of electronics technicians, all classes

Electronics Technician (Fig. 9.8)

If the two electronics professions are combined, there is a very significant high
correlation between identification with the profession and the training company
(r = 0.822; 0.832). Professional/organisational identity correlates highly with
organisational and professional commitment (r = 0.514; r = 0.8).
For electronics technicians, the hypothesis that there is a close connection
between competence and identity can be confirmed.

Car Mechatronics

Training to become a car mechatronics technician results in a different I-C pattern (Fig. 9.9).
Highly significantly, the professional competence of car mechatronics trainees
correlates at a medium to high level with their professional commitment (r = 0.473)
and with their work ethic (r = 0.518). There is a rather low correlation between
professional competence and organisational identity.

Carpenter (Fig. 9.10)

At a very high level of significance, the correlation between professional competence
and all five dimensions of the I-C model shows medium to high correlation values.
Competence and professional identity correlate with one another at a particularly
high level (r = 0.67).

Fig. 9.9 Correlation of professional commitment of car mechatronics, all classes

Fig. 9.10 Correlation of professional commitment of carpenters, all classes

Medical Specialist Assistant (Fig. 9.11)

For the medical assistants, as for the carpenters, medium to high correlations result
for all dimensions of the I-C model on the basis of the class averages. The highest
correlation between professional competence and professional identity is also found
here: r = 0.668.

Fig. 9.11 Correlation of professional commitment of medical assistants, all classes

9.3.4 Conclusions and Perspectives

It is all too often overlooked that every vocational training programme has two
objectives: the development of professional competence and the development of
professional identity, together with the willingness to perform based thereon.
A high level of professional competence combined with a lack of professional
commitment marks vocational training that has not fully achieved its objectives.
Conversely, it is also true that skilled workers with a very high level of occupational
commitment but a lack of professional competence are a risk in the work process. It is,
therefore, important to convey a balanced relationship between competence and commitment.
It is a great advantage for the development of professional identity if pupils are
given the opportunity from the beginning of their school years to prepare themselves
thoroughly for their career choice and if they then also have the chance to be
trained in their desired profession. Training in the desired profession strengthens the
development of professional identity and a sense of responsibility and quality.

The Criteria of Modern Professionalism

An objective prerequisite for the development of professional identity, however, is
the existence of professions with a high potential for identification. The studies show
that there are numerous professions with little to very little potential for identification (→ 8.5.5
and 8.5.6). This applies in particular to the abbreviated two-year training profes-
sions, which tend to have the image of unskilled professions. Even the best trainer
cannot compensate for this deficit. This calls on those involved in vocational
research and career development to develop criteria for modern professionalism
and to apply them consistently when introducing and modernising professions. In
addition to the small group of virtually timeless, traditional craft occupations, the
future belongs above all to the broadband core professions (cf. Rauner, 1988; KMK,
1996) with their open-development professional profiles. The principle of speciali-
sation and differentiation of professional profiles according to specialisations should
be replaced by the principle of exemplarity. The application-oriented design of open
professional profiles is the responsibility of the regional vocational training dialogue,
so that the local and regional fields of application of companies can be better
integrated into vocational training.

Identity, Willingness to Perform, Sense of Quality and Responsibility

The emotional bond between specialists and companies has diminished in recent
decades. This is the result of successful flexibilisation of the labour market. It is
precisely for this reason that a cooperative training and working atmosphere in
companies plays an outstanding role in successful in-company vocational training.
The high values measured in this study for work ethic in numerous training
professions require critical reflection in and with the training companies. Work
ethic is defined as the willingness to obey the detailed instructions of superiors
without understanding or questioning the significance of the work tasks for the
company’s operations. These guiding principles of the tradition of Taylorist working
structures should be a thing of the past. With the operational organisational struc-
tures aligned to the operational business processes, the aim is to introduce flat
hierarchies in the employment structure and, therefore, also to shift competences
and responsibilities to the directly value-adding processes. This applies in particular
to the training of skilled workers. They learn to think in business processes so that
they understand what they are doing and develop the ability to take on quality
assurance tasks—according to the guiding principle of modern organisational devel-
opment: producing quality instead of controlling it.

9.4 Training Qualities and Competence Development

9.4.1 The Question

The fact that the quality of vocational education and training has an impact on
competence development seems immediately obvious and, therefore, does not call
for an empirical examination of this correlation. If we ask more precisely, for
example, about quality criteria in vocational education and training that can be
used to distinguish the influence of learning venues, the differences between training
professions and the competing forms of training at the same level or at levels that
build on one another, then we discover only a few empirical findings. This is also due
to the weaknesses in competence diagnostics, which have only been overcome in
recent years (cf. Fisher et al., 2015a), and in research on the quality of vocational
training.
In the COMET projects, test participants are regularly questioned about their
assessment of training quality as part of the context analyses in order to obtain data
for interpreting the test results. An analysis of the relationship between the assess-
ment of the quality of dual vocational training by trainees and the development of
skills must take account of the fact that the quality of training is assessed from the
trainees’ subjective perspective. The importance of this factor results from the fact
that the trainees state how they experience the quality of the training. The objecti-
fying element of the survey is based on scales that were developed to assess the
quality of training at the two learning venues of dual vocational education and
training and on learning location cooperation (Tables 7.18 and 7.19). As it is
assumed that the subjective state of mind of learners in their training influences
their learning behaviour, this suggests that the connection between the subjective
assessment of training quality by trainees and the objective values of competence
measurement needs to be clarified. This does not rule out examining the quality of
training according to objective quality criteria.

9.4.2 Methodical Approach

These studies are based on the data on the development of professional competence
collected using COMET competence diagnostics methods within the framework of
the COMET NRW project. Extensive data on the quality of vocational education and
training—from the trainees’ perspective—were collected in the same project.

The Assessment of the Training Quality by the Trainees

As part of the COMET (NRW) project, around 1000 second- and third-year trainees
from eight training professions in two consecutive training years (2013, 2014/15)
were asked about the quality of their training.
The assessment of training quality based on a survey of trainees and vocational
school students (dual vocational schools) refers to the two learning venues of dual
vocational training and learning venue cooperation (see methodological instruments
pp. 230–233).

9.4.3 Results on the Relationship between Competence and Training Quality

To illustrate whether and how test participants are able to objectively assess the
quality of vocational education and training, competence profiles of classes and
professions are compared with the corresponding quality profiles.
The competence profiles and quality diagrams of the eight professions of the
COMET NRW project (N = 700, second test time) are compared in Fig. 9.12.
The competence levels and the homogeneity of the competence profiles show
considerable differences between the professions. The total score differs between a
TS of 25.6 for electronics technicians for energy and building technology and a TS
of 54.9 for medical assistants. This result also confirms the finding that with a high
probability the degree of homogeneity of the competence profiles increases with an
increasing competence level (→ 9.2.3).
It is noticeable that the test participants—despite the large differences in the
competence characteristics—assess the quality of their training in the eight pro-
fessions in roughly the same order of magnitude. The quality diagrams differ only in
the weighting of individual quality criteria. The trainees tend to differentiate between
the eight quality criteria when weighing them up, so that profession-characteristic
diagrams are produced. This applies, for example, to the different assessment of
school-based and in-company training. For example, trainees in the industrial and
technical professions of electronics technician for industrial engineering and indus-
trial mechanic have a clear preference for the company as a place of learning. By
contrast, the differences in the assessment of learning venues are less pronounced
among trainees in the skilled trades and in the three service professions (the latter
with significantly higher competence values). There is broad consensus among the
trainees of all eight professions that learning venue cooperation is the decisive
weakness of dual vocational training. They make a very clear distinction between
the contextual and structural dimensions of learning venue cooperation.

If this comparison of the subjective assessment of training quality with
the values of competence development is summed up, it can be seen that
the very different (mean) competence levels of the professions do not affect the
subjective assessment of the quality level of training in these professions. The
two learning venues are weighted differently in the evaluation.
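The quality diagrams compared here aggregate the individual trainees' ratings per quality criterion into a mean profile per profession. A minimal sketch; the criterion names and ratings are invented, not the actual COMET scales:

```python
from collections import defaultdict
from statistics import mean

def quality_profile(individual_ratings):
    """Aggregate trainee ratings (criterion -> value on a 1-5 scale)
    into the per-criterion means plotted in a quality diagram."""
    by_criterion = defaultdict(list)
    for ratings in individual_ratings:
        for criterion, value in ratings.items():
            by_criterion[criterion].append(value)
    return {c: round(mean(v), 2) for c, v in by_criterion.items()}

trainees = [  # invented ratings of two trainees
    {"in_company_training": 4.0, "teaching_quality": 3.0},
    {"in_company_training": 3.5, "teaching_quality": 3.4},
]
print(quality_profile(trainees))
```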

The Quality Criteria Correlate Differently with the Values of the Competence Level

In the assessment of in-company training quality, the correlation values for profes-
sional competence are significantly higher than those for school-based learning.

Fig. 9.12 For comparison: Competence profiles and quality diagrams for eight occupations
(Rauner, 2018b, 66)

A characteristic feature of dual vocational education and training is that trainees are
unanimously convinced that learning venue cooperation—coordinated learning at
the company where training is provided and at the vocational school—hardly
contributes to the content-related quality of their training. On the other hand, they
are convinced that the low quality of the structure of learning venue cooperation
impairs their competence development (see Fig. 9.13).
Fig. 9.13 Correlation coefficient r (Pearson); **: Correlation is significant at level 0.01
(two-sided), total score (competence level)—quality criteria

If, on the other hand, trainees succeed in gaining insights into the significance of
their activities for in-company business processes within the framework of
in-company training, this has a positive effect on their competence
development (r = 0.238).
According to the trainees, teachers and trainers contribute equally to their train-
ing. The higher the competence of the trainees, the more positively they assess their
trainers and teachers. However, the low correlation coefficient of r = 0.1 is a clear
indicator that profession- and class-specific differences can be expected.
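Significance statements of the kind given in Fig. 9.13 (e.g. r = 0.238 at the 0.01 level, two-sided) can also be checked without distribution tables by a permutation test: shuffle one variable repeatedly and count how often the shuffled |r| reaches the observed |r|. A stdlib-only sketch with invented data:

```python
import random
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient of two equally long value lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def permutation_p(x, y, trials=2000, seed=1):
    """Two-sided p-value estimate: share of random pairings whose |r|
    is at least the observed |r|."""
    rng = random.Random(seed)
    observed = abs(pearson_r(x, y))
    ys = list(y)
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if abs(pearson_r(x, ys)) >= observed:
            hits += 1
    return hits / trials

x = list(range(30))
y = [2 * v + 3 for v in x]  # perfectly linearly related: p close to 0
print(permutation_p(x, y))
```

This is only a methodological illustration; the COMET reports themselves state significance levels without documenting the test used here.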

Differentiations According to Professions

Training Quality (Companies) (Fig. 9.14)

The higher the average competence level of a profession, the more positively the
quality of in-company vocational training is assessed. The mean values vary
between CM = 3.5 (EEB) and CM = 4.0 (IM). If one compares the mean values
of the classes of professions, the positive correlation can also be seen for the
occupations IC, FLSC, J and EIE (Fig. 9.15).
The MA classes form an exception. The higher-performing MA classes rate the
quality of training less favourably (Fig. 9.16).

Fig. 9.14 Training quality (demand/level of tasks)

Fig. 9.15 Training quality (demand/level of tasks), carpenter classes

Fig. 9.16 Training quality (demand/level of tasks), MA classes



Fig. 9.17 Learning venue cooperation (content)

Learning Venue Cooperation (Fig. 9.17)

The assessment of learning venue cooperation (in terms of content) correlates
positively with the competence values. The mean values of the occupations vary
between CM = 2.6 (EIE) and CM = 3.3 (MA). The higher-performing professions
(IC, FLSC, MA) tend to rate the quality of learning venue cooperation in terms of
content higher than the lower-performing (industrial-technical) professions. Only
weak class-specific effects were measured.
It is noteworthy that trainees clearly distinguish between the content and struc-
tural characteristics of learning venue cooperation. The negative correlation coeffi-
cient (r = -0.011) applies equally to the structural learning venue cooperation of all
professions. The mean values for the quality values vary between CM = 2.6 (EIE)
and CM = 2.9 (IM).

Training Support (Trainers) (Fig. 9.18)

Training support in all professions is rated moderately positively. The mean values
vary between CM = 3.3 (E-EC) and CW = 3.8 (IM). The higher-performing classes
tend to experience the training slightly more positively. The MA classes also form an
exception here. The higher their competence level, the more critically they assess
their trainers.

Trainer Assessment

The average assessment level corresponds approximately to that of the trainers.
However, trainees in higher-performing professions rate their teachers more posi-
tively than trainees in lower-performing professions, although class-specific effects
occur (Fig. 9.19).

Fig. 9.18 Training support (trainers)

Fig. 9.19 Trainer assessment

The average class values for all occupations vary between CW = 2.9 and
CW = 3.7.
When assessing teacher competence, trainees clearly distinguish between the
(weaker) practical competence “Our teachers have a good overview of
organisational reality”—on the one hand—and their (stronger) specialist compe-
tence “Our teachers really know the subject well”—on the other.
Almost half of the E-B trainees (45.8%) attest that their teachers do not have an
overview of organisational reality. If, on the other hand, professional competence is
asked about, the values are more positive.

Teaching Quality (Fig. 9.20)

The profession-related mean values vary between CW = 2.9 (IM) and CW = 3.7
(MA).

Fig. 9.20 Teaching quality

Fig. 9.21 Assessment of teaching quality, electronic technicians classes

Here, too, a clear distinction can be drawn between commercial and technical
professions and service professions. The assessment of teaching quality is particu-
larly heterogeneous among electronics technicians and MAs (Figs. 6.9, 6.10, 9.21
and 9.22).
These values correlate with the pronounced heterogeneity of the competence
levels of the E-EC classes.

Learning Climate (Fig. 9.23)

Between the professions, the assessment of the learning climate in schools varies
between CW = 2.8 (E-B) and CW = 3.7 (IC).

Fig. 9.22 Assessment of teaching quality, MA classes

Fig. 9.23 Learning climate

If one differentiates between classes in the professions, then the learning climate
shows a clear tendency towards critical assessment among the higher-performing
trainees (classes) (Figs. 5.13, 5.14, 9.24 and 9.25).

9.4.4 Conclusion

The results of the survey of trainees from eight industrial, technical and service
professions on the quality of their training and how this affects competence devel-
opment can be summarised in four points:
1. The obvious assumption that trainees in professions in which they achieve a
higher level of competence also rate the quality of their training correspondingly
higher than trainees in professions with a (significantly) lower level of compe-
tence does not apply. The quality diagrams of all eight professions hardly differ in
their quantitative extent. This becomes particularly clear when comparing the two
professions E-EC with TS = 25.6 and MA with TS = 54.9. This is because
trainees choose professions that not only correspond to their inclinations but also
to their abilities. Subjectively, therefore, training in the various professions is
perceived as being roughly equivalent in terms of its level of demand. This
phenomenon must, therefore, be taken into account when comparing the quality
diagrams.

Fig. 9.24 Assessment of learning climate, electronic technicians classes

Fig. 9.25 Assessment of learning climate, carpenter classes
2. The quality of in-company vocational training is rated significantly higher by
trainees in all eight professions than the quality of (vocational) school education
and training. However, as the competence level of professions rises, the assess-
ments of in-company and school-based vocational education and training con-
verge. The quality diagrams for industrial and technical professions show that
in-company vocational training is valued much more highly than in the service pro-
fessions. Their assessment standards for training quality are shaped by training
experience: the acquisition of vocational decision-making competence in indus-
trial and technical training practice. The trainees directly experience the increase
in professional skills. This justifies their primary interest in the acquisition of the
action-guiding knowledge (know-that) on which their action competence is
based. The trainees in the service professions have a higher preference for the
professional expertise based on this at the levels of know-how and know-why.
This also explains the different positive assessments of the school as a learning
venue. An evaluation criterion that is independent of the training professions is
the scope of the Vocational Training Act (BBiG), which is anchored in law in the
German version of dual vocational training. The BBiG regulates the design and
management of in-company vocational training. The trainees, therefore, experi-
ence the vocational school as an institution accompanying their training with
fewer rights and duties.
3. If vocational competence (development) is measured with the methods of com-
petence diagnostics (COMET), it can be seen that school as the learning venue
has the higher potential to impart professional competence also at the levels of
action-explaining and action-reflecting knowledge. This also contributes to a
higher degree of homogeneity in competence profiles. However, only a part of
the teaching staff succeeds in exploiting this potential of the school as a learning
venue. In this case, the expansion of their specialist knowledge through active
participation in COMET projects has a positive effect. As the importance of a
high level of knowledge for the professional competence of trainees is not always
directly apparent from the context of vocational action, it is necessary to reflect on
the connection between knowledge and skills in the vocational education pro-
cesses of the school as the learning venue. For open test items in accordance with
the COMET test procedure, test participants are, therefore, requested to provide
comprehensive and differentiated reasons for their item solutions. Only those
with a high level of competence and knowledge are in a position to assume
responsibility for their professional actions and to weigh up alternative solutions
in a well-founded manner.
4. The summarising results available for the overall sample show the strengths and
weaknesses of dual vocational training. The trainees rate the structural and
content-related weaknesses of the learning venue cooperation as very critical.
The federal states have (so far, and with the exception of Baden-Württemberg) not
established a final examination for the vocational school, the only type of
secondary school without one. In Austria and Switzerland, the final vocational school exam-
ination is considered part of the final examination of professional competence.
This regulation significantly enhances vocational school learning among trainees
and challenges teachers to design and organise the review of vocational compe-
tence development as an essential element of vocational school learning.

9.5 The Training Potential of Vocational Schools

9.5.1 Introduction

The final examinations for trainees serve to check whether vocational training has
been successful. The yardstick is the employability to be imparted. It is assumed that
learning at the two learning venues—the training company and the vocational
school—contributes to the training result (§ 38 BBiG). As no school-based final
examination is planned for vocational education and training in Germany to assess
the success of vocational school-based learning, the question to what extent and in
relation to which skills vocational competence development can be traced back to
school-based learning remains unanswered to this day. Even if a final school
examination is passed, questions remain unanswered concerning the specific contri-
bution of the learning venues to the development of the trainees’ competences.
As the quality of vocational education and training is characterised by the fact that
vocational competence is essentially based on reflected work experience and the
resulting knowledge of the work process (Boreham et al., 2002), it is a particular
challenge for competence diagnostics to examine the significance of learning venues
for competence development. The widespread formula used to explain the “secret”
of dual vocational training to outsiders: “The vocational school imparts the theory
and the company the practical skills” hardly contributes to clarifying the question of
the specific learning potentials of the learning venues—and how these can be used
effectively. Based on the data collected in the COMET projects (test and context
data), the thesis can be substantiated that the competence development of trainees is
essentially shaped by learning at vocational school and that, in contrast, the learning
potential of school as the learning venue is assessed by the trainees as lower than that
of the training companies.

9.5.2 Methodical Approach

The COMET test procedure is based on three components, which facilitate the
objective, reliable and valid recording and analysis of the competence development
of those to be qualified for a particular profession. (1) The first is the measured
competence development of the trainee/student and their identification with the
profession or the training company, as well as the professional and organisational
commitment based thereon. (2) Secondly, the personal data allow the test results to
be differentiated according to, for example, the influence of previous schooling,
migration background and other personal characteristics. (3) The third resource is
the data and results of the context survey used to record the attitudes and
assessments of the test persons with regard to their training situation. This
investigation is based on these data.
374 9 The Contribution of COMET Competence Diagnostics to Teaching and Learning. . .

Of the total of 14 scale descriptions for the context studies, three refer to school-
based learning (Table 7.18): the school learning climate (school climate), teacher
evaluation and teaching quality.
As school-based learning also influences the quality of learning venue coopera-
tion, the two scales for learning venue cooperation are included (Table 7.19).

Teachers: A Determinant of Competence Development Underestimated by Students

Learning at school usually takes place in classes. The learning situations in the
classes, according to the largely concurring findings of educational research, are
primarily determined by the teachers. They are the decisive factor for the compe-
tence development of their pupils (cf. Hattie & Yates, 2015, Chap. 10). Students are
aware of this. They are, therefore, occasionally classified as "experts" with good
diagnostic competence (Guldimann & Zutavern, 1992). It is now good practice to
interview pupils about their learning situation and, above all, about their
teachers within the framework of quality assurance procedures and in teaching and
learning research. There is a special feature for vocational schools, as the dual
organisation and design of vocational learning requires trainees/students to weigh
up the significance of the two learning venues for their training. A differentiated
questionnaire, which has been continuously evaluated and optimised since 2008 as
part of the COMET project, is used to ask test participants about their training
situation. Results are now available for more than 15 professions. The test partici-
pants’ assessment of the quality of the learning venues and of the competence and
commitment of the teachers and trainers can, therefore, be compared with the results
of the competence surveys. In addition, a survey on the development of identity and
commitment is taken into account, in which approximately 4000 trainees were
involved (cf. Rauner et al. 2015b).

9.6 Teachers and Trainers Discover Their Competences: A "Eureka" Effect and Its Consequences

In the COMET projects, the time interval between the start of the project and the first
main test is on average between 6 and 9 months. The test items are developed during
this period (Fig. 9.26).

9.6.1 The Development of Test Items

The pre-test comprises four steps.



Fig. 9.26 Phases of familiarisation with the COMET competence and measurement model:
Informing—Rater training/Rating—Analysis of pre-test results and feedback

Informing and Preparing (Conceptualising) the Project

The project groups—usually between 8 and 12 teachers/trainers/subject
researchers—are prepared for the development of test items in a half-day training
session.

Development of Test Items (Drafts), Formulation of Solution Aids

Teachers play a central role in the test item development process. Designing test
items requires a brief introduction to the COMET competence and measurement
model as well as an examination of the criteria for developing test items.
After only four trial ratings (usually within 1 day), a high degree of agreement is
achieved in the evaluation of the task solutions on the basis of all (!) evaluation
criteria. A value of Finn = 0.7 (Sect. 5.6.3) is already a good value. The quality
criterion of interrater reliability is, therefore, fulfilled.
The first rater training in a very extensive international COMET project (elec-
tronics engineer: Hesse—China) was intended to provide information about the
quality of the rater training. Not only was the development of the rater competence
of the Chinese rater team to be examined, but also whether the Chinese and German
rater teams achieve comparable values. In contrast to the previous national projects,
the rater training was not limited to 1 day, but extended to 3 days. To the surprise of
all concerned, the interrater reliability already reached unexpectedly high values in
the second example (Fig. 9.27).
The fact that the raters of the Chinese and South African electronics project
already reached or exceeded the very high value of Finn = 0.84 in the second trial
rating means that all raters almost completely agreed in their evaluation of the task
solution for all evaluation criteria—and that at the end of the training all three groups
of raters achieved high to very high Finn values.

Pb-Code   Assignment               Finn (unadjusted)
H0282     Signalling system        .41
H0225     Signalling system        .54
H0176     Drying room              .80
H0234     Drying room              .75
H0265     Skylight control unit    .84
H0102     Skylight control unit    .82
H0336     Silica treatment plant   .86
H0047     Silica treatment plant   .79

Fig. 9.27 Results of the interrater reliability calculation (China): Finn (unadjusted) values for eight
task solutions, rated in four sessions (Day 1; Day 2, morning; Day 2, afternoon; Day 3, morning)

The first trial rating is typical for international comparison projects: As a rule, the
Finn values are still far apart. It is, therefore, all the more surprising that at the end of
the training—as in these three cases—a Finn score > 0.7 was achieved by all
national rating groups.
As a rule, all rater training participants used the solution space of the test tasks
only for the first two trial ratings. They then internalised the profession-specific
interpretation of the evaluation of an item. This result also explains why the COMET
test procedure manages with a total of only three very similar evaluation scales for all
professional fields.
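The Finn coefficient used throughout relates the raters' observed score variance to the variance expected under purely random rating. The following sketch illustrates this basic computation; the scale size and the exact adjustment behind the "unadjusted" values reported here are assumptions for illustration, not taken from the text:

```python
import numpy as np

def finn_coefficient(ratings, n_categories):
    """Finn's (1970) agreement coefficient: 1 - observed/chance variance.

    ratings: (n_raters, n_items) array of integer scores on a rating
    scale with n_categories levels; values near 1 mean close agreement.
    """
    ratings = np.asarray(ratings, dtype=float)
    # Variance expected if every rater chose uniformly at random
    chance_var = (n_categories ** 2 - 1) / 12.0
    # Between-rater sample variance per item, averaged over all items
    observed_var = ratings.var(axis=0, ddof=1).mean()
    return 1.0 - observed_var / chance_var
```

With identical ratings across all raters the coefficient is exactly 1.0; the text treats values of 0.7 and above as good agreement.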

The rater team of the Chinese project provided a big surprise. Nobody in
the German-Chinese project consortium had expected that 30 Chinese teachers
from vocational schools, technical colleges and higher technical schools in the
Beijing region would assess the task solutions of German trainees selected for
the rater training at a very high level of agreement at the second trial rating
(including the rating results of the German project). This result was the first
proof that the COMET method of competence diagnostics for vocational
training can be used to carry out international comparative examinations
without any problems.
The results of two repeat trainings (Beijing, Hesse)—after 1 year—showed
that the competence once achieved by the raters—the new technical
understanding—is maintained (COMET Vol. III 2011, 107).

The sustainable acquisition of a holistic technical understanding and problem-solving
competence as a basis for the safe and objective evaluation of the most varied solution
variants of the test participants for open and complex test or examination tasks does
not require lengthy further training. In a 1-day training
session, it is possible to convey this ability (Sect. 4.6.1). This can only be explained
by a “Eureka” effect, which is triggered by a spontaneous insight on the part of the
participants in the rater training, which does not require any lengthy justification:
Professional tasks must always be completely solved (“What else?”). If even one of
the solution criteria is not observed, this may entail incalculable risks for the
environment, the company or the employees themselves.
If outsiders are confronted with this method and the values of interrater reliability
it achieves, this usually triggers great astonishment. “I would not have thought it
possible,” said a vocational training expert at an IHK conference at which the results
of an international COMET project were presented.

If raters apply all 40 items in the evaluation of a certain profession-specific
task solution, then it becomes apparent that they are often still far apart from
each other in the initial trial rating. No later than at the fourth trial rating, they
then reach a high to very high degree of agreement (interrater reliability):
Finn = 0.75–0.85. In the course of the rating training, and at this high level of
agreement, the raters learn to interpret the rating items in a profession-specific
manner.

The COMET test procedure has, therefore, reached the stage where it can be handled
professionally. Now at the latest, the subject researchers (teachers/trainers)
actively involved in the project are in a position to apply the COMET concept as a
didactic model in their teaching. Thomas Scholz: “The discursive process among the
teaching staff that accompanies the project is just as complex and multi-layered as
the effect of COMET on teaching. Mutually influencing conversations occur at
different levels and in the associated social relationships. Meta-communication is
created between all participants. COMET has initiated a pedagogical-didactic dis-
cussion with us from the very beginning. However, it took another two years until
we understood the COMET concept in all its depth and were able to use it didacti-
cally” (Scholz, 2013, 28).

9.6.2 The Changed Understanding of the Subject Shapes the Didactic Actions of Teachers

If the standards established for comparative investigations in competence diagnostics
are used as a basis, the results of the pre-tests can only be compared with those of
the main test to a very limited extent.

9.6.3 Context Analyses: The Subjective View of Learners on the Importance of Learning Venues

The subjective importance that the trainees attach to the vocational school and its
teachers for their competence development becomes particularly clear in the context
analyses in the BBiG professions, which refer to learning venue cooperation.

The Weighting of Learning Venues

When weighting the importance of school and company as learning venues for
learning a profession, trainees trained in BBiG professions have a clear preference
for the company as a learning venue. This applies in particular to industrial and
technical training professions. In the COMET NRW project (cf. Piening, Frenzel,
Heinemann, & Rauner, 2014), 71.3% of industrial mechanic trainees and 65% of
electronics technicians for industrial engineering “fully agree” and “partly agree”
with the statement: “I learn much more at work than at vocational school”.
Trainees rate the importance of school-based learning as consistently low. A clear
majority of trainees in industrial and technical professions negate the statement
“Vocational school teaching helps me to solve the tasks and problems of
in-company work”. The assessment of the statement “The vocational school lessons
and my everyday work in the company have nothing to do with each other” is
similar. The fit between theoretical and practical learning content thus appears to
be limited.
It is noticeable that trainees differentiate between the relatively highly rated
technical and methodological competences of their teachers (Fig. 5.18) on the one
hand and their knowledge of company reality on the other. The latter is considered to
be rather low (Fig. 5.19). If one compares the assessment of the learning situation at
the vocational school with that of the companies, one can see that in-company
vocational training is valued much more highly. The fact that trainees can learn a
lot from their trainers is undisputed among those surveyed, irrespective of their
profession. Therefore, they also come to the conclusion that they learn much more at
work than at school (Figs. 9.28 and 9.29).
A clear picture emerges when summarising these assessments of school learning
and teachers by the trainees, who rate the significance of the vocational school and its
teachers for their competence development as rather low. They believe that they
learn significantly more for their profession in the company than in the vocational
school.
If one compares these training quality assessments of vocational school as the
learning venue by the trainees with the results of the competence survey, it can be
seen that the learning situation in the vocational school classes is the decisive
determinant of the development of the trainees’ competence. Almost every second
trainee in Class 5 (Fig. 8.11) to become an electronics technician for industrial
engineering achieves the highest level of competence. In Class 21, on the other
hand, none of the trainees reaches this level of competence. According to the state of
COMET research, the very pronounced spread of competence development between
classes can be attributed to the teacher factor (cf. Rauner & Piening, 2014).6

Fig. 9.28 "Our teachers really know the subject well" (ibid., 115)

Fig. 9.29 "Our teachers have a good overview of organisational reality" (ibid., 115)
Trainees in professions with a high level of competence (IC, FLSC, MA and J)
rate the quality of teaching significantly higher—with mean values between
CW = 3.6 and CW = 3.9—than trainees in industrial and technical occupations
(E-B and IM) with their low competence levels of 27.2 (IM) and 28 (E-B). The latter
rate the quality of teaching as below average with CW = 2.5.
The assessment of teaching quality correlates with the assessment of teacher
competence. The E-B trainees rate the practice-related competence of their teachers
as below average (CW = 2.6), while the IC trainees rate their teachers as above
average (CW = 4.0).
In the following, a further scale on vocational school learning—the vocational
school learning environment—is used to examine the thesis of a contradiction
between the empirically proven strong influence of vocational school learning on
competence development on the one hand and the low rating that the trainees give
the school as a learning venue and its teachers on the other.

Vocational School Learning Environment

The vocational school, as a partner of the training companies in the dual system of
vocational training, is involved in imparting professional competence and in prepar-
ing for the final examinations regulated by the BBiG, in which teachers take part as
examiners. However, in the German dual vocational training system, the results of
school-based learning are not recorded in the final examinations. Therefore, the
vocational school is experienced by the trainees as a learning venue of lesser
importance—as a “junior partner”. This also has an effect on the learning climate.
The evaluation of the statement, “I feel comfortable at school” gives a first
indication of the different perception of school learning.
While electronics technicians specialising in energy and building services
engineering and automotive mechatronics technicians largely feel comfortable at
vocational school (55.5% and 54.8%, respectively), this applies to only 28.4% (!) of
industrial mechanics.
The reason given by the motivated trainees is the lack of cooperation in the
learning environment.
classmates have little consideration for other pupils (51.3%). This assessment is not
shared to the same extent by the other two occupational groups. The vocational
school has a compensatory function for trainees in craft trades. They also perceive
and value it as a learning venue that compensates for the weaknesses of their
in-company vocational training.

6 Also refer to the results of relevant learning research (e.g. that of Hattie & Yates, 2015).

Fig. 9.30 “What we do in class, I usually find interesting.”

The extent to which teaching plays a role in this is described in more detail below
(Fig. 9.30).
For the industrial mechanics, the factor “teaching disrupted by classmates” turns
out to be an influential quality aspect.
While the risk pupils (trainees who did not exceed the level of nominal competence
in the test) and the pupils with a low and very low level of competence do not
perceive "teaching disrupted by classmates" as a problem, the high-performing
pupils perceive these disruptions caused by classmates as a serious problem.
There are two possible explanations for this paradox of the high learning
potential of VET schools and their low assessment by trainees.
1. When learning within the work process, the trainees experience their competence
development directly, especially in the industrial and technical professions. The
development of their professional skills, which they experience within the work
process, is the yardstick for their assessment of the training quality of the learning
venues. How the acquisition of vocational work process knowledge in the
processes of school learning contributes to the development of professional skills
is not immediately apparent to many trainees. They, therefore, agree with the
statement that although their teachers are professionally competent, they are less
convinced of their knowledge of the realities of work. Only learners with a higher
level of competence are aware that they (can) acquire the action-explaining and
reflecting knowledge of the work process characterising employability, especially
at school.

2. The company as a learning venue carries significantly greater weight in the
minds of the trainees, as the training contract is concluded with the training
company, which also remunerates the training. While the presence in the training
company is also regulated by labour law and a violation of this can have
far-reaching consequences—up to and including the termination of the training
relationship—no comparable regulations apply to “attendance” at the vocational
school. Opinions differ when assessing the importance of vocational school
learning for the acquisition of employability. This applies above all if the
vocational schools are not equally involved in the final examinations.

Nursing Training at Technical Colleges

This example shows that the management of dual vocational education and training
"from a single source" and the equivalence of learning venues lead to a much more
positive attitude among students towards learning in vocational schools: “Satisfac-
tion with training at the surveyed schools for health and nursing [...] is very high
overall. 79 % of respondents are more or less satisfied with their training” (Fischer,
2013, 219).
The teachers at the nursing schools are also rated positively. 71% confirm that
their teachers have a good overview of professional reality and 83% consider them to
be more or less technically competent and up to date (ibid., 222).
It is, therefore, no surprise that learning venue cooperation in dual nursing
training is rated significantly more positively than in vocational training regulated
under the BBiG. “[...] 70 % of the students are therefore of the impression that the
teachers of the technical schools cooperate with the practice instructors and nursing
services in the hospital more or less or completely—this statement does not apply to
only 3 % of the respondents” (ibid., 237). The students of Swiss nursing training rate
the learning venue cooperation “even more positively than the trainees [of the
German vocational schools]” (ibid., 238) (Fig. 9.31).
Renate Fischer concludes that the interplay between technical school and practical
training, which students at technical colleges assessed as positive, and the good
cooperation between teachers and practical instructors (e.g. also in joint projects)
have a "highly beneficial effect on the development of professional identity
and commitment” (ibid., 272).

9.6.4 Conclusion

The competence surveys carried out within the framework of COMET projects in
dual vocational training programmes show very clearly that school as a learning
venue and teachers are decisive determinants of professional competence develop-
ment. This applies above all to achieving the highest level of competence, as
provided for in the learning field concept: "the ability to help shape the world of
work in a socially, ecologically and economically responsible manner" (KMK, 1996, 10).

Fig. 9.31 Learning venue cooperation, comparison of trainees and students on the statement: "My
practical workplaces and the school coordinate their training". (ibid., 239)
By contrast, the assessment of school-based learning by trainees in dual voca-
tional education and training (Germany) shows on average that they rate the quality
of learning at school lower than in-company learning. The predominantly positive
assessment of learning in the training company and the underestimation of school-
based learning as a decisive factor for competence development can be attributed to
the fact that the trainees perceive and experience their increase in professional action
competence directly in their professional actions in training practice, while they
experience their competences acquired in the school-based learning process less
directly but rather mediated through their action competence in practical
professional work.
It should also not be underestimated that trainees conclude their training contract
with the training company in which the rights and obligations of training are
regulated as legally binding and that they receive their training remuneration—the
“reward” for their training activities—from the companies.
The educational potential of vocational schools is recognised above all by high-
performing trainees, by trainees in professions with a high average level of compe-
tence and by trainees who experience the school as a learning venue to which they
attribute an important compensatory function in their training.
Despite the weaknesses of the dual organisation of vocational education and
training, which trainees and students see primarily as being caused by the inferior
quality of the school as the learning venue, the results of the competence surveys
show that the school as learning venue has a high learning potential. This can be seen
above all in the polarisation of the levels of competence achieved by the classes
taking part in the tests. Despite comparable profession-specific educational pre-
requisites of trainees and students, part of the classes (of a profession) regularly
reach a (very) high level of competence and another part a (significantly) lower level
of competence. This means that teachers exploit the educational potential of voca-
tional schools to a very different degree.
The quality of learning venue cooperation is of overriding importance for
exploiting the training potential of schools as a learning venue. Using the example
of nursing training in Germany and Switzerland, it has been demonstrated that
the management of dual vocational training “from a single source”—and therefore
the equal participation of vocational schools in dual vocational training—enhances
the quality of their training. This is reflected in the appreciation of vocational
(technical) schools by trainees/students.

Comparability of Test Groups

A statistical comparison between the test results of the pre-test participants and the
participants of the first main test is possible, however, if both test groups represent
the test population and if comparable test items are used for both tests. If the pre-test
participants are distributed among the training centres participating in the test, then a
comparison of the pre-test results with the results of the first main test can be used to
examine whether and to what extent competence development has taken place.

Example: Pilot Study (Industrial Clerks)

Eighty-two second- and third-year trainees from two vocational colleges (VC) took
part in the first main test of the COMET Project NRW pilot study for industrial clerks
(cf. Stegemann et al., 2015; Tiemeyer, 2015). Fifty-two trainees from the same VCs
took part in the pre-test. The results are, therefore, not representative for the test
population of the federal state. The comparability of the pre-test and main test
participants is, however, given, as the number of participants in both tests hardly
differs. Both test groups are representative of the industrial clerk trainees at the two
vocational training centres (Fig. 9.32).
The result impressively shows that the competence level of the trainees has
increased significantly in a period of about 6 months between pre-test and the first
main test. The increase in the competence level of the test group is reflected above all
in a significant increase in the proportion of test participants who have reached the
highest competence level (Shaping Competence): from 21.8% in the pre-test to
69.5% in the first main test.
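Whether such a shift in proportions is statistically significant can be checked with a standard two-proportion z-test. A sketch under the assumption (not stated explicitly in the text) that the reported group sizes of 52 pre-test and 82 main-test participants apply to this comparison:

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference of two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)           # pooled success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF (normal approximation)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 21.8% of 52 pre-test vs. 69.5% of 82 main-test participants
z, p = two_proportion_z(0.218, 52, 0.695, 82)  # z is roughly 5.4
```

Under these assumptions the increase is far beyond conventional significance thresholds, consistent with the text's description of a significant rise.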
Fig. 9.32 Distribution of competence in pre-test and main test for industrial clerks (INK-A)

A similar effect can be seen in the COMET NRW Carpenter project. The high
Finn coefficient reached in the rater training indicates that all raters mastered the
COMET competence and measurement model following rater training. A comparison
of the pre-test and main test groups is also possible here,
as both test groups from two VET centres were involved in this pilot project.
77% of the test participants reach one of the two upper competence levels during
the first main test. In the pre-test, this was only 51.8%. The decline in the number of
risk students from 30% (pre-test) to 17% in the first main test is particularly marked.

The participation of the teachers in the pre-test—especially in the rater training
and in the rating—obviously enabled them to implement their extended
problem-solving patterns and technical understanding in their didactic actions.

Example: COMET Project Nursing Training, Switzerland

The example of the training of nursing staff at higher technical colleges in Switzer-
land also shows a significant increase in the competence level of students at technical
colleges in the period between the pre-test and the first main test. The proportion of
students who reach the third (highest) competence level has increased significantly,
while the proportion of the risk group has decreased significantly (Fig. 9.33).
These three examples represent a development that has been demonstrated in
almost all COMET projects.

Fig. 9.33 Distribution of competence levels, COMET project nursing training, Switzerland: Pre-test
2012 and first main test 2013 (n = 115)

9.6.5 Conclusion

The hypothesis that the active participation of vocational trainers in the development
of test items and their evaluation and optimisation within the framework of a pre-
test—including rater training—has a positive effect on their competence develop-
ment was confirmed.
This form of further training takes place as an implicit learning process, as the
feedback workshops show, in which the project groups, when interpreting the test
results, did not recognise the changed didactic actions of the teachers as a (decisive)
cause for the increase in competence of their students. The fact that vocational
trainers (also) implicitly transfer their specialist knowledge to their pupils/students
was proven in an extensive large-scale study in which 80 teachers/lecturers took part
in the student test (cf. Zhou, Rauner, & Zhao, 2015; Rauner, Piening, & Zhou, 2015
[A + B-Forschungsbericht Nr. 18/2014]).

Thomas Scholz sums up the experiences of the project group, which they
gathered and reflected on during the implementation of the COMET Industrial
Mechanic (Hesse) project, as follows: “With the experiences from the pre- and
the two main tests as well as the development of test tasks, the working groups
approached the design of learning tasks with a holistic solution approach. A
new dimension of task development opened up, tasks that highlighted
COMET's influence on teaching. The discussion about methodology
and didactics with regard to COMET tasks in the classroom became the focus
of the working groups. The group of industrial mechanics decided to introduce
this new form of learning: the ability to solve tasks according to the COMET
competence model. The introduction of this new learning form, as suggested
by the learning field concept, had an impact on the test results. The more
advanced the new teaching practice is, the better the test results will be”
(Scholz, 2013, 25).

The didactic actions of the teaching staff are characterised by the tension between
their specialist knowledge, which is shaped by their university studies and develops
in their specialist studies on the one hand, and the knowledge of work processes
incorporated into their professional activities on the other (Bergmann, 2006; Fischer
& Rauner, 2002). With the acquisition of the COMET Competence Model, the
professional knowledge of action (the work process knowledge) moves into the
centre of didactic action and the scientific knowledge becomes rather a background
knowledge which retains its significance for the reflection of complex work and
learning situations. The theories and research traditions on which the COMET test
procedure is based (once again) prove their fundamental significance in competence
diagnostics:
• Research into the knowledge of work processes (cf. Boreham, Samurçay, &
Fischer, 2002)
• The theory of multiple competence and the associated guiding principle of
holistic problem solving (cf. Connell, Sheridan, & Gardner, 2003; Rauner,
2004b; Freund, 2011)
• The novice-expert paradigm and the associated insight that one is always a
beginner when learning any profession and that the path to becoming an expert
follows the rule that one grows with one’s tasks (cf. Dreyfus, 1987; Fischer,
Girmes-Stein, Kordes, & Peukert, 1995)
• The theories of “situated learning” (cf. Lave & Wenger, 1991)
• The concept of practical knowledge (cf. Holzkamp, 1985; Rauner, 2004b)
• The theory of “developmental tasks” (cf. Gruschka, 1985; Havighurst, 1972) and
the related concept of paradigmatic work situations (cf. Benner, 1994)
• The theory of “Cognitive Apprenticeship” (cf. Collins, Brown, & Newman,
1989)
• The “Epistemology of Practice” (cf. Schoen, 1983)
Teachers/trainers who actively participate in the COMET projects as test item
developers and as raters already have the ability to assess the professional compe-
tence of trainees and students at a high level of interrater reliability after 1 day of
rater training.

Teachers and trainers change their understanding of the subject and their
didactic actions in the sense of the COMET competence model by participat-
ing in the development of test and learning tasks, in rater training and in the
rating of task solutions as well as by reflecting on and interpreting the test
results with their pupils, in their subject groups and with scientific support.
This change in thinking and acting does not take place as laborious additional
training, but rather incidentally, as a eureka effect and to one's own surprise:
“Oh of course, it’s as clear as day” or “I have the feeling that I’ve been in
charge for years”. The new or expanded understanding of the subject is
reflected in the development of learners’ competences. Above all, their com-
petence profiles are an expression of the new quality of training. They chal-
lenge teachers and trainers and make it easier for them to reflect on and change
the strengths and weaknesses of their own didactic actions. Within a team, this
creates a very effective form of learning from one another.
Chapter 10
Measuring Professional Competence
of Teachers of Professional Disciplines
(TPD)

10.1 Theoretical Framework

The Conference of the Ministers of Education and Cultural Affairs of the Federal
States of Germany (KMK) published standards for teacher training (report of the
working group) in 2004. As an introduction, Terhart explains: “An [...] assessment
of the impact and effectiveness of teacher training based on competences and
standards is [...] the prerequisite for being able to introduce justified improvements
if necessary” (KMK, 2004a, 3). His indication that the professional competence of
teachers ultimately depends on the quality of their teaching is also confirmed by the
results of the project “Competence Diagnostics in Vocational Education and Train-
ing” (COMET). In addition to the teacher factor, the previous schooling of trainees
or students of technical colleges and the in-company learning environment in dual
vocational training have proven to be further determinants of professional compe-
tence development (COMET Vol. III, Chap. 8).
When measuring the professional competence of vocational school teachers, a
distinction must be made between two aspects. It has proven useful to encourage
teachers to take part in their students’ tests. This has been tested both in PISA studies
and in the COMET project. Naturally, this is not enough to measure teacher
competence, which requires the development of a competence and measurement
model. The linchpin is the definition and operationalisation of the requirements
dimension as well as the justification of competence levels (refer to KMK, 2004b, 3).
An extensive quantitative study of occupational competence development
revealed that there are extraordinarily large—unanticipated—differences in competence between the 40 test classes (Hesse/Bremen) of trainees (electronics technicians) and students at technical colleges, 31 of which were from Hesse. Figure 9.6
shows this for the third competence level (“Holistic Shaping Competence”).
The heterogeneity of competence development within the test groups (classes)
turned out to be just as unexpectedly large (Fig. 10.1).

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 389
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_10

Fig. 10.1 Percentage of test participants (Hesse) who reach the level of “Holistic Shaping Competence” per class: E-B = Electronics technician for industrial engineering, E-EG = Electronics technician for energy and building technology, F-TZ = Technical college students, part-time

Within a class, there is often a difference of 2 years of training between high-performing and low-performing students.
At the aggregation level of the two electronics occupations, electronics technician
for industrial engineering (E-B) and electronics technician specialising in energy and
building technology (E-EG), previous schooling proves to be a decisive factor in
explaining the higher competence level of E-B (cf. also Baumert et al., 2001,
Chap. 8).
The analyses of the context data in relation to the performance differences
between the classes of the same training occupation (e.g. E-B) are of didactic
interest. In addition to the in-company learning environment for trainees and the
competence of the trainers, the skills and behaviour of the teachers are the decisive
factors for the professional competence development of the pupils/students (Rauner
et al., 2015b). If we compare the classes that are taught by the same teacher but
receive their training in different companies, however, they are relatively close to
one another in terms of their average level of competence—even if the training
companies differ considerably from one another in terms of the quality of the
training they provide and the training environment in which they operate.
There are extensive and diverse findings from educational research on the central
importance of teachers in all forms, courses and systems of education for the
competence development of learners. John Hattie has summarised this state of
research (Hattie, 2003, 2011).
The extremely large differences in the competence characteristics of the test
groups and the phenomenon of transferring the teachers’ competence profiles to
their trainees/students, therefore, suggest that teacher competence should be

measured. If this succeeds, it would be a major step forward for the quality
development of vocational education and training.
The aim to gain deeper insights into the professional competence (development)
of vocational school teachers with the methods of Large-Scale Competence Diag-
nostics can be justified by the following reasons:
(1) The results of empirical vocational training research, according to which only
very limited success has been achieved to date in enabling vocational school
teachers to implement the introduction of the learning field concept agreed by
the KMK in 1996 into the development of framework curricula for vocational
school programmes (cf. Przygodda & Bauer, 2004, 75 f.; Lehberger, 2013,
Chap. 2)
(2) The high degree of heterogeneity that occurred in the vocational competence
surveys of trainees/students between the test groups of comparable courses of
study (see above)
(3) The large proportion of trainees and students who, at the end of their training, do
not have the ability to solve professional tasks in a professional and complete
manner

10.2 Fields of Action and Occupation for Vocational School Teachers

In justifying the KMK standards for teacher training, the educational sciences are
emphasised as the essential basis for the acquisition of teacher competences. These
are above all the educational and didactic segments of the studies and the compe-
tences based thereon (KMK, 2004a, 4). From the perspective of general education,
this restriction can possibly be justified as, particularly in the tradition of humanistic
pedagogy with the paradigm of exemplarity, the meaning of the contents of teaching
and learning was reduced to the function of a medium in the educational process
(cf. Weniger, 1957). In vocational education and training, on the other hand, training
content is of constituent importance. The job descriptions and training regulations
prescribed by the Vocational Training Act (BBiG) specify the knowledge, skills and
abilities to be mastered in an occupation. This applies in particular to the examina-
tion requirements, which form the basis for the examination of employability in the
individual occupations. Skills and knowledge which are necessary (!) for the exer-
cise of a profession are examined separately as the right to exercise a profession is
not infrequently acquired with a qualification. Germanischer Lloyd, for example,
verifies the mastery of various welding techniques by industrial mechanics appren-
tices (specialising in ship mechanics) in accordance with its own quality standards.
Whenever safety, environmental and health-related training contents are involved,
vocational school teachers and trainers are particularly challenged to communicate
the corresponding training contents and to check their safe mastery. It follows that a
“vocational school teacher” competence model must have a content dimension.

If one looks at the standards for initial and continuing training of teachers, then
the descriptions of the skills that teachers must acquire in their initial and continuing
training are found to be largely identical, albeit differently distinguished. It is
striking that, in the majority of training concepts, the dimension of specialist
knowledge is often ignored. Fritz Oser (1997), for example, proposes 88 “standards”
for teacher training. These are 12 higher-level training areas (“standard groups”),
which are broken down into 88 detailed training objectives (standards). On the other
hand, a remarkable aspect here is that the content dimension of competence is
missing in this compilation. In the training guidelines of the “National Board for
Professional Teaching Standards” (NBPTS) quoted by Oser, one of the “five ideals for
the collection and verification of teaching standards” under b) is “knowledge of the
content that is learnt...” (cited from Oser, 1997, 28). In another current compilation
of professionalisation standards for teacher training, the subject contents are even
given special emphasis. The “Professionalisation standards of the Pedagogical
University of Central Switzerland” read:
1. The teacher has specialist knowledge, understands the contents, structures and central
research methods of their subject areas and can create learning situations which make these
subject-specific aspects significant for the learners (professionalisation standards of the
Pedagogical University of Central Switzerland [2011]).

The analysis by Andreas Frey (2006) and Johannes König (2010) of methods and
instruments for the diagnosis of professional competences of teachers confirms that
competence diagnostics (teachers) is primarily aimed at recording interdisciplinary
pedagogical-didactic competences. Frey summarises his findings as follows: “The
list [of 47 methods and instruments] shows that the social, methodological and
personal competence classes are already well covered by instruments. However,
the specialist competence class, in particular the various specialist disciplines, is
insufficiently documented in the specialist literature. In this case there is a need for
scientific research and development” (Frey, 2006, 42)1.
The project of the International Association for the Evaluation of Educational
Achievement (IEA) to measure the competence of mathematics teachers (TEDS-N)
focused on the subject and didactic competence of teachers (cf. Blömeke & Suhl,
2011). However, the format of the standard-based test items and a supplementary
questionnaire limit the scope of this test procedure. The “professional competence”
of teachers can, therefore, only be recorded to a very limited extent.

1 cf. also König (2010).

10.2.1 Proposal for a Measurement Method by Oser, Curcio and Düggeli

Oser, Curcio and Düggeli have developed and psychometrically evaluated a
method for measuring competence in teacher training. Methodologically, the con-
cept is based on situation films (multi-perspective shots and film vignettes generated
from them) and an expert rating of the competence profiles of teachers (Oser, Curcio,
& Düggeli, 2007)2. Oser and his research group have good reasons for opting for the
methodological middle course between direct observation and self-evaluation pro-
cedures, as both procedures have not yet led to the desired results. The method of
direct observation is already ruled out for test-economic reasons. Even if it were
possible to develop a reliable rating procedure, this procedure would not cover a
decisive dimension of competence: the knowledge on which teacher behaviour is
based. Even if one assumes that the action-guiding knowledge can be deduced from the observable action, this method leaves open the extent to which teachers can reflect on their actions in a well-founded way or whether they act more intuitively and on the basis of practised “skills”. If one wants to capture teacher competence at the level of reflected professional knowledge and action, then the observers cannot avoid reflecting on the observed behaviour together with the observed teachers.
There are narrow limits to decoding the competences incorporated in observable
behaviour as a domain-specific cognitive disposition. This limitation also applies to
the “advocatory procedure” proposed by Oser and his team, which is also based on an observation procedure. In forms of teacher training based on video documents, joint reflection with those observed is, therefore, an essential element. The
observed teacher has the opportunity to explain why he or she behaved in this way
and not differently in specific situations (video feedback). Without the reflection of
visual documents with the actors, video-based observation methods for measuring
competence have only a limited reach.
The Oser approach for identifying competence profiles at a medium level of
abstraction is interesting because it avoids merely determining competence levels.
The profiles were developed or identified according to the Delphi method. An example is competence profile A 2.3 (A2, “forms of mediation”, is one of nine standard subgroups): “The teacher organises different types of group teaching...” (ibid., 16).
With the concept of competence profiles, it is possible to approach the quality of
teacher competence to be described and recorded. The project shows the empirical
effort involved in developing this method for measuring vocational school teacher
competence.

2 The project was carried out with 793 teachers from vocational schools. However, it is a project that is not limited to vocational training.

10.2.2 Competence Profiles

The approach of using competence profiles to record vocational school teacher competence is interesting insofar as it clearly goes beyond the one-dimensional
scaling of competence in the form of scores. Competence profiles can be used to
represent the quality of competence (ibid., 17 ff.). The representation of competence
profiles requires a competence model, primarily a scientific and normative model of
the requirement dimension. The difficulty with this method lies in determining the
number of profiles. Oser et al. choose a medium degree of abstraction to limit the
variety of profiles to 45. There are, therefore, pragmatic reasons for the level of
differentiation. As each teacher has their “own” competence profile, which may still
vary from subject to subject, this approach requires an examination of the question of
a taxonomy or other forms of systematising competence profiles. One way out of the
difficulty of condensing the competence profiles of teachers to a certain number of
profiles is to identify the competence dimensions (sub-competences) scientifically
and normatively, which make it possible to map arbitrary competence profiles. With
this approach, Howard Gardner succeeded in establishing the concept of multiple
intelligence as a concept of competence research (Connell, Sheridan, & Gardner,
2003). Abilities can then be conceptualised as functionally integrated intelligence
profiles. Through the development of specific intelligences, there is room for
competence development and potential abilities (Fig. 10.2).

10.2.3 Validity of the Oser Test Procedure

Here, Oser et al. rightly indicate a very sensitive point of competence diagnostics.
The high validity of measurement procedures or test items can only be confirmed if
one can prove how teacher competence affects the development of pupils’ compe-
tences. “However, whether or not the quality characteristics of a standard are
actually recorded with the present diagnostic instrument must be checked with the
aid of cross-validation” (ibid., 19). The planned procedure of an expert rating
exhausts the possibilities offered by this procedure as a whole. Ultimately, evidence
must be provided as to whether pupil performance is due to the competence of their
teachers.

10.2.4 The Action Fields for TPD

In teacher training, it is necessary to provide seminars on the manifold conditions and decision-making fields of the didactic actions of teachers—for example, on the teaching methods. This also applies to the acquisition of expertise. Although it is possible to verify whether a teacher/student has methodological and professional competence, measuring teacher competence requires test tasks or test arrangements to be developed on the basis of a competence model. The content dimension of the competence model identifies “teaching” as a field of action. In order to determine the competence characteristics a teacher applies in their teaching and the characteristic competence profile they have in this field of action, a measurement model is required that covers all relevant aspects of this field of action. This means that the quality of group work must be recorded as one among many other and interacting aspects in the “teaching” field of action. Cooperation in a learning group only gains its didactic significance in the context of the teaching project.

Fig. 10.2 Example of an individual intelligence profile as well as two different intelligence profiles and the different spaces for competence development they define (Connell et al., 2003, 138 and 140 f.)
The KMK standards for teacher training identify four training and task areas for
teachers:
• School as an organisation
• Planning, implementation and evaluation of lessons
• Assessment of student performance
• Cooperation of colleagues in the context of teaching and school development
(KMK, 2004a)

For each of these four training and task areas, a distinction is made between the
knowledge to be acquired in the course of study and the skills to be acquired or
mastered in teacher training and teacher work.
If the training and task areas (1) and (4) are combined into one field of action—
“participation in school development” (see KMK, 2004b, 3; item 5)—and if it is
assumed, in line with the concept of complete action, that teaching also includes the
evaluation and review of learning processes and the assessment of student performance (2) and (3),3 then two fields of action remain.
For vocational school teachers, the following four task fields can be justified.

Planning, Implementing and Evaluating Vocational Learning Processes

The central task of every teacher at vocational schools is the design of vocational
training processes and their individual evaluation (cf. KMK, 2004b, 3).
The COMET competence model is of particular importance in this context. Coop-
eration with other teachers (e.g. the subject group) and with out-of-school cooper-
ation partners (e.g. trainers) is, therefore, the rule.

Development of Educational Programmes

Occupational profiles, framework curricula and training regulations are increasingly designed to be open to development, so that accelerating technical change does not
lead to a constant backlog of regulatory tools in need of updates. The implementation
of open training plans and the application of broadly based core occupations
(e.g. mechatronics technician, polymechanic (Switzerland), computer scientist
(Switzerland), media designer), taking into account the local and regional fields of
application of the relevant training companies, means that the development of
training programmes in dual vocational training is one of the core tasks of vocational
school teachers. The consideration of the qualification potential of the companies
involved in dual vocational training requires a high degree of vocational compe-
tence—in addition to vocational pedagogical and didactic skills.

Planning, Developing and Designing the Learning Environment

Study labs and workshops are of particular didactic relevance for the design of
vocational training processes. Their quality is a decisive factor in the implementation
of “action-oriented” forms of learning. The study labs and their equipment are,
therefore, often a “trademark” of specialist departments or vocational schools.
How study labs and workshops can be designed under the conditions of technical change and changing qualification requirements so that they have an experimental quality that also allows them to deal prospectively with the vocational world of work is a particular challenge for vocational school teachers in this action field. In addition, the didactic actions of the teachers also depend on the media equipment and the quality of network access.

3 Teachers carry out their assessment and advisory duties in the classroom [...] (ibid., 3., No. 3).

Participation in School Development

With the shift of operational tasks of school development to the level of vocational
training institutions, the participation of vocational school teachers in quality devel-
opment and assurance is one of their original tasks. The extraordinarily large
diversity of professions, vocational training programmes and the regionally specific
embedding of vocational training in economic structures calls for a change from
vocational schools to regional competence centres (BLK, 2002). It is foreseeable that
the emphasis will increasingly shift to continuing vocational education and training.
The international development towards “Further Educational Colleges” and “Com-
munity Colleges” (USA) is already further advanced here. In this context, the
traditional concepts of school development are losing importance. The transforma-
tion of vocational schools into competence centres requires the development of new
forms of organisational development and their institutionalisation alongside univer-
sities and general upper secondary levels.
When developing test items for a project “Measuring the professional compe-
tence of vocational school teachers”, the four action fields should be represented by
at least one complex test item each.
In the teacher training discussion, reference is made to the different dimensions or
sub-competences of professional teachers. This concerns the sub-competences of
teaching, educating, counselling, evaluating and innovating, as originally established by the German Education Council (1970) and reformulated by the KMK
Teacher Training Commission (1999). If these dimensions of teacher competence
are “taught” in the form of modules as self-contained skills during the phase of
familiarisation with teaching activity in seminars, then there is a risk that these
sub-competences will be misunderstood as mutually isolated characteristics of
teacher action. For the standards of teacher training, this means that competence in
counselling, teaching, education, etc. can only be demonstrated in the context of
domain-specific design and evaluation of the (training) processes, something to
which the KMK expressly refers in its standards. The educational task at school is
closely linked to teaching. Similarly, assessment and counselling are not isolated
fields of action, but tasks that are integrated into teaching (KMK, 2004b, 3).
In teacher training, structured according to disciplines and modules, disciplinary
knowledge can be imparted and tested in exams. On the other hand, the professional
competence of teachers only becomes apparent in the domain-specific concrete fields
of action (see above).
A special feature of vocational education and training is its overriding guiding objective: “Empowerment to help shape the world of work in a socially, ecologically and economically responsible manner” (KMK, 1991, 196). The central idea of
design-oriented vocational training has far-reaching consequences for the
professionalisation of vocational school teachers and the realisation of learning
environments in the institutions involved in vocational training: vocational schools,
training companies and inter-company training centres.
For vocational education and training, this means that its contents cannot be
obtained by means of the didactic transformation of scientific contents. Professional
work process knowledge has its own quality. When implementing an electrical
lighting system in a residential or office space, a butcher’s shop or the vegetable
department of a food discount store, the selection and arrangement of lighting
fixtures in terms of brightness and colour temperature is extremely varied, taking
into account the respective standards for workplaces, sales rooms, etc., not least also
taking into account aesthetic criteria as well as ease of operation and repair. The
decision for a low or normal voltage solution is also a question of weighing
competing criteria. If the classes were propaedeutically geared to the basics of
electrical engineering, the focus would be on switching logic and the functionality
of the lighting fixtures. The content “electric lighting” would become one of the
applied natural sciences. The real world of work with its professional require-
ments—the actual contents of vocational education and training—as well as the
central idea of co-designing the world of work would then be excluded. In the world
of professional work, professionals are always faced with the task of exploiting the
respective scope for solutions and design—“with social and ecological responsibil-
ity” (KMK, 1999, 3, 8).
The implications of the learning field concept based on this central idea are
obvious for the fields of action of vocational school teachers. The quality of the
learning environments for vocational education and training must be measured by
whether they are designed according to the concept of the holistic solution of
vocational tasks. Donald Schoen, in his epistemological study “The Reflective Practitioner”, which corresponds to the category of practical intelligence, has demonstrated the fundamental importance of practical competence and professional artistry as an independent competence not guided by theoretical (declarative) knowledge. At the same time, this leads to a critical evaluation of academic (disciplinary)
knowledge as a cognitive prerequisite for competent action. Schoen summarises his
research results in the insight:
I have become convinced that universities are not devoted to the production and distribution
of fundamental knowledge in general. They are institutions committed, for the most part, to a
particular epistemology, a view of knowledge that fosters selective inattention to practical
competence and professional artistry (Schoen, 1983, p. VII).

In this context, he cites from a study in medical expertise: “85 % of the problems a
doctor sees in his office are not in the book”. Schoen sees the deeper cause for the
inability of the education system to impart knowledge that establishes vocational
competences in disciplinary, subject-systematic knowledge.
The systematic knowledge base of a profession is thought to have four essential properties. It is specialized, firmly bounded, scientific and standardized. This last point is particularly important, because it bears on the paradigmatic relationship which holds, according to Technical Rationality, between a profession’s knowledge base and its practice (ibid., p. 23).

Fig. 10.3 On the relationship between the objectives and theories of vocational education and training, the initial and continuing training of TPD and the design, evaluation and measurement of their competences

He refers to an extensive curriculum analysis by Edgar Schein, who criticises the knowledge concept incorporated in school curricula:
Usually the professional curriculum starts with a common science core followed by the
applied science elements. The attitudinal and skill components are usually labelled ‘prac-
ticum’ or ‘clinical work’ and may be provided simultaneously with the applied science
components or they may occur even later in the professional education, depending upon the
availability of clients or the ease of simulating the realities that the professional will have to
face (Schein, 1973, p. 44, cited in Schoen, 1983, p. 27).

10.3 The “TPD” (Vocational School Teacher) Competence Model

The central task of teachers at vocational schools is to empower pupils and students
to help shape the world of work and society in a socially and ecologically responsible manner (KMK, 1991/1999). The guiding ideas and objectives of vocational education and training as well as teacher training and teacher activity form the explanatory
framework for a “TPD” competence model (Figs. 10.3 and 10.4).
The development of a “TPD” competence and measurement model is needed to
mediate between the guiding principles, goals and theories of vocational education
and training and teacher training and to develop test tasks and describe their solution
spaces. The didactic relevance of the competence model can be seen above all from
the fact that it is also suitable as a guide—among others—for TPD training.
The “TPD” competence model comprises the usual dimensions of competence modelling:

Fig. 10.4 The “TPD” competence model

• The requirements dimension (competence development/competence levels)
• The contextual dimension
• The behavioural dimension (cf. KMK, 2005; Schecker & Parchmann, 2006, 55; COMET Vol. III, 51 ff.)

10.3.1 The Requirements Dimension

The requirements dimension reflects the interrelated levels of professional competence. They are defined on the basis of skills resulting from the processing and
solution of occupational tasks (see Bybee, 1997; COMET Vol. III, Fig. 3.3, Sect.
3.3). The objective and subjective demands placed on the work of teachers refer
directly to their professional and educational skills. The nine criteria of the compe-
tence level model with its three competence levels and the lower level of nominal
competence serve as a framework for interpretation.

Functional Competence

Functional competence is available to (prospective) vocational school teachers who, within the framework of their university education, have acquired basic vocational
pedagogical-didactical, technical vocational, vocational/technical-didactical and
technical-methodological knowledge. Functional competence is based above all on
the relevant pedagogical-didactical and professional action-leading knowledge,
without the test persons being in a position to specifically apply this knowledge to
a situation and to sufficiently justify and reflect on their pedagogical actions.

Procedural Competence

Vocational school teachers who have procedural competence are also in a position to apply their vocational knowledge in vocational training practice in a manner appropriate to the situation, to reflect on it and to further their education. A characteristic
feature of this level of competence is the ability to design and organise vocational
training processes under the real conditions of school or training reality. The teachers
have a vocational educational work concept. They are part of the professional group
practice.

Shaping Competence

Building on the previous levels, the highest level of competence represents the
ability to holistically (completely) solve vocational pedagogical tasks. This includes
the criteria of socially compatible teaching as well as the ability to socio-culturally
embed vocational training processes. The level of holistic competence includes the
ability, with a certain amount of creativity, to weigh up the various demands placed
on the holistic task solution in a situation-specific way: for example, between the
requirements of the curriculum, the resources available and the most pronounced
individual support for learners possible. The teacher is familiar with the relevant
professional and pedagogical-didactic innovations in their field.

Nominal Competence

Nominal competence lies outside the scope of professional competence proper; it is included here because the development of professional competence is introduced into the model design as a criterion for the success of teacher training. (Prospective) vocational school teachers
who only reach the level of nominal competence are assigned to the risk group. As
defined by PISA, risk students are not up to the demands of successful vocational
training and have to reckon with considerable difficulties in the transition to working
life (Baumert et al., 2001, 117). This definition can be applied mutatis mutandis to
teacher training and activities. There is, therefore, an urgent need for further training
for teachers whose cognitive domain-specific performance disposition (competence)
is below the first competence level (functional competence) and who nevertheless
work as teachers. The training scope and content can be identified relatively pre-
cisely using the COMET test procedure.

10.3.2 The Contextual Dimension

The operationalisation of the contextual dimension is based on the four fields of action for vocational school teachers described above. While the professional fields
of action of trainees and professionals differ fundamentally in terms of content—
from profession to profession and above all from occupational field to occupational
field—a uniform content structure covering all professions and occupational fields
can be used as a basis for vocational educators (teachers/trainers) (see p. 6).
However, this has to be designed in a profession-specific manner when developing
test tasks and describing the corresponding solution spaces. The superordinate
structuring of the contents in the form of the four fields of action can be justified
by vocational pedagogy. The modelling of the content dimension at this medium
operationalisation level allows the development of a common competence and
measurement model for teachers of professional disciplines and, therefore, the comparison of teacher competence across occupational fields. At the same time, however, the
vocational knowledge of the professionals to be qualified is of great importance for
the design and organisation of vocational training processes.

10.3.3 The Behavioural Dimension

When justifying the behavioural dimension of the competence model, reference can
be made to the justification of the COMET competence model. The concept of
complete working and learning action applies to teacher action in particular. Delegating the review of training success to the intermediate and final examinations (or Part 1 and Part 2 of the examination) provided for by the Vocational Training Act impairs the professional design of the feedback structure and practice at the vocational school as a learning venue (cf. COMET Vol. 3, 222 et seq.). The
operationalisation of the behavioural dimension, therefore, includes the examination
of competence development during training.
The behavioural dimension of the competence and measurement model draws on the concept of "complete tasks" from occupational research (Ulich, 1994, 168):4

4 Ulich refers to Hellpach (1922), Tomaszewski (1981), Hacker (1986) and Volpert (1987).

• Independent setting of goals that can be embedded in higher-level goals


• Independent preparation for action in the sense of taking on planning functions
• Selection of means, including the necessary interactions for adequate goal
achievement
• Implementing functions with process feedback for any corrective action
• Control and result feedback and the possibility to check the results of one’s own
actions for conformity with the set goals (Ulich, 1994, 168).
It is noteworthy that Ulich emphasises the category of “complete tasks” and,
therefore, establishes a reference to work design as a central object of occupational
research. The programmatic significance that the concept of complete action (task
design) has acquired in vocational education has one of its roots here. Another lies in
the degree of medium operationalisation in the form of differentiating the complete
working and learning action into successive action steps. For the didactic actions of
teachers and trainers, this scheme offers a certain degree of certainty in action. In the meantime, this model of action structure has also found international acceptance in
connection with the introduction of the learning field concept into the development
of vocational curricula.
A further function is fulfilled by the behavioural dimension in the application of
the competence model as didactic guidance for structuring process-related solution
aids in the solution of vocational work and learning tasks (cf. Katzenmeyer et al.,
2009, 173 ff.).
While the requirements and content dimensions of the competence model are
“product-related” dimensions, the behavioural dimension represents the process
structure of working and learning actions.

10.4 The Measurement Model

10.4.1 Operationalisation of the Requirements Dimension


The operationalisation of the requirements dimension (Fig. 10.5) refers to the three fields of action: (1) instruction, (2) development of educational programmes, (3) design of the
learning environment, as these can be assigned to an overarching field of action of
the design and organisation of vocational training processes. In action fields (2) and
(3), the focus is on conceptual planning competences. If the learning environment—
for example, a study lab—is carefully planned, then its implementation no longer
poses any particular professional challenges. How the quality of lesson planning
correlates with that of teaching is one of the central research questions of competence
diagnostics in teaching. It is conceivable that a teacher can create excellent lesson
designs but does not have the competence to translate them into a successful lesson.
For the development of the measurement model, this means developing a rating procedure with which the conceptual planning competence of the teachers can be measured within the framework of large-scale projects (rating scale A). For the evaluation of teaching, it is necessary to develop a modified variant of the measurement model with which both teaching design and teaching itself can be evaluated—for example, within the framework of demonstration lessons (rating scale B).

Fig. 10.5 The professional competence of teachers with professional disciplines: levels, sub-competences (criteria) and dimensions

10.4.2 The Competence Dimensions

When modelling the requirement dimension, a distinction is made between nine sub-competences, which are assigned to three competence dimensions:

(1) Functional competence (Df) with the sub-competences
• Vocational competence (Kbf)
• Vocational/technical didactics (Kfd)
• Technical methodology (Kfm)
(2) Procedural competence (Dp)
• Sustainability (Kn)
• Efficiency (Ke)
• Teaching/training organisation (Kuo)
(3) (Holistic) shaping competence (Dgk)
• Social compatibility (Ksv)
• Social-cultural embedment (Ksk)
• Creativity (Kk)

10.4.3 The Competence Levels

The competence levels: functional competence (KF), procedural competence (KP) and shaping competence (KG) are interrelated levels of teacher competence.
The first competence level KF: The functional competence comprises the com-
petence components KB, KFD and KFM.
The second competence level KP: Procedural competence comprises the compe-
tence components of functional competence KB, KFD, KFM as well as
sub-competences KN, KE and KUO.
The third competence level (KG): In addition to the sub-competences of the
competence levels KF and KP, shaping competence comprises: KB, KFD, KFM, KN, KE, KUO and the sub-competences KSV, KSK and KK.
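The cumulative structure of the three levels can be sketched as nested sets of sub-competence codes. This is a minimal illustration, not part of the COMET instruments; the abbreviations follow the text above.

```python
# Minimal sketch (not part of the COMET instruments): the three
# competence levels as nested sets of sub-competence codes.
FUNCTIONAL = {"KB", "KFD", "KFM"}              # level KF
PROCEDURAL = FUNCTIONAL | {"KN", "KE", "KUO"}  # level KP
SHAPING = PROCEDURAL | {"KSV", "KSK", "KK"}    # level KG

LEVELS = {"KF": FUNCTIONAL, "KP": PROCEDURAL, "KG": SHAPING}

# Each level strictly contains the one below it:
assert FUNCTIONAL < PROCEDURAL < SHAPING
print(sorted(SHAPING))  # all nine sub-competences
```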

10.4.4 Operationalisation of Competence Components for Teachers of Professional Disciplines (TPD) (Rating Scale A)

The operationalisation of the requirement dimension is based on the psychometric evaluation of the COMET competence and measurement model, which has a similar
basic structure (Erdwien & Martens, 2009, 62 ff.; Rauner et al., 2011, 109 ff.).

10.4.5 Vocational Competence

The contextual reference point for the design of vocational learning processes comprises the characteristic vocational work processes/tasks and the work process knowledge incorporated therein. This knowledge has an objective component, given by the technical/scientific connections, as well as a pronounced subjective component, given by the action-leading, action-explaining and action-reflecting knowledge
(cf. Lehberger, 2013). A particular challenge for teachers at vocational schools is distinguishing between long-lived structural knowledge on the one hand and, on the other, short-lived "surface knowledge" at the forefront of technical-economic development. The
decision as to whether professional knowledge has to be acquired or whether it is a
question of the ability to acquire this knowledge per situation using the
corresponding media requires a high level of professional competence from teachers.
• Are the professional contexts presented correctly?
• Does the solution or design space correspond to professional reality?
• Is the objective and subjective knowledge of the work process taken into account?
• Is the functionality of the task solution adequately challenged?
• Is a distinction made between appropriation and research when dealing with
professional knowledge?

10.4.6 Vocational/Technical Didactics

The vocational/technical didactics competence of vocational school teachers and


trainers determines the ability to transform vocational content into teaching and
learning content. With the introduction of the learning field concept, subject-
systematic knowledge was replaced by work process knowledge as a reference
point for the content design of teaching and learning processes. The didactic
reference point for the design of teaching and learning processes is the “significant
vocational work situation”. As the technical and scientific training of teachers is
usually oriented towards the relevant subjects, the implementation of the learning
field concept poses a particular challenge.
• Is the professional validity of the educational objectives and contents (in relation
to the professional profile) adequately taken into account?
• Has it been possible to select the task for the learners’ respective level of
development?
• Is the didactic concept of the holistic task solution taken into account in the
planning and evaluation concept?
• Is the didactic concept of the complete learning/working action taken into
account?
• Is the curricular validity of the teaching project adequately taken into account?

10.4.7 Technical Methodology (Forms of Teaching and Learning)

The concept of professional action competence and the professional profiles defined
in the training regulations require a targeted approach in the selection and application
of forms of learning and mediation. For example, the acquisition of safety-related
skills requires different forms of teaching and learning than the more general aspects
of vocational education and training. Equally important is the consideration of the
connection between knowledge and ability, with emphasis on the importance of "action-oriented learning".
• Is the methodological concept appropriate for the teaching project (learning tasks,
projects, etc.)?
• Are alternative methods of learning and teaching weighed against each other?
• Does the teaching method take into account the heterogeneity of the class?
• Are forms of activity-based learning adequately considered?
• Are pupils given the opportunity to present and evaluate their learning outcomes?

10.4.8 Sustainability

Teaching and training always aim at the sustainable acquisition of skills. This is most
likely achieved through a high degree of self-organised learning and when learning
is accompanied by strong feedback (cf. Hattie & Yates, 2015, 61 ff.). In project-
based learning, the success of the project, the presentation of the results and the
experience that a teaching project has “achieved” something are decisive for the
acquisition of knowledge that is memorised as well as basic skills that form the basis
for the ability to act in a variety of situations.
• Is superficial learning of professional work/knowledge (Know That) avoided?
• Is the aspect of “prospectivity” (future possibilities of professional skilled work)
taken into account?
• Is competence regarded as cognitive disposition (cognitive potential)—and not
only as a qualification requirement?
• Are forms of valid and reliable quality assurance used?
• Is the aspect of developing professional identity taken into account?

10.4.9 Efficiency

The optimal use of resources is a particular challenge for the design and organisation
of vocational education and training. This concerns the equipment and use of the
study labs. The form of cooperation between teachers and between teachers and
trainers includes the possibility to increase not only the quality of teaching and
training, but also the efficiency in the planning, implementation and evaluation of
teaching and training.
• Is the time and effort required for the preparation of the teaching project
appropriate?
• Are the opportunities for teamwork used?
• Are the individual learning outcomes (competence developments) the learners
have achieved verified?

• Are media and study labs used specifically?


• Are there good reasons for resorting to tried and tested learning tasks/projects?

10.4.10 Teaching and Training Organisation

In vocational education and training, the teaching and training organisation places
particularly high demands on teachers and trainers. This concerns above all the
interaction between theoretical and practical learning in dual vocational training and
the organisation and design of practicums in vocational and technical schools. The
educational contents must be coordinated with each other, and joint projects require
a high degree of coordination. With the introduction of learning fields in vocational
schools, the demands on cooperation between teachers have increased significantly.
When classes are formed, it must be decided whether company-specific or mixed
classes are to be established in dual vocational training.
• Are the premises and equipment resources of the school used appropriately?
• Are the opportunities for learning venue cooperation exploited?
• Are the opportunities for cooperation between teachers used?
• Is Internet access secured for teachers and learners/students?
• Is feedback on learning outcomes adequately established?

10.4.11 Social Compatibility

Social compatibility in teaching refers above all to the aspect of humane work
design, health protection and the social aspects of teacher activity that extend beyond
the professional work context (e.g. dealing with the most varied interests of school
management, education administration, parents, companies and trainees).
• To what extent does the didactic action of the teachers (planning, teaching,
follow-up of the lessons) correspond to the criteria of humane work design?
• Are aspects of health protection and safety at work (for teachers and learners)
taken into account?
• Is the aspect of creating a good learning climate considered?
• Is handling of disturbances and conflicts (organisation, school, pupils, col-
leagues) taken into account?
• Does the teaching team consider lesson planning and design as a “common
cause”?

10.4.12 Social-Cultural Embedment

Teaching is increasingly confronted with questions of the cultural and social context
of vocational training. On the one hand, this concerns the family situation of pupils
and trainees (e.g. single parents) and the economic situation (e.g. poverty) of
learners. The migration background of pupils and students (language, social
norms, religion, etc.) is a central aspect of teaching, especially in cities and
conurbations.
• Are the anthropogenic and socio-cultural preconditions of the participants in the
lessons taken into account in lesson planning?
• Are the circumstances of the social environment taken into account?
• To what extent is the currently expected role behaviour (learning guide, moder-
ator, consultant, organiser, role model) taken into account?
• To what extent is the potential for conflict arising from the learners’ socio-cultural
background taken into account?
• Is the economic situation of the learners taken into account?

10.4.13 Creativity

Creativity is a competence that is difficult to assign to a competence level in terms of education theory. To a certain extent, it is transverse to the other competence
components. Decisive for an assignment to the third competence level (holistic
competence) was the argument that, at the highest competence level, it is important
to situation-specifically weigh all solution-relevant criteria against each other in the
“search” for a good task solution. The special “plus” of a solution results not only
from the consideration of all criteria, but also from the ability to creatively design
individual solution dimensions and the balanced weighting of the criteria. This
challenge to professional competence is fully met only at the level of holistic
competence.
• Does the lesson plan contain original elements that go beyond the usual?
• Are different ideas for lesson planning creatively balanced against each other?
• To what extent are individual and learning group-related learning prerequisites
taken into account in planning decisions?
• Is the freedom of design offered by the task exploited in the teaching project, for
example, through the use of media?
• Is the planned lesson sensibly integrated into the longer-term teaching context
(learning situation)?
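Taken together, the nine criteria in Sects. 10.4.5 to 10.4.13, each operationalised by five rating questions, yield the 45-item rating scale A. How item ratings might be aggregated into criterion and dimension scores can be sketched as follows; the 0-3 item scale and the simple summation are illustrative assumptions, not the official COMET scoring rules.

```python
# Hedged sketch: aggregating the 45 rating items (9 criteria x 5 items)
# into criterion and dimension scores. Item scale 0..3 and summation
# are assumptions for illustration only.

CRITERIA = {
    "KF": ["KB", "KFD", "KFM"],   # functional competence
    "KP": ["KN", "KE", "KUO"],    # procedural competence
    "KG": ["KSV", "KSK", "KK"],   # shaping competence
}

def criterion_score(item_ratings):
    """Five item ratings (0..3) -> criterion score 0..15."""
    assert len(item_ratings) == 5
    return sum(item_ratings)

def dimension_scores(ratings_by_criterion):
    """dict: criterion code -> five item ratings; returns score per dimension."""
    return {
        dim: sum(criterion_score(ratings_by_criterion[c]) for c in crits)
        for dim, crits in CRITERIA.items()
    }

# Example: a flat profile with every item rated 2
ratings = {c: [2, 2, 2, 2, 2] for crits in CRITERIA.values() for c in crits}
print(dimension_scores(ratings))  # each dimension: 3 criteria x 10 points
```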

10.5 Test Tasks

In the competence diagnostics of teacher competence, a distinction is made between measuring cognitive dispositions in the sense of conceptual-planning competence and
professional action competence. This includes didactic action (e.g. teaching). For the
evaluation of teaching, that is, the implementation of lesson planning, an appropri-
ately adapted rating scale (B) is used, which contains the three elements: (1) lesson
preparation in the form of a lesson plan, (2) instruction (the demonstration lesson)
and (3) an evaluation interview with the examiners (→ 10.7). Rating scale A (→ 10.4 and Appendix B) is used for large-scale competence diagnostics projects:
the recording of conceptual-planning competence for the action fields “teaching”,
“developing educational programmes” and “designing learning environments”.
The competence level and profile of the test participants in the perception of tasks
in lesson planning and implementation as well as in the design of learning environ-
ments can be seen from the competence characteristics (Table 10.1).

10.5.1 Test Tasks for Measuring Cognitive Dispositions (Conceptual-Planning Competence)

Since this test procedure does not measure teacher behaviour but only teacher competence as a cognitive disposition, the test items for the action field Teaching refer to
the conceptualisation of teaching as it takes place in practice in the form of class
preparation. It remains to be seen whether teachers with a high level of competence
in lesson planning also have a high level of competence in teaching. The clarification
of this connection requires a special empirical investigation. This restriction does not
apply to the action fields “Development of educational programmes” and “Planning,
development and design of the learning environment”, as in these tasks the planning
activities determine the quality of the result.
Each test item comprises:

Table 10.1 Guidelines for the development of test items


The test tasks:
• Capture a realistic task from one of the four vocational fields of action
• Define the scope for design given for the respective profession and, therefore, enable a
multitude of different solution variants of different depth and width
• Are open to design, that is, there is not only one right or wrong solution, but requirement-
related variants
• Demand the components of the requirement, action and content dimension identified in the
competence model for their comprehensive solution
• Require a professional-technical and vocational-educational-didactically well-founded
approach for their solution. The task is solved on the basis of a concept-related plan. It comprises a
comprehensive justification

1. A description of the situation.
A realistic or real situation is described for one of the four vocational task
fields, so that the (future) teacher can get an exact picture of the task to be solved.
The description of the situation specifies the starting points for the task solution.
A question-based breakdown of the description of the situation or the task is not
planned, as this would already have outlined the solution.
2. The task consists of the invitation to develop a solution proposal that is appro-
priate to the situation and can be implemented—if necessary, also equivalent
solution proposals—and to justify these in detail.
For this purpose, solution aids are specified and provided. It can be assumed
that (future) teachers will carry out their planning work with the help of com-
puters and the Internet. When processing tasks, the corresponding sources must
be indicated according to the usual rules.

10.5.2 Time Scope of the Test Tasks (for Large-Scale Projects)

The maximum processing time for test items is 180 minutes. The test items are
designed in such a way that this time is sufficient to process the items without time
pressure.

10.6 State of Research

For a standardised competence survey of teachers of professional disciplines (TPD), one difficulty is that their training is organised very differently across the globe
(Grollmann, 2005; Grollmann & Rauner, 2007). For a TPD competence model and
the design of the test items, this means that the fields of action justified above must
be taken as a basis and not a national curriculum or national standards for the training
of vocational school teachers. The international connectivity of a competence model
can best be investigated in the context of internationally comparable research pro-
jects (competence diagnostics). On the basis of the test items, it is most likely
possible to show whether and to what extent the actors involved in the initial,
continuing and further training of teachers for vocational training courses agree in
their explicit and implicit concepts and theories of professionalisation.

10.6.1 A Pilot Study with Student Teachers

Within the framework of a pilot project with a group of student teachers from the
professional fields of electrical and metal engineering, two test tasks were used, one

each for the action field of teaching and the action field of designing learning
environments.
Sufficient time was available to process a test item. The prescribed time frame of
3 h proved to be appropriate. The task solutions were double-rated by two experienced subject advisors with extensive rating experience. The very
high degree of agreement of the ratings allowed a precise recording of the compe-
tence development of the test participants.
The most important result of this pilot study was certainly the willingness of the
student teachers to take part in this test. The development of the test tasks for
teachers of different professional disciplines was relatively simple in that the voca-
tional fields of action for teachers of all professional disciplines are the same (see
above). The didactic actions of the teachers of different professional disciplines,
therefore, do not differ in the structure of the actions, but in their content. All
professions, for example, are concerned with the design of study labs (see the third
field of action) as “learning venues” for experimental and action-oriented learning.
Only the occupation-specific teaching/learning contents differ. This can easily be verified using the example of the study lab to be set up. Herein lies a fundamental difference to the competence diagnostics of professionals, whose fields of action differ from profession to profession.
Comparable overriding pedagogical-didactic criteria apply to the study lab to be
set up (Fig. 10.6) for metal or electrical professions. This also applies to the action
field of teaching. This is of great advantage for the execution of tests and above all
for the interdisciplinary comparability of the results.

Test Results

The extraordinarily large differences in the development of competences, both in their level of competence and in the competence profiles, were not expected either by
the discipline leaders or by the student teachers. In the opinion of all participants, the
pilot study has shown that the diagnostic value of this survey method is very high
and cannot be achieved with any other evaluation method.
The competence profiles of the test participants are shown in Fig. 10.7.

10.6.2 The Research Programme: Competence Development of Teachers and Lecturers in Vocational Education and Training in China

In preparation for an extensive project of competence diagnostics of TPD in the professional fields of metal technology and automotive service/technology in China
under the direction of Zhao Zhiqun (Peking Normal University), both the complex
test tasks (developed and tested in a pilot project in Germany) and the measurement model were first evaluated didactically. This was based on an analysis of the vocational tasks and fields of action of vocational school teachers using the method of expert workshops for professionals (Kleiner et al., 2002; Spöttl, 2006). The fields of action on which the competence model is based were confirmed, but their content was further differentiated (Zhao & Zhuang, 2012). With reference to this result, the complex test tasks were modified without changing their core (Fig. 10.8).

Fig. 10.6 Example of a COMET test task for vocational school teachers

Fig. 10.7 Competence profile of student teachers

Fig. 10.8 Objects of research and their logical order

Pretest (China)

The pretest serves to


(a) Verify the quality of the complex test tasks and obtain data for their revision
(b) Evaluate the rating scale with its 45 rating items didactically
Six “expert teachers” were trained to become raters in a one-day training session.
After four trial consultations, a high interrater reliability was achieved with Finn
(just) > 0.70.
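The interrater check reported here uses Finn's coefficient, which compares the observed variance of the raters' judgements with the variance expected under purely random ratings. A minimal sketch follows; the four-point item scale (coded 0..3) and the example ratings are assumptions for illustration.

```python
# Hedged sketch: Finn's (1970) interrater agreement coefficient, as used
# in COMET rater training (acceptance threshold: Finn > 0.70).
# Assumption: items are rated on a k-point scale, coded 0..k-1.

def finn_coefficient(ratings, k=4):
    """ratings: list of items, each a list of rater scores for that item.
    Returns 1 - (mean observed variance) / (expected chance variance)."""
    expected_var = (k ** 2 - 1) / 12.0  # variance of a uniform discrete scale
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    observed_var = sum(var(item) for item in ratings) / len(ratings)
    return 1 - observed_var / expected_var

# Example: six raters scoring three items (invented data)
ratings = [
    [2, 2, 3, 2, 2, 2],
    [1, 1, 1, 2, 1, 1],
    [3, 3, 3, 3, 2, 3],
]
print(round(finn_coefficient(ratings, k=4), 3))  # → 0.889
```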
The didactic evaluation of the rating scale showed that 40 of the 45 rating items could be retained unchanged, while the other 5 items needed editorial adjustment
(Zhao & Zhuang, 2012, 6). Overall, the complexity level (level of difficulty) of the
situation descriptions of the test tasks was slightly reduced.

Main Test

The test was attended by 321 teachers of metal technology and automotive service
(technology) from 35 vocational training centres—a representative selection for
China. The number of raters was increased to 13.
The test results proved to be very informative (Fig. 10.9).
By far the highest level of competence in China is held by lecturers at technical
colleges (Technicians Colleges). Nearly 85% of these teachers achieve the highest
level of competence. The competence level of teachers at vocational colleges is
significantly lower—but still high. 61% reach the second level of competence. By contrast, the competence level of teachers in the professional branch at high schools (vocational schools) is very low. 43% have no professional competence.

Fig. 10.9 (Total) Distribution of competences of TPD in metal and automotive service
For the first time, the competence profiles provide a very precise picture of the
competences of Chinese vocational school teachers in the various vocational training
programmes (Fig. 10.10).

Test Reliability

When verifying test reliability, values above 0.5 are considered acceptable and
values of 0.9 and higher are considered very high. The internal consistency achieved in the psychometric evaluation of the competence and measurement model is 0.983, and the split-half reliability is 0.974 (ibid., 17). High values were
also achieved in the verification of empirical validity (Fig. 10.11 and Table 10.2).
Referring to COMET teachers’ professional model, the assessment model and its opera-
tional definition on professional competence, we construct a basic factor model, i.e., a first-
order 9-factor model consisting of 9 indexes and 45 items (as shown in Fig. 9.10) (. . .) In
summary, these results show that the professional competence test exhibited high empirical
validity, discriminating teachers with exceptional skills from those with average skills
(ibid., 22).
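The two reliability figures quoted above, internal consistency and split-half reliability, can be computed as sketched below with plain Python; the data shown are invented for illustration, not the Chinese test data.

```python
# Hedged sketch: Cronbach's alpha (internal consistency) and odd/even
# split-half reliability with Spearman-Brown correction.
# rows = test persons, columns = rating items; data are illustrative.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(data):
    k = len(data[0])  # number of items
    item_vars = [variance([person[i] for person in data]) for i in range(k)]
    total_var = variance([sum(person) for person in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def split_half(data):
    odd = [sum(person[0::2]) for person in data]   # odd-numbered items
    even = [sum(person[1::2]) for person in data]  # even-numbered items
    n = len(data)
    mo, me = sum(odd) / n, sum(even) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(odd, even)) / n
    r = cov / (variance(odd) ** 0.5 * variance(even) ** 0.5)
    return 2 * r / (1 + r)  # Spearman-Brown correction

data = [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]  # perfectly consistent items
print(round(cronbach_alpha(data), 3), round(split_half(data), 3))  # → 1.0 1.0
```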

Fig. 10.10 Competence profiles of Chinese vocational school teachers, by training courses

Fig. 10.11 Basic data model for TPD professional competence. (Source: Own compilation)

Table 10.2 Independent t test on the competence dimensions of TPD

Item                               Mean value   Std. deviation   t value   df        Sig. (2-sided)
PF  variance assumed equal         14.7433      3.88015          3.880     233       0.000
    variance assumed unequal       16.6982      2.84448          4.341     188.392   0.000
PP  variance assumed equal         13.0890      3.97335          4.022     233       0.000
    variance assumed unequal       15.1869      3.06921          4.419     180.040   0.000
PG  variance assumed equal         10.3727      3.78394          3.594     233       0.000
    variance assumed unequal       12.1689      3.00513          3.911     175.612   0.000

Notes: PF for functional competence, PP for processual competence; PG for shaping competence
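The two rows per dimension in Table 10.2 correspond to the pooled (Student) t test, which assumes equal variances, and to Welch's t test, which does not and therefore yields fractional degrees of freedom. A self-contained sketch of both variants; the sample data are invented.

```python
# Hedged sketch of the two t-test variants behind Table 10.2.
import math

def t_test(a, b, equal_var=True):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    if equal_var:  # pooled (Student) t test
        sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
        t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
        df = na + nb - 2
    else:  # Welch's t test with Welch-Satterthwaite degrees of freedom
        se2 = va / na + vb / nb
        t = (ma - mb) / math.sqrt(se2)
        df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

a, b = [1, 2, 3, 4, 5], [2, 4, 6, 8, 10]  # invented scores of two groups
print(t_test(a, b, equal_var=True))   # integer df = 8
print(t_test(a, b, equal_var=False))  # fractional df, as in Table 10.2
```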

10.6.3 Investigating the Link Between Measured Teacher Competence and Quality of Teaching

With the participation of more than 100 teachers in student tests (electronics
technicians and automotive mechatronics technicians) within the framework of
Chinese COMET projects (cf. Zhuang & Li, 2015; Zhao, Rauner, & Zhou, 2015;
Zhao, 2015), a new quality in teacher training was achieved, according to the
assessment of the test participants and their school principals. The measured com-
petence development of the teachers in the form of competence profiles is of very
high diagnostic significance with regard to the technical understanding underlying
the teaching activity (Figs. 10.9 and 10.10). If teachers take part in student tests, in
rater training and in the rating of student solutions or if they use the COMET
competence and measurement model as a didactic instrument for the design and
evaluation of teaching, then they acquire the concept of the complete (holistic)
solution of professional tasks in a relatively short time. This has a formative effect
on the design and organisation of vocational training processes.
On the basis of this recognition that teachers/lecturers transfer their professional
understanding and problem-solving patterns to their students, it is now possible to
ascertain the professional competence and problem-solving patterns of teachers/
lecturers in VET from the competence profiles of their students.

10.7 Evaluation of Demonstration Lessons in the Second Phase of Training Teachers with Professional Discipline (TPD): A Test Model

Demonstration lessons are an essential element in the second phase of teacher training and especially in the context of the second state examination. This exam component includes:

10.7.1 The Lesson Plan

The candidate (TPD-C) prepares a teaching draft for each demonstration lesson
within the framework of their state examination in the vocational specialisation as
well as in a general subject or a further focal point in a vocational specialisation and
justifies the embedding of these lessons in the respective educational programme. In
dual education programmes, the aspect of learning venue cooperation should be
taken into account. In the examination regulations, the scope of these elaborations is
limited, since the aim is to achieve a realistic examination effort, which can be
extended to a justifiable extent under the framework conditions of an examination.
The lesson plans should be sent to the members of the examination board for
evaluation a few days before the demonstration lesson. According to the examina-
tion procedure outlined here, the examiners evaluate the lesson draft on the basis of
the evaluation sheet variant A (Appendix B). A double rating (two examiners) is
useful here. The examiners create a group rating on the basis of their individual ratings. This rating practice ultimately leads to high to very high levels of agreement (interrater reliability). The marked items on the rating scale are only used during class observation.

10.7.2 Class Observation

Two rating scales are used for the evaluation of teaching: the evaluation sheet variant
A used for the evaluation of the teaching design and the evaluation sheet variant B,
which consists of the rating scale A and modified rating items. Furthermore, the
impression of the observed lesson facilitates the correction of evaluations of the
teaching project on the basis of the lesson plan.
The main focus of class observation lies on the evaluation of the social-communicative competence of the TPD-C. The reflections of the Kollegium des Studienseminars für das Lehramt an Berufskollegs [collegium of the study seminar for the teaching profession at vocational colleges] in Hagen (NRW) on “characteristics of good teaching” have been incorporated into the extension of the competence model by an essential component: the social-communicative competence of the TPD. The social-communicative competence of future teachers is a central aspect of their professional competence. In the context of the second state examination, this dimension of professional action competence is observed and evaluated within the framework of the demonstration lessons. The observed teaching behaviour allows a well-founded evaluation of social-communicative competence only if the examiners have the opportunity to question the TPD-C on their behaviour and actions in class.
Therefore, an examination interview is necessary (and usual) after the demonstration lesson, in which the examinee first has the opportunity to reflect on their didactic actions and then to answer the examiners’ questions, which at their core ask: Why did you act like this in situation X and not differently? For the evaluation of the candidate’s social-communicative competence, this interview is the prerequisite for a valid and reliable evaluation of the teaching project (evaluation form variant C, Appendix B).
Social-communicative teacher competence comprises five criteria:
1. Building relationships conducive to learning
2. Designing communicative lessons
3. Moderating learning processes
4. Designing lessons to be reflective and giving feedback
5. Fulfilling role expectations and developing personality
Each of these five competence criteria is operationalised with five items.
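Aggregating the resulting 25 item ratings into five criterion scores can be sketched as follows; the 0–3 item scale and the plain averaging rule are assumptions for illustration, since the evaluation sheets in Appendix B define the authoritative procedure:

```python
def criterion_scores(item_ratings):
    """Average the five item ratings per criterion (assumed 0-3 scale)
    into one score per social-communicative competence criterion."""
    scores = {}
    for criterion, items in item_ratings.items():
        if len(items) != 5:
            raise ValueError(f"{criterion}: expected 5 item ratings, got {len(items)}")
        scores[criterion] = sum(items) / 5
    return scores

# Hypothetical ratings for two of the five criteria
example = criterion_scores({
    "building relationships conducive to learning": [3, 2, 3, 2, 3],
    "designing communicative lessons": [2, 2, 1, 2, 2],
})
```

The resulting criterion scores can then be plotted as a profile, analogous to the competence profiles shown for the COMET test dimensions.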

10.7.3 The Interview (Following the Demonstration Lesson)

The demonstration lesson is followed by an interview regarding that lesson. It comprises:
• A self-assessment of the candidate, who explains how they succeeded in implementing the lesson plan, whether and how they deviated from the plan if necessary, how they assess the learning climate/learning situation and whether they achieved the objectives they justified in the lesson plan.
• An examination interview, which mainly includes questions from the examiners.
They are based on the evaluation forms variant B and C. The aim is to gain a
deeper insight into the teacher’s actions in order to correct the assessments of
individual items on this basis, if necessary.

10.7.4 Final Evaluation of the Examination Performance

The examination board evaluates the examination performance on the basis of its
ratings (evaluation forms variants B and C) and agree on a group rating.
The “weight” with which the individual examination sections are included in an
overall assessment of the second state examination is laid down in the examination
regulations (Fig. 10.12).
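Such an aggregation can be sketched as a weighted mean. The section names and weights below are hypothetical placeholders; the binding weights are those fixed in the examination regulations:

```python
def overall_result(section_scores, weights):
    """Weighted mean of the examination sections.
    Both dicts must name the same sections; weights need not sum to 1."""
    if set(section_scores) != set(weights):
        raise ValueError("sections and weights must match")
    total_weight = sum(weights.values())
    return sum(section_scores[s] * weights[s] for s in section_scores) / total_weight

# Hypothetical example: the demonstration lesson weighted twice as heavily
result = overall_result(
    {"lesson plan": 2.0, "demonstration lesson": 1.0, "interview": 2.0},
    {"lesson plan": 1, "demonstration lesson": 2, "interview": 1},
)
```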

Fig. 10.12 COMET rating for the assessment of the performance of teacher candidates (TPD-C) in
the context of the second state examination

10.8 Development and Evaluation of the Model “Social-Communicative Competence of Teachers”

The heads of the study seminar for the teaching profession at vocational colleges in
Hagen (NRW) have named characteristics and items for successful teaching in a
vocational pedagogical discussion process with reference to the relevant vocational
pedagogical research as well as their extensive teaching and training experiences,
from which a model for the sub-competence “social-communicative competence of
teachers” was developed. The large number of heads of department involved and the
scope of this discussion process form the basis for the only possible methodological
validation of this model in the form of discursive validity (cf. Kelle, Kluge, & Prein,
1993, 49 ff.; Kleemann et al., 2009, 49).
In a second step, the reliability of the rating scale and the competence criteria was
examined on the basis of a two-stage rating training. The rating was based on two
video recordings of demonstration lessons. After the individual rating of the first
video recording, the rating groups agreed on a group rating. It was expected that the
values of interrater reliability achieved in the rating of the second video recording
would increase. The results of this rating procedure yielded high interrater-reliability
values for both groups. With this extension of the COMET model for recording
teacher competence (TPD), a set of instruments is available both for competence
diagnostics and for teacher training and further education, which has the potential to
increase the quality of the didactic actions of these teachers.
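Agreement indices of this kind can be illustrated with Finn's coefficient, which is frequently used in COMET contexts for rating designs in which raters agree closely: it relates the observed within-item rating variance to the variance expected if raters used the k-point scale at random. A minimal sketch; the variable names and example data are illustrative:

```python
def finn_coefficient(ratings, k):
    """Finn's r: 1 - (mean observed within-item variance) /
    (variance of a uniform distribution over a k-point scale).
    ratings: one list of rater scores per rated item; 1.0 = perfect agreement."""
    expected_var = (k ** 2 - 1) / 12  # variance of a discrete uniform k-point scale
    within = []
    for item in ratings:
        n = len(item)
        mean = sum(item) / n
        within.append(sum((x - mean) ** 2 for x in item) / (n - 1))
    return 1 - (sum(within) / len(within)) / expected_var

# Three raters, two items, 4-point scale (0-3): identical scores -> 1.0
perfect = finn_coefficient([[2, 2, 2], [1, 1, 1]], k=4)
```

Values near 1 indicate that the rating groups converged; values near or below 0 indicate that the raters spread over the scale as if by chance.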

10.9 Outlook

For a standardised competence survey of vocational school teachers, one difficulty lies in the fact that their training is organised very differently across the globe (Grollmann, 2005; Grollmann & Rauner, 2007). For a competence model for vocational school teachers and the design of the test items, this means that the justified fields of action (→ 10.2) must be taken as a basis and not a national curriculum or national standards for the training of vocational school teachers. The international connectivity of the COMET competence model for vocational school teachers was demonstrated with the project described above.

10.9.1 Psychometric Evaluation of the Competence Model

The psychometric evaluation of the competence model was a decisive step in the
research process “Competence Diagnostics for Vocational School Teachers”. The
concept of open complex test tasks places high demands on the test methodology.
The COMET project was able to show which psychometric evaluation methods are
suitable for this research (Martens & Rost, 2009, 96 ff.; Erdwien & Martens, 2009,
62 ff.; Haasler & Erdwien, 2009, 142 ff.).

10.9.2 Investigating the Link Between Measured Teacher Competence and Quality of Teaching

In its introductory standards for teacher training, the KMK rightly emphasises: “The
professional quality of teachers is determined by the quality of their teaching”
(KMK, 2004b, 3). This fundamental insight requires an exploration of this connection. Measuring teacher competence with the methods of large-scale competence diagnostics, and thereby reducing teachers’ abilities to a domain-specific cognitive disposition (e.g. lesson planning), only makes sense if the relationship between the measured teacher competence and the quality of teaching can be empirically proven. Only then is the measured level of competence an indicator of the vocational competence of teachers. An external criterion for checking the content validity of the test items in teaching is the competence development
of the pupils. After it has been empirically proven that the competence profiles of the
teachers correspond to a degree with those of their pupils and that the competence
profiles of the pupils can, therefore, be traced back to the problem-solving patterns of
their teachers, there is a high plausibility for the thesis: “Good teachers train
competent pupils” (Zhao, 2015, 443).
Chapter 11
The Didactic Quality of the Competence and Measurement Model

11.1 The Learning Field Concept Provides Vocational Education and Training with an Original, Educational-Theoretical Foundation

For decades, vocational education was torn between two basic guiding principles:
science orientation (pure education) versus qualification to suit the needs of the
labour market (utilitarianism). The central idea of pure education goes back to
Wilhelm von Humboldt. Heinrich Heine sums it up particularly frankly and briefly:
“Real education is not education for any purpose, but, like all striving for perfection,
finds its meaning within itself”. For the implementation of this guiding principle, the
orientation of pure education towards the sciences—towards pure scientific exper-
tise—appeared to be the adequate path to be pursued by all education. The German
Education Council has, therefore, elevated science orientation to a fundamental
didactic principle of all education. For vocational education and training, this
promised to cast off the stigma of utilitarianism, that is, education aimed at useful-
ness. This, however, posed a new problem for vocational education and training.
Attempts to derive vocational knowledge from (academic) scientific knowledge, to
use it to develop systematically structured educational plans and to establish voca-
tional competence led to a dead end. The success story of the science system can be
seen in the exponential multiplication of generalisable disciplinary knowledge,
based on a system of scientific disciplines with a high division of labour. Scientific
knowledge is regarded as pure, resulting in the relationship between genuine—
pure—education and the pure knowledge produced by the sciences. However, this
ignores the fundamental realisation that the historically grown world can only be
understood as a process of objectifying purposes and the underlying interests and
needs. The world in which we live and work, therefore, inevitably means interacting
and dealing with values and responsibility.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_11

In the 1980s, the insight described above was taken into account with the central idea of empowering those who are to be professionally educated to help shape the world of work in a socially and ecologically responsible manner. Therefore, it is not abstract scientific knowledge that forms the basis for the development of professional competence, but knowledge of the work process as the basis for competent and responsible professional action (Rauner, 1988, 32–51; Heidegger, Adolph, & Laske, 1997; KMK, 1991, 590–593; 1999).

The world in which we live and work, and in whose development we participate every day in all spheres of society, as consumers (through our purchasing decisions), as producers of utility values, as voters or members of social movements, constantly, consciously or subconsciously, on both a large and a small scale, is not a pure world. There are no cars, no buildings, no furnishings, no services that are pure or without a purpose.

Howard Gardner formulated the pedagogical response to the ideology of pure education as follows: “I want my children to understand the world, but not
simply because this world is fascinating, and the human mind is driven by
curiosity. I want their insights to enable them to change the world to improve
the lives of the people living in it” (Gardner, 2002).

When the Subcommittee on Vocational Education and Training (UABBi) of the KMK (1996) established the learning fields as the basis for design-oriented vocational education and training—a fundamental change of perspective in the development of framework curricula—and bindingly established them in 1999, the change in pedagogical fashions in vocational education and training was interrupted.
An original pedagogical central idea of vocational education and training was
translated into a curriculum development programme. The UABBi had taken up the
recommendation of the Enquête Commission of the German Bundestag “Future
Education Policy—Education 2000” on vocational education and training and
formulated as a new guiding principle to turn away from vocational education and
training aimed at adaptation and turn to vocational education and training aimed at
co-shaping the world of work (and society): Since then, each framework curriculum
has defined vocational education and training as “the empowerment to help shape the
world of work in a socially and ecologically responsible manner” (KMK, 1999). It
is no longer only a question of enabling prospective specialists to understand the
(working) world, but also of equipping young people with the ability to participate in
shaping the world—on a small and large scale. This change of perspective
implemented by the KMK had far-reaching consequences for educational planning.
The tradition of pure basic and specialist science-propaedeutic education lost its
legitimacy. The curricula derived from scientific subjects, such as the fundamental educational concepts based on social science content in personal services or the concepts of basic industrial and technical education oriented towards the natural sciences, came into conflict with vocational education and training oriented towards learning fields.

The learning field concept turns the historically grown (working) world as an
objectification of purposes and goals as well as the interests incorporated
therein, that is, as a world with values, into the object of vocational education
and training. It is a matter of understanding and exploiting the scope for
creativity in order to help shape a world of work that is increasingly dependent
on participation.

The modernisation of a heating system under consideration of the current or even expected environmental regulations, state-of-the-art heating technologies, affordable
costs, the highest possible operating comfort as well as trouble-free operation
requires shaping competence: the ability to solve professional tasks under consider-
ation of all relevant criteria. Learning fields take up “significant professional action
situations” as the bases for curriculum development (KMK, 1996). Professional
work situations are “significant” when they challenge professional competence
development. Herwig Blankertz (1983) and Andreas Gruschka (1985) have
implemented this theoretical concept of development, which goes back to
Havighurst (1972), in the (NRW) college school project, above all in the educational
programme for educators: Professional development tasks as the basis of a curric-
ulum structured according to development logic (Rauner, 1999).

Patricia Benner’s project from the “Nursing” faculty of the prominent Uni-
versity of California (Berkeley) is still regarded as groundbreaking in the
international vocational pedagogical discussion for training nurses in accor-
dance with the novice-expert paradigm. She describes the “significant” work
situations of nurses, which she and her team empirically identify as the basis
for curriculum development, as “paradigmatic work situations” (Benner,
1997). Successfully coping with these work situations triggers competence
development. Paradigmatic work situations have the quality of development
tasks. Successfully “passing” as well as reflecting on a paradigmatic work
situation teaches trainees to see their working world from a broader perspective
and to take a recognisable step in their competence development. This is the
basis for the KMK’s change of perspective from an objectivistic-systematic
learning tradition to a subject-related structuring of vocational development
and learning processes. The development of the ability to solve vocational
tasks—more precisely to solve them completely—becomes the yardstick
when logically structuring the contents of vocational training programmes.

The risk of failing with the ambitious goal of introducing the groundbreaking idea
of vocational education and training structured according to learning fields has not
been averted. Under the conditions of accelerated social change, the willingness and
ability to try out new things is seen as an indicator of innovative competence in
individuals and institutions. Following the rather cumbersome process of introducing learning fields to structure educational plans and processes in vocational education and training, the orientation towards the COMET competence model should open a new approach to the learning field concept. This is a challenging project, but it also provides a competence model that is suitable for mediating between educational goal and learning task.
The learning field concept is characterised by a number of key terms which
should be recalled in order to avoid conceptual misunderstandings in further
explanations.
1. The learning field concept is based on the orientation of vocational training
processes towards work situations whose potential for professional competence
development is assessed as “significant” by experts in the respective profession.
2. In principle, competence-promoting work situations and tasks are the linchpin for
the design and organisation of vocational learning, that is, the imparting of
vocational action and shaping competence. The KMK manual on the learning
field concept, therefore, refers to them as “situations that are significant for the
exercise of a profession”.
3. The description of work and learning tasks as effective forms of in-company and
school learning, therefore, requires both a description of the competence-
promoting (significant) work situations and the respective work assignments. It
is only through this linkage that work and learning tasks challenge targeted
vocational action and learning.
4. The distinction between action and learning fields points to the fundamental
difference between working and learning and to the fact that both—in vocational
education and training—are constitutive for each other. The didactic reference
point for the learning fields are the vocational action fields. At the same time,
learning fields—prospectively—point beyond vocational practice. While the
action fields are concerned with the professional execution of a company order,
the learning fields are concerned exclusively with learning. Within the learning
fields, it is, therefore, possible and, pursuant to the educational objective of
co-designing in social and ecological responsibility, also necessary for the
description of learning tasks to go beyond the limited operational framework of
the work situation in accordance with the formulated characteristics of a learning
task (refer to p. 505). In nursing or commercial professions, case situations or case
studies are often used which are characterised by a stronger link between learning
and action fields (Fig. 11.1).
The term “learning task” is not used in the learning field concept and, therefore, requires classification. The learning field concept has produced the blurred term “learning situations”, which take up “professional tasks and courses of action” and “didactically and methodically prepare them for implementation in teaching” (KMK 2011, 32). Learning situations—pursuant to the KMK manual—therefore address both situational (learning task) and process-related (teaching-learning processes) aspects of vocational action. The term “learning task” is intended to describe the situational aspect of learning situations that goes beyond the limited operational framework of the work situation.

Fig. 11.1 Key terms of the learning field concept
There is increasing evidence that vocational education and training is turning to
competence-based educational standards. This provides a set of instruments with
the advantage of being internationally established. The Anglo-Saxon tradition of
competence-based vocational learning and the methods of developing modularised
certification systems and assessment methods based thereon, such as the British
system of National Vocational Qualifications (NVQ), promise stronger ground
under the feet of those who are looking for tried and tested recipes. In contrast to
the seemingly diffuse learning field concept, which after almost two decades of its
introduction appears as a ruin of innovation, competence-based learning promises a
handy formula which, it seems, is also in line with the EU projects of the European
Qualifications Framework and the ESCO project (European Skills, Competences
and Occupations).
One problem with both EU initiatives is the programmatic formula that voca-
tional education and training is defined as a process of acquiring qualifications
“irrespective of place and time”. In this context, vocational curricula and developed
methods of vocational learning are regarded as input factors—and, therefore, as
yesterday’s methods. From this perspective, vocational training programmes appear
to be a considerable disruption potential that stands in the way of establishing a
profitable and flexible service sector (in line with a relevant GATS recommendation)
(Drexel, 2005).
It looks as if the educational policy and planning reception of this qualification
concept in Germany is meeting with considerable resistance and that dual vocational
training is being rediscovered internationally, above all as a means of combating
youth unemployment. At their meeting at the end of September 2011 in Paris, the
G-20 employment ministers emphasised the introduction of dual vocational training
systems in their catalogue of recommendations for action to combat youth unem-
ployment. Modern vocational training (Sennett, 1998) based on the concept of
European core occupations (Rauner, 2005), vocational training structured according
to learning fields and competence diagnostics based on vocational shaping compe-
tence (Rauner et al., 2011) are gaining in importance in this context—also interna-
tionally. There is, therefore, much to suggest that the learning field concept is still
proving to be a highly innovative reform project for vocational education and
training.
The working world for which vocational training prepares learners teaches us that a
heating or lighting specialist, a specialist in the retail trade or a specialist in education
is always faced with the challenge of balancing a variety of possible solutions and
procedures when solving a professional task. The amount of time available, the
variety of professionally possible solutions, their practical value and sustainability,
their environmental and social compatibility and not least their economic feasibility
are criteria that must be weighed against each other in every situation.

A high level of professional competence is, therefore, characterised by the ability to make astute use of the scope for solutions or design given in each
case. The guiding principle of shaping competence, which is anchored in every
framework curriculum for vocational education and training with the introduction of the learning field concept, represents the reality of the working world. True education enables us to answer this question: Why are the realities
of the working world (and society) like this and not like that? And: Is there
another way? True education enables us to help shape the (working) world,
which inevitably means facing up to the responsibility associated with it.

11.1.1 Professional Action Fields as a Reference Point for the Development of Learning Fields

Professional work tasks encompass a potential for the development of professional competence and identity if the (prospective) professionals are to learn to integrate
their work tasks into the company’s business processes. “Working and learning” are
the two sides of the same coin. Therefore, work tasks can also be described as
learning tasks. The work contents are then emphasised as a medium for acquiring
professional competence. A form of representation for learning fields breaks down
the characteristic professional work tasks and contexts according to the three aspects
of work-oriented content of work and learning:
• Subject of (specialist) work.
• Tools, methods and organisation of specialist work.
• Requirements for (specialist) work and technology (Fig. 11.2).
The transformation of a professional action field or a professional task into a
learning field (or a learning task) consists in formulating the three content dimen-
sions both from the perspective of the empirically identified professional tasks and
that of the educational objectives. A learning field, therefore, always includes work
process knowledge (see p. 63 ff.). Since learning fields and related learning tasks/
learning situations always aim at the complete and reflected solution of professional tasks, their basic structure corresponds to project-based forms of learning.

Fig. 11.2 Identification and determination of training and teaching content in terms of vocational
qualification requirements and educational objectives (Rauner, 2000)

11.2 Designing Vocational Education Processes in Vocational Schools

A widespread misunderstanding among vocational trainers: “Theoretical knowledge of the subject is the prerequisite for professional action” (Fig. 11.3).
Current teaching practice is, therefore, often oriented towards the didactic model:
• Theory is first taught by theory teachers.
• In a second step, theory is then applied on the basis of exercises and experiments
in the study lab.
• Finally, theoretical knowledge can also be applied in the work process.
What matters, therefore, is the clarification of the training paradox: professional beginners become experts by doing what they want to learn.
Learning is one of the basic skills that people possess from birth. We are most
likely to see this in young children when they develop quickly, learn to speak
quickly, learn to move, experience themselves, become increasingly skilful and
secure with the objects around them, such as using a spoon properly when eating
and learning to ride a bicycle as they grow up. They casually learn what to bear in
mind when playing with other children. Most of what a growing child learns, it does
not learn according to the scheme: theory and teaching first and then applying what it
has learnt. The theory of cycling does not enable the child to cycle. What applies to
learning how to swim, cycle and manually reworking mould sealing surfaces (for a
tool mechanic) is also generally significant for professional learning.

11.2.1 Professional Knowledge

The acquisition of theoretical knowledge is not a prerequisite for professional action; instead, vocational knowledge arises from professional action processes.
Each new work experience is evaluated in the light of previous work experience
and the result of this evaluation is added to the previous experience. If the difference
between previous and new work experience is too great, then subjectively no bridge
can be built to the new experience—nothing is learnt (in terms of expanding the fields of meaning of action-relevant concepts). New knowledge only emerges when, on the one hand, the new work experience matches existing connotations, makes them vibrate and, on the other hand, deviates from the existing knowledge to such an extent that the new experience contributes to an extension and deepening of previous connotations and valuations. Work experiences are always made when the existing ideas, connotations and expectations have to be questioned, modified and differentiated by the new reality.

Fig. 11.3 From learning to working: a widespread misunderstanding

11.2.2 The Training Paradox

The concepts of active learning pose a mystery: How do beginners become experts
without first acquiring the corresponding knowledge? This may sound paradoxical,
but it corresponds exactly to what vocational pedagogy understands by active
learning. Beginners in a profession become experts by doing what they want to
learn. Trainers support them by confronting learners with work situations that are
challenging to master. At the same time, it is also true that professional skills are
based on professional knowledge.
With the introduction of the learning field concept, the formula “Professional
action requires professional knowledge” is a thing of the past.
Gottfried Adolph (1984, 40 f.) reported on an informative example from his
teaching practice.

An example from teaching practice:


“Electronics technicians at the beginning of their second year of training are
asked to measure current and voltage in the laboratory using a series connec-
tion of two lights. The laws of series connection were previously ‘thoroughly
tested’ in theoretical lessons. The performance test showed the teacher that all students now have the theoretical knowledge that, when resistors are connected in series, the voltage drops are proportional to the resistances and the current is the same in all resistors. The students can make the appropriate calculations.
The lights that each pupil has connected in series have different wattages at the same rated voltage. Their resistances are, therefore, not the same. They are chosen so that, when the circuit is closed, the light with the higher rated power does not light up, while the weaker one lights at almost full power.”

The subsequent behaviour of the pupils follows an almost law-like pattern:


hesitation—surprise; uncertainty about the fact that only one light is “on”. A
frequent call to the teacher: one of the lights is broken! This is automatically
followed by taking the “broken” light and unscrewing it from its socket. The fact
that now the other light also goes out leads to renewed, even stronger uncertainty. By inserting and removing the light, the phenomenon is repeated over and over again, as if one needed repetitive [...] confirmation of what is intrinsically “impossible” (exclamation of a student: “This is impossible!”).

432 11 The Didactic Quality of the Competence and Measurement Model
Gottfried Adolph comments on this typical event: “... Everything that happened was not expected by the students, who expected that a ‘correctly’ connected light would also light up. If it does not, then it is ‘broken’. It is expected that twisting a light in and out of its socket will influence that light and not the other”.
He, therefore, concludes: “The preceding theoretical teaching on the series connection of resistors has not changed the expectations effective in practice—the school theory has not reached the personal, secret theory [...]. It turns out that the still widely used organisational model (first so-called ‘theory teaching’ ... followed by ‘practising application’ ...) is wrong in its approach” (ibid., 41).
If the teacher had asked the pupils to experiment with the series connection of
lights of different wattages instead, then the pupils, possibly supported by the
teacher, would have finally understood in a process of testing and experimenting
(in line with experimental cognitive activity) not only the laws according to which a
series connection works, but also the important aspects of connecting lights in series.
The decisive point for this form of acquisition of professional knowledge, however,
is that the pupils would not only be taught formulae for calculating the series connection of ohmic resistors, but that they would be challenged to experiment
and acquire these findings themselves. If these technical findings are taught by the
teacher, then their value for practical tasks is not only limited, but the teacher has
missed an important learning opportunity, namely, the acquisition of the ability to
gain knowledge by experimenting.
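The physics behind the pupils' surprise can be worked through numerically. A minimal sketch, assuming two 230 V bulbs rated 100 W and 15 W (illustrative values; a real filament's resistance is strongly temperature-dependent, so the rated resistance is only a first-order approximation):

```python
# Two bulbs with the same rated voltage but different rated power in series.
V_SUPPLY = 230.0
ratings = {"100 W bulb": 100.0, "15 W bulb": 15.0}  # rated power in watts

# Rated resistance from R = V^2 / P: the stronger bulb has the LOWER resistance.
resistance = {name: V_SUPPLY**2 / p for name, p in ratings.items()}

total_r = sum(resistance.values())
current = V_SUPPLY / total_r  # the same current flows through both bulbs

for name, r in resistance.items():
    v_drop = current * r      # voltage divides in proportion to resistance
    power = current**2 * r    # power actually dissipated in this bulb
    print(f"{name}: R = {r:.0f} ohm, U = {v_drop:.0f} V, P = {power:.1f} W")
```

The 15 W bulb receives about 200 V of the 230 V supply, close to its rated voltage, and therefore lights almost fully, while the 100 W bulb dissipates less than 2 W and stays dark: exactly the effect that contradicted the pupils' expectations.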

11.2.3 Designing Learning Tasks

The linchpin of professional learning is competence-promoting company work


situations and tasks which, in the form of learning tasks, represent the starting
point for vocational training processes.

What Distinguishes Learning Tasks from Work Tasks

Only work tasks that encompass a potential for the learner’s competence develop-
ment have the quality of “development tasks” and can be transferred to learning
tasks. Learning tasks can be completed in a few hours if—as with open test tasks—
they are restricted to conceptual planning. This distinguishes learning tasks from
projects. Projects always have two results:

A “Product”

For example, a class of electronics technicians specialising in energy and building


technology develops modern lighting for the school canteen or prepares an excursion
to a regional wind farm and evaluates this excursion.

A Learning Outcome

The learning outcome is the main concern of a project within training and must,
therefore, not be lost sight of. It is important to exchange ideas at the beginning of a
project and ascertain what can be learnt in the planning and implementation of a
project.
Learning situations aim at professional competence development. They belong to
the project-based forms of learning, as the intrinsic learning tasks are realistic and
complex. They are, therefore, also based on the concept of a complete task solution.
If learning tasks are also solved practically, then it is useful to speak of “work and
learning tasks”.
Learning situations offer a practical advantage. Project-based learning is
maintained, especially if the didactic concept of complete task solution is observed.
However, the organisational and temporal framework conditions for the implemen-
tation of learning situations are uncomplicated. This also means that learning tasks
can be worked on and justified in varying depth and breadth.
The following explanations serve as orientation for the step-by-step design of
learning tasks on the basis of work situations/tasks (Fig. 11.4).

Step 1: Identifying Competence-Promoting Work Situations/Tasks

The first point of reference for identifying competence-promoting (significant) work situations/tasks is the apprenticeship profiles. In addition to the professional qualification profiles (professional skills), they describe the professional action fields (Table 11.1).
Possible sources for selecting “significant work situations” include:
1. The job portal
2. Company exploration (Fig. 11.5)
The trainees of a vocational school class usually complete their practical training
in the companies of the region. The spectrum of the business fields of the local
training companies represents the vocational fields of action of their training pro-
fessions. The concrete vocational work processes and situations in the companies fit
into the context-free description of the job profiles and training regulations. On the
other hand, the real work situations/tasks are expressions of company-specific
production and service processes and the contents and forms of the professional
work processes given by them. Knowing these is very important for teachers, as their trainees gain their work experience and are trained practically in these company-specific contexts.

Fig. 11.4 Steps to design learning tasks

Table 11.1 Example of professional action/activity fields
• Shipping clerks: import-export assignments; procurement; marketing/proposal creation; forwarding and logistic business processes/controlling
• Auto-mechatronics engineers: service/maintenance; repairs; conversion and retrofitting; diagnostics
• Nursing professions: nursing as part of the nursing process; implementation of initiated measures; training, instruction and consultation of patients and relatives; administrative tasks/management
In many cases, it will make sense to conduct a more in-depth investigation of the
work situation or task together with the trainees in order to explore the company or
the company’s expertise. For this purpose, a detailed questionnaire or an exploration
grid with the most important aspects to be considered should be developed. Only
such an instrument turns an unsystematic inspection into a targeted exploration.
Despite thorough preparation, however, the operational events can never be fully recorded. In addition to the consideration of overall company contexts, it is, therefore, of particular importance to track down details at the specific workplace and to obtain additional information from the employees on site. This requires a preselection based on the following questions, particularly in the case of larger companies:
• Which (typical or unusual) work situations/tasks of professionals are of interest to us?
• Which jobs of professionals should we examine in more detail?

Exploring company work situations/tasks
• Types of services (repair, maintenance, installation, assembly, consultancy, documentation, presentation, etc.)
• Used technology, systems, machines, tools, auxiliaries
• Work organisation, working methods, work processes (assembly, installation, set-up, equipping, etc.)
• Manufactured products/sub-products, their use and field of application
• Introduction of new technologies, products, forms of work
• Requirements for the products and sub-products/services, the manufacturing process, the organisation of the work (processes), the employees/professionals
• Alternatives to the technology used and to the organisation of work (processes)
• Problems/“hot spots” in the production/service and organisation of work(s)

Fig. 11.5 Grid for exploring company work situations/tasks
In smaller companies, the boss can usually spontaneously name tasks that are a
little out of the ordinary, demanding and suitable for training purposes. Suggestions
from part-time trainers who are integrated into the company’s work and business
processes and, therefore, have a great deal of background knowledge can also be
valuable. As a rule, they can provide information about current innovations, but also
about company focal points and problems. Of course, the activities, work situations/
tasks and difficulties of the employees with which the trainees are also confronted in
their future profession are of particular interest.
The objectives of the exploration are, of course, discussed with the company. It
should become clear that it is a matter of becoming acquainted with professional
work. In this case, experience has shown that corporate managers are happy to
support such measures.

Experience from Training Practice


Experience has shown that training companies, even within a learning venue
network, have a very positive and open attitude towards the exploration
process. It obviously contributes to the qualitative improvement of their
training, leads to stronger practical orientation and can help to make the transition of young professionals to their role and function as skilled professionals much smoother. The management should be sufficiently informed about the training activities (of the network) by the part-time and full-time trainers. This means that the objectives of an exploration have been agreed in advance with the respective company. Any reservations about this measure can, therefore, be avoided from the outset.

The following example of a checklist for the selection of suitable work tasks contains selection criteria that can be checked by means of partial questions that can be answered with “Yes” or “No” (Table 11.2):

Table 11.2 Checklist for verifying the suitability of in-company work tasks for training purposes

Trainees
• Do the trainees have sufficient previous knowledge and practical experience to cope with the task?
• Can the trainees learn anything while working on the tasks in line with their training?
• Is the time and organisational effort required to complete the work task clear and manageable for the trainees?

Trainers and teachers
• Do the trainers and teachers possess the necessary technical, social and methodological-didactic competences or can they acquire missing competences?

Companies
• Is it possible to reconcile work task processing by trainees with the interests of the training companies?
• Is there any benefit for the training companies or for the learning venue network?
• Are the burdens for the training companies distributed fairly?
• Can the production or the service be taken out of the company’s time-critical process for training purposes?
• Do those responsible in the company agree?
• Is there enough time available for the part-time trainers?

Vocational school
• Do those responsible at the school agree?
• Are the colleagues whose lessons may also be affected informed and do they agree?

Resources
• Are the necessary resources available or can they be procured?
• Are there suitable learning and work locations available for processing the work task?

Framework curricula
• Can a reference be made to the framework curricula?
• Is the work task relevant for examinations?

Possibilities for design and potential
• Does processing the task allow alternative approaches and solutions?

Skilled work/craftsmanship
• Does processing the work task place exemplary demands on the trainees in terms of skilled or craft work?

Financing
• Can any necessary funding be raised?
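The Yes/No logic of such a checklist lends itself to a small data structure, for instance for keeping records across several candidate tasks. A minimal sketch; the abbreviated questions and the function name are illustrative, not part of the COMET method:

```python
# Abbreviated criteria from the checklist (criterion -> guiding questions);
# the selection of questions here is illustrative only.
CHECKLIST = {
    "Trainees": [
        "Sufficient previous knowledge and practical experience?",
        "Can anything be learnt in line with the training?",
    ],
    "Companies": [
        "Compatible with the interests of the training companies?",
        "Do those responsible in the company agree?",
    ],
    "Resources": [
        "Necessary resources available or procurable?",
    ],
}

def failed_criteria(answers):
    """Return the criteria for which at least one question was answered 'No'.

    `answers` maps a question text to True ('Yes') or False ('No');
    unanswered questions are treated as 'No'.
    """
    return [
        criterion
        for criterion, questions in CHECKLIST.items()
        if not all(answers.get(question, False) for question in questions)
    ]

# Example: a candidate task that has not yet cleared the resource question.
answers = {q: True for c in ("Trainees", "Companies") for q in CHECKLIST[c]}
print(failed_criteria(answers))  # prints ['Resources']
```

A task would count as suitable only when `failed_criteria` returns an empty list, mirroring the rule that every partial question must be answerable with “Yes”.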
Basically, work processes are always learning processes. Trainees—but also all professionals—gain experience, gain confidence in handling specific professional tasks (exercise effect), learn how to deal with mistakes and solve unforeseeable problems, and work together with colleagues, superiors and other trainees. They are usually also concerned with the consequences of their own actions regarding
• The superordinate work result
• The clients
• The team
In this respect, work tasks are always associated with work experience. It depends
on the in-company training—the trainers and the company practice group—whether
and to what extent the work experience is reflected.

Questions to Reflect the Work Experience


• What was new and what was already routine?
• What was particularly important to meet the quality requirements?
• Did I have to take into account new rules and new knowledge?
• Did I understand everything?
• What room for manoeuvre was given and how was it used?

These questions are all within the context of operational circumstances and the
scope for design. Nevertheless, the reflection of the operational work and the
exchange of ideas with the operational actors are a first step towards the transfor-
mation of a work task into a process of generalising the situational work experience.
The result is knowledge that detaches itself from the work process and opens up the
possibility of dealing prospectively with the specific work processes in technical
discussions with colleagues, trainers and teachers: What could be improved in the
implementation of the work processes?
At school, the relationship between work and learning—between the work and
learning task—is fundamentally changing. It is no longer a matter of professionally
carrying out a work task—embedded in a company work process. It is exclusively
about learning. In this respect, it is consistent that we are talking here about learning
tasks and learning situations. The term “work and learning tasks”, which is occa-
sionally used, is intended to remind us that the learning tasks are directly related to
concrete work tasks. That would speak for this designation. It should, however, be
reserved for projects which are carried out in cooperation between schools and
companies and which are embedded in real work processes. School-based learning
tasks, on the other hand, have as their reference point “significant work situations” or
work tasks and processes which teachers consider to be characteristic of the profes-
sion and adequate for the respective situation of the learner’s competence
development.

For learning tasks, it is, therefore, not important that they are based on the
subjective experience of the trainees, but that the trainees are able to build on
their own work experience by working on a learning task in the process of
school learning.

Step 2: Developing and Describing Learning Tasks from Work Situations/Tasks

The following six design features can be derived from the COMET competence model and the theoretical integration of the learning field concept for the design of learning tasks.

Prospectivity

The first design feature is transcending professional reality: prospectivity.
Trainees from different companies have similar or different experiences in the
same professional action field. In total, they point beyond the problem-solving
horizon of individual companies. The school, therefore, has the potential to think
and experiment prospectively and beyond the current company situation. When
designing the learning situations, it is, therefore, very important to make full use
of the experimental possibilities for a prospective interpretation of the learning tasks.
To this end, study labs must be equipped accordingly. Unfortunately, they rarely
have this quality: they are usually intended to experimentally comprehend and apply
theoretically acquired knowledge.

Example An alternative TÜV [the German technical inspection association’s


roadworthy test]
In the model experiment of action-oriented technical instruction in motor
vehicle mechatronics classes in Melsungen (at the beginning of the 1980s),
one learning task was to develop an alternative TÜV and to test it practically
and experimentally in an integrated study lab. The results of the project were
presented to the local TÜV officers and discussed with them. One comment:
“If I had the opportunity to do so, I would transfer much of what I have seen
here today to our TÜV”.

The Concept of Holistic Task Solution

The central idea of a design-oriented vocational education and training system


suggests that learning tasks should be placed at the level of work process knowledge
(refer to p. 58 ff.), in order to avoid slipping into a subject-systematic structuring of
“specialist tasks”. The ability to understand vocational tasks in their complexity and
to solve them completely presupposes that the working contexts are not subdivided
into a structure of context-free subtasks (!). Learning tasks always refer to a professional/company reality that is socially constituted. The solution of
these tasks is, therefore, also about weighing up values and interests.

Example Control system for a hotel lift


When designing a lift control system for a hotel, the description of the
situation also referred to the rooms distributed over the eight floors (fitness
centre, luxury apartments, conference rooms, hotel management offices, etc.).
The task was not only to design a functioning control system, but also one
that was adequate for the hotel’s situation.

Action Consolidation and Accentuation

Learning tasks allow and suggest highlighting of work situations and aspects and
neglecting other—less learning-relevant—aspects, as long as the authenticity and
objectivity of the work situation is not affected. This achieves a certain dramatisation
of the work situation/task, which strengthens the motivation of the learners to deal
with the given task with commitment.

Solution and Design Scopes

Learning tasks are formulated with reference to realistic work situations from the
perspective of “customers”. The learners are, therefore, challenged to conduct a problem analysis based on the customer’s description and ultimately to develop a professional procedure and solution for the task. This concept of open tasks requires a more or less wide scope for design through the form of the situation description, taking into account the criteria of the concept of the complete task solution.

Representativity

The learning task represents work situations that are typical for the profession and
contain problems with adequate learning and development potential. They have the
quality of development tasks. Focal points of operational organisational develop-
ment, for which there are no fixed solutions, are also suitable.

Competence Development

“Growing into” a profession is subject to the novice-expert paradigm. Vocational


training has the function of supporting this development from a professional begin-
ner to an expert with the logical developmental structure of learning as the conse-
quence. The respective educational plans should be interpreted in line with the
novice-expert paradigm when selecting and formulating the learning task.

The following structure has proved to be useful for the description of learning
tasks that are intended to challenge the complete solution of tasks (Lehberger, 2015,
67):
• Specification of the learning task, which shows the reference to the action
• A description of the situation which relates to a typical and problematic profes-
sional work situation (if necessary, with illustrations), which is formulated from a
customer perspective and which is open to alternative solutions—in line with
professional practice
• A task that clarifies the perspective from which the situation is to be viewed and
from which the objective of the action is to be derived
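This three-part structure can be written down as a simple template. A minimal sketch; the field names and the example values are one possible rendering, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class LearningTask:
    """Three-part description of a learning task (after Lehberger, 2015, 67)."""
    specification: str  # names the task and shows the reference to the action
    situation: str      # typical, problematic work situation, formulated from a
                        # customer perspective and open to alternative solutions
    assignment: str     # clarifies the perspective on the situation and the
                        # objective of the action to be derived from it

# Hypothetical example in the spirit of the hotel lift task above
task = LearningTask(
    specification="Control system for a hotel lift",
    situation="A hotel owner asks for a lift control system suited to the use "
              "of the eight floors (fitness centre, apartments, offices).",
    assignment="Design and justify a control concept that is adequate for the "
               "hotel's situation, not merely functional.",
)
print(task.specification)
```

Keeping the three parts separate makes it easy to check that the situation really is written from the customer's perspective and that the assignment leaves the solution space open.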

Publication of Learning Tasks

Experience from the COMET projects shows that active use is made of the option of
publishing tried and tested learning situations/tasks via the Internet—for example,
using net-based groupware. As these are open learning situations/learning tasks and
not conventional teaching designs, there is no standardisation of the teaching-
learning processes, in which the situation-specific peculiarities remain unaltered.
In this respect, there is every reason to establish such platforms.
However, one condition should be fulfilled before learning tasks are “published”:
Each learning task includes a solution space, so that teachers can recognise the
learning potential in a learning task from the perspective of the developers. In
principle, solution spaces cannot be complete. However, they define the possible
solutions accordingly—related to all aspects of the solution. Therefore, in the course
of time, the solution spaces are expanded by new users.
With some practice, experienced teachers are able to develop learning tasks
virtually “on a continuous basis”. Practice shows that the quality suffers considerably whenever learning tasks have not been tested but have merely been thought up by their authors (cf. Lehberger, 2015, 213 f.).

Learning tasks that are placed on the Internet and published should always be
tested in class and include a description of the solution space.
It is recommended to develop learning tasks in a team. According to all
experience, this increases the quality of the tasks.

Of course, it is particularly interesting if the results of the self-evaluation are


documented in addition to the learning tasks.

Step 3: Identifying Learning Opportunities and Diagnosing Competence Development

Learning Tasks with Solutions Possible at Different Levels

Vocational education and training is particularly challenging because of the very


heterogeneous nature of trainees (previous education, interests, talents, etc.).
Teachers are, therefore, faced with the task of dealing professionally with these
different prerequisites. However, the content structure of vocational education and
training is helpful to teachers. While a large number of subjects in general school are
concerned with finding the right solution, for example, to a mathematical problem,
via a predetermined solution path, the solution of professional work tasks depends on
making full use of the respective solution spaces. Learning tasks have a solution
space, as the working world is always about the search for good and situation-
adequate solutions. For example, when designing office lighting, two trainees can
present solutions that are of equal quality. However, the level of competence of the
two solutions can differ considerably if one solution is explained at the level of
action-leading knowledge and the other at the level of action-reflecting knowledge.

Possible Differentiation in the Evaluation of Task Solutions

Based on the standardised COMET test procedure, it is possible to record the


distribution of the test subjects (e.g. of a location) across the test levels (Fig. 11.6).
The distribution of the test persons across the competence levels shows that each competence level can be reached to a low, medium or high degree.

Fig. 11.6 Distribution of competence levels across two locations (shipping clerks)

The Teaching Objective: Dealing with Heterogeneity Aims at the Individual Promotion of Professional Competence

Formulating and justifying educational goals without programming the learning


process, but rather exploiting the potential of a—subject-related—learning task
distinguishes the pedagogical skill of the teaching staff.
In one form or another, every educational plan encompasses not only the content
to be conveyed but also the educational objectives. In modern terms, these are
described in the form of educational standards and competences that are expected
after completing defined sections of an educational programme. Structured VET is
not possible without clearly identified educational objectives, irrespective of how
they are described. However, the idea that only educational outcomes that can be
checked or even measured embody didactic value is misleading. Promoting profes-
sional self-awareness, creativity and democratic behaviour are some of the many
important educational objectives that cannot be squeezed into the templates of
learning-outcome taxonomies or educational standards and that largely elude the
established forms of learning-outcome taxonomies as well as the measurement of
professional competencies. It is precisely for this reason that they must be kept in
mind when designing and organising vocational training processes. A widespread
practice of designing educational processes is based on the idea of teaching oriented
towards learning outcomes. Based on the educational goals, the didactic actions of
the teacher are to be structured. In other words, according to the central idea of
learning outcome orientation, there is a deductive interrelation between the educa-
tional objectives and the didactic structure. However, this idea of planning, design-
ing and evaluating teaching is problematic because it ignores the reality of vocational
education and training (Fig. 11.7).

Developing Competences

The COMET Competence Model offers a solution that is oriented towards the
guiding principle of imparting professional competence by working on and solving
work tasks that demonstrate the quality of development tasks. The overarching
educational objective, the ability to completely solve work tasks, cannot be called
into question because incompletely solved work tasks entail risks to a greater or
lesser extent. Empirical competence research shows that the great heterogeneity
within and between the test groups (e.g. classes) persists even if the teacher succeeds
in raising the competence level of their class (Fig. 11.8).
If one depicts the professional competence (development) of trainees or technical
college students in the form of competence profiles (Fig. 11.9), then learners and
teachers can answer important questions such as
• Has the trainee/student completely solved the work/learning task?
• If not, which aspects of the solution were not or insufficiently considered?
• Is the level of competence similar in all learners? If this is the case, then the teacher is challenged to promote the sub-competences developed, for example, by means of corresponding learning tasks.
• At what level of knowledge were the task solutions based?

Fig. 11.7 Developing competences, achieving learning objectives

Developing competences (competence-oriented tuition):
Trainees grow into a profession by learning to solve increasingly complex professional tasks completely and responsibly: the professional skills and the understanding of and responsibility for what one does form an indissoluble connection. Therefore, the potential of the learning task to trigger competence development is particularly important. Professional competence arises from the reflected work experience. (Schematic representation: work process knowledge.)
The degree to which prospective professionals (trainees/students) are able to exploit the scope for solutions or design given by vocational tasks and to justify their solutions is the indicator for the development of professional competence. Teachers act as “development supporters” and “learning consultants”.

Achieving learning objectives (goal-oriented tuition):
Teachers define the learning objectives for their lessons: the target learning behaviour of the pupils = lesson planning. They organise learning by the optimal arrangement of learning steps, attempting to achieve the target learning behaviour of the pupils = lesson organisation. Then the teacher checks whether the pupil Sch has become a pupil Sch’ as a result of learning: learning control. (Schematic representation: learning objective-oriented teaching according to MÖLLER 1969, p. 20.)
The didactic action of the teacher is based on the type of purposeful rational action and corresponding didactics, as expressed, for example, in the concept of programmed learning. Teachers act as a “teaching system”.
The competence profiles of the trainees/students are a good basis for the design of
differentiated teaching (individual support).
This form of diagnostics (evaluation) also shows the level of knowledge at which trainees/students can solve work tasks/learning tasks: at the level of action-leading, action-explaining or action-reflecting knowledge of the work process.

Fig. 11.8 Percentile bands for professional competence across test groups at class level for apprentices (results 2009)

The strengths and weaknesses of task solutions or project results can be


discussed, so that every learner/student can see how their solution or the project
result of their working group should be classified. The standards are always the
same:
• Has it been possible to solve the task completely in line with the situation
description?
• Were the solution aspects to be considered weighed against each other in relation
to the situation?
• How differentiated was the justification of the task solution with regard to the
result and the procedure?
Conclusion Competence (development) diagnostics instead of controlling learning
objectives.
Competence diagnostics measures how the professional competence of trainees/
students develops qualitatively and quantitatively and how the ability of learners can
be promoted so that they can solve vocational tasks completely “with social and
ecological responsibility”.
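How a competence profile can be condensed into the summary figures used in the COMET analyses (a total score and a coefficient of variation across the criterion sub-scores, cf. Fig. 11.9) can be sketched as follows. The criterion names follow the COMET competence model; the sub-score values and their scaling are invented for illustration:

```python
import statistics

# Illustrative sub-scores for the eight COMET criteria of one task
# solution; the values are invented, not real test data.
profile = {
    "Clarity/presentation": 18.0,
    "Functionality": 21.0,
    "Sustainability/utility": 12.0,
    "Efficiency/effectiveness": 10.0,
    "Business and work process orientation": 9.0,
    "Social compatibility": 6.0,
    "Environmental compatibility": 5.0,
    "Creativity": 8.0,
}

# Total score: the quantitative side of the profile.
total_score = sum(profile.values())

# Coefficient of variation: spread of the sub-scores relative to their
# mean. A low value indicates a homogeneous profile, i.e. a solution
# that attends to all aspects rather than excelling at only a few.
mean = statistics.mean(profile.values())
cv = statistics.pstdev(profile.values()) / mean

print(f"TS = {total_score:.0f}, CV = {cv:.2f}")
```

Two solutions with the same total score can thus still be distinguished: the one with the lower coefficient of variation comes closer to the guiding idea of the complete task solution.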
Learning objective-oriented tests serve to check whether the “learning objectives” defined by the teacher have been achieved—measured in the form of scores or marks. What is reviewed is the imparting of teaching material and skills. Whether and how these contribute to professional competence development remains out of sight.

Fig. 11.9 (a) Average competence profile of a test group of vocational school students (type “VET”), n = 27 and (b) differentiation of the competence profiles according to the total score (TS) and the coefficient of variation: (a) E-B, Class No 7, n = 26; (b) E-B, Class No 5, n = 18; (c) E-B, Class No 24, n = 19; (d) E-B, Class No 23, n = 17 (results 2009).

The learning field concept places professional competence development at the


centre of subject-oriented VET didactics.

An informative incident from a pilot project for the introduction of a teaching practice oriented towards the guiding principle of shaping competence
A group of pupils (second-year electronics technicians specialising in
energy and building technology) decided on a project: electrical building
protection for a residential building.
The description of the situation was based on relatively unspecific require-
ments of the house owner, so that there was a great deal of scope for the design
of the project. The solution of the project was presented and discussed by the
group of students at a pilot event. The workshop participants (teachers,
members of the education administration, training experts from chambers,
vocational training researchers) were extremely impressed by both the project
outcome and the presentation. The solution and its professional justification
were convincing. The trainees were able to answer technical questions about
possible alternatives with confidence. But the answer to the question of how
the pupils assessed their project in the context of their vocational training
provided a big surprise. The students agreed: “The project was great fun! We
occasionally went on after work, but learning was a little neglected.”
It is obvious that this evaluation of the project caused some amazement and
head-shaking among the workshop participants.
There is a simple reason for the differing assessment of what was learnt
during this project by the group of pupils on the one hand and the vocational
training experts on the other. The learning concept of the pupils is shaped by
their learning experiences in general school. They apparently do not yet have a
professional learning concept.
The acquisition of a professional learning concept by this group of pupils
was also hampered by the fact that training in their company in the first half of
training was characterised by course-based learning in an apprenticeship
workshop. This group of pupils experienced “learning” as the acquisition of
teaching material and skills.

Such experiences show that in vocational education and training it is also important from the outset to reflect on the results of learning or, more precisely, of
professional competence development with the trainees during training. The ques-
tion of the learning opportunities it contains already arises at the planning stage of
the learning task/learning situation. Then, during the evaluation of the learning task/
learning situation, the next question can also be answered: What did we learn while
working on the learning task?

11.2.4 Designing Teaching-Learning Processes

The planning and preparation of project-based forms of learning are confronted with
a dilemma. Detailed planning largely determines the goals, contents and course of
didactic action. However, lesson planning is only good if it opens up leeway
for the learners/students. A first hint on how to deal with the described dilemma is
provided by the training paradox already considered: Professional beginners
become experts by doing what they want or should learn. The teacher is, therefore,
not allowed to spoon-feed the students what they want to learn. This is where the
new role of teachers comes into play, namely, opening up design and decision-
making spaces for trainees/students. The saying “from knowledge mediator to
learning process facilitator” is first made concrete here.

The traditional role of the teacher/trainer as a knowledgeable person who passes on his or her specialist knowledge to the trainees, using more or less good teaching methods, is a thing of the past. Gottfried Adolph made the statement: “Teaching is destructive” (Fig. 11.10).

The following, therefore, deals with the design and organisation of teaching-
learning processes which enable the individual learner/student to deal with learning
tasks which have a suitable learning potential. In order for them to learn something,
it is particularly important that they can contribute their work experience and
determine where they need to learn something in order to be able to work on the
tasks they have derived from the learning tasks. The individual discussion of a
learning task includes cooperation with other trainees/students if the work or learn-
ing process requires it.

Step 1: Selection of a Customer Order with “Suitable” Learning Potential and Formulation of a Learning Task

The selection of suitable customer orders and their formulation as a learning task
play a central role in the design and organisation of vocational training
processes. Two sources for the selection of customer orders have already been
described in the form of the works survey and the job portal. Based on the selected
assignments, the teacher can create learning tasks with corresponding situation
descriptions. If there is a job portal, then of course the learners/students can also
choose learning tasks that correspond to their level of development and that have the
right potential for something novel. The question from the learner’s point of view is:
Where do I see the challenges that the task holds in store for me? The learner will
only find an answer to the question of what they can actually learn by dealing with
the new situation once the task has been solved. At this point, the teacher has a
special responsibility to ensure that the learners are able to learn something while

Fig. 11.10 The structure of the working and learning process



completing the learning task. In order to fulfil this responsibility, the teacher must clarify very
precisely what experiences the (individual) learners have already had and what
knowledge they have acquired. Only then can the challenge be described, by the
accomplishment of which they can gain new work experience.

In current teaching practice, it can be observed time and again that learners are
usually able to master learning tasks through the use of their existing work and
learning experience. Very often nothing new is learnt!

The selection of a suitable learning task is not critical in view of the heterogeneity
between learners or between different learning groups, as this form of learning leaves
open the depth and breadth to which individual learners or learning groups work on
the task. There is, therefore, no such thing as the task-specific learning potential or a learning
potential related to one competence level that could be summarised in “learning
objectives”. Just as in sports, an improvement in the long jump from, for example, 4.20 m
to 4.40 m may be a great personal success for some, while the 5.20 m mark is not a
success for others if they have already jumped 5.40 m.

Learning tasks with their possible solutions leave open the level at which they
can be solved. They have an individual development potential for the trainees/
students.

What teachers should bear in mind when taking this first step:
• The learning task must be selected so that it has an appropriate learning potential
for the learning group and all trainees/students on their way to achieving employ-
ability (also refer to the job profiles and vocational curricula).
• The learning task should be a challenge for both the weak and the strong learners
and offer correspondingly challenging solutions.
• The learning task must be described from the customer’s perspective (refer to
p. 541 and p. 549). In the case of extensive tasks, the question arises as to a
division of labour in groups. This form of learning organisation is very demand-
ing because the coordination of division of labour learning involves cooperation
between groups and all participants should benefit from the learning processes
and outcomes:
• The combination of sub-solutions and new knowledge must be carefully planned.
• How should the groups inform each other about what they have learnt? (refer to
p. 508)

Step 2: Analysing and Functionally Specifying the Customer’s Situation Description

In this step, it is particularly important that the teacher succeeds in getting the
trainees/students to adopt the respective learning task as their own. For this purpose,
they first clarify the customer’s (externally or internally) formulated requirements
and wishes on the basis of the situation description. This analysis gives the trainee/
student an initial orientation regarding two questions: what the result of processing the
learning task should be from a technical perspective, and what needs to be done (tasks)
in order to achieve appropriate solutions (technical specification).
matter of identifying possible solutions and deciding which approaches to solutions
“remain in play” for the time being, that is, which will be pursued further.
All learning tasks are described from the customer’s perspective. The task of the
learners—as prospective professionals—is then to:
• Check the customers’ requirements for their feasibility
• Check whether individual requirements contradict each other and how these
contradictions can be resolved by comparing all requirements in their weighting
• Check whether the customer has overlooked requirements that are technically
feasible, for example
The most important thing then is to incrementally translate the customer’s wishes
into a specification.
It remains to be seen whether the specification formulated in a first step will prove
feasible and whether the further steps of the task solution will result in new insights
and “better” solution options. It is likely that an initially formulated specification will
only take its final form when the procedure and its justification are documented.
If a learning task is in the form of a specification given by the teacher, then the
trainee/student becomes the executor of the detailed solution, as shown by the
following example:

Manufacture Two Grippers (Material: 1.2842) from 20 × 15 Flat Steel According to the Drawing

How teachers and trainers can hinder the process of competence development:
• When they formulate situation descriptions that do not reflect the cus-
tomer’s requirements and wishes
• If they give the trainee/student learning tasks in the form of specifications,
thereby telling them exactly what they have to do


• If they do not reflect on the learning opportunities contained in learning tasks with the trainees/students—also with reference to the training objective: employability
• If they limit learning to the acquisition of theoretical knowledge—and,
therefore, lose sight of the vocational action and learning fields
• If they underchallenge the trainees/students and, therefore, do not challenge
their competence development
• If they do not take the trainees/students with their specific competence
development—and, therefore, also with their strengths and weaknesses—
seriously
• If they do not identify with their lessons

In practice, teachers frequently set tasks without considering what their trainees/
students can learn. Their professional task concept may be based on a
misconception, at least if it is the task concept of operational work preparation
(WP) that ensures through detailed specifications that the task solution is
implemented as planned. This unintentionally destroys the learning opportunities
of trainees/students.

Correcting a misconception: the aim of the lesson is not achieved simply when the learning
tasks have been solved, but when learning tasks with a learning potential
identified in advance by the teacher are solved and when the trainees/students
“learn” to pursue this question while reflecting on the work experience.

It is, therefore, part of the professionalism of the didactic actor to estimate the
degree of difficulty of learning tasks so that the trainee/student is not over- or
underchallenged. With heterogeneous learning groups, it might be difficult to find
the “right level of difficulty”. Here, the concept of the “open learning task” requires a
rethink. It is not important to adjust the level of difficulty of a learning task, as there
cannot be an appropriate level of difficulty for all learners in a learning group!
Rather, the teacher formulates realistic learning tasks that have development poten-
tial in learning a profession. These are learning tasks that
• Are appropriate for the “level of training” (beginner, advanced beginner, etc.)
• Do not restrict the scope for design
• Enable trainees/students to base their tasks on a level of knowledge
corresponding to their individual competence development
The solution variants of the individual learners and those of the working group, as
well as the depth and breadth of their justifications then represent the competence
level and the competence profiles of the trainees. When learners give their best, there
is no underchallenging of the stronger. The challenge for the teacher is to provide
“process-related help”, so that the weaker also find a solution to the problem.

In a class that feels committed to the individual promotion of vocational competence development, achieving “the goal” does not mean lumping all trainees/students together (the same “learning objectives”).

After the analysis and technical specification of the learning task, the trainees/
students are able to reflect on their learning opportunities together with the teacher.
With the learning task analysed in this way, the trainees/students now link the two
types of objectives: “learning objectives” and “work objectives”. They have been
concretised to such an extent that they represent the orientation basis for working on
the task—alone or in a team.

Questions for reflection could include:
• What do I already know?
• What do I need to learn? (What is the challenge for me?)
• Can I overcome the challenge with the available tools?
• For which tasks do I need the help of the teacher?
• Which tasks are best suited for cooperation with fellow learners?

Of course, the trainees/students can only answer the preceding questions if the
analysis of the situation description is successful: Have the customer’s requirements
and wishes become clear to them and could they make an initial technical specifi-
cation? In teaching practice, however, it is not uncommon for trainees/students to have
difficulty understanding the description of the situation. They then have no access to
the learning situation: “I don’t understand the task and don’t know what to do.” The
challenge for the teachers now is to help the trainees/students without depriving
them of the chance to find access themselves. This is where process-related help is an
obvious option, for example, in the form of questions and requests that open up the
learner’s own access to the task at hand.

Possible questions and requests to trainees/students:
• What does the customer want?
• Which customer requirements and wishes did you recognise?
• What exactly remains unclear?
• What would you do first?
• Remember the last learning situation: How did you proceed in that case?
• Create a sketch that illustrates the facts.

Step 3: Development and Definition of Evaluation Criteria

Once it has been clarified what the approximate result of solving the tasks or processing a
project will be, and which alternative solutions and procedures must be weighed up, it is
necessary to define the evaluation criteria for the task solution. The
COMET rating procedure can be used as an orientation framework here. The
didactic benefit of this step is obvious: learners now know exactly what is important
when developing a task solution (Table 11.3).
The results of empirical teaching research show that the development of evalu-
ation criteria (and their application in the self-evaluation of work and learning
outcomes) increases learning success in terms of
• The scope and evaluation of alternative solutions
• The possibilities for the design and organisation of the task solution (work
process)
• The reflection of what was learnt and the learning process
As the evaluation criteria describe not only the expectations of the result, but also
of the task solution process, they are a good basis for reflecting on what has been
learned and the quality of the task solution.
In this phase of teaching, teachers are challenged to become aware of their
expectations of the learners’ individual competence development and to assess the
learning potential of the learning task on the basis of the following questions:

Table 11.3 Evaluation criteria for task solution, approach and competence

Criteria for evaluating the task solution:
• Does the task solution have an appropriate utility value for the “customer” (client)?
• Was the task solved completely?
• Was there a well-founded balance between alternative solutions?
• Was the presentation of the results successful (for whom)?
• How is the quality of the task solution—based on the evaluation criteria—assessed?

Criteria for evaluating the method:
• Has the planned procedure proven worthwhile?
• Was it possible to translate the situation description into technical specifications?
• Was it necessary to deviate from the client’s requirements—if so, why?
• In which steps did prior knowledge not suffice to solve the task?
• What aids were used to solve the task?
• Which errors and dead ends occurred and how were they corrected?

Criteria for evaluating the acquisition of new competences:
• What work experience and knowledge could be drawn on?
• What knowledge and skills had to be acquired in order to solve the task?
• Where and how was the knowledge and know-how of the teacher used?
• What means were used to solve the task (reference books, internet, etc.)?
• Did the know-how of individual pupils (pupils learn from pupils) help?
• What role did trying out and experimenting play in the acquisition of new competences?
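The three criteria groups of Table 11.3 can be turned into a simple self-assessment checklist that learners tick off after solving a task. The category names, the selection of questions and the yes/no counting below are illustrative assumptions; COMET's actual rating procedure works with graded ratings rather than booleans.

```python
# Hedged sketch: Table 11.3 criteria as a self-assessment checklist.
# Categories and questions are abridged for illustration.
CRITERIA = {
    "task solution": [
        "Appropriate utility value for the customer?",
        "Task solved completely?",
        "Alternative solutions weighed up?",
    ],
    "method": [
        "Situation description translated into specifications?",
        "Errors and dead ends identified and corrected?",
    ],
    "new competences": [
        "New knowledge and skills acquired?",
        "Good sources and forms of learning used?",
    ],
}

def self_assessment(ticked):
    """Count, per category, how many criteria the learner ticked off."""
    return {
        category: sum(1 for q in questions if q in ticked)
        for category, questions in CRITERIA.items()
    }

done = {"Task solved completely?",
        "Errors and dead ends identified and corrected?"}
print(self_assessment(done))
# prints {'task solution': 1, 'method': 1, 'new competences': 0}
```

Making the criteria explicit in this way serves the didactic purpose described above: learners know before they start what the result, the procedure and the learning gain will be judged by.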

• What new work process knowledge is there in a learning task?
• How must the situation description be formulated to result in a realistic solution
space for the learning task and a wide scope for the learner?
• Which competences and which prior knowledge does the learning task require?
• Will the trainees/students succeed in translating the situation description into
technical language, that is, into a specific task?
• Do the learners keep an eye on the utility value of the work result for the
customer?
• Do the trainees/students succeed in recognising the need to acquire new
knowledge?
• Do trainees/students resort to good sources and effective forms of learning when
expanding their professional knowledge?
• Do the trainees/students check the already available concepts for their usability in
the current context?
• How do the learners solve the task?
• Do they already have a professional work concept?
• Is the level of challenge appropriate for the learner?
• At which competence level is the learning task solved?

When observing and advising learners on these questions, it is always important to keep an eye on the competences and the development of each individual’s competences.
The requirement level of a task and the level at which it is solved are
different for each trainee/student. This is where the great advantage of open
tasks comes into play: Open tasks can be solved at a very different level of
knowledge and competence.
The evaluation criteria for solving open tasks make it possible to make the
learner’s competence development transparent.

Step 4: Provisional Definition (Rough Plan) and Implementation of the Task-Solving Procedure: Development of Vocational Concepts of Learning and Working

Planning while processing a learning task or a project always has a provisional character, because experience is gained as the task is solved and unforeseeable difficulties arise that have to be overcome. It is not uncommon for the newly acquired
knowledge to suggest a modified approach.
Practical experience, therefore, provides the basis for decision-making for further
planning. This possibility of decision-making is particularly important for challenges
to which competence-developing potential is ascribed. Planning, execution and
evaluation—on the way to the solution—are, therefore, always alternating steps in

Fig. 11.11 Steps to solve challenging situations

the processing of learning tasks and in the execution of projects. This “gradual
approach” (Böhle, 2009) is an explanation for the dissolution of the described
training paradox in connection with action learning (refer to p. 495) (Fig. 11.11).
When observing current teaching practice, it is noticeable that in the practical
implementation of the theory of complete action this very “gradual approach” is
often ignored as a possibility of knowledge acquisition. Instead, the phases of the
complete action are used to structure the entire work and learning process, and it
is assumed that all the knowledge required for planning can be made available in
advance via a one-off procurement of information. This practice takes the concept
of action learning ad absurdum, because it draws only on objectively available
knowledge and excludes learning through reflection on experience. The preceding
explanations naturally do not exclude the possibility that at the beginning of the
discussion with a task, initial planning decisions can be made and approaches to
solutions developed by accessing the available knowledge. The development of
professional competence is not only about obtaining missing information, but
especially about developing concepts for:
• Professional learning
• Professional work
• Professional cooperation (Bremer, 2006, 293 ff.)
It usually takes some time for learners to understand how work and learning are
interrelated and that they are two sides of the same coin. Teachers and trainees/
students are challenged in vocational education and training to understand what
distinguishes work process knowledge and the vocational skills based thereon. The
concept of collegial cooperation is based on experiences of cooperation in opera-
tional work processes.
The possibilities for dealing with a new challenge, which initially appears to be an
insurmountable hurdle to solving a problem, are manifold. First of all, the reflection
of the learning experience when solving new tasks—under the guidance of the
teacher—is an essential part of tuition. This is about the development of a

professional learning concept. It is not enough for this to emerge randomly, but for
trainees/students to become aware of their possibilities of learning on the path to
employability.

Forms of learning to acquire professional learning competence:
• Perplexity and mistakes: Allowing mistakes is an important prerequisite for
learning from mistakes. It is also about the insight of the trainees/students
“that it depends on themselves” when it comes to mastering a situation.
Blaming others and the circumstances is not the answer!
• Encouragement and self-confidence are important prerequisites for master-
ing new situations.
• The concept of open learning tasks and the possibility of solving the tasks at
very different levels of competence are beneficial for the trainee/student.
• “I’ll try!”
• Testing and experimenting help to cope with new situations, which also
includes detours. What went right or wrong and at which points becomes
apparent at the end.
• Group dialogue or internet research does of course also help.
• The teacher, the textbook and the relevant specialist literature are ultimately
available.

The support provided by the teachers should be process-related and not product-
related. References to sources of information, methods of learning, experimental
possibilities, software tools or even mathematical methods belong to the process-
related aids which give the trainees the opportunity to solve the task themselves.
Process-related support also includes requests or questions expressed by teachers to
learners. Product-related support, on the other hand, is aimed directly at solving a
task or a problem.

Requests or questions could include:
• What exactly prevents you from processing the task further?
• What could help you to overcome this “edge”?
• What is the first thing to do?
• Can you remember a similar situation in the company? How did you deal
with it?
• Just try it out; you can also learn from mistakes.

Learning Within a Group

Group learning is particularly important in vocational education and training because working in “semi-autonomous groups” is highly prioritised in the working world.
Group work is appreciated by management and employees alike.
The advantages of group work from the perspective of the management and the
employees can be summarised as follows:

Group work enables the following for the management:
• Shifting responsibility and tasks to directly value-adding processes (increases labour productivity)
• Shifting the elements of quality control to the work processes: producing instead of controlling quality (increases labour productivity)
• Increasing the flexibility of work planning and organisation and, therefore, work productivity
• Increasing job satisfaction and operational commitment, thereby promoting work productivity

Group work enables the following for the employees:
• More responsibility, which means more interesting work
• Less control through work preparation and more personal freedom, which strengthen self-esteem and job satisfaction
• The experience “We control our own work”, which strengthens professional self-confidence and commitment
• Experiencing work contexts, which strengthens the interest in co-designing work processes

In the school learning processes, the trainees/students tie in with their own
experiences or those of their fellow learners. It is, therefore, important to understand
that in vocational education and training, “group work” for teachers and trainees/
students is not a question of changing the social form, as is often the case in
textbooks.

If cooperation in groups does not result from the learning task/learning situation, or does not at least appear to be advantageous, then the decisive basis for group work or group learning is missing.

It is not unusual for trainees/students to exclaim “not group work again!” when
teachers prescribe group work according to the principle of method change in order
to practice the ability to work together. If cooperation in a working or learning group
is also to be lived and experienced subjectively as meaningful, then a common cause
for working and learning is the prerequisite. If learners are aware of this and have
adopted the corresponding learning task as their own, then it is also a question of
how the learning process can be shaped together.

Group work as “cooperation because of a common cause”—for example, in the implementation of a project—results from the content and complexity of the projects.
It is not uncommon for a learning task to suggest that, after joint planning, it should be divided into sub-tasks so that the partial results can later be brought together and the final result evaluated according to jointly definable criteria.
With this form of cooperation, the insight grows that professional tasks can be
solved better in a team.
Someone who carries out a subtask at a single place and knows how to
contribute to the success of an overall project also proves to be capable of
cooperation (cf. Rauner & Piening, 2014, 34).

This is where the teacher comes in, who can draw on the results of learning
research regarding the organisation and design of group work. In in-company
vocational training, trainees are assigned different functions—consciously or sub-
consciously—namely, that of
• Minions
• Spectators, observers
• Worker’s assistants
• Employees or colleagues
These functions can ultimately result in stable roles with a lasting impact on the
success or failure of training.

Trainees who remain in the role of the assistant for too long and become
accustomed to someone always telling them what to do and how to do it run
the risk of not achieving the objective of vocational training “professional
action competence”.

Very similar traps lurk in the implementation of learning situations and projects at
school. Teachers and trainers, therefore, have the important task of making trainees
aware of these traps.

Cooperative learning
Especially in the practical manual on “Cooperative Learning” by Brüning
and Saum (2006), the methods of group work are presented vividly and in
detail on the basis of extensive international experience and research. Some
central elements of cooperative learning should, therefore, be pointed out here.
The most important principle in advance: “Individual work is a core element of
cooperative learning” (Fig. 11.12).

Fig. 11.12 Principles of cooperative learning according to Brüning and Saum (2006, 15)

The three basic principles of cooperative learning are the sequence of:
• Think: In this phase, all students work alone.
• Pair: Results are compared, deviating results are discussed, etc. in pairs or small
groups.
• Share: The group results are presented, discussed, improved, corrected, etc. in
class.

11.2.5 Dealing with Heterogeneity

In no other part of our educational system are teachers confronted with such a pronounced heterogeneity of learning groups. The formation of working groups is, therefore, of particular importance (Table 11.4).
Experience with the strengths and weaknesses of working with heterogeneous
groups clearly shows that, contrary to the opinions of the mainstream, these groups
have an extraordinarily high learning potential. In fact, the heterogeneity of the
learning group offers completely new opportunities, which can be exploited through
appropriate teaching arrangements (learning arrangements). However, they also
require a corresponding methodology on the part of the learning process facilitator
(ibid., 209).

Table 11.4 Comparing the advantages and disadvantages of homogeneous and heterogeneous
learning groups (Bauer, 2006, 208)

Advantages of homogeneous learning groups:
• Tendency to favour high-performing learners.
• Classroom-style teaching “sufficient”, therefore, less pedagogical effort.
• Reduced complexity.
• Teachers feel less overwhelmed.
• High resistance to pedagogical errors.

Disadvantages of homogeneous learning groups:
• Real existing differences threaten to be ignored.
• Learners with learning disabilities miss out.
• Pupils with high social status are favoured/supported more strongly.
• Development inequality is promoted and consolidated.
• Sustainable Pygmalion effects, early fixation on a certain performance level.
• Fear of loss of control.
• Teacher-centred.

Advantages of heterogeneous learning groups:
• Favouring learners with learning disabilities.
• More equal opportunities.
• Better support of individual personality development.
• Familiarisation with different perspectives and life plans.
• Confrontation with other perspectives.
• Promotion of social learning, formation of social competences.
• Reflection on own positions.
• Improved preparation for modern social challenges.
• Basis for the use of versatile methods.
• Learning-centred.

Disadvantages of heterogeneous learning groups:
• Not to be mastered by classroom-style teaching.
• Higher pedagogical effort.
• Less resistance to pedagogical errors.

Important steps and rules for learning/working in groups:
• It must first be clarified “in plenary” whether a learning project is to be
worked on in groups or whether phases of individual and group work
should alternate. If the (sub-) tasks are not specified by a project or the
teacher, the group must agree on the task. It is important that the respective
task is accurately understood by everyone.
• Then—in the second step—in a period of time to be agreed upon, everyone
thinks about the possible solutions on their own and outlines their proposed
solutions.
• The individual results are shared in the third step.

11.2 Designing Vocational Education Processes in Vocational Schools 461

This sharing phase requires the establishment of rules. “If no rules are
introduced and the discussion and evaluation process remains unregulated, then
the speaking-time slots allotted to the contributions are often distributed very
unequally among the members of the group. Some tend to be more reluctant to
contribute to discussions, others “talk faster than they think”. Therefore, rules
that allow for a balanced exchange are important. The eloquent group members
learn to listen and to take a back seat and the reserved ones are given the
(learning) chance to argue, discuss and present” (Rauner & Piening, 2014, 43).

Good experiences have been made with a group-work method tested in Canada: the
so-called talking chips (Johnson & Johnson, 1999, 69). Each student receives an
equal number of talking chips. In the exchange phase, the rule applies that members
of the group may only speak if they hand in one of their chips. If a group member's
supply of chips is used up, he or she can only participate again in a new round of
talks (with new chips).

Talking chips ensure “that the speaking-time slots of group members balance
themselves, while they also have an educative effect, as both the reserved and
the eloquent pupils very quickly become aware of their speaking behaviour”
(Brüning & Saum, 2006, 34).
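The talking-chips rule can be sketched as a small simulation (a minimal illustration only; the class name, member names and chip counts are chosen for demonstration and are not part of the method's definition):

```python
class TalkingChipsRound:
    """Minimal sketch of the talking-chips rule: each member starts a
    round with the same number of chips and hands one in per contribution;
    with no chips left, they must wait for the next round."""

    def __init__(self, members, chips_per_member=3):
        self.chips = {m: chips_per_member for m in members}

    def may_speak(self, member):
        # a member may only contribute while they still hold chips
        return self.chips[member] > 0

    def speak(self, member):
        if not self.may_speak(member):
            raise ValueError(f"{member} must wait for a new round")
        self.chips[member] -= 1  # hand in one chip per contribution

    def new_round(self, chips_per_member=3):
        # everyone starts the next round with a fresh, equal supply
        self.chips = {m: chips_per_member for m in self.chips}


talk = TalkingChipsRound(["Ana", "Ben", "Cem"], chips_per_member=2)
talk.speak("Ana")
talk.speak("Ana")
print(talk.may_speak("Ana"))  # False: Ana must listen until the next round
print(talk.may_speak("Ben"))  # True
```

The point of the sketch is the balancing mechanism itself: speaking time is rationed equally, so the eloquent members run out of chips and the reserved ones retain the opportunity to contribute.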

11.2.6 Step 5: Evaluating the Task Solution (Self-Assessment)

Finding a solution to the problem covers only half the distance in processing a
learning task. Now it is important to evaluate the quality of the task solution. It
generally turns out that individual trainees/students or working groups have
developed different solutions. The competence profiles (Fig. 11.13) show where the
differences lie. This is where the teacher comes into play: he or she can show how
the individual solutions exploit the solution space given for the learning task.
The results obtained in different ways are to be evaluated and assessed by the
learners with the help of the agreed evaluation criteria. The primary focus here is on
the utility value of the results for the customer: it is about developing a professional
work concept.
Assessment forms have proven useful as a tool for self-assessment in teaching
practice (Table 11.5). If the tasks are not solved to the satisfaction of the learners
themselves (not OK), the task-solving procedure must be reconsidered. The assess-
ments then lead to corrections or additions within the planned and implemented

Fig. 11.13 Different exploitation of the solution space (Final Report 2010, Hesse)

work and learning processes, if necessary. The trainees/students also decide to what
extent their results meet the requirements of a complete task solution. Only when the
tasks have been satisfactorily solved (OK) does it make sense to reflect on the work
and learning processes carried out in context.

Both the learners (self-assessment) and the teachers (external assessment) can
use the assessment form specially developed in the COMET project for use in
teaching as a tool for evaluating and assessing the task solution.

Only documented facts can be evaluated (don’t “read between the lines”)!
The assessment form can be modified for use in a comparison of self-assessment
and external assessment. The assessment results can be transferred into a network
diagram for illustrative presentation in the plenum. The result, which is judged to be
sustainable by the learners, is then prepared for presentation in the plenum.

11.2.7 Step 6: Reflection on Work and Learning Processes

In vocational education and training, learning requires the reflection and
systematisation of work and learning experiences. For this purpose, the procedure
for processing the task is to be reproduced by the trainees/students in thought, in
order to visualise the experiences made in the process. In particular, the experience
gained with the approaches that have led to viable and complete solutions to the new
challenges is important. These reflected experiences of action generate the knowl-
edge of the work process that justifies competence development. The following
questions should help to consider different perspectives in the reflection process.

Table 11.5 Assessment form for use in class (e.g. electronics technician) (Katzenmeyer et al., 2009, 202)

Each criterion/indicator is rated on a four-level scale (fully met / partly met / not met / in no way met), with space for comments.

CLARITY
1 Presentation appropriate for client? For example: description, operating instructions, cost plan, component list
2 Representation appropriate for specialists? For example: circuit diagrams, installation diagrams, terminal diagram, cable diagram, programme printout with comments
3 Solution illustrated? For example: technology scheme, site plan, sketches
4 Structured and clear? For example: cover page, table of contents, page numbers, company contact info, customer contact info

FUNCTIONALITY
5 Functional capability? For example: dimensioning/calculation o.k., fuse protection, necessary interlocks, limit switch
6 Practical feasibility considered? For example: electrical and mechanical design possible?
7 Are presentations and explanations correct and state of the art considered?
8 Solution complete? For example: are all required and necessary functions in place?

UTILITY VALUE
9 Utility value for client? Are useful and helpful functions considered? Possible automatic error detection, interventions and changes
10 User friendliness? Operability, operator guidance, clarity, alarm and operating displays
11 Low susceptibility to faults considered? For example: preventive error information, redundancy, partial running capacity, are material properties optimal for the application?
12 Longer-term usability and expansion options considered?

ECONOMIC EFFICIENCY
13 Actual costs economical? For example: time and personnel resources, material use
14 Follow-up costs considered? For example: electricity costs, maintenance costs, downtime costs in the event of control failure
15 Operational and economic aspects considered? For example: downtime costs in the event of component failure weighed against production costs?

WORK AND BUSINESS PROCESS
16 Process organisation in own company and at the customer's? For example: time and resource planning, general conditions for installation work clarified?
17 Work-process knowledge (work experience)? For example: does the solution have a structure allowing the identification of the workflow? Are upstream and downstream processes taken into account?
18 Limits of own professional work exceeded? For example: structural changes, orders for other trades, foundation for switch cabinet, scaffolding for sensor assembly planned

SOCIAL COMPATIBILITY
19 Humane work and organisational design? For example: ergonomics, serviceability
20 Health protection considered? For example: toxic fumes, radiation, noise, injury hazards detected and prevented; actions possible and explained in an emergency? Risk analysis performed for assembly, operation, service, malfunction and disassembly?
21 Occupational safety and accident prevention considered? For example: working on ladders and scaffolding, PPE, instruction of third-party companies, hazard warnings, hazardous material labels

ENVIRONMENTAL COMPATIBILITY
22 Recycling, reusability, sustainability? For example: RoHS-compliant material, PVC-free material, prevention, reduction and recycling of waste
23 Energy saving and energy efficiency? For example: energy-saving lamps, EFF class for motors, minimising standby losses, displays with LEDs instead of lamps

CREATIVITY
24 Does the solution show problem sensitivity? For example: customer request fully recorded and implemented?
25 Is the scope for design exploited? For example: useful additional functions planned?
26 Business and economic aspects considered? For example: downtime costs in the event of component failure weighed against production costs?

Questions on the Concretisation of the Reflection Process


• How did I proceed in processing the task (what was my first, second, etc.
step)?
• What were the most important tools for solving the tasks?
• What was I able to solve easily based on my previously gained knowledge?
• What did I consider specifically for processing this task?
• How exactly did I proceed within the respective steps?
• Why did I proceed like this and what are the reasons for my decisions?
• Which methods did I apply?
• Which “hurdles” did the task contain and what new things did I have to
learn?
• Which new learning methods did I have to learn to fully complete the task?
• How do the estimated challenge and the actual gain in learning match?
• How did I organise my/our work and what will I do differently next time?
• At which “hurdles” did I take advantage of the teacher’s support?
Note: Depending on how task processing is organised, the questions must
be answered individually and/or in the group.

11.2.8 Step 7: Presenting and Evaluating the Task Solution, Work and Learning Process as Well as Learning Outcomes (External Evaluation)

The presentation and evaluation of task solutions, work and learning processes as
well as learning outcomes are of high didactic value. Among other things, this helps
to
• Clarify the satisfaction/dissatisfaction of the “customer” with the offered task
solution
• Assess the ability of the expert listeners (co-learners, teachers, trainers and
similar) to solve the problem
• Clarify questions as to whether the procedure (including the methods used) has
proven itself, where there have been problems and how it can be optimised for use
in the next learning situations/projects
• Evaluate explanations, justifications and considerations of alternative solutions
and procedures according to whether they are professional and conclusive
• Exchange experiences with different learning methods and work-organisation
structures
• Describe and evaluate gains in knowledge, new experience and new abilities, and
consider the question: “What else did you learn?” In this case, it all depends on
• The methodical procedure
• The capacity for cooperation
• The reflection on conflict settlement
It is also an opportunity to reflect on the importance of vocational learning for the
non-working world.
The evaluation is based on the evaluation criteria defined at the beginning (refer
to Table 2.1, p. 568). It may also turn out that the evaluation concept contains
weaknesses that can be avoided in the next learning situation/project.
In this tuition phase, teachers must decide on the role they want to play.
• Do they leave the assessment of the work results to the trainees/students?
• Do they assume a governing role?
• Do they evaluate the work results themselves?
• In the case of group work, it also makes sense for the working groups to evaluate
their documentation mutually.
When working on tasks in small groups, all group members should be involved in
presentation and reporting. A prerequisite is that a group member takes over the
moderation of the presentation and that the roles in the presentation are precisely
agreed upon. This also includes the form of presentation. “Ad hoc” reports and
presentations should be avoided as they tend to discredit project learning. When
presenting learning and work results, students with weaker language skills should be
given the opportunity to present and demonstrate their results in a practical
manner.

The documentation and presentation of project results should meet high formal
standards. They should be presentable to “customers”. The more successful
this is, the more likely the participants are to identify with their learning
outcomes. This strengthens the self-confidence and motivation of the learners.
In the case of outstanding projects, the public exhibition of project results at
school or in public is also an option. The experience that pupils proudly show
presentable learning and work results to family members and friends is an
indication that the form of documentation and presentation (in addition to the
learning and work results themselves) contributes considerably to the devel-
opment of professional identity and, therefore, also to the strengthening of
self-esteem. This technical aspect of task/project learning is, therefore, of
considerable social-pedagogical importance.

The preceding remarks again refer to the requests and requirements of the customer,
which are expressed here in the satisfaction or dissatisfaction with the presented
task solution. This again points to the high formal standards that the presentation
has to meet. At this point, it is only about the result—the objective dimension of
the learning process and its evaluation (product-related presentation). The other
learners and the teacher can take on the role of “customer” in this phase of the
presentation and give feedback from this role to the presenter(s).
In the other phases of the presentation, the other learners and the teachers are
addressed as “experts”. This deals with the
• Completeness of the task solution
• Technically sound justifications (knowledge-explaining action) and conclusive
balancing between different solution variants (knowledge-reflecting action)
• Working and learning concepts and concepts for cooperation and reflection on the
experience gained in their use
• Unresolved questions or questions that arose during the presentation, and finally
the question: What did I/we learn?
The second phase of the presentation (process-related presentation) deals with
learning and the competences acquired—the subjective dimension of the learning
process.
The co-learners and the teacher, therefore, take on the role of the teacher. In terms
of content, they refer to knowledge of the individual work process and the previously
agreed evaluation criteria (refer to Table 11.3, p. 520). The teacher also has their own
described solution space in mind.
The question “What have I/we learnt?” has a special meaning, because the students’
understanding of “learning” is shaped by general schooling.

11.2.9 Step 8: Systematising and Generalising Learning Outcomes

While the presentation of the work and learning outcomes reveals the individual
work-process knowledge of the trainee/student in relation to the current learning
task, this phase of tuition is about the specific task of the vocational school: to
generalise this knowledge, which can be traced back to the reflection of the
experiences made while working on the learning task.

“In the process of dual vocational training, the incremental generalisation of
professional experience and knowledge ultimately leads to concepts and
theories that are available to the individual as generalised ‘tools of thinking’
as well as of communicating and reflecting and at the same time refer to the
real context from which they emerged” (Rauner, 2007, 244).

Generalisation is about uncoupling the work experience gained from the concrete
learning task and the task solution achieved in order to make it available for
subsequent customer orders. It is now up to the teachers to ensure that their pupils
become aware of their broader understanding of the subject and are able to use it in
their thinking, acting and skills in a professional and appropriate manner.

Practical experience with numerous learning groups shows that when the
generalisation described above does not take place, the use of the developed
task-solving approaches remains limited to the learning task for which they were
developed. As a result, subsequent customer orders are often not considered in the
light of previous experience but are treated as completely new challenges.

The experience that familiar technical terms and already available action concepts
gain extended significance, and that connections between initially independent
concepts become conscious, characterises the profession-related extension of the
semantic fields of action-relevant concepts. In their sum and combination, these
constitute work-process knowledge (Lehberger, 2013) and the technical language that
develops on this basis.
For example, a nurse at the beginning of his/her training expands his/her prior
understanding of how to put on a bandage with ever new aspects of meaning in
dealing with the diversity of bandages in equally diverse and always different
individual cases. “Bandaging” as a semantic field quickly develops into a compre-
hensive and professional concept of acting, thinking and ability.
The rudimentary prior understanding of a tool mechanic apprentice of the surface
quality of tools—and how to achieve this quality—is expanded by the alternation of
reflected work experience and the expansion of the semantic field of the concept
“high-quality surfaces of tools” to a cognitive disposition of professional action
and shaping competence.

Here is a detailed example:


For most learners, teamwork means sitting in a group with other learners at the beginning—and
often also towards the end of their training—and somehow working together on a task. This
contradicts the principle: “You do not acquire the ability to work in a team simply by [learners]
working in a team as often as possible”. For learners to develop a viable concept for working and
learning in a team, it is always a question of expanding and changing the semantic field that
characterises the professional concept of teamwork. Important aspects of meaning can be identi-
fied:
• Team-specific social competences (these include, among other things, communicating and
actively listening in friendly and respectful manner)
• Teamwork (in the sense of a goal-oriented and method-oriented approach, the quality of which
must be evaluated)
• Methods of cooperative work and learning (the phases of think-pair-share must be implemented
methodically)
• Solving tasks in a team, where the aim is to perform the tasks necessary for successful
teamwork, for example, moderator, process observer, timekeeper, minute-taker
• The formation of groups, whereby a team composition capable of work and learning must be
ensured (that is, a team needs members who come up with ideas, those who pay attention to
quality, those who think strategically, etc.)
• Team development, that is, an appropriate realisation of the phases of team development
[forming, storming, norming and performing].

Since, depending on the situation and task, only certain subjects of work and only
certain of their aspects are taken into consideration, the aim of this phase of
tuition is to convey that the development of work-process knowledge is a process of
subjective development of vocational concepts with their semantic fields, which the
learners have to place within their work-process knowledge. The dimensions of
working and learning refer to a possibility of systematisation oriented towards the
vocational work process.
In this phase of tuition, the teacher must steer the learning process so that the
learners feel challenged to realise the processes of generalisation and systematisation
in such a way that their individual professional concepts are further developed. This
also applies to processes of social learning within the framework of teamwork
(Fig. 11.14).

Fig. 11.14 Semantic fields of team work

11.3 COMET as a Didactic Concept in Nursing Training at Higher Technical Colleges in Switzerland: Examples of Teaching and Examinations

Karin Gäumann-Felix and Daniel Hofer

11.3.1 The Higher Vocational Nursing Schools in Switzerland

In the Swiss educational landscape, there are currently two options for training in the
professional care and assistance for people. On the one hand, studies can take place
at a higher technical college (HF). In addition to a successful entrance examination,
the admission requirements are a vocational or school-leaving certificate at second-
ary level II1. The second option is to study at a university of applied sciences (FH).
The admission requirement is a Matura degree (university entrance qualification) or
completed HF training. Switzerland, therefore, has two equivalent variants of higher
(tertiary) continuing vocational education and training for nurses, one more practice-
oriented and one more academically oriented.

1 After compulsory schooling (9 years), young people enter upper secondary education. Secondary level II can be subdivided into general education (grammar schools and technical secondary schools) and vocational training (learning a trade in a training company with supplementary schooling).

The training of qualified HF nurses (Höhere Fachschule) is a joint task of the
federal government, the cantons and OdA (organisations in the world of work). The
three partners in the network jointly assume responsibility for vocational training
and the quality of training. The higher vocational schools have an educational
concept that leads directly to employability. In contrast to “academic” courses of
study, this makes it unnecessary for students to gain practical experience in their
profession after completing their studies.
With its genuine duality, nursing training requires didactics that are based on
authentic professional situations and convey the competence to master them
professionally.
The study to become a qualified nurse takes 3 years. With a relevant apprentice-
ship as a health specialist (FaGe), training can be shortened by 1800 learning hours.
The learning hours are divided into three learning areas: Practice, School and
Training & Transfer (LTT). A nationwide curriculum framework2 regulates the
temporal proportions of the learning areas. Half of the training takes place in school
and half in professional practice. Both learning venues each contribute approx. 10%
of their time resources to the design and organisation of the third learning area LTT.
The practical assignments take place in various fields of health care3. The school
lessons impart the necessary breadth of knowledge of the work process. At the third
learning location (LTT), authentic professional tasks in their respective complexity
and considering the diverging requirements are processed and reflected in a
protected framework. LTT offers both the possibility of recording current cases
relevant to training independent of the curriculum and of taking individual needs on
the part of learners into account. Following Dehnbostel, we distinguish three types of
work-related learning (Dehnbostel, 2005): Learning in the work process (practical
learning area), acquiring knowledge of the work process on the basis of reflected
work experience and the relevant specialist knowledge (scholastic learning area) as
well as practical and practice-related learning outside the workplace context (LTT)
in specialist practice rooms and school. The concrete descriptions of the compe-
tences4 to be achieved and our portfolio system, which is platform-based in all three
learning areas, serve as a connecting element. In 2012, this concept was expanded by
the COMET competence model.
The examples and experiences documented below show the successful imple-
mentation of the COMET Competence Model as a didactic concept in the training of
qualified nursing staff at the Bildungszentrum Gesundheit und Soziales in Olten5.

2 http://www.sbfi.admin.ch/bvz/hbb/index.html?lang=de&detail=1&typ=RLP&item=17
3 Practical training covers six fields of work in the care and support of: (1) people with long-term illnesses/(2) children, adolescents, families and women/(3) mentally ill people/(4) people in rehabilitation/(5) somatically ill people/(6) people at home
4 https://www.bz-gs.ch/bildungszentrum/lernen-am-bz-gs-1/konkrete-kompetenzen
5 The Bildungszentrum Gesundheit und Soziales [health and social education centre] (BZ-GS) is a part of the Berufsbildungszentrum Olten (BBZO). The BBZO is a regional vocational training centre with over 4200 apprentices and students in 28 professions. Refer to http://www.bbzolten.so.ch/startseite/ and http://www.bbzolten.so.ch/bz-gs-olten/

11.3.2 COMET in the Context of the BZ-GS [Health and Social Education Centre]

In 2012, the BZ-GS, together with five other Swiss educational centres6 in the health
and social sectors, launched the first Swiss COMET project under the title “Survey-
ing and imparting vocational competence, professional identity and professional
commitment in the training occupations of nursing in Switzerland” (cf. Gäumann-
Felix & Hofer, 2015).

Example Lesson: Pain and Temperature Regulation

This example from the first year of an HF training course in somatics shows a rather
simple case description to introduce the students to the COMET method and to give
them an understanding of the criteria and items of the competence dimensions. After
a brief introduction to the understanding of the eight competence criteria
(sub-competences), the students received the case description from
Ms. G. (Table 2.2). In a first step, they analysed the case study in working groups
with the aid of instructions in which the competence criteria and their breakdown
into five items are presented. They tried to assign the information in the case
description to the competence criteria and to discuss the first possible interventions
(Table 11.6).
After initial processing of the case in groups, the results were presented and
discussed in the plenum. The following questions were examined:
• What are the family, social and cultural factors influencing Ms. G.?
• What influence do they have on her current state of health?
• What would be interventions that support sustainable exit planning?
• What other services should be included?
• How does the student manage the situation, as human resources are scarce due to
absenteeism?
• How does the student set priorities? What kind of interventions are necessary?
During this expert discussion, the COMET competence criteria were gradually
filled with content. The holistic view of the patient’s situation became increasingly
clear and this triggered important “Eureka!” moments among the students.
Due to the various starting points and possibilities provided by the situation
description, it was also possible to consider the heterogeneous previous education
of the students in the educational programme. All were able to build on their current
state of knowledge and experience and reflect on their individual competence
development.

6
In addition to Solothurn, the Cantons of Aargau, Basel, Berne, Lucerne/Central Switzerland and
Zurich.

Table 11.6 Ms. Graziano as a case study


You have the early shift on a surgical ward when you are notified of an emergency admission.
That’s all you need... It’s all very hectic already, because a senior nurse is sick. There are three of
you: the group leader, one FaGe* student and you.
Reason for admission: Severe stomach pains. As you are assigned to admit Ms. Graziano, you
take her over from the emergency nurse, who tells you that Ms. Graziano has been suffering from
upper abdominal pain for two days, which is getting worse all the time. In addition, a stubborn
cough has been plaguing her for days. With this little information, she leaves the ward again.
When you turn to Ms. Graziano, she lies crying in bed with her legs pulled up. You greet her and
measure her vital signs (BP 125/90, P 80, T 37.5 °C). You ask her how she is, whereupon she tells
you in broken German about her last 2 days: Her husband is currently abroad for a week for work,
she misses him very much, feels very alone. When the pain started, she wouldn’t tell him on the
phone so he wouldn’t worry unnecessarily. So he doesn’t know she’s in hospital yet either. She
tells us that until 5 years ago they lived in her native country of Italy, where her husband also came
from. After their wedding, they moved to Switzerland for professional reasons, but he is always
abroad on business. She has hardly slept the last two nights because of the severe pain and only
managed to come to hospital with difficulty because she hates hospitals. Ms. G. is very afraid of a
malignant disease, because her mother died of a carcinoma many years ago, and according to
Ms. G., had suffered excruciating pain in the end. In the following conversation, you will learn
that she is “experienced in pain”. She has been suffering from severe backache due to a herniated
disc for 3 years. She has so far resisted surgery—the focus has always been on pain therapy. For
several months, she has been taking 4 × 1 g Dafalgan tablets and 4 × 20 drops of Tramal for
backache. During the conversation, she suddenly begins to shiver and you notice that she has
chills. You tell her that she really must be in a lot of pain and offer to get her a painkiller.
* FaGe = Health Specialist

In the setting described, the competence criteria and items were used as work aids
with the help of a simplified representation (Fig. 11.15). The learning outcome was
discussed together. As the aim was to enable an initial examination of the COMET
competence model, no written evaluation was carried out. It turned out that all
students benefited from this setting several times. On the one hand, they learnt to
view a situation holistically with the help of the competence criteria and to recognise
“blind spots” in their knowledge and skills. On the other hand, they were able to
build on their current state of knowledge and experience and identify topics for
further in-depth study in subsequent lessons.
This form of tuition has meanwhile been tested in a variety of ways. For example,
in an open-book test7, like the test tasks in the COMET project, a situation was
processed and assessed with slightly adapted rating items.

7 Open-book examinations allow students to use all available documents during the examination. They have free access to their own documents and books. They have free Internet access with their laptops and, therefore, also access to the online library and other resources. The only thing forbidden is mutual exchange among the students.

Fig. 11.15 Simplified presentation of criteria and items (this representation of the requirements
dimension of the COMET competence model, adapted to nursing training, was simplified
linguistically and graphically for the students; cf. Fischer, Hauschildt, Heinemann, & Schumacher, 2015)

11.3.3 Example Lesson: Nursing Relatives and Palliative Care

This example is set in the third academic year, 6 months before the diploma
examinations. During a whole week, the students dealt with the topics: Nursing
relatives, Caring and Palliative Care. The initial situations were 20-minute encounter
sequences, which the students carried out with simulation patients.8 In a fully
equipped (teaching) hospital room, an initial contact was simulated with a bedridden
cancer patient and his wife, who had taken care of him at home until the current
emergency occurred. The students were confronted with this simulation without long
preparation time. Table 11.7 shows which preliminary information the students
received shortly before their assignment. As there was little preparation time avail-
able, they had to rely on their previous knowledge and experience. The 20-min
simulation sequence was recorded on video and then handed over to the students.
Each student received a video documentation of their own sequence on a USB stick

8 At the BZ-GS we use amateur actors to simulate (“play”) certain given roles. Depending on the
lesson, they receive a more or less prescribed script. Within this teaching setting, they orient
themselves to a basic starting position, supplemented with possible questions and topics.
11.3 COMET as a Didactic Concept in Nursing Training at Higher Technical. . . 475

for further processing and reflection during the week of the event. The questions
described in the initial situation for the simulation patients already show how the
sequences were designed. The following teaching days were finally oriented towards
the eight competence criteria of the COMET competence model. The simulated case
study, which the students experienced very realistically, was viewed, analysed and
discussed from different perspectives. The lack of specialist knowledge was also
dealt with and the students were able to reflect on their own recorded situation. At the
end of the week, a similar sequence was played and again recorded on video. The
students were impressed by the increase in their competence in “Nursing caring
relatives and Palliative Care” during this week: how they learnt to analyse the
simulated cases in all their complexity and to derive conclusions for their caring
actions from their analyses.
Conclusion After this week, the students reported a significant amount of learning
progress. It was precisely because this teaching took place at a time shortly before
the diploma examinations that it was important for the students to be able to assess
their own level of knowledge and experience. Because the settings with the simulation
patients were always experienced as very real, and because of the video recordings, the
identification of competences that still needed to be acquired was multi-layered. With the help
of video recordings, they were able to reflect on their appearance and behaviour
(appearance, interaction with patient and wife, reproduction of information, com-
munication, technical language, facial expressions, gestures, etc.). They were always
confronted with their expertise (Which questions was I able to answer? Where did I
lack expertise?). In addition, the targeted holistic approach based on the eight
COMET criteria drew their attention to other problems that they would otherwise
not have “discovered”. Through this continuous process of reflection throughout the
week, a variety of interrelated topics could be explored in depth.
The differences between the video recordings at the beginning and end of the
week were impressive. As central to their learning process, the students stressed that
they did not work on unfamiliar learning examples, but that examples they had experienced
themselves formed the basis for the teaching week (Tables 11.7 and 11.8).

Table 11.7 Initial situation for students


Initial situations for students
• You work in the medical-oncological department.
• You appear at 07:00 for the early shift.
• The night shift reports on Mr. X*, who was admitted 2 h ago, accompanied by his wife.
• Diagnosis: Metastatic bronchial carcinoma, known for 1 year. Mr. X has undergone chemo-
therapy, which he discontinued a month ago due to his poor general condition and the severe side
effects.
• Mrs. X takes care of Mr. X at home; they live in a detached house.
• Mr. X was admitted because his condition was steadily worsening, with weakness and shortness
of breath becoming a problem. Breathlessness can hardly be alleviated with the portable oxygen
device at home, weakness hardly permits mobility.
• This is all the information you have for the moment. You are responsible for Mr. X today and
now go to his room.
* may also be Mrs. X

Table 11.8 Initial situation for simulation patients


Initial situation for simulation patients
Initial situation (set default)
• It is 07:00 in the morning.
• Mr. X came in two hours ago, accompanied by his wife.
• Diagnosis: Metastatic bronchial carcinoma, known for one year. Mr. X has undergone
chemotherapy, which he discontinued a month ago due to his poor general condition and the
severe side effects.
• Mrs. X takes care of Mr. X at home; they live in a detached house.
• Mr. X was admitted because his condition was steadily worsening, with weakness and shortness
of breath becoming a problem. Breathlessness can hardly be alleviated with the portable oxygen
device at home, weakness hardly permits mobility. Due to the increasing bed rest there is an acute
danger of decubitus (bed sores). Mr. X already has a red right heel that hurts him. (Note on
shortness of breath: The shortness of breath should not be the central focus, because the sequence is
mainly about communication. Mr. X can cough and have some breathing problems, but not for the
whole 20 minutes, because otherwise too much weight falls on the shortness of breath and the
conversation can stagnate.)
• Mr. and Mrs. X have seen the nurses of the emergency ward (admission at night) as well as the
nurse who has the night shift on the medical-oncological ward. She briefly showed them the room
and gave Mr. X oxygen by nasal probe. Then, she referred to the nurse of the day shift who will
take care of the further admission formalities and give the couple further information.
Freely designable elements (discussed by each couple X with each other in advance)
• Description of living conditions (wheelchair accessible? at ground level? stairs?)
• Description of other family members (who? relationship? Relationship with each other? Good?
Tense? Quarrels? Other associates— not related, e.g. friends, neighbours, work colleagues? ...)
• Who supported care at home? (family members? Spitex? meal service? etc.)
• How has dying and death (and burial, if any) been addressed so far? (do the spouses talk about
it? Is “foreseeable death” accepted? Do they rage against death? What else do you want to do from
a medical perspective?)
• Are you considering moving to another institution? (e.g. hospice?)
Mandatory questions/topics for the nurse (topics should be addressed in some form if possible,
but there is also room for manoeuvre here)
• Mrs. X wants to take her husband back home because she promised him that he could die in
peace at home.
Or:
• Mr. X absolutely wants to die at home, but Mrs. X is currently overwhelmed by the idea.
Or:
• Mrs. X is visibly disappointed with herself that she was no longer able to look after her husband
at home and struggles a lot with the fact that he now must be in hospital “because of her”.
Accordingly, she is very critical of all nursing activities and often “complains”.
• Mr. and Mrs. X ask about support possibilities at home (“technical” support such as a nursing
bed, but also “spiritual” support such as a self-help group, pastoral care, opportunities for
conversation, etc.).
• Mr. and Mrs. X have heard that palliative care is offered here on the ward. However, they do not
know what this term means and ask the nurse.
• Questions about dying, death, grief, burial, ...
Possible questions to the nurse (can be included depending on the situation)
• Possible questions regarding assisted dying.
• Possible questions about problems within the family (disagreement about how to proceed, some
find that chemotherapy should definitely be continued, others confirm that Mr. X stopped
chemotherapy, etc.).
• There may be a family member with whom the spouses are in dispute, and the spouses X do not
want this person to be given information about the state of health.
• Various possible questions/anxieties about dying, dying phases, pain, shortness of breath,
nutrition, infusions, life-extending measures, legal issues, death, grief, .....
• And much more ...

11.3.4 Example Lesson: CPR—Cardiopulmonary Resuscitation

Resuscitation courses are regular units during the 3 years of study. As a rule, the
resources required for basic life support (BLS) and advanced life support (ALS) are
offered together with other topics over 2 days. The instructions for the resuscitation
measures are primarily characterised by flow charts and algorithms. This suggests
that there can be no doubt about what is right and what is wrong. However, the
dimension of ethical decision-making already makes clear that resuscitation, too, calls for
standard-oriented yet clever solutions. It quickly became apparent to us that
the COMET dimensions were also suitable structuring aids in this area.
Here is an example of one of three BLS/ALS units in the second academic year:
The students were provided with a document with two pictures. The pictures are
starting points for teaching about shock management, cardiac arrhythmias, cardiac
output and resuscitation measures. The students were given the task of forming
groups of a maximum of four people and then describing a realistic story, a situation
they had experienced, which matched the pictures. We hoped, among other things,
that the narratives would explicate “implicit” knowledge. The students also had the
opportunity to use their portfolios or the patient documentation tool to draw on a
concrete situation that they had already described. Then the students had to choose
one of the stories to work on. The questions shown in Table 11.9, most of which were
based on the COMET dimensions and items, were to be dealt with.
After the discussion within the groups, the examples, answers and findings were
discussed and further deepened. Very soon, it became clear that the, at first glance, rather
“technical” situation of a resuscitation covers all dimensions in a broad and complex
manner. Various questions were, therefore, discussed and assigned to the dimensions.
As an example, it became clear how important it is within the dimension of
work-process orientation to argue with the involved services (e.g. medical service)
using the correct technical terms.
After this sequence, the students reported great learning success at various levels.

Table 11.9 Questions to be dealt with in relation to the stories


Questions to be dealt with in relation to your story
1. What caused you to favour this story?
2. Problems:
• Presentation of the “main problems” arising from the overall situation of the chosen story.
• Which reasons and reference standards arise for the defined problems?
• What additional significance do the following criteria have in this story: Efficiency, costs/
benefits, personnel resources/skill and grade mix, follow-up costs?
• Which hygienic features must be observed?
• What is important regarding personal health protection?
• Which relevant issues concerning the social/sociocultural level might have an influence on the
situation?
3. Solutions:
• Which solution approaches are suitable to stabilise or improve the overall situation?
• Which work processes are particularly important in this situation?
• What related reference standards do I know?
4. Resources:
• Which knowledge and skills do I need to activate in order to sustainably cope with the situation/
problems?
• What do I know for sure—what do I want to deepen and differentiate?
5. Communication:
• Visualise everything in a suitable form.
• How do I present the situation to the other groups, so that it is easy to understand?
• Note any questions that may have arisen during processing.
The criteria for a complete task solution
Clarity/presentation
• What situations could I be confronted with in my field of work that could lead to resuscitation?
• What infrastructure is available to me?
• What tasks can I assume during resuscitation?
• What tasks would I like to assume during resuscitation?
• Which tasks during resuscitation do I feel unsure about?
Functionality/professional solutions
• How is resuscitation performed?
• What variations are there?
• How can I prevent resuscitation?
• Can I evade/refuse to help during resuscitation?
• When is a person dead?
• Which score systems can I apply?
Sustainability
• When is resuscitation mandatory?
• When is resuscitation not performed?
• How are living wills observed?
• Organ donation and resuscitation. What should I pay special attention to?
Efficiency/cost-effectiveness
• How many people does it take to resuscitate?
• What assistance is available?
• How long does one resuscitate?
Work-process orientation
• Who determines what during resuscitation?
• Which partner services are helpful for resuscitation?
• Who do I involve in which situations?
• Process following resuscitation—from the patient’s perspective?
• Process following resuscitation—from the expert’s perspective?
Social and environmental compatibility
• What standards must I adhere to during resuscitation?
• From which standards can I deviate during resuscitation?
• What protection do I need?
Family/sociocultural context
• In what situations can relatives be present during resuscitation?
• How do I communicate what and when with relatives, the press, etc.?
• Which cultural/ethical/philosophical peculiarities are to be considered?
“DNR” stamp
Creativity
• What creative aspects can be required of me during resuscitation?
• What aesthetic aspects can be important during resuscitation?

11.3.5 Examinations

The examples described above show a wide range of possibilities for integrating
COMET into exam settings. Two variants are explained here as examples, which
demonstrate very well the creative use of the basic COMET principles.

Synthesis Examination as an Example

The first example is an oral synthesis examination in the second year of study. The
basis was the tuition of the entire 12-week school block as well as the knowledge
imparted since the beginning of studies. Based on a real, complex patient situation,9
the students dealt with the COMET competence model during the preparation period
and described their thoughts and assessments with concrete reference to the competence
criteria (see the information on preparation, general conditions and assessment
criteria for the students in Table 11.10). For example, the synthesis expert discussion
focused on work-process orientation, social and environmental compatibility,
sustainability, efficiency/cost-effectiveness and other competence criteria. The discussions
revealed a broad thematic diversity, which was summatively assessed with the help of the items
(Table 11.10).
With this and similar examination settings over the course of the three academic
years, we lead the students towards the examination interviews that take place at the end of
training.

9 The written 3-page patient situation contains information on medical history, diagnosis, admission
reason and situation, procedure, medication, treatment plan, nursing diagnoses and previous course
of hospitalisation. The personal data are anonymised for data protection reasons, but originate from
a real situation, which is why the content is not reproduced here.

Table 11.10 Preparation, general conditions of the synthesis examination


Exam Preparation and General Conditions
• Introduction: As part of an introductory sequence, students receive relevant information about
the exam setting as well as a case study (real anonymous patient example) 2 weeks before the
exam day.
• Preparation: Each student prepares for the interview individually. The focus is on the clinical
picture, medication and care process. Nursing diagnoses are listed in the case study, goals and
interventions of a selected diagnosis are formulated by the students themselves. In addition, all
subject areas previously taught in class may be used.
• Presentation: For this exam, we give the student free rein regarding creativity (PowerPoint
presentations, MindMaps, flip charts, slides, collages, case notes, etc.).
• For assessment criteria, see below.
• The assessment is based on the competence level at the end of the second training year and the
competence criteria and items of the COMET model.
• The expert discussion takes place in groups of two with one teacher and lasts 60 minutes.
Approx. 25 minutes are available per student. The interview is opened with a five-minute
presentation of the patient’s situation in accordance with the preparation assignment—this is then
continued in accordance with the evaluation criteria. The last 10 minutes are available for random
questions, transfer questions, clarifications and additions or discussions.
• Students receive their results on the following day.
• Assessment criteria: see Table 11.11.

Diploma Examination as an Example

The final examination interview is explained here as a second example. For the training
construct, this was certainly one of the most decisive moments for demonstrating
the stringency and, ultimately, the credibility of our competence-oriented training.
The oral interview is one of three elements of the final qualification process10,
lasts 40 min and takes place in the last 12 weeks of the last year of training. The
training companies are also involved in the examination interview and its evaluation
by an expert.
The basis for the interview at our school is a real patient situation from the field of
work of the person to be tested. This creates the framework within which the
candidate can give a broad presentation of his or her planning and reasoning skills.
In this form of final examination, complexity is not trivialised, and knowledge content is
not atomised and broken down into subjects. Because the nature of the final
qualification methods largely determines what those teaching will teach and what those
learning will learn, this is also where the stringency of a genuinely competence-oriented
education becomes apparent in the final analysis.
The dimensions that are evaluated:
• The description of the situation contains the essential information and is presented
systematically.

10 The other two parts are a practice-oriented written diploma or project thesis on the one hand and
the practical training qualification on the other: The final practical assessment is conducted by the
training company in the second half of the last practical training period.

Table 11.11 Assessment criteria for synthesis examinations


Criteria 0 1 2 3 Commentsa
1 Clarity/presentation
• To whom do I have to present the solution?
• Do I use an adapted form of communication?
• How can I present the solution comprehensibly?
• Is the presentation clear and structured?
• Do I use suitable assessment tools, reports, etc. for commu-
nication/presentation?
2 Work-process orientation
• Where and how does the concrete situation influence the
organisation and leadership?
• Which work processes are also affected?
• Which persons, groups and organisations are also affected?
• What competences does this require?
• Is cooperation inter-, intra- or transdisciplinary?
3 Family/sociocultural context
What aspects need to be considered regarding:
• Family context?
• Institutional-social context?
• General conditions?
• The social environment?
• The cultural context?
4 Sustainability
• What does a sustainable solution look like (avoiding a
revolving door effect)?
What do I have to consider regarding:
• Health promotion and prevention?
• Autonomy?
• Social integration?
• What must the basic effect be?
5 Efficiency/cost-effectiveness
• What does an efficient strategy look like?
• What resources are available (time, personnel, financial,
etc.)?
• What level of quality is required?
• Which expenditure must/can be applied?
• What follow-up costs could arise?
• How is society/the system affected?
6 Functionality/professional solutions
• What is the justification for my strategy/solution?
• What are the current technical findings (evidence)?
• Is the ..... feasible (solution/intervention)?
• What are the professional contexts?
• How can I present the .... correctly?
• Are my statements well-founded?
7 Creativity
• What scope for design is offered and how can I exploit it?
• What are the “main problems”?
• Is my solution approach:
• Sensible?
• Broadly based?
• Aesthetic?
• Creative?
8 Social and environmental compatibility
Significance for
• Environment, hygiene and health protection?
• Occupational safety and accident prevention?
• Ergonomics?
• Work and organisational design?
• What consequences do I expect on the social/sociocultural
level?
a Mandatory if rating is 0 or 1 point
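The footnote rule of Table 11.11, namely that a comment is mandatory whenever a criterion is rated 0 or 1 point, can be sketched as a small validation helper. The data shape and the function name below are assumptions for illustration only; they are not part of the COMET instruments:

```python
# Hypothetical validation sketch for the Table 11.11 footnote rule:
# a rating of 0 or 1 point must be accompanied by a written comment.

def missing_comments(rows):
    """rows: list of dicts with keys 'criterion', 'rating' (0-3), 'comment'.
    Returns the criteria whose low rating lacks the mandatory comment."""
    violations = []
    for row in rows:
        if row["rating"] in (0, 1) and not row.get("comment", "").strip():
            violations.append(row["criterion"])
    return violations

ratings = [
    {"criterion": "Creativity", "rating": 1, "comment": ""},
    {"criterion": "Sustainability", "rating": 3, "comment": ""},
    {"criterion": "Clarity/presentation", "rating": 0,
     "comment": "Presentation lacked structure."},
]
print(missing_comments(ratings))  # → ['Creativity']
```

Such a check could support raters in exam settings where low ratings must always be justified in the comments column.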

• Nursing problems, focal points and objectives are identified and justified.
• Natural and social science problems, focal points and objectives are identified and
justified.
• Activating, preventive and/or health-promoting measures are identified and a
position is taken on their possibilities and limits.
• Professional policy problems, focal points and objectives are identified and
justified.
• Management problems, focal points and objectives are identified and justified.
• Ethical aspects are critically reflected.
• Concepts, models and theories are used for analysis, planning and justification,
including evidence.
• Linguistic expression is differentiated, and professional terminology is used
correctly.
If we compare our items with the COMET model’s criteria of the complete
(holistic) solution of professional tasks, we can assign all evaluation groups to the
COMET criteria. We have described the solution space with “fulfilment standards”.
Consequently, we can also use the COMET criteria to evaluate the final
examinations.
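How item ratings feed into scores for the eight competence criteria can be illustrated with a brief scoring sketch. The grouping of five items per criterion and the simple averaging used here are assumptions for demonstration only, not the official COMET evaluation procedure:

```python
# Hypothetical sketch: aggregating 0-3 item ratings into scores for the
# eight COMET competence criteria. Averaging per criterion and across
# criteria is an assumed, simplified evaluation rule for illustration.

CRITERIA = [
    "Clarity/presentation",
    "Functionality/professional solutions",
    "Sustainability",
    "Efficiency/cost-effectiveness",
    "Work-process orientation",
    "Social and environmental compatibility",
    "Family/sociocultural context",
    "Creativity",
]

def criterion_scores(ratings):
    """ratings: dict mapping each criterion to its list of 0-3 item ratings."""
    scores = {}
    for name in CRITERIA:
        items = ratings[name]
        if any(r not in (0, 1, 2, 3) for r in items):
            raise ValueError(f"ratings for {name!r} must lie between 0 and 3")
        scores[name] = sum(items) / len(items)
    return scores

def overall_score(scores):
    # Overall result as the mean of the eight criterion scores.
    return sum(scores.values()) / len(scores)

example = {name: [2, 2, 3, 1, 2] for name in CRITERIA}  # five items each
scores = criterion_scores(example)
print(round(scores["Creativity"], 2), round(overall_score(scores), 2))  # → 2.0 2.0
```

A criterion profile computed this way makes visible where a solution is one-sided, which mirrors the holistic intent of the eight criteria.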

Examinations: Conclusion

The two examples above and our experience with examination settings during the
course of training and in the oral diploma examinations at the end of the three-year
course of study in HF nursing confirm that the COMET competence criteria and the
corresponding items meet the quality criteria of validity, objectivity and reliability to
a high degree. Our experience with the quality criteria is thus clearly in line with the
statements made by Rauner et al. (2015a) in their “Feasibility study on the use of the
COMET test procedure for examinations in vocational education and training”.
Based on our experience, we also clearly agree with the conclusion written by

Rauner et al. (cf. Rauner et al., 2015a, 31–32). The inclusion of the basic idea of the
COMET model confirms our conviction that our examination settings can
comprehensively capture the competences of our students.
It should be mentioned that the teaching team is also convinced that the acqui-
sition of factual knowledge is an indispensable part of competence development.
However, competence-oriented teaching should ideally not ask for factual
knowledge alone in exam questions, since holistic problem solving requires valid,
open exam tasks. Consequently, there cannot simply be a right or a wrong, but rather
acceptable justifications and strategies within a defined solution space. We implement this
continuously and successfully in our understanding and design of examinations.

11.3.6 Suggestion for Lesson Preparation

As can be seen from the examples, the COMET method is not only a scientific
evaluation tool, but also a didactic model for learning what is meant in the didactic
discussion by the term “the whole”. Since the start of the project in 2012
(Gäumann-Felix & Hofer, 2015), the COMET competence model has also found
its way into the didactic concept for lesson planning, as the following example
shows. The “suggestion for lesson preparation” is composed as follows:

• Personal reference
– My reference, my resources and competences in relation to the topic/problem
• Meaningfulness of schooling for learners/students
– What relevance does the topic/problem have in the specific occupational field?
– What relevance does the topic/problem have in general?
– Significance of the topic/problem
In the past
Currently
In the future
From the practical perspective
From the theoretical perspective
• General conditions
– Curricular guidelines and focal areas
– What needs to be tested?
• The situation
– Which current and concrete situation/problem fits?
With regard to the work area
With regard to the learners
• Key points of the situation and the solution space (according to COMET incl. the 40 items).
– With regard to the:
– Clarity/presentation
– Functionality/professional solutions
– Sustainability
– Efficiency/cost-effectiveness
– Work-process orientation
– Social and environmental compatibility
– Family/sociocultural context
– Creativity

• Structure/methodology concerning participants.


– Time budget and priorities
– Appropriate problem confrontation (in addition to the situation)
– Methods (activities)
Documents for the information and data

Portfolios and Patient Documentation Tool

What has not yet been mentioned in the above examples is that the BZ-GS works
intensively with portfolios managed by the students and with the electronic patient
documentation tool. Both are instruments based on real-life situations and patient
examples. Here, too, there are innumerable variants for integrating COMET.
The aim of our patient documentation tool is to provide teachers and students with
an instrument with which real patient situations can be recorded, processed, further
developed and reflected upon.
The patient documentation tool is available at all learning venues (school & LTT
school, practice & LTT practice) and can be used in various fields (at the BZ-GS
Canton of Solothurn, specifically in acute somatics, psychiatry, long-term care and
Spitex).
The password-protected tool facilitates:
• Recording of new cases
• Processing and further developing existing cases
• Reflection on individual steps of the process and making considerations regarding
the individual steps transparent for others
• The design of examination—module degrees as well as the final qualification
procedure
• Feedback—teachers/students or “peer-to-peer”
In this project phase, the COMET competence criteria are integrated into the tool.
This makes it possible to illuminate real or fictitious patient situations with the help
of the corresponding criteria. Here, too, COMET enables quality assurance in the
sense of a holistic approach to patient situations.
Two real extracts in Table 11.12 show how the competence criteria are examined in
the context of the documentation of real practical situations for the final oral expert
discussions (part of the diploma examination).

Table 11.12 Excerpts from described competence criteria


Examples of COMET references in the patient documentation tool
Sustainability dimension: It is highly probable that the incremental structure and exact imple-
mentation of the staged anorexia concept will lead to long-term success.
Efficiency/cost-efficiency dimension: The recommended exit solution (assisted living) could not
be implemented because the family’s financial situation was difficult at that time.

11.3.7 Conclusion

The examples illustrate how the COMET competence model can be used for the
planning, implementation and evaluation of teaching and examinations. Students
learn to look at situations from different perspectives, ones that they had previously
often forgotten or neglected and which at first glance do not appear to be common
“everyday topics” in student practice.
Teachers experience the COMET method as an ideal supplement for the prepa-
ration and implementation of vocational training based on the guiding principle of
holistic education, which contributes to not losing sight of the interrelationships
between complex professional tasks. The teachers further emphasise that the
COMET method prevents the complexity of professional fields from being
trivialised and the knowledge content from being atomised and broken down into
subjects. The facts are not taken out of context. This was also ensured by the
concrete and authentic working and learning situations that formed the centre of the
lessons.
The results of the competence measurements carried out within the framework of
the COMET project prove the success of the strategy documented in the teaching
examples: to take the complexity of the work and learning processes seriously and to
understand the heterogeneity in the educational programmes as a resource and a
challenge. For teachers, this also means meeting the contextual requirements of
educational practice with a high degree of didactic creativity and flexibility.
The results of the COMET project confirmed our competence orientation, which
has been anchored for years. Consequently, it is not surprising that even after
completion of the project, the COMET model remains an integral part of our
everyday practice. With the eight competence criteria, there is an optimally suitable
reference standard for the “holism” construct, which can guide many processes at a
school (cf. Gäumann-Felix & Hofer, 2015).
Appendix A: The Four Developmental Areas

Developmental Area 1: Orienting Work Tasks—Knowledge for Orientation and Overview
Job starters already have some prior experience and knowledge with regard to
their occupation, which they selected not the least on the basis of this prior
knowledge. At the beginning of their training, they are introduced to orienting
work tasks that give them an opportunity to gain an overview of the work in
this occupation. Novices work on these tasks systematically and in accordance
with existing rules, prescriptions and quality standards. This first learning area
is thus characterised by the acquisition of professional knowledge for orien-
tation and overview that allows the trainees to become aware of the structure of
the training occupation from a professional perspective. At the same time, they
experience the diverse requirements of work processes and the integration of
these processes into the development and innovation processes in the enter-
prise. Work and technology are thus experienced also as phenomena that can
be structured by the people involved.
Developmental Area 2: Systemic Work Tasks—Integrated Professional Knowledge
The advanced beginner, who already has concrete ideas of the occupation
from the perspective of application and utilisation and who has acquired some
relevant competences, now encounters systemic work tasks for the develop-
ment of integrated professional knowledge (perspective of systems architec-
ture). The relationship and interaction of skilled worker, technology and work
organisation also requires an integrated view. The mastering of systemic tasks
means that the trainees fulfil these tasks with a view to the context and in
consideration of the systemic structure of technology and work organisation.


© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 487
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2

At this second level of vocational learning, the basic concept of the occupation
formulated at the first level and the integrated professional knowledge can lead
to a reflected professional identity when the educational potentials of the
corporate work environment are exploited.
Developmental Area 3: Problem-Oriented Special Work Tasks—Knowledge of Details and Functions
The professional knowledge for orientation and overview, the integrated
knowledge and the ability to solve tasks systematically enable the trainees at
the third level to work on problem-oriented special work tasks. The solution of
these tasks is no longer possible on the basis of pre-defined rules and patterns.
The task includes some novelty that is not fully covered by the problem-
solving strategies applied to former tasks. The trainees need to analyse the task
first and to identify the problem in order to plan their activities.
The paradigm of the holistic and complex work activity, which was developed
in the 1980s, and the associated capacity for independent planning,
implementation, control and evaluation of professional work tasks, corresponds
to the third step of the logical structuring of vocational education. At this level,
the professional identity leads to professional responsibility as a condition for
performance (intrinsic motivation) and quality awareness as an essential con-
dition for the fulfilment of complete work tasks in problematic work contexts.
Developmental Area 4: Unpredictable Work Tasks—Experiential and Systematic In-Depth Knowledge
When the trainees have developed a sufficient understanding of the tasks of
professional work, they can gain experience with the handling of non-routine
situations and problems. Unpredictable work tasks that are too complex to be
fully analysed in the concrete work situation, and that therefore cannot simply
be mastered systematically, place high demands on the trainees on their way to
the level of competent professionals. Competence in this case is based on knowledge
about previous tasks where the constellation was at least similar, on the antici-
pation of possible strategies, on theoretical knowledge and practical skills as well
as on intuition. Problems are solved situationally, without the need to
calculate the activity with all its preconditions and consequences in detail.
The aim at the fourth level of this model of vocational education is to
integrate reflected professionalism with subject-specific competence in order
to open the opportunity for higher education. The aptitude for higher education
emerges from an extended self-conception, which is not so much rooted in a
narrowly defined occupational profile, but rather in a career path that is
associated with this occupation.

The four developmental areas according to which vocational training courses can
be arranged in a developmentally logical manner
Appendix B: Rating Scale

Rating sheet COMET South Africa 2011–2016 (Code: Teacher)
Requirement is: not met at all / rather not met / rather met / fully met
(1) Clarity/presentation
Is the solution’s presentation understandable for the client/
orderer/customer/employer?
Is the solution presented on a skilled worker’s level?
Is the solution visualised (e.g. graphically)?
Is the presentation of the task’s solution structured and
clearly arranged?
Is the presentation adequate (e.g. theoretically, practically,
graphically, mathematically, causally)?
(2) Functionality
Is the solution operative?
Is the solution state of the art?
Are practical implementation and construction
considered?
Are the relations to professional expertise adequately
presented and justified?
Are presentations and explanations correct?
(3) Use value/sustainability
Is the solution easy to maintain and repair?
Are wear parts and long-term usability considered
and explained?
Is reducing susceptibility to faults considered in the
solution?
How user-friendly is the solution for the direct user?
How good is the solution’s practical use value (e.g. of
some equipment) for the orderer/client?
(4) Cost-effectiveness/efficiency
Is the solution efficient and cost-effective?
Is the solution adequate in terms of time and persons
needed?
Does the solution consider the relation between time and
effort and the company’s benefit?
Are follow-up costs considered?
Is the procedure to solve the task (work process) efficient?
(5) Orientation on business and work processes
Is the solution embedded in the company’s work and
business processes (in company/at the client)?
Are the solutions based on work experience?
Does the solution consider preceding and following work/
business processes?
Does the solution express skills related to work processes
that are typical for the profession?
Does the solution consider aspects that go beyond the
particular profession?
(6) Social compatibility
To what extent does the solution consider possibilities of a
humane work organisation?
Does the solution consider aspects of health protection?
Does the solution consider ergonomic aspects?
Does the solution follow the relevant rules and regulations
regarding work safety and prevention of accidents?
Does the solution consider social consequences?
(7) Environmental compatibility
Does the solution consider the relevant environmental
regulations?
Do the materials used comply with criteria of environmental
compatibility?
To what extent does the solution consider an environ-
mentally friendly work organisation?
Does the solution consider recycling, reuse, and
sustainability?
Does the solution address possibilities of energy saving
and better energy efficiency?
(8) Creativity
Does the solution include original aspects in excess of the
solution space?
Have different criteria been weighted against each other?
Has the solution some creative quality?
Does the solution show awareness of the problems?
Does the solution exploit the leeway offered by the task?
Rating sheet as used in the tests COMET South Africa 2011–2016
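The sheet above can be turned into scores in a straightforward way. The following sketch assumes that each of the eight criteria has five items rated on the four-point scale (0 = not met at all to 3 = fully met), that a criterion score is the mean of its items and the total score the sum of the criterion scores; the actual COMET scoring model may weight or scale items differently.

```python
# Aggregating COMET-style ratings (assumption: criterion score = item mean,
# total score = sum over the eight criteria, i.e. 0..24).

CRITERIA = [
    "Clarity/presentation", "Functionality", "Use value/sustainability",
    "Cost-effectiveness/efficiency", "Business and work process orientation",
    "Social compatibility", "Environmental compatibility", "Creativity",
]

def criterion_score(item_ratings):
    """Mean of the five item ratings (each 0..3) for one criterion."""
    if len(item_ratings) != 5 or any(not 0 <= r <= 3 for r in item_ratings):
        raise ValueError("expected five ratings in the range 0..3")
    return sum(item_ratings) / 5

def total_score(ratings_by_criterion):
    """Sum of the eight criterion scores."""
    return sum(criterion_score(ratings_by_criterion[c]) for c in CRITERIA)

ratings = {c: [2, 2, 3, 1, 2] for c in CRITERIA}  # hypothetical rater input
print(round(total_score(ratings), 1))  # 8 criteria x mean 2.0 = 16.0
```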

Evaluation sheet (rating scale) for large-scale projects in the field of nursing
and health care professions: measurement of cognitive dispositions
Requirement is: not met at all (0) / rather not met (1) / rather met (2) / fully met (3)
(1) Clarity/presentation
Is the presentation form of the task solution suitable for
discussing it with clients, patients, parents, relatives, etc.?
Is the task solution adequately presented for experts (col-
leagues, superiors)?
Is the solution of the task illustrated (e.g. by means of risk
assessment scales, documentation sheets, etc.)?
Is the presentation of the task's solution structured and
clearly arranged?
Is the presentation of the task solution appropriate to the
issue (e.g. technical, technical-practical, language-based,
etc.)?
(2) Functionality
Is the task solution technically justified?
Is the state of technical knowledge taken into account?
Is practical feasibility taken into account?
Are the professional contexts adequately presented and
justified?
Are the descriptions and explanations correct?
(3) Sustainability/use value
Is the task solution aimed at long-term success?
Are aspects of health promotion and prevention taken into
account?
Is the task solution aimed at encouraging self-determined,
autonomous action?
Does the task solution aim at fundamental effects?
Is the aspect of social inclusion taken into account?
(4) Cost-effectiveness/efficiency
Is the implementation of the solution economical in terms
of material costs?
Is the implementation of the solution appropriate (justified)
in terms of time and human resources?
Is the relationship between effort and quality considered
and justified?
Are the follow-up costs of implementing the solution var-
iant considered and justified?
Is the efficiency of the solution also considered in terms of
social costs at system level?
(5) Orientation on business and work process
Is the solution embedded in the process and organisational
structure of the institution?
Are the upstream and downstream tasks and processes
considered and justified in the solution?
Does the solution include the transfer of all necessary
information to everyone involved in the care process?
Is the solution an expression of typical work process-
related skills?
Does the solution take into account aspects that go beyond
the boundaries of one's own professional work (involve-
ment of other professionals)?
(6) Social compatibility
To what extent does the proposed solution take into
account aspects of humane work and organisational
design?
Are the relevant rules and regulations of hygiene and
health protection considered and justified?
Are ergonomic design aspects considered and justified in
the proposed solution?
Are the relevant rules and regulations for work safety and
accident prevention considered?
Are aspects of environmental protection and sustainable
management considered and justified?
(7) Family/social-cultural context
Is the family context taken into account in the analysis and
development of a proposed solution?
Are the institutional social framework conditions taken
into account?
Are the task-related aspects of the social milieu taken into
account?
Are the cultural aspects of the task (e.g. migration back-
ground) analysed and taken into account in the justification
of the task solution?
To what extent are social/socio-cultural consequences also
taken into account in the solution?
(8) Creativity
Does the solution contain elements that extend beyond the
expected solution space?
Is an unusual and at the same time sensible solution being
developed?
Does the solution have a design (e.g. aesthetic) quality?
Does the solution show problem sensitivity?
Is the creative leeway offered by the task exhausted in the
solution?

Only items that differ from rating scale A are listed.


Appendix C: Examples for Test Tasks

Note
The representation of the solution spaces is deliberately not standardised. The
solution spaces for the different learning situations therefore have different forms
of presentation. This gives the reader the opportunity to try out the different
variants when creating their own learning situations.

Example Millwright

COMET Test Task: Signals


Situation
A railway crossing without barriers in a remote location is to be secured by
light signals. The signals shall supplement the existing four St. Andrew’s
crosses (see Fig. C.1).
The level crossing is located in a water protection area. The power supply
for the signals and the controls has a voltage of 24 V DC (±20%). The next
public power line connection is 2 km away.
Between 7:00 am and 8:00 pm, two trains per hour are operating on the
single-track railroad. The rest of the day one train per hour passes the crossing.
The maximum speed of the trains in this part of the line is 80 km/h.
In accordance with the relevant regulations, the signals have to start
operating 30 seconds before a train reaches the crossing. The signalling
consists of an initial phase of yellow light and a subsequent phase of red.
The estimated power requirements of the system are as follows:


Fig. C.1 Map and close-up of the signals

• Controls: 5 W continuously
• For each passage of a train: 5 seconds of yellow light, then ca. 1 minute of red light
• Power consumption for each light: 60 W

The operator of the railway line expects a guaranteed independent power supply
for the system over a period of at least 7 days.
Assignment
Prepare documentation of the system that is as complete as possible.
If you have further questions, e.g. to the client or workers from other trades,
please note them down in preparation of a meeting.
Please give a detailed and comprehensive explanation of your proposal,
taking into account the following criteria:
• The functionality of a good complete solution.
• The clear presentation of your solution so as to be able to discuss it with
customers and work superiors.
• The utility and economy of the proposal.
• The aspects of environmental compatibility and related regulations.
• The effectiveness of the process and its integration into the business
operations.
• The aspects of (work) safety and health.
• Finally, you are always encouraged to show your creativity.
Auxiliary material
You may use all of the standard materials such as table manuals, textbooks,
your own notes and a pocket calculator.

Solution Space Test Task: Signals


Indicator 1: Clarity/presentation
• The structure of the system is clearly described (ground plan).
• The method of power supply is clear (circuit diagram).
• The work steps are clear (work plan).
• The functioning is clear (programme).
• The functioning of the system is described.
• The necessary components are indicated (bill of materials).
Indicator 2: Functionality
• Specification of the energy requirements (50 trains per day, 125 W maximum
power, approx. 240 Wh per day)
• The method of power supply guarantees independent functionality for
7 days, for example, solar panels with backup battery
• Dimensioning of the photovoltaic modules (two in line for 24 V)
• Estimated energy conversion efficiency of the solar modules
(monocrystalline): η_S = 0.17
• Estimated energy conversion efficiency of the charge controller: η_C = 0.95
• Worst case: winter month with an insolation of approx. E = 0.9 kWh/m² per day
• Module surface: A = Q_d / (η_S · η_C · E) = 0.24 kWh / (0.17 · 0.95 · 0.9 kWh/m²) ≈ 1.65 m²

Alternatively:
• Standard module with 40 cells (10 × 10 cm each), output approximately
50 W under favourable conditions.
• A desired maximum output of 125 W would require three modules plus one
module in order to allow for series circuits with two modules each.
• Input with uninterruptible power supply or generator (here, the voltage
difference must be considered as well as the separation of circuits).
• The power supply delivers a maximum of 250 W.
• Appropriate protective means were selected (difference between network-
powered and solar-powered operation).
• Appropriate control sensors were selected and adequately placed, for
example, induction loops for detecting the axles (minimum distance
s = 80 km/h × 30 s ≈ 667 m).
• Functionality of the control system is guaranteed.
• Disturbances are taken into consideration.
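The sizing arithmetic behind this indicator can be checked in a few lines. The sketch below assumes that two signal lights burn simultaneously during a passage (which matches the stated 125 W maximum: 2 × 60 W + 5 W controls), uses the 50 passages per day from the solution space, and rounds the daily demand to 0.24 kWh before sizing the module, as above.

```python
# Energy demand, photovoltaic module surface and sensor distance for the
# "Signals" task, following the figures given in the solution space.

controls_wh = 5 * 24                                  # controls: 5 W around the clock
per_passage_s = 5 + 60                                # ~5 s yellow, then ~1 min red
lights_wh = 50 * per_passage_s * (2 * 60) / 3600      # 50 passages, 2 lights x 60 W
daily_demand_kwh = (controls_wh + lights_wh) / 1000   # ~0.23 kWh, rounded to 0.24

eta_s, eta_c = 0.17, 0.95      # solar module and charge controller efficiency
insolation = 0.9               # worst-case winter insolation, kWh/m2 per day
area_m2 = 0.24 / (eta_s * eta_c * insolation)         # required module surface

detect_m = 80 / 3.6 * 30       # 80 km/h for 30 s -> induction loop distance in m

print(round(area_m2, 2), round(detect_m))  # 1.65 667
```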


Indicator 3: Utility
• The system can be adapted to other situations (e.g. time adjustment,
transferability to other systems)
• Maintenance is possible by client staff, instructions of use were provided
• Malfunctions are signalled appropriately
Indicator 4: Economy
• The system requires little maintenance.
• Standard components are used.
• Procurement/production costs, operational costs, maintenance costs,
follow-up costs.
Indicator 5: Work and business process
• It is clearly indicated what needs to be arranged with the client for the
installation (At what time can the work be done? Who is responsible for the
safety? Will the railroad be closed during the installation works?)
• Does the work plan take into account more than one worker?
• Third-party suppliers were included, for example, for preparing the
foundations.
• There are suggestions for a maintenance plan.
Indicator 6: Social compatibility
• The safety of the workers is guaranteed (protective equipment, posts).
• Appropriate technical equipment, for example, hoists, barriers
• Is the status of the system communicated to the train driver?
• Were alarms taken into consideration?
Indicator 7: Environmental compatibility
• No hazardous material (e.g. battery acid, fuel) can leak into the
environment.
• If hazardous material is used, instructions are given for safe use and
disposal.
• Application of LED instead of light bulbs
Indicator 8: Creativity
• The system is equipped with an automatic signalling and alarm system.
• The design and structure of the system suit the surroundings.
• The system detects when a train is stopping on the railway crossing.

Example Electrician

Skylight Control Test Task


Background situation
A company produces equipment for aircraft kitchens in a two-shift mode
(Monday to Friday from 6 am to 10 pm, Saturday from 6 am to 2 pm). Up to
now, the four skylights of the heated assembly hall have been opened and
closed decentrally by means of four separate crank handles (see Fig. C.2). Due
to this time-consuming manual control, it sometimes happens that workers
forget to close the skylights at the end of the day. There were also incidents
when open skylights were damaged during stormy weather.
The management requests a new skylight control system that is safer and
more comfortable.
In a meeting, further specifications are formulated (see Fig. C.2).
Assignment
Prepare complete documents for the revision of the system. If you have
further questions, for example, to the client, the users or workers from other
trades, please note them down in preparation of a meeting.
Please give a detailed and comprehensive explanation of your proposal,
taking into account the following criteria:
• The functionality of a good complete solution.
• The clear presentation of your solution so as to be able to discuss it with
customers and work superiors.
• The utility and economy of the proposal.
• The aspects of environmental compatibility and related regulations.
• The effectiveness of the process and its integration into the business
operations.
• The aspects of (work) safety and health.
• Finally, you are always encouraged to show your creativity.
Auxiliary material
You may use all of the standard materials such as table manuals, textbooks,
your own notes and a pocket calculator.

Fig. C.2 Close-up of a skylight and sketch of the assembly hall, with the specifications from the
meeting: "The skylights ought to be opened and closed centrally." "When the temperature in the
workspace within the hall gets too high, the skylights have to open." "There is an enlargement of
the assembly hall scheduled for the next year."

Solution Space: Skylight Control


Indicator 1: Clarity/presentation
• Did the candidate present a technology scheme or any other sketch with
explanations?
• Did the candidate prepare a clear bill of materials (e.g. table) showing the
required components or materials?
• Are there appropriate circuit diagrams?
• Are special features highlighted by colours?
Indicator 2: Functionality
Realisation is possible by means of contactors, a programmable logic
controller (LOGO, Easy) or EIB. Solutions that use a programmable logic
controller should be given better marks with a view to the convenient enlarge-
ment options.
Bill of materials (example):
• 4 × button for opening skylight
• 4 × button for closing skylight
• 1 × wind sensor
• 1 × rain sensor
• 1 × temperature sensor
• 1 × controller
• 4 × motors with clockwise and counterclockwise rotation (possibly with
automatic stop, in which case end sensors are unnecessary)
• 4 × relay for clockwise rotation
• 4 × relay for counterclockwise rotation
• Wiring material
• Installation material
• Fuses or circuit breakers as necessary
The control can be installed in the existing distribution board.
Wiring (example):
• Distribution board to skylight motors (4 wires)
• Wind sensor (on the roof) to distribution board (4 wires)
• Rain sensor (on the roof) to distribution board (4 wires)
• Temperature sensor (at a representative place in the hall) to distribution
board (3 wires)
• Button panel (near the door) to distribution board (9 wires)
When a programmable logic controller is used, the integrated time switch
can be used for closing the skylights at the end of the working day.
• Would a proposed skylight control be operative from a technical point of
view?
• Are the explanations and diagrams correct from a technical point of view?
• Have the stop switches been implemented correctly?
• Have the sensors (wind, temperature) been implemented correctly?
• Is it possible to open and close the skylights?
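The control behaviour described above can be sketched as a small decision function. This is a hedged illustration of the priorities implied by the requirements: storm protection and the end-of-day closing override everything, the temperature threshold overrides the manual buttons. The signal names and the threshold value are illustrative assumptions, not part of the task text.

```python
# Illustrative control logic for the centrally controlled skylights.

def skylight_command(wind_high, end_of_day, temp_c, open_pressed, close_pressed,
                     temp_limit=26.0):
    """Return 'open', 'close' or 'hold' for all four skylights."""
    if wind_high or end_of_day:
        return "close"        # protect against storm damage / unattended hall
    if temp_c > temp_limit:
        return "open"         # overheated workspace forces the skylights open
    if open_pressed:
        return "open"
    if close_pressed:
        return "close"
    return "hold"

print(skylight_command(wind_high=True, end_of_day=False, temp_c=30.0,
                       open_pressed=True, close_pressed=False))  # close
```

In a real implementation this priority ordering would live in the programmable logic controller favoured by the solution space, with the integrated time switch supplying the end-of-day signal.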
Indicator 3: Utility
• Easy operation, easy adaptation to changing requirements by programma-
ble controller, choice and placement of sensors, instructions for mainte-
nance (e.g. for the motors)
• Can the explanations and diagrams be understood by a non-expert as well?
• How convenient is the operation of the skylights for the user?
• Are there status signals and alert signals?
• Is a time switch (integrated into the controller) used for changing between
the weekday/Saturday/Sunday modes?
Indicator 4: Economy
• Easy expansibility of the system in the event of an enlargement of the
assembly hall, cost-efficient use of a programmable logic controller, capacity
of the control system, application of standard sensors, use of existing
equipment


• Were costs and workload of different control systems taken into account?
• Is the solution economical?
• Was the cost-benefit ratio taken into account?
Indicator 5: Work and business process
• Compliance with the requirements of the management, coordination with
master/foreman, use of existing equipment.
• Have the requirements of the client been taken into account?
• Does the proposal refer to particular circumstances of the installation
(e.g. installation during holidays)?
• Does the proposal foresee the involvement of professionals from other
departments in the installation works (e.g. installation of the motors by
in-house mechanics)?
• Has the handing over to the client been planned?
• Is there a time schedule?
Indicator 6: Social compatibility
• Consideration of work safety, automatic opening of the skylights in case of
high temperature.
• Does the proposal comply with particular work safety regulations, for
example, with regard to the installation of components on the roof?
• Does the proposal comply with safety regulations for electrical equipment?
• Is there a kill switch?
• Is there a feature for closing the skylights in case of fire?
Indicator 7: Environmental compatibility
• Saving energy by appropriate opening and closing of the skylights.
• Does the proposal refer to environment-friendly materials (e.g. wires with-
out PVC or halogen)?
• Does the proposal consider energy-saving measures (e.g. opening the
skylights only for a short time when the outdoor temperature is below
zero)?
Indicator 8: Creativity
• Proposals for the extension of the control system, for example, integration
of the rolling gate, projected enlargement of the hall, integration of the
heating control
• Ideas that go beyond the assignment, for example, to use the roof for the
installation of solar panels
• Did the students come up with special functions for the control system?

Example Welder

Welding Lifting Lug Test Task


Background situation
A ship repair company operating in Cape Town has been using a lifting lug
fabricated from a 50 mm plate in the form of a washer. It has been welded to a
flat base. The base is in turn bolted to the structure. The diagram shows the
configuration. The ring was fillet welded to the base using a fillet weld all
round. See Figs. C.3 and C.4.
While the weld joint material was sufficient for the load lifted, the lug failed
in duty.
Review the drawing and give your assessment of why the lug failed. How
would you redesign the lug to ensure success, given that it must be able to lift a
load of 2 tons?
Assignment
Prepare the required documentation that thoroughly explains and covers the
task at hand. Write down any additional steps to implement the client’s
requirement/s. Write down additional questions and/or suggestions to be
asked or made in follow-up meeting/s with the client or with workers from
within or outside the trade. Provide a comprehensive explanation of your
proposed solution by taking into account the following criteria:
The clear presentation of your solution so as to be able to discuss it with
customers and your work supervisors:
(a) The functionality of a good complete solution.
(b) Aspects of value in use and over time.
(c) The cost-effectiveness and cost-efficiency of your solution.
(d) The effectiveness of the process and its integration into business
operations.
(e) The social responsibility, including aspects of work, health and safety.
(f) The aspects of environmental compatibility and related regulations.
(g) You are encouraged to show your creativity.
Additional material
You may use all standard materials such as tables, manuals, textbooks,
calculators, the Internet, your own notes and the applicable regulations and
standards, including but not limited to the South African National Standards
(SANS) and the Occupational Health and Safety Act (85) of 1993 with the
various amendments to do the assignment.

Fig. C.3 Drawing of failed lug design

Fig. C.4 Photo of failed lug design

Welding Lifting Lug Solution Space


1. Clarity/Presentation
• Did the candidate prepare a detailed project plan outlining, amongst
others, the project timelines, project human resources and financial
resources/budget?
• Has a drawing showing the new design been submitted?
• Is the drawing technically correct, using the correct symbols and notations
(first angle orthographic projection drawing is given)?
• Has the candidate clearly identified the problem with the original lug?
• Does the presentation make clear the choice of materials and consum-
ables, processes and types of welds to be used?


• Is there enough information for a qualified welder to be able to correctly
weld the new design of lifting lug?
• Is there enough information, covering all aspects, to be able to imple-
ment the plan without gathering additional information; or has the
candidate produced a comprehensive list of questions to be addressed
at a follow-up meeting?
• Does the presentation convincingly and logically motivate the solution
to the correct “client”?
2. Functionality
• Will the solution hold under load?
• Does the solution discuss forces expected to be encountered by the
proposed lug?
• Does the method of joining meet the required standard?
• Does the solution comply with safety regulations for lifting equipment
(Working Load Limit and Safe Working Load)?
• Is the solution practical for lifting?
• Does the solution cover adequate non-destructive testing?
• Does the plan cover risk assessment and risk management?
3. Use Value/Sustainability
• Is the new design suitable for many different uses?
• Are the materials proposed readily available?
• Are the materials proposed long lasting in normal conditions of use?
4. Cost-effectiveness/Efficiency
• Is the suggested design the most cost-effective for the task?
• Does the design avoid over-engineering?
5. Business and Work Process Orientation
• Does the design work for common forms of lifting and lifting
equipment?
• Does the solution use skills related to work processes that are typical of
lifting gear?
• Does the solution consider aspects that go beyond the welding
occupation?
• Does the solution include reference to using correct communication
channels to relevant people (according to company procedures) in
order to achieve the solution?


6. Social Responsibility
• Does the explanation of the failure of the original design talk about
safety implications?
• Will the solution be safe (i.e. hold under load and allow secure attach-
ment of the lifting gear)?
• Does the solution specify (or make mention of the need to specify) the
load capacity of the lug?
• Is there provision for safely testing the new design?
• Does the solution discuss safety precautions while making the lug?
• Does the solution comply with or make reference to all relevant codes
and standards?
7. Environmental Responsibility
• Does the solution consider responsible use and disposal of materials?
• Does the solution consider recycling/reuse?
8. Creativity
• In examining the reasons for the failure of the first design, has the
candidate explored multiple options?
• Does the solution consider a wide range of options?
– Processes
– Materials
– Designs
• Does the solution include original aspects in excess of the solution
space?
Appendix D: Four-Field Matrix (Tables)

Profession Occupational identity Organisational identity


Warehouse operator 0.6706285 0.4078155
Specialist employee for bathing establishments 0.0477596 0.0624348
Warehouse logistics specialist 0.1034256 0.0417342
Specialist for hospitality industry 0.1010839 0.0747539
Specialist for vehicle operations 0.4443006 0.6287455
Hairdresser 0.2843638 0.2770461
Cook 0.1661394 0.1675460
Carpenter 0.3340340 0.0104548
Painter and varnisher 0.1822839 0.1487034
Stonecutter 0.4405951 0.0334404
Duct builder 0.2861519 0.1998749
Gardener 0.1556112 0.2201187
Farmer 0.5670278 0.3551654
Fully qualified groom 0.3597894 0.0794280
Office clerk 0.2458676 0.0943638
Management assistant in hotel and hospitality 0.1632624 0.1650111
Management assistant in real estate 0.6913549 0.1859677
Salesman 0.0324006 0.0263969
Retail dealer 0.0478673 0.1696661
Car mechatronic 0.2626955 0.1527297
Surface coater 0.3876961 0.3424532
Vehicle painter 0.3774316 0.2295652
Glass constructor 0.7582568 0.3130811
Plant mechanic 0.6334552 0.4081456
Industrial mechanic 0.2211778 0.0704398
Mechatronic 0.1196560 0.1086569
Process mechanic 0.3424487 0.0048912
Cutting machine operator 0.2805183 0.2104140
Occupational and organisational identity: List of professions in the four-field matrix

Profession Occupational commitment Organisational commitment
Plant mechanic 0.58 0.48
Industrial mechanic 0.26 0.08
Mechatronic 0.27 0.07
Process mechanic 0.24 0.03
Cutting machine operator 0.20 0.15
Car mechatronic 0.21 0.15
Surface coater 0.28 0.34
Vehicle painter 0.18 0.02
Glass constructor 0.42 0.45
Office clerk 0.11 0.02
Management assistant in hotel and hospitality 0.11 0.01
Management assistant in real estate 0.36 0.30
Salesman 0.06 0.12
Retail dealer 0.05 0.05
Gardener 0.12 0.29
Farmer 0.18 0.30
Fully qualified groom 0.31 0.38
Warehouse operator 0.79 0.50
Specialist employee for bathing establishments 0.02 0.04
Warehouse logistics specialist 0.06 0.14
Specialist for hospitality industry 0.07 0.14
Specialist for vehicle operations 0.13 0.36
Hairdresser 0.21 0.03
Cook 0.23 0.21
Carpenter 0.20 0.13
Painter and varnisher 0.18 0.07
Stonecutter 0.40 0.07
Duct builder 0.12 0.09
Occupational and organisational commitment: List of professions in the four-field matrix
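The two commitment scores listed above place each profession into one of the four fields of the matrix (high/low on each dimension). As a minimal illustrative sketch of this quadrant assignment, the following Python snippet classifies a few professions from the table; the cut-off value of 0.25 is a purely illustrative assumption, not the threshold used in the COMET analyses.

```python
# Hypothetical sketch of a four-field (quadrant) classification.
# The cut-off of 0.25 is an illustrative assumption only.

def four_field(occupational, organisational, cutoff=0.25):
    """Return the quadrant label for a pair of commitment scores."""
    high_occ = occupational >= cutoff
    high_org = organisational >= cutoff
    if high_occ and high_org:
        return "high occupational / high organisational"
    if high_occ:
        return "high occupational / low organisational"
    if high_org:
        return "low occupational / high organisational"
    return "low occupational / low organisational"

# A few value pairs taken from the commitment table above.
data = {
    "Plant mechanic": (0.58, 0.48),
    "Industrial mechanic": (0.26, 0.08),
    "Gardener": (0.12, 0.29),
    "Retail dealer": (0.05, 0.05),
}

for profession, (occ, org) in data.items():
    print(f"{profession}: {four_field(occ, org)}")
```

With these illustrative settings, the plant mechanic falls into the high/high field and the retail dealer into the low/low field; shifting the cut-off moves professions between fields, which is why the choice of threshold matters for interpreting the matrix.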
Appendix E: Correlation Values for the Correlation Between Occupational Competences and I-C Averages

Correlations car mechatronics: [correlation table not reproduced]
** Correlation is significant at level 0.01 (two-sided)


Correlations electronics technicians for industrial engineering: [correlation table not reproduced]
* Correlation is significant at level 0.05 (two-sided)
** Correlation is significant at level 0.01 (two-sided)

Correlations electronics technicians for energy and building technology: [correlation table not reproduced]
* Correlation is significant at level 0.05 (two-sided)
** Correlation is significant at level 0.01 (two-sided)

Correlations medical assistant: [correlation table not reproduced]
** Correlation is significant at level 0.01 (two-sided)

Correlations carpenter: [correlation table not reproduced]
* Correlation is significant at level 0.05 (two-sided)
** Correlation is significant at level 0.01 (two-sided)

Correlations logistic clerks: [correlation table not reproduced]
** Correlation is significant at level 0.01 (two-sided)

Data: Total score (TS) of occupational commitment (OC)


List of References

This COMET Handbook is based on a large number of publications on the COMET test procedure and its application in regional, national and international comparative projects. These publications also consistently document the development, application and evaluation of the methods of competence diagnostics and development based on the COMET competence and measurement model.
The compilation of the sources, structured by chapter, avoids overloading the text with literature references. For example, the short form “COMET Vol. I” is used for Volumes I to V of the book series “Measuring Vocational Competences”, and the short form “A + B” (e.g. A + B 11/2013) for the “Research Reports on Work and Education”.

Chapter Reference
2 COMET IV: 1.1; 1.4
3 COMET I: 2
COMET III: 1
COMET IV: 2.1
4 COMET I: 3
COMET III: 2
4.7 A + B 01/2016;
Rauner, F.; Frenzel, J.; Piening, D.; Bachmann, N. (2015): Engagement und
Ausbildungsorganisation. Einstellungen sächsischer Auszubildender zu ihrem Beruf
und ihrer Ausbildung. Eine Studie im Rahmen der Landesinitiative Steigerung der
Attraktivität, Qualität und Rentabilität der dualen Berufsausbildung in Sachsen
(QEK). Bremen: Universität Bremen, I:BB.
5.1 Kleiner, M.; Rauner, F.; Reinhold, M.; Röben, P. (2002): Curriculum design I:
Identifizieren und Beschreiben von beruflichen Arbeitsaufgaben, Arbeitsaufgaben
für eine neue Beruflichkeit. In: Berufsbildung und Innovation – Instrumente und
Methoden zum Planen, Gestalten und Bewerten, Band 2. Koblenz: Christiani.
5.2 COMET IV: 3.2; 3.3.3
5.3 COMET IV: 2.4
5.5.1 COMET III: 4.2–4.3; COMET IV: S. 53–56
5.6 COMET IV: 2.5
5.7 COMET IV: 2.5; S. 67–74
6 COMET IV: Abb. 16 (S. 64)
6.1 COMET I: 3.5
6.2 COMET I: 5.1; COMET III: 4.2
6.3 COMET III: 4.3
6.4 Kalvelage, J.; Heinemann, L.; Rauner, F.; Zhou, Z. (2015): Messen von Identität und
Engagement in beruflichen Bildungsgängen. In: M. Fischer, F. Rauner, Z. Zhou
(Hg.). Münster: LIT, 305–326.
6.5 Zhuang, R.; Li, J. (2015): Analyse der interkulturellen Anwendung der COMET-
Kompetenzdiagnostik. In: M. Fischer, F. Rauner, Z. Zhou (Hg.). Münster:
LIT. S. 341–350.
7.1–7.3 Rauner, F.; Frenzel, J.; Piening, D. (2015): Machbarkeitsstudie: Anwendung des
KOMET-Testverfahrens für Prüfungen in der beruflichen Bildung. Bremen:
Universität Bremen, I:BB.
A + B 16/2015
7.4 A + B 18/2014
7.5.3 COMET III: S. 53–56
8.1–8.2 COMET II: 3; A + B 14/2014; COMET IV: 78–91
8.3 COMET III: 6.2; A + B 14/2014
8.4 COMET IV: 6.3; A + B 15/2014
8.5 A + B 01/2016; COMET III: 3.5
8.7 A + B 01/2016
9.1 A + B 18/2015
Fischer, M.; Rauner, F.; Zhao, Z. (2015): Kompetenzdiagnostik in der beruflichen
Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET
auf dem Prüfstand. Berlin: LIT.
Rauner, F. (2015): Messen beruflicher Kompetenz von Berufsschullehrern. In:
Fischer, M.; Rauner, F.; Zhao, Z. (Hg.): Kompetenzdiagnostik in der beruflichen
Bildung – Methoden zum Erfassen und Entwickeln beruflicher Kompetenz.
COMET auf dem Prüfstand. Münster: LIT.
9.5 Piening, D.; Frenzel, J.; Heinemann, L.; Rauner, F. (2014): Berufliche Kompetenzen
messen—Das Modellversuchsprojekt KOMET NRW. 1. und 2. Zwischenbericht.
9.6 A + B 11/2013
COMET III: 4.2
10.1–10.4 A + B 11/2013
10.5 Rauner, F. (2015): Messen beruflicher Kompetenz von Berufsschullehrern. In:
Fischer, M.; Rauner, F.; Zhao, Z. (hg.) (2015): Kompetenzdiagnostik in der
beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher
Kompetenz. COMET auf dem Prüfstand. Münster: LIT, S. 413–436.
Zhao, Z. (2015): Schritte auf dem Weg zu einer Kompetenzentwicklung für Lehrer
und Dozenten beruflicher Bildung in China. In: Ebd., S. 437–450.
11 Lehberger, J.; Rauner, F. (2014): Berufliches Lernen in Lernfeldern. Ein Leitfaden
für die Gestaltung und Organisation projektförmigen Lernens in der Berufsschule.
Bremen: Universität Bremen, I:BB.
Bibliography

Adolph, G. (1984). Fachtheorie verstehen. Reihe Berufliche Bildung, Band 3. Wetzlar:
Jungarbeiterinitiative an der Werner-von-Siemens-Schule. Projekt Druck.
Aebli, H., & Cramer, H. (1963). Psychologische Didaktik. Didaktische Auswertung der
Psychologie von Jean Piaget. Stuttgart: Klett-Cotta.
Akaike, H. (1987). Factor analysis and AIC. Psychometrika, 52, 317–332.
Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and
assessing. New York: Longman.
Asendorpf, J., & Wallbott, H. G. (1979). Maße der Beobachterübereinstimmung: Ein
systematischer Vergleich. Zeitschrift für Sozialpsychologie, 10, 243–252.
Baethge, M., Achtenhagen, F., Babie, F., Baethge-Kinsky, V., & Weber, S. (2006). Berufsbildungs-
PISA. Machbarkeitsstudie. München: Steiner.
Baethge, M., Gerstenberger, F., Kern, H., Schumann, M., Stein, H. W., & Wienemann, E. (1976).
Produktion und Qualifikation: eine Vorstudie zur Untersuchung von Planungsprozessen im
System der beruflichen Bildung. Hannover: Schroedel.
Baruch, Y. (1998). The rise and fall of organizational commitment. Human Systems Management,
17, 135–143.
Bauer, W. (2006). Einstellungsmuster und Handlungsprinzipien von Berufsschullehrern. Eine
empirische Studie zur Lehrarbeit im Berufsfeld Elektrotechnik. Bielefeld: W. Bertelsmann.
Bauer, W. (2013). Conceptual change research in TVET. In L. Deitmer, U. Hauschildt, F. Rauner,
& H. Zelloth (Eds.), The architecture of innovative apprenticeship (pp. 219–229). Dordrecht:
Springer Science + Business Media.
Baumert, J., Klieme, E., Neubrand, M., Prenzel, M., Schiefele, U., Schneider, W., et al. (Eds.).
(2001). PISA 2000: Basiskompetenzen von Schülerinnen und Schülern im internationalen
Vergleich. Opladen: Leske + Budrich.
Beck, U. (1993). Zur Erfindung des Politischen. Zu einer Theorie reflexiver Modernisierung (1st
ed.). Frankfurt/Main: Suhrkamp.
Beck, U., Giddens, A., & Lash, S. (1996). Reflexive Modernisierung. Eine Kontroverse. Frankfurt/
Main: Suhrkamp.
Becker, M. (2003). Diagnosearbeit im Kfz-Handwerk als Mensch-Maschine-Problem.
Konsequenzen des Einsatzes rechnergestützter Diagnosesysteme für die Facharbeit. Disserta-
tion, Bielefeld: W. Bertelsmann.
Becker-Lenz, R., & Müller-Hermann, S. (2013). Die Notwendigkeit von wissenschaftlichem
Wissen und die Bedeutung eines professionellen Habitus für die Berufspraxis der sozialen
Arbeit. In R. Becker-Lenz, S. Busse, G. Ehlert, & S. Mueller (Eds.), Professionalität in der

sozialen Arbeit. Standpunkte, Kontroversen, Perspektiven (3rd ed., pp. 203–229). Wiesbaden:
VS Verlag für Sozialwissenschaften.
Benner, P. (1984). From novice to expert. Excellence and power in clinical nursing practice. Menlo
Park: Addison-Wesley.
Benner, P. (1994). Stufen der Pflegekompetenz. From novice to expert. Bern u. a. O.: Huber.
Benner, P. (1997). Stufen zur Pflegekompetenz. From novice to expert. (2. Nachdruck). Bern u.a.:
Huber.
Bergmann, J. R. (1995). “Studies of work” – Ethnomethodologie. In U. Flick, E. von Kardorff,
H. Keupp, & L. von Rosenstiel (Eds.), Handbuch Qualitative Sozialforschung. Grundlagen,
Konzepte, Methoden und Anwendungen (pp. 269–272). Weinheim: Beltz.
Bergmann, J. R. (2006). Studies of work. In F. Rauner (Ed.), Handbuch Berufsbildungsforschung
(2nd ed., pp. 640–646). Bielefeld: wbv.
Blankertz, H. (1972). Kollegstufenversuch in Nordrhein-Westfalen – das Ende der gymnasialen
Oberstufe und der Berufsschulen. DtBFsch, 68(1), 2–20.
Blankertz, H. (1983). Einführung in die Thematik des Symposiums. In: Benner, D., Heid, H.,
Thiersch, H. (Hg.) Beiträge zum 8. Kongress der Deutschen Gesellschaft für
Erziehungswissenschaften vom 22–24. März 1982 in der Universität Regensburg. Zeitschrift
für Pädagogik, 18. Beiheft. 139–142.
Blankertz, H. (Ed.). (1986). Lernen und Kompetenzentwicklung in der Sekundarstufe
II. Abschlussbericht der wissenschaftlichen Begleitung Kollegstufe NW. 2 Bde. Soest: Soester
Verlagskontor.
BLK. (2002). Kompetenzzentren – Kompetenzzentren in regionalen Berufsbildungsnetzwerken
Rolle und Beitrag der beruflichen Schulen, BLK-Fachtagung am 3/4. Dezember 2001 in
Lübeck. Heft 99. Bonn.
BLK (Bund-Länder-Kommission für Bildungsplanung und Forschungsförderung). (1973).
Bildungsgesamtplan, Bd. 1. Stuttgart: Klett-Cotta.
Blömeke, S., & Suhl, U. (2011). Modellierung von Lehrerkompetenzen. Nutzung unterschiedlicher
IRT-Skalierungen zur Diagnose von Stärken und Schwächen deutscher Referendarinnen und
Referendare im internationalen Vergleich. Zeitschrift für Erziehungswissenschaft, 13, 473–505.
Böhle, F. (2009). Weder rationale Reflexion noch präreflexive Praktik – erfahrungsgeleitet-
subjektivierendes Handeln. Wiesbaden: Springer.
Böhle, F., & Rose, H. (1992). Technik und Erfahrung. Arbeit in hochautomatisierten Systemen.
Frankfurt a. M., New York: Campus.
Borch, H., & Schwarz, H. (1999). Zur Konzeption und Entwicklung der neuen IT-Berufe. In
Bundesinstitut für Berufsbildung (Ed.), IT-Best-Practise, Gestaltung der betrieblichen
Ausbildung. Bielefeld: W. Bertelsmann.
Borch, H., & Weißmann, H. (2002). IT-Berufe machen Karriere. Zur Evaluation der neuen Berufe
im Bereich Information und Telekommunikation. In Bundesinstitut für Berufsbildung (Ed.),
IT-Best-Practise, Gestaltung der betrieblichen Ausbildung. Bielefeld: W. Bertelsmann.
Boreham, N. C., Samurçay, R., & Fischer, M. (Eds.). (2002). Work process knowledge. London,
New York: Routledge.
Bortz, J., & Döring, N. (2003). Forschungsmethoden und Evaluation für Human- und
Sozialwissenschaftler (3. Auflage). Berlin, Heidelberg: Springer.
Bozdogan, H. (1987). Model selection and Akaike’s information criterion (AIC): The general
theory and its analytical extensions. Psychometrika., 52(3), 345–370.
Bozdogan, H., & Ramirez, D. E. (1988). FACAIC: Model selection algorithm for the orthogonal
factor model using AIC and CAIC. Psychometrika., 53(3), 407–415.
Brand, W., Hofmeister, W., & Tramm, F. (2005). Auf dem Weg zu einem Kompetenzstufenmodell
für die berufliche Bildung. Erfahrungen aus dem Projekt ULME. In: bwp@-Berufs- und
Wirtschaftspädagogik. Online. 8 (Juli 2005)

Brater, M. (1984). Künstlerische Übungen in der Berufsausbildung. In Projektgruppe
Handlungslernen (Ed.), Handlungslernen in der beruflichen Bildung (pp. 62–86). Wetzlar:
W.-von-Siemens-Schule. Projekt Druck.
Braverman, H. (1974). Die Arbeit im modernen Produktionsprozess (Übersetzung von: Labor and
monopoly capital. The degradation of work in the twentieth century. New York, London:
Monthly Review Press 1974). Frankfurt/Main, New York: Campus.
Bremer, R. (2001). Entwicklungslinien wesentlicher Identität und Kompetenz vom Anfänger zum
Experten. In W. Petersen, F. Rauner, & F. Stuber (Eds.), IT-gestützte Facharbeit –
gestaltungsorientierter Berufsbildung. Bildung und Arbeitswelt (Vol. 4, pp. 269–282). Baden-
Baden: Nomos.
Bremer, R. (2004). Zur Konzeption von Untersuchungen beruflicher Identität und fachlicher
Kompetenz – ein empirisch-methodologischer Beitrag zu einer berufspädagogischen
Entwicklungstheorie. In K. Jenewein, P. Knauth, P. Röben, & G. Zülch (Eds.),
Kompetenzentwicklung in Arbeitsprozessen. Bildung und Arbeitswelt (Vol. 9, pp. 107–121).
Baden-Baden: Nomos.
Bremer, R. (2006). Lernen in Arbeitsprozessen – Kompetenzentwicklung. In F. Rauner (Ed.),
Handbuch Berufsbildungsforschung (2nd ed., pp. 282–294). Bielefeld: W. Bertelsmann.
Bremer, R., & Haasler, B. (2004). Analyse der Entwicklung fachlicher Kompetenz und beruflicher
Identität in der beruflichen Erstausbildung. In: Bildung im Medium beruflicher Arbeit. ZfPäd,
50(2), 162–181.
Bremer, R., & Jagla, H.-H. (Eds.). (2000). Berufsbildung in Geschäfts- und Arbeitsprozessen.
Bremen: Donat.
Bremer, R., Rauner, F., & Röben, P. (2001). Der Experten-Facharbeiter-Workshop als Instrument
der berufswissenschaftlichen Qualifikationsforschung. In F. Eicker & A. W. Petersen (Eds.),
Mensch-Maschine-Interaktion. Arbeiten und Lernen in rechnergestützten Arbeitssystemen in
Industrie, Handwerk und Dienstleistung (HGTB 1999) (pp. 211–231). Baden-Baden: Nomos.
Brödner, P., & Oehlke, P. (2008). Shaping work and technology. In F. Rauner & R. Maclean (Eds.),
Handbook of technical and vocational education and training research. Berlin: Springer.
Brosius, F. (2013). SPSS 21 (1st ed.). Heidelberg: mitp.
Brown, A., Kirpal, S., & Rauner, F. (Eds.). (2007). Identities at work. Dordrecht: Springer.
Bruner, J. S. (1977). Wie das Kind lernt, sich sprachlich zu verständigen. Zeitschrift für Pädagogik,
23, 153 ff.
Brüning, L., & Saum, T. (2006). Erfolgreich unterrichten durch Kooperatives Lernen. Strategien
zur Schüleraktivierung. Essen: Neue Deutsche Schule Verlagsgesellschaft mbH.
Bundesministerium für Bildung und Forschung (BMBF). (2006a). Berufsbildungsbericht 2006.
Teil I, Anhang.
Bundesministerium für Bildung und Forschung (BMBF) (Hg.) (2006b). Umsetzungshilfen für die
Abschlussprüfungen der neuen industriellen und handwerklichen Elektroberufe. Intentionen,
Konzeption und Beispiele (Entwicklungsprojekt). Stand: 30.12.2005. (Teil 1 der
Abschlussprüfung); Stand: 09.01.2006. (Teil 2 der Abschlussprüfung). Manuskript.
Bundesministerium für Wirtschaft und Energie (BMWi). (2005). Abschlussbericht: Was muss ich
an Ausbildungsordnungen ändern, damit Unternehmen mehr ausbilden? (Oktober 2005).
Erstellt von Ramböll Management.
Bungard, W., & Lenk, W. (Eds.). (1988). Technikbewertung. Philosophische und psychologische
Perspektiven. Frankfurt/Main: Suhrkamp.
Butler, P., Felstead, A., Ashton, D., Fuller, A., Lee, T., Unwin, L., et al. (2004). High performance
management: a literature review. Leicester: University of Leicester.
Bybee, R. W. (1997). Achieving scientific literacy: from purposes to practices. Portsmouth, NH:
Heinemann.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for
research. Chicago: Rand McNally.
Carey, S. (1985). Conceptual change in childhood. Cambridge, MA: MIT Press.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale:
Erlbaum.
Cohen, A. (1991). Career stage as a moderator of the relationship between organizational commit-
ment and its outcomes: A meta-analysis. Journal of Occupational Psychology, 64, 253–268.
Cohen, A. (2007). Dynamics between occupational and organizational commitment in the context
of flexible labour market: A review of the literature and suggestions for a future research
agenda. Bremen: ITB-Forschungsbericht 26/2007.
Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of
reading, writing and mathematics. In L. B. Resnick (Ed.), Knowing, learning and instruction
(pp. 453–494). Hillsdale, NJ: Erlbaum.
COMET-Konsortium (in Zusammenarbeit mit dem Hessischen Kultusministerium und der
Senatorin für Bildung und Wissenschaft der Freien Hansestadt Bremen). (2010). Berufliche
Kompetenzen messen: Das Projekt KOMET der Bundesländer Bremen und Hessen. Zweiter
Zwischenbericht der wissenschaftlichen Begleitung – Ergebnisse 2009. Forschungsgruppe I:
BB: Universität Bremen.
Connell, M. W., Sheridan, K., & Gardner, H. (2003). On abilities and domains. In R. J. Sternberg &
E. L. Grigorenko (Eds.), The psychology of abilities, competencies and expertise (pp. 126–155).
Cambridge: Cambridge University Press.
Cooley, M. (1988). Creativity, skill and human-centred systems. In B. Göranzon & J. Josefson
(Eds.), Knowledge, skill and artificial intelligence (pp. 127–137). Berlin, Heidelberg,
New York: Springer.
Corbett, J. M., Rasmussen, L. B., & Rauner, F. (1991). Crossing the border. The social and
engineering design of computer integrated manufacturing systems. London u. a. O.: Springer.
Crawford, M. (2010). Ich schraube, also bin ich: Vom Glück, etwas mit den eigenen Händen zu
schaffen. Berlin: Ullstein.
Crawford, M. B. (2016). Die Wiedergewinnung des Wirklichen. Eine Philosophie des Ichs im
Zeitalter der Zerstreuung. Berlin: Ullstein.
Dehnbostel, P. (1994). Erschließung und Gestaltung des Lernorts Arbeitsplatz. Berufsbildung in der
wissenschaftlichen Praxis, 23(1), 13–18.
Dehnbostel, P. (2005). Lernen-Arbeiten-Kompetenzentwicklung. Zur wachsenden Bedeutung des
Lernens und der reflexiven Handlungsfähigkeit im Prozess der Arbeit. In G. Wiesner &
A. Wolter (Eds.), Die lernende Gesellschaft. Weinheim: Juventa.
Deitmer, L., Fischer, M., Gerds, P., Przygodda, K., Rauner, F., Ruch, H., et al. (2004). Neue
Lernkonzepte in der dualen Berufsausbildung. Bilanz eines Modellversuchsprogramms der
Bund-Länder-Kommission (BLK). Reihe: Berufsbildung, Arbeit und Innovation (Vol. 24).
Bielefeld: W. Bertelsmann.
Dengler, K., Matthes, B. (2015). Folgen der Digitalisierung für die Arbeitswelt.
Substituierungspotentiale von Berufen in Deutschland. IAB-Forschungsbericht 11/2015.
Deutsche Forschungsgemeinschaft (DFG). (1998). Sicherung guter wissenschaftlicher Praxis.
Denkschrift. Empfehlungen der Kommission “Selbstkontrolle in der Wissenschaft”. Weinheim:
WILEY-VCH. (ergänzende Auflage 2013).
Deutscher Bundestag (11. Wahlperiode). (1990). Berichte der Enquête-Kommission „Zukünftige
Bildungspolitik – Bildung 2000“. Drucksache 11/7820. Bonn.
Dewey, J. (1916). Democracy and education. The middle works of John Dewey 1899–1924 (Vol.
9). Edwardsville: Southern Illinois University Press.
Dörner, D. (1983). Empirische Psychologie und Alltagsrelevanz. In G. Jüttemann (Ed.),
Psychologie in der Veränderung (pp. 13–30). Weinheim: Beltz.
Drescher, E. (1996). Was Facharbeiter können müssen: Elektroinstandhaltung in der vernetzten
Produktion. Bremen: Donat.

Drexel, I. (2005). Das duale System und Europa. Ein Gutachten im Auftrag von ver.di und IG
Metall. Berlin: Hausdruck.
Dreyfus, H. L., & Dreyfus, S. E. (1987). Künstliche Intelligenz. Von den Grenzen der
Denkmaschine und dem Wert der Intuition. Reinbek bei Hamburg: Rowohlt.
Dybowsky, G., Haase, P., & Rauner, F. (1993). Berufliche Bildung und betriebliche
Organisationsentwicklung. Reihe: Berufliche Bildung (Vol. 15). Bremen: Donat.
Efron, B., & Tibshirani, R. J. (1994). An introduction to the Bootstrap. Boca Raton: Chapman &
Hall/CRC.
Embretson, S. E., & Reise, S. P. (2013). Item response theory for psychologists. Hoboken: Taylor
and Francis.
Emery, F. E., & Emery, M. (1974). Participative design. Canberra: Centre for Continuing Educa-
tion. Australian National University.
Erdwien, B., & Martens, T. (2009). Die empirische Qualität des Kompetenzmodells und des
Ratingverfahrens. In Rauner, F. u. a.: Messen beruflicher Kompetenzen. Bd. II. Ergebnisse
COMET 2008. Reihe Bildung und Arbeitswelt. Münster: LIT.
Erpenbeck, J. (2001). Wissensmanagement als Kompetenzmanagement. In G. Franke (Ed.),
Komplexität und Kompetenz. Ausgewählte Fragen der Kompetenzforschung (pp. 102–120).
Bielefeld: W. Bertelsmann.
Euler, D. (2011). Kompetenzorientiert prüfen – eine hilfreiche Version? In E. Severing & R. Weiß
(Eds.), Prüfungen und Zertifizierungen in der beruflichen Bildung. Anforderungen –
Instrumente – Forschungsbedarf (pp. 55–66). Bielefeld: W. Bertelsmann.
Fischer, M. (2000a). Arbeitsprozesswissen von Facharbeitern – Umrisse einer forschungsleitenden
Fragestellung. In J.-P. Pahl, F. Rauner, & G. Spöttl (Eds.), Berufliches Arbeitsprozesswissen.
Ein Forschungsgegenstand der Berufsfeldwissenschaften (pp. 31–47). Baden-Baden: Nomos.
Fischer, M. (2000b). Von der Arbeitserfahrung zum Arbeitsprozesswissen. Rechnergestützte
Facharbeit im Kontext beruflichen Lernens. Opladen: Leske + Budrich.
Fischer, M. (2002). Die Entwicklung von Arbeitsprozesswissen durch Lernen im Arbeitsprozess –
theoretische Annahmen und empirische Befunde. In M. Fischer & F. Rauner (Eds.), Lernfeld:
Arbeitsprozess. Ein Studienbuch zur Kompetenzentwicklung von Fachkräften in gewerblich-
technischen Aufgabenbereichen (pp. 53–86). Baden-Baden: Nomos.
Fischer, M., & Witzel, A. (2008). Zum Zusammenhang von berufsbiographischer Gestaltung und
beruflichem Arbeitsprozesswissen. In M. Fischer, & G. Spöttl (Hg.), Im Fokus:
Forschungsperspektiven in Facharbeit und Berufsbildung. Strategien und Methoden der
Berufsbildungsforschung (pp. 24–47). Frankfurt a. M.: Peter Lang.
Fischer, R. (2013). Berufliche Identität als Dimension beruflicher Kompetenz. Entwicklungsverlauf
und Einflussfaktoren in der Gesundheits- und Krankenpflege. Reihe Berufsbildung, Arbeit und
Innovation (Vol. 26). Bielefeld: wbv.
Fischer, B., Girmes-Stein, R., Kordes, H., & Peukert, U. (1995). Entwicklungslogische
Erziehungsforschung. In H. Haft & H. Kordes (Eds.), Methoden der Erziehungs- und
Bildungsforschung. Band 2 der Enzyklopädie Erziehungswissenschaft (pp. 45–79). Stuttgart:
Klett.
Fischer, R., Hauschildt, U., Heinemann, L., & Schumacher, J. (2015). Erfassen beruflicher
Kompetenz in der Pflegeausbildung europäischer Länder. In M. Fischer, F. Rauner, &
Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und
Entwickeln beruflicher Kompetenz (pp. 375–392). Münster: LIT.
Fischer, M., Jungeblut, R., & Römmermann, E. (1995). “Jede Maschine hat ihre eigenen
Marotten!” Instandhaltungsarbeit in der rechnergestützten Produktion und Möglichkeiten
technischer Unterstützung. Bremen: Donat.
Fischer, M., & Rauner, F. (Eds.). (2002). Lernfeld: Arbeitsprozess. Ein Studienbuch zur
Kompetenzentwicklung von Fachkräften in gewerblich-technischen Aufgabenbereichen.
Reihe: Bildung und Arbeitswelt (Vol. 6). Baden-Baden: Nomos.

Fischer, M., Rauner, F., & Zhao, Z. (2015). Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand.
Münster: LIT.
Fischer, M., & Röben, P. (2004). Arbeitsprozesswissen im Fokus von individuellem und
organisationalem Lernen. Ergebnisse aus Großbetrieben in vier europäischen Ländern.
Zeitschrift für Pädagogik, 2(2004), 182–201.
Flick, U. (1995). Handbuch qualitative Sozialforschung: Grundlagen, Konzepte, Methoden und
Anwendung. Weinheim: Beltz.
Frank, H. (1969). Kybernetische Grundlagen der Pädagogik. Baden-Baden: Kohlhammer.
Frei, F., & Ulich, E. (Eds.). (1981). Beiträge zur psychologischen Arbeitsanalyse. Bern: Huber.
Freund, R. (2011). Das Konzept der multiplen Kompetenz auf den Analyseebenen Individuum,
Gruppe, Organisation und Netzwerk. Hamburg: Verlag Dr. Kovac.
Frey, A. (2006). Methoden und Instrumente zur Diagnose beruflicher Kompetenzen von
Lehrkräften – eine erste Standortbestimmung zu bereits publizierten Instrumenten. In:
Allemann-Ghionda, C., Terhart, E. (Hg.). Kompetenzen und Kompetenzentwicklung von
Lehrerinnen und Lehrern: Ausbildung und Beruf. Zeitschrift für Pädagogik. 51(Beiheft): 30–46
Frieling, E. (1995). Arbeit. In U. Flick et al. (Eds.), Handbuch Qualitative Sozialforschung (2nd ed.,
pp. 285–288). Weinheim: Beltz.
Ganguin, D. (1993). Die Struktur offener Fertigungssysteme in der Fertigung und ihre
Voraussetzungen. In G. Dybowsky, P. Haase, & F. Rauner (Eds.), Berufliche Bildung und
betriebliche Organisationsentwicklung (pp. 16–33). Bremen: Donat.
Gardner, H. (1991). Abschied vom IQ: die Rahmentheorie der vielfachen Intelligenzen. Stuttgart:
Klett-Cotta.
Gardner, H. (1999). Intelligence reframed: multiple intelligences for the 21st century. New York,
NY: Basic Books.
Gardner, H. (2002). Intelligenzen. Die Vielfalt des menschlichen Geistes. Stuttgart: Klett-Cotta.
Garfinkel, H. (1967). Studies in Ethnomethodology. Englewood Cliffs, N.J.: Prentice-Hall.
Garfinkel, H. (1986). Ethnomethodological Studies of Work. London u. a.: Routledge & Kegan
Paul.
Gäumann-Felix, K., & Hofer, D. (2015). COMET in der Pflegeausbildung Schweiz. In M. Fischer,
F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum
Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand (pp. 93–110).
Münster: LIT.
Georg, W., & Sattel, U. (1992). Einleitung: Von Japan lernen? In Dies (Ed.), Von Japan lernen?
Aspekte von Bildung und Beschäftigung in Japan (p. 7ff). Weinheim: Deutscher Studien Verlag.
Gerecht, M., Steinert, B., Klieme, E., & Döbrich, P. (2007). Skalen zur Schulqualität:
Dokumentation der Erhebungsinstrumente. Pädagogische Entwicklungsbilanzen mit Schulen
(PEB). Frankfurt/Main: Gesellschaft zur Förderung Pädagogischer Forschung. Deutsches
Institut für Internationale Pädagogische Forschung.
Gerstenmaier, J. (1999). Situiertes Lernen. In C. Perleth & A. Ziegler (Eds.), Pädagogische
Psychologie. Bern: Huber.
Gerstenmaier, J. (2004). Domänenspezifisches Wissen als Dimension beruflicher Entwicklung. In
F. Rauner (Ed.), Qualifikationsforschung und Curriculum (pp. 151–163). Bielefeld:
W. Bertelsmann.
Giddens, A. (1972). In A. Giddens (Ed.), Introduction: Durkheim’s writings in sociology and social
psychology (pp. 1–50). Cambridge: Cambridge University Press.
Girmes-Stein, R., & Steffen, R. (1982). Konzept für eine entwicklungsbezogene Teilstudie im
Rahmen der Evaluation des Modellversuchs zur Verbindung des Berufsvorbereitungsjahres
(BVJ) mit dem Berufsgrundschuljahr (BGJ) an berufsbildenden Schulen des Landes NW.
Münster: Zwischenbericht.

Glaser, B., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative
research. Chicago: Aldine Publisher Company.
Granville, G. (2003). “Stop making sense”: Chaos and coherence in the formulation of the Irish
qualifications framework. Journal of Education and Work, 16(3), 259–270.
Gravert, H., & Hüster, W. (2001). Intentionen der KMK bei der Einführung von Lernfeldern. In
P. Gerds & A. Zöller (Eds.), Der Lernfeldansatz der Kultusministerkonferenz (pp. 83–97).
Bielefeld: W. Bertelsmann.
Griffin, P., Gillis, S., & Calvitto, P. (2007). Standards-referenced assessment for vocational
education and training in schools. Australian Journal of Education, 51(1), 19–38.
Grob, U., & Maag Merki, K. (2001). Überfachliche Kompetenzen: Theoretische Grundlegung und
empirische Erprobung eines Indikatorensystems. Bern u. a. O.: Peter Lang.
Grollmann, P. (2003). Professionelle Realität beruflichen Bildungspersonals im institutionellen
Kontext ausgewählter Bildungssysteme. Eine empirische Studie anhand ausgewählter Fälle aus
den USA, Dänemark und Deutschland. Bremen: Institut Technik und Bildung der Universität.
Grollmann, P. (2005). Professionelle Realität von Berufspädagogen im internationalen Vergleich:
eine empirische Studie anhand ausgewählter Beispiele aus Dänemark, Deutschland und den
USA. Berufsbildung, Arbeit und Innovation (Vol. 3). Bielefeld: W. Bertelsmann.
Grollmann, P., Kruse, W., & Rauner, F. (2003). Scenarios and Strategies for VET in Europe (Vol.
130). Dortmund: Landesinstitut Sozialforschungsstelle Dortmund.
Grollmann, P., Kruse, W., & Rauner, F. (Eds.). (2005). Europäisierung beruflicher Bildung.
Bildung und Arbeitswelt (Vol. 14). Münster: LIT.
Grollmann, P., & Rauner, F. (Eds.). (2007). International perspectives on teachers and lecturers in
technical and vocational education. Dordrecht: Springer.
Grollmann, P., Spöttl, G., & Rauner, F. (2006). Europäisierung Beruflicher Bildung – eine
Gestaltungsaufgabe. Reihe: Bildung und Arbeitswelt (Vol. 16). Münster: LIT.
Grollmann, P., Spöttl, G., & Rauner, F. (Eds.). (2007). Europäisierung beruflicher Bildung – eine
Gestaltungsaufgabe. Münster: LIT.
Gruber, H., & Renkl, A. (2000). Die Kluft zwischen Wissen und Handeln: Das Problem des trägen
Wissens. In G. H. Neuweg (Ed.), Wissen – Können – Reflexion. Ausgewählte
Verhältnisbestimmungen (pp. 155–174). Innsbruck: Studien-Verlag.
Grünewald, U., Degen, U., & Krick, H. (1979). Qualifikationsforschung und berufliche Bildung.
Ergebnisse eines Colloquiums des Bundesinstituts für Berufsbildung (BIBB) zum
gegenwärtigen Diskussionsstand in der Qualifikationsforschung. Heft 2. Berlin: BIBB.
Gruschka, A. (1983). Fachliche Kompetenzentwicklung und Identitätsbildung im Medium der
Erzieherausbildung – über den Bildungsgang der Kollegschule und zur Möglichkeit der
Schüler, diesen zum Thema zu machen. In D. Benner, H. Herd, & H. Thiersch (Eds.), Zeitschrift
für Pädagogik 18 (pp. 142–152). Beiheft: Beiträge zum 8. Kongreß der Deutschen Gesellschaft
für Erziehungswissenschaft.
Gruschka, A. (Ed.). (1985). Wie Schüler Erzieher werden. Studie zur Kompetenzentwicklung und
fachlichen Identitätsbildung. (2 Bände). Wetzlar: Büchse der Pandora.
Gruschka, A. (2005). Bildungsstandards oder das Versprechen, Bildungstheorie in empirischer
Bildungsforschung aufzuheben. In L. A. Pongratz, R. Reichenbach, & M. Wimmer (Eds.),
Bildung - Wissen - Kompetenz (pp. 9–29). Bielefeld: Janus Presse.
Guillemin, F., Bombardier, C., & Beaton, D. (1993). Cross-cultural adaptation of health-related
quality of life measures: literature review and proposed guidelines. Journal of Clinical Epide-
miology, 46(12), 1417–1432.
Guldimann, T., & Zutavern, M. (1992). Schüler werden Lernexperten. Arbeitsberichte.
Forschungsstelle der Pädagogischen Hochschule des Kantons St. Gallen. Band 9. Pädagogische
Hochschule St. Gallen.
Haasler, B. (2004). Hochtechnologie und Handarbeit – Eine Studie zur Facharbeit im
Werkzeugbau der Automobilindustrie. Bielefeld: W. Bertelsmann Verlag.
Haasler, B., & Erdwien, B. (2009). Vorbereitung und Durchführung der Untersuchung. In
F. Rauner, B. Haasler, L. Heinemann, & P. Grollmann (Eds.), Messen beruflicher Kompetenzen.
Bd. 1. Grundlagen und Konzeption des KOMET-Projekts. Reihe Bildung und Arbeitswelt.
Münster: LIT.
Haasler, B., Heinemann, L., Rauner, F., Grollmann, P., & Martens, T. (2009). Testentwicklung und
Untersuchungsdesign. In F. Rauner, B. Haasler, L. Heinemann, & P. Grollmann (Eds.), Messen
beruflicher Kompetenzen. Bd. I. Grundlagen und Konzeption des KOMET-Projektes (2. Aufl.)
(pp. 103–140). Bielefeld: W. Bertelsmann.
Haasler, B., & Rauner, F. (2012). Lernen im Betrieb. Konstanz: Christiani.
Hacker, W. (1973). Allgemeine Arbeits- und Ingenieurspsychologie. Bern: Huber.
Hacker, W. (1986). Arbeitspsychologie. Psychische Regulation von Arbeitstätigkeiten. Bern:
Huber.
Hacker, W. (1992). Expertenkönnen – Erkennen und Vermitteln. Göttingen: Verlag für Angewandte
Psychologie.
Hacker, W. (1996). Diagnose von Expertenwissen. Von Abzapf-(Broaching-) zu Aufbau-([Re-]
Construction-)Konzepten. In Sitzungsberichte der sächsischen Akademie der Wissenschaften zu
Leipzig. Bd. 134. Heft 6. Berlin: Akademie-Verlag.
Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory.
Organizational Behavior and Human Performance, 16, 250–279.
Hastedt, H. (1991). Aufklärung und Technik. Grundprobleme einer Ethik der Technik. Frankfurt/
Main: Suhrkamp.
Hattie, J. A. (2003). Teachers make a difference: What is the research evidence? Australian Council
for Educational Research Annual Conference: Building Teacher Quality.
Hattie, J. A. (2011). Influences on students’ learning. www.arts.auckland.ac.nz/education/staff.
Hattie, J., & Yates, C. R. (2015). Lernen sichtbar machen aus psychologischer Perspektive.
Hohengehren: Schneider.
Hauschildt, U., Brown, H., Heinemann, L., & Wedekind, V. (2015). COMET Südafrika. In
M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf den Prüfstand
(pp. 353–374). Berlin: LIT.
Havighurst, R. J. (1972). Developmental Tasks and Education. New York: David McKay.
Heeg, F. J. (2015). Stellenwert des COMET-Kompetenzmodells für duale Ingenieurstudiengänge.
In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf den Prüfstand
(pp. 111–126). Berlin: LIT.
Heid, H. (1999). Über die Vereinbarkeit individueller Bildungsbedürfnisse und betrieblicher
Qualifikationsanforderungen. ZfPäd, 45(2), 231–244.
Heid, H. (2006). Werte und Normen in der Berufsbildung. In R. Arnold & A. Lipsmeier (Eds.),
Handbuch der Berufsbildung (2nd ed., pp. 33–43). Wiesbaden: VS Verlag für
Sozialwissenschaften.
Heidegger, G., Adolph, G., & Laske, G. (1997). Gestaltungsorientierte Innovation in der
Berufsschule. Bremen: Donat.
Heidegger, G., Jacobs, J., Martin, W., Mizdalski, R., & Rauner, F. (1991). Berufsbilder 2000.
Soziale Gestaltung von Arbeit, Technik und Bildung. Opladen: Westdeutscher Verlag.
Heidegger, G., & Rauner, F. (1997). Reformbedarf in der Beruflichen Bildung für die industrielle
Produktion der Zukunft. Düsseldorf: Ministerium für Wirtschaft und Mittelstand, Technologie
und Verkehr NRW.
Heinemann, L., Maurer, A., & Rauner, F. (2011). Modellversuchsergebnisse im Überblick. In
F. Rauner, L. Heinemann, A. Maurer, L. Ji, & Z. Zhao (Eds.), Messen beruflicher Kompetenz.
Bd. III. Drei Jahre KOMET Testerfahrung (pp. 150–209). Münster: LIT.
Heinemann, L., & Rauner, F. (2008). Identität und Engagement: Konstruktion eines Instruments
zur Beschreibung der Entwicklung beruflichen Engagements und beruflicher Identität. A+B
Forschungsberichte 1. Universität Bremen: IBB.
Heinz, W. R. (1995). Arbeit, Beruf und Lebenslauf. Eine Einführung in die berufliche Sozialisation.
München: Juventa.
Heinz, W. R. (2002). Self-socialization and post-traditional society. In R. A. Settersten & T. J.
Owens (Eds.), Advances in life course research. New frontiers in socialization (pp. 41–64).
New York: Elsevier.
Heinz, W. R. (2006). Berufliche Sozialisation. In F. Rauner (Ed.), Handbuch
Berufsbildungsforschung. 2. aktualisierte Auflage (pp. 321–329). Bielefeld: W. Bertelsmann
Verlag.
Heinz, W. R. (2012). Die Perspektive des Lebenslaufs. In B. Dippelhofer-Stiem (Ed.), Enzyklopädie
Erziehungswissenschaften Online (EEO). Weinheim u. Basel: Beltz/Juventa.
Heinze, T. (1972). Zur Kritik an den Technologisierungstendenzen des Unterrichtsprozesses. Die
Deutsche Schule, 6, 347–361.
Hellpach, W. (1922). Sozialpsychologische Analyse des betriebstechnischen Tatbestandes
“Gruppenfabrikation”. In R. Lang & W. Hellpach (Eds.), Gruppenfabrikation (pp. 5–186).
Berlin: Springer.
Heritage, J. (1984). Garfinkel and Ethnomethodology. Cambridge: Polity Press.
Hirtt, N. (2011). Education in the “knowledge economy”: Consequences for democracy. In
S. Aufenanger, F. Hamburger, L. Ludwig, & R. Tippelt (Eds.), Bildung in der Demokratie:
Beiträge zum 22. Kongress der Deutschen Gesellschaft für Erziehungswissenschaft.
Schriftenreihe der Deutschen Gesellschaft für Erziehungswissenschaft (DGfE). Opladen: Budrich.
Hoey, D. (2009). How do we measure up? Benchmarking the WorldSkills Competition. In
R. Maclean & D. Wilson (Eds.), International handbook of education for the changing world
of work (Bridging academic and vocational learning) (Vol. 6, pp. 2827–2840). Dordrecht: Springer.
Hoff, E.-H., Lappe, L., & Lempert, W. (1991). Persönlichkeitsentwicklung in
Facharbeiterbiographien. Bern, Stuttgart: Huber.
Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions and
organizations across nations. Thousand Oaks, Calif: Sage Publications.
Holzkamp, K. (1985). Grundlagen der Psychologie. Frankfurt/Main, New York: Campus.
Howe, F., & Heermeier, R. (Eds.). (1999). Abschlussbericht (MV GOLO): Gestaltungsorientierte
Lern- und Arbeitsaufgaben. Bremen: ITB Universität Bremen.
Industrie- und Handelskammer in Nordrhein-Westfalen. (2010). Eine Handreichung für
Unternehmen und Prüfer. Industrielle Metall- und Elektroberufe: Der Umgang mit dem
varianten Modell.
Jäger, C. (1989). Die kulturelle Einbettung des europäischen Marktes. In M. Haller,
H.-J. Hoffmann-Nowotny, & W. Zapf (Eds.), Kultur und Gesellschaft: Verhandlungen des
24. Deutschen Soziologentages, des 11. Österreichischen Soziologentages und des 8. Kongresses der
Schweizerischen Gesellschaft für Soziologie in Zürich 1988. Frankfurt/Main und New York:
Campus.
Jäger, C., Bieri, L., & Dürrenberger, G. (1987). Berufsethik und Humanisierung der Arbeit.
Schweizerische Zeitschrift für Soziologie, 13(1987), 47–62.
Johnson, D., & Johnson, R. (1999). Learning together and alone: cooperative, competitive, and
individualistic learning. Boston: Allyn and Bacon.
Jongebloed, H.-C. (2006). Vorwort. In T. Retzmann (Ed.), Didaktik der berufsmoralischen Bildung
in Wirtschaft und Verwaltung. Eine fachdidaktische Studie zur Innovation der kaufmännischen
Berufsbildung (pp. VII–XIV). Norderstedt: Books on Demand.
Kalvelage, J., Heinemann, L., Rauner, F., & Zhou, Y. (2015). Messen von Identität und Engage-
ment in beruflichen Bildungsgängen. In M. Fischer, F. Rauner, & Z. Zhao (Eds.),
Kompetenzdiagnostik in der beruflichen Bildung – Methoden zum Erfassen und Entwickeln
beruflicher Kompetenz: COMET auf dem Prüfstand (pp. 305–326). Münster: LIT.
Kanungo, R. N. (1982). Work alienation: An integrative approach. New York: Praeger Publishers.
Kerlinger, F. N. (1964). Foundations of Behavioral Research. New York: Holt, Rinehart and
Winston.
Katzenmeyer, R., Baltes, D., Becker, U., Gille, M., Hubacek, G., Kullmann, B., et al. (2009). Das
COMET-Kompetenzmodell in der Unterrichtspraxis. In F. Rauner et al. (Eds.), Messen
beruflicher Kompetenzen. Bd. II. Ergebnisse COMET 2008. Reihe Bildung und Arbeitswelt
(pp. 161–205). Münster: LIT.
Kelle, U., Kluge, S., & Prein, G. (1993). Strategien der Geltungssicherung in der qualitativen
Sozialforschung. Zur Validitätsproblematik im interpretativen Paradigma. Arbeitspapier Nr. 24.
Hg. Vorstand des Sfb 186. Universität Bremen.
Kern, H., & Sabel, C. F. (1994). Verblasste Tugenden. Zur Krise des Deutschen
Produktionsmodells. In N. Beckenbach & W. v. Treeck (Eds.), Umbrüche gesellschaftlicher
Arbeit. Soziale Welt, Sonderband 9 (pp. 605–625). Göttingen: Schwartz.
Kern, H., & Schumann, M. (1970). Industriearbeit und Arbeiterbewusstsein. Eine empirische
Untersuchung über den Einfluss der aktuellen technischen Entwicklung auf die industrielle
Arbeit und das Arbeiterbewusstsein (Vol. I, II). Frankfurt/Main: Europäische Verlagsanstalt.
Kern, H., & Schumann, M. (1984). Das Ende der Arbeitsteilung? Rationalisierung in der
industriellen Produktion. München: Beck.
Kleemann, F., Krähnke, U., & Matuschek, I. (2009). Interpretative Sozialforschung. Eine
praxisorientierte Einführung. Wiesbaden: VS Verlag für Sozialwissenschaften.
Kleiner, M. (2005). Berufswissenschaftliche Qualifikationsforschung im Kontext der
Curriculumentwicklung. Studien zur Berufspädagogik 18. Hamburg: Dr. Kovac Verlag.
Kleiner, M., Meyer, K., & Rauner, F. (2001). Berufsbildungsplan für den Industriemechaniker.
ITB-Arbeitspapier Nr. 32. Bremen: ITB.
Kleiner, M., Rauner, F., Reinhold, M., & Röben, P. (2002). Curriculum-Design I. Arbeitsaufgaben
für eine moderne Beruflichkeit – Identifizieren und Beschreiben von beruflichen
Arbeitsaufgaben. In: Berufsbildung und Innovation – Instrumente und Methoden zum Planen,
Gestalten und Bewerten (Vol. 2). Konstanz: Christiani.
Kliebard, H. (1999). Schooled to Work. Vocationalism and the American Curriculum, 1876–1946.
New York, NY: Teachers College Press.
Klieme, E., Avenarius, H., Blum, W., Döbrich, P., Gruber, H., Prenzel, M., et al. (2003). Zur
Entwicklung nationaler Bildungsstandards: Eine Expertise. Berlin: Bundesministerium für
Bildung und Forschung.
Klieme, E., & Hartig, J. (2007). Kompetenzkonzepte in den Sozialwissenschaften und im
empirischen Diskurs. Zeitschrift für Erziehungswissenschaft. Sonderheft, 08, 11–29.
Klieme, E., & Leutner, D. (2006). Kompetenzmodelle zur Erfassung individueller Lernergebnisse
und zur Bilanzierung von Bildungsprozessen. Beschreibung eines neu eingerichteten
Schwerpunktprogramms der DFG. Zeitschrift für Pädagogik, 53(6), 876–903.
Klotz, V. K., & Winther, E. (2012). Kompetenzmessung in der kaufmännischen Berufsausbildung:
Zwischen Prozessorientierung und Fachbezug. Eine Analyse der aktuellen Prüfungspraxis.
bwp@-Ausgabe Nr. 22. Juni 2012. Universität Paderborn. URL: http://www.bwpat.de/
ausgabe22/klotz_winther_bwpat22.pdf (Stand: 03.09.2014).
Klüver, J. (1995). Hochschule und Wissenschaftssystem. In L. Huber (Ed.), Enzyklopädie
Erziehungswissenschaft. Bd. 10. Ausbildung und Sozialisation in der Hochschule (pp. 78–91).
KMK – Kultusministerkonferenz. (1999). Handreichungen für die Erarbeitung von
Rahmenlehrplänen der Kultusministerkonferenz (Köln) für den berufsbezogenen Unterricht in
der Berufsschule und ihre Abstimmung mit Ausbildungsordnungen des Bundes für anerkannte
Ausbildungsberufe, Bonn (Stand: 05.02.1999).
KMK – Kultusministerkonferenz. (2004a). Standards für die Lehrerbildung –
Bildungswissenschaften, Bonn (Stand: 16.12.2004).
KMK – Kultusministerkonferenz. (2004b). Argumentationspapier Bildungsstandards der
Kultusministerkonferenz, Bonn (Stand: 16.12.2004).
KMK – Kultusministerkonferenz. (2005). Bildungsstandards im Fach Physik (Chemie/Biologie)
für den mittleren Schulabschluss. München: Luchterhand.
KMK – Sekretariat der Ständigen Konferenz der Kultusminister der Länder in der Bundesrepublik
Deutschland. (1991). Rahmenvereinbarung über die Berufsschule. Beschluss der
Kultusministerkonferenz vom 14./15.3.1991. ZBW, 7, 590–593.
KMK – Sekretariat der Ständigen Konferenz der Kultusminister der Länder in der Bundesrepublik
Deutschland (Hg.) (1996). Handreichungen für die Erarbeitung von Rahmenlehrplänen der
Kultusministerkonferenz für den berufsbezogenen Unterricht in der Berufsschule und ihre
Abstimmung mit Ausbildungsordnungen des Bundes für anerkannte Ausbildungsberufe, Bonn.
Kohlberg, L. (1969). Stage and sequence: The developmental approach to moralization.
New York: Holt.
König, J. (2010). Lehrerprofessionalität – Konzepte und Ergebnisse der internationalen und
nationalen Forschung am Beispiel fächerübergreifender und pädagogischer Kompetenz. In
J. König & B. Hoffmann (Eds.), Professionalität von Lehrkräften – was sollen Lehrkräfte im
Lese- und Schreibunterricht wissen und können (pp. 40–105). Berlin: DGLS.
Kruse, W. (1976). Die Qualifikation der Arbeiterjugend. Eine Studie über die gesellschaftliche
Bedeutung ihrer Veränderung. Frankfurt/Main: Campus.
Kruse, W. (1986). Von der Notwendigkeit des Arbeitsprozeßwissens. In J. Schweitzer (Ed.),
Bildung für eine menschliche Zukunft (pp. 188–193). Weinheim, Basel: Juventa Verlag.
Kunter, M., Schümer, G., Artelt, C., Baumert, J., Klieme, E., Neubrand, M., et al. (2003). Pisa
2000 – Dokumentation der Erhebungsinstrumente. Berlin: MPI für Bildungsforschung.
Kurtz, T. (2001). Aspekte des Berufs in der Moderne. Opladen: Leske + Budrich.
Kurtz, T. (2005). Die Berufsform der Gesellschaft. Weilerswist: Velbrück Wissenschaft.
Lamnek, S. (1988/89). Qualitative Sozialforschung. Bde. 1/2. Methodologie. München.
Laur-Ernst, U. (Ed.). (1990). Neue Fabrikstrukturen – veränderte Qualifikationen. Ergebnisse eines
Workshops des Bundesinstituts für Berufsbildung. Berlin: BIBB.
Lave, J., & Wenger, E. (1991). Situated Learning. Legitimate Peripheral Participation. New York:
Cambridge University Press.
Lechler, P. (1982). Kommunikative Validierung. In G. L. Huber & H. Mandl (Eds.), Verbale Daten
(pp. 243–258). Beltz: Weinheim.
Lehberger, J. (2013). Arbeitsprozesswissen – didaktisches Zentrum für Bildung und Qualifizierung.
Ein kritisch-konstruktiver Beitrag zum Lernfeldkonzept. Berlin: LIT.
Lehberger, J. (2015). Berufliches Arbeitsprozesswissen als eine Dimension des COMET-
Messverfahrens. In M. Fischer, F. Rauner, & Z. Zhao (Hrsg.), Kompetenzdiagnostik in der
beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET
auf dem Prüfstand. Bildung und Arbeitswelt (Bd. 30, pp. 209–224). Münster: LIT.
Lehberger, J., & Rauner, F. (2014). Berufliches Lernen in Lernfeldern. Ein Leitfaden für die
Gestaltung und Organisation projektförmigen Lernens in der Berufsschule. Bremen:
Universität Bremen: I:BB.
Lehmann, R. H., & Seeber, S. (Eds.). (2007). ULME III. Untersuchungen von Leistungen, Moti-
vation und Einstellungen der Schülerinnen und Schüler der Berufsschulen. Hamburg: Behörde
für Bildung und Sport.
Lempert, W. (1995). Berufliche Sozialisation und berufliches Lernen. In R. Arnold & A. Lipsmeier
(Eds.), Handbuch der Berufsbildung. Verlag B. Budrich: Opladen.
Lempert, W. (2000). Berufliche Sozialisation oder was Berufe aus Menschen machen. Eine
Einführung (2nd ed.). Baltmannsweiler: Schneider Verlag.
Lempert, W. (2006). Berufliche Sozialisation. Persönlichkeitsentwicklung in der betrieblichen
Ausbildung und Arbeit. Baltmannsweiler: Schneider Verlag.
Lempert, W. (2007a). Vom “impliziten Wissen” zur soziotopologisch reflektierten Theorie.
Ermunterung zur Untertunnelung einer verwirrenden Kontroverse. Zeitschrift für Berufs- und
Wirtschaftspädagogik, 103(4), 581–596.
Lempert, W. (2007b). Nochmals: Beruf ohne Zukunft? Berufspädagogik ohne Beruf? Postskriptum
zur Diskussion des Buches von Thomas Kurtz “Die Berufsform der Gesellschaft”. Zeitschrift für
Berufs- und Wirtschaftspädagogik, 103(3), 461–467.
Lenger, A. (2016). Der ökonomische Fachhabitus – professionstheoretische Konsequenzen für das
Studium der Wirtschaftswissenschaften. In G. Minnameier (Ed.), Ethik und Beruf.
Interdisziplinäre Zugänge (pp. 157–176). Bielefeld: wbv.
Lenk, H., & Ropohl, G. (Eds.). (1987). Technik und Ethik. Stuttgart: Reclam.
Lenzen, D., & Blankertz, H. (1973). Didaktik und Kommunikation: Zur strukturalen Begründung
der Didaktik und zur didaktischen Struktur sprachlicher Interaktion. Athenäum: Frankfurt am
Main.
Lüdtke, G. (1974). Harmonisierung und Objektivierung von Prüfungen. PAL Schriftreihe Bd. 1.
Konstanz: Christiani.
Lutz, B. (1988). Zum Verhältnis von Analyse und Gestaltung der sozialwissenschaftlichen
Technikforschung. In F. Rauner (Ed.), “Gestaltung” – eine neue gesellschaftliche Praxis.
Bonn: Neue Gesellschaft.
Martens, T. (2015). Wie kann berufliche Kompetenz gemessen werden? Das Beispiel COMET. In
M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem Prüfstand.
Berlin, Münster: LIT.
Martens, T., Heinemann, L., Maurer, A., Rauner, F., Ji, L., & Zhao, Z. (2011). Ergebnisse zum
Messverfahren [COMET]. In F. Rauner et al. (Eds.), Messen beruflicher Kompetenzen. Bd. III.
Drei Jahre COMET-Testerfahrung (pp. 90–126). Münster: LIT.
Martens, T., & Rost, J. (1998). Der Zusammenhang von wahrgenommener Bedrohung durch
Umweltgefahren und der Ausbildung von Handlungsintentionen. Zeitschrift für Experimentelle
Psychologie., 45(4), 345–364.
Martens, T., & Rost, J. (2009). Zum Zusammenhang von Struktur und Modellierung beruflicher
Kompetenzen. In F. Rauner, B. Haasler, L. Heinemann, & P. Grollmann (Eds.), Messen
beruflicher Kompetenzen. Bd. I. Grundlagen und Konzeption des COMET-Projekts
(pp. 91–95). Münster: LIT.
Mayring, P. (1988). Qualitative Inhaltsanalyse. Grundlagen und Techniken (2. Auflage).
Weinheim: Deutscher Studien Verlag.
McCormick, E. (1979). Job analysis: Methods and applications. New York: AMACOM.
Meyer, J. P., & Allen, N. J. (1991). A three-component conceptualization of organizational
commitment. Human Resource Management Review, 1, 61–89.
Meyer-Abich, K. N. (1988). Wissenschaft für die Zukunft. Holistisches Denken in ökologischer und
gesellschaftlicher Verantwortung. München: Beck.
Minnameier, G. (2001). Bildungspolitische Ziele, wissenschaftliche Theorien und methodisch-
praktisches Handeln – auch ein Plädoyer für “Technologieführerschaft” im Bildungsbereich.
In H. Heid, G. Minnameier, & E. Wuttke (Eds.), Fortschritte in der Berufsbildung? ZBW.
Beiheft 16 (pp. 13–29). Stuttgart: Steiner.
Monseur, C., Baye, A., Lafontaine, D., & Quittre, V. (2011). PISA test format assessment and the
local independence assumption. IERI Monographs Series. Issues and Methodologies in Large-
Scale Assessments. 4, 131–158. http://hdl.handle.net/2268/103137
Müller, W. (1995). Der Situationsfilm – Ein Medium partizipativer Organisationsentwicklung. In
G. Dybowski, H. Pütz, & F. Rauner (Hg.), Berufsbildung und Organisationsentwicklung.
„Perspektiven, Modelle, Forschungsfragen“ (pp. 333–344). Bremen: Donat.
Müller-Fohrbrodt, G. (1973). Wie sind Lehrer wirklich? Ideale – Vorurteile – Fakten. Stuttgart: Klett.
National Automotive Technicians Education. (1996). ASE certification for automobile technician
training programs. Herndon, VA.
Nehls, H., & Lakies, T. (2006). Berufsbildungsgesetz. Basiskommentar. Frankfurt: Bund.
Neuweg, G. H. (1999). Könnerschaft und implizites Wissen. Münster: Waxmann.
Neuweg, G. H. (Ed.). (2000). Wissen – Können – Reflexion. Ausgewählte Verhältnisbestimmungen.
Innsbruck, Wien, München: Studien-Verlag.
Nickolaus, R., Gschwendtner, T., & Abele, S. (2009). Die Validität von Simulationsaufgaben am
Beispiel der Diagnosekompetenz von Kfz-Mechatronikern. Stuttgart: Institut für
Berufspädagogik.
Nida-Rümelin, J. (2011). Die Optimierungsfalle. Philosophie einer humanen Ökonomie. München:
Irisiana.
Norton, R. E. (1997). DACUM handbook. The national centre on education and training for
employment. Columbus/Ohio: The Ohio State University.
OECD. (2009). Länderbericht zur Berufsbildung in der Schweiz. Learning for Jobs, OECD Studie
zur Berufsbildung Schweiz. http://www.bbt.admin.ch/themen/internationales/01020/index.
html?lang=de (Zugriff 11.01.2016).
Oser, F. (1997). Standards der Lehrerbildung. Teil 1. Berufliche Kompetenzen, die hohen
Qualitätsmerkmalen entsprechen. Beiträge zur Lehrerbildung, 15(1), 26–37.
Oser, F., Curcio, G. P., & Düggeli, A. (2007). Kompetenzmessung in der Lehrerbildung als
Notwendigkeit – Fragen und Zusammenhänge. Beiträge zur Lehrerbildung, 25(1), 14–26.
Ott, B. (1998). Ganzheitliche Berufsbildung. Theorie und Praxis handlungsorientierter
Techniklehre in Schule und Betrieb (2nd ed.). Stuttgart: Steiner.
Pätzold, G. (1995). Vermittlung von Fachkompetenz in der Berufsbildung. In R. Arnold &
A. Lipsmeier (Eds.), Handbuch der Berufsbildung (pp. 157–170). Opladen: Leske + Budrich.
Pätzold, G., Drees, G., & Thiele, H. (1998). Kooperation in der beruflichen Bildung. Zur
Zusammenarbeit von Ausbildern und Berufsschullehrern im Metall- und Elektrobereich.
Baltmannsweiler: Schneider Hohengehren. Wirtschaft und Berufserziehung, 4, 89/98.
Pätzold, G., & Walden, G. (Eds.). (1995). Lernorte im dualen System der Berufsbildung. Reihe:
Berichte zur beruflichen Bildung, Heft 177. Hg. vom BIBB. Bielefeld: W. Bertelsmann.
Petermann, W. (1995). Fotographie und Filmanalyse. In U. Flick, E. von Kardoff, H. Keupp, L. von
Rosenstiel, & S. Wolff (Hg.), Handbuch qualitative Sozialforschung. Grundlagen, Konzepte,
Methoden und Anwendungen (2. Aufl, pp. 269–272). Weinheim: Beltz.
Petersen, A. W., & Wehmeyer, C. (2001). Evaluation der neuen IT-Berufe. Forschungskonzepte
und Ergebnisse der bundesweiten BiBB-IT-Studie. In A. W. Petersen, F. Rauner, & F. Stuber
(Eds.), IT-gestützte Facharbeit. Gestaltungsorientierte Berufsbildung. Reihe: Bildung und
Arbeitswelt (Vol. 4, pp. 283–310). Baden-Baden: Nomos.
Piaget, J. (1973). Äquilibration der kognitiven Strukturen. Stuttgart: Klett.
Piening, D., Frenzel, J., Heinemann, L., & Rauner, F. (2014). Berufliche Kompetenzen messen. Das
Modellversuchsprojekt KOMET NRW. Zweiter Zwischenbericht (Juli 2014). IBB, Universität
Bremen. http://www.ibb.uni-bremen.de.
Piening, D., & Rauner, F. (2010). Umgang mit Heterogenität. Eine Handreichung des Projektes
KOMET. Bremen: Universität Bremen I:BB.
Piening, D., & Rauner, F. (2014). Kosten, Nutzen und Qualität der Berufsausbildung. Berlin: LIT.
Pies, I. (2016). Individualethik versus Institutionenethik? – Zur Moral (in) der Marktwirtschaft. In
G. Minnameier (Ed.), Ethik und Beruf. Interdisziplinäre Zugänge (pp. 17–39). Bielefeld:
Bertelsmann Verlag.
Polanyi, M. (1966a). The tacit dimension. London: Routledge & Kegan Paul.
Polanyi, M. (1966b). The tacit dimension. Garden City: Doubleday & Company.
Polanyi, M. (1985). Implizites Wissen. Frankfurt/Main: Suhrkamp (orig.: The Tacit Dimension.
1966).
Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of a
scientific conception: towards a theory of conceptual change. Science Education, 66(2),
201–227.
Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in
simple mediation models. Behavior Research Methods, Instruments, & Computers, 36(4),
717–731.
Prenzel, M., Baumert, J., Blum, W., Lehmann, R., Leutner, D., Neubrand, M., et al. (Eds.). (2004).
PISA 2003. Der Bildungsstand der Jugendlichen in Deutschland – Ergebnisse des zweiten
internationalen Vergleichs. Münster: Waxmann.
Przygodda, K., & Bauer, W. (2004). Ansätze berufswissenschaftlicher Qualifikationsforschung im
BLK-Programm “Neue Lernkonzepte in der dualen Berufsausbildung”. In F. Rauner (Ed.),
Qualifikationsforschung und Curriculum. Analysieren und Gestalten beruflicher Arbeit und
Bildung. Reihe: Berufsbildung, Arbeit und Innovation (Vol. 25, pp. 61–79). Bielefeld:
W. Bertelsmann.
Rademacker, H. (1975). Analyse psychometrischer Verfahren der Erfolgskontrolle und der
Leistungsmessung hinsichtlich ihrer didaktischen Implikationen. In Programmierte Prüfungen:
Problematik und Praxis. Schriften zur Berufsbildungsforschung (Vol. 25, pp. 63–100). Hanno-
ver: Schroedel.
Randall, D. M. (1990). The consequences of organizational commitment. Administrative Science
Quarterly, 22, 46–56.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (2nd ed.).
Chicago: University of Chicago Press.
Rauner, F. (1986). Elektrotechnik Grundbildung. Überlegungen zur Techniklehre im Schwerpunkt
Elektrotechnik der Kollegschule. Soest: Landesinstitut für Schule und Weiterbildung.
Rauner, F. (1988). Die Befähigung zur (Mit)Gestaltung von Arbeit und Technik als Leitidee
beruflicher Bildung. In G. Heidegger, P. Gerds, & K. Weisenbach (Eds.), Gestaltung von Arbeit
und Technik – Ein Ziel beruflicher Bildung (pp. 32–51). Frankfurt/Main, New York: Campus.
Rauner, F. (1995). Gestaltung von Arbeit und Technik. In R. Arnold & A. Lipsmeier (Eds.),
Handbuch der Berufsbildung (pp. 50–64). Opladen: Leske + Budrich.
Rauner, F. (1997). Automobil-Service im internationalen Vergleich. In F. Rauner, G. Spöttl, &
W. Micknass (Eds.), Service, Qualifizierung und Vertrieb im internationalen Automobil-Sektor:
Ergebnisse des Automobil-Welt-Congresses am 15. und 16. Oktober 1996 in München
(pp. 35–47). Bremen: Donat-Verlag.
Rauner, F. (1999). Entwicklungslogisch strukturierte berufliche Curricula: Vom Neuling zur
reflektierten Meisterschaft. Zeitschrift für Berufs- und Wirtschaftspädagogik (ZBW), 95(3),
424–446. Stuttgart: Franz Steiner Verlag.
Rauner, F. (2000). Zukunft der Facharbeit. In J.-P. Pahl, F. Rauner, & G. Spöttl (Eds.), Berufliches
Arbeitsprozesswissen (pp. 49–60). Baden-Baden: Nomos.
Rauner, F. (2002a). Qualifikationsforschung und Curriculum. In M. Fischer & F. Rauner (Eds.),
Lernfeld: Arbeitsprozess (pp. 317–339). Baden-Baden: Nomos.
Rauner, F. (2002b). Berufliche Kompetenzentwicklung – vom Novizen zum Experten. In
P. Dehnbostel, U. Elsholz, J. Meister, & J. Meyer-Henk (Eds.), Vernetzte
Kompetenzentwicklung. Alternative Positionen zur Weiterbildung (pp. 111–132). Berlin: Edi-
tion Sigma.
Rauner, F. (2004). Eine transferorientierte Modellversuchstypologie – Anregung zur
Wiederbelebung der Modellversuchspraxis als einem Innovationsinstrument der
Bildungsreform (Teil 2). Zeitschrift für Berufs- und Wirtschaftspädagogik, 100, 424–447.
Rauner, F. (2004a). Qualifikationsforschung und Curriculum. Analysieren und Gestalten
beruflicher Arbeit und Bildung. In Berufsbildung, Arbeit und Innovation (Reihe). Band
25 Forschungsberichte. Bielefeld: W. Bertelsmann Verlag.
Rauner, F. (2004b). Praktisches Wissen und berufliche Handlungskompetenz. Reihe:
ITB-Forschungsberichte, Nr. 14. Universität Bremen: ITB.
Rauner, F. (2005). Offene dynamische Kernberufe als Dreh- und Angelpunkt für eine europäische
Berufsbildung. In P. Grollmann, W. Kruse, & F. Rauner (Eds.), Europäische Berufliche Bildung
(pp. 17–31). Münster: LIT.
Rauner, F. (2006). Qualifikations- und Ausbildungsordnungsforschung. In F. Rauner (Hg.),
Handbuch Berufsbildungsforschung. 2. aktualisierte Auflage (pp. 240–247). Bielefeld: W.
Bertelsmann.
Rauner, F. (2007). Praktisches Wissen und berufliche Handlungskompetenz. In Europäische
Zeitschrift für Berufsbildung. Nr. 40, 2007/1 (pp. 57–72). Thessaloniki: Cedefop – Europäisches
Zentrum für die Förderung der Berufsbildung.
Rauner, F. (2015a). Messen beruflicher Kompetenz von Berufsschullehrern. In M. Fischer,
F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung – Methoden
zum Erfassen und Entwickeln beruflicher Kompetenz. KOMET auf dem Prüfstand
(pp. 413–436). Münster: LIT.
Rauner, F. (2015b). Machbarkeitsstudie. Anwenden des KOMET-Testverfahrens für Prüfungen in
der beruflichen Bildung. (Unter Mitarbeit von Klaus Bourdick, Jenny Frenzel, Dorothea
Piening). Bremen: Universität Bremen, I:BB.
Rauner, F. (2017). Grundlagen der beruflichen Bildung. Mitgestalten der Arbeitswelt. Bielefeld:
wbv.
Rauner, F. (2018a). Der Weg aus der Akademisierungsfalle. Die Architektur paralleler
Bildungswege. Münster: LIT.
Rauner, F., & Piening, D. (2014). Heterogenität der Kompetenzausprägung in der beruflichen
Bildung. A+B-Forschungsberichte Nr. 14/2014. Bremen: Universität Bremen, I:BB.
Rauner, F., Bourdick, H., Frenzel, J., & Piening, D. (2015). Anwendung des COMET-
Testverfahrens für Prüfungen in der beruflichen Bildung. Machbarkeitsstudie. Bremen: I:BB.
Rauner, F., & Bremer, R. (2004). Bildung im Medium beruflicher Arbeitsprozesse. Die
berufspädagogische Entschlüsselung beruflicher Kompetenzen im Konflikt zwischen
bildungstheoretischer Normierung und Praxisaffirmation. In: Bildung im Medium beruflicher
Arbeit. Sonderdruck. ZfPäd, 50(2), 149–161.
Rauner, F., Frenzel, J., Piening, D., & Bachmann, N. (2015). Engagement und
Ausbildungsorganisation. Einstellungen Sächsischer Auszubildender zu ihrem Beruf und ihrer
Ausbildung. Eine Studie im Auftrage der Landesinitiative “Steigerung der Attraktivität, Qualität
und Rentabilität der dualen Berufsausbildung in Sachsen”. Bremen: Universität Bremen I:BB.
Rauner, F., Grollmann, P., & Martens, T. (2007). Messen beruflicher Kompetenz(entwicklung).
ITB-Forschungsbericht 21. Bremen: Institut Technik und Bildung.
Rauner, F., Schön, M., Gerlach, H., & Reinhold, M. (2001). Berufsbildungsplan für den
Industrieelektroniker. ITB-Arbeitspapiere 31. Bremen: Universität Bremen, ITB.
Rauner, F., & Spöttl, G. (2002). Der Kfz-Mechatroniker – Vom Neuling zum Experten. Bielefeld:
Bertelsmann.
Rauner, F., Zhao, Z., & Ji, L. (2010). Empirische Forschung zum Messen Beruflicher Kompetenz
der Auszubildenden und Studenten. Beijing: Verlag Tsinghua Universität.
Reckwitz, A. (2003, August). Grundelemente einer Theorie sozialer Praktiken: eine
sozialtheoretische Perspektive. Zeitschrift für Soziologie, 32(4), 282–301.
Ripper, J., Weisschuh, B., & DaimlerChrysler AG (1999). Ausbildung im Dialog: das
ganzheitliche Beurteilungsverfahren für die betriebliche Berufsausbildung. Konstanz:
Christiani.
Röben, P. (2004). Kompetenzentwicklung durch Arbeitsprozesswissen. In K. Jenewein, P. Knauth,
P. Röben, & G. Zülch (Eds.), Kompetenzentwicklung in Arbeitsprozessen (pp. 11–34). Baden-
Baden: Nomos.
Röben, P. (2006). Berufswissenschaftliche Aufgabenanalyse. In F. Rauner (Ed.), Handbuch
Berufsbildungsforschung. 2. aktualisierte Aufl (pp. 606–611). Bielefeld: W. Bertelsmann.
Rost, J. (1999). Was ist aus dem Rasch-Modell geworden? Psychologische Rundschau, 50(3),
171–182.
Rost, J. (2004a). Lehrbuch Testtheorie - Testkonstruktion (2nd ed.). Bern: Huber.
Rost, J. (2004b). Psychometrische Modelle zur Überprüfung von Bildungsstandards anhand von
Kompetenzmodellen. Zeitschrift für Pädagogik, 50(5), 662–678.
Rost, J., & von Davier, M. (1994). A conditional item-fit index for Rasch models. Applied
Psychological Measurement, 18(2), 171–182.
Roth, H. (1971). Pädagogische Anthropologie. Bd. II: Entwicklung und Erziehung. Grundlagen
einer Entwicklungspädagogik. Hannover: Schroedel.
Sachverständigenkommission Arbeit und Technik. (1986). Forschungsperspektiven zum
Problemfeld Arbeit und Technik. Bonn: Verlag Neue Gesellschaft.
Sachverständigenkommission Arbeit und Technik. (1988). Arbeit und Technik. Ein Forschungs-
und Entwicklungsprogramm. Bonn: Verlag Neue Gesellschaft.
Schecker, H., & Parchmann, I. (2006). Modellierung naturwissenschaftlicher Kompetenz.
Zeitschrift für Didaktik der Naturwissenschaften (ZfDN), 12, 45–66.
Scheele, B. (1995). Dialogische Hermeneutik. In U. Flick, E. von Kardoff, H. Keupp, L. von
Rosenstiel, & S. Wolff (Hg.), Handbuch Qualitativer Sozialforschung (pp. 274–276).
Weinheim: Beltz.
Schein, E. (1973). Professional education. New York: McGraw-Hill.
Schelten, A. (1994). Einführung in die Berufspädagogik (2nd ed.). Stuttgart: Franz Steiner.
Schelten, A. (1997). Testbeurteilung und Testerstellung. Stuttgart: Franz Steiner.
Schmidt, H. (1995). Berufsbildungsforschung. In R. Arnold & A. Lipsmeier (Eds.), Handbuch der
Berufsbildung (pp. 482–491). Opladen: Leske+Budrich.
Schoen, D. A. (1983). The reflective practitioner: How professionals think in action. New York:
Basic Books.
Scholz, T. (2013). Beitrag des Koordinators der Industriemechaniker-Arbeitsgruppe. In:
Forschungsgruppe Berufsbildungsforschung (I:BB): Berufliche Kompetenzen messen – das
Modellversuchsprojekt KOMET (Metall). Abschlussbericht. Bremen: Universität Bremen, I:BB.
Scholz, T. (2015). Warum das KOMET-Projekt “Industriemechaniker (Hessen)” eine so
unerwartete Dynamik entfaltete. In M. Fischer, F. Rauner, & Z. Zhao (Eds.),
Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln
beruflicher Kompetenz: KOMET auf dem Prüfstand (pp. 149–161). Berlin: LIT.
Schreier, N. (2000). Integration von Arbeiten und Lernen durch eine arbeitsprozessorientierte
Qualifizierungskonzentration beim Einsatz tutorieller Diagnosesysteme im Kfz-Service. In:
Pahl, J.-P., Rauner, F., Spöttl, G. (Hg.) Berufliches Arbeitsprozesswissen. Ein
Forschungsgegenstand der Berufsfeldwissenschaften (pp. 289–300), Baden-Baden.
Sennett, R. (1998). Der flexible Mensch. Die Kultur des neuen Kapitalismus (original edition: The
Corrosion of Character. New York). Berlin.
Sennett, R. (2008). Handwerk. Berlin: Berlin-Verlag (German translation of: The Craftsman.
New Haven and London: Yale University Press).
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability.
Psychological Bulletin, 86(2), 420–428.
Skowronek, H. (1969). Lernen und Lernfähigkeit. München: Juventa.
Skule, S., & Reichborn, A. N. (2002). Learning-conducive work. A survey of learning conditions in
Norwegian workplaces. Luxembourg: Office for Official Publications of the European
Communities.
Spöttl, G. (2006). Experten-facharbeiter-workshops. In F. Rauner (Ed.), Handbuch
Berufsbildungsforschung (2nd ed., pp. 611–616). Bielefeld: W. Bertelsmann.
Stegemann, C., von Eerde, K., & Piening, D. (2015). KOMET Nordrhein-Westfalen: Erste
Erfahrungen in einem kaufmännischen Berufsfeld. In M. Fischer, F. Rauner, & Z. Zhao
(Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und
Entwickeln beruflicher Kompetenz: KOMET auf dem Prüfstand (pp. 127–136). Berlin: LIT.
Sternberg, R. J., & Grigorenko, E. L. (Eds.). (2003). The psychology of abilities, competencies, and
expertise. Cambridge: Cambridge University Press.
Steyer, R., & Eyd, M. (2003). Messen und testen. Berlin: Springer.
Straka, A., Meyer-Siever, K., & Rosendahl, J. (2006). Laborexperimente und Quasi-Experimente.
In F. Rauner (Ed.), Handbuch Berufsbildungsforschung. 2. aktual. Aufl (pp. 647–652). Biele-
feld: wbv.
Stuart, M. (2010). The national skills development handbook 2010/11. Rainbow SA.
Suppes, P., & Zinnes, J. L. (1963). Basic measurement theory. In R. D. Luce et al. (Eds.), Handbook
of mathematical psychology. I (pp. 1–76). New York: Wiley.
Taylor, I. A. (1975). An emerging view of creative actions. In I. A. Taylor & J. W. Getzels (Eds.),
Perspectives in creativity (pp. 297–325). Chicago, IL: Aldine.
Tenorth, H.-E. (2009). Ideen und Konzepte von Bildungsstandards. In R. Wernstedt & M. John-
Ohnesorg (Eds.), Bildungsstandards als Instrument schulischer Qualitätsentwicklung
(pp. 13–16). Berlin: Friedrich-Ebert-Stiftung.
Terhart, E. (1998). Lehrerberuf. Arbeitsplatz, Biografie und Profession. In H. Altrichter et al. (Eds.),
Handbuch der Schulentwicklung (pp. 560–585). Innsbruck, Weinheim: Studienverlag.
Tiemeyer, E. (2015). Nordrhein-Westfalen klinkt sich ein. Ziele und erste Erfahrungen mit einem
ambitionierten COMET-Projekt. In M. Fischer, F. Rauner, & Z. Zhao (Eds.),
Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln
beruflicher Kompetenz: KOMET auf dem Prüfstand (pp. 73–91). Berlin: LIT.
Tomaszewski, T. (1981). Struktur, Funktion und Steuerungsmechanismus menschlicher Tätigkeit.
In T. Tomaszewski (Ed.), Zur Psychologie der Tätigkeit (pp. 11–33). Berlin: Deutscher Verlag
der Wissenschaft.
Ulich, E. (1994). Arbeitspsychologie (3rd ed.). Stuttgart, Zürich.
Ulrich, O. (1987). Technikfolgen und Parlamentsreform. Plädoyer für mehr parlamentarische
Kompetenz bei der Technikgestaltung. In: Aus Politik und Zeitgeschichte. Beilage zu „Das
Parlament“ v. 9.5.1987, 15–25.
VDI. (1991). Verband deutscher Ingenieure. Technikbewertung. Begriffe und Grundlagen. VDI
3780, März 1990.
Volpert, W., Oesterreich, R., Gablenz-Kollakowicz, S., Grogoll, T., & Resch, M. (1983). Verfahren
zur Ermittlung von Regulationserfordernissen in der Arbeitstätigkeit (VERA). Köln.
Volpert, W. (1987). Psychische Regulation von Arbeitstätigkeiten. In U. Kleinbeck & J. Rutenfranz
(Eds.), Enzyklopädie der Psychologie, Themenbereich D, Serie III (Vol. 1, pp. 1–42).
Göttingen: Hogrefe.
Volpert, W. (2003). Wie wir handeln, was wir können. Ein Disput als Einführung in die
Handlungspsychologie (3rd ed.). Sottrum: artefact.
von Davier, M. (1997). Bootstrapping goodness-of-fit statistics for sparse categorical data. Results
of a monte carlo study. Methods of Psychological Research Online, 2(2), 29–48.
von Davier, M., & Carstensen, C. H. (Eds.). (2007). Multivariate and mixture distribution rasch
models – extensions and applications. New York: Springer.
von Davier, M., & Rost, J. (1996). Die Erfassung transsituativer Copingstile durch Stimulus-
Response Inventare. Diagnostica, 42(4), 313–331.
Vosniadou, S. (2008). International handbook of research on conceptual change (2nd ed.).
New York: Routledge.
Weber, M. (1922). Gesammelte Aufsätze zur Religionssoziologie I (5. Auflage 1982). Tübingen:
UTB.
Wehner, T., & Dick, M. (2001). Die Umbewertung des Wissens in der betrieblichen Lebenswelt:
Positionen der Arbeitspsychologie und betroffener Akteure. In Wissen in Unternehmen.
Konzepte, Maßnahmen, Methoden (pp. 89–117). Berlin: Erich Schmidt.
Weinert, F. E. (1996). “Der gute Lehrer”, die “gute Lehrerin” im Spiegel der Wissenschaft. Beiträge
zur Lehrerbildung, 14(2), 141–150. www.bzl-online.ch.
Weinert, F. E. (2001). Concept of competence: A conceptual clarification. In D. S. Rychen & L. H.
Salganik (Eds.), Defining and selecting key competencies (pp. 45–65). Seattle: Hogrefe &
Huber.
Weiß, R. (2011). Prüfungen in der beruflichen Bildung. In: Severing, E, Weiß, R. (Hg.). Prüfungen
und Zertifizierungen in der beruflichen Bildung, Bonn. http://www.bibb.de/dokumente/pdf/
a12_voevz_agbfn_10_weiss_1.pdf
Weniger, E. (1957). Die Eigenständigkeit der Erziehung in Theorie und Praxis. Weinheim: Beltz.
Winther, E., & Achtenhagen, F. (2008). Kompetenzstrukturmodell für die kaufmännische
Ausbildung. Adaptierbare Forschungslinien und theoretische Ausgestaltung. Zeitschrift für
Berufs- und Wirtschaftspädagogik., 104, 511–538.
Wirtz, M., & Caspar, F. (2002). Beurteilerübereinstimmung und Beurteilerreliabilität. Göttingen:
Hogrefe.
Womack, J. P., Jones, D. T., & Roos, D. (1990). The machine that changed the world. New York,
Oxford, Singapore, Sydney: Macmillan.
Womack, J. P., Jones, D. T., & Roos, D. (Eds.). (1991). Die zweite Revolution in der
Automobilindustrie: Konsequenzen aus der weltweiten Studie aus dem Massachusetts Institute
of Technology. Frankfurt/Main, New York: Campus Verlag.
Wosnitza, M., & Eugster, B. (2001). MIZEBA – ein berufsfeldübergreifendes Instrument zur
Messung der betrieblichen Ausbildungssituation? Eine Validierung in der gewerblich-
technischen Ausbildung. Empirische Pädagogik, 15(3), 411–426.
Wyman, N. (2015). Job U: How to find wealth and success by developing the skills companies
actually need. New York: Crown Business.
Young, M. (2007). Auf dem Weg zu einem europäischen Qualifikationsrahmen: Einige kritische
Bemerkungen. In P. Grollmann, G. Spöttl, & F. Rauner (Eds.), Europäisierung beruflicher
Bildung – eine Gestaltungsaufgabe. Hamburg: LIT.
Young, M. (2009). National qualifications frameworks: Their feasibility for effective implementation
in developing countries. Skills Working Paper No. 22. Geneva: ILO.
Zentralverband der Elektrotechnischen Industrie (ZVEI). (1973). Ausbildungs-Handbuch für die
Stufenausbildung elektrotechnischer Berufe (Vol. 7, 2nd ed.). Frankfurt/Main: ZVEI-
Schriftenreihe.
Zhao, Z. (2014). KOMET-China: Die Schritte auf dem Weg zu einem nationalen Schlüsselprojekt
der Qualitätssicherung in der Beruflichen Bildung. Zeitschrift für Berufs- und
Wirtschaftspädagogik, 110(3), 442–448.
Zhao, Z. (2015). Schritte auf dem Weg zu einer Kompetenzentwicklung für Lehrer und Dozenten
beruflicher Bildung in China. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik
in der beruflichen Bildung (pp. 437–449). Münster: LIT.
Zhao, Z., Rauner, F., & Zhou, Y. (2015). Messen von beruflicher Kompetenz von Auszubildenden
und Studierenden des Kfz-Servicesektors im internationalen Vergleich: Deutschland – China. In
M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzentwicklung in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand
(pp. 393–410). Berlin: LIT.
Zhao, Z., Zhang, Z., & Rauner, F. (2016). KOMET-based professional competence assessment for
VET teachers in China. In M. Pilz (Ed.), Youth in transition from school to work – vocational
education and training (VET) in times of economic crises. Dordrecht: Springer.
Zhao, Z., & Zhuang, R. (2012). Research and development of the curriculum for the secondary
teachers’ qualification. Education and Training, 5, 12–15.
Zhao, Z., & Zhuang, R. (2013). Messen beruflicher Kompetenz von Auszubildenden und
Studierenden berufsbildender (Hoch)Schulen in China. Zeitschrift für Berufs- und
Wirtschaftspädagogik, 109(1), 132–140.
Zhou, Y., Rauner, F., & Zhao, Z. (2015). Messen beruflicher Kompetenz von Auszubildenden und
Studierenden des Kfz-Service Sektor im internationalen Vergleich: Deutschland – China. In
M. Fischer, R. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem Prüfstand
(pp. 393–410). Münster: LIT.
Zhuang, R., & Ji, L. (2015). Analyse der interkulturellen Anwendung der COMET-
Kompetenzdiagnostik. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in
der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz:
COMET auf dem Prüfstand (pp. 341–352). Berlin: LIT.
Zhuang, R., & Zhao, Z. (2012). Empirische Forschung zum Messen Beruflicher Kompetenz der
Auszubildenden und Studenten. Peking: Verlag Tsinghua Universität.
Zimmermann, M., Wild, K.-P., & Müller, W. (1999). Das “Mannheimer Inventar zur Erfassung
betrieblicher Ausbildungssituationen” (MIZEBA). Zeitschrift für Berufs- und
Wirtschaftspädagogik, 95(3), 373–402.
Zöller, A., & Gerds, P. (Eds.). (2003). Qualität sichern und steigern. Personal- und
Organisationsentwicklung als Herausforderung beruflicher Schulen (pp. 333–355). Bielefeld:
Bertelsmann.

List of COMET Publications Bd. I

A+B 01/2008 Heinemann, L., Rauner F. „Identität und Engagement: Konstruktion eines Instru-
ments zur Beschreibung der Entwicklung beruflichen Engagements und beruflicher Identität“
A+B 01/2016 Rauner, F., Frenzel, J., Heinemann, L., Kalvelage, J., Zhou, Y. (2016). „Identität und
Engagement: ein Instrument zur Beschreibung und zum Messen beruflicher Identität und
beruflichen Engagements. A+B Forschungsberichte“ (2. vollständig überarbeitete Auflage des
01/2006)
A+B 02/2009 Rauner, F., Heinemann, L., Haasler, B. „Messen beruflicher Kompetenz und
beruflichen Engagements“
A+B 04/2009 Maurer, A., Rauner, F., Piening, D. „Lernen im Arbeitsprozess – ein nicht
ausgeschöpftes Potenzial dualer Berufsausbildung“
A+B 10/2012 Rauner, F. „Multiple Kompetenz: „Die Fähigkeit der holistischen Lösung beruflicher
Aufgaben“
A+B 11/2012 Rauner, F. „Messen beruflicher Kompetenz von Berufsschullehrern“
A+B 12/2013 Rauner, F. „Überprüfen beruflicher Handlungskompetenz. Zum Zusammenhang von
Prüfen und Kompetenzdiagnostik“
A+B 14/2014 Rauner, F., Piening, D. „Heterogenität der Kompetenzausprägung in der beruflichen
Bildung“
A+B 15/2014 Fischer, M., Huber, K., Mann, E., Röben, P. „Informelles Lernen und dessen
Anerkennung aus der Lernendenperspektive – Ergebnisse eines Projekts zur Anerkennung
informell erworbener Kompetenzen in Baden-Württemberg“
A+B 16/2014 Rauner, F., Piening, D. „Kontextanalysen im KOMET-Forschungsprojekt: Erfassen
der Testmotivation”
A+B 17/2014 Rauner, F., Piening, D., Frenzel, J. „Der Lernort Schule als Determinante beruflicher
Kompetenzentwicklung“
A+B 18/2014 Rauner, F., Piening, D., Zhou, Y. „Stagnation der Kompetenzentwicklung – und wie
sie überwunden werden kann“
A+B 19/2015 Rauner, F., Piening, D., Scholz, T. „Denken und Handeln in Lernfeldern. Die
Leitidee beruflicher Bildung – Befähigung zur Mitgestaltung der Arbeitswelt – wird konkret“
A+B 20/2015 Rauner, F., Piening, D. „Die Qualität der Lernortkooperation“
A+B Forschungsberichte: Forschungsgruppe Berufsbildungsforschung (I:BB) (Hg.), Universität
Bremen. KIT – Karlsruher Institut für Technologie, Institut für Berufspädagogik und
Allgemeine Pädagogik. Carl von Ossietzky Universität Oldenburg, Institut für Physik/
Technische Bildung. Pädagogische Hochschule Weingarten, Professur für Technikdidaktik
Blömeke, S., & Suhl, U. (2011). Modellierung von Lehrerkompetenzen. Nutzung unterschiedlicher
IRT-Skalierungen zur Diagnose von Stärken und Schwächen deutscher Referendarinnen und
Referendare im internationalen Vergleich. Zeitschrift für Erziehungswissenschaft, 13(2011),
473–505.
Brüning, L., & Saum, T. (2006). Erfolgreich unterrichten durch Kooperatives Lernen. Strategien
zur Schüleraktivierung. Essen: Neue Deutsche Schule Verlagsgesellschaft mbH.
Fischer, M., Rauner, F., & Zhao, Z. (Eds.). (2015b). Kompetenzdiagnostik in der beruflichen
Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem
Prüfstand. Münster: LIT.
Hellpach, W. (1922). Sozialpsychologische Analyse des betriebstechnischen Tatbestandes
„Gruppenfabrikation“. In R. Lang, & W. Hellpach (Hg.), Gruppenfabrikation (pp. 5–186).
Berlin: Springer.
Lehberger, J. (2013). Arbeitsprozesswissen - didaktisches Zentrum für Bildung und Qualifizierung.
Ein kritisch-konstruktiver Beitrag zum Lernfeldkonzept. Münster: LIT.
Lehberger, J. (2015). Berufliches Arbeitsprozesswissen als eine Dimension des COMET-
Messverfahrens. In M. Fischer, F. Rauner, & Z. Zhao (Hrsg.), Kompetenzdiagnostik in der
beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET
auf dem Prüfstand. Bildung und Arbeitswelt (Bd. 30, pp. 209–224). Münster: LIT.
Kleiner, M., Rauner, F., Reinhold, M., & Röben, P. (2002). Curriculum-design I. Konstanz:
Christiani.
Kleemann, F., Krähnke, U., & Matuschek, I. (2009). Interpretative Sozialforschung. Eine
praxisorientierte Einführung. Wiesbaden: VS Verlag für Sozialwissenschaften.
Rauner, F. (2018b). Berufliche Kompetenzdiagnostik mit COMET. Erfahrungen und
Überraschungen aus der Praxis. Bielefeld: wbv.
Rauner, F. (2019a). Ausbildungsberufe. Berufliche Identität und Arbeitsethik. Eine
Herausforderung für die Berufsentwicklung und die Berufsausbildung. Münster: LIT.
Rauner, F. (2019b). Kreativität. Ein Merkmal der modernen Berufsbildung und wie sie gefördert
werden kann. Münster: LIT.
Rauner, F. (2020). Berufliche Umweltbildung zwischen Anspruch und Wirklichkeit. Eine
Systemanalyse. Bielefeld: wbv.
Rauner, F., Haasler, B., Heinemann, L., & Grollmann, P. (2009). Messen beruflicher Kompetenzen.
Bd. 1. Grundlagen und Konzeption des KOMET-Projekts. Reihe Bildung und Arbeitswelt.
Münster: LIT.
Rauner, F., & Hauschildt, U. (2020). Die Stagnation der beruflichen Kompetenzentwicklung – und
wie man sie überwinden kann. Grundlagen der Berufs- und Erwachsenenbildung (Vol. 87).
Baltmannsweiler: Schneider Verlag Hohengehren.
Rauner, F., & Heinemann, L. (2015). Messen beruflicher Kompetenzen. Bd. IV. Eine
Zwischenbilanz des internationalen Forschungsnetzwerkes COMET. Reihe Bildung und
Arbeitswelt. Münster: LIT.
Rauner, F., Heinemann, L., Martens, T., Erdwien, B., Maurer, A., Piening, D., et al. (2011). Messen
beruflicher Kompetenzen. Bd. III. Drei Jahre KOMET-Testerfahrung. Reihe Bildung und
Arbeitswelt. Münster: LIT.
Rauner, F., Heinemann, L., Maurer, A., Haasler, B., Erdwien, B., & Martens, T. (2013). Compe-
tence development and assessment in TVET (COMET). Theoretical framework and empirical
results. Dordrecht, Heidelberg: Springer.
Rauner, F., Frenzel, J., & Piening, D. (2015a). Machbarkeitsstudie: Anwendung des KOMET-
Testverfahrens für Prüfungen in der beruflichen Bildung. Bremen: Universität Bremen, I:BB.
Rauner, F., Frenzel, J., Piening, D., & Bachmann, N. (2015b). Engagement und
Ausbildungsorganisation. Einstellungen Sächsischer Auszubildender zu ihrem Beruf und ihrer
Ausbildung. Eine Studie im Auftrage der Landesinitiative „Steigerung der Attraktivität, Qualität
und Rentabilität der dualen Berufsausbildung in Sachsen“. Bremen: Universität Bremen I:BB.
Rauner, F., Heinemann, L., Piening, D., Haasler, B., Maurer, A., Erdwien, B., et al. (2009). Messen
beruflicher Kompetenzen. Bd. II. Ergebnisse KOMET 2008. Reihe Bildung und Arbeitswelt.
Münster: LIT.
Rauner, F., Lehberger, J., & Zhao, Z. (2018). Messen beruflicher Kompetenzen. Bd. V. Auf die
Lehrer kommt es an. Reihe Bildung und Arbeitswelt. Münster: LIT.
Zhuang, R., & Ji, L. (2015). Analyse der interkulturellen Anwendung der COMET-
Kompetenzdiagnostik. In M. Fischer, F. Rauner, & Z. Zhao (Hg.), Kompetenzdiagnostik in
der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz:
COMET auf dem Prüfstand (pp. 341–352). Berlin: LIT.

COMET-Berichte

Bortz, J., & Döring, N. (2003). Forschungsmethoden und Evaluation für Human- und
Sozialwissenschaftler (3. Auflage). Berlin, Heidelberg: Springer.
Brown, H. (2015). Competence measurement in South Africa: Teachers‘ reactions to feedback on
COMET results. In E. Smith, P. Gonon, & A. Foley (Eds.), Architectures for apprenticeship.
Achieving economic and social goals (pp. 91–95). North Melbourne: Australian Scholarly
Publishing.
Bundesministerium für Bildung und Forschung (BMBF) (Hg.). (2006). Umsetzungshilfen für die
Abschlussprüfungen der neuen industriellen und handwerklichen Elektroberufe. Intentionen,
Konzeption und Beispiele (Entwicklungsprojekt). Stand: 30.12.2005. (Teil 1 der
Abschlussprüfung); Stand: 09.01.2006. (Teil 2 der Abschlussprüfung). Manuskript.
Dreyfus, H. L., & Dreyfus, S. E. (1987). Künstliche Intelligenz. Von den Grenzen der
Denkmaschine und dem Wert der Intuition. Reinbek bei Hamburg: Rowohlt.
Fischer, M., & Witzel, A. (2008). Zum Zusammenhang von berufsbiographischer Gestaltung und
beruflichem Arbeitsprozesswissen. In M. Fischer, & G. Spöttl (Hg.), Im Fokus:
Forschungsperspektiven in Facharbeit und Berufsbildung. Strategien und Methoden der
Berufsbildungsforschung (pp. 24–47). Frankfurt a. M.: Peter Lang.
Fischer, R., & Hauschildt, U. (2015). Internationaler Kompetenzvergleich und Schulentwicklung.
Das Projekt COMCARE bietet neue Ansatzmöglichkeiten. PADUA Fachzeitschrift für
Pflegepädagogik, Patientenedukation und -bildung, 10(4), 233–241.
Forschungsgruppe Berufsbildungsforschung (I:BB). (2015). KOMET NRW – Ein ambitioniertes
Projekt der Qualitätssicherung und -entwicklung in der dualen Berufsausbildung. Bericht der
Wissenschaftlichen Begleitung. Bremen: Universität Bremen, I:BB.
Hauschildt, U. (2015). Me siento bien en mi centro de formación – I feel good at my training
institution: Results of an international competence assessment in nursing. In E. Smith, P. Gonon,
& A. Foley (Eds.), Architectures for apprenticeship. Achieving economic and social goals
(pp. 100–104). North Melbourne: Australian Scholarly Publishing.
Hauschildt, U., Brown, H., & Zungu, Z. (2013). Competence measurement and development in
TVET: Results of the first COMET test in South Africa. In S. Akooje, P. Gonon, U. Hauschildt,
& C. Hofmann (Eds.), Apprenticeship in a globalised world. Premises, promises and pitfalls
(pp. 177–184). Münster: Lit.
Hauschildt, U., & Heinemann, L. (2013). Occupational identity and motivation of apprentices in a
system of integrated dual VET. In L. Deitmer, U. Hauschildt, F. Rauner, & H. Zelloth (Eds.),
The architecture of innovative apprenticeship. Technical and vocational education and train-
ing: Issues, concerns and prospects 18 (pp. 177–192). Dordrecht: Springer.
Hauschildt, U., & Piening, D. (2013). Why apprentices quit: A German case study. In S. Akooje,
P. Gonon, U. Hauschildt, & C. Hofmann (Eds.), Apprenticeship in a globalised world. Pre-
mises, promises and pitfalls (pp. 199–202). Münster: Lit.
Hauschildt, U., & Schumacher, J. (2014). COMCARE: Measurement and teaching of vocational
competence, occupational identity and organisational commitment in health care occupations
in Spain, Norway, Poland and Germany. Test instruments and documentations of results.
Bremen: Universität Bremen, I:BB.
Heinemann, L., & Rauner, F. (2011). Measuring vocational competences in electronic engineering:
Findings of a large scale competence measurement project in Germany. In Z. Zhao, F. Rauner,
& U. Hauschildt (Eds.), Assuring the acquisition of expertise. Apprenticeship in the modern
economy (pp. 221–224). Peking: Foreign Language Teaching and Research Press.
Heinemann, L., Maurer, A., & Rauner, F. (2011). Modellversuchsergebnisse im Überblick. In F.
Rauner, L. Heinemann, A. Maurer, L. Ji, & Z. Zhao (Eds.), Messen beruflicher Kompetenz. Bd.
III. Drei Jahre KOMET Testerfahrung (pp. 150–209). Münster: LIT.
Ji, L., Rauner, F., Heinemann, L., & Maurer, A. (2011). Competence development of apprentices
and TVET students: A Chinese-German comparative study. In Z. Zhao, F. Rauner, &
U. Hauschildt (Eds.), Assuring the acquisition of expertise. Apprenticeship in the modern
economy (pp. 217–220). Peking: Foreign Language Teaching and Research Press.
Kunter, M., et al. (2002). PISA 2000 – Dokumentation der Erhebungsinstrumente. Berlin: Max-
Planck-Institut für Bildungsforschung.
Piening, D., Frenzel, J., Heinemann, L., & Rauner, F. (2014b). Berufliche Kompetenzen messen –
Das Modellversuchsprojekt KOMET NRW. 1. Zwischenbericht. Bremen: Universität Bremen, I:
BB.
Piening, D., Frenzel, J., Heinemann, L., & Rauner, F. (2014c). Berufliche Kompetenzen messen –
Das Modellversuchsprojekt KOMET NRW. 2. Zwischenbericht. Bremen: Universität Bremen, I:
BB.
Piening, D., & Rauner, F. (2015a). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Elektroniker/-in/Abschlussbericht. Bremen: Universität Bremen, I:
BB.
Piening, D., & Rauner, F. (2015b). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Industriemechaniker/-in/Abschlussbericht. Bremen: Universität
Bremen, I:BB.
Piening, D., & Rauner, F. (2015c). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Kaufmann/-frau für Spedition und Logistikdienstleistung und
Industriekaufmann/-frau/Abschlussbericht. Bremen: Universität Bremen, I:BB.
Piening, D., & Rauner, F. (2015d). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Kfz-Mechatroniker/-in/Abschlussbericht. Bremen: Universität Bre-
men, I:BB.
Piening, D., & Rauner, F. (2015e). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Medizinische/-r Fachangestellte/-r/Abschlussbericht. Bremen:
Universität Bremen, I:BB.
Piening, D., & Rauner, F. (2015f). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Tischler/-in/Abschlussbericht. Bremen: Universität Bremen, I:BB.
Piening, D., & Rauner, F. (2015g). Umgang mit Heterogenität. Eine Handreichung des Projektes
KOMET. Bremen: Universität Bremen I:BB.
Rauner, F. (1999). Entwicklungslogisch strukturierte berufliche Curricula: Vom Neuling zur
reflektierten Meisterschaft. Zeitschrift für Berufs- und Wirtschaftspädagogik, 95(3), 424–446.
Rauner, F. (2006). Qualifikations- und Ausbildungsordnungsforschung. In F. Rauner (Hg.),
Handbuch Berufsbildungsforschung. 2. aktualisierte Auflage (pp. 240–247). Bielefeld: W.
Bertelsmann.
Rauner, F. (2007). Praktisches Wissen und berufliche Handlungskompetenz. In Europäische
Zeitschrift für Berufsbildung. Nr. 40, 2007/1 (pp. 57–72). Thessaloniki: Cedefop – Europäisches
Zentrum für die Förderung der Berufsbildung.
Rauner, F. (2013). Applying the COMET competence measurement and development model for
VET teachers and trainers. In S. Akooje, P. Gonon, U. Hauschildt, & C. Hofmann (Eds.),
Apprenticeship in a globalised world. Premises, promises and pitfalls (pp. 181–184). Münster:
Lit.
Rauner, F. (2014). Berufliche Kompetenzen von Fachschulstudierenden der Fachrichtung Metall-
Technik – eine KOMET-Studie (Hessen). Abschlussbericht. Bremen: Universität Bremen, I:BB.
Rauner, F., & Piening, D. (2014). Heterogenität der Kompetenzausprägung in der beruflichen
Bildung. A+B-Forschungsberichte Nr. 14/2014. Bremen: Universität Bremen, I:BB.
Rauner, F., Frenzel, J., & Piening, D. (2015a). Machbarkeitsstudie: Anwendung des KOMET-
Testverfahrens für Prüfungen in der beruflichen Bildung. Bremen: Universität Bremen, I:BB.
Rauner, F., Frenzel, J., Piening, D., & Bachmann, N. (2015b). Engagement und
Ausbildungsorganisation. Einstellungen Sächsischer Auszubildender zu ihrem Beruf und ihrer
Ausbildung. Eine Studie im Auftrage der Landesinitiative „Steigerung der Attraktivität, Qualität
und Rentabilität der dualen Berufsausbildung in Sachsen“. Bremen: Universität Bremen I:BB.
Rauner, F., Heinemann, L., & Hauschildt, U. (2013). Measuring occupational competences:
Concept, method and findings of the COMET project. In L. Deitmer, U. Hauschildt,
F. Rauner, & H. Zelloth (Eds.), The architecture of innovative apprenticeship. Technical and
vocational education and training: Issues, concerns and prospects 18 (pp. 159–176). Dor-
drecht: Springer.
Rauner, F., Piening, D., & Bachmann, N. (2015). Messen und Entwicklung von beruflicher
Kompetenz in den Pflegeberufen der Schweiz (KOMET Pflegeausbildung Schweiz):
Abschlussbericht. Bremen: Universität Bremen, I:BB.
Rauner, F., Piening, D., Fischer, R., & Heinemann, L. (2014). Messen und Entwicklung von
beruflicher Kompetenz in den Pflegeberufen der Schweiz (COMET Pflege Schweiz): Ergebnis
der 1. Testphase 2013. Bremen: Universität Bremen, I:BB.
Rauner, F., Piening, D., Heinemann, L., Hauschildt, U., & Frenzel, J. (2015). KOMET NRW – Ein
ambitioniertes Projekt der Qualitätssicherung und -entwicklung in der dualen
Berufsausbildung. Abschlussbericht: Zentrale Ergebnisse. Bremen: Universität Bremen, I:BB.
Scholz, T., & Heinemann, L. (2013). COMET learning tasks in practice – how to make use of
learning tasks at vocational schools. In S. Akooje, P. Gonon, U. Hauschildt, & C. Hofmann
(Eds.), Apprenticeship in a globalised world. Premises, promises and pitfalls (pp. 107–110).
Münster: Lit.
Index

A
Abele, S., 332
Achtenhagen, F., 13
Adolph, G., 53, 342, 424, 431, 432, 447
Aebli, H., 70
Akaike, H., 175, 178, 179
Allen, N.J., 85
Anderson, L.W., 14
Asendorpf, J., 116, 152, 161

B
Babie, F., 2, 332
Bachmann, N., 144, 313, 315, 339, 374, 390
Baethge, M., 2, 5
Baethge-Kinsky, V., 2, 332
Baltes, D., 312
Baruch, Y., 82, 84
Bauer, W., 299, 344, 391, 460
Baumert, J., 8, 225, 390, 401
Baye, A., 158
Beaton, D., 185
Beck, U., 16
Becker, M., 44, 48
Becker, U., 312
Becker-Lenz, R., 352
Benner, P., 43, 44, 50, 51, 425
Bergmann, J.R., 30, 34, 49, 387
Bieri, L., 81
Blankertz, H., 3, 5, 42, 70, 79, 329, 338, 339, 346, 355, 425
BLK, 8, 282, 397
Blömeke, S., 392
Blum, W., 261, 266
BMBF, 78, 199, 200, 202–205
BMWi, 208
Böhle, F., 346, 455
Bombardier, C., 185
Borch, H., 194, 195, 201
Boreham, N.C., 346, 373, 387
Bortz, J., 128, 134, 151
Bourdick, H., 482, 483
Bozdogan, H., 148, 149, 162, 163
Brand, W., 14
Brater, M., 12, 73
Braverman, H., 5
Bremer, R., 19, 43, 44, 70, 249, 268, 342, 455
Brödner, P., 6
Brosius, F., 219, 235, 236
Brown, A., 80, 339
Brown, H., 69
Brown, J.S., 70, 387
Bruner, J.S., 70
Brüning, L., 458, 459, 461
Bungard, W., 24
Butler, P., 267
Bybee, R.W., 65, 66, 298, 400

C
Calvitto, P., 345
Campbell, D.T., 251
Carey, S., 331
Carstensen, C.H., 156
Caspar, F., 152, 153
Cohen, A., 82, 84, 355

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 543
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2
Connell, M.W., 22, 54, 55, 387, 394, 395
Cooley, M., 20
Corbett, J.M., 20
Cramer, H., 70
Crawford, M.B., 351
Curcio, G.P., 393

D
Daimler Chrysler, A.G., 267
Davier, M. von, 150, 156, 162, 163
Degen, U., 6, 48
Dehnbostel, P., 46, 471
Deitmer, L., 282, 344
Dengler, K., 345
Deutscher Bundestag, 40, 342, 344, 424
Dewey, J., 347
DFG, 14, 61, 335, 352
Dick, M., 50
Döbrich, P., 267
Döring, N., 128, 134, 151, 262
Dörner, D., 250
Drees, G., 268
Drexel, I., 428
Dreyfus, H.L., 32, 43, 44, 73, 387
Düggeli, A., 393
Dürrenberger, G., 81
Dybowski, G., 344

E
Efron, B., 150
Emery, F.E., 21
Emery, M., 21
Erdwien, B., 118, 120, 131, 148, 150–154, 161, 291, 349, 405, 422
Erpenbeck, J., 47
Eugster, B., 267
Euler, D., 13
Eyd, M., 156

F
Fischer, B., 387
Fischer, M., 10, 46, 49, 70, 83, 312, 332, 346, 387
Fischer, R., 1, 249, 257, 259, 333, 335, 344, 345, 382, 474
Fleiss, J.L., 115, 116, 153
Flick, U., 250
Frank, H., 8
Frei, F., 23
Frenzel, J., 257, 313, 315, 339, 374, 378, 390
Freund, R., 387
Frey, A., 392
Frieling, E., 23, 34

G
Gablenz-Kollakowicz, S., 23
Ganguin, D., 6, 72, 344
Gardner, H., 10, 22, 52, 54, 55, 140, 356, 362, 387, 394, 395, 424
Garfinkel, H., 18, 49–52
Gäumann-Felix, K., 104, 144, 257, 336, 470–485
Georg, W., 84
Gerds, P., 344
Gerecht, M., 267
Gerlach, H., 44
Gerstenberger, F., 5
Gerstenmaier, J., 11
Giddens, A., 16
Gille, M., 312
Gillis, S., 345
Girmes-Stein, R., 70, 387
Glaser, B., 249
Granville, G., 7
Gravert, H., 7
Griffin, P., 345
Grigorenko, E.L., 126
Grob, U., 11
Grogoll, T., 23
Grollmann, P., 7, 53, 63, 84, 148, 249, 265, 332, 411, 421
Gruber, H., 66
Grünewald, U., 6
Gruschka, A., 43, 54, 79, 355, 387, 425
Gschwendtner, T., 332
Guillemin, F., 185
Guldimann, T., 374

H
Haase, P., 344
Haasler, B., 54, 70, 150–154, 265, 332, 422
Hacker, W., 6, 47, 72, 346, 402
Hackman, J.R., 21
Hartig, J., 54
Hastedt, H., 24
Hattie, J.A., 208, 336, 374, 380, 390, 407
Hauschildt, U., 69, 257, 259, 336, 474
Havighurst, R.J., 43, 70, 249, 425
Hayes, A.F., 234
Heeg, F.J., 129
Heermeiner, R., 342
Heid, H., 5, 351
Heidegger, G., 17, 342, 424
Heinemann, L., 69, 86, 257, 259, 265, 309, 312, 336, 339, 378, 405, 474
Heinz, W.R., 79, 80, 83, 332, 339, 354
Heinze, T., 8
Hellpach, W., 72, 402
Heritage, J., 346
Hirtt, N., 5, 7
Hoey, D., 9, 20, 142
Hofer, D., 104, 144, 257, 336, 470–485
Hoff, E.-H., 354
Hofmeister, W., 14
Holzkamp, K., 53, 387
Howe, F., 342
Hubacek, G., 312
Hüster, W., 7

I
IHK Nordrhein-Westfalen (IHK NRW), 201, 203

J
Jäger, C., 81, 82, 85, 172, 355
Jagla, H.-H., 44, 342
Ji, L., 150, 185, 187
Johnson, D., 461
Johnson, R., 461
Jones, D.T., 197, 343
Jongebloed, H.-C., 353
Jungeblut, R., 70

K
Kalvelage, J., 86, 87, 171–178
Kanungo, R.N., 85
Karlinger, F.N., 251
Katzenmeyer, R., 14, 336, 403, 463
Kelle, U., 421
Kern, H., 5, 16, 48, 84, 197, 342
Kirpal, S., 80, 339
Kleiner, M., 44, 95, 414
Kliebard, H., 81
Klieme, E., 7, 13, 54, 61, 63, 267
Klotz, V.K., 135, 136
Kluge, S., 421
Klüver, J., 352
KMK, 17, 20, 31, 39, 40, 42–44, 344, 361, 383
Kohlberg, L., 298, 352
König, J., 392
Kordes, H., 387
Krathwohl, D.R., 14
Krick, H., 6, 48
Kruse, W., 6, 46, 48, 84
Kullmann, B., 312
Kunter, M., 226
Kurtz, T., 17, 83, 345

L
Lafontaine, D., 158
Lakies, T., 40
Lamnek, G., 23
Lappe, L., 354
Lash, S., 16
Laske, G., 342, 424
Laur-Ernst, U., 20
Lave, J., 20, 43, 44, 70, 387
Lechler, P., 249
Lehberger, J., 391, 405, 419–421, 440, 468
Lehmann, R., 66, 266
Lehmann, R.H., 14
Lempert, W., 17, 332, 339, 345
Lenger, A., 352
Lenk, H., 24
Lenk, W., 24
Lenzen, D., 70
Leutner, D., 61, 335
Lüdtke, G., 133
Lutz, B., 7

M
Maag Merki, K., 11
Martens, T., 2, 63, 120, 130, 131, 147, 148, 150, 154, 156, 161, 163, 171, 249, 257, 265, 291, 332, 335, 349, 405, 422
Matthes, B., 345
Maurer, A., 309, 312, 405
McCormick, E., 23
Meyer, K., 17
Meyer, P.J., 85
Meyer-Abich, K.N., 24
Meyer-Siever, K., 331
Minnameier, G., 66
Monseur, C., 158
Müller, W., 37, 267
Müller-Fohrbroth, G., 250
Müller-Hermann, S., 352

N
Nehls, H., 40
Neuweg, G.H., 10, 49, 66
546 Index

Newman, S.E., 70, 387 Rose, H., 346


Nickolaus, R., 332 Rosendahl, J., 331
Nida-Rümelin, J., 353 Rost, J., 2, 130, 131, 147, 150, 154–156, 162,
171, 249, 332, 422
Roth, H., 13, 14, 40, 42
O
Oehlke, P., 6
Oesterreich, R., 23 S
Oldham, G.R., 21 Sabel, C.F., 16, 84
Oser, F., 392–394 Samurçay, R., 373, 387
Ott, B., 5 Sattel, U., 84
Saum, T., 458, 459, 461
Schecker, H., 290, 400
P Schein, E., 30, 346, 399
Parchmann, I., 290, 400 Schelten, A., 16, 134
Pätzold, G., 53, 268 Schiefele, U., 8, 390, 401
Petersen, A.W., 194, 196, 197, 201 Schmidt, H., 20
Peukert, U., 387 Schneider, W., 8, 390, 401
Piaget, J., 298 Schoen, D.A., 44, 50, 52, 53, 126, 346, 387,
Piening, D., 144, 250, 257, 270, 271, 313, 315, 398
339, 344, 374, 378, 380, 384, 386, 390, Scholz, T., 336, 377, 387
458, 461 Schreier, N., 194
Pies, I., 354 Schumacher, J., 257, 259, 336, 474
Polanyi, M., 10, 49 Schumann, M., 5, 48, 197, 342
Posner, M., 331 Schwarz, H., 195
Preacher, K.J., 234 Seeber, S., 14
Prein, G., 421 Sennett, R., 5, 10, 16, 17, 428
Prenzel, M., 66, 261, 266 Sheridan, K., 22, 54, 55, 387, 394, 395
Przygodda, K., 344, 391 Shrout, P.E., 153
SK Arbeit und Technik, 40
Skowronek, H., 331
Q Skule, S., 267
Quittre, V., 158 Spöttl, G., 7, 17, 45, 48, 103, 414
Stanley, J.C., 251
Steffen, R., 70
R Stegemann, Ch., 344, 384
Rademacker, H., 16, 131, 134, 135, 332 Stein, H.W., 70
Ramirez, D.E., 162 Steinert, B., 267
Randall, D.M., 84 Sternberg, R.J., 126
Rasch, G., 149, 155, 159, 161–163, 171 Steyer, R., 156
Rasmussen, L.B., 20 Straka, A., 331
Rauner, F., 6, 17, 63, 94, 148, 203, 309, 332, Strauss, A.L., 249
390, 424 Suhl, U., 392
Reckwitz, A., 50 Suppes, P., 147, 156
Reichborn, A.N., 267
Reinhold, M., 44
Renkl, A., 66 T
Resch, M., 23 Taylor, I.A., 81, 343
Ripper, J., 267 Tenorth, H.-E., 13
Röben, P., 20, 103 Terhart, E., 389
Römmermann, E., 70 Thiele, H., 268
Roos, D., 84, 197, 343 Tibshirani, R.J., 150
Ropohl, G., 24 Tiemeyer, E., 384
Index 547

Tomaszewski, T., 72, 402 Weniger, E., 391


Tramm, F., 14 Wienemann, E., 5
Wild, K.-P., 267
Winther, E., 13, 135, 136
U Wirtz, M., 152, 153
Ulich, E., 20, 21, 23, 72, 402 Witzel, A., 83, 312, 332
Ulrich, O., 16, 23 Womack, J.P., 84, 197, 343
Wosnitza, M., 267
Wyman, N., 352
V
VDI, 23, 24
Volkert, W., 23 Y
Volpert, W., 6, 72, 402 Yates, C.R., 208, 336, 374, 380, 407
von Eerde, K., 344 Young, M., 7, 8
Vosniadou, S., 331

Z
W Zhang, Z., 78, 414, 415
Walden, G., 268 Zhao, Z., 1, 78, 100, 187, 188, 249, 255, 259,
Wallbott, H.G., 116, 152, 161 280, 335, 340, 344, 345, 362, 386, 412,
Weber, M., 17 414, 415, 418, 422
Weber, S., 2, 332 Zhou, Y., 86, 87, 100, 171–178, 250, 255, 259,
Wedekind, V., 69 280, 386, 418
Wehmeyer, C., 194, 196, 197, 201 Zhuang, R., 150, 185, 188, 255, 418
Wehner, T., 50 Zimmermann, M., 267
Weinert, F.E., 15, 62, 70, 247 Zinnes, J. L., 147, 156
Weiß, R., 136 Zöller, A., 344
Weisschuh, B., 267 Zutavern, M., 374
Weißmann, H., 194, 201 ZVEI, 81
Wenger, E., 20, 43, 44, 70, 387
Subject Index

A
Ability
  implicit, 32
  professional, 10, 11, 18, 49, 65, 131, 134
Action
  artistic, 12, 74
  competence, 83, 193, 194, 372, 383, 406, 410, 420, 458
  complete, 6, 55, 59, 72, 73, 396, 402, 406, 455
  professional, 6, 16, 28, 51, 57, 59, 71–73, 96, 100, 102, 103, 105, 109, 128, 131, 137, 142, 203, 264, 352, 383, 402, 410, 412, 414, 425, 426, 431, 438, 458, 469
  types, 73–74
  vocational, 55, 57, 72, 73, 102, 213, 216, 224, 426, 430, 451
Applied Academics, 52
Apprenticeship, 18, 70, 100, 126, 250, 320, 321, 387, 433, 446, 471
Architecture of parallel educational paths/pathways, 18, 19, 352
Assignment
  company, 136, 196, 197
  work, 92, 195, 208
Attractiveness, 276, 320, 329, 338, 339, 341

B
BIBB, 134, 209, 282, 320
Bologna reform, 18, 352
Business process orientation, 66, 147, 161, 169, 268, 269, 327–329

C
Capability model, 332
Career aspirations, 80
Certification systems, 7, 15, 427
China, 100, 150, 185–187, 192, 244, 255, 260, 280, 281, 308, 309, 375, 376, 412–416, 520
Chinese teachers, 187, 189, 260, 376
Classification systems, 18, 260
Coefficient of variation, 216, 217, 272, 299, 300, 349, 354, 445
COMET, see Competence development and assessment in TVET (COMET)
Commitment
  occupational, 85, 88, 248, 328, 360, 517
  organisational, 85, 88, 184, 313, 319, 328, 358
  professional, 311, 314, 319, 327, 328, 333, 358–360
  research, 82, 84, 85, 312, 338, 339
  vocational, 184, 355
Communicativity, 51
Community of practice, 12, 69, 70, 80, 83
Company project work, 193–195, 197
Comparative projects, 109, 127, 129, 144, 257, 258, 519
Competence
  to act, 41, 53, 77, 78, 157
  in action, 32
  assessment, 8, 15
  conceptual-planning, 136, 193, 221, 403, 410

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 549
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2
Competence (cont.)
  criterion/criteria, 68, 75, 76, 120, 159, 161, 162, 232, 249, 255, 285–287, 290, 298, 420, 421, 472, 473, 475, 479, 482, 484, 485
  development, 20, 31, 41–45, 48–50, 55, 61, 62, 65, 66, 70–72, 79, 130, 159, 193, 201, 208, 213, 229, 241, 242, 247, 250–256, 260, 261, 263, 264, 268, 269, 275, 277, 279, 287, 290, 297, 298, 301, 307–310, 325, 331, 335–338, 340–342, 349, 355, 357, 361, 363–365, 370, 372–374, 378, 380–384, 386, 389, 390, 394, 395, 400, 402, 407, 412, 418, 422, 425, 426, 432, 433, 437, 439–446, 450–454, 462, 472, 483
  diagnostics, 1–4, 8–10, 19–20, 30, 31, 61, 62, 68–70, 74, 76, 86, 87, 97, 98, 100, 104, 126, 131, 141, 142, 146, 193–202, 208, 218–224, 243, 246–248, 258, 260, 262, 263, 266, 277, 278, 282, 332, 333, 335, 338, 340, 344, 348, 354, 361, 372, 373, 376, 377, 387, 389, 391, 392, 394, 403, 410–412, 421, 422, 428, 444, 519
  functional, 19, 41, 44, 66, 206, 224, 287, 297, 298
  holistic, 55, 63, 66, 67, 75, 129, 147, 166, 169, 171, 194, 206, 215, 219, 220, 244, 264, 280, 286, 287, 289, 298, 301, 337, 376, 386, 387, 389, 398, 401, 409, 483, 484
  interdisciplinary, 187, 392
  large scale, 2, 4, 8, 63, 69, 131, 258, 260, 386, 391, 404, 422
  level, 9, 11, 14, 55, 62–66, 71, 74–76, 86, 96–98, 117, 121, 122, 137, 140–144, 146, 148, 154, 159, 191–193, 201, 206, 216, 219–221, 228, 230, 241, 251, 253–256, 274, 279, 281, 282, 287–299, 302, 306–308, 310, 312, 337, 344, 347–349, 363–365, 367, 369, 371, 380, 384–386, 389, 390, 393, 400, 402, 405, 409, 410, 415, 416, 441, 442, 449, 451, 454, 480
  measurement, 4, 66, 157, 158, 168–169, 186, 188–190, 192, 276, 356, 362, 485
  and measurement model, 2, 10, 75, 78, 129, 131, 154, 389, 399, 402, 416, 423–485
  methodological, 1–4, 13, 27, 33, 48, 126, 335, 344, 392–394, 421
  model, 1, 13–15, 61–64, 68, 70–73, 75, 78, 86, 100, 122, 131, 137, 154, 157, 167–169, 171, 187, 190, 192, 194, 199, 200, 206, 209, 212, 215, 218–220, 224, 249, 264, 279, 285, 287, 288, 290, 297, 298, 333, 335–337, 340, 341, 349, 351, 387, 388, 391, 394–396, 399, 400, 402, 403, 410, 411, 414, 419, 421, 422, 426, 438, 442, 471, 473, 475, 479, 483, 485
  model-based, 61, 63, 154, 294
  multiple, 54, 55, 126, 249, 290, 387
  nominal, 67, 281, 298, 302, 381, 400, 401
  occupational, 2, 15, 20, 22, 27, 32, 70, 74, 75, 104, 130, 137, 150, 206, 294, 297, 331, 332, 389, 400, 402
  practical, 1, 31, 46, 48–52, 54, 78, 136, 149, 203, 205, 212, 270, 281, 291, 368, 398, 484
  procedural, 49, 74, 121, 122, 166, 169, 190, 191, 220–224, 281, 286, 287, 289–291, 293, 297, 298, 401, 405
  processual, 66, 121, 206
  professional, 1, 9, 19–20, 27, 30, 41, 43, 55, 58, 59, 64, 66, 67, 69, 70, 74, 75, 78, 97, 122, 127, 129, 136, 147, 187, 244, 256, 280, 281, 296, 299, 310, 332, 337, 348–349, 351, 373, 377, 397, 405, 416, 418, 422
  profiles, 4, 9, 11, 55, 59, 62, 64, 78, 96, 98, 100, 117, 120, 121, 124, 129, 130, 140, 143, 145, 148, 149, 159, 163–169, 193, 194, 201, 206, 212, 216–218, 224, 251, 252, 254, 256, 262, 278, 280–282, 287, 290, 291, 296–301, 337, 338, 345, 347–349, 351–354, 363, 364, 372, 388, 390, 393–395, 412, 414, 416–418, 422, 442, 443, 445, 451, 461
  research, 9, 13, 61–63, 249, 394, 442
  social, 11, 13, 14, 23, 32, 40, 47, 51, 59, 67, 126, 268, 345, 352, 392, 426
  technical, 1, 14, 19, 23, 28, 30, 31, 47, 48, 50, 52, 74, 75, 120, 129, 137, 144, 194, 203, 228, 252, 253, 256, 278, 299, 301, 337, 347, 348, 352, 354, 363, 370, 371, 376, 380, 396, 401, 406, 412, 418, 442
  vocational, 2–4, 6, 9, 11, 14, 16, 18, 20, 23, 27, 40, 42, 43, 46, 54, 55, 62, 67–69, 71, 76, 78, 86, 100, 104, 107, 121, 122, 126, 129, 137, 142, 143, 191, 203, 206, 218, 224, 249, 254, 255, 260, 265, 268, 277, 279, 287, 290, 291, 294, 296, 298, 299, 301, 309, 310, 331, 332, 336, 338, 340, 344, 347, 348, 353, 354, 360, 365, 371–373, 381, 386, 391, 402, 406, 419, 421, 422, 452
Competence development and assessment in TVET (COMET), 1, 3, 9, 15, 56, 61–89, 99, 100, 102, 107, 109, 110, 112–117, 127–129, 131, 137, 143, 145–192, 194, 198–202, 206–210, 212, 215, 216, 218–226, 232, 234–236, 241–249, 251, 254, 255, 257, 260, 264, 265, 267, 274, 276–279, 282, 287, 288, 290–293, 297–300, 302, 305, 308, 310, 314, 331–389, 396, 400, 402, 413, 416, 418, 421, 422, 426, 438, 442, 470–485, 490, 497, 519, 520
  competence and measurement model, 4, 107, 131, 206, 207, 209, 218, 224, 244, 279, 282, 341, 375, 384, 405, 418, 519
  competence diagnostics, 4, 103, 185–192, 194, 333, 334, 362
  consortia/consortium, 248, 260, 288
  dimensions, 477
  examination, 209, 216, 221, 224
  method, 4, 9, 193, 254, 257, 336, 376, 472, 483, 485
  methodology, 257, 282
  projects, 1–3, 10, 12, 63, 69, 71, 72, 75, 86, 100, 101, 103, 107, 111, 113, 117, 123, 124, 129, 137, 138, 142, 146, 150, 154, 158–160, 171, 185, 188, 218, 219, 224, 225, 232, 242, 244, 248–283, 300–302, 309, 310, 335–337, 339, 341, 349, 355, 362, 372–375, 377, 382, 384–387, 389, 422, 440, 462, 472, 473, 485
  rating, 79, 159–160, 211, 212, 421, 453
  testing, 219, 247
  test procedure, 2, 3, 76, 111, 112, 117, 127, 131, 143, 154, 155, 194, 196, 201, 206, 207, 212, 219, 232, 234, 235, 242, 243, 247, 274, 279, 299, 354, 372, 373, 376, 377, 387, 402, 441, 482, 519
Competence-oriented practical examination, 77
Competence-oriented training, 480
Competence-oriented vocational education and training, 224
Competency model, 3, 13–15, 63, 70, 125, 137, 154, 171, 187, 194, 198, 216, 224
Concept of competence, 13, 42, 51, 52, 54, 62, 208, 290, 393, 394
Confirmatory factor analysis, 171–185
Consolidation, 17, 21, 53, 249, 439
Content validity, 128, 131, 160, 257, 259
Context analysis, 39, 144, 219, 220, 224, 229, 264, 265, 268, 271–272, 279, 362, 378–382
Context data, 238, 282, 320, 334, 340–342, 373, 390
Contextuality, 51
Contextual learning, 52
Core occupations, 396, 428
Core professions, 361
Core workforce, 84, 86
Covariance matrix, 148, 149, 175, 179
Creativity, 39, 58–59, 107, 121, 147, 161, 162, 169, 190, 192, 199, 214, 249, 345, 348, 401, 405, 409, 425, 442, 465, 479–481, 483, 485, 490, 492, 498, 500, 501, 504, 505, 508
Criterion validity, 128, 130–131, 136
Cross-over design, 71, 225, 274
Cross-over test arrangement, 65
Curriculum development, 7, 41, 43, 44, 70, 186, 257, 258, 342, 344, 424, 425

D
Degree of difficulty, 16, 64, 102, 110, 123, 124, 134–136, 138, 140, 143, 146, 195, 196, 451
Design a curriculum (DACUM), 27
Deskilling thesis, 5
Determinism, 6, 25
Development
  of competence, 2, 15, 61, 62, 70, 126, 142, 279, 309, 311, 339, 412
  of identity, 313, 374
  organisational, 6, 17, 21, 25, 46, 129, 248, 258, 269, 297, 328, 333, 355, 397, 439
  theory, 43–45, 70
Developmental logic, 70
Developmental process, 28
Developmental tasks, 26, 41, 387
Diagnosis of knowledge, 21
Didactic concept, 44, 49, 309, 310, 406, 433, 471, 483
Didactic reduction, 30, 52
Didactic research, 331
Difficulty level, 146
Dimension of competence, 325, 392, 393
Duality
  alternating, 101
  informal, 18
  integrated, 18
Dual system, 323, 380
Dual vocational education, 252, 254, 267–268, 336, 362, 364, 382, 383
Dual vocational training, 18, 39, 69, 89, 104, 129, 132, 141, 198, 225, 251, 258, 266, 270, 329, 344, 362, 363, 372, 373, 380, 382, 384, 389, 396, 408, 428, 468

E
Economic action, 16, 353
Education
  academic, 6, 18, 19, 40, 49, 301, 423
  competence-oriented, 480
Educational research, 7, 28, 61, 66, 70, 126, 247, 249, 250, 262, 267, 374, 390
Electrical engineering, 71, 72, 138, 139, 187–189, 254, 265, 398
Electricians, 117, 135, 275, 501–504
Electronics engineers, 138, 225, 226, 240–242, 274, 280, 300, 375
Electronics technicians, 78, 96, 101, 131, 137–139, 141, 160, 202, 210, 226, 227, 229, 244, 251, 260, 263–265, 274–276, 299, 303, 304, 308, 349, 350, 356, 358, 363, 369, 378, 380, 389, 390, 418, 431, 433, 446, 463, 512, 513
Employees to their company, 354
Environmental compatibility, 2, 58, 68, 98, 99, 106, 121, 122, 147, 161, 162, 169, 190, 192, 207, 213, 214, 348, 465, 479, 482, 483, 490, 498, 500, 501, 504, 505
Environmental responsibility, 508
Ethnomethodology, 51
European Qualifications Framework (EQF), 7, 18, 428
Evaluation
  criterion/criteria, 68, 76, 112, 188, 202, 213, 249, 372, 375, 453, 461, 466, 467
  didactic, 14, 30, 110, 112, 213, 279, 309, 335, 355, 392, 410, 415, 418, 483
  items, 121, 161, 163, 185, 186, 188, 190–192, 201
  objectivity, 127
  procedure(s), psychometric, 2, 131, 135, 147–192, 291, 422
  psychometric, 2, 78, 86, 131, 135, 154, 287, 335, 340, 349, 405, 416, 422
  of task solutions, 73, 110, 186, 337, 441, 466
  of teaching, 279, 404, 407, 410, 418, 419, 485
  of the test tasks, 110, 112, 124, 151
Examination
  extended, 77, 198, 208–213, 256, 419
  final, 127, 135, 194, 197–199, 208–213, 219–222, 225, 230, 242, 248, 250, 277, 372, 373, 402, 420, 480, 484
  requirements, 200, 391
  tasks, 16, 98, 132–135, 196, 208, 209, 212, 213, 216, 224, 247, 277, 377
Experimental research, 251, 282, 336
Expert assessments, 75, 157, 158
Expert discussions, 30, 34, 42, 77–79, 206, 211, 212, 215, 472, 479, 480, 484
Expertise research, 20, 42, 44, 54, 70
Expert knowledge, 21, 65, 214
Expert opinion, 342
Expert specialist workshops, 27, 44, 91–95, 103

F
Feedback discussions, 144
Feedback workshops, 238, 241, 279, 282
Field of work, 82, 85, 478, 480
Finn coefficient, 115–117, 151–153, 161, 384
Formative sciences, 20
Form of examination, 195, 196, 200–203
Functional competence, 65, 67, 121, 122, 166, 169, 190–192, 220–222, 244, 281, 286–292, 294, 297, 298, 401, 402, 405, 418
Functional literacy, 65
Functional understanding, 66

G
German research foundation (DFG), 14, 61, 335, 352, 353
Germany, 18, 44, 62, 69, 82, 150, 185, 187, 225, 244, 260, 261, 265, 281, 336, 373, 383, 384, 389, 412, 428
Group learning, 457

H
Health care, 471, 492
Health protection, 23, 58, 99, 214, 408, 464, 478, 482, 490, 492
Heterogeneity, 11, 62, 127, 143, 201, 202, 240, 261, 266, 270, 272, 276, 279, 301–311, 369, 389, 391, 407, 442, 449, 459–461, 485
Higher technical colleges in Switzerland, 257, 385
Homogeneity, 117, 145, 151, 159, 161–163, 216, 217, 271, 299, 347–349, 354, 363, 372

I
Identity
  development, 1, 79–81, 332, 355
  occupational, 82–87, 171, 174, 179, 185, 265, 269, 313, 315, 320, 338
  organisational, 80–82, 84, 86, 87, 171, 174, 178, 181, 184, 265, 269, 279, 311, 312, 316, 319–321, 323–326, 329, 332–333, 335, 339, 351, 354, 357, 358
  pre-professional, 80
  professionals, 15, 26, 81, 85, 172, 315, 316, 321, 323, 324, 326, 329, 332–333, 339, 341, 351, 358, 488
  vocational, 1, 27, 44, 79, 80, 83, 85, 89, 174, 179, 184, 279, 312, 325, 326, 329, 332, 338, 339, 346, 354, 355, 360
Industrial culture, 81, 84
Innovation
  projects, 282
  technological, 39, 41, 91
Input/output didactics, 8
Intelligence
  multiple, 52, 55, 140, 394
  practical, 52, 398
  profile, 55, 395
Interdisciplinarity, 24
Intermediate examinations, 132, 198, 200, 207, 250, 254
International comparative competence diagnostics/projects/studies/surveys, 2, 15, 68, 71, 87, 102, 109, 127, 144, 150, 154, 258
Internationalisation, 19, 96
International World Skills (IWS), 9, 20, 96, 142, 264
Interrater reliability, 105, 113, 127, 150–154, 158–161, 185–192, 218, 224, 341, 348, 375–377, 387, 415, 419, 421
ISCED, 18
ISCO, 18

J
Job descriptions, 2, 9, 10, 18, 26, 69, 71, 80, 91, 95, 98, 109, 120, 138, 142, 194, 215, 345, 352, 391
Job profiles, 69, 71, 109, 138, 141, 258, 433, 449

K
Key competences, 11–13, 41
KMK (Conference of Ed. Ministers in Germany), 7, 17, 20, 31, 39, 40, 42–44, 143, 299, 344, 361, 383, 389, 391, 395–400, 422, 424–427
Know how, 47, 50, 65, 294, 372, 453
Knowledge
  action-explaining, 47, 294, 372, 381, 405
  action-leading, 294, 393, 401, 405, 441, 443
  action-reflecting, 9, 47, 142, 293, 294, 351, 372, 405, 441
  disciplinary, 52, 353, 397, 398, 423
  implicit, 10, 11, 49, 50, 83, 311, 386
  level, 143, 294, 296, 298
  motivational-emotional, 50
  practical, 46, 49–52, 387
  professional, 9, 10, 19, 28, 30, 31, 46–49, 52–54, 56–58, 65, 74, 97, 111, 136, 142, 195, 353, 387, 392, 398, 401, 402, 406, 430, 468
  specialist, 27, 28, 30, 49, 65, 91, 136, 143, 372, 386, 387, 392, 447, 471, 475
  systemic, 491
  tacit, 10, 11, 32, 49–50, 52
  theoretical, 18, 27, 46, 49, 52–54, 66, 126, 326, 353, 430, 451, 488
  work process, 9, 24, 30–32, 45, 48, 49, 58, 65, 326, 331, 373, 381, 387, 405, 406, 424, 430, 437, 443, 462, 467
Know that, 47, 65, 293, 294, 372, 407
Know why, 47, 294, 372

L
Large scale assessment, 8
Large scale competence diagnostic (LS-CD), 9, 12
Learning
  action-oriented, 27, 53, 407, 438
  area, 14, 44, 61–63, 68, 70, 71, 80, 93, 95, 471, 487
  climate, 261–263, 269, 270, 369–371, 374, 380, 408, 420
  competence, 456
  contents, 25, 29, 267, 378, 406, 412
  cooperative, 6, 21, 458, 459, 469
  decentralised, 46
  field concept, 39, 43, 69, 73, 144, 146, 257, 280, 282, 335, 344, 348, 382, 387, 391, 398, 403, 406, 425–428, 431, 438, 446
  inductive, 53
  location cooperation, 268–271, 362
  methods, 465, 466
  outcomes, 15, 61, 282, 335, 336, 407, 408, 433, 442, 453, 466–469, 473
  processes, 4, 25, 43, 73, 262, 332, 339, 383, 386, 396, 405, 406, 420, 425, 436, 442, 447–449, 453, 455, 457, 459, 462–467, 469, 475, 485
  school-based, 25, 224, 269, 271, 310, 333, 363, 371, 373, 374, 378, 380, 383, 437
  situation, 104, 263, 344, 374, 378, 387, 392, 409, 420, 426, 427, 429, 433, 437, 438, 440, 446, 452, 457, 458, 466, 485, 497
  at the workplace, 46
Learning-outcome taxonomies, 442
Learning tasks
  design of, 386, 433
Learning time difference (LTD), 306–308
Learning venues, 310–311, 323–325, 327, 328, 336, 361–364, 367, 372–374, 378–384, 402, 408, 412, 419, 435, 436, 471, 484
Level of competence, 250, 281, 292, 380, 384, 400, 456
Level of competence development, 44, 65
Level of difficulty, 93, 120, 123, 124, 133, 135, 138, 141–144, 356, 415, 451
Level of knowledge, 47, 78, 129, 142, 207, 280, 293, 298, 348, 372, 443, 451, 454, 475
Level of work process knowledge, 47, 56, 102, 285, 293, 294, 298, 438
Longitudinal analyses, 159, 168–169
Longitudinal study/studies, 143, 144, 225, 250, 253, 254, 264, 274, 276, 340

M
Manual skills, 193
Measurement model, 3, 14, 68, 77, 78, 86–87, 131, 147–149, 154, 156–159, 163, 164, 171, 186, 187, 191, 192, 201, 249, 292, 293, 299, 332, 338, 395, 403, 404, 412
Mechatronics
  car, 137, 138, 194, 358
  motor vehicle, 17, 219, 242, 438
Mediator analysis, 233–236
Mixed distribution models, 149, 163, 171
Mixed Rasch Model, 149, 162, 171
Model validity, 150
Model verification, 86
Motivation
  intrinsic, 83, 84, 312
  primary, 230, 233, 235, 238, 242–244, 247
  professionals, 84, 238, 244, 312, 327
  secondary, 225, 230, 233, 235, 238, 243, 244, 247
  test, 225, 226, 229, 230, 235, 238, 240, 242–244, 247, 277
Multiple-choice tasks, 102, 132–134, 201

N
National Qualifications Framework (NQF), 7
Novice-expert-paradigm, 26, 44, 70, 79, 249, 387, 425, 439
Novices, 27, 43, 80, 312, 487
Nursing training, 109, 238, 240, 257, 336, 382, 384–386, 471, 474

O
Objectivity of implementation, 127, 158, 274, 277
Occupational profile, 10, 19, 25, 26, 62, 80, 260, 324, 339, 341, 396, 488
Occupational research, 6, 329, 402, 403
Operational order, 77, 202–216
Operational projects, 77–78, 203, 206, 207
Operational project work, 208
Organisation development, 16

P
Pedagogical research, 421
Percentile bands, 294, 302, 305–307, 444
Performance dispositions, 9, 15, 55, 62, 70, 98, 402
Peripheral workforce, 84
Personality development, 9, 23, 197, 333, 460
Personality theory, 41
PISA, 7, 8, 65, 66, 69, 126, 226, 261, 266, 298, 302, 332, 389, 401
Polarisation thesis, 5
Post-SII test arrangement, 102, 103
Practical assignment, 471
Practical experience, 10, 50, 123, 259, 436, 454, 468, 471
Practical tasks, 202, 211, 212, 432
Practical terms, 53–54
Practice communities, 51, 53–54
Pretest/pre-test, 68, 101, 104–125, 144, 226, 229, 258, 259, 277, 278, 348, 349, 375, 384–386, 415
Problem solution, 57, 73, 206, 221, 483
Problem-solving pattern(s), 337, 385
Profession
  academic, 98
  commercial, 75, 145, 324, 328, 354, 426
  industrial-technical, 313
  technical, 43, 263, 399
Professional competences, 1–4, 10, 12, 13, 15, 16, 18–24, 28–32, 34, 40–45, 48–50, 52, 55, 56, 58, 59, 62, 63, 65, 67, 68, 70, 71, 74, 77–79, 83, 98, 100, 103, 120, 122, 126–131, 135, 136, 142, 143, 145, 147–149, 154, 156–158, 171, 187, 190, 193–196, 199, 202, 203, 205, 206, 208–210, 213–215, 250, 260, 261, 275, 277, 286–290, 298, 299, 307, 310–312, 320, 331–335, 338–342, 346, 351, 352, 354–360, 362, 363, 368, 372, 380, 382, 387, 389–422, 424–426, 428, 429, 433, 442, 444, 446, 455
Professional concept, 17, 468, 469
Professional development, 3, 15, 21, 92, 93, 137, 249, 328, 333, 425
Professional ethics, 17, 81–83, 353, 354
Professional expertise, 48, 93, 148, 169, 372, 489
Professional identity, 1, 3, 15, 16, 79–81, 83, 85, 174, 230, 269, 311–313, 315, 317, 319, 323–326, 332, 333, 338–342, 346, 351, 354, 355, 357–360, 382, 407, 467, 472, 488
Professionalism
  modern, 360
  open dynamic, 41
Professional knowledge, 10, 11, 18, 30, 32, 48, 49, 52, 53, 57, 194, 328, 331, 387, 393, 405, 406, 430–432, 454, 487, 488
Professional learning, 11, 430, 432, 446, 455, 456
Professional role, 80, 83, 311, 312, 355
Professional skills, 2, 10, 29, 40, 41, 49, 52, 63, 98, 99, 132, 134, 154, 193, 194, 196, 346, 372, 381, 431, 433
Professional typology, 316–319
Project design, 257–264, 266, 282
Project objectives, 257–258, 276
Project organisation, 258
Psychometric modelling, 158

Q
Qualification frameworks, 102
Qualification levels, 97, 102, 103, 107, 129, 138, 140, 144, 257, 306, 309, 310
Qualification requirements, 5, 6, 9, 10, 19, 20, 25–27, 37, 41, 48, 49, 59, 62, 98, 103, 110, 129, 138, 141, 194, 203, 206–208, 397, 407, 429
Qualification research, 30, 41, 42, 44, 48, 51, 54, 59, 71, 103, 342, 344
Quality assurance, 1, 4, 142, 218, 224, 254, 257, 260, 264, 267, 282, 329, 335, 342
Quality control, 41, 77, 129, 211, 215, 269, 457
Quality criteria, 4, 16, 70, 126, 128, 135, 200, 201, 218, 224, 259, 269, 271, 361–365, 375, 482
Quality diagram, 268–272, 363, 364, 370, 371
Quality profile, 272, 273, 339, 363
Questionnaires
  context, 186, 232

R
Rasch model, 155, 159, 161, 162
Rater training, 79, 105, 110–115, 117, 118, 120, 127, 128, 144, 145, 150, 153, 160, 188–190, 192, 201, 218, 224, 256, 278, 337, 341, 348, 375–377, 384–388, 418
Rating procedure, 10, 78, 79, 112, 127, 150, 159–160, 162, 201, 206, 209, 212, 279, 292, 299, 338, 341, 393, 403, 421, 453
Rating results, 78, 79, 112–114, 117–122, 216, 341, 376
Rating scale, 68, 110, 112, 151, 161, 202, 211–213, 257, 279, 341, 404, 405, 410, 415, 419, 421, 489–496
Rating training, 79, 105, 111, 145, 162, 377, 421
Re-engineering, 17
Reflection on and in action, 53
Reliability, 78, 79, 105, 111, 120–122, 126, 127, 135, 136, 150–153, 156, 159–162, 178, 182–186, 188, 191, 192, 201, 202, 209, 212, 219, 277, 335, 416, 421, 482
  analysis, 120–122, 178, 183, 191, 192
  calculations, 151, 153, 376
Requirement dimension, 56, 63, 200, 285, 394, 404, 405
Research
  designs, 249, 250, 336, 340
  hypothesis-driven, 249, 250
  hypothesis-led, 250
  strategies, 248–256
Risk group, 67, 227, 281, 292, 298, 302, 303, 385, 401
Role distance, 80

S
Safety
  occupational, 58, 99, 135, 345, 464, 482, 505
  works, 345, 408, 490, 492, 498, 501, 504
Scope for design, 99, 102, 104, 138, 214, 410, 437, 439, 451, 465, 481
Scope of the task/test, 201, 225, 276, 411
Selectivity index, 132–135
Shaping competence, 7, 39, 65–67, 75, 97, 121, 122, 146, 166, 169, 190–192, 206, 220, 222–224, 244, 286, 287, 289–291, 293, 297, 298, 304, 346, 351, 384, 389, 401, 405, 418, 425, 426, 428, 446, 469
Shaping the working world, 7, 41, 143
Shapiro-Wilk test, 151
Situated learning, 261, 262, 387
Situativity, 27, 51
SK Arbeit und Technik, 40
Skills
  implicit, 10, 49
  practical, 9, 10, 49, 52, 187, 488
  professionals, 9, 10, 468
  social, 49
  technical, 10, 33, 381
  vocational, 109
Social compatibility, 28, 39, 58, 106, 120–122, 147, 161, 162, 169, 190, 192, 199, 299, 348, 351, 352, 404, 408, 428, 464, 490, 492, 500, 504
Social responsibility, 42, 505
Solution spaces, 15, 59, 102, 104, 105, 107, 109–113, 144, 201, 214, 215, 277, 278, 299, 332, 333, 347, 376, 399, 402, 440, 441, 454, 461, 462, 467, 482, 483, 490, 492, 497, 499, 502, 506, 508
South Africa, 118, 150, 244, 245, 490
Specialisation, 17, 19, 103, 339, 354, 361, 419
S-R (behavioural) theory, 41
Stagnation, 250–256, 261, 336, 341
Stagnation hypothesis, 251, 253
Standard
  educational, 13, 61, 427, 442
Studies
  hypothesis-led, 44
  professionals, 25, 37, 44
  of works, 52
Subject areas, 155, 392, 480
Subject didactics, 3, 61, 249
Sustainability, 39, 57, 59, 66, 68, 76, 106, 121, 122, 147, 161, 162, 169, 190, 192, 214, 283, 351, 404, 407, 428, 465, 478, 479, 481, 483, 484, 490, 491

T
TA, see Technical colleges (TA)
Tacit knowledge, 10, 11, 32, 49–50
Tacit skills, 10, 32
Tasks
  holistic, 55, 59, 63, 99, 105, 138, 147, 149, 195, 201, 202, 212, 215, 249, 264, 401, 406, 438
  solutions, 1, 28, 56, 57, 59, 66, 76, 97–99, 102, 104, 105, 107, 110, 111, 115, 123, 127, 128, 143–145, 147–149, 151, 159–166, 168, 169, 199, 201, 212–216, 229, 236, 249, 256, 264, 272, 277, 282, 288, 298, 341, 351, 375–377, 388, 401, 406, 409, 411, 412, 433, 438–439, 443, 444, 450, 451, 453, 461–462, 466–468, 478, 491, 492
Taylorisation, 6
Taylorism, 82, 343
Teacher evaluation, 326, 327, 374
Teachers assessment, 138, 269
Teaching-learning process, 427, 447–459
Teaching-learning research, 332–338, 340, 342
Teaching quality, 269, 368–370, 374, 380
Technical colleges (TA), 104, 144, 225, 226, 253, 264, 280, 291, 301, 376, 382, 385, 389, 415
Technicians mechatronics, 17, 101, 219–224, 242, 251, 261, 262, 380, 396, 418
Technological and economic change, 197
Technology assessment, 23
Technology design, 23, 24
Technology genetics research, 23
Technology impact assessment, 23
Test arrangements, 3, 101–104, 107, 109, 257, 395
Test concept, 70, 99, 143, 144
Test design, 128, 261
Test format, 1, 96, 99, 102, 104, 123, 132, 224, 335
Test group
  primary, 102, 110, 129, 257
  secondary, 257
Test motivation, 225–248, 276, 277, 340
Test participants, 64, 98, 102, 111, 112, 117, 122–124, 127, 129, 130, 138, 142–144, 186, 202, 213, 225, 226, 229, 230, 235, 238–240, 244, 247, 248, 251, 258, 261, 264, 265, 268, 274–280, 283, 285–287, 290, 300, 306, 340, 347, 348, 354, 356, 357, 362, 363, 372, 374, 376, 384, 385, 390, 410, 412, 418
Test population, 104, 120, 123, 125, 143, 144, 261, 264, 266, 384
Test quality criteria, 10, 126, 128, 132
Test results
  interpretation of, 64, 268, 276, 279, 340
  representation of, 290, 297, 354
Test tasks
  authenticity/reality reference, 98
  criteria-oriented, 99, 102, 136, 137
  difficulty of, 64, 98, 102, 110, 120, 123, 124, 135, 138, 142–144, 146
  evaluation and choice of test tasks, 105–125
  norm-based, 102
  representativeness of, 72, 98
  revision of, 110, 124, 125
  selection and development, 72
Test theory
  classical, 134, 156
  probabilistic, 134, 156
Total point values, 117, 138, 139, 202
Total score (TS), 120, 130, 138, 139, 202, 216, 218–221, 226, 228, 229, 234–237, 262, 281, 286, 293, 294, 297, 299, 300, 306, 308, 310, 337, 348–350, 356, 357, 363, 365, 371, 445, 517
Training
  objectives, 67, 109, 192, 200, 260, 392, 451
  paradox, 346, 430–432, 447, 455
  practical, 18, 30, 45, 49, 53, 57, 63, 76, 79, 100, 140, 151, 158, 187, 326, 373, 382, 383, 398, 408, 428, 433, 435
  programmes, 9, 100, 101, 103, 104, 253, 263, 265–267, 282, 301, 306, 309, 310, 323, 335, 382, 396, 397, 416, 425, 428
  qualities, 238, 248, 268–271, 279, 324–327, 333, 334, 341, 362, 363, 365, 366, 371, 378, 381
  regulations, 9, 20, 25, 29, 71, 80, 101, 120, 142, 194, 199–202, 206, 207, 213, 224, 300, 339, 391, 396, 406, 433
  support, 268, 269, 272, 325, 326, 328, 329, 367, 368
Typology of occupations, 86

U
Understanding of (the) context, 20, 22, 100, 299
Utility values, 57, 76, 77, 213, 214, 268, 345, 351, 352, 424, 453, 454, 461, 463

V
Validity
  consensus, 91, 187
  constructs, 128, 130, 131, 134, 154
  contents, 69–72, 97, 98, 101, 102, 110, 112, 125, 128–130, 135, 136, 155, 157, 159, 160, 166, 168, 169, 187, 209, 258, 335, 406, 422
  curricular, 9, 19, 169, 259, 260, 264, 406
  occupational, 110, 259
  professionals, 98, 112, 134, 142, 258, 259, 406
  vocational, 101
VDI, 98, 99
VET, see Vocational and educational training (VET)
Vocational and educational training (VET), 1, 22, 45, 62, 63, 68, 69, 71, 144, 155, 157, 160, 169, 192, 198, 206, 224, 254, 258, 260, 264, 300, 301, 309, 333, 334, 337, 344, 352, 381, 385, 418, 442, 445, 446
Vocational development, 355, 425
Vocational education, 1, 3–7, 11, 13–19, 24–26, 28, 30, 40–42, 44, 45, 49, 50, 59, 61–67, 69, 70, 72, 73, 80, 82, 126, 128, 130, 131, 134, 136, 141–145, 154–157, 163, 169, 171, 184, 185, 187, 197, 206, 214, 216, 218, 224, 230, 248, 249, 257, 258, 260, 262, 263, 268, 275, 276, 278, 282, 283, 293, 296–302, 306, 309–311, 323, 331–333, 335, 336, 338, 342–346, 351, 352, 354, 355, 361–363, 371–373, 383, 389, 391, 397–399, 403, 406–408, 423–428, 438, 441, 442, 446, 455, 457, 462, 470, 482, 488
Vocational identity, 1, 62, 80, 172, 174, 181, 183, 184, 323, 333, 339, 354, 355
Vocationalisation of higher education, 18, 352
Vocational learning, 4, 16, 18, 25, 32, 43, 46, 48, 56, 63, 126, 262, 263, 266, 268, 298, 333, 339, 344, 346, 354, 374, 396, 405, 426–428, 466, 488
Vocational pedagogy, 29, 402, 431
Vocational research, 17, 80, 339, 360
Vocational schools, 7, 39, 40, 101, 102, 104, 123, 144, 151, 195, 208, 226, 247, 248, 251–253, 256, 258, 263, 264, 267–272, 274–276, 278, 286, 291, 297, 299, 301, 324, 339, 344, 356, 362, 364, 372–374, 376, 378, 380–384, 391, 393, 396–399, 402, 405, 408, 416, 433, 436, 445, 468, 471
Vocational school teachers, 138, 186, 389, 391, 393, 394, 396–398, 401, 402, 406, 411, 413, 414, 416, 417, 421, 422
Vocational skills, 8, 16, 55, 110, 132, 151, 187, 194, 207, 455
Vocational tasks, 55, 59, 128, 298, 398, 411, 414, 425, 438, 444
Vocational training practice, 49, 143, 260, 282, 336, 344, 401
Vocational training systems, 2, 19, 71, 88, 89, 141, 185, 309, 342, 380, 428

W
Work
  culture, 84
  design, 6, 16, 20, 59, 68, 72, 403, 408
  design and organisation, 58, 59, 214
  ethics, 17, 81–83, 85–89, 172, 184, 311, 313, 315, 319, 339, 346, 351–355, 357, 358, 361
  experiences, 27, 28, 30, 45, 50, 51, 67, 80, 91, 343, 373, 430, 431, 434, 437, 447, 449, 451, 453, 464, 468, 471, 490
  industrial technical, 5, 51, 74, 91, 193
  morale, 81–82, 174
  organisation, 6, 11, 20, 21, 129, 466, 487, 490
  organised skilled, 16
  process, 6, 9, 18, 25, 27, 30–37, 39, 41, 44, 46, 57, 58, 66, 72, 73, 106, 121, 122, 129, 142, 157, 162, 186, 187, 190, 193, 197, 199, 202, 203, 206, 214, 266–268, 343, 346, 347, 352, 360, 381, 433, 436, 437, 453, 457, 469, 471, 477, 478, 481, 483, 490–492, 507
  process analyses, 39
  process knowledge, 30, 32, 45–48, 50, 51, 56, 65, 66, 102, 136, 142, 207, 256, 285, 293, 294, 297, 298, 346, 381, 387, 398, 405, 406, 429, 438, 454, 455, 464, 468, 469
  sample, 158
  secondary technical, 12
  situations, 10, 20, 31, 34–37, 39, 42, 44, 46, 48, 50, 51, 98, 346, 387, 406, 425–427, 431–435, 437–440, 488
  systemic, 344
  tasks, 9, 11, 20–24, 26–28, 31, 33, 40, 41, 43, 44, 48, 51, 56, 58, 59, 65, 71, 73, 80, 85, 91, 93–95, 98, 101, 128–130, 136, 198–200, 202, 207, 351, 361, 429, 432–433, 436, 437, 441–443, 487, 488
  unpredictable, 488
Work activity
  complete, 23
  incomplete, 23
Work and Technology, 6, 20, 24, 40, 41, 72, 197, 312, 342, 429, 487
Working contexts, 22, 26, 28, 36, 55, 100, 438
