Felix Rauner

Measuring and Developing Professional Competences in COMET

Method Manual
Technical and Vocational Education and
Training: Issues, Concerns and Prospects
Volume 33
Series Editor
Rupert Maclean, RMIT University, Melbourne, Australia
Associate Editors
Felix Rauner, TVET Research Group, University of Bremen, Bremen, Germany
Karen Evans, Institute of Education, University of London, London, UK
Sharon M. McLennon, Newfoundland and Labrador Workforce Inno, Corner Brook,
Canada
Advisory Editors
David Atchoarena, Division for Education Strategies & Capacity Building,
UNESCO, Paris, France
András Benedek, Ministry of Employment and Labour, Budapest, Hungary
Paul Benteler, Stahlwerke Bremen, Bremen, Germany
Michel Carton, NORRAG c/o Graduate Institute of International and Development
Studies, Geneva, Switzerland
Chris Chinien, Workforce Development Consulting, Montreal, Canada
Claudio De Moura Castro, Faculdade Pitágoras, Belo Horizonte, Brazil
Michael Frearson, SQW Consulting, Cambridge, UK
Lavinia Gasperini, Natural Resources Management and Environment Department,
Food and Agriculture Organization, Rome, Italy
Philipp Grollmann, Federal Institute for Vocational Education and Training (BiBB),
Bonn, Germany
W. Norton Grubb, University of California, Berkeley, USA
Dennis R. Herschbach, University of Maryland, College Park, USA
Oriol Homs, Centre for European Investigation and Research in the Mediterranean
Region, Barcelona, Spain
Moo-Sub Kang, Korea Research Institute for Vocational Education and Training,
Seoul, Korea (Republic of)
Bonaventure W. Kerre, Moi University, Eldoret, Kenya
Günter Klein, German Aerospace Center, Bonn, Germany
Wilfried Kruse, Dortmund Technical University, Dortmund, Germany
Jon Lauglo, University of Oslo, Oslo, Norway
Alexander Leibovich, Institute for Vocational Education and Training Development,
Moscow, Russia
Robert Lerman, Urban Institute, Washington, USA
Naing Yee Mar, GIZ, Yangon, Myanmar
Munther Wassef Masri, National Centre for Human Resources Development,
Amman, Jordan
Phillip McKenzie, Australian Council for Educational Research, Melbourne,
Australia
Margarita Pavlova, Education University of Hong Kong, Hong Kong, China
Theo Raubsaet, Centre for Work, Training and Social Policy, Nijmegen, The
Netherlands
Barry Sheehan, Melbourne University, Melbourne, Australia
Madhu Singh, UNESCO Institute for Lifelong Learning, Hamburg, Germany
Jandhyala Tilak, National Institute of Educational Planning and Administration,
New Delhi, India
Pedro Daniel Weinberg, formerly Inter-American Centre for Knowledge Develop-
ment in Vocational Training (ILO/CINTERFOR), Montevideo, Uruguay
Adrian Ziderman, Bar-Ilan University, Ramat Gan, Israel
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore
Pte Ltd. 2021
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by
similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface
In less than a decade, the methods of competence diagnostics in accordance with the
COMET test procedure have become an internationally established instrument for
quality assurance and quality development in vocational education and training. At its core, the methodological instruments comprise the COMET competence and measurement model, as a basis for developing test and examination tasks, and the rating procedure for evaluating task solutions. The insight that solving and working on tasks in the working environment always involves a situational solution space, as well as extensive scope for creativity at the social level, both of which must be exploited, was translated into the format of open, complex test tasks. The insight that professional tasks must always be solved completely, if only for reasons of occupational safety, health protection, environmental and social compatibility, and not least because of the qualitative competition to which companies are exposed, justifies the theory of the holistic solution of professional tasks.
Following the successful psychometric evaluation of the COMET competence and measurement model by Thomas Martens and Birgitt Erdwien in 2009,
the COMET project developed into an international research and development
network with projects encompassing numerous industrial-technical, commercial
and personalised service occupations. In particular, the cooperation projects with
the COMET consortia in China, headed by Professor Zhao Zhiqun and in
South Africa, supported by the “Sector Education and Training Authority
merSETA” and a group of doctoral students, have contributed to expanding and
profiling internationally comparative vocational education research in intercultural
teaching and learning research.
Thanks to Professor Martin Fischer’s initiative, an interim report on COMET
research was presented at a conference at KIT in October 2013 in keeping with the
slogan: “COMET under the microscope”. The documentation of presentations and
discussions by COMET experts in practice, educational administrations and voca-
tional training research and, above all, the exchange of experience with colleagues
who evaluated the COMET project from an overarching external vocational educa-
tion and training perspective, contributed to a conference result that had a lasting
effect on the further development of the COMET methodology (Fischer, Rauner, &
Zhao, 2015). Above all, the criticism that the COMET competence model only
covers conceptual and professional planning competence but not practical skills,
decisively contributed to the further development of the competence and measure-
ment model. Meanwhile, an extended measurement model has been developed as a
foundation for conducting competence-based examinations, including their
“practical” part.
This manual responds to a frequently expressed request to bring together the methods developed and tested in the COMET projects in a single method manual.
Such extensive work has only been possible with the participation of the large
number of colleagues who have contributed to the development of these methods.
The spectrum of the documented methods ranges from the development of test tasks
to the performance of pre-tests, cross-sectional and longitudinal studies, the devel-
opment and evaluation of scales for measuring professional and organisational
identity and the commitment based thereon, culminating in the development of
context analyses to form a procedure for measuring test motivation. Particular
importance is attached to the presentation and exemplary illustration of the methods
of psychometric testing of the competence and measurement model, as well as the
scales and models of context analysis.
In hardly any other field of vocational education and training research is the
participation of teachers and trainers in the research process as indispensable as in
competence diagnostics. This is one of the main findings of COMET research in
recent years.
I would, therefore, like to thank the numerous project groups that have so far
played a very decisive role in the implementation of projects in an increasing range
of occupations and specialist areas in initial vocational training, technical colleges
and higher technical schools, as well as tertiary vocational training courses. This
applies above all to the evaluation of the wide variety of solutions to the test tasks,
the didactic evaluation of the rating scales and the interpretation of the test results,
whereby the latter requires intimate knowledge of the respective teaching and
learning contexts.
My thanks also go to the Manufacturing, Engineering and Related Services Sector Education and Training Authority (merSETA) in South Africa, which supported
the translation of the handbook from German to English, and the Institute for Post-
School Studies at the University of the Western Cape, South Africa, under the
leadership of Prof Joy Papier who, together with Dr. Claudia Beck-Reinhardt,
managed the book translation project.
In the first part, the manual introduces the COMET competence and measurement
model in three introductory chapters. The fifth chapter describes the methods of
developing and evaluating test tasks. The sixth chapter provides a detailed insight
into the psychometric evaluation of the test instruments using practical examples.
Chapters 7 and 8 document the steps required for planning, conducting and
evaluating the tests.
Chapter 9 presents the contribution of COMET competence diagnostics to
teaching-learning research. Once the participation of teachers and trainers in the
“student” tests had led to new findings regarding the transfer of the professional
competence profiles of teachers/lecturers of vocational subjects (LbF [TPD]) to their
students, COMET competence diagnostics was also developed for LbF (TPD).
Chapter 10 shows which methods of competence diagnostics and development for LbF (TPD) are available for the implementation of large-scale projects and for the training and further education of LbF (TPD).
The concluding eleventh chapter deals with the issue of the application of
COMET instruments for the design, organisation and evaluation of VET processes,
which is regarded as crucial from the perspective of VET practice.
I hope that this methodological manual will provide a handy toolkit and therefore
a powerful boost to quality assurance and development in vocational education and
training.
Furthermore, I would like to thank several colleagues for (co-)drafting specific
chapters: Joy Backhaus (Sect. 5.6.2), Thomas Martens (6.1 and 6.3), Johanna
Kalvelage and Yingyi Zhou (6.4), Rongxia Zhuang and Li Ji (6.5), Jürgen Lehberger
(10.7, 10.8 and Chap. 11) as well as Karin Gäumann-Felix and Daniel Hofer (11.3).
Additionally, I thank the many contributors who were involved in the realisation of
this book in various ways: Martin Ahrens, Nele Bachmann, Birgitt Erdwien, Jenny
Franke, Jenny Frenzel, Bernd Haasler, Ursel Hauschildt, Lars Heinemann, Dorothea
Piening and Zhiqun Zhao.
Series Editor's Introduction

This groundbreaking volume by Professor Felix Rauner, Measuring and Developing Professional Competences in COMET: Method Manual, is the latest book to
be published in the long-standing Springer Book Series “Technical and Vocational
Education and Training”. It is the 33rd volume to be published to date in the TVET
book series.
This is an important book on an important topic and will no doubt be widely read
and respected. Through its eleven chapters, the volume comprehensively and
critically examines and evaluates key aspects of measuring and developing profes-
sional competencies (COMET). As Professor Rauner points out, in less than a
decade, the methods of competence diagnostics, in accordance with the COMET
test procedure, have become an internationally established instrument for quality
assurance and quality development in vocational education and training.
The book focuses particularly on examining what teachers and trainers can learn from modelling and measuring vocational competence and vocational identity development for the design and organisation of vocational training processes, and on whether test tasks and learning tasks are related to each other and what distinguishes them from each other.
Professor Felix Rauner is very well qualified to write this important and timely
book since he is widely regarded and respected as being an outstanding, widely
influential researcher, author and opinion leader working in the area of education,
with particular reference to technical and vocational education and training (TVET).
Professor Rauner is based in Germany, working for many years at the Institut
Technik und Bildung, University of Bremen. He has published very widely in the
field of TVET, including being co-author of the widely used and highly respected
comprehensive (1103-page) Handbook of Technical and Vocational Education and Training Research,
published in the Springer International Library of Technical and Vocational Educa-
tion and Training. That Handbook is published in both English and German.
In terms of the Springer Book Series in which this volume is published, the
various topics dealt with in the series are wide ranging and varied in coverage, with
an emphasis on cutting edge developments, best practices and education innovations
for development. More information about this book series is available at http://www.springer.com/series/5888.
We believe the book series (including this particular volume) makes a useful
contribution to knowledge sharing about technical and vocational education and
training (TVET). Any readers of this or other volumes in the series who have an idea
for writing their own book (or editing a book) on any aspect of TVET are enthusiastically encouraged to approach the series editors, either directly or through Springer, to publish their own volume in the series, since we are always willing to assist prospective authors in shaping their manuscripts in ways that make them suitable for publication in this series.
Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 The Possibilities and Limitations of Large-Scale Competence
Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Modelling Professional Competence . . . . . . . . . . . . . . . . . . . . . 3
1.3 The Format of the Test Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Modelling and Measuring Professional Identity and Professional
Commitment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Modelling and Measuring the Competence of Teachers in
Vocational Subjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 The Quality Criteria for Professional Competence Diagnostics
and the Design of Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Professional Competence as a Subject of Competence
Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 Design Instead of Adaption . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 The Possibilities and Limitations of Large-Scale Competence
Diagnostics (LS–CD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 Implicit Professional Knowledge
(Tacit Knowledge) . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.2 Professional Competence (Employability) . . . . . . . . . 10
2.2.3 Craftsmanship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.4 Social and Key Competences . . . . . . . . . . . . . . . . . . 11
2.2.5 Abilities That Are Expressed in the Interactive
Progression of the Work . . . . . . . . . . . . . . . . . . . . . . 12
3 Categorial Framework for Modelling and Measuring Professional
Competence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1 The Occupation Type of Societal Work . . . . . . . . . . . . . . . . . . 16
3.1.1 Employability or Professional Competence . . . . . . . . . 18
3.1.2 Architecture of Parallel Educational Paths . . . . . . . . . 18
3.1.3 Professional Validity of Competence Diagnostics . . . . 19
¹ Note on the spelling of KOMET/COMET: The spelling COMET has been applied since the international COMET conference organised by the European Training Foundation (ETF) in 2010.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects 33,
https://doi.org/10.1007/978-981-16-0957-2_1
possibilities, their environmental compatibility, their practical value and their invest-
ment and follow-up costs, and integrating this project into the operational work plan,
already shows that professional competence is characterised by exploiting the scope
for solutions and design in line with each specific situation, taking into account
competing criteria (and values). In general, this requires the development of com-
petence diagnostics guided by the realisation that professional specialists are
involved in the execution of large and small tasks in design and responsibility
processes that always involve the search for intelligent compromises. A bicycle mechanic, for example, is distinguished by the ability to find out, in conversation with a customer, which bicycle configuration might suit that customer best.
Thomas Martens and Jürgen Rost therefore appropriately classify the measurement of professional competence in the context of the competence-diagnostic
discussion, in which they state ‘In the COMET project, an ability model is examined.
The aim is to model how testees, whose solutions have different degrees of devel-
opment, can cope with open professional tasks’ (Martens & Rost, 2009, 98).
Whether and how it is possible to establish solid international comparative
competence diagnostics, both in terms of content and in accordance with psycho-
metric criteria, in a world with very different vocational training systems and, in
addition, with open test tasks, requires convincing answers, if only because the abundance of problems to be solved seems to be an insurmountable hurdle (cf. Baethge, Achtenhagen, Babie, Baethge-Kinsky, & Weber, 2006).
The successful empirical review of the COMET test procedure (2007–2012) resulted in a competence and measurement model that gives competence diagnostics access to the complex world of occupations, occupational fields and the almost unmanageable diversity of vocational training courses and systems.
However, this methodological manual deals not only with the fundamental questions
of modelling vocational competence and the psychometric evaluation of the
COMET test procedure, but also with the following topics:
A mere glance at the job descriptions of occupations shows that numerous profes-
sional competences can be measured with an acceptable amount of effort using
methods of competence diagnostics. Those professional skills that can be easily
recorded empirically and those that can only be recorded empirically with greater
effort are described in the first chapter. The third and sixth chapters describe how
methods of competence diagnostics can be used to improve the quality of tests.
The concept of the test tasks and their development procedure is one of the acid tests
that prove in practice whether the test procedure can be applied beyond a national
framework. Prior experience with the COMET project shows that the participation of
the countries involved in international comparison projects primarily depends on
whether the subject didactics or the subject teachers and trainers assess these as
representative and as valid for the respective occupation or training course, even if
they have not participated in the development of the test tasks. Chapter 5 is devoted
in detail to the complex questions of the COMET test arrangement.
A special feature of the COMET test procedure is the modelling and ‘measuring’ of
professional identity and professional commitment. In addition to measuring moti-
vation as a variable that is used to interpret the measured competence, this is a central
concern (objective) of vocational education and training. In vocational education, the
development of professional competence and professional identity is regarded as an
interdependent, indissoluble relationship (Blankertz, 1983). The expansion of the
competence and measurement model to include this aspect of professional development is
dealt with in Sect. 4.6.
After the participation of teachers and trainers in the ‘student’ tests had led to new
insights into the transfer of teachers’ professional competence profiles to their
students, the COMET competence diagnostics method was also developed and
tested for teachers of vocational subjects (LbF [TPD]). The eighth chapter describes
and explains the COMET competence and measurement model. The current state of
research shows that a toolkit is now available for large-scale projects as well as for
the training and further education of LbF [TPD].
The determination of the objectives of vocational education and training has always
been characterised by the tension between the educational objectives aimed at the
development of the personality and qualification requirements of the world of work
and the (training) objectives derived from these. In the vocational education discussion, a large number of attempts have been made to resolve this tension in the form of a holistic vocational training concept (Heid, 1999; Ott, 1998). The tradition of mastery
(in the broader sense) is often referred to as an example of holistic vocational
training. Richard Sennett has examined the social-historical and philosophical
roots of mastery in his book The Craftsman (German edition: Handwerk) and has attributed to it a significance that
goes far beyond institutionalised craftsmanship by opposing mastery to the world of
fragmented skills (Sennett, 2008). The emphatic formula of ‘education in the
medium of occupation’ probably best represents the ever-new attempts to reconcile
education and qualification (Blankertz, 1972). With his deskilling thesis, Harry Braverman classifies such attempts as idealistic misconceptions of a working world that, at least in industrial work with its progressive mechanisation of human labour, and under the conditions of capitalist value realisation, is based on the principle of deskilling (Braverman, 1974). In the
sociological studies initiated by the Federal Institute for Vocational Education and
Training Research (BBF) in 1969 on changes in qualification requirements in
industrial technical work, the authors confirm this thesis or modify it to form the
so-called polarisation thesis, according to which the larger number of those deskilled
is contrasted with the smaller number of those more highly qualified: the winners of
rationalisation (Baethge et al., 1976; Kern & Schumann, 1970). This stance can
occasionally be found in more recent contributions to the discussion. Nico Hirtt sees
the Anglo-Saxon tradition of ‘competency-based education’ as an expression of the
¹ Member of an IBM working group on the development of an open architecture for integrated information systems in the manufacturing industry (1984/85).
themselves and also address the scope of their project. ‘It cannot be overemphasised
that PISA has no intention of measuring the horizon of modern general education. It
is the strength of PISA in itself to refuse such fantasies of omnipotence. . .’ (Baumert
et al., 2001, 21).
A comparable attempt at the education system’s technological renewal was
already pursued by the federal and state governments with the educational technol-
ogy reform project of the 1970s. The development and testing of computer-
supported forms of teaching and learning determined the fantasies and attempts to
substitute and program teaching work for more than a decade (cf. BLK 1973, 75).
For a while, it seemed possible to objectify educational processes and make them
technologically available. The Society for Programmed Instruction (GPI) and Cybernetic Pedagogy promised to liberate educational systems and processes from pedagogy as an art: an art that educational policy had so far been unable to standardise, that many educators somehow possess to varying degrees, and that had hitherto eluded all attempts at rationalisation (Frank, 1969). However, the attempts, renewed with every new information technology, to bring educational-technological change to pedagogy have lost their force, since the ever faster succession of failed IT-supported educational reforms (most recently centred on the Internet) has contributed to the insight that the educational-technological control of educational processes may have been a fixed, and also expensive, idea from the very beginning (Heinze, 1972).
It is foreseeable that future attempts to control education systems through mea-
surable outputs and inputs will also fail, since the more important educational goals
and contents will evade ‘input/output didactics’ shaped by economic calculation
(Young, 2009).
The hasty conclusions of education experts who seek to control educational processes via standards whose attainment can also be measured in the form of large-scale assessments reduce education to the measurable. Herein lies the affinity to the educational technology reform project. The excessive expectations placed on large-scale competence assessment as a comprehensive pedagogical reform idea are also problematic, whereas a realistic assessment of the pedagogical-didactic and educational-policy potential of competence diagnostics can significantly enrich the actors' toolbox.
• Whether the skills defined in a job description and in the corresponding training
regulations are mastered in terms of qualification requirements,
• Whether the required competency is achieved.
This requires differentiation according to
• Abilities/qualifications that must be fully and safely mastered, as they may be
relevant for safety reasons,
• Skills/qualifications that have to be mastered to a certain degree and finally
according to,
• Skills/qualifications that are not core qualifications and are therefore classified as
more or less desirable (Table 2.1).
An examination must include all qualifications and requirements relevant to
employability. Practical skills must necessarily be tested in real professional situa-
tions (situational testing). In contrast, cognitive dispositions in the form of action-
guiding, action-explanatory and action-reflecting knowledge of work processes can
be tested with standardised examination methods.
The COMET method of competence diagnostics, which identifies competence levels and competence profiles and carries out comparative competence surveys with the aim of comparing educational programmes and education systems, goes far beyond examinations in the context of regulated vocational training programmes and the testing of 'learning success' against the learning objectives defined in a specific curriculum.
In particular, international comparative LS-CD projects do not primarily define
the contextual validity of the competency survey in curricular terms, since it is an
essential goal of competence research to gain insights into the strengths and weak-
nesses of national educational structures (including curricula). In line with the
International World Skills (IWS), the contextual validity of the test tasks in the
Implicit skills can be observed, and their quality can be assessed in the performance
of professional activities and, above all, on the basis of work results. Although they
are largely beyond an explicit technical description and explanation, they are often of
central importance for professional ability and therefore also subject to examina-
tions. The rating procedures developed in the COMET project also allow the
determination of tacit skills.
2.2.3 Craftsmanship
Social skills play a very important role in vocational work and thus also in vocational
education and training. It is controversial whether social skills can be measured as
‘key’ skills across all occupations. According to Jochen Gerstenmaier, research into
learning and expertise disproves the thesis of devaluing knowledge in terms of
content in favour of general skills such as ‘problem-solving’. However, it can be
shown that the competence to solve problems is based on domain-specific knowl-
edge (Gerstenmaier, 1999, 66, 2004, 154 ff.).
Grob and Maag Merki have made an interesting attempt to approach the empirical
survey of interdisciplinary competences. They measured interdisciplinary compe-
tences on the basis of a large number of scales (Grob & Maag Merki, 2001), not as
‘key competences’, but rather as competences that promote the professional execu-
tion of work tasks at a general level. It is indisputable in this context that, for
example, professional work necessarily involves cooperation with other specialists
from the same community of practice and with experts from related fields and thus
represents a central dimension of professional competence. Within the framework of
the COMET project, the context survey provides information on the concepts of
professional cooperation among respondents.
These abilities are based on the type of creative action—in contrast to the type of
purposeful action (cf. Brater, 1984). According to Brater, artistic action is the
prototype of this form of action. The results of this type of action can only be
anticipated to a limited extent in terms of planning and concept.
Especially in the area of secondary technical work (maintenance, troubleshooting,
etc.), ‘. . . the situation must be turned into opportunities, ideas must be born,
solutions must be found. Here it is not adherence to plans but originality that is
required’ (Brater, 1984, 67). The established forms of measuring professional
competence reach their limits here, which are given by the open form of working
processes. This applies in particular to occupations with a highly intersubjective
share, e.g. in the education and healthcare sector. The interactive aspect of profes-
sional work can, to a certain extent, be covered by the open structure of the LS-CD
test tasks or by a rating based on observations.
Chapter 3
Categorial Framework for Modelling and Measuring Professional Competence
Fig. 3.1 On the relationship between guiding principles of vocational education and the measurement of vocational competence
In addition, Weinert proposes that personality traits lying outside the definition of competence as a domain-specific cognitive performance disposition should not be included in competence models, or should be modelled separately.
Additional requirements for competency models are listed as follows:
• The suitability of the competence model for developing test procedures and for
recording and evaluating learning outcomes in relation to educational monitoring,
school evaluation and feedback to teachers/trainers and students/trainees on
levels and development of competence,
• Its suitability for international comparative competence surveys.
For vocational training, the following requirements are also indispensable:
• The application of open complex (holistic) test tasks, since only these can serve to
map real professional tasks;
• The solution or processing of professional tasks usually requires weighing up alternative solutions, which can be assigned to a solution space staked out by defined requirement characteristics ('staked out', since the solution spaces must in principle remain open for unforeseeable solutions; COMET Vol. I);
• The recording of professional commitment and professional identity as a dimen-
sion of the professional development process that is equivalent to the develop-
ment of professional competence;
• A competence assessment that also allows comparisons across educational
programmes. This results in the demand for a competency model that regards
vocational education and training as one area of learning and which at the same
time enables vocational and occupational field-specific implementation;
• A distinction from the tradition of assessment as a form of outcomes verification,
as used to determine credit points for the skills defined in the national certification
In the discussion on the professional organisation of societal work in the 1990s, the
assessments of social historians and sociologists in particular consolidated into the
thesis of the erosion of professionalism.
As early as 1972, Giddens voiced the expectation that new forms of democratic participation would gradually emerge and that civil society would replace the working society. Kern and Sabel published a committed plea against professionally organised skilled work, arguing that the occupational form of work, especially in industrial production, leads to demarcations within companies which hinder the flexibilisation needed in modern companies. Adherence to the tradition of professional skilled work at best enables companies to reproduce what already exists, but not to innovate. As a way out, they recommend the Japanese model of company organisational development, which does without the occupational form of work and thus also without a vocational education system (Kern & Sabel, 1994). Finally, biographical research has coined the term 'patchwork biography' to draw attention to an erosion of professionally organised wage labour (cf. Beck, 1993). Ulrich Beck and others argue in this context for a reflexive modernisation with which the division into instrumental economic action and communicative political action can be overcome (Beck, Giddens, & Lash, 1996).
At the latest with the publication of Richard Sennett's book The Corrosion of Character (Sennett, 1998), this discussion took a turn. Sennett deals with the
Under the heading 'Perspectives of the rescue, regeneration and future consolidation of an (also) professionally accentuated organisation of societal work', Lempert concludes:
By including the academic professions, the nightmare vision of a total ‘disposal’ of the
professional principle would become absurd and be banished from the outset (ibid., 463).
The concept of open, dynamic careers and core occupations (Heidegger &
Rauner, 1997) has meanwhile also emerged in the European Vocational Education
and Training Dialogue as a guiding principle with an impact on vocational research
and development. One prominent example is the development of the ‘European’
profession of ‘motor vehicle mechatronics technician’ (Rauner & Spöttl, 2002).
teaching and research. The entitlement to award the degrees of bachelor, master and PhD lies, internationally, with the universities. These include, in a more or less differentiated way, qualifications and training courses in vocational education and training. The barrier between vocational and academic education is high; it almost hermetically separates the two worlds of education: academic-scientific education and execution-oriented vocational education.
All attempts to make this educational architecture more accessible have led to the
vocationalisation of academic and vocational education and training and therefore to
a development that impairs the quality of both educational traditions. In contrast, an
architecture of parallel educational pathways holds the potential for a new quality of
vertical permeability and the realisation of the equivalence of vocational and aca-
demic education. At the same time, the establishment of a continuous path of dual
education creates a new dynamic in the interaction between the education and
employment system. The underlying idea is a concept of modern professionalism, a necessary basis for implementing an architecture of parallel educational paths. Even if
the constitutional freedom of teaching and research protects universities from
aligning their teaching with the qualification requirements of the employment
system, it can be expected that the occupational profiles developed in the processes
of vocational training planning will trigger a new discussion on professionalisation
in academic education. This could also contribute to a significant reduction in the
proliferation of specialisation in degree programmes and to the participation of
organisations in the world of employment in the design and organisation of (dual)
vocational training courses at universities (Rauner, 2015a), modelled on the Voca-
tional Training Act.
Assuming that the internationalisation processes cover not only the academic pro-
fessions, but also the professional organisation of work in the intermediary employ-
ment sector, then there is every reason to identify professional work as the reference
point for substantiating the validity of competence diagnostics in the field of
vocational education and training (→ 4.7).
The curricular validity of tests would limit their function in the investigation of
different forms of vocational education and training, including the quality of voca-
tional curricula. On the other hand, test tasks whose contextual validity is based on
reference to vocational work (vocational validity) make it possible to identify
strengths and weaknesses of various vocational training systems and arrangements.
In particular, it is possible to check whether trainees/students have a vocational work
concept (Bremer, 2006) as well as vocational qualification upon completion of their
vocational training—and not just technical and functional knowledge and skills, as is
taught in typical (university) forms of vocational training or in traditional basic
vocational training. The professional fields of action therefore apply to
Table 3.1 Characteristics of task design based on Emery and Emery (1974), Hackman and Oldham (1976) and Ulich (1994, 61)

Holistic character
  Assumed effects: employees recognise the importance and value of their work; employees receive feedback on their own work progress from the activity itself.
  Realised by: tasks with planning, executing and controlling elements and the possibility of checking the results of one's own activities for compliance with requirements.

Variety of requirements
  Assumed effects: different skills, knowledge and abilities can be applied; one-sided demands can be avoided.
  Realised by: tasks with different demands on body functions and sensory organs.

Possibilities for social interaction
  Assumed effects: difficulties can be overcome together; mutual support helps to cope better with demands.
  Realised by: tasks whose accomplishment suggests or presupposes cooperation.

Autonomy
  Assumed effects: strengthens self-esteem and willingness to take responsibility; provides the experience of not being without influence and meaning.
  Realised by: tasks with disposition and decision possibilities.

Opportunities for learning and development
  Assumed effects: general mental flexibility is maintained; vocational qualifications are maintained and further developed.
  Realised by: problematic tasks for which existing qualifications must be used and extended or new qualifications acquired.

Time elasticity and stress-free adjustability
  Assumed effects: counteracts inappropriate work consolidation; creates leeway for stress-free thinking and self-chosen interactions.
  Realised by: creating time buffers when setting target times.

Sense of purpose
  Assumed effects: makes employees feel involved in the creation of socially useful products; provides certainty that individual and social interests are in harmony.
  Realised by: products whose social benefits are not questioned; products and production processes whose ecological harmlessness can be checked and guaranteed.
advantages, these concepts have found their way into operational organisational
development (Ganguin, 1992).
The identification of professional work tasks must therefore consider the normative aspects of professional development and of work organisation, as well as the interplay between the two. Hacker comes to a similar conclusion in his analysis of diagnostic approaches to expert knowledge:
As a preliminary consequence for the diagnosis of knowledge, it seems advisable to consider
a paradigm shift from [...] a reproducing to a (re-)constructing process of the task-related
performance prerequisites with individual and cooperative problem-solving and learning
offers for the experts (Hacker, 1986, 19).
Fig. 3.3 Professional work in the field of tension between work contexts and work practices
(Rauner, 2002a, 31)
Professional work tasks can be divided into subtasks. Subtasks are characterised by the fact that their meaning for the employee is derived not from the subtasks themselves, but only from the context of the higher-level work task. If the subtasks of a superordinate task are delegated to different persons who do not work together in a working group, the employees lose sight of the working context. Under this organisational model, the division into subtasks dissolves the work context not only organisationally, but also in the employees' subjective perception (as an understanding of context) and subjective experience.
In this context, occupational science primarily deals with questions of order and
condition analysis, the division of human–machine functions and, above all, with
questions of stress and less with the aspect of professionally organised work as a
point of reference for educational processes. Therefore, a detailed subdivision of work tasks into subtasks, work actions and occasionally, beyond that, into operations can be quite appropriate when carrying out empirical work analyses. In VET
research, on the other hand, if tasks and work actions become context-free reference
points for the design of VET plans and processes—detached from the work context
(Fig. 3.3)—this induces decontextualised learning that stands in the way of teaching
VET competence aimed at understanding and shaping the world of employment (see
Connell, Sheridan, & Gardner, 2003).
Scientific interest in the working process is also directed towards the structure of
the complete working process. The vocational pedagogical and occupational scien-
tific interest in this occupational scientific concept is based on its normative inter-
pretation through design-oriented occupational science: Employees should learn to
plan, carry out and evaluate their work (cf. Table 3.1). Accordingly, a professional activity confined to execution alone is an incomplete work activity. As a pedagogical category, however, the term 'complete work activity' is only suitable if the meaning- or content-related aspect of the work activity is not excluded. In this
context, Frieling refers to the limited range of standardised analytical methods as developed by McCormick (1979), Frei and Ulich (1981), Volpert, Oesterreich, Gablenz-Kollakowicz, Krogoll, and Resch (1983) and other occupational scientists.
Although these instruments could be used as a structuring aid for recording essential
aspects of work activity (Frieling, 1995, 288), the abstract formulation of the items is
unsuitable for the analysis and evaluation of concrete work contents in their signif-
icance for the working persons (Lamnek, 1988). This critical assessment is of central
importance for the design of vocational curricula and vocational training processes.
A further source for the educational theoretical development of a vocational
competence concept is the work of the VDI on technology assessment and the
corresponding philosophical discussion on the ethics of technology. An essential
aspect of technology assessment is the technology impact assessment, which is
oriented towards policy advice (Ulrich, 1987). The concept of technology assess-
ment already has the potential to be expanded by technology genetics research and
the concept of technology design (Sachverständigenkommission Arbeit und
Technik, 1986). In its guideline on technology assessment, the VDI committee 'Fundamentals of Technology Assessment' states: 'Technology assessment here means
the planned, systematic, organised procedure that [...] derives and elaborates options
for action and design [from the assessment of technical, economic, health, ecolog-
ical, human, social and other consequences of technology and possible alternatives]’
(VDI, 1991).
In this guideline developed by the VDI, technology is understood as an objecti-
fication of values and related interests. In this case, the quality of ‘responsible’
technical development is assessed with reference to the overriding criteria of personality development and the quality of social development. Six 'values in technical action' can be assigned to these superordinate values (Fig. 3.4).
These are
• Functionality (usability, effectiveness, technical efficiency),
• Economic efficiency (in line with individual economic profitability),
• Prosperity (in line with macroeconomic benefit),
• Security (for individuals and humanity),
• Health (well-being, health protection),
• Environmental quality (natural and cultural components) (ibid., 7 ff.).
The second root of a ‘technical education’, used to establish the connection
between the technically possible and socially desirable (Rauner, 1986), is the
discussion on technology philosophy, which gained momentum in parallel with
Fig. 3.4 Relationship between goals and values for petrol engines (VDI, 1991, 3–5)
‘work and technology’ research (Hastedt, 1991; Lenk & Ropohl, 1987; Meyer-
Abich, 1988). Heiner Hastedt in particular deals with the possibilities of technology
design in his research on ‘basic problems regarding the ethics of technology’
(Hastedt, 1991, 138), whereby he defines very similar evaluation and design cate-
gories as the VDI. Technology design implies not only interdisciplinarity, but new
forms of participation, according to the motto formulated by Walter Bungard and
Hans Lenk: ‘Technology is too important, now and in the future, to be left to the
technicians alone’ (Bungard & Lenk, 1988, 17).
Vocational education and training is a form of education and qualification in the world of employment as well as an intentional process of learning for the world of employment; it depends on knowledge of the expertise and skills required in the work process. Three questions need to be answered:
• What are the skills that enable ‘skilled’ workers to carry out their work
adequately?
• What other skills must they have in order to participate in the process of
operational organisational development—both within and beyond their own
area of responsibility?
• Which skills are developed in the work process itself or how should ‘learning’
and ‘qualifying’ work processes and work systems be designed?
In vocational education and training practice, some of these questions are rarely
asked because the teaching and learning content and the related educational and
qualification objectives are specified in organisational systems—the training regu-
lations with their framework training plans for in-company and (framework) curric-
ula for school-based vocational education and training. They represent the
occupational profiles in an operationalised form. However, since occupations are mostly traditional, fixed attributions of tasks for the organisation of societal work, embedded in the industrial-cultural development of regions and countries, they represent general socio-economic requirements rather than the qualification requirements arising from company organisational development. The increasingly rapid pace of technological and operational innovation in industry, commerce and the crafts requires occupational profiles and occupational regulations to be examined for their topicality and prospectivity and to be related, in their attributions of tasks, to the reality of work. Here, working reality is understood not only as empirically given, but also as something to be developed.
In the study of an occupational field (occupational field science¹), vocational scientific work studies therefore play a central role. The subject matter of these work studies is described in more detail below, and information is provided on their methodological implementation.
The vocational sciences deal with the contents and forms of skilled work in
established and developing occupations and occupational fields, with vocational
learning processes for the world of employment and with implicit and explicit
learning in the work process. In the analysis, design and evaluation of vocational
training processes and work processes that promote learning, the link must be
established between the work processes of vocational working reality, the learning
and educational processes and the systems of vocational organisation (Fig. 3.5).
Figure 3.5 shows three correlations between the world of employment and
vocational education and training. The widespread idea that in a first step, the
means of vocational classification can be derived from the analysis of the reality
of work and that the contents and forms of vocational training processes result from
this in a linear connection is called qualification determinism. This deterministic
misunderstanding is as widespread in the everyday actions of vocational educators as
it is in vocational training planning and research. On closer inspection, this linear
relationship evaporates and gives way to a differentiated, non-deterministic concept
of correlations between the three poles of the outlined relationship. This is where the
studies and development tasks for design-oriented vocational education and training can be found (→ 2.4).

¹ The more common term 'professional science' is used below.

Fig. 3.5 The relationship between professional work and education processes

Figure 3.5 illustrates two widespread reductions and deficits in
the vocational education activities of teachers and trainers and in vocational educa-
tion research.
Expert specialist workshops (EFW) are suitable for the identification of professional work tasks. This procedure is based on the 'Design A Curriculum' (DACUM) concept developed by Bob Norton at the NCRVE² (Ohio State University) in the 1980s and on a task analysis procedure tested in the Leonardo project 'Car-Mechatronic'. A two-day expert workshop is the core component of this process (→ 4.1).

² National Center for Research in Vocational Education (at Ohio State University until 1988 and then at the University of California, Berkeley).
require a process of scientific reduction of empirical data and the associated content
analyses. Their real competence as experts of their work experience would fall by the
wayside.
When planning, implementing and evaluating the EFW, it must be considered
that teleological elements cannot be avoided in the description of professional tasks
and developmental processes from beginner to expert. Professional action always
includes dealing with and weighing the environmental and social compatibility of
professional task solutions (→ 3.2). It is critical to note in this context that the logical
approach to educational research has so far been developed mainly with reference to
developmental psychology or even merges into it. For vocational education and
training and for all forms of technical education, in which the technical competence
to be imparted is expressed in educational goals, the logical approach to educational
research largely misses its central subject: the educational contents. Therefore, in the
further logical approach to the research and design of vocational work and educa-
tional processes, it is important to clearly work out the specifics of these develop-
ment processes, for example, in comparison with general education (Table 3.2).
The workshop is usually conducted by two researchers, at least one of whom has
relevant professional training—and if possible, also relevant work experience. The
‘second’ researcher acts as moderator and pays special attention to the methodical
approach and the realisation of a trusting and creative workshop atmosphere, which
enables all participating experts to contribute all their experience and competence to
the analysis process. The ‘first’ researcher leads the expert discussion, clarifies
technical contradictions and deepens the discussion through technical suggestions
and interventions.
Work process studies can be used to gain insights into the skills incorporated into
practical professional work. In the tradition of didactics in vocational education and
training, this question is rather undervalued. The widespread method is instead to derive specialist knowledge from objective scientific knowledge in a process of simplification (didactic reduction or transformation) in order to teach it to students or trainees in specialist instruction (Schein, 1973). It is assumed that this 'knowledge'
must have a connection to professional action. In this tradition, knowledge contents
in the form of ‘subject theory’ are regarded as objectively given facts whose
objectivity is based on the specialist sciences. However, the real importance of this
knowledge for practical professional action remains unclear. What we do know is
that this context-free knowledge can only be used as a basis for professional
competence when it is incorporated into concrete professional activities. Parts of this context-free theory are certainly transformed into work process knowledge in the course of professional work. Founding vocational education and training on an in-depth understanding of work process knowledge marks a fundamental change of perspective in vocational education and training practice: 'If it is possible
to find access to what constitutes the practical skills, the incorporated knowledge of
vocational work, its findings will be invaluable and exert a lasting, if not revolu-
tionary influence in many areas—for example in curriculum and evaluation
research’ (Bergmann, 1995, 271). In this regard, vocational training and work
appear in a new light. There are interesting references to historical developments
in which, for example, the art of building was based not on engineering science but on the work process knowledge of the great master builders, which had developed over centuries in a process of accumulated work experience.
Work process studies are therefore an important instrument of qualification
research for the identification of professional knowledge and skills as a basis for
competence diagnostics.
If a work process study is carried out with the aim of developing professionally valid
test tasks for competence diagnostics, then the identification of ‘important work
situations’ for professional competence development (KMK, 1999) is the focus of
research interest.
In addition to the criterion of qualitative representativeness or exemplariness of
the work process in terms of the purpose of the study, another selection criterion is
the clarity of the work process, which is given if it is possible for the researcher
qualified in vocational science to record the work situation in all essential objective
and subjective moments under the given operational framework conditions and
within the time available for examination. Finally, the work process should be
directly accessible to the researcher so that he can be present in the work situation.
The main criteria for the selection of the object of investigation are therefore
• Validity of content through qualitative representativeness and exemplariness,
• Manageability of the work process (limitation of the field of investigation while
maintaining its complexity of content),
• Accessibility of the work process (for an emphatically action- and process-oriented research approach).
The analysis of professional work processes requires professional competence on
the part of the researchers, which enables them to conduct expert talks and discus-
sions at the level of domain-specific technical language. This includes knowledge of
the work process and context to be analysed, i.e.,
• The technical issues: work object, work equipment and tools, and work processes,
• Specifications for the professional or operational work tasks in which the work
process to be examined is integrated,
• The instructions and documentation available for the execution of the
corresponding work tasks,
• The subject-systematic (theoretical) connections as far as these are of importance
for the competent working action.
This preparatory step can also include the practical handling of work objects and
tools to such an extent that the work process to be examined is technically clear for
the researcher and he can fully concentrate on the research of the concrete work
action of the actors and the specific and general competences expressed therein.

After the work process to be examined has been selected, justified in vocational-science terms and analysed from the perspective of work theory, and a suitable examination situation (company, skilled worker, supervisor, etc.) has been chosen, the actors are informed about the project and their interest in it is raised.
It should normally be in the interest of professionals and management to enable vocational work studies and to participate actively in them, since these studies aim to improve the design and organisation of work and vocational qualification.
When presenting the project, the researcher also points out his professional qualifications in the area of responsibility to be examined. This not only favours the relationship of trust that such inquiries always demand, but also defines the examination situation as one 'among experts'. The intention of the investigation, the
form of the investigation and the methodical procedure are presented to the participants. The situations to be examined are commented on from a technical perspective on the basis of the previous analysis of the objective side of the work, and the objective and interest of the investigation are justified. By emphasising the technical content of the investigation, the investigator becomes, to a certain extent, a participant in the research process. The aim is to guarantee or promote:
• The acceptance of the researcher by the study participants;
• The greatest possible scope for action and design in the investigation;
• Extensive identification of the parties involved with the investigation project;
• A definition of verbal and non-verbal communication at the level and in the quality of professional and work-related professionalism: those to be examined know what they can expect of the researchers in terms of content and that they can communicate without distorting their accustomed forms of expression;
• An emotional opening of the persons to be examined;
• A climate of trusting cooperation based on specialist professional collegiality.
The key questions for the technical discussion are formulated in advance in a
discussion guideline. Interviewing different groups of people naturally also requires
different key questions. It is important to assign the main questions to the higher-
level research questions. Which questions and combinations of questions should be
used to cover which aspects of the study? The main questions rather have the
function of a ‘checklist’, which allows the researcher to get deeply involved in the
work process to be investigated, since he can always return to the reflection level of
the more detached analyser with the help of the main questions. The researcher takes
part in the work situation and encourages the skilled worker, for example, to voice his thoughts about what he is doing at the moment by means of appropriate impulses and questions. The interview is conducted according to the situation, very close to
the work process.
Paraphrasing
If the researcher has the impression during the 'technical discussion' that an utterance remains on the surface of the work situation, is misleading or is even incomprehensible to him, it makes sense to repeat the utterance interpretively, so that the actor has the opportunity to correct, deepen or simply clarify the previous utterance.
Example
Researcher: ‘I have now understood . . .’.
Dialogue partner: ‘Not quite, e.g. if I . . .’.
Enquiries usually lead to a certain interruption of the work situation. During work situations, the researcher may also encounter technical situations in need of clarification, which he may understand in terms of their content but not in relation to the work action of the skilled worker. If the researcher now assumes that a specific work action is of
particular importance for one of his research questions, this requires a more
in-depth enquiry and, if necessary, a special expert discussion about the specific
‘case’ and its processing possibilities. Such conversational situations usually mean
an interruption of the work action. An explicit interruption of the work situation is
achieved through an intervention that the researcher initiates with remarks such as 'Just a moment, isn't what you are doing risky?' or 'Couldn't we solve the problem this way?' A technical intervention is appropriate:
• If the researcher cannot follow a work action even though he understands its technical side;
• If only an intervention can clarify why the skilled worker prefers one of several
possible work steps;
• If it seems expedient to encourage the skilled worker to apply an alternative
procedure or to play through it mentally.
Qualitative Experimentation
The experiment has a dual function in work studies. First of all, experimental testing is part of the repertoire of the theory- and experience-guided work of skilled workers. One very typical example is fault isolation in technical systems, where an experimental approach promises success. This is the case, for example, when the incremental elimination of possible error causes from a large number of candidates leads to the identification of the actual cause. This work action, in the form of the systematic and experimental isolation of an error cause, is underestimated in the relevant investigations. Skilled workers very often refer to their ‘empirical values’, yet often acquire them by experimenting in their work. This includes thought experiments.
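The incremental elimination of error causes described above can be sketched in code. The following Python fragment is purely illustrative and not part of the COMET instruments; the candidate causes and the exclusion test are invented:

```python
def isolate_fault(candidates, test_excludes):
    """Incrementally eliminate candidate causes until one remains.

    candidates    -- list of possible error causes (hypothetical labels)
    test_excludes -- function standing in for an experimental check: it
                     returns True if the experiment rules the candidate out
    """
    remaining = list(candidates)
    for candidate in candidates:
        if len(remaining) == 1:
            break  # a single plausible cause has been identified
        if test_excludes(candidate):
            remaining.remove(candidate)
    return remaining

# Invented example: four possible causes; the experiments exclude all but one.
causes = ["loose contact", "blown fuse", "faulty sensor", "software error"]
result = isolate_fault(causes, lambda c: c != "faulty sensor")
print(result)  # ['faulty sensor']
```

In real diagnostic work the ‘exclusion test’ is of course a concrete measurement or trial, not a function; the sketch only captures the logic of narrowing down a space of causes step by step.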
Another important aspect in the context of occupational scientific work studies is
the qualitative experiment. In contrast to the laboratory experiment, the researcher
creates a quasi-experimental situation in the form of explorative, heuristic experi-
mentation through his intervention in the process of action-oriented observation.
Often, detached observation and active experimentation are understood as two
opposite forms of researcher behaviour towards his ‘subject’. In reality, these two
knowledge-generating methods are mutually (dialectically) intertwined. The exper-
iment becomes significant only through precise observation and—conversely—
observation becomes significant through the systematic and systematising activities
of the researcher. There is a considerable need for development in advancing forms
of explorative and qualitative experiments for research into complex work situations.
Two forms of qualitative and explorative experiments are available.
In this case, the researcher influences the variation of two or more factors assumed to
be relevant for a typical, real work context in such a way that their effects on the
Only then does the content analysis begin. The procedures proposed here go back to Mayring (1988) and can be distinguished as summarising, explicating and structuring analyses.
The purpose of the summarising content analysis is to reduce the material in such a way that its essential meaning is retained while a compressed short text is created. In several text passages: [original sentence is incomplete] (Fig. 3.6).
The steps of the summarising content analysis shown in Fig. 3.6 are the following:

First reduction (generalising text passages with the same content): The cleaned text is structured, according to categories resulting from the research objectives and the material, into corresponding text groups. Within the groups, redundant texts are omitted, as are texts that do not have the same content.

Second reduction (summary of the texts): In the text groups divided into categories, identical and similar statements are now bundled and summarised. Theoretical assumptions about objective conditions of the work activity can be used as an aid, provided the meaning of the statements remains the same.

Checking the categories and the scientific relevance of the compiled text: The text constructed and integrated in this way is compared with the categories formed at the beginning, and it is examined whether and, if necessary, how the categories need to be changed or further developed. The short text can now be examined with regard to its importance for vocational science.

In explicative content analysis, it is important, conversely to the summarising content analysis, to clarify the meaning of initially unclear text passages with the aid of memory protocols, other materials and theoretical assumptions. Qualitative content analyses in occupational science can be distinguished between

• Explications based on the statements of the skilled workers to determine the specific quality of the skills expressed in the work actions. The observed work actions themselves are used for the explication.
• An extended context analysis including all documents relevant to the work situation (company data, socio-cultural background, geographical data, data on technological innovations, etc.).
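The two reduction steps of the summarising content analysis can be pictured as a small text-processing pipeline. This is a minimal sketch under invented data (the category labels and interview fragments are hypothetical), not an implementation of Mayring’s procedure:

```python
def summarise(passages, categorise):
    """First reduction: group passages by category and drop redundant ones;
    second reduction: bundle the remaining statements per category."""
    groups = {}
    for passage in passages:
        category = categorise(passage)
        if category is None:
            continue  # passages outside the category system are omitted
        groups.setdefault(category, [])
        if passage not in groups[category]:  # omit redundant (identical) texts
            groups[category].append(passage)
    # Second reduction: one compressed entry per category
    return {cat: " / ".join(texts) for cat, texts in groups.items()}

# Invented interview fragments and a toy categorisation rule
fragments = ["checks the wiring", "checks the wiring", "talks to the customer"]
rule = lambda p: "diagnosis" if "checks" in p else "communication"
print(summarise(fragments, rule))
# {'diagnosis': 'checks the wiring', 'communication': 'talks to the customer'}
```

The real procedure works, of course, with interpretive judgements rather than string matching; the sketch only makes the two-stage reduction visible.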
Structuring content analyses aim at examining cross-sectional aspects and filtering out the corresponding texts from the statement material (core statements).
Structuring analyses usually require hypothesis-driven and therefore theoretically
justified work process analyses. This results in the categorial framework for the
structuring content analysis.
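A structuring analysis that filters core statements for one cross-sectional aspect can be sketched in the same spirit; the keyword-based category below is a deliberate simplification of a hypothesis-driven category system:

```python
def filter_core_statements(statements, keywords):
    """Structuring-analysis sketch: keep the statements matching a
    hypothesis-driven category, represented here by plain keywords."""
    return [s for s in statements if any(k in s.lower() for k in keywords)]

# Invented material; the cross-sectional aspect of interest is 'cooperation'
material = ["Plans the job alone.",
            "Coordinates the repair with the foreman.",
            "Documents the measurement."]
print(filter_core_statements(material, ["coordinates", "agrees with"]))
# ['Coordinates the repair with the foreman.']
```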
All forms of content analysis already lead to an interpretation of the statements
thus obtained, either during the analysis or after presentation of the evaluated
material, taking into account the corresponding professional and subject-theoretical
contexts. In the final step of the work process studies, this ultimately leads to the
review, reformulation, clarification and further differentiation of hypotheses, test and
development tasks for the design and evaluation of vocational training processes.
In the world of employment, skilled workers are confronted with more or less pronounced scope for creative solutions when solving professional tasks. When weighing up alternative solutions, a ‘good’ compromise must always be found between the criteria of functionality, environmental and social compatibility, efficiency and sustainability, as well as the design of work and business processes related to the specific situation. The ability to exploit the specific scope for solutions and design in everyday professional work is based on professional ‘shaping competence’ (cf. KMK, 1991). In 1996/1999, the KMK
formulated this guiding principle as an educational mandate for dual vocational
training and based the introduction of the learning field concept on it: ‘Vocational
schools and training companies fulfil a joint educational mandate in dual vocational
training. The vocational school is an independent place of learning... [It] aims at
basic and specialised vocational training and expands previously acquired general
education. In this way, it aims to enable people to fulfil their professional tasks and
to play a part in shaping the world of employment and society with social and
ecological responsibility’ (KMK 1999, 3; 8). The Alliance for Jobs, Vocational
Training and Competitiveness (1999, 54) assumes this educational mandate for the
‘structural further development of dual vocational training’.
The concepts for the design of work tasks and technology developed in technology-assessment practice and occupational science research fed into the establishment of design-oriented work and technology research in the mid-1980s, in which education and qualification were considered from the very beginning as factors inseparable from this research (Sachverständigenkommission Arbeit und Technik, 1986, 1988).
The Enquete Commission of the German Bundestag, ‘Future Education Policy—
Education 2000’ included the concept of design-oriented vocational training in the
documentation of its recommendations: ‘If the humanity of our future society
depends decisively on whether it is possible to stop divisions and fragmentation
[...] then education must first and foremost help to develop the will to design [...] and
must strive for designability [...]’ (Deutscher Bundestag, 1990).
Professional Gestaltungskompetenz refers to the contents and scope of design in
the solution of professional tasks. The central pedagogical idea of the ‘ability to help
shape the world of employment’ presupposes the organisation of learning in the
process of vocational work in such a way that the work contexts to be mastered—the
work tasks—challenge this creative competence. This was already reflected in 1991
in an agreement of the KMK on the vocational school (KMK, 1991).
Here, vocational education and training, with its educational mandate, goes far beyond ‘pure’ academic education. In a world that has become historical, especially the world of employment with its conditions, which basically arise from the objectification of purposes and the interests and needs contained therein, skilled workers are always challenged to weigh up various technical, ecological and social criteria when solving professional tasks. Every technology, for example, can therefore only be understood in the context of what is socially desirable and technically possible. Table 3.5 compares central categories of a vocational education and training oriented towards the prevailing technology with one aimed at the design of work and technology.
The wider concept of competence is used in the debate on vocational education and training
policy. The general term competence initially refers to abilities, knowledge, attitudes and
values, the acquisition, development and use of which relate to a person’s entire lifetime.
Competence development is seen from the perspective of the subject, his or her abilities and
interests as well as his or her social responsibility. (...) Competence development should
create professional competence and a skill that enables working actions to be carried out
with extensive co-determination and participation in work and undertakings. Reflective
professional competence means the conscious, critical and responsible assessment and
evaluation of actions on the basis of experience and knowledge (Nehls & Lakies, 2006, 52).
The KMK agreement (1999) on the development of vocational curricula, the contents of which are to be oriented towards ‘significant occupational work situations’ and company business processes, aims to replace the previous subject-systematic structuring of vocational training plans with learning fields: ‘Didactic reference points (for the design of vocational training processes) are situations that are important for vocational training’ (ibid., 10). What is remarkable about this
The reconstruction of work tasks that are important for professional competence development (KMK, 1996) is most successful on the basis of ‘expert specialist workshops’ (→ 5.1)3.
For the application of the methodological instruments of the expert specialist
workshops, this above all means that the respective work context in which the work
tasks are embedded must be consistently taken into account in the survey situation.
Both difficulties can be countered by professional scientific studies that address the
analysis of professional work processes and tasks in their situation (Lave & Wenger,
1991, 33; Becker, 2003; Kleiner, 2005).
The five levels of competence development identified by Hubert L. Dreyfus and Stuart E. Dreyfus, and the four learning areas arranged in correspondence with them in developmental terms (Fig. 3.7), have a hypothetical function for identifying thresholds and levels in the development of vocational competence and identity, as well as a didactic function in the development of work- and design-oriented vocational training courses.
Development tasks and their functional equivalents are also of central importance for competence development in expertise research. Patricia Benner, for example, highlights the paradigmatic importance of development tasks for the gradual development of professional competence using the example of nurses4. For Benner, these development tasks refer to ‘paradigmatic work situations’: cases that challenge the skills of the nursing staff5.
It took almost two decades in Germany before the impetus given by the attempt to
justify competence development in vocational education and training in terms of
development theory was translated into didactic concepts. Over the last fifteen years,
extensive projects have been carried out to this end, both in educational theory and
empirical research. For the profession of car mechatronics, for example, a
3 In the practice of domain-specific qualification research, the expert specialist workshops are supplemented by management workshops and evaluating expert surveys, above all to increase the prospective quality of the results.
4 Benner bases her domain-specific qualification research in the nursing field and its curriculum development on the novice-expert paradigm developed by Dreyfus and Dreyfus (Benner, 1997; Dreyfus & Dreyfus, 1987).
5 Theoretically and practically, there is a difference between Benner’s concept of ‘paradigmatic work situations’, which she identifies with methods of expertise research in reference to the novice-expert concept formulated by Dreyfus and Dreyfus, and Gruschka’s hypothesis-led studies of beginners (cf. Rauner & Bremer, 2004).
Fig. 3.7 Professional competence development ‘From beginner to expert’ (Rauner, 2002b, 325)
Fig. 3.8 Work process knowledge as the connection between practical and theoretical knowledge
as well as subjective and objective knowledge (Rauner, 2002b, 34)
that guides practical work; as context-related knowledge, it goes far beyond context-
free theoretical knowledge. The pilot projects ‘Decentralised Learning’ and ‘Learn-
ing at the Workplace’ (cf. Dehnbostel, 1994) already incorporated this development
by shifting training back into the work process. Since then, however, the vocational
educational discussion on ‘Learning at the Workplace’ has been characterised by the
fact that terms such as workplace, work process, professional action, professional
activity and work situation have not been used very clearly. The phrase ‘Learning at
the Workplace’ has now been largely displaced by the phrase ‘Learning in the Work
Process’. Despite all the vagueness of the terms that characterise the relevant
discussion, the shift to the concept of the work process takes into account the
structural change in the organisation of operational work and business processes:
The principle of function-oriented organisation is increasingly overlaid by that of orientation towards operational business processes. This has sharpened awareness of the process character of work and organisation, which can only be developed in the process of operational implementation and organisational development.
Following the discussion on work process knowledge initiated by Wilfried Kruse
(Kruse, 1986), this central category for vocational learning was identified and
developed in numerous research projects as a fundamental form of knowledge for
vocational learning (cf. Fischer, 2000a, 2000b).
In a first approximation, work process knowledge can be characterised as the
connection between practical and theoretical knowledge (Fig. 3.8). The development
of a scientific and pedagogical knowledge framework used to model vocational
competence suggests the introduction of distinctions which enable the differentiation
social purposes and the interests and needs incorporated therein. Erpenbeck uses this distinction between ‘pure’ knowledge and the knowledge representing the expediency of social facts for a four-field matrix (Erpenbeck, 2001, 113), with which he illustrates that explicit, pure knowledge, as it exists in the form of scientific fact and legal knowledge, contains only very little knowledge relevant for competence development.
The differentiation of the category of practical knowledge as a dimension of the
work process enables domain-specific knowledge research, which allows more
detailed information about work process knowledge and therefore also promises
results about the mediation of work process knowledge in or for professional work
processes. However, this only partly answers the overriding question of whether the
disintegration of validity resulting from the accelerating change in the working
world fundamentally devalues this knowledge as a point of reference for profes-
sional competence development. According to a popular thesis, technical compe-
tences are devalued by the disintegrating validity of professional knowledge. The
professional dimension is therefore virtually shifted to a meta-level at which it is
only important to have appropriate access to the expertise documented in convenient
media, knowledge stores and knowledge management systems. The situational
development of the ‘knowledge’ required for the specific work tasks—knowledge
management—is therefore essential6. Studies on the exponential increase in ‘objec-
tive knowledge’ seem to confirm this assumption.
Professional competence would then evaporate into a form of domain-specific methodological competence. However, this thesis was refuted in the extensive
studies on the change in skilled work and qualification requirements, especially in
the field of diagnostic work. On the contrary, relevant vocational-educational studies
have confirmed the thesis that professional work process knowledge, which provides
the basis for professional expertise, has tended to increase in importance7.
To the extent that domain-specific qualification research succeeds in putting empirical curriculum research back on solid ground, the diffuse formula of key qualifications loses its placeholder function. At the same time, expertise and qualification research supports the concept of vocational learning in the context of important work situations and thus the guiding principle of a curriculum structured according to learning fields. The orientation of vocational learning towards
6 The thesis of the de-specialisation of vocational education and training has been advanced since at least the flexibility debate of the 1970s. According to the central argument, vocational education and training that takes account of accelerated technological change must strive above all to promote and maintain the necessary basic scientific and social understanding, and impart activity-specific knowledge and skills only at a secondary level (Kern/Schumann, quoted by Grünewald, Degen, & Krick, 1979, 115). Wilfried Kruse comes to very similar conclusions in his assessment of the qualification research of the 1970s: ‘The expansion of qualification in the state school system and the extensive separation of vocational training from direct production are expressions of the increase in general, more theoretical elements in the change in the production of the working capacity of young workers’ (Kruse, quoted from Grünewald et al., 1979, 121).
7 Cf. Drescher (1996), Becker (2003), Rauner and Spöttl (2002).
With the theory of implicit knowledge (Tacit Knowledge), Polanyi drew attention to a dimension of knowledge to which Neuweg attributes paradigmatic significance for professional ability. Since then, the concept of Tacit Knowledge has been regarded as a key category for the development of the concept of professional competence. This special weighting of implicit knowledge as the basis for competent professional action can also be attributed to the fact that social science-based attempts to approach the specificity of professional knowledge and skills were bound to fail simply because theoretical and empirical access to the knowledge incorporated in practical professional work is largely blocked (cf. Bergmann, 1995; Garfinkel, 1986).
Once the concept of Tacit Knowledge had been formulated, it met with approval far beyond the discussion in knowledge psychology, especially in educational practice, as numerous examples illustrate. It relieved vocational training practice and, to a certain extent, vocational training research of the requirement to decipher and name the knowledge incorporated in practical vocational work. The retreat of surveyed experts to the position that ‘these are empirical values’ was and is often accepted as the final answer to the many unanswered questions about qualification requirements. Georg Hans Neuweg has presented a differentiated development of this concept of knowledge and examined its didactic implications for academic vocational education in German-speaking countries. With his comprehensive theory of implicit knowledge, Neuweg characterises the didactic concept of subject-systematic knowledge as a reference point for professional competence development as an ‘intellectualistic legend’. The widespread assumption in vocational education that subject-systematically structured knowledge represents a kind of shadow image of professional action, which, in procedural terms, leads to professional ability, is based on a fundamental category mistake (cf. Fischer, 2002; Neuweg, 2000).
Using his own experience in dealing with Ohm’s law as an example, Matthew
Crawford illustrates the difference between theoretical and practical knowledge and
the limited relevance of theoretical knowledge for action (Crawford, 2010, 215 f.).
Theo Wehner in particular pointed out the danger of mystifying professional
skills with the category of Tacit Knowledge. A large proportion of the implicit
knowledge could be explicated if qualification and knowledge research were to
improve its research methods. Similar to Garfinkel, Theo Wehner and Dick (2001)
see the challenge of qualification and knowledge research in identifying work
process knowledge and not hastily qualifying this knowledge as ‘tacit’.
Professional competence is therefore developed in a process of reflected practical experience (reflection-in-action). For Schoen, professional competence development is based on the expansion of a repertoire of unique cases. In this context, one can at best speak of case-based learning; competence development cannot, on the other hand, be founded subject-systematically.
Table 3.6 The six dimensions of practical knowledge (based on Benner, 1997; Rauner, 2004)

Sensitivity: With increasing work experience, the ability develops to perceive and evaluate ever subtler differences in typical work situations.

Contextuality: The increasing work experience of the members of professional practice groups leads to the development of comparable patterns of action and evaluation as well as to intuitive communication possibilities that go far beyond linguistic communication.

Situativity: Work situations can only be adequately understood subjectively if they are also understood in their genesis. Assumptions, attitudes and expectations guided by experience lead to comprehensive awareness and situational action and constitute an extraordinarily fine differentiation of the action plans.

Paradigmaticity: Professional work tasks have a paradigmatic quality in the sense of ‘development tasks’ if they raise new content-related problems in the development process, which force us to question and re-establish existing action concepts and well-coordinated behaviours.

Communicativity: The subjective significance of the communicated facts is largely congruent within a community of practice. The degree of professional understanding is far higher than that of external communication; the context-related language and communication can only be fully understood by members of the practice community.

Perspectivity: The management of unforeseeable work tasks on the basis of fundamentally incomplete knowledge (knowledge gap) is characteristic of practical work process knowledge. This gives rise to a meta-competence that enables us to deal with non-deterministic work situations.
reality both creates and allows to understand cannot be presented without corre-
spondingly complex competence8.
The proximity to the theory of multiple intelligences founded by Gardner is obvious. Both the debate on knowledge and competence and the departure from the concept of universal intelligence refer to the diversity of human abilities. In the
preface of his work ‘Frames of Mind: The Theory of Multiple Intelligences’,
Gardner formulates his central thesis: ‘If we want to grasp the entire complex of
human cognitions, I think we have to consider a much larger and more comprehen-
sive arsenal of competencies than we are used to. And we must not deny the
possibility that many and even most of these competences cannot be measured
with those standard verbal methods that are predominantly tailored to a mixture
of logical and linguistic skills’ (Gardner, 1991, 9).
Almost a decade before Gardner, Donald Schoen’s analysis of the problem-solving behaviour of different professions provided comparable insights into professional skills and cognitive requirements. Gardner’s analyses are concerned with the psychological (cognitive) performance requirements for competent action (Professional Knowledge Systems). Schoen’s merit is to prove,
corresponding to the category of practical intelligence, the fundamental importance
of practical competence and professional artistry as an independent competence not
guided by theoretical (declarative) knowledge. At the same time, this leads him to a
critical evaluation of academic (disciplinary) knowledge as a cognitive prerequisite
for competent action. Schoen summarises his findings on practical competence in the
following insight:
I have become convinced that universities are not devoted to the production and distribution
of fundamental knowledge in general. They are institutions committed, for the most part, to
a particular epistemology, a view of knowledge that fosters selective inattention to practical
competence and professional artistry (Schoen, 1983, VII).
8 With the ethnomethodological research concept of ‘Studies of Work’, Harold Garfinkel established a research strand that can be made fruitful in many ways in vocational education and training research. The theories of ‘Tacit Knowledge’ and ‘Studies of Work’ assume a multiple concept of competence without already unfolding it in its dimensions.
According to Klieme and Hartig (2007, 17), the reference to ‘real life’ is regarded as
a key feature of the concept of competence. In this context, Andreas Gruschka
considers a concept of competence necessary that is not limited to individual actions:
‘Competences are not bound to a specific task content and a correspondingly
narrowly managed application, but allow for a variety of decisions. They certainly
have this in common with education, since in the acceptance and solution of such
open situations and tasks it is preferably updated as a progressive movement of the
subject’ (Gruschka, 2005, 16).
In this sense, Connell, Sheridan and Gardner (2003) make a fundamental contribution to the categorical differentiation between abilities, competences and expertise, an important step towards establishing a theory of multiple competence.
The concept of multiple competence, based on Howard Gardner’s concept of
multiple intelligence, takes account of the state of competence and knowledge
research, according to which several relatively autonomous competences can be
distinguished in humans, and which can vary greatly among individuals—depending
on their professional socialisation and qualification.
The concept of multiple competence can be based on the results of expertise
research and vocational qualification research, which have shown that vocational
competences are domain-specific and, above all, that vocational-specific practical
knowledge has its own quality (Haasler, 2004; Rauner, 2004). According to this,
practical knowledge does not arise from theoretical knowledge as it exists in the objectified form of subject-systematic knowledge in the system of sciences. It has its own quality, which is based on the mode of its origin.
In this context, Gardner points out that theories and concepts with which cross-
vocational (key) competences are assumed cannot be supported on the basis of his
theory. He exemplifies this with the term ‘critical thinking’: ‘I doubt whether this
critical thinking should be seen as a process of thinking in its own right. As I have
Fig. 3.10 The criteria of the complete (holistic) solution of professional tasks (COMET Vol III, 22)
The criteria for the complete (holistic) solution of professional tasks represent the
partial competencies of professional competence. Their expression according to the
levels of work process knowledge can be included in the modelling of the require-
ment dimensions (Fig. 3.10).
Eight overriding requirements are placed on the processing or solution of professional work tasks, which can be differentiated according to the three levels of work process knowledge. In each specific case, the professionals must determine whether all or only a subset of these requirements are relevant to the task at hand.
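How ratings on the eight criteria might enter a competence profile can be illustrated numerically. The actual COMET scoring is defined by the rating scale in Appendix B; the point scale, the rater scores and the simple averaging used here are invented for illustration only:

```python
# The eight COMET criteria K1-K8 as listed in this section
CRITERIA = ["K1 clarity/presentation", "K2 functionality", "K3 sustainability",
            "K4 efficiency/effectiveness", "K5 business process orientation",
            "K6 social compatibility", "K7 environmental compatibility",
            "K8 creativity"]

def competence_profile(ratings):
    """Average several raters' scores per criterion.

    ratings -- one tuple of rater scores per criterion, in K1..K8 order
               (a hypothetical 0-3 scale; not the official rating scale)
    """
    return {c: sum(scores) / len(scores) for c, scores in zip(CRITERIA, ratings)}

# Invented scores: two raters, eight criteria
ratings = [(3, 2), (2, 2), (1, 2), (3, 3), (2, 1), (2, 2), (1, 1), (0, 1)]
profile = competence_profile(ratings)
print(profile["K2 functionality"])  # 2.0
```

A profile of this kind makes visible which partial competencies of the holistic task solution are developed and which are not, which is the point of differentiating the eight criteria in the first place.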
These criteria can be described in more detail as follows (COMET Volume III,
pp. 56)9:
Clarity/Presentation (K1)
The result of professional tasks is anticipated in the planning and preparation process and documented and presented in such a way that the client (superior, customer) can discuss and evaluate the proposed solutions. In this respect, the illustration and presentation of a task solution is a basic form of vocational work and vocational learning. The ability to communicate through clearly structured descriptions, drawings and sketches represents a central facet of professional communication. The
9 For the operationalisation of the criteria in the form of a rating scale, see Appendix B.
Functionality (K2)
Sustainability (K3)
Finally, professional actions, procedures, work processes and work orders always refer to a customer whose interest lies in the sustainability of the work result. In production and service processes with a high degree of division of labour, and in vocational training reduced to the action aspect, the sustainability and utility-value aspects of executing subtasks often evaporate. In addition to direct use by the user, avoiding susceptibility to faults and considering ease of maintenance and repair are important in industrial and technical occupations for the sustainable solution of professional tasks. To what extent a problem solution will remain in use in the long term and which expansion options it will offer in future are also central evaluation aspects for the criterion of sustainability and practical-value orientation.
Efficiency/Effectiveness (K4)
Orientation to Business and Work Processes (K5)
This criterion comprises solution aspects that refer to the upstream and downstream work areas in the company hierarchy (the hierarchical aspect of the business process) and to the work areas in the process chain (the horizontal aspect of the business process).
Especially under the conditions of working with and on program-controlled work
systems in networked operational and inter-company organised work processes, this
aspect is of particular importance. The conscious and reflected perception and
execution of professional work tasks as part—and embedded in—operational busi-
ness processes are based on and promote contextual knowledge and understanding
as well as the awareness of quality and responsibility based on it.
Social Compatibility (K6)
This criterion concerns above all the aspect of humane work design and organisation, health protection and, if necessary, also the social aspects of professional work that go beyond professional work contexts (e.g. the frequently differing interests of clients, customers and society). Aspects of occupational safety and accident prevention are also taken into account, as well as possible consequences that a solution of professional tasks has for the social environment.
Environmental Compatibility (K7)
Environmental compatibility has become a relevant criterion for almost all work
processes. At issue is more than general environmental awareness, namely the
professional and technical requirements for work processes and their results that can
be assigned to this criterion. The extent to which environmentally compatible
materials are used in a solution must be taken into account, as must the
environmentally compatible design of the work in coping with the task at hand.
Furthermore, energy-saving strategies and aspects of recycling and reuse must be
considered when assessing the environmental compatibility of a solution.
Creativity (K8)
The creativity of a solution variant is an indicator that plays a major role in solving
professional tasks. This results from the scope for solutions, which varies greatly
depending on the situation. The ‘creative solution’ criterion must be interpreted and
operationalised in a specific way for each profession. In the design trades, creativity
is a central aspect of professional competence; in other professions, the criterion is
relatively independent as a concept of professional work and learning. The creativity
of a solution variant also shows sensitivity to the problem situation: in their
professional work, competent experts look for creative and unusual solutions that
also serve the achievement of goals.
Professional competence implicitly assumes that the professionally competent person
is not only able to carry out professional actions completely, but is also able to
classify and evaluate those actions in their professional and social significance;
hence the relevance of the corresponding criteria.
For example, the legal regulation that came into force in 2009 prohibiting the use
of incandescent lamps—for reasons of efficient use of electrical energy—has a direct
impact on the design and operation of electrical lighting systems. In the
implementation of heating systems, for example, the objective conditions include not only a
wide variety of heating technologies, but also the equally diverse controls for their
efficient use and design of heating systems in the specific application situations in
accordance with environmental, safety and health requirements. The objective
circumstances, together with the customers’ subjective requirements for practical
value, sustainability and aesthetic quality as well as the subjective interests of the
employees in a humane and socially acceptable work design and organisation, form
the solution space in which the specific solutions of professional work tasks can be
located. On the basis of the eight criteria shown, the dimension of requirements can
be determined in terms of content in the sense of a holistic action and design concept.
Completeness is required in the sense that the solution of professional tasks, in all
sectors of work, must not overlook any of these solution aspects. For example, if the
technological level of the solution is overrated in a work order while financial
feasibility or user-friendliness is underestimated or forgotten, this can mean the
loss of a work order. If safety and environmental aspects are overlooked in order
processing and work design, this may even have legal consequences.
If one relates the steps of the complete work action to the criteria of the holistic
solution of vocational tasks, the concept of the complete work action follows from
the basic concept of the complete (holistic) task solution, both for the organisation
of vocational education processes and for the modelling of vocational competence.
The objective of domain-specific qualification research is to determine which
qualification requirements and which content-related characteristics are included
with which weight in the processing and solution of professional tasks and how
the respective requirement profile can be described as a domain-specific qualification
and competence profile.
This can also form the basis for describing the scope for solving and shaping
professional tasks.
Chapter 4
The COMET Competence Model

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
F. Rauner, Measuring and Developing Professional Competences in COMET, Technical
and Vocational Education and Training: Issues, Concerns and Prospects 33,
https://doi.org/10.1007/978-981-16-0957-2_4
62 4 The COMET Competence Model
1 The COMET project follows the path proposed by Weinert (2001) to capture motivation separately in the form of professional commitment (→ 4.7).
4.2 The Levels of Professional Competence (Requirement Dimension) 63
2 Cf. Rauner (2006), Rauner, Grollmann, and Martens (2007).
Fig. 4.1 The COMET competence model of vocational education and training
How, for example, does a more or less highly competent skilled worker solve a
professional task? Of interest here are the qualitative and quantitative competence
differences between the competence levels, as well as the competence profiles of the
test groups that result from recording the eight competence components (→ 8.3).
The evaluation of the test results allows a criterion-oriented interpretation of the
quantitative test results (performance values).
The eight criteria (partial competences) of the competence level model with its
four competence levels serve as an interpretation framework (see Fig. 3.10). The
criterion-oriented interpretation of quantitative values includes a pragmatic
justification of rules, since quantitative limit values must be defined for the
transitions between two competence levels, together with rules according to which a
test participant is assigned to a competence level (→ 8.1, 8.2). This distinguishes
the COMET diagnostic procedure from standards-oriented test procedures, which
justify the gradations between competence levels by the complexity or degree of
difficulty of the test tasks.
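The assignment logic described above, quantitative limit values for the transitions between competence levels plus a rule for assigning each test participant to a level, can be sketched in a few lines. The cut-off scores and the numeric scale below are illustrative assumptions, not the COMET limit values:

```python
# Hypothetical sketch: assigning a test participant to a competence level
# from a total performance score via pragmatic limit values.
# The cut-off scores below are illustrative, not COMET's actual values.

LEVELS = [
    (0, "nominal competence (risk group)"),
    (20, "functional competence"),
    (40, "procedural competence"),
    (60, "holistic shaping competence"),
]

def assign_level(score: float) -> str:
    """Return the highest level whose limit value the score reaches."""
    level_name = LEVELS[0][1]
    for limit, name in LEVELS:
        if score >= limit:
            level_name = name
    return level_name

print(assign_level(15))  # below the first limit -> risk group
print(assign_level(47))  # -> procedural competence
```

Defining the limits pragmatically, as the text notes, means such thresholds are justified by convention among the test developers rather than derived from the difficulty of the test tasks.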
A level model implies that the competence levels represent increasingly high-quality
competences. In the COMET concept, the first competence level is the lowest and the
third is the highest level to be achieved. The competence level that a trainee
achieves, or can achieve, applies irrespective of the stage of his or her training.
This established competence model facilitates the qualitative and quantitative
determination, on the basis of open test tasks, of the competence level a test
participant has reached.
Table 4.1 Competence levels in scientific and industrial-technical vocational education and training

Nominal
• Bybee (1997), I nominal literacy: Some technical terms are known. However, the understanding of a situation is essentially limited to the level of naive theories. Slim and superficial knowledge.
• COMET 2008, I nominal competence/literacy: Superficial conceptual knowledge that does not guide action; the scope of the professional terms remains at the level of their colloquial meaning.
• PISA, basic scientific literacy, I nominal competence: Simple factual knowledge and the ability to draw conclusions do not go beyond everyday knowledge.

Functional
• Bybee (1997), II functional literacy: In a narrow range of situations and activities, scientific vocabulary is used appropriately. The terms are not very well understood and connections remain incomprehensible.
• COMET 2008, II functional competence/literacy: Elementary specialist knowledge is the basis for technical-instrumental skills. ‘Professionalism’ is expressed as context-free expert knowledge and corresponding skills (know that).
• PISA, II functional competence I: Everyday scientific knowledge justifies the ability to assess simple contexts on the basis of facts and simple rules. III functional competence II (scientific knowledge): Scientific concepts can be used to make predictions or give explanations.

Conceptual-procedural
• Bybee (1997), III conceptual and procedural literacy: Concepts, principles and their connections are understood, as well as basic scientific ways of thinking and working.
• COMET 2008, III procedural competence/literacy: Professional tasks are interpreted and processed in relation to company work processes and situations. Work process knowledge establishes the professional ability to act (know-how).
• PISA, IV conceptual-procedural competence I: Elaborated scientific concepts can be used to make predictions and give explanations.

Multidimensional, holistic
• Bybee (1997), IV multidimensional literacy: At this level, an understanding of the nature of science, its history and its role in culture and society is achieved.
• COMET 2008, IV holistic shaping competence/literacy: Professional work tasks are completed in their respective complexity and, taking into account the diverging requirements, solved in the form of wise compromises.
• PISA, V conceptual-procedural competence (models): Analysing scientific studies with regard to design and the tested assumptions, simply developing or applying conceptual models.
3 In contrast to the didactics of general education, ‘literacy’ has not yet found its way into vocational education.
Fig. 4.2 Professional competence: levels, partial competences (criteria) and dimensions
The solutions to tasks assigned to this level of competence show that the
competences which, from a professional and company perspective, must be considered
as a matter of priority are present.
The third level of competence, that of holistic shaping competence, is defined by
skills that go beyond the perspective of company work and business processes and
refer to solution aspects that are also of social relevance. This results in a
hierarchisation of the competence components or solution aspects: the professional
scope of competence of the test persons expands in accordance with their
problem-solving horizon. Operational and company-related solution competences build
on a purely functional competence.
Nominal competence is not part of vocational competence if, as here, the devel-
opment of vocational competence is introduced into modelling as a characteristic
criterion for the success of vocational education and training. Trainees who only
reach the level of nominal competence are assigned to the risk group. If one
considers the definition of the first level of competence (functional competence), it
is highly likely that trainees who do not achieve this level will fail to reach the
training objective: the ability, after completing their training, to independently
carry out specialist professional tasks in accordance with the rules typical of the
profession. They are competent only at the level of unskilled and semi-skilled
workers. This does not preclude them from developing into skilled workers in
professional practice on the basis of reflected work experience.
Table 4.2 Two examples: rating scales for the sub-competences ‘functionality’ and ‘environmental compatibility’ (Appendix B)

Each item is rated on a four-point scale: the requirement is not fulfilled at all / rather not fulfilled / rather fulfilled / completely fulfilled.

Functionality/professionalism
• Is the solution working?
• Is the ‘state of the art’ taken into account?
• Is practical feasibility taken into account?
• Are the professional connections adequately represented and justified?
• Are the illustrations and explanations correct?

Environmental compatibility
• Are the relevant provisions of environmental protection taken into account and justified?
• Does the solution use materials that meet the criteria of environmental compatibility?
• To what extent does the solution take into account an environmentally sound work design?
• Does the proposed solution take into account and justify the aspects of recycling, reuse and sustainability?
• Are energy-saving aspects taken into account?
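A four-point verbal scale of this kind is typically mapped to numeric item scores and aggregated per sub-competence. The 0 to 3 mapping and the normalisation below are assumptions for illustration; the actual COMET scoring rules are fixed in the rating manual (Appendix B):

```python
# Illustrative sketch: aggregating four-point item ratings into a
# sub-competence score. The 0-3 mapping of the verbal categories is an
# assumption for demonstration, not COMET's official scoring rule.

SCALE = {
    "not fulfilled at all": 0,
    "rather not fulfilled": 1,
    "rather fulfilled": 2,
    "completely fulfilled": 3,
}

def subcompetence_score(ratings: list) -> float:
    """Mean item score for one sub-competence, normalised to 0..1."""
    points = [SCALE[r] for r in ratings]
    return sum(points) / (3 * len(points))

functionality = [
    "completely fulfilled",   # Is the solution working?
    "rather fulfilled",       # Is the 'state of the art' taken into account?
    "rather fulfilled",       # Is practical feasibility taken into account?
    "rather not fulfilled",   # Are professional connections justified?
    "completely fulfilled",   # Are illustrations and explanations correct?
]

print(round(subcompetence_score(functionality), 2))  # → 0.73
```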
4.3 Structure of the Content Dimension

The content dimension of a VET competence model refers to the vocational fields of
action and learning as a basis for the construction of test tasks. In international
comparative competence diagnostics projects, it is important to identify content
that is considered characteristic of a subject or learning area in line with a
‘world curriculum’ (PISA). This necessarily abstracts from the specific national or
local curricula. Deriving the test content from vocational training plans is
therefore ruled out for vocational education and training, for several reasons.
1. One of the reasons for comparative large-scale competence diagnostics in VET is
that the test results can also be used to compare the weaknesses and strengths of
established VET programmes and systems with their specific curricula
(Hauschildt, Brown, Heinemann, & Wedekind, 2015, 363 f.). For the COMET
project, professional validity was therefore established as the criterion for
determining the contents of test tasks. The test tasks for the respective
professional fields must prove to be valid: the professional groups, for example,
manage to agree with surprising ease on job descriptions (job profiles) for the
respective professions and, above all, on the project tasks for ‘vocational
competitions’. For the representatives of the respective ‘community of practice’,
it is almost self-evident what true mastery in their profession looks like.
2. Vocational curricula are geared to specific forms and systems of vocational
education and training. A comparative competence survey cannot therefore be
geared to a specific form of training—e.g. dual vocational training. The voca-
tional curricula in countries with developed dual vocational training, such as
Switzerland, Denmark and Norway, would already be too different. Above all,
the relationship between the definition of higher-level (national) standards and
their local implementation in the form of concrete education plans is regulated very
differently. In both Switzerland and Denmark, responsibility for the implementation of
lean national vocational regulations in concrete vocational training plans lies with
the actors ‘on site’. The structure of vocational training courses in terms of
content and time is based on very different systemisation concepts. In addition
to the systematic structuring of vocational training courses, the timing of the
training content is largely pragmatic. Scientifically based vocational training
concepts are the exception. In Germany, for example, the introduction of the
learning field concept was a move away from framework curricula with a
systematic structure. However, an alternative systematisation structure for the
arrangement of learning fields or training content was not explicitly specified. The
reference to the ‘factually logical’ structure of the learning fields leaves open what
distinguishes them from a subject-systematic content structure.
For vocational education and training, the establishment of a validity criterion for
the content of vocational education and training or the corresponding test tasks is
therefore of particular importance, as training is very different for the same field of
employment. Scholastic, vocational scholastic, in-company and dual forms of train-
ing compete with each other – nationally and internationally. It is indisputable that
vocational training aims at employability. This includes the qualifications that enable
students to pursue a profession. The terms ‘qualification’ and ‘competence’ are often
used synonymously in colloquial language and in vocational education and training
policy discussions. It was explained why it is necessary to distinguish between
the two categories in the scientific justification of testing and diagnostic procedures
(→ 2.2). The degree to which different forms of training are capable of teaching
Fig. 4.3 Assignment of test tasks to the VET learning areas as a basis for cross-over design
(cf. COMET Vol. II, 27)
considers the design of work tasks from the aspect of personal development. The
programmatic significance that the concept of complete action (task design) has
acquired in vocational education has one of its roots here. Another is its
operationalisation in the form of the differentiation of the complete work and
learning action into successive action steps. For the didactic actions of teachers and
trainers, this scheme offers a certain degree of certainty. In the meantime, this action
structure model has also been used internationally in connection with the introduc-
tion of the learning field concept in the development of vocational curricula.
The inclusion of the action dimension in the COMET competence model and its
differentiation in accordance with six action steps was undertaken with the intention
of establishing the concept of complete task and problem solution. This is formed by
the criteria of the requirement and action dimension. This further differentiates the
competence model as a basis for the development of test and learning tasks and the
evaluation of task solutions.
The description of the action dimension must be restricted, as the steps of the
complete working action lead to the implementation of a structure of rational
didactic action, which does justice above all to the action situations of beginners
and less to those at advanced and expert levels (cf. above all Dreyfus & Dreyfus,
1987). In this context, a distinction is made in the vocational educational discussion
between the rational and the creative-dialogical type of action (Brater, 1984). Both
types of action are fundamentally significant in all occupations, each with a different
weight. Professional tasks with a clearly defined goal, e.g. in the form of a specifi-
cation for the solution of a technical task, are characterised by the fact that the
precisely specified goal suggests a well-structured procedure. The purpose deter-
mines the procedure for solving the task. The concept of complete working action
has a clear affinity to this type of rational action. This type of action is particularly
pronounced in specified work projects and processes in which the scope for action
and design is limited. If there is room for manoeuvre in the phase of order formu-
lation, then this is already restricted or eliminated in the work preparation processes
by precisely specified work steps.
An open objective and a course of action that can only be planned to a limited
extent are characteristic of the creative-dialogical type of action. The sequence of
the action steps emerges only in the work process itself. For example, educational
processes are largely open. Teachers and educators absorb the impulses, suggestions,
questions and answers of the children/students. As subjects of the learning process,
the learners participate in determining the course of the educational process. To a
certain extent, a teacher anticipates the possible reactions of his students when
planning the lessons – he mentally acts out the lesson with its different possible
situations. However, the actual course of lessons can only be anticipated to a very
limited extent. Actions in diagnostic work processes are very similar, for example in
In order to take all occupational fields into account, the definitions of competence
levels must be sufficiently general or supplemented by differentiating references to
the different employment sectors. Thus, for example, in a cross-professional descrip-
tion of procedural competence, terms referring to ‘company work’ are avoided,
since, for example, activities in educational institutions, in the health sector or in
administration are rarely associated with the category of ‘company’. Comparable
editorial corrections are proposed for the description of the competence level
‘holistic shaping competence’ (→ 4.2).

4.5 A Cross-Professional Structure of Vocational Competence

Fig. 4.4 Adjustment effort for the implementation of the competence model in different occupational fields (examined on the basis of 48 expert assessments)
At the level of the eight competence components assigned to the competence
levels, the challenge is to define these components in such a way that they trigger a
sufficiently concrete idea among users in the different occupational fields of the
competences to be imparted. An analysis of the content of the explanatory descrip-
tions as part of an empirical review of the criteria for occupations in the education
and health sector and commercial occupations will then reveal the need for
adaptation.
If one differentiates the adjustment effort according to the sectors of
industrial-technical, commercial-service and personal service occupations, the
adjustment effort increases steadily in this order. If the professional effort for
adapting the criteria and items of the competence and measurement model is plotted
on the vertical axis of a two-dimensional diagram, then 100% corresponds to a
completely new version of the criteria and items. On the horizontal axis, the
assumed distance in content between an occupational field’s training content and
objectives and the electrical professions involved in the COMET project is plotted.
The greatest assumed distance in content is to the professions in the education and
health sector. Content specialists estimate the effort required to adapt the
formulation of competence criteria and evaluation items at a maximum of 20% for
personal service occupations (Fig. 4.4).
The ‘subject’ of training is a technical one for industrial-technical occupations
and an economic one for commercial occupations. For pedagogical professions, on
the other hand, it is about the development of personality. This mainly explains the
Table 4.3 Adaptation of the evaluation criterion ‘Orientation towards utility value/sustainability’ to different occupational fields (deviating criteria are highlighted in grey in the original) (Appendix B)

Industrial-technical professions
• Is the solution highly practical for the customer?
• How user-friendly is the solution for the immediate user/operator?
• Is the aspect of avoiding susceptibility to malfunctions/unpredictability taken into account and justified in the solution?
• Are aspects of long-term usability and expansion possibilities considered and justified in the solution (for example, creating a reusable template)?
• Is the proposed solution easy to maintain and repair?

Commercial professions
• Is the solution highly practical for the customer?
• How user-friendly is the solution for the immediate user/operator?
• Is the aspect of avoiding susceptibility to malfunctions/unpredictability taken into account and justified in the solution?
• Are aspects of long-term usability and expansion possibilities considered and justified in the solution (for example, creating a reusable template)?
• Is the solution adaptable/flexible (e.g. quick reactions to disturbance factors)?

Personal service professions
• What are the subjective benefits of the solution for patients, qualified medical employees and doctors?
• What are the objective benefits of the solution for patients, qualified medical employees and doctors?
• Is the aspect of avoiding susceptibility to malfunctions/unpredictability taken into account and justified in the solution?
• Are aspects of long-term usability and expansion possibilities considered and justified in the solution (for example, creating a reusable template)?
• Is the task solution aimed at long-term success (avoiding the revolving-door effect)?
differences in the description of the competence criteria and items with which the
competence levels are defined (Table 4.3).4

4 For processing the criteria and items assigned to the competence levels when assessing solutions to tasks in personal services, see the appendix.
4.6 Extending the Competence Model: Implementing Planned Content 77
The expert discussion, which takes place after the rating of the project
documentation, can draw on the rating results and clarify whether the examinee knows
more than he has described and justified in his documentation. The result of the
expert discussion then either confirms the rating result (the candidate has
documented and justified his project in accordance with his competence) or leads to
corrections for individual rating criteria.
For the application of the COMET rating procedure, a double rating is recommended,
as is customary in examination practice: either two examiners independently evaluate
the project result and then agree on a joint rating (for all items), or a team
rating is carried out from the start. In both cases, this contributes to a higher
degree of consistency in the assessment of examination results. Changing the
composition of the examiner/rating teams is one form of implicit rating training.
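The double-rating procedure can be sketched as follows. The one-point tolerance, the item names and the averaging rule are illustrative assumptions; in practice the two raters agree on the joint rating in discussion rather than by mechanical averaging:

```python
# Illustrative sketch of a double rating: two examiners rate the same
# items independently; items on which they diverge by more than one
# scale point are flagged for discussion before a joint rating is
# agreed. Tolerance, averaging rule and data are assumptions.

def joint_rating(rater_a: dict, rater_b: dict, max_gap: int = 1):
    """Average agreed items; return items needing discussion."""
    agreed, discuss = {}, []
    for item in rater_a:
        a, b = rater_a[item], rater_b[item]
        if abs(a - b) <= max_gap:
            agreed[item] = (a + b) / 2
        else:
            discuss.append(item)
    return agreed, discuss

a = {"solution works": 3, "state of the art": 2, "feasibility": 0}
b = {"solution works": 3, "state of the art": 1, "feasibility": 2}

agreed, discuss = joint_rating(a, b)
print(agreed)   # items within one point, averaged
print(discuss)  # → ['feasibility']
```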
From the perspective of the development of professional competence, the
novice-expert paradigm describes how beginners become experts. Herwig Blankertz and
Andreas Gruschka deserve credit for having introduced an extended understanding of
development in their work on the developmental-logical structuring of professional
curricula. Vocational training is always about a coherent process of competence and
identity development. Herwig Blankertz explained that, without the development of
professional identity, no competence development would be conceivable (Blankertz,
1983, 139).
In this context, Walter Heinz points to another aspect of professional identity
development, that of shaping one’s own biography: ‘In the industrialised service
society, the gravitational point of professional socialisation processes shifts (...)
from socialisation (in line with learning conventional social roles) to
individualisation. For professional socialisation, this means that the internalisation
of labour standards is gradually giving way to the formulation of subjective
identity as a passive or active role identity. For example, early participation in the
processes of company organisational development with an emphatically business
process-oriented training concept will promote the development of an active role
identity.
• The interest in the content of professional tasks is a fundamental determinant for
the development of professional identity. If this interest is very pronounced, then
the other determinants of professional identity development lose importance.
In his article ‘The Cultural Embedding of the European Market’, Carlo Jäger (1989)
explains the need to distinguish between work morale and professional ethics, as the
two categories refer to different normative fields. In modern culture, work has lost
the odium of a curse. With the emergence of wage labour, a normative field emerged
on a global scale that has been experienced and accepted as one of the driving
forces behind the success story of industrial society. Since then, the central value
of work has been supported in industrial culture by a wreath of different work
virtues, which were later (in the twentieth century) critically described as
secondary virtues (diligence, discipline, punctuality, etc.).
Industrialisation was accompanied by a large exodus of workers from agriculture.
Migration movements and flows reinforced the emergence of a labour market for
everyone’s work (mass work). The development and rapid expansion of mass
production required mass training of the workforce.
Kliebard suggests that it was not only the consistently hierarchical and vertical
division of labour but also performance-related wages that became characteristic of
mass industrial work. Job satisfaction was to be ensured by rising wages, while the
basic source of motivation was a performance-related work ethic.
Jäger, Bieri and Dürrenberger (1987, 75) understand working morale as ‘a
constitution of conscience that demands that the work—no matter whether laborious
or misunderstood in essence—be carried out in accordance with the contract,
obediently, promptly, precisely, punctually, etc.’. This confirms that scientific man-
agement, as formulated by Taylor, had also found its way into European industry.
For example, a manual from the Central Association of the German Electrical
Industry (ZVEI) explains the industrial electrical occupations as reorganised in
1972: ‘The task of the communication device mechanic is to assemble modules and
components, to assemble simple device parts and devices, and to wire and connect
these according to samples and detailed instructions. He carries out simple tests of
electrical components, assemblies and device parts with the corresponding
measurements according to precise testing and measuring instructions. His area of
responsibility also includes simple maintenance and repair tasks’ (ZVEI, 1973, 13).
Until the 1970s, vocational training planning in Germany was clearly influenced by
Taylorism and the normative field of work ethics.
With reference to a series of industrial sociological studies, Carlo Jäger shows how
work ethics deteriorated in the second half of the twentieth century. He explains this
process with the wage explosion in combination with the fact that unskilled migrant
workers are no longer available in unlimited numbers, from which he derives the
thesis that a European labour market oriented solely to the normative field of work
morale would inevitably result in mass unemployment and sluggish productivity
development (Jäger, 1989,
566). Based on his theoretical and empirical studies, he concludes: ‘Regardless of
work ethics, there seems to be a normative field that emphasises the qualities of
cooperation and communication rather than the character of deprivative duty in
professional life. We call this normative field ‘professional ethics’ (ibid., 567).
In summary, Carlo Jäger comes to an interesting result for vocational education
and vocational training research, which challenges them in their creative tasks:
‘European culture, understood as a comprehensive normative field, developed a
new form of social differentiation and personal identity formation with professional
ethics at the end of the Middle Ages. The social system of the European labour
market, which has been crystallising for several decades, has so far hardly taken this
into account and instead referred to normative fields with their work ethics, which
have become significantly less important in the same period’ (ibid., 570).
The erosion of work ethics is directly associated with the rise and fall of commitment
to organisations, as investigated by commitment research. Since the 1950s, various
forms of commitment have been empirically researched in management and
behavioural research (especially in the USA). Despite all the differences in the
theoretical positioning of the research approaches in different disciplines, there
is one striking commonality.
The categorical distinction between work ethics and professional ethics corre-
sponds to the distinction in commitment research between organisational and occu-
pational commitment (Baruch, 1998; Cohen, 2007). This differentiation can be
interpreted as one between organisational and professional commitment. In
commitment research, meta-studies have shown that organisational commitment has
been declining steadily since the 1970s. As it is based on the employees’ emotional
attachment to the company, this means that these ties are gradually becoming weaker.
The volatilisation of stable relations between companies and employees
confronts commitment research with the erosion of its basic category and opens up
4.7 Identity and Commitment: A Dimension of Professional Competence Development 83
3. To what extent do professional identity, emotional loyalty to the company and the
willingness not to (obediently) question predefined work tasks contribute to
professional motivation?
Comprehensive approaches from commitment research are available for the empirical recording of organisational and occupational commitment, which can be described as a—mainly affectively conceptualised—bond from which engagement in the work activity is expected. There are further attempts to empirically conceptualise other forms of employee bonding. However, approaches such as the Job Involvement Scale (Kanungo, 1982) mix precisely those reference fields of commitment that are to be kept as distinct as possible here.
Preliminary work in organisational psychology was used to determine
organisational commitment. Among the existing scales for measuring organisational
commitment, the generally accepted scale of Meyer and Allen (1991) was used,
among others.
It is based on identification with the company and the underlying emotional attach-
ment to the company: ‘I am committed to the company’.
Following Jäger, it makes sense to conceive of the reference field of work ethics as an extrinsic work motivation that accepts external guidelines without question. A scale designed in this way should be limited to abstract working virtues. This is shown by the factor analyses carried out so far.
The term ‘work ethics’ is therefore used to describe a willingness to perform
based on a more or less ‘blind’ execution of instructions. Following Carlo Jäger, it
is an identification with the work ‘in itself’, without consideration of concrete
contents.
The scales used to record occupational identity, occupational commitment, organisational commitment and work ethics have been
86 4 The COMET Competence Model
In two extensive studies (A: n = 1121; B: n = 3030), the model extended by the
component ‘organisational commitment’ was evaluated using both a confirmatory
and an explorative factor analysis. For the psychometric evaluation of the identity
engagement model, this means defining the possible fields of I-E research as
precisely as possible so that this can be taken into account in the development and
evaluation of the scales.
How are occupational identity and commitment as well as organisational com-
mitment and work ethics connected?
There are many interactions between occupational and organisational identity as
well as occupational commitment, organisational commitment and work ethics. In
Fig. 4.5 Extended theoretical model on the relationship between commitment, identity and work
ethics
Researchers acquire the most precise insights and knowledge possible about the
objective prerequisites and conditions constituting the field of activity to be analysed
on the basis of the state of the art in occupational scientific research, relevant
specialist publications on operational and technological innovations and other
sources. In industrial-technical specialist work, this includes the technical work
process-relevant expertise on technical systems, tools and working processes, as
well as the corresponding documentation and working documents. Similar require-
ments apply to commercial occupations and occupations in the health sector.
Additional work experience or relevant professional studies form the basis for a
checklist of questions that can be used if researchers feel that additional questions are
necessary.
The study of the objective side of the professional field of activity to be analysed should not lead to the formulation of differentiated hypotheses on the job description and the fields of activity, so as not to restrict the dialogue with the experts from the outset and not to direct their attention to the framework imposed by such hypotheses. This would unacceptably limit the chances of high internal validity of the investigation in terms of consensus validity.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 91
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_5
92 5 Developing Open Test Tasks
→ Please name the most important stages (no more than five) of your
professional development as an “expert in skilled work”.
→ For each professional position, please provide three to four typical examples
of the tasks you have carried out in your professional practice.
→ Please note the professional stations and the examples of tasks on the prepared
overhead slide for the presentation of the results.
→ After 15-20 minutes, we will ask you to present your professional career in
plenary.
The workshop roughly follows the temporal and organisational scheme outlined below.
Assignment 1: Individual Professional Career
The work assignment ‘individual professional career’ contains a list of the most
important stages of professional development, from training to expert level in skilled
work. To avoid too fine a breakdown of the career, the number of stations to be
described is limited to a maximum of five examples. Participants whose professional
development consists of more than five stations must combine several stations or
make a selection of the most important ones. For each of these stations, participants should cite three to four typical tasks from their professional practice that they performed there (Fig. 5.1).
Assignment 2: ‘Challenging and Qualifying Professional Tasks’
After the participants have formulated their individual professional careers, they are
asked to mark the professional examples of tasks which they found particularly
challenging in their current professional practice and in the course of which they
have further qualified themselves.
This additional assignment can also be set during the presentation, so that the
participants can specify the particularly challenging and qualifying professional
tasks at the moderators’ request.
Assignment 3: Presentation of Individual Professional Careers
Participants are given the opportunity to present their professional careers on the
basis of the documents they have prepared.
5.1 Expert Specialist Workshops for Identifying Characteristic Professional Tasks 93
What was the challenge in the professional examples of tasks you mentioned?
Did these tasks challenge your professional expertise and were you yourself
not yet sufficiently prepared for these tasks?
Difficult task: What was the difficult thing about the tasks? At what point
did you realise it was difficult? How did you overcome the difficulty? How
would you deal with such a difficulty today?
Insufficiently prepared: How come you had to take on a task for which you
were not yet sufficiently prepared? What did you find difficult about the task?
When did you realise that you were insufficiently prepared for the task? How
did you overcome these difficulties?
The subject matter of external validation is the result of the ESWs: the characteristic
professional tasks identified for a profession and their assignment to the four learning
areas. As a rule, occupational scientists, employer and employee representatives as
well as trainers and teachers of vocational fields participate in external validation.
The aim of external validation is to check the professional work tasks outside the
operational context in which the expert specialist workshops are held. The possible
influence of company or industry-specific peculiarities on the description of the
skilled work can thus be uncovered and corrected if necessary. Therefore, the
participants in external validation should have sound knowledge of the profession
to be examined in companies of different sizes and in different industries and
regions.
In a first step, averages are calculated for the categories ‘significance’ and ‘fre-
quency’ as well as for the assigned development trends from the assessments of the
participants in the validation of the individual occupational tasks. The mean values
are entered in a diagram the axis designations of which correspond to the two criteria
(Fig. 5.4). The averaged values of the development trends can also be represented as vector arrows, which indicate the future development of professional tasks in terms of their significance and frequency.
This diagram can be used to determine the core area of a profession. In this
example, the core area is limited by a minimum frequency and significance of 40%.
Professional work tasks outside of this core area can be assigned to a specific
company or sector. The future development of the job description can be estimated
using the trend arrows. For example, professional work tasks that are not yet part of
the core area of the profession at the time of the analysis but will become more
important and frequent in the future can already be taken into account in the job
description.
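The averaging and core-area procedure described above can be sketched in a few lines of code. This is only an illustration: the task names, ratings and trend values are invented, and only the 40% core-area threshold is taken from the example in the text.

```python
# Sketch of the core-area analysis described above (invented data).
# Each characteristic work task is rated for 'significance' and
# 'frequency' (0-100 %); the trend is a vector (d_significance,
# d_frequency) pointing towards the task's expected future position.

def mean(values):
    return sum(values) / len(values)

ratings = {  # hypothetical validation-workshop ratings
    "maintain drive systems": {
        "significance": [70, 80, 75], "frequency": [60, 55, 65],
        "trend": (10, 5),
    },
    "document legacy wiring": {
        "significance": [35, 30, 40], "frequency": [30, 25, 20],
        "trend": (-5, -10),
    },
}

CORE_THRESHOLD = 40.0  # minimum mean significance and frequency (%)

results = {}
for task, r in ratings.items():
    s, f = mean(r["significance"]), mean(r["frequency"])
    ds, df = r["trend"]
    results[task] = {
        "significance": s,
        "frequency": f,
        # part of the occupation's core area today?
        "core": s >= CORE_THRESHOLD and f >= CORE_THRESHOLD,
        # position after applying the trend vector
        "future_core": s + ds >= CORE_THRESHOLD and f + df >= CORE_THRESHOLD,
    }

for task, res in results.items():
    print(task, res)
```

Plotted with significance and frequency on the two axes, each task is a point and each trend a vector arrow, as in Fig. 5.4.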
The evaluation of the item ‘difficulty’ can be used to check the assignment of the
professional work tasks to the four learning areas.
Approximately the same number of tasks are planned for each of the four areas of
responsibility. Particular attention must be paid to ensuring that the tasks fit into the
logic of the regulatory scheme (Fig. 5.2) (cf. in detail Kleiner, Rauner, Reinhold, &
Röben, 2002).
The obvious reference points for the development of test tasks are established occupations and professions. Another obvious choice is a pragmatic procedure as established in the International World Skills (IWS). A relatively large
number of occupations are internationally established. This applies not only to the
craft and health professions such as carpenter, chef or nurse, but also to modern
professions such as electronics technician, computer scientist and numerous com-
mercial professions. The internationalisation of economic development and the
emergence of a European labour market have led to greater harmonisation of
professional activities and occupations. Occupations with similar professional titles
therefore also include comparable fields of professional activity. It is therefore
advisable to compare occupations on the basis of their fields of action in international
comparisons.
The competence levels and the characteristic competence profile of test groups
are measured with the test format of the open complex test tasks (Fig. 5.5) (→ 8).
5.2 An Open Test Format 97
Fig. 5.5 Open test tasks for assessing process and shaping competence
These can represent local, regional, national and international courses and systems of
the same or formally different levels of qualification. Test items are therefore
developed according to the following criteria.
Test tasks are open to different solutions and authentic to professional reality. This is the prerequisite for recording the different competence levels and profiles.
The format of the open test tasks and the associated requirement to justify the task solution in detail (experts should be able to understand, explain and take responsibility for their task solutions) widen the scope for solutions and make it possible for relevant technical training courses at various qualification levels to participate in a test. The prerequisite for the participation of a training course in a competence diagnostics project is that the content validity of the test tasks is assessed as given.
As these are open test tasks that can be solved at different levels of knowledge and justification, courses at different qualification levels and in different training organisations (dual, school-based) can participate in comparative competence surveys, insofar as they pursue the goal of qualifying for the exercise of the relevant professional activities.
5.2.1 Representativeness
The criterion of the representativeness of the test tasks determines whether and to
what extent test tasks cover a profession’s fields of action. Professional competences
are open to application as domain-specific cognitive performance dispositions. Qualifications examined in examination procedures, on the other hand, are objectively given by the work tasks and processes and the resulting qualification requirements. When examining professional qualifications, these requirements must be reviewed in full, if only for safety reasons. Nevertheless, both forms of professional skills overlap. The decision on the representativeness and validity of
test items for related programmes concerns both the vertical and horizontal structure
of the education system and the degree of its scholastic (academic) and dual
(occupationally qualifying) structure of the curricula. In contrast to an examination,
competence diagnostics aims to record the competence levels and competence pro-
files of test groups (→ 8). A complete review of the professional qualification
requirements defined in the job descriptions is not necessary. In practice, the teachers
and trainers decide on which and how many complex test tasks are required to cover
the fields of action characteristic of a profession or to record the competence levels
and profiles of the test participants.
The test tasks represent authentic work situations. It is taken into account that the
partial competences corresponding to the requirement criteria are challenged in their
complete solution (→ 4). This ensures that not only partial competences such as
environmental compatibility or the functionality of a task solution are measured. A
restriction of the complexity of professional tasks in reality would limit or call into
question the validity of the content of the test tasks.
5.2.3 Difficulty
When assessing the difficulty of test and examination tasks, a distinction must always be made according to the degree of training. Beginner tasks are thus easier to solve for experts and advanced users than for beginners.
In principle, professional tasks are not solved correctly or incorrectly, but are
always more or less expedient. The criterion of correctness also applies to partial
aspects of professional tasks if, for example, the relevant VDI [Association of
German Engineers] safety regulations and electrophysical laws are to be observed
when planning office lighting—and above all when installing it.
The standards and regulations to be observed when solving an occupational test task,
e.g. accident prevention, health protection and occupational safety as well as the
relevant VDI or DIN regulations are not specified in the situation description, since
the test task is used to check whether and to what extent the test persons are familiar
with the subject-related standards and rules and how they apply them in relation to
the situation.
The test authors base the development of the test questions on examples of related
professions and the general criteria for the test questions development (Table 5.1).
Table 5.1 Guidelines for the development of test tasks (Appendix C: Examples of test tasks)
The test tasks
• Entail an authentic problem of professional and company work practice,
• Define a profession-specific—rather large—scope for design and thus enable a multitude of
different solution variants of varying depth and width,
• Are open to design; i.e., there is no right or wrong solution, but requirement-related solution
variants,
• Require the consideration of aspects such as economic efficiency, practical value orientation and
environmental compatibility (see the concept of holistic task solution) in addition to technical-
instrumental competences,
• Require a typical professional approach to their solution. The solution of the tasks concentrates
on the planning-conceptual aspect and is documented using relevant forms of presentation,
• Can also include the practical solution if the test tasks are to be used to test concrete professional
skills,
• Challenge the test persons to solve, document and justify the tasks in the sense of professionalism (at the respective development level) without excluding reduced solutions.
Fig. 5.6 Competence profiles of college students (China) (Zhou, Rauner, & Zhao, 2015, 400; for calculating and presenting competence profiles → 8) (In an earlier version of the competence profiles, the three dimensions of functional, process-related and holistic shaping competence were still indicated as competences—the terms KF, KP and KG thus correspond to the dimensions DF, DP and DG)
5.4 Test Arrangements for Related Vocational Training Courses with Different. . . 101
test tasks first for SII and post-SII training programmes that clearly qualify for
vocational training, and then in a second step to check whether the test tasks are
assessed as representative and valid in content by the subject teachers/lecturers of
related (higher) academic training programmes.
The decision on the representativeness and validity of test tasks for related
programmes concerns both the vertical and horizontal structure of the education
system and the degree to which the work tasks of the training regulations are oriented
towards ‘subjects’ or lead to vocational qualifications.
Initial experience and research results are now available for the inclusion of
vertically consecutive courses of education from upper-secondary level to the level
of higher education vocational training courses.
These test arrangements are divided into primary and secondary (associated) test
groups (Table 5.2). Primary test groups represent training courses for and with which
the test tasks are developed. Typical examples of this are the COMET projects for
the training occupations of electronics technician, automotive mechatronics techni-
cian, industrial mechanic and other training occupations regulated by BBiG, related
vocational school and vocational training courses that are regulated according to the
model of alternating duality at SII level.
Once the set of test items has been developed and tested in a pre-test (see below),
it makes sense to check whether these test items can also be used to measure
vocational competences that are taught in courses building on initial vocational
training (associated test groups).
These are, for example, technical school programmes, further training to become
a master craftsman as well as relevant technical university programmes.
Whether such a test arrangement is possible depends solely on how the represen-
tativeness and validity of the content of the test tasks are evaluated by the teachers in
these courses. If the test tasks represent the main fields of action of the occupations
Table 5.2 Test arrangements for primary and associated test groups

Formal qualification level | Test arrangement 1 | Test arrangement 2 | Test arrangement 3
Tertiary programmes at bachelor level | Associated test group 2 | Associated test group 1 | Primary test group
Post-SII: technical schools/master craftsman qualification | Associated test group 1 | Primary test group | Associated test group 1
Sec II: dual vocational training, vocational schools | Primary test group | Associated test group 1 | Associated test group 2
for which the training courses qualify and if the validity of the test tasks in terms of
content is assessed as appropriately high, then nothing stands in the way of partic-
ipation of this test group.
The degree of representativeness and validity of the content of the test tasks
determines the possibility and design of the test arrangement.
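The structure of Table 5.2 follows a simple rule: each test arrangement designates one formal qualification level as the primary test group, and the associated test groups are numbered by their formal distance from that level. A minimal sketch of this rule (the encoding is ours, not part of the COMET instruments):

```python
# Sketch: the level logic behind Table 5.2 (our encoding, not part of
# the COMET instruments). Rows are formal qualification levels; in each
# test arrangement one level provides the primary test group.
LEVELS = [
    "Sec II (dual vocational training, vocational schools)",
    "Post-SII (technical schools/master craftsman qualification)",
    "Tertiary programmes at bachelor level",
]

PRIMARY_LEVEL = {1: 0, 2: 1, 3: 2}  # test arrangement -> primary level index

def role(arrangement: int, level: int) -> str:
    """Role of a qualification level in a test arrangement: the primary
    test group, or an associated test group numbered by its formal
    distance from the primary group (cf. Table 5.2)."""
    distance = abs(level - PRIMARY_LEVEL[arrangement])
    return "primary test group" if distance == 0 else f"associated test group {distance}"

# Test arrangement 1 reproduces the first column of Table 5.2.
for lvl, name in enumerate(LEVELS):
    print(f"{name}: {role(1, lvl)}")
```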
In addition to the primary test group, the S II test arrangement identifies two
associated test groups that are formally assigned to higher qualification levels.
Technical schools are ranked one qualification level higher and bachelor courses two qualification levels higher, in accordance with international and national qualification frameworks. A frequently asked question about this test arrangement is: Are the
technical college students (and master students) systematically underchallenged by
the test tasks of the primary test group (here trainees in the second year and third year
of training) and therefore cannot prove their real competence?
In the case of closed test tasks (multiple choice tasks), such a test arrangement
would not be possible, or only to a very limited extent, since norm-based test tasks
are always assigned school levels or school years or a defined professional qualifi-
cation level. A decisive allocation criterion is then the degree of difficulty of the test
tasks. The COMET test format is based on the concept of open and complex test
tasks. These are criteria-oriented test tasks throughout. This gives each test task a
scope for solutions (scope for design) that offers room for solutions of differing quality and extent (from simple to highly professional). Even if a trainee in the second or third year of training presents a task solution of a quality comparable to that of a technical college or university student, the more advanced students can justify their solutions with greater depth and subject range. The ‘range’ and ‘depth’ of
the explanatory statement are indicators of the level of work process knowledge
incorporated into the task solutions by the test persons. In test practice, this means that the solution spaces of test tasks developed for SII training courses are, on average, exhausted to a higher degree by test participants from higher education courses. Since the solution spaces also include the knowledge that guides and reflects action, it is only rarely the case that they are ‘exploited to the full’.
Formally, the post-SII test arrangement differs from the first and third test arrange-
ments in that the formal qualification differences to the subordinate and superior
courses of study each constitute only one level. The professional fields of action of
the post-SII graduates are the reference point for the development of the test tasks. A
certain difficulty in international comparative studies is that the same vocational
fields of action are trained in vocational training courses that are formally assigned to
different qualification levels. For example, the vertical range in the training of
nursing staff (paediatric, general and elderly care) extends from the ‘unskilled workers’
level through SII training courses to the bachelor’s level. Training at all three levels
of qualification usually goes hand in hand with the development of level-related
professional fields of action. Whether and to what extent these differ in their content
and qualification requirements must be examined empirically in each case.
The typical vocational fields of action for technical college graduates are initially
identified during the development of test tasks. Domain-specific qualification research provides relevant research methods (Rauner, 2006; Röben, 2006). The educational plans of the post-SII educational programmes are of secondary importance, as
the ability to work is usually only achieved in a phase of familiarisation with the
profession—following the relevant studies at a technical college. The reference point
for the content of COMET competence diagnostics is therefore the professional
competence, which is the focus of the curricula at the technical college, but which
can often only be achieved in the practical phase following the studies at the
technical college. The situation is different with higher technical schools such as
those established in Switzerland. They are organised in a dual manner, and their
content and objectives are therefore based on the training content and objectives
identified with the participation of organisations from the world of employment.
If no results of the relevant qualification research are available, it makes sense to
identify the characteristic fields of professional tasks and activities on the basis of
expert specialist workshops (Spöttl, 2006).
The test tasks are developed by the lecturers of the bachelor’s degree programmes at universities. Here, too, the rule applies that the authors of the test tasks take as a basis the professional fields of action that are considered representative for the graduates of the degree programmes. One difficulty for this test arrangement arises from very broadly designed courses of study whose contents follow traditional concepts of basic academic study. The contrasting study programme concept is based on a high degree of specialisation in content and a correspondingly ‘tailor-made’ university-based vocational training. For numerous professionally qualifying bachelor degree programmes (subjects), there is a more or less pronounced correspondence with the content of vocational training
programmes at SII and technical college level. COMET projects based on this test
arrangement have not yet been conducted. The COMET project Nursing (Switzer-
land) has a certain proximity to this test arrangement, since the dual course of study
In competence diagnostics projects, it is more the rule than the exception that
different vocational training programmes such as dual vocational training, voca-
tional schools and technical colleges as well as bachelor’s programmes qualifying
for vocational training take part in a test. The concept of open test tasks facilitates
this form of comparative competence surveys. The validity of the test tasks is
determined in projects spanning different educational programmes with reference
to the higher-level occupational fields of action of the primary test population
(occupational validity). The test tasks developed in pre-test procedures are then
evaluated by the project groups of the educational programmes involved in the test
according to their validity for ‘their’ educational programmes (Fig. 5.7).
During the evaluation of the individual test tasks, the project groups (of the
participating training courses) evaluate the
The solution scope of a test task defines the possibilities of a task solution
under the basic conditions specified in the situation description. The wishes
and requirements of the client (customer) limit the (theoretical) scope for
design. In the context of a (higher) school learning situation, it is therefore more appropriate to assume room for manoeuvre, and, in a test format tied to the context of company work orders, a solution space. Scope for solutions and design can only illustrate the structures of possible solutions in an exemplary manner. In this respect, it also remains open to unforeseeable solutions.
The authors of the test tasks have an idea of the spectrum of possible solutions to
the test tasks. The theoretically possible solutions form an almost unlimited design
5.6 Evaluation and Choice of Test Tasks: The Pre-Test 105
Fig. 5.7 The professional fields of action as reference point for determining the professional
validity of the test tasks
When dealing with the solution spaces in the framework of rating and rater training, it must be avoided that solution spaces are misunderstood as ideal-typical solutions.
The use of the solution space in evaluating task solutions is practised within the framework of rater training. Practice shows that after rater training, raters consult the solution spaces only occasionally (initially): they are able to apply the rating items in a task-specific manner and call the solution space to mind virtually automatically. This is reflected in a correspondingly pronounced inter-rater reliability.
The development of test tasks for a profession or a specialist area is carried out
according to a defined procedure (Fig. 5.8).
The first step is to determine which test groups are to be involved in a COMET
project. As COMET tests are generally designed as international comparative tests, or it must be assumed that national projects will expand into international ones, the educational and study programmes to be included in the tests are defined. Three test arrangements are differentiated (→ 5.3). These result from the definition of the primary test group. This can be (1) vocational training at upper-secondary level (initial vocational training), (2) continuing vocational training at the level of technical school programmes or (3) higher education programmes that qualify for a profession.
In an extended test arrangement, the primary test groups can be supplemented by courses with lower and higher formal qualification levels (secondary test groups).
The associated prerequisite is the classification of the test tasks by the subject
lecturers (teachers) as valid in content for the test groups to be involved.
The authors of the test questions are usually subject teachers/lecturers and trainers
(content specialists) who are qualified for the vocational training of the trainees
(students) to be examined. As a rule, a one-day training course is sufficient to qualify
these teachers/lecturers for the development of test tasks. The subject of the training
is an introduction to the COMET competence and measurement model as well as the
test procedure. The criteria for developing test tasks are explained using examples of
tasks from related COMET projects. The development of test tasks includes the
development of solution spaces. These are used for the task-specific interpretation of
the rating items by the raters of the task solutions.
The development of the test tasks requires the identification of the professional fields
of action for the respective profession. For each professional field of action
(Table 5.4), two to three test questions (drafts) including the solution spaces are
developed by the teams of authors (groups of two or three). It must be considered whether the same fields of action apply to all the test groups to be involved or whether specific technical characteristics of individual training courses have to be taken into account (Fig. 5.9).
For such test arrangements, the common competences are covered by a set of test
tasks and the specific competences by supplementary test tasks.
In COMET test practice to date, especially against the background of the requirements of international comparative projects, test tasks have been developed that are aimed at the end of the educational programmes. This serves to record the competences defined in the job descriptions (job profiles), on the basis of which employability or the training objective is described. Vocational (university) education and training courses can also be included. Even though these programmes are not designed primarily to teach vocational skills, it is possible to measure the degree to which they nevertheless succeed in teaching their pupils/students such skills.
The ‘degree of difficulty’ intended by the authors results from the qualification requirements placed on the primary test group. The aim is for the (primary) test group to assess the difficulty of the test tasks with values between 6.5 and 7.5 on a scale of 0 to 10. These values are determined in the pre-test. The ‘difficulty’ of open test tasks in the COMET test task format should not be confused with the degree of difficulty of normative test tasks (→ 5.8).
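The acceptance corridor for the empirically determined difficulty can be expressed as a simple check; the pre-test ratings below are invented for illustration.

```python
# Hypothetical pre-test ratings of one test task's difficulty by the
# primary test group, on the 0-10 scale described above.
difficulty_ratings = [6, 7, 8, 7, 6, 7, 8, 7, 7, 6]

mean_difficulty = sum(difficulty_ratings) / len(difficulty_ratings)

# A task draft meets the target if its mean difficulty lies in [6.5, 7.5].
accepted = 6.5 <= mean_difficulty <= 7.5
print(f"mean difficulty = {mean_difficulty:.2f}, accepted = {accepted}")
# -> mean difficulty = 6.90, accepted = True
```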
An essential step in the development of test tasks is the evaluation of the task drafts
and solution spaces by the coordinating project group and the test experts involved in
the project. As a rule, this results in initial revision instructions and a corresponding
revision of the task drafts. A detailed didactic evaluation of the test tasks and rating
scale (if modified for a new professional field) is part of the rater training (testing the
test tasks).
The test tasks (drafts) are tested on a sample of the primary test group. Each test task
should be completed and evaluated by at least ten to 15 test persons. If the test group
is relatively homogeneous, the lower number of participants is sufficient. In the case
of more heterogeneous courses of education, the upper limit should be chosen.
The pre-test includes rater training immediately after the test. The project group
or the group of authors of the test tasks selects a task solution for each professional
field of activity—at least four sample solutions of medium difficulty. They form the
basis for rater training.
This includes the review of the authors’ proposals for the list of rating items to be
considered.
The method of rater training is described below. Exact adherence to the methodical
procedure ensures that good to very good reliability values are achieved after
approximately one day of rater training.
The Trainers
The trainers conducting the rater training should have assisted in at least one rater
training. They must be familiar with the COMET test procedure and have exact
knowledge of the test tasks and their solution spaces for the respective project. It has
proven useful for training to be carried out by teams of two, with one of the trainers
having relevant professional/technical and didactic competence.
Example For 600 test participants, each of whom solves a (complex) test task
(maximum processing time: 120 min), a double rating requires 300 h of rating
time.
• With a rating time of 10 h per rater, 30 raters are required for the rating; with
15 h per rater, 20 raters are required.
• After approx. 4 h of rating (empirical value), each rater should take a half-
hour break, as rating requires a high degree of concentration.
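The arithmetic of this example can be sketched in a few lines. The function names are ours, and the 15 minutes of rating time per task solution is the implicit assumption that reproduces the 300 h figure:

```python
# Back-of-the-envelope rating workload for a COMET-style double rating.
# Assumption (ours): 15 minutes of rating time per task solution.
import math

def rating_hours(participants, ratings_per_solution=2, minutes_per_rating=15):
    """Total rating hours for all solutions."""
    return participants * ratings_per_solution * minutes_per_rating / 60

def raters_needed(total_hours, hours_per_rater):
    """Smallest number of raters covering the workload."""
    return math.ceil(total_hours / hours_per_rater)

hours = rating_hours(600)        # 600 solutions, double-rated
print(hours)                     # 300.0
print(raters_needed(hours, 10))  # 30
print(raters_needed(hours, 15))  # 20
```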
112 5 Developing Open Test Tasks
Organisation
Rating is done online. A joint one- or two-day rating schedule has proven to be the
best. Upon completion of the online rating, the rating results are available for
feedback to the test participants (via the responsible teachers/trainers).
Each participant is provided with a rater manual to prepare for rater training.
Different coefficients are available for calculating interrater reliability. Apart
from the research question, the choice of a suitable coefficient depends primarily on
two factors:
1. the number of rating persons
2. the scale level.
Fig. 5.10 Example of a rating table from rater training for the profession ‘industrial mechanic’ (first trial rating)
This rating table shows the rating results of the first trial rating of twelve raters and three rating groups. The degree of agreement is therefore still very low.
In the course of rater training, the degree of agreement increases steadily and converges to values of Finn >0.75 (Fig. 5.11).
5.6 Evaluation and Choice of Test Tasks: The Pre-Test 115
Fig. 5.11 Progress of the rater consensus (forwarding and logistics merchants)
In the case of COMET test instruments, we usually have more than two raters to
deal with during the pilot phase of the projects—especially during rater training. Up
to 40 raters will participate in the pilot phase. This means that they evaluate the same
task solutions. For more than two raters, the following three coefficients are suitable.
Fleiss’ Kappa
It is (also) suitable if more than two raters evaluate a person or task and an ordinal
scale structure is assumed. Both are the case with COMET instruments. A distinction
is made between the exact variant, also called ‘Conger’s Kappa’, and Fleiss’ Kappa.
In most cases, Conger’s Kappa is slightly higher, so a comparison between
Fleiss’ Kappa and Conger’s Kappa is recommended.
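A minimal sketch of Fleiss’ Kappa (our own illustration, not the COMET implementation). Conger’s exact Kappa additionally conditions on per-rater category marginals and is omitted here:

```python
# Fleiss' Kappa for n raters and k categories.
# counts[i][j] = number of raters who assigned task solution i to
# category j; every row must sum to the same number of raters n.

def fleiss_kappa(counts):
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    total = n_subjects * n_raters

    # Observed agreement per solution, averaged over solutions
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects

    # Chance agreement from the category marginals
    p_e = sum(
        (sum(row[j] for row in counts) / total) ** 2
        for j in range(len(counts[0]))
    )
    return (p_bar - p_e) / (1 - p_e)

# Three raters, two categories, perfect agreement on four solutions:
print(fleiss_kappa([[3, 0], [0, 3], [3, 0], [0, 3]]))  # 1.0
```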
The Finn coefficient:

F_u = 1 − MS_W / ((1/12) · (N² − 1))

MS_W = average squared deviation of the observed values per item
N = number of measured values (scale values)

Spearman–Brown formula for the rater group:

F_ug = (n · F_u) / (1 + (n − 1) · F_u)

n = number of raters
Justification for the choice of the measure:
Asendorpf and Wallbott (1979) propose the Finn coefficient if the variance of the
mean values of the observation units is too small (as in our case).
To calculate the Finn coefficient correctly, a distinction must be made between a
‘two-way’ model and a ‘one-way’ model. The ‘one-way’ model assumes that only
the persons/tasks to be evaluated are selected at random. The ‘two-way’ model also
assumes that the raters are randomly selected. Since the latter is usually not the case,
the ‘one-way’ Finn coefficient is calculated for the COMET model. The advantage
of the Finn coefficient lies in the fact that it is suitable for calculation even if there is a
high degree of correspondence between the raters. In other words, it is sensitive to
small differences between persons.
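The one-way Finn coefficient and its Spearman–Brown extension to the rater group can be sketched as follows. Our reading (an assumption) is that N is the number of scale values, so that the chance variance of a uniform rating distribution is (N² − 1)/12, and that MS_W is the population variance of the raters’ scores per item, averaged over items:

```python
# One-way Finn coefficient and Spearman-Brown extension (sketch).
# Assumption: MS_W = population variance of the raters' scores per
# item, averaged over items; N = number of scale values.

def finn_coefficient(ratings, n_scale_values):
    """F_u = 1 - MS_W / ((1/12) * (N**2 - 1))."""
    def ms_within(scores):
        m = sum(scores) / len(scores)
        return sum((s - m) ** 2 for s in scores) / len(scores)

    ms_w = sum(ms_within(item) for item in ratings) / len(ratings)
    return 1 - ms_w / ((n_scale_values ** 2 - 1) / 12)

def spearman_brown_group(f_u, n_raters):
    """Reliability of the averaged judgement of n raters."""
    return n_raters * f_u / (1 + (n_raters - 1) * f_u)

# Three raters in perfect agreement on two items, 4-point scale:
print(finn_coefficient([[2, 2, 2], [3, 3, 3]], 4))  # 1.0
```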
The ICC is particularly popular due to its implementation in the SPSS statistics
software. As with the Finn coefficient, a distinction must be made here between a
‘one-way’ model and a ‘two-way’ model. As with the Finn coefficient, the one-way
model assumes that only the persons/tasks to be evaluated are randomly selected.
The reasoning is the same as for the selection of the Finn coefficient, so that for the
COMET instruments, the ICC is calculated for ‘one-way’ models. Another aspect to
consider when calculating the ICC correctly is whether the absolute or the average
agreement of the raters is of interest. This also depends on how high the agreement
between the raters is, so it is advisable to calculate both the ‘absolute’
(= ‘agreement’) and the ‘relative’ (= ‘consistency’) version of the ICC. This
consideration is interesting in that there could be a high degree of agreement
between the raters across all (averaged) items even though they differ in some
important respects.
If only the relative ICC is calculated, there is a risk that these differences cannot
be worked out. Accordingly, both the ‘agreement’ and the ‘consistency’ versions of
the ICC are considered below.
Accordingly, the following interrater coefficients are calculated for COMET
instruments:
1. Fleiss’ Kappa
2. Conger’s Kappa
3. The Finn coefficient (‘one-way’)
4. The ICC (‘one-way’) to check the consistency
5. The ICC (‘one-way’) to check the agreement.
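A sketch of the one-way ICC from a one-way ANOVA decomposition (our own minimal illustration, not the SPSS implementation). Note that in a strictly one-way model the consistency/agreement distinction collapses, because rater effects are not separated; the sketch therefore returns the single-rater value ICC(1) and the average-of-k value ICC(1,k):

```python
# One-way ICC via a one-way ANOVA decomposition (illustrative sketch).
# data: one list of k ratings per task solution.

def icc_one_way(data):
    n = len(data)           # task solutions (targets)
    k = len(data[0])        # raters per target
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]

    ss_between = k * sum((m - grand) ** 2 for m in row_means)
    ss_within = sum((x - m) ** 2
                    for row, m in zip(data, row_means) for x in row)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))

    single = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    average = (ms_between - ms_within) / ms_between
    return single, average

# Perfect agreement of two raters on three solutions:
print(icc_one_way([[1, 1], [2, 2], [3, 3]]))  # (1.0, 1.0)
```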
The following shows how these interrater coefficients differ from one another
using the example of electricians’ test tasks used in international comparative
COMET projects. The correct reading of the data was checked several times using
descriptive parameters.
The proximity of the unadjusted Finn coefficient (2009) to the (‘two-way’) Finn
coefficient computed with the statistical software ‘R’ is striking. However, the (‘two-way’) Finn
coefficient assumes a random selection of raters. This would mean that 14 out of
20 raters are randomly selected. The random allocation to the tasks is already given
by the (‘one-way’) calculation. The calculation (‘two-way’) increases the ‘degrees of
freedom’ and thus leads to a higher Finn coefficient. The table shows that the
calculation of the Finn coefficient for the reference values (2009) corresponds
exactly to the values of the comparative rating for the skylight control and the drying
space.
Results: The unadjusted Finn coefficient is the two-way Finn coefficient. This is
not suitable for the COMET test procedure, as the raters are not selected randomly
(Table 5.5).
Prospect
The evaluation for selecting a suitable interrater coefficient is based on more than
two raters evaluating a task. This is the case in rater training during the pilot phase of
COMET projects. In the actual test phase, however, the solutions of the pupils,
trainees and students are always evaluated by two independent raters, so that further
coefficients are available for calculating rater agreement. These
coefficients and their benefits still have to be demonstrated for COMET
instruments.
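For the two-rater test phase mentioned here, Cohen’s Kappa is one obvious candidate. The following is a minimal sketch; the choice of coefficient is our illustration, not one prescribed by the manual:

```python
# Cohen's Kappa for two independent raters (illustrative sketch).
# Inputs are the two raters' category assignments per task solution.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
```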
All test tasks (drafts) are tested in the pre-test. The test results are used to measure the
competence (competence level and competence profile) of the test participants and
to compare the test tasks (Fig. 5.12). The profile of a test task and the variation
coefficient can be used to estimate the potential requirements of the test task. If the
task profiles for a profession vary in their degree of homogeneity, it makes sense to
strengthen the sub-competences under-represented in the situation descriptions of
the test tasks with corresponding requirement-related information, without including
specifications, because these would already be part of the solution space.
The total point values (ATS) of the test tasks (drafts) for the carpenters show
consistently high values. Since four of the test tasks (A1, A2, A3 and A7) have a
homogeneous to very homogeneous (A2, A3) task profile, two conclusions are
obvious.
Fig. 5.12 Profiles of test task drafts from the carpenters’ pre-test (Figs. 5.17 and 5.18)
1. The team of authors was able to formulate situation descriptions suitable for the
collection of homogeneous competence profiles. The task profiles show which
sub-competencies are not challenged by the situation descriptions. For Test 8, for
example, this concerns the sub-competencies K3, K5, K6, K7 and K8.
2. With values of 40 and above, the TS represents a rather easy level of difficulty,
with the exception of task A4.
The test results tend to have an objective level of difficulty which corresponds
to the subjective assessment of the difficulty of the tasks by the test group and the
values of its self-assessment. For example, the values for ‘difficulty’ and ‘self-
assessment’ are 6, which reflects the objective level of difficulty of this task with
its TS ¼ 32.5.
In a first approximation, the total TS (of the pre-test participants) represents the
objective difficulty of a test item for the test population represented by the pre-test
group.
In a first approximation, the competence profiles of the test tasks represent the
competence profiles of the pre-test participants and, at the same time, the quality of
the test tasks. The variability coefficient V indicates whether a test item has the
potential to comprehensively test professional competence.
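How the variation coefficient V flags one-sided competence profiles can be illustrated as follows. The profile values are invented and the function name is ours; the decision threshold remains project-specific:

```python
# Variation coefficient V of a competence profile (illustration with
# invented values; a low V indicates a homogeneous profile).

def variation_coefficient(profile):
    """V = standard deviation / mean of the criterion scores."""
    mean = sum(profile) / len(profile)
    var = sum((x - mean) ** 2 for x in profile) / len(profile)
    return (var ** 0.5) / mean

homogeneous = [5.0, 5.2, 4.8, 5.1, 4.9, 5.0, 5.2, 4.8]  # balanced K1..K8
one_sided = [7.5, 7.0, 1.0, 1.5, 6.5, 1.0, 2.0, 1.5]    # strong K1/K2/K5 only
print(variation_coefficient(homogeneous) < variation_coefficient(one_sided))  # True
```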
The authors of the test questions and the participating subject teachers decide—
taking into account all pre-test results—whether an inhomogeneous competence
profile of a test question is due to the competence of the test groups or to weaknesses
in the situation description of the test questions.
The pre-test results for SLS reveal a special feature. Although the criteria
(sub-competences) environmental and social compatibility were applied in all test
tasks, the competence profiles show a pronounced competence gap here, reflecting
the trainees’ narrowly ‘technical’ understanding of their occupation (Fig. 5.13).
Yet both competence criteria (sub-competences) are of fundamental importance for
the SLS occupations. This was confirmed by the project group with reference to the
job description and the relevant training regulations.
It is remarkable here that the teachers, with their extended understanding of
professional competence, were able to identify very precisely the reduced profes-
sional understanding of their students when assessing the test results (rating): ‘The
test result is probably due to our own professional understanding’. However, this
had changed fundamentally with the rater training, according to the consistent
assessment of the pre-test experiences of the raters.
Reliability analyses (Erdwien & Martens, 2009, 70 f.)
Fig. 5.13 Competence profiles of trainees for shipping and logistics services (SLS) (n = 6 left,
n = 8 right)
Table 5.6 Reliability analyses for the eight criteria of the evaluation form
Criterion Rating items Alpha value
Clarity/presentation 1–5 0.88
Functionality 6–10 0.86
Sustainability 11–15 0.84
Efficiency/effectiveness 16–20 0.82
Orientation on business and work process 21–25 0.87
Social compatibility 26–30 0.84
Environmental compatibility 31–35 0.85
Creativity 36–40 0.90
As the aim is to retain the eight criteria in the further analyses or to combine them
into the competence levels ‘functional competence’, ‘procedural competence’ and
‘shaping competence’, a reliability analysis was carried out, in addition to the factor
analysis, on each set of evaluation items belonging to one criterion, in order to
check whether joint further processing of the five evaluation items belonging to
each criterion is appropriate.
The reliability analyses show the alpha values documented in Table 5.6.
If item 20 is excluded from the ‘efficiency’ scale because it does not meet the
requirement of sufficient cell occupation, the alpha value deteriorates slightly to
0.80. By contrast, excluding item 35 from the ‘environmental compatibility’ scale
would slightly improve the alpha value to 0.86.
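Cronbach’s alpha, as used in these scale analyses, can be sketched as follows. This is a generic textbook implementation, not the one used by Erdwien and Martens:

```python
# Cronbach's alpha for a scale (generic sketch).
# item_scores: one list per rating item, each holding all test
# persons' scores on that item (e.g. the five items of one criterion).

def cronbach_alpha(item_scores):
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Per-person total scores across all items of the scale
    totals = [sum(item[p] for item in item_scores) for p in range(n)]
    return k / (k - 1) * (1 - sum(variance(i) for i in item_scores)
                          / variance(totals))

# Two perfectly parallel items yield alpha = 1:
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```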
In a further step, it was examined which reliability values were achieved by the
competence levels ‘functional competence’, ‘processual competence’ and ‘shaping
competence’ on which the theoretical assumptions were based, and whether all
40 assessment items resulted in the overall construct ‘vocational competence’. The
relevant results are shown in Table 5.7.
Conclusion Overall, the results of the reliability analyses show a very satisfactory
scale stability for each of the eight criteria for the closer determination of the
Table 5.7 Reliability analyses for the three assumed competence levels
Competence levels Competence criteria (sub-comp.) Alpha value
Functional competence (DF) Clarity/presentation 0.93
Functionality
Procedural competence (DP) Sustainability 0.92
Efficiency/effectiveness
Orientation on business and work process
Shaping competence (DG) Social compatibility 0.93
Environmental compatibility
Professional competence All 40 rating items 0.97
competence model’s competence levels. The reliabilities for the competence levels
based on education theory and for the overall construct of vocational competence
prove to be very high.
Four questions are presented to the pre-test participants for the evaluation of the test
tasks. There is also the opportunity for additional comments.
How do you assess . . .
1. the comprehensibility of the test tasks (scale: 0 to 10),
2. the difficulty of the task (scale: 0 to 10),
3. the practical relevance of the task (scale: 0 to 10),
4. How well have you solved the task? (scale: 0 to 10).
Comprehensibility
When assessing the comprehensibility of test tasks, it must be noted that the
comprehensibility of a professional text also depends on the professional under-
standing and competence of the test participants. When evaluating the pre-test, the
project group must therefore assess whether the linguistic formulation or the com-
petence of the participants determines the degree of comprehensibility.
Degree of Difficulty
The level of difficulty of the test tasks is a criterion of secondary importance in an
open test format, as open test tasks allow for the entire range from weak to very
elaborate task solutions. If the open test tasks are based on authentic descriptions of
situations characteristic of the test population or the respective occupational field
of action, it can be assumed that the test tasks also have an appropriate level of
difficulty. It can be adjusted via the degree of complexity of the situation
descriptions.
Example Assessment of comprehensibility and own competence (How well have
I solved the task?) (Fig. 5.14).
Practical Relevance
When assessing the practical relevance of the test tasks, it must be noted that the
test participants should have relevant practical experience. If, for example, both
trainees with relevant practical experience and pupils/students of vocational school
programmes participate in a comparison project, a test group with practical
experience should be selected for the pre-test.
Fig. 5.14 Example: Student assessment of pre-test task 2: Training, guidance and counselling of
patients and relatives (COMET project Care professions/Switzerland)
Assessment of the degree of difficulty and the practical relevance of the test drafts (Fig. 5.15)
Fig. 5.15 Example: Assessment of students’ pre-test tasks, test 2: Training, guidance and counsel-
ling of patients and relatives (COMET project Care professions/Switzerland)
Only the combination of a subjective evaluation of the test tasks by the test
participants and the assessment of their own competence as well as the objective
test results provide a sufficient basis for the selection of suitable test tasks and, if
necessary, their revision.
The following shows how the appropriate test tasks are selected on the basis of
the pre-test results and according to which criteria they are finally corrected, if
necessary.
When evaluating the pre-test results, particular attention must be paid to any
contradictions between the subjective assessment of the trainees (e.g. with regard to
their own competence) and the objective test results (Figs. 5.16 and 5.17).
For example, the results for ‘shipping clerks’ show that they consistently assess
the degree of difficulty of the tasks as very low (Fig. 5.16). Their objective test
results give a clearly different picture: the competence profile is highly one-sided,
and the overall score is rather low. A completely different picture results from the
pre-test of the carpenters (Fig. 5.17). They also rate the level of difficulty of their test
tasks as low. This corresponds to the high overall score that the pre-test participants
achieve, as well as to a considerably more homogeneous competence profile. In this
case, it is necessary to increase the complexity of the situation descriptions. This
also significantly increases the level of testing requirements for carpenters.
For a summary of the results and the proposal for the revision of the carpenters’
test tasks, see Fig. 5.18.
The difficulty of the test task can be increased by including further and higher
requirements in the situation description. It should always be borne in mind that this
Fig. 5.16 Evaluation results pre-test 2013, profession: Shipping and logistics services
Fig. 5.18 Summary of the results and proposal of the scientific support for the selection and
revision of the test tasks, profession: carpenter
must be an authentic requirements situation for the test group (test population). This
is the only way to ensure the validity of the content of the test tasks. This can most
likely be achieved by teachers and trainers who already have rating experience
(e.g. by participating in the pre-test).
It is absolutely necessary to use the competency model as a basis.
In recent decades, especially since the establishment of the PISA project, empirical
educational research has developed and internationally established methods of
competence diagnostics, especially in mathematics and the natural sciences, which
now have high quality standards (measured by the test quality criteria). The methods
of competence diagnostics for vocational education and training must be measured
against this benchmark. The special features of vocational education and training
explained in Chaps. 2 and 3 require an application and interpretation of the
established quality criteria that takes these special features into account. If this
differentiation is ignored and the established test methods of the PISA and TIMSS
projects are transferred to vocational training, there is a risk that precise
measurement results can be presented which do not, however, match the object of
measurement: professional competence. Robert STERNBERG and Elena
GRIGORENKO therefore also warn against misunderstandings in the
conceptualisation and implementation of ‘Studies of expert performance’: ‘Expertise
theorists have argued about what it is that makes someone an expert (. . . .). How
expertise is acquired, for example, through deliberate practice or skilled appren-
ticeship. They have failed to consider fully the role of expertise in the development
and maintenance of expertise, and indeed, few expertise theorists have used any tests
of abilities in their research’ (Sternberg & Grigorenko, 2003, VII).
This is a sobering balance, which shows how high the hurdle is that must be
overcome in developing competence diagnostics for vocational education and
training.
In his ‘Epistemology of Practice’, Donald SCHOEN unfolds the characteristics of
professional competence as social knowledge, an opposite pole to theoretical and
scientific knowledge. In contrast to abstract, context-free and purpose-free scientific
knowledge, professional competence means ‘a way of functioning in situations of
indeterminacy and value conflict, but the multiplicity of conflicting views poses a
predicament for the practitioner, who must choose among multiple approaches to
practice or devise his own way of combining them’ (Schoen, 1983, 17).
The peculiarities of vocational work and vocational learning developed in
Chaps. 2 and 3, as summarised once again here from a different perspective, make
special demands on the quality criteria of competence diagnostics in vocational
education and training.
When traditional test quality criteria are applied to the measurement of profes-
sional competence in the relevant methodological manuals of empirical social
research, the quality criteria for test procedures—in the tradition of experimental
scientific research—are generally listed in the following order: objectivity, reliability
and validity.
5.7 Test Quality Criteria 127
5.7.1 Objectivity
Objectivity of Implementation
Objectivity of Evaluation
The objectivity of the COMET test procedure is given by the rating procedure.
However, this only applies if it can be ensured that the raters do not evaluate the tests
of ‘their’ pupils/students. Evaluation objectivity requires not only the anonymisation
of the test documents, but also a sufficiently large sample of test participants.
Reliability (Credibleness)
The reliability of a test indicates the degree of accuracy with which profes-
sional competence is measured. This represents a special challenge for the
COMET test procedure, as professional competence can only be measured
with open, complex test tasks.
The consequence here is that as many different task solutions have to be evaluated
as persons take part in the test. A further complication is that the great heterogeneity
of the competence characteristics—especially in international comparative pro-
jects—places additional demands on the rating. Sufficiently high values of interrater
reliability are regularly achieved with the help of tried and tested rater trainings.
Validity (Significance)
A special feature of determining the validity of open, complex test tasks is that
these tasks can always be based on a different level of knowledge. If test subjects of
different formal qualification levels (e.g. skilled worker and technician levels)
participate in a test, it is necessary to name the primary test group for which the
test tasks were developed.
A special case applies if a comparative study involves test participants from both
dual vocational training and school-based vocational and technical programmes.
While, in dual vocational training courses, vocational competence is to be attained
at the end of training, school-based vocational training courses are always followed
by a phase of familiarisation with the profession. If school-based vocational training
providers have an interest in knowing to what extent their pupils/students attain
employability, participation in comparative projects is justified.
As the validity of a test task’s content is always assessed in relation to authentic
situations in the respective profession (professional validity) and not in relation to a
curriculum, the various forms of vocational training courses can participate in the
COMET projects if the representatives of the training courses want to find out to
what degree it is possible to convey vocational competence to the pupils/students in
a school-based training course.
For example, the results of a test for apprentices in the profession of industrial
mechanic, in which a test group of students in a dual Master’s programme for
mechanical engineers also participated, show that the students rated these test
tasks as valid in content. They justified this with the fact that, as future managers,
they were also responsible for the quality control of such tasks. The head of the study
pointed out that the students all had the professional qualifications of a master
craftsman, technician or a comparable profession. The aim of this course of study
would be to convey a holistic professional competence profile at the level of senior
executives.
A continuing education programme with the goal of developing management and junior
executives must ultimately enable them to consider the respective overall system in every
solution development (Heeg, 2015, 123; Fig. 5.19).
In order to conduct a COMET test in which this course would be the primary test
group, it would be important to adapt the competence and measurement model to the
qualification profile of managers (ibid., 123 f.).
Professional work tasks are not ‘given’ values from which test tasks can be
derived, but they are regarded as the reference points for the development of test
tasks. However, this plausible assumption proves to be a challenge for the test
developers. Professional work tasks are the result of different company organisation
and organisational development traditions. If one wants to grasp the specific quality
of a professional task and the competence incorporated in it, then this presupposes
regarding professional work processes as an expression of work structuring and
work organisation. Vocational qualifications and competences therefore result not
(only) from objectively given qualification requirements, but from work structuring
processes. This also includes the design of human–machine interaction, for example
in network- and computer-controlled work systems.
Criterion Validity
Construct Validity
¹ From the protocol of the project coordinators of 2.12.2010 (COMET Vol. III, 233). The project
coordinators have long-term experience as examiners in carrying out examinations according to
BBiG (skilled worker, journeyman and master craftsman examinations).
X′ = X_R − (X − X_R) / (m − 1) ²

The difficulty index P results from:

P = (N_R / N) · 100

N_R stands for the number of participants who solved the task correctly and N for
the total number of participants.
The selectivity index T results from the relationship:

T = ((R_o − R_u) / N) · 100
² The achieved number of points X_R (raw score) is reduced by the difference between the total
number of points X and X_R, divided by the number of answer options m reduced by 1.
5.8 Difficulty Level: A Problematic Quality Criterion for Test Tasks Intended. . . 133
R_o stands for the number of examination participants from the upper half who
have correctly completed a task, R_u for the number of examination participants in
the lower half who also solved this task correctly, and N is the total number of
examination participants.
The upper and lower group is formed by sorting the participants of the examina-
tion (overall examination results) according to the increasing number of points and
dividing them into an upper and lower half of the same size. The level of difficulty
and the selectivity index of the MC exam tasks are directly related, as shown in
Fig. 5.20.
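The difficulty index P, the selectivity index T and the guessing correction described here can be sketched as follows. This is a reconstruction from the prose definitions; the function names are ours, and an even number of participants is assumed for the split into halves:

```python
# Difficulty index P, selectivity index T and guessing correction
# (reconstruction; assumes an even number of participants).

def difficulty_index(solved):
    """P: percentage of participants who solved the task (0..100)."""
    return 100 * sum(solved) / len(solved)

def selectivity_index(total_scores, solved):
    """T = ((R_o - R_u) / N) * 100 after splitting by overall result."""
    order = sorted(range(len(solved)), key=lambda i: total_scores[i])
    half = len(solved) // 2
    r_u = sum(solved[i] for i in order[:half])   # correct in lower half
    r_o = sum(solved[i] for i in order[half:])   # correct in upper half
    return 100 * (r_o - r_u) / len(solved)

def corrected_score(raw, maximum, m):
    """Guessing correction X' = X_R - (X - X_R) / (m - 1)."""
    return raw - (maximum - raw) / (m - 1)

# A task solved by the entire upper half and by no one in the lower
# half has medium difficulty (P = 50) and maximum selectivity:
scores = list(range(10))
solved = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(difficulty_index(solved), selectivity_index(scores, solved))  # 50.0 50.0
```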
Maximum selectivity is achieved at a difficulty index of 50 (medium difficulty).
On the other hand, the selectivity index T = 0 for test tasks whose difficulty index
is P = 0 or 100, i.e. when the task is solved either by all examination participants or
by none. Since such tasks are not suitable for distinguishing between ‘good’, ‘less
good’ and ‘bad’ examination participants, i.e. they are not valid in the sense of
discriminatory validity, they are considered unsuitable examination tasks according
to this examination concept.
In addition to the ideal line ‘1’, curves ‘2’ and ‘3’ show a course that can be
achieved empirically. This is because, in the practical application of multiple-choice
tasks, there are always candidates in the lower group who are able to solve individual
difficult tasks and, conversely, members of the upper group who are occasionally
unable to solve even easy tasks.
The standard work on ‘Forschungsmethoden und Evaluation in den Sozial- und
Humanwissenschaften’ [Research methods and evaluation in social and human
sciences] by Bortz and Döring (2003) states: ‘For very easy and very difficult
items one will have to accept [...] losses in selectivity. Items with medium difficulties
now have the highest selectivity’ (ibid., 219).
Their conclusion is: ‘In principle, the greatest possible selectivity is desirable’
(ibid., 219). And these high selectivity values are achieved if the test tasks are
constructed in such a way that they ‘ideally’ lie at a medium degree of difficulty
(P = 50) or have a degree of difficulty between 30 and 70, or also between 20 and
80 (ibid., 218).
For examination tasks that are more difficult or easier, the selectivity index would
be too low to distinguish between ‘good’ and ‘weak’ examination participants.
SCHELTEN therefore comes to the conclusion that test tasks that fall outside the
framework thus defined ‘must be completely revised or replaced by new ones’
(Schelten, 1997, 135). It is therefore not important in this form of standardised test
questions to check whether a participant has a specific professional ability—in this
case, it would depend on the validity of the test question—but to construct test
questions in such a way that the given bandwidth of the degree of difficulty and a
correspondingly high selectivity value are achieved. These values can be achieved,
for example, by adjusting the distractors (the wrong response specifications) for
multiple-choice tasks. This principle of test construction applies to classical test
theory as well as to probabilistic test methods.
following display of the artificial horizon, please indicate the flight status of your
aircraft!’. Correct answer: ‘Descending in a left turn’ (Rademacker, 1975).
All participants in pilot training regularly solved this task correctly. This is not
surprising at all, as the artificial horizon reading is trained on a large number of test
flights as well as in the aircraft simulator.
The instructors (experienced pilots) were (always) very satisfied with this test
result. All student pilots had demonstrated an essential aspect of professional
competence (as pilots).
The psychometric evaluation of the established test procedure came to the
conclusion that this task should be removed from the test or reformulated, as in its
present form it would not meet the quality criteria of the relevant test theory: the
degree of difficulty and the selectivity index would lie outside the permissible limits.
The task was changed in such a way that a higher degree of difficulty and therefore
also a sufficiently high selectivity value were achieved. The reworded task was:
‘Please draw the position of the artificial horizon in the illustration (an empty circle
symbolising the artificial horizon), which indicates when you are flying a left turn on
your plane while ascending’.
A sufficiently large proportion of the prospective pilots now solved the task
incorrectly, although they had all demonstrated the safe and error-free handling of
the artificial horizon during their ‘training flights’ and in the flight simulator.
This example shows that standardised tests are unsuitable for testing professional
competence, as the level of difficulty of the test tasks does not result from the
complexity of the task to be tested, but from the manipulation (e.g. by skilful
formulations) of the wrong answer options. When checking occupational skills,
especially those that are safety-relevant, the use of standardised test tasks is not
only unsuitable, but also not permissible, since the validity of the contents of the test
or examination tasks is not given. For example, it is essential that qualified electricians have reliably mastered the VDE safety regulations for the installation of electrical systems. An examination practice that does not verify this involves incalculable
risks, as a successful examination also entails the authorisation to install electrical
systems.
The examination of professional competence therefore necessarily requires valid
forms of testing and examination in terms of content (see Rademacker, 1975).
If it is not about individual items but about tests with a large item pool, for
example in multiple-choice tests, probabilistic modelling with the help of the Item
Response Theory allows precise statements to be made about the selectivity of entire
tests. At the Chamber of Industry and Commerce’s final examinations, for example,
there was a lack of reliability, particularly in the lower part of the results, which is
however the decisive factor for passing or failing the final examination (Klotz &
Winther, 2012) (Fig. 5.21).
The lack of reliability in the lower range of personal competence is due to the use
of only a few very difficult or very easy items. A higher selectivity for the respective
degrees of difficulty could be achieved by using a correspondingly higher number of
items in these areas. It is also possible to select items from a sufficiently large item
pool individually for each test person, whose level of difficulty is adapted to the
personal competence based on the results of the previous items (‘adaptive testing’).
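The adaptive-testing idea sketched above can be illustrated under the one-parameter (Rasch) model, where an item is most informative when its difficulty matches the current ability estimate. The following sketch uses a hypothetical item pool and a fixed ability value; it is an illustration of the principle, not of any concrete examination system.

```python
# Rasch (1PL) response probability and a greedy adaptive item selection:
# the next item is the unused one whose difficulty is closest to the
# current ability estimate. Pool and ability value are hypothetical.
import math

def p_correct(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, used):
    """Index of the most informative unused item for ability theta."""
    candidates = [i for i in range(len(difficulties)) if i not in used]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))

pool = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]   # item difficulties in logits
theta = 0.3                                      # current ability estimate
used = set()
for step in range(3):
    i = next_item(theta, pool, used)
    used.add(i)
    print(f"step {step + 1}: item {i} (b = {pool[i]:+.1f}), "
          f"P(correct) = {p_correct(theta, pool[i]):.2f}")
```

In a real adaptive test, theta would be re-estimated after every response, so the item selection adapts to the test person rather than staying fixed as here.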
136 5 Developing Open Test Tasks
Fig. 5.21 Capability-specific reliability sum for all test items (Klotz & Winther, 2012, 9)
While this requires the use of a large number of individual items, such items
cannot depict work process knowledge and context understanding. This splits the
solution of complex work tasks into the knowledge of individual activity steps. Not
even specialist knowledge can be validly tested in terms of content. Above all, employers' low confidence in the professional relevance of this type of examination (Weiß, 2011), together with the trend in modern examination practice towards open, exemplary tasks in the form of company assignments, are clear arguments against the use of multiple-choice examinations in vocational education and training.
Conclusion The use of standardised test tasks leads to problems with selectivity. If the degree of difficulty of individual tasks is adjusted accordingly, the task thus optimised no longer captures essential contents of professional competence. The optimisation of entire test batteries permits good selectivity values, whether individualised or differentiated by difficulty range. However, this requires dividing the tasks into a large number of individual items, which in turn yields a survey of specialist knowledge, but not a measurement of professional competence. Here again, a fundamentally problematic test procedure is not improved by its optimisation.
Criteria-oriented test tasks must first of all be valid in terms of content. The criterion validity of a test is measured by whether the test result correctly predicts subsequent behaviour (see above). If, for example, the ability to solve a professional task in a conceptual-planning manner is tested with open, valid test questions, it is assumed that the test person can also solve the task practically, in the working world or within the framework of a practical examination. Test items that meet this criterion are referred to as criteria-oriented test items. In this respect, criterion validity concretises the overriding criterion of content validity.
5.8 Difficulty Level: A Problematic Quality Criterion for Test Tasks Intended. . . 137
The test-theoretical literature points out that content validity cannot be determined numerically and that one is therefore ultimately dependent on the technical and didactic competence of the developers of the test tasks to assess the content validity of a test task.
In the COMET projects carried out and planned so far, vocational competence is
measured towards the end of vocational training—with reference to the occupational
competence to be imparted.
The content dimension of the competency model differentiates between profes-
sional fields of action for beginners, advanced beginners, advanced learners and
experts. This is the basis for the development of test tasks with which professional
development can be recorded (→ 5.1, 5.2).
As the COMET projects for different occupations are based on the same competence model and the same design of test tasks (see this chapter), it seems plausible that a comparison of test results across occupations should also be possible. If one compares the test results of the COMET model tests for the occupations electronics technician (industrial engineering, and energy and building technology), industrial mechanic and car mechatronics technician (Fig. 5.22), clear differences in the competence of the occupation-related test groups become apparent.
Fig. 5.22 Competence level of trainees in electronics, industrial engineering and car mechatronics
(COMET projects in Hesse)
Table 5.8 Evaluation of the difficulty of the test tasks for electronics technicians, industrial mechanics and car mechatronics technicians by vocational teachers in the fields of electrical engineering, metal technology and automotive engineering, on a scale from 1 to 10

Test tasks for            Electrical engineering   Metal technology   Automotive engineering   Σ     Variance
Electronics technicians   7.4                      6.1                5.0                      7.1   2.4
Industrial mechanics      6.0                      6.0                5.8                      6.1   1.3
Car mechatronics          5.0                      5.9                5.9                      5.4   1.7

(The three middle columns give the ratings of the teachers of the respective subject.)
On the basis of the procedure 'all teachers evaluate all tasks according to their degree of difficulty', a clear weighting of the test tasks by 'degree of difficulty' emerges. The test tasks for electronics technicians are rated as the most difficult (7.1). The test tasks for industrial mechanics receive a lower value (6.1), and the test tasks for car mechatronics technicians are classified as significantly less difficult (5.4). The relatively high variance is striking, as the ratings diverge considerably. This applies both to the evaluation of each individual test task and to the evaluation of the test tasks outside a teacher's own occupational field.
A comparison of the difficulty levels of the individual test tasks as assessed by the teachers with the objective test results, (a) on the basis of all test participants and (b) on the basis of comparable test groups, reveals further anomalies. The test results on the one hand and the teachers' assessments of the difficulty levels of the test tasks on the other are relatively far apart (Table 5.9).
Industrial mechanics achieve significantly higher test values than, for example, electronics technicians. This also applies if the test groups of all three occupations are compared on the basis of comparison groups (with comparable previous training). This finding prompted the vocational school teachers involved in these COMET projects to evaluate the difficulty of all test tasks across all three occupations (Table 5.8).
While the total point values (TS) for the individual test tasks in the respective
groups vary by a maximum of 5.5 points, i.e. are relatively close together, the range
of the teachers’ assessment of the level of difficulty is considerably wider.
At the same time, the teachers' assessments of the difficulty and the actual number of points achieved by the trainees (e.g. for T6) diverge.
The teachers involved attributed this, among other things, to the fact that the 'difficulty' was primarily assessed from the limited perspective of subject-systematic criteria. The concept of 'holistic task solving', however, allows the scope for design to be exploited and thus a higher number of points to be achieved, especially for complex tasks.
The inclusion of further occupations such as industrial clerks, medical assistants and carpenters in the comparisons of the difficulty of the test tasks results in further differentiations (Fig. 5.23).
The very different levels of competence initially confirm the thesis that a cross-
professional comparison must take into account what is to be compared. The
‘difficulty’ of a test or an examination results from the qualification requirements
in the professional fields of action as defined in the job descriptions (the job profiles).
These also represent the profession-specific qualification level. For example, at the end of their training, the qualification level of industrial clerks is rated as equally high or even higher than that of graduates of relevant bachelor's degree programmes,
Table 5.9 Total point values (TS) for all test tasks and the teachers' ratings of their difficulty

Professions               Electronics technicians | Industrial mechanics | Car mechatronics
Test tasks                T1   T2   T3   T4   T5   T6   T7   T8   T9   T10  T11  T12  T13  T14
TS                        23.1 22.1 26.6 27.1 35.5 40.3 36.3 41.8 34.5 35.8 38.7 32.8 33.2 37.7
Difficulty rating by teachers of the subject:
  Electrical engineering  6.5  7.1  7.6  8.5  5.8  6.4  6.8  5.4  5.4  6.8  4.4  6.2  4.3  3.9
  Metal technology        4.8  6.0  7.0  6.8  5.8  6.2  6.8  5.3  5.8  6.4  6.4  7.0  5.6  4.4
  Automotive engineering  6.0  7.6  7.4  6.6  6.2  7.4  7.2  5.4  8.2  5.7  5.1  6.6  5.1  3.9
Fig. 5.23 Competence level of apprentices in industrial mechanics, industrial clerks, medical assistants and carpenters, COMET projects in NRW
which are formally rated higher according to European standards. Training practice shows that occupations predominantly chosen by high-school graduates have a higher qualification level than occupations predominantly chosen by secondary school leavers (Fig. 5.24).
The prior schooling of the trainees in an occupation therefore also reflects its level of requirements (qualification level). This is captured by the common saying that industrial clerk is a typical occupation for high-school graduates. From this perspective, the training occupations can be distinguished according to their 'difficulty'. The wording 'degree of difficulty' is avoided here, as calculating a degree of difficulty for occupations would not do justice to the complexity of the occupational concept.
Howard Gardner has pointed out that each profession has its own quality: ‘Take a
journey through the world in spirit and look at all the roles and professions that have
been respected in different times and cultures. Think of hunters, fishermen, farmers,
shamans (...) sportsmen, artists (...) parents and scientists (...). If we want to grasp
the whole complex of human cognition, I think we must consider a far more
comprehensive arsenal of competencies than usual’ (Gardner, 1991, 9). With the
concept of multiple intelligence, Gardner tries to meet the variety of different
abilities, which also find their expression in the competence profiles of different
professions. The various profiles of intelligence and skills expressed in the
Fig. 5.24 Competence level of apprentices for industrial clerks and electronics technicians for
energy and building technology
a profession with a three-year training period. The reference point for the development of test tasks with which employability is examined is the competences defined in the job description.
In international comparisons such as the International World Skills (IWS), the
professional experts of the participating countries agree on the fields of action relevant
to professional practice and the criteria of professional competence for the respective
profession. These are the basis for the formulation of complex competitive tasks
(Hoey, 2009, 283 ff.). A similar procedure was developed for the international
comparative COMET projects. On this basis, the professional capacity can be checked
at the end of vocational training. Measuring competence during the course of vocational training (for beginners, advanced beginners, advanced learners and experts) is also possible in principle. However, difficulties always arise
when vocational training courses are included which differ in the content structure of
the curricula/training regulations. This is the case, for example, if the development of
competences during a training course structured according to learning fields is to be
compared with a training course structured according to a subject system.
This difference is irrelevant for the verification of employability at the end of
training. Competence diagnostics makes it possible to compare programmes with
different curricular structures if they pursue the goal of promoting the trainees/
students on their way to professional competence.
2. The ‘level of difficulty’ of the test items is of secondary importance for open test
items.
The decisive criterion for the quality of open test tasks is their professional
validity and therefore their authenticity and their representativeness for a profes-
sion’s fields of action.
The competence level of a test participant therefore does not depend on the level
of difficulty of the test tasks, but firstly on the ability to solve the open (complex) test
tasks completely and secondly on the professional justification of the solutions of a
test task. A distinction is made between the levels of action-guiding, action-declarative and action-reflecting work process knowledge (→ 3.5.4).
This difficulty component is also not a characteristic of the test item, but an ability
characteristic. The test task therefore always contains the request to provide detailed
and comprehensive reasons for the task’s solution.
This concept of difficulty is realistic because the test tasks can be solved not only
at different competence levels, but also at different levels of knowledge.
At the first level of knowledge, it is only important that the future specialists
(completely) solve or process the tasks assigned to them on the basis of the rules and
regulations to be observed. In companies with a flat organisational structure, in
which a high degree of responsibility and quality assurance is shifted to the directly
value-adding work processes, it can be assumed that the specialists can understand
and explain what they are doing.
Equally typical are situations in which, for example, journeymen from an SUC
company advise their customers on the modernisation of a bathroom or heating system
(at the level of action-reflecting knowledge) in such a way that they have the oppor-
tunity to make a well-founded choice between alternative modernisation variants.
Fig. 5.25 Competence distribution of nursing occupations Switzerland, second main test 2014
(On differentiating competence levels according to knowledge level → 8.2.2)
Students and lecturers had the opportunity to reflect on the weaknesses of their
training identified in the first main test—1 year earlier—and to introduce more forms
of learning according to the learning field concept.
The context analysis, the project results, the longitudinal study and the results of
the feedback discussions with this project group (the coordinators of the VET centres
involved in the test) were the basis for the interpretation of the test result.
1. The test tasks are based on an authentic, valid and representative description of
the situation (result of the pre-test).
2. The test tasks are classified as adequate and demanding by the lecturers/subject
teachers and the students.
3. The high level of competence that is achieved with the three-year dual technical
college course is an expression of the high quality of this course (Gäumann-Felix
& Hofer, 2015; Rauner, Piening, & Bachmann, 2015d).
This also means that the very high proportion of test participants (representative of the test population) who achieve a high or very high competence level cannot be interpreted as an indicator of a low level of difficulty of the test tasks. In a capability-based test concept, the test authors must agree on the formulation of the solution space, and the raters on the rating criteria for the rating items.
When formulating the solution spaces, it is important to define the space of
possible task solutions in relation to all relevant solution criteria. The authors of
the test tasks and the solution spaces are oriented to their picture of the primary test
population to be tested. If, for example, the authors (teachers) teach both trainees and
technical college students and if the primary test population is not explicitly defined,
a requirement level can subjectively arise that is at the level of the technical college
rather than that of the dual courses of education—or vice versa. Therefore, it is
necessary to accurately define the primary test population. Difficulties arise in
international comparative projects if, for the same professional activities (e.g. for
nursing professionals) in the participating countries, training is provided in upper-
secondary (dual and vocational schools), post-secondary (technical colleges) or
tertiary education programmes.
In these cases, inaccuracies can only be avoided by carrying out the rater training
on the basis of the solution examples and the reference values of the rater training for
the primary test group. Only then is there certainty that the raters apply the same
standards in interpreting the rating items in relation to the individual test items.
If this prerequisite is met, then different courses of education at different quali-
fication levels which qualify (are to qualify) for the same or a comparable profes-
sional activity can be compared with each other.
Cross-professional comparisons of competence levels as a yardstick for the
difficulty of the test tasks, on the other hand, are only possible to a very limited
extent.
The empirical data available on this subject show that assessment standards shaped by different vocational training traditions often lead to widely diverging rater assessments at the beginning of rater training. Rater training, however, makes it possible to reach a high level of agreement in the final evaluation of the task solutions.
The quality of a COMET test task is proven by its potential to measure the degree of completeness and homogeneity of professional competence.
To quantify more or less complete task solutions, the variation coefficient V is determined.
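A minimal sketch of this quantification, with hypothetical criterion scores: the coefficient of variation V relates the spread of the eight criterion scores of a competence profile to their mean, so a homogeneous profile yields a small V and an inhomogeneous one a large V.

```python
# Coefficient of variation V over the (here: eight) criterion scores of a
# competence profile: a small V marks a homogeneous profile, a large V an
# inhomogeneous one. The two profiles below are hypothetical.
from statistics import mean, pstdev

def variation_coefficient(scores):
    """V = standard deviation of the scores divided by their mean."""
    m = mean(scores)
    return pstdev(scores) / m if m else float("inf")

homogeneous = [6.2, 6.0, 5.8, 6.1, 5.9, 6.3, 6.0, 5.7]
inhomogeneous = [9.0, 8.5, 2.0, 1.5, 8.0, 2.5, 9.5, 1.0]
print(f"V (homogeneous profile)   = {variation_coefficient(homogeneous):.2f}")
print(f"V (inhomogeneous profile) = {variation_coefficient(inhomogeneous):.2f}")
```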
Fig. 5.26 Example of a homogeneous and inhomogeneous competence profile of two commercial
professions
5.8.4 Conclusion
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 147
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_6
148 6 Psychometric Evaluation of the Competence and Measurement Model for. . .
a priori. This statement can be supported by two main arguments. There is always
a—possibly very small—chance that a better measurement model can be found.
Moreover, it is not possible to define clear criteria for the fit of measurement models.
These criteria are also subject to normative assumptions which, however, cannot be
discussed here (cf. Bozdogan, 1987). Therefore, if the theoretical assumptions
require it, at least an active search should be made for a model that can mathemat-
ically represent the ordered higher-order interactions.
If higher-order interactions can actually be assumed, then this would mean that simpler models feign a comparability, for example of the persons investigated, that is not actually given. Higher-order interactions result in qualitatively different capability profiles that are no longer directly comparable (Erdwien & Martens, 2009). In
summary, it can be said that the interaction of the eight criteria of the complete task
solution determines the search direction for the measurement model to be identified:
from simple to complex.
A further argument that specifies this search direction results from the rough
allocation of the eight criteria to three levels of professional competence (Rauner,
Grollmann, & Martens, 2007). This allocation gives rise to two possible develop-
ment paths for professional expertise: a gradual development in which the three
levels develop in succession and a simultaneous development with a more or less
continuous increase in all three areas. If both development paths are to be allowed,
this also has implications for the selection of a suitable measurement model. In
particular, the idea of the successive training of competence levels gives rise to
different qualitative competence profiles, which in turn can only be described by
allowing higher-order interactions.
It must be noted at this point that there can be no clear identification of a
measurement model. Ultimately, one has to decide on a suitable model with a
transparent presentation of the corresponding selection criteria. It is often necessary
to balance the contradictory forms of selection criteria against each other.
In the following, we will therefore examine which statistical approaches and measurement models are statistically suitable to meet the requirements discussed here.
Factor analyses and many related procedures are usually based on an analysis of the
covariance matrix, i.e. on a correlative relationship of two variables each. Histori-
cally, the continuing popularity of these methods is mainly due to the fact that they
could be calculated easily and without the help of computers. However, this mathematical simplicity comes at the price of a severely limited informative value of the resulting models. The data structure based on the covariance matrix, the bivariate
network of relationships, excludes higher-order interactions between the variables a
priori. The corresponding models are therefore only conditionally suitable for the
6.1 What Makes it So Difficult to Measure Professional Competence? 149
invested parameters is of course a normative setting. A possible way out is the use
of bootstrapping (Efron & Tibshirani, 1994; von Davier, 1997; von Davier &
Rost, 1996), in which a test distribution is generated using artificial data sets
assuming the validity of the model. Thus, the fit of the empirical data set can be
roughly estimated in comparison with the artificial data sets under model validity.
Ultimately, however, one should not rely on a single criterion for the quality of the model fit. Careful consideration of several criteria with reference to the theoretical foundations certainly makes sense. For example, the mean class membership probability is a good measure of the stability of a solution, even if this measure can
only be used to compare different solutions with each other to a limited extent.
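The bootstrapping idea can be sketched as follows; the 'model' here is deliberately reduced to a single predicted solution rate, and all figures are illustrative rather than taken from COMET data.

```python
# Parametric bootstrap for model fit, reduced to a toy 'model' that
# predicts a single solution rate. Artificial data sets are generated
# under the model; the empirical fit statistic is then located within
# the distribution of the simulated statistics.
import random

def simulated_statistic(p_model, n_obs, rng):
    """Fit statistic (squared deviation of the observed solution rate
    from the model rate) for one artificial data set."""
    hits = sum(rng.random() < p_model for _ in range(n_obs))
    return (hits / n_obs - p_model) ** 2

def bootstrap_p_value(empirical_stat, p_model, n_obs, n_boot=2000, seed=1):
    """Share of artificial data sets that fit worse than the empirical one."""
    rng = random.Random(seed)
    sims = [simulated_statistic(p_model, n_obs, rng) for _ in range(n_boot)]
    return sum(s >= empirical_stat for s in sims) / n_boot

# the model predicts a 60 % solution rate; 57 % was 'observed' in 200 cases
empirical = (0.57 - 0.60) ** 2
p = bootstrap_p_value(empirical, 0.60, 200)
print(f"bootstrap p = {p:.3f}")
```

A large p means the empirical deviation is unremarkable among data sets generated under model validity, i.e. there is no evidence against the model from this statistic.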
In order to put the proof of interrater reliability on a solid basis, a sample of test person solutions was drawn beforehand from the main test and submitted to all 18 raters for evaluation. For each of the four test tasks, two test person solutions were used for the rating. Each rater from the team was therefore confronted with eight solution variants to evaluate and assess. The advance rating thus produced a database of 144 individual ratings (18 raters × 8 solutions).
6.2 Ensuring the Interrater Reliability of the COMET Test Procedure 151
As a result of the response format for the evaluation of the test tasks and the
number of raters, the Finn coefficient was chosen as a quality measure for the
evaluation of the interrater reliability. Strictly speaking, this is a measure that
requires an interval scale level and for its use, the data must meet the requirements
for calculating variance analyses. Although the available data are ordinally scaled
rating data, a rating scale can be treated as an interval scale, provided that the characteristic values are equidistant, i.e. the numerical codes of the scale points are equally spaced. In addition, Bortz and Döring (2002, 180 f.)
point out that the mathematical requirements for the analysis of variance say nothing
about the scale properties of the data. This means that parametric procedures can be
used even if the data are not exactly interval-scaled, provided that the other pre-
requisites for carrying out the procedure are met.
The criteria—(a) independence of observations, (b) normal distribution of obser-
vations and (c) homogeneity of variances—are the main prerequisites for the feasi-
bility of a variance analysis. The violation of the criterion of independence leads to
serious consequences, whereas the analysis of variance is robust against a violation
of the normal distribution or variance homogeneity criterion.
In the present case, the observations are independent: vocational schools are only
the functional units in which all students can be tested together. However, practical
vocational skills are primarily developed in training companies due to the dual
organisation of training, so that independence is ensured due to the allocation of pupils
to different training companies. In addition, the vocational students of the various
vocational school classes solve the tasks assigned to them individually, whereby all
four test tasks are distributed equally among the students within a respective vocational
school class, which prevents attempts at cooperative work or 'copying'. The raters, too, evaluate the task solutions strictly independently of each other; they do not communicate with each other at any time during the assessment process.
An explorative data analysis also showed that 33 of the 40 items meet the criterion of variance homogeneity. The seven items lacking variance homogeneity were nevertheless included in the main analysis; once the overall data set was available, however, they were subjected to another critical examination of their variance homogeneity before further, constructive analyses were carried out. The interrater reliabilities were calculated both including and excluding these seven items.
To test empirical data for the presence of a normal distribution, various graphical
(e.g. histograms, P-P plots) as well as statistical (e.g. Shapiro–Wilk test,
Kolmogorov–Smirnov test) methods can be used. In this study, so-called P-P plots
were generated. These represent the expected cumulative frequencies as a function
of the actual cumulative frequencies. The normal distribution probability was cal-
culated using the Blom method. Figure 6.1 gives an example of the PP plots
generated in this way.
While the P-P plots indicate the existence of normally distributed data, this could
not be proven by statistical methods. Due to the robustness of a variance analysis
against the violation of the criterion of the normal distribution, the use of the Finn
coefficient for the interrater reliability calculation was, however, not discarded.
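The construction of such P-P plot coordinates can be sketched as follows, using Blom's proportion estimator for the empirical cumulative probabilities; the rating values are hypothetical.

```python
# Coordinates of a P-P plot with Blom's proportion estimator: the
# empirical cumulative proportion (i - 3/8) / (n + 1/4) of each sorted
# value is paired with the cumulative probability under a normal
# distribution fitted to the data. The rating values are hypothetical.
from math import erf, sqrt
from statistics import mean, pstdev

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def pp_plot_points(data):
    xs = sorted(data)
    m, s, n = mean(xs), pstdev(xs), len(xs)
    points = []
    for i, x in enumerate(xs, start=1):
        empirical = (i - 0.375) / (n + 0.25)   # Blom's formula
        expected = normal_cdf((x - m) / s)     # fitted normal CDF
        points.append((empirical, expected))
    return points

ratings = [1, 2, 2, 3, 3, 3, 3, 4, 4, 5]       # one item, ten ratings
for emp, exp in pp_plot_points(ratings):
    print(f"empirical {emp:.3f}  vs  expected {exp:.3f}")
```

Points close to the diagonal speak for normality; systematic deviations show where the data depart from the fitted normal distribution.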
With regard to the use of the Finn coefficient, it should be noted that it is generally considered a lenient measure. It carries the danger that the reliability it attests can
Fig. 6.1 P-P-Plot for the evaluation of the item ‘Is the solution functional?’
overstate the degree of actual agreement between the raters. Therefore, the
calculation of the intraclass correlation (ICC) as a stricter valuation method had to be
considered to prove interrater reliability. However, here again it proves to be a
problem that only a small variance of the item mean values leads to the fact that
‘no or no significant reliability can be measured with the ICC for a measuring
instrument’ (Wirtz & Caspar, 2002, 161). Although a lower ICC value is considered
acceptable in this case, it is difficult to determine a clear anchor point of the boundary
between satisfactory and inadequate interrater reliability. Compared to the ICC, the
Finn coefficient is ‘apparently independent of the variance of item averages’
(Asendorpf & Wallbott, 1979, 245). The variances of the item averages in the
available data prove to be small to very small. They range between 0.00 and 1.02.
The mean dispersion is 0.37. Therefore, the Finn coefficient is useful here as a
reliability measure.
Several models are available for calculating reliability with the Finn coefficient.
For the purpose of reliability determination,
1. each subject is assessed on the basis of 40 items
2. each rater of the entire rater team makes these assessments; i.e., they are not
randomly selected from a larger rater population,
a two-factor model of reliability measurement (rater fixed) is selected (cf. Shrout &
Fleiss, 1979). Furthermore, the decision must be made whether an unadjusted or an
adjusted reliability should be used as a measure. The unadjusted reliability reflects
the degree of agreement between the raters, while the adjusted reliability does not
take average differences between the raters into account as a source of error and thus
does not take the personal frame of reference of the raters into account. According to
Wirtz and Caspar (2002) (related to the ICC) and Shrout and Fleiss (1979), a
decision criterion for the use of the unadjusted or adjusted reliability calculation
lies in the properties of the rater test. Since all raters are to assess all subjects and the
reliability statement is to apply exclusively to the raters belonging to the sample, an
adjusted value can be used as a reliability measure in this case.
A reliable assessment can be assumed if the differences between the subjects
(here the pupils) are relatively large and the variance between the observers with
respect to the subjects is relatively small. The Finn coefficient can take values between 0 and 1.0. A value of 0.0 indicates no agreement between the rater assessments, while a value of 1.0 arises when the raters have both equal means and equal variances. The closer the value is to 1.0, the higher the reliability of the assessments. Finn values from 0.5 to 0.7 can be described as satisfactory and values above 0.7 as good. Because of the coefficient's low stringency, only Finn values indicating high interrater reliability are accepted in the present study; i.e. only values of at least 0.7 are considered sufficient.
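One common reading of the Finn coefficient compares the observed rater variance per subject with the variance expected under purely random use of a k-point scale, (k^2 - 1)/12. The sketch below uses a hypothetical ratings matrix; the study's own calculation may differ in detail (e.g. the two-factor, rater-fixed model discussed in this section).

```python
# Finn coefficient sketch: 1 minus the ratio of the observed rater
# variance per subject to the variance expected under purely random use
# of a k-point scale, which is (k**2 - 1) / 12. The ratings matrix
# (rows = subjects, columns = raters) is hypothetical.
from statistics import mean, variance

def finn_coefficient(ratings, k):
    ms_within = mean(variance(row) for row in ratings)   # rater disagreement
    expected = (k ** 2 - 1) / 12.0                       # chance variance
    return 1.0 - ms_within / expected

ratings = [        # five subjects rated by four raters on a 1..5 scale
    [4, 4, 5, 4],
    [2, 2, 2, 3],
    [5, 5, 5, 5],
    [3, 2, 3, 3],
    [1, 1, 2, 1],
]
print(f"Finn coefficient = {finn_coefficient(ratings, k=5):.2f}")
```

Note that the coefficient stays high even when the subject means barely vary, which is exactly the leniency relative to the ICC discussed above.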
The following table shows the results of the reliability calculations for the eight
test person solutions that were given to the 18 raters for evaluation following the
rater training, whereby these are displayed both including and excluding the seven
non-variance homogeneous items (Table 6.1).
This shows that all Finn coefficients lie in the range of high reliability; i.e. the target criterion of 0.7 defined for this study is reached or exceeded. Even if the seven items lacking variance homogeneity are excluded from the calculations, the results remain largely stable. All in all, the interrater reliabilities can be described as satisfactory.
Thomas Martens
The psychometric review of a competence model aims to examine whether and how
it has been possible to translate the theoretically and normatively based competence
model into a measurement model. Thomas Martens and Jürgen Rost point to the
complexity of the psychometric evaluation of a competence and measurement model
for vocational education and training and to the state of psychometric evaluation
practice, which is particularly challenged here. They describe the ‘approximation
process’ between the theoretical creation of models and their gradual verification
with various measurement models (Martens & Rost, 2009, 96 ff; Rost, 2004b):
• Competence models as psychometric models of one-dimensional competence
levels.
• Competence models as psychometric models of cognitive sub-competences.
• Competence models as psychometric models of level-related processing patterns.
The central object of the first COMET model experiment was the psychometric
examination of the competence model with the aim of developing it into a measure-
ment model (ibid., 95). In COMET Vol. III, Martens and others present the
method of an evaluation procedure with which the construct validity of the COMET
test procedure was checked (Martens et al., 2011, Ch. 4.3, 109–126) (Fig. 6.2). The
competence and measurement model based on educational theory and normative
pedagogy has all the required psychometric quality characteristics. In the discussion
about the external validity of the test procedure, it is also a question of whether the
modelling of professional competence has been successful, and whether the profes-
sional skills and knowledge of experts can be adequately represented in their
development from beginner to expert with the competence model. For this purpose,
a comprehensive justification framework was developed in the COMET project,
which has proven highly compatible in the international comparative studies
conducted to date.
To answer the question of how professional competences can be measured, some
fundamental questions of test theory and measurement theory must first be
discussed.
6.3 Latent Class Analysis of the COMET Competence and Measurement Model 155
Fig. 6.2 Source references for the quality of the COMET test procedure (COMET Volumes I to IV):
• Objectivity: HAASLER, HEINEMANN et al. (Ch. 2); ERDWIEN (Ch. 5); HAASLER et al. (Ch. 4); MARTENS et al. (Ch. 3); RAUNER, MAURER (Ch. 4.1); MAURER et al. (Ch. 5); PIENING, MAURER (Ch. 4)
• Reliability: MARTENS, ROST (Ch. 3.5); ERDWIEN, MARTENS (Ch. 3.2); HAASLER, ERDWIEN (Ch. 5.1, 5.2); MARTENS et al. (Ch. 4.2)
• Validity: RAUNER et al. (Ch. 1, 2); ERDWIEN, MARTENS (Ch. 3.2); MARTENS, ROST (Ch. 3.5); RAUNER, MAURER (Ch. 3.1); HEINEMANN et al. (Ch. 3); MARTENS et al. (Ch. 4.3)
Rost (2004a) suggests that the subject area of test theory is the inference from test
behaviour to the personal characteristic (Fig. 6.3).
In many practical applications of test theory—especially in psychological test
practice—it is assumed that a test behaviour can be translated directly into an
empirical relative; for example, ‘test task solved’ and ‘test task not solved’ is
converted into ‘1’ and ‘0’ and is then subsequently used as an estimator of a person’s
ability, for example in the dichotomous Rasch model (Rasch, 1960; see Rost, 1999).
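The dichotomous Rasch model mentioned here can be written compactly. The following is a minimal sketch of the standard logistic form, not specific to COMET; the function name is illustrative.

```python
import math

def rasch_probability(theta, beta):
    """Dichotomous Rasch model: probability that a person with latent
    ability theta solves an item of difficulty beta."""
    return 1.0 / (1.0 + math.exp(beta - theta))

# When ability equals difficulty, the solving probability is exactly 0.5.
assert rasch_probability(0.0, 0.0) == 0.5
```

The probability rises monotonically with ability, which is what allows the 0/1 outcomes to serve as estimators of the person parameter.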
However, even this simple relation raises a whole series of questions. Such a
structure implies, for example, the property intended by the test designer that there
can only be two possible outcomes of a test action, i.e. ‘solved’ or ‘not solved’. This
logic, for example, does not map the intermediate steps towards the result of the
action. How the result of the step came about is simply ignored, for example whether
there could be alternative steps leading to an equivalent result.
From the perspective of VET and VET research, such a dichotomisation of the test
behaviour into ‘correct’ and ‘incorrect’ represents a strong restriction of the content
validity. The division into individual and independent steps, which can then be
regarded as ‘solved’ or ‘not solved’ in the sense of a single test action, also seems
hardly possible in many contexts of vocational education and training.
Fig. 6.3 Connection between personal characteristic and test behaviour according to Rost (2004a)
This introductory consideration should have made it clear that the inference from test
behaviour to characteristics of professional competence is by no means trivial.
Basically, this is a fitting problem between empiricism and theory (see also
COMET Volume I: Martens & Rost, 2009). Steyer and Eid (2003, 285) explain:
‘Measurement models therefore have the task of explaining the logical structure of
theoretical quantities and their combination with empirical quantities.’
In particular, therefore, this is a matter of
(a) Theoretical assumptions relating to a structure of professional competence;
(b) Mathematical relationships that can be described by a measurement model.
Theoretical model (a) and measurement model (b) should be linked in such a way
that one structure can be transferred to the other. According to Suppes and Zinnes
(1963), this should be as isomorphic and unique as possible.
A number of desirable properties can be added to both the theoretical model and
the measurement model.
The desirable properties of a test model cannot be discussed in detail here (see, for
example, von Davier & Carstensen, 2007). The characteristic that is particularly
controversial from the perspective of vocational education and training is the idea of
items that are independent of one another and measure the same personal characteristic.
Conceptually, this is described in classical test theory (CTT) as ‘essential tau
equivalence’ and in probabilistic test theory (PTT) as ‘local stochastic independence’
(cf. Rost, 1999). This property of a test or test model makes it possible to exchange
test questions or items, or to shorten tests, at the cost of some reliability. This test
property is also the basis for the development of adaptive tests, in which test items
are selected and presented according to the person’s estimated ability and the
progress of the test.
Vocational education and training focuses on the competence to act (cf. COMET
Vol. I, 28). This implies above all competencies for actions in work processes.
Optimal processing usually requires various consecutive steps, which then result
in a work result. This also means that there are usually several different strands of
action that can lead to an equivalent result. Such steps are no longer generally
independent of each other, but build on each other systematically. The artificial
isolation of these steps will therefore generally severely restrict the validity of the
content of the vocational competence model.
This therefore poses a dilemma: on the one hand, the test-pragmatic requirement for
a measurement model that contains independent test items and, on the other hand,
the content-theoretical VET requirement that the individual processing steps of a test
should build on one another.
Before this section outlines possible solutions to this dilemma, we would like to
point out the consequences if there is no convergence between the theoretical model
and measurement model in professional competence measurement.
If the demand for independent test items or test questions is maintained on the
side of the measurement model, this would mean that only a few professional
competences could be tested. It is then, of course, up to the respective domain
experts to ascertain whether this residual proportion of professional competences
to be measured is sufficient for an appropriate validity of content. In many cases, the
answer must certainly be ‘no’.
If, on the other hand, a non-measurable specificity of professional competence
were derived from theoretical content requirements, this would also have
far-reaching consequences. It would remain confined to expert assessments relating,
for example, to observations of work or the evaluation of work products. In
particular, the further formal processing of these assessments is then not assured.
For example, individual indicators of the expert assessment could not be meaning-
fully linked to an overall value. In particular, the test subjects could no longer be
compared with each other without a suitable linking rule for the indicators. Only the
ranking of the characteristics of individual indicators could be compared with each
other—and also only under the assumption that a reliable expert assessment is
available.
An insufficient formalisation of the competence measurement would also contra-
dict the objectivity of implementation and test fairness. There would be hardly any
possibility to regard the expert assessment thus obtained as independent of the
situational conditions. For example, both a work sample and a work product
would be inseparably linked to the respective practical working conditions. Such
confounding with the examination situation could be mitigated by standardising the test
situation. But even in this case, the test fairness can be violated if the conditions in
the training company and those in the test situation resemble each other to varying
degrees.
The aim is therefore to identify a suitable fit that can build a bridge between the
theoretical requirements of competency assessment and the desired requirements of
a measurement model.
Basically, two ways of approach can be distinguished:
1. In psychometric modelling, item dependencies can be taken into account by
modelling so-called testlets. Monseur, Baye, Lafontaine and Quittre (2011)
describe three ways to model such item dependencies: as partial credits, as
fixed effects or as random effects.
2. The other alternative would be to place the theoretical expert model on a better
empirical basis. This can be promoted in particular by the following measures:
(a) freeing product development as far as possible from situational influences,
(b) linking the rating criteria to a theoretical model of professional competence,
(c) ensuring interrater reliability through appropriate measures,
(d) mapping the theoretically derived rating criteria using a suitable psychometric
model.
The empirical procedure within the framework of the COMET project will be
outlined here as an example for the above-mentioned item 2. One focus of the
following presentation will be on the steps of the empirical approach, which are
more closely related to psychometric modelling.
Five evaluation steps were carried out in the COMET measurement procedure:
1. Determination of the interrater reliability of task solutions.
2. Sorting of task solutions.
3. Verification of the homogeneity of the competence criteria (scale analysis).
4. Identification of typical competence profiles.
5. Longitudinal analysis of competence development.
The first step was to check whether the interrater reliability of the individual items
is sufficient for further data processing.
In a second step, the data matrix was restructured. The assignment of the
individual task solutions to the test persons and to the measuring points was resolved
so that all task solutions could be analysed together. This procedure seems justified
inasmuch as it may be possible for two different tasks to be performed by one test
person at different competence levels.
In the third step, the five items that can be assigned to a competence criterion are
then checked for homogeneity in line with the Rasch model, i.e. whether these
ratings actually measure the same latent dimension.
In the fourth step, the person parameters determined with the Rasch model, and
which correspond to the competence criteria, are calculated in a joint analysis. The
aim is to identify typical person profiles that correspond to the assumptions of the
COMET measurement model.
In the fifth step, the data record was returned to its original order. This means that
the four task solutions of the first two measuring points are again assigned to a test
person in order to be able to analyse longitudinal developments.
The decisive evaluation step in the empirical approach of the COMET project is
the rating of the open solutions by specialists—usually vocational schoolteachers
and trainers. The open solutions are evaluated by means of rating questions; five of
these questions together form each of the eight criteria of the COMET model (→ 4).
Before the actual empirical results of the COMET project are presented, two key
aspects of test quality are to be discussed in more detail: the reliability of the rating
procedure and the validity of the derived rating dimensions in terms of content.
In order to determine the reliability, two raters evaluate the same task solution.
Securing interrater reliability is an absolute prerequisite for further data calculation
pursuant to a measurement model. Without sufficient measurement accuracy (reli-
ability), the data thus obtained would vary randomly and would no longer be
meaningful. To ensure interrater reliability, the following measures, systematically
applied during the COMET project (see COMET Volumes I–IV) can be
recommended:
• The rating schemes should be formulated as precisely as possible.
• Actual case studies should be used for training.
• Rater training should be accompanied by a continuous review of interrater
reliability until a sufficient target criterion is reached.
• The rater training should work with real task solutions.
• The composition of the rater teams should be systematically varied within the
training and also in the subsequent investigation.
• A third rating for systematically selected solutions can further increase measure-
ment accuracy.
The transformation of the open solutions into the criteria of the COMET measure-
ment procedure is the most important link in the measurement chain; therefore, it
must be critically discussed at this point whether the validity of the content can be
guaranteed here. Does it really measure what needs to be measured? The most
important measure to ensure the validity of the content is to have the rating carried
out by experts. These experts ensure that the abstraction of domain-specific solution
knowledge is incorporated into the target criteria. The direct involvement of domain
experts thus inherently supports the content validity of the COMET measurement
procedure.
This means in particular that the rating of open task solutions must also be carried
out permanently by domain experts. As long as this is the case, the validity of the
rating procedure in terms of content is also assured in the long term.
The validity of the open tasks in terms of content and the universal applicability
of the eight criteria to the different domains of VET must be discussed elsewhere
(see COMET Volumes I-IV).
6.3.11 Population
The basis for the following calculations is a data set obtained in Hesse with
electronics technicians in industrial engineering (industry) and electronics techni-
cians for energy and building technology (crafts), which was collected at two
measuring points in the second and third year of training; 178 pupils at the first
and 212 pupils at the second measurement point completed the open tasks. In total,
1560 task solutions have been developed and rated accordingly (cf. COMET Vol-
umes I-IV).
The ‘Finn coefficient’ was used as a reliability measure (see Asendorpf & Wallbott,
1979). With a value range of 0.0 to 1.0, values of 0.5 to 0.7 can be interpreted as
satisfactory and values above 0.7 as good reliabilities. The test for this sample showed
coefficients between 0.71 and 0.86. This means that the reliability can be classified
as consistently good (→ 6.2).
After reviewing the interrater reliability, the data matrix was restructured. The
assignment of the individual task solutions both to the test subjects and to the events
was completely resolved, so that all task solutions could be analysed together in a
vertical data matrix. The analysis units in the procedure described below are no
longer the test subjects, but the tasks.
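The restructuring into a vertical data matrix amounts to flattening the assignment of solutions to persons and measurement points. A minimal sketch, in which the data layout and names are illustrative assumptions rather than the actual COMET data format:

```python
def to_vertical(data):
    """Flatten {person: {measurement_point: item_ratings}} so that every
    task solution becomes its own analysis unit, detached from the
    person and the measurement point it belongs to."""
    return [ratings
            for points in data.values()
            for ratings in points.values()]

# Two persons, three task solutions -> three independent analysis units
flat = to_vertical({"p1": {"w1": [1, 2], "w2": [3, 0]},
                    "p2": {"w1": [2, 2]}})
assert flat == [[1, 2], [3, 0], [2, 2]]
```

Reversing this mapping (Step 5) only requires keeping the person and measurement-point keys alongside each row.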
The raw values from the ratings of the evaluation items were then processed further,
as already described by Erdwien and Martens (2009) in COMET Vol. II. Each
subject was judged by two raters—in a few cases also by three raters. Although
the interrater reliability proved satisfactory (see Step 1), there were, of course,
divergent assessments. Therefore, the rater judgments were averaged before further
processing of the data. Mean values between 0 and 3 were calculated according to
the rating scale and rounded for the subsequent analyses as shown in Table 6.2.
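The averaging step can be sketched as follows. The exact rounding rule is the one given in Table 6.2; ordinary half-up rounding is only an assumption here, and the function name is illustrative:

```python
def aggregate_item_rating(scores):
    """Average the judgments of the (two or three) raters on one item of
    the 0-3 rating scale; half-up rounding to the nearest integer is
    assumed here for illustration (the actual rule is in Table 6.2)."""
    mean = sum(scores) / len(scores)
    return int(mean + 0.5)   # half-up rounding back onto the 0-3 scale

assert aggregate_item_rating([2, 3]) == 3   # mean 2.5 rounds up
```

Averaging before rounding keeps the information from divergent assessments while still producing a single ordinal value per item.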
The following analyses first examined whether the evaluation items for a criterion
are really homogeneous, i.e. whether they form a scale that measures the same
underlying dimension. In particular, it was checked whether the evaluation items of a criterion are so
similar that they can be summarised to a scale value. The eight criteria (clarity,
functionality, sustainability, economic efficiency, business process orientation,
social compatibility, environmental compatibility and creativity) were each calcu-
lated using the ordinary Rasch model. On this basis, the reliability (according to
Rasch) and the Q indices (cf. Rost & von Davier, 1994) were determined as item fit
measures. As the strictest model test, this solution was additionally contrasted with
the mixed Rasch model for two subpopulations. Information criteria were used to
directly compare whether the mixed Rasch model with two subpopulations was
better suited to the data. The ‘Consistent Akaike’s Information Criterion’ (CAIC)
was primarily used for this (see Bozdogan, 1987; Bozdogan & Ramirez, 1988). The
fit was better for the simple Rasch model than for the mixed Rasch model. This
ensures that the respective criterion can be represented by a latent parameter.
For individual criteria, however, questions (items) had to be excluded from the
further analysis due to insufficient homogeneity (measured by the Q indices; see
Table 6.3). After excluding the respective items, the following solutions of the
simple Rasch model were inconspicuous and matched the data well. An exception
is the criterion ‘environmental compatibility’—here an item was removed because of
an ‘overfit’; i.e., the item characteristic curve was too steep for the simple Rasch
model. After eliminating this item, the 1-class solution for environmental compati-
bility was also inconspicuous, even though the resulting reliability was somewhat
lower. An overview of the results can be found in the following table.
Thus, for all eight criteria, a satisfactory solution can be identified using the
simple Rasch model. This allows each task solution to be included in the further
analyses with exactly one value per criterion. The corresponding items thus each
form a scale for recording one of the eight criteria of vocational competence.
It should be noted that five items of a rating each relate to the same task solution.
Even if this fact has been taken into account in detail in the rating training courses, it
cannot be completely ruled out that the homogeneity of the items will be
overestimated by this procedure. This could lead to an overestimation of the actual
correlations, especially in the further calculation of correlations between the scales.
At the same time, the average of the rating assessments leads to an underestimation
of homogeneity. The exact effect of the rating procedures on homogeneity would
have to be checked with further data simulations. These are the additional reasons
why the following statistical methods concentrate on mixed distribution models and
thus above all on qualitative differences of the profiles.
The latent class analysis method was used to further identify typical competence
patterns in vocational education and training. The analytical procedure is based on
that of Martens and Rost (1998). The results of the above-mentioned ordinal Rasch
analyses of the competence criteria are included in the latent class analysis in the
form of rounded theta parameters. Here, the theta parameter represents the task-
specific ability to fulfil the respective criterion well.
This two-step procedure has the particular advantage that the data basis for the
subsequent latent class analysis becomes more robust against possible distortions of
individual evaluation items.
Latent-class analysis is a method that identifies typical profiles of task solutions.
For this purpose, the entirety of the task solutions is broken down into precisely
defined subgroups. Each subgroup (class) defined in this way has exactly one
characteristic profile for the eight criteria. The challenge of this procedure is in
particular to determine the correct number of subgroups into which the whole is
broken down. With each additional subgroup, the fit of the measurement model to
the data must become more accurate. On the other hand, there is the fundamental
requirement that a measurement model should be as ‘simple’ as possible, i.e. as few
subgroups as possible should be identified. The two demands for ‘data fit’ (= more
subgroups) and ‘simplicity’ (= fewer subgroups) must be carefully weighed against
each other. Information criteria such as CAIC (cf. Bozdogan, 1987) are often used
for these weighing processes. However, the penalty function implemented in CAIC
for additional model parameters is arbitrary and differs from other similar informa-
tion criteria. To determine the correct number of subgroups, two further criteria were
therefore used: the use of bootstrapping (see, for example, von Davier, 1997) and the
consideration of the average allocation probabilities to the subgroups. Bootstrapping
creates a customised test distribution using synthetic samples. The actual sample
should not differ significantly from the artificially generated samples. The mean
assignment probabilities indicate how well the criteria profiles of the task solutions
can be assigned to the subgroups. The mean allocation probabilities to the subgroups
should not fall below 0.8. Low assignment probabilities would mean that the profiles
cannot be uniquely assigned to the subgroups. The model with the most subgroups
that simultaneously has a non-significant bootstrap (P(X > Z) = 0.234 for the
Pearson X² test statistic) and provides average allocation probabilities to the
subgroups between 0.81 and 0.95 has 10 groups. Although the information criteria indicate that
models with a smaller number of groups could be better suited to the data—CAIC,
for example, refers to a solution with four subgroups—this of course only applies if
additional model parameters are disproportionately penalised.
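The two quantitative criteria used in this weighing can be sketched as follows. The CAIC formula follows Bozdogan (1987); the function names and example values are illustrative, not taken from the COMET analyses:

```python
import math

def caic(log_likelihood, n_params, n_obs):
    """Consistent Akaike Information Criterion (Bozdogan, 1987):
    CAIC = -2 ln L + p (ln n + 1). Lower values favour the model;
    each additional parameter is penalised."""
    return -2.0 * log_likelihood + n_params * (math.log(n_obs) + 1.0)

def mean_assignment_probability(posteriors):
    """Average of each task solution's largest class-membership
    probability; the study requires this not to fall below 0.8."""
    return sum(max(p) for p in posteriors) / len(posteriors)

# With equal fit, the model with more parameters gets a worse CAIC.
assert caic(-100.0, 10, 1560) > caic(-100.0, 5, 1560)
```

Together, a non-significant bootstrap, mean assignment probabilities above 0.8 and the CAIC comparison bracket the plausible number of subgroups.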
The number of subgroups identified by latent class analysis, weighing ‘fit’ and
‘simplicity’, is 10 (Table 6.4). Each of the 10 groups has a typical competence profile
(Fig. 6.4). The measurement model of latent class analysis assumes that all solutions
assigned to a specific subgroup have exactly the same (latent) competence profile.
Only the level of assignment probabilities varies between the task solutions of a
subgroup. When interpreting the following graphics, it must also be taken into account
that these are typical competence patterns for the solution of tasks and not the
competence patterns of persons. Our sample therefore includes people whose solu-
tions to tasks are assigned to different subgroups and thus different competence
profiles. Such intraindividual differences in competence can be caused by personal
characteristics, such as learning effects or fatigue, or task characteristics, such as
systematic task differences. In particular, systematic differences in tasks would have
to be very carefully taken into account when interpreting the results.
In order to present this result more clearly, the following sub-graphics of this
overall graphic are presented with a systematic selection of the subgroups.
Figure 6.5 shows the essentially parallel competence patterns of subgroups 1, 2,
3, 5, 6 and 10, which together account for 69.4% of the task solutions.
These subgroups differ almost exclusively in their different profile heights. Sub-
group 6 with a share of 8% of all task solutions has the lowest competence profile;
i.e., the corresponding tasks were processed on the rating items with particularly low
competence according to the ratings. Conversely, the particularly good task solu-
tions were assigned to subgroup 1 with a share of 3%. When interpreting the
graphics, it must be taken into account that these are rounded theta parameters, so
the absolute height differences cannot be interpreted directly.
Figure 6.6 again shows two almost parallel competence profile curves. These two
parallel subgroups 4 and 7 have a higher level of clarity/presentation and
functionality compared to the competence profiles considered in Fig. 6.5. Such a
competence pattern has been theoretically expected and corresponds to the level of
‘functional competence’. The corresponding task solutions were therefore carried
out with a disproportionately pronounced functional competence.
Figure 6.7 shows the two remaining subgroups 8 and 9 together with the
subgroup 1 already shown above; like subgroup 1, the two subgroups 8 and 9 also
show an average competence profile. However, subgroup 9 shows a slight drop in
the criteria clarity/presentation and functionality compared to subgroup 1, while
subgroup 8 does not appear to have a very different profile from subgroup 1. The
validity of the contents of these two profiles must be checked by further analyses, for
example by distributing the subgroups to the task sets.
The competence patterns of most subgroups therefore run in parallel. In contrast
to this, subgroups 4 and 7 show a competence profile that corresponds to the level
‘functional competence’; a drop can be identified in the levels ‘procedural compe-
tence’ and ‘holistic shaping competence’. For subgroups 8 and 9, it is not immedi-
ately clear which theoretically expected competence profiles these patterns could
correspond to.
In the following, it was examined whether the four different tasks were also equally
distributed among the identified competence patterns. First of all, the different levels
of difficulty of the tasks are striking (Fig. 6.8). This can be seen directly in the
proportionate ratio of subgroup 6 (the lowest competence profile) to subgroup
5 (the second highest competence profile). Task 2 (skylight control) is relatively
speaking the easiest, followed by task 3 (drying space) and task 4 (pebble treatment)
and finally by the most difficult task 1 (signals).
It can be noted that there do not seem to be any task-specific, qualitative patterns.
In particular, the subgroups 4, 7, 8 and 9, which represent qualitative deviations from
the ‘standard pattern’, appear to be more or less equally distributed among the tasks.
Only for subgroup 4 is there a slightly reduced percentage for task 3 (drying space).
The distribution of competence patterns among the tasks can therefore serve as
proof that the identified patterns or subgroups are not specific competence profiles of
individual tasks.
This means that the solutions of the four test tasks are very similar in terms of the
proficiency criteria in the different subgroups. The implementation of the compe-
tence model presented here has proved its worth insofar as there is no task that shows
a completely independent solution in the form of an ‘own’ subpopulation.
In Step 5, the task solutions were again assigned to the individual persons and
ordered in time. For an initial evaluation, the number of identified subgroups from Step
4 was evaluated. The first two tasks were solved at the first measurement point and
the other two tasks (w3 and w4) at the second measurement point. For the interpre-
tation, it must be taken into account that each position (w1 to w4) is a mixture of
tasks, since the assignment of the tasks to the persons was systematically rotated.
Since only test persons were considered who solved tasks at both measurement
points, the data basis for the following graphics consists of 151 pupils. In particular,
the first solved task (w1) should be compared with the first solved task one year later (w3).
This shows a significant increase, especially for the largest subgroup 1. In
addition, the two subgroups with the highest competence profiles (subgroups
10 and 5) show an almost equal number at least in comparison with the measurement
points w1 and w3. This can be understood as a first indication of the validity of the
tasks in terms of content and curricula: At least for some of the tasks, a higher
competence profile can be shown.
At the same time, subgroup 6, which has the lowest competence profile, shows
that a fatigue effect occurs within a measurement for part of the sample (increases in
frequencies from both w1 to w2 and from w3 to w4). Furthermore, a kind of
reactance effect can be observed, which manifests itself as an increase of this
subgroup between the two measurement points, especially from w2 to w4. The
proportion of these task solutions increases, especially in the last task. Probably
some of the pupils simply no longer wanted to work on their tasks in a motivated
way. This interpretation is also supported by the fact that subgroup 2 with the second
lowest competence profile has a relatively uniform share over time (see Fig. 6.9).
Overall, 89% of the task solutions correspond to the assumptions made in the
COMET model for vocational education and training (subgroups 1, 2, 3, 4, 5, 6,
7, 10). Only subgroups 8 and 9 show theoretically unexpected profile progressions.
Subgroups 4 and 7 in particular confirm that there are qualitatively different task
solutions which are characterised by a higher level of functional competence and
correspondingly lower levels of conceptual/procedural competence and holistic
shaping competence.
The question remains as to why no significant qualitative profile differences could
be identified with regard to the distinction made for theoretical reasons between
conceptual/process-related competence and holistic shaping competence. In the
sense of the theoretical competence model, at least one subgroup should have been
found in which the conceptual/procedural competence (sustainability, economic
efficiency, business process orientation) is higher than the holistic shaping compe-
tence (social compatibility, environmental compatibility, creativity).
Various explanations are conceivable: as already indicated above, the fact that the
ratings referred to one and the same task solution could tend to lead to certain
characteristics of the task solution ‘outshining’ other characteristics and to a uniform
assessment of the various criteria (halo effect). Furthermore, it could be due to
the fact that the subjects studied are at the beginning of their careers. At least a
general halo effect can be excluded, because then subgroups 4 and 7 would not have
been identified either.
Further research is therefore required that supplements the available data with a
sample of test persons who are further advanced in their professional lives and
thus have a higher level of professional expertise. With such a sample, further
qualitative level differences might then be found.
The longitudinal analyses provide initial indications of the content and curricular
validity of the VET competence model presented here. However, it turns out that the
designed tasks harbour a certain potential for displeasure. This potential can be
identified, as shown, with mixed distribution analyses; when the tasks are used in a
non-scientific environment, this potential 'displeasure' must of course be taken into
account. However, in order to carry out a more detailed analysis of individual
development patterns over time, the number of samples would have to be increased
and the number of subgroups to be examined reduced.
170 6 Psychometric Evaluation of the Competence and Measurement Model for. . .
Fig. 6.9 Development of the competence patterns of the individual subgroups (mc1 to mc10) over time
6.4 Confirmatory Factor Analysis 171
6.3.20 Prospect
The measurement procedure within the framework of the COMET project demonstrated
a possible approach for resolving at least part of the contradiction between
measurement model and competence model in vocational education and training.
The combination of open tasks and subsequent ratings shown here could of course
also be implemented under other framework conditions, for example by computer-supported
holistic simulation of tasks, whose solutions could then be assessed
similarly by raters. Various options could be found for calculating the corresponding
rating assessments, based on the respective theoretical model (for an overview, see
Martens & Rost, 2009). Especially where the theoretical competency model predicts
qualitative profile differences, measurement models should be used that can map this
accordingly, such as mixed distribution models like the latent class analysis
used here or the mixed Rasch model (Rost, 1999).
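The latent class step can be illustrated with a minimal mixed-distribution sketch. The following is not the COMET analysis itself: it fits a two-class Gaussian mixture by EM (a latent profile analysis standing in for the latent class and mixed Rasch models named above) to invented competence profiles; all numbers and dimension names are illustrative.

```python
import numpy as np

def fit_two_class_mixture(X, n_iter=50):
    """EM for a 2-component Gaussian mixture with diagonal covariances:
    a minimal stand-in for a latent class / latent profile analysis."""
    n, d = X.shape
    # Deterministic initialisation: the two most extreme profiles on dimension 0.
    mu = np.stack([X[X[:, 0].argmin()], X[X[:, 0].argmax()]])
    var = np.ones((2, d)) * X.var(axis=0)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: class responsibilities for each profile.
        log_p = -0.5 * (((X[:, None, :] - mu) ** 2) / var
                        + np.log(2 * np.pi * var)).sum(axis=2) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class weights, means and variances.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp.T @ X) / nk[:, None]
        var = (resp.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6
    return resp.argmax(axis=1)

rng = np.random.default_rng(1)
# Invented profiles over (functional, procedural, shaping) scores:
# one uniformly low group and one functional-dominant group.
flat_low = rng.normal([20.0, 15.0, 10.0], 5.0, size=(100, 3))
functional_dominant = rng.normal([45.0, 20.0, 12.0], 5.0, size=(100, 3))
X = np.vstack([flat_low, functional_dominant])
labels = fit_two_class_mixture(X)
```

The mixture recovers the two qualitatively different profile shapes without any threshold being set in advance, which is exactly the property that makes mixed distribution models suitable here.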
It should not be overlooked at this point that the 'fit' between measurement model
and theoretical model demanded in the introduction means that further criteria, such as
a particularly efficient measurement of professional competence, cannot be given
priority. Adequate consideration of both theoretical and measurement-methodological
requirements means that methods that can solve this fitting problem, such as the
COMET measurement method, are relatively time-consuming to implement.
It must also be emphasised that measurement methods that meet the formulated
requirements are not suitable for all practical applications. For example, the use of
mixed distribution models prevents the profile heights from being compared directly
with each other, so that at least no simple application for selection processes is
possible. This is also directly related to the fact that it is not possible to identify
common parameters for an entire population, which makes use in large-scale
studies very difficult.
The identity and commitment model of vocational education and training is evaluated
on the basis of two samples (A = 1124, B = 3014) using the methods of an
explorative and a confirmatory factor analysis. In this model, an occupational and an
organisational identity can be distinguished from each other. However, it is also
assumed that the two are related to each other—in both cases, it is a matter of identity
that is shaped by vocational training and work. According to this model, the
development of occupational identity and of identification with the profession in the
process of vocational training and professional work goes hand in hand with the
development of occupational and organisational commitment. Following Carlo Jäger, the
work ethic is defined as the unquestioned willingness to carry out the instructions
of superiors (Jäger, 1989).
Vocational Identity
Item codes (NRW / Saxony) and wording:
BE1 / SkBE1: I like to tell others what profession I have/am learning.
BE2 / SkBE2: I 'fit' my profession.
BE3r / SkBE3r: I am not particularly interested in my profession. (reverse-coded)
BE4 / SkBE4: I am proud of my profession.
BE5 / SkBE5: I would like to continue working in my profession in the future.
BE6 / SkBE6: The job is like a bit of 'home' for me.
Cronbach's alpha NRW: 0.843 (n = 1124)
Cronbach's alpha Saxony: 0.871 (n = 3014)
In both data sets, the alpha would increase if the reverse-coded item (Sk)BE3r
were removed from the scale: in NRW to 0.859 and in Saxony to 0.889. The corrected
item-scale correlation is 0.425 in NRW and 0.447 in Saxony. It remains open
whether the reasons for this effect lie in the reverse coding, with which the
trainees may deal differently than with positively coded items, or whether interest
or lack of interest alone does not yet say anything about identification with the
profession.
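The reliability statistics reported throughout this section (alpha, alpha if an item is deleted, corrected item-scale correlation) follow directly from the item score matrix. A short numpy sketch with invented response data, not the NRW or Saxony samples:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

def alpha_if_deleted(items, j):
    """Alpha of the scale with item j removed."""
    return cronbach_alpha(np.delete(items, j, axis=1))

def corrected_item_scale_corr(items, j):
    """Correlation of item j with the sum of the remaining items."""
    rest = np.delete(items, j, axis=1).sum(axis=1)
    return np.corrcoef(items[:, j], rest)[0, 1]

# Invented example: five coherent items plus one weakly related item,
# mimicking the situation of the reverse-coded item (Sk)BE3r.
rng = np.random.default_rng(0)
trait = rng.normal(size=1000)
good = trait[:, None] + rng.normal(scale=0.7, size=(1000, 5))
weak = rng.normal(size=(1000, 1))  # barely related to the latent trait
scale = np.hstack([good, weak])

alpha_full = cronbach_alpha(scale)
alpha_without_weak = alpha_if_deleted(scale, 5)
# Removing the weak item raises alpha, just as removing (Sk)BE3r does above.
```

The same three functions reproduce every alpha and corrected item-scale correlation quoted for the scales in this section, given the raw response matrices.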
Vocational Commitment
Item codes (NRW / Saxony) and wording:
ID1 / SkID1: I am interested in how my work contributes to the company as a whole.
ID2 / SkID2: For me, my job means delivering quality.
ID3 / SkID3: I know what the work I do has to do with my job.
ID4 / SkID4: Sometimes I think about how my work can be changed so that it can be done better or at a higher quality.
ID5 / SkID5: I would like to have a say in the contents of my work.
ID7 / SkID7: I am absorbed in my work.
Cronbach's alpha NRW: 0.767 (note: data on item ID7 were available for only 627 trainees)
Cronbach's alpha Saxony: 0.820 (n = 2985)
Organisational Identity
Item codes (NRW / Saxony) and wording:
OC1 / SkOC1: The company is like a bit of 'home' for me.
OC2 / SkOC2: I would like to remain in my company in the future, even if I have the opportunity to move elsewhere.
OC3 / SkOC3: I like to tell others about my company.
OC4r / SkOC4r: I don't feel very connected to my company. (reverse-coded)
OC5 / SkOC5: I 'fit' my company.
OC6 / SkOC6: The future of my business is close to my heart.
Cronbach's alpha NRW: 0.869 (n = 1121)
Cronbach's alpha Saxony: 0.899 (n = 3030)
Organisational Commitment
Item codes (NRW / Saxony) and wording:
BetID1 / SkBetID1: I like to take responsibility in the company.
BetID2 / SkBetID2: I want my work to contribute to operational success.
BetID3 / SkBetID3: I am interested in the company suggestion scheme.
BetID4 / SkBetID4: The work in my company is so interesting that I often forget time.
BetID5 / SkBetID5: I try to deliver quality for my company.
BetID6 / SkBetID6: Being part of the company is more important to me than working in my profession.
Cronbach's alpha NRW: 0.702 (n = 1077)
Cronbach's alpha Saxony: 0.704 (n = 2990)
The item (Sk)BetID6 fits rather badly with the rest of the scale. However, this is to
be expected given the content of the item: here, two concepts are compared with
each other. What is asked is the value placed on belonging to the company, not the
belonging itself, which makes this item difficult to understand. The corrected
item-scale correlation is only .198 for NRW and .201 for Saxony. When this item is
excluded from the scale formation, the alpha increases to .732 for NRW and .737 for
Saxony. The alpha therefore remains at a rather low level even when the conspicuous
item is excluded.
Work Ethic
Item codes (NRW / Saxony) and wording:
AM1 / SkAM1: I am motivated, no matter what activities I get assigned.
AM2 / SkAM3: I am always on time, even if the work does not require it.
AM3 / SkAM2: I am reliable, no matter what activities I get assigned.
Cronbach's alpha NRW: 0.592 (n = 1145)
Cronbach's alpha Saxony: 0.687 (n = 3037)
Overall, the work ethic scale should be revised, as the alpha is at a very low level.
Although the shortness of the scale (only three items) has to be taken into account,
which additionally depresses the alpha, the low corrected item-scale correlations
of, e.g., AM1 (NRW) 0.401 and AM2 (NRW) 0.331 also indicate an ambiguity of the
scale in terms of content.
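The effect of scale length on alpha can be made concrete with the standardized-alpha (Spearman-Brown) formula α = k·r̄ / (1 + (k − 1)·r̄). A short sketch, with an illustrative mean inter-item correlation chosen so that a three-item scale lands near the 0.59 observed for NRW:

```python
def standardized_alpha(k, mean_r):
    """Standardized Cronbach's alpha for k parallel items
    with mean inter-item correlation mean_r (Spearman-Brown logic)."""
    return k * mean_r / (1 + (k - 1) * mean_r)

# Three items with a mean inter-item correlation of about 0.33 already
# cap alpha near 0.60; doubling the scale length with items of the same
# quality would raise it to about 0.75 without any item improving.
a3 = standardized_alpha(3, 0.33)
a6 = standardized_alpha(6, 0.33)
```

This illustrates why the shortness of the scale alone depresses alpha, and why revising item content and lengthening the scale are two separate levers.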
6.4.4 Explanations
6.4.5 Modification
BetID6 is excluded from the analysis. All other items are retained. See Fig. 6.11 for
the result of the model.
Compared to the first model, which takes all items into account, the latent factor
of organisational commitment becomes somewhat clearer. However, apart from minor
changes in regression weights and correlation coefficients, there are no major
changes.
The statistical indices improve only marginally. The model cannot really be
optimised by deleting problematic items. A look at the CFA of the original model
based on the data from Saxony should provide further insights.
6.4.6 Discussion
It remains open why model and data do not fit well together. The partially low
loadings already described above provide initial information; these were already
noticeable during the reliability analysis. It is therefore necessary to further
sharpen the content-related fit of the items to the scales as well as the
similarities between the items of a scale. Furthermore, one correlation in this
model is at the edge of what is possible: the two forms of commitment correlate at
r = 0.938. This means that they measure almost the same thing. It is therefore
important to work on the commitment scales. However, it remains to be seen at this
stage whether a distinction between two forms of commitment is justified in terms
of content.
Saxony (Original Model)
The original model was checked again on a second data set in order to verify the
results and findings initially obtained. In the first calculation, all items were
taken over and no changes were made.
As in NRW, the factor loadings are largely within an acceptable range. Really
critical are only the item SkBetID6 (0.182) and, to some extent, SkBE3r (0.477).
Overall, the loadings are somewhat higher than in NRW. Again, it is striking that
the regression from organisational commitment to organisational identity has the
greatest overall predictive power for organisational identity (β = 0.59), and the
same also applies to the predictive power of occupational commitment for
occupational identity (β = 0.677). The correlation between the two forms of
identity is r = 0.637, and the correlation between the two forms of commitment even
exceeds 1: r = 1.068. The original model was modified as a result. It is also
striking that there are very high correlations between work ethic and the forms of
commitment: r = 0.708 with vocational commitment and r = 0.818 with organisational
commitment (Fig. 6.12).
Initially, MPlus tells us that the model, as already mentioned above, cannot
continue to exist in this form:
WARNING: The latent variable covariance matrix (PSI) is not positive definite.
This could indicate a negative variance/residual variance for a latent
variable, a correlation greater or equal to one between two latent variables, or
a linear dependency among more than two latent variables. Check the TECH4
output for more information. Problem involving variable BTE.
As in the check against the data from NRW, the statistical parameters indicate
that model and data do not match well. The correlation above one that results in
this model invalidates the model.
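The MPlus warning can be reproduced directly: a latent correlation above 1 makes the latent covariance matrix indefinite. A small numpy check, using the two correlations reported above:

```python
import numpy as np

def is_positive_definite(m):
    """A symmetric matrix is positive definite iff all eigenvalues are > 0."""
    return bool(np.all(np.linalg.eigvalsh(m) > 0))

# Latent correlation matrices for the two pairs reported above.
identity_pair = np.array([[1.0, 0.637],
                          [0.637, 1.0]])    # the two forms of identity
commitment_pair = np.array([[1.0, 1.068],
                            [1.068, 1.0]])  # the two forms of commitment
# The commitment pair fails the check: its eigenvalues are 1 +/- 1.068,
# one of which is negative. This is exactly what the PSI warning flags.
```

The identity correlation (0.637) is admissible; the commitment correlation (1.068) is not, which is why the model had to be modified.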
Since the originally assumed model does not fit the data of either data set well, we
want to determine by means of an explorative factor analysis (principal component
analysis) what model is suggested by the available data.
NRW
First EFA: All variables taken into account.
The analysis is carried out with SPSS, and all items are first included in the
analysis. No assumptions are made. The number of factors included in the model is
determined according to the Kaiser criterion (eigenvalue greater than 1). The
solution is also rotated using the varimax method: by iterative rotation, each item
is assigned as clearly as possible to only one factor, and medium-high loadings are
avoided. The factors remain independent of each other (orthogonal).
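The Kaiser criterion and varimax rotation just described can be sketched in plain numpy. SPSS is used in the text; the following is an illustrative reimplementation on invented two-factor data, not the NRW item responses:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix (standard algorithm)."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ R

def pca_kaiser_varimax(X):
    """Principal components of the correlation matrix, retained by the
    Kaiser criterion (eigenvalue > 1), then varimax-rotated."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    n_keep = int((eigvals > 1).sum())
    loadings = eigvecs[:, :n_keep] * np.sqrt(eigvals[:n_keep])
    return varimax(loadings), n_keep

# Invented data: six items driven by two uncorrelated latent factors.
rng = np.random.default_rng(0)
f = rng.normal(size=(500, 2))
X = np.hstack([f[:, [0]] + 0.4 * rng.normal(size=(500, 3)),
               f[:, [1]] + 0.4 * rng.normal(size=(500, 3))])
rotated, n_keep = pca_kaiser_varimax(X)
```

On this construction the Kaiser criterion retains exactly the two real factors, and after rotation each item loads clearly on only one of them, which is the 'simple structure' the text aims for.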
6.4.8 Results
The principal component analysis results in a solution with five factors. 57.52% of
the total variance can be explained with the five factors. The items in parentheses
could not be uniquely assigned but load similarly high on at least two factors.
6.4.9 Discussion
The forms of identity emerge very clearly in the analysis. Factor 1 corresponds to
organisational identity and factor 2 to vocational identity. The work ethic can
also be recovered (factor 4), although the item BetID5, actually intended for
organisational commitment, is assigned to it. The original separation between two
forms of commitment is not found in the data. Factor 3 forms a kind of general
commitment factor, which consists mainly of items of vocational commitment and
partly of items of organisational commitment. The reliability analysis yields an
alpha of 0.787 if BetID3 and 4 are omitted. Factor 5 consists only of items from
the area of organisational commitment, but these are the items that have already
attracted negative attention in the previous analyses. BetID3 refers to the company
suggestion scheme, which is not necessarily relevant for all trainees. BetID6
contains an assessment of the importance of belonging to the company. In terms of
content, factor 5 seems to be a kind of 'residual factor' on which the items load
that caused the trainees problems during processing. BetID6 and, if possible,
BetID3 should be excluded from further analysis and revised.
Second EFA: Exclusion of BetID6
A new explorative factor analysis (EFA) excluding the variable BetID6, which was
identified as inappropriate to its originally intended scale 'organisational
commitment' as well as to all other scales, should provide information about a
factor structure in which a 'residual factor' is avoided. The inclusion criterion
is again the Kaiser criterion.
The following factor solution results:
6.4.10 Discussion
With the item BetID6 excluded, the fifth factor disappears, which suggests that it
was in fact a kind of residual factor resulting from the difficulty of the item
BetID6. The EFA carried out here paints a similar picture to the first EFA with
regard to the identity factors (factors 1 and 2). Factor 2 has the same structure
as in the first EFA, and factor 1 is now only supplemented by the cross-loading of
the item BetID3, which no longer loads on a residual factor. The work ethic
(factor 4) also shows a similar structure to the first EFA, with additional items
from the area of commitment loading on it. The commitment factor (factor 3) has
become smaller overall. In this analysis, it turns out that the items BetID1 and 2
as well as the items ID1, 4 and 5 make up the core of this factor. The other
commitment items are now spread over various factors. They do not seem to fit
concretely enough on commitment alone, but instead blur the boundaries to identity
and work ethic (with the exception of ID7 and BetID5, which do not strictly fit
commitment but complement vocational identity and work ethic).
6.4.11 Discussion
With a third EFA, excluding the conspicuous items BetID6 and BetID3, the proportion
of explained total variance increases significantly, whereas the values of the
factors become only slightly smaller.
Saxony
An explorative factor analysis will also be carried out again on the basis of the Saxon
data in order to check whether similar patterns arise in this data set.
First EFA: All Variables Taken Into Account
The first EFA contains all variables and comes to the following result after the
Varimax rotation:
6.4.12 Discussion
This first explorative factor analysis also arrives at a solution with five
factors. Again, it is striking that not two commitment factors arise, as assumed in
the original model, but a global commitment factor (factor 3) and a kind of
residual factor (factor 5), on which exactly those items load that already caused
difficulties in the reliability analysis. Unlike in the solution from NRW, the item
SkBetID3 finds its place in factor 3, and instead the item SkBE3r becomes blurred,
with loadings on factors 2 and 5. In the solution for NRW, this item loaded on the
vocational identity scale intended for it. This reverse-coded item seems to cause
the trainees greater difficulties here, or does not have the same significance for
their vocational identity as in NRW.
Two further model modifications were analysed: exclusion of SkBE3r and SkID3. As a
result, these analyses support the conclusion that the scales of organisational
commitment and work ethic in particular should be revised.
Both the analysis of the data from NRW and that of the data from Saxony yield a
pattern that can best be described by a four-factor solution. A clear distinction
is made between vocational and organisational identity, while there is no
separation between vocational and organisational commitment; these forms blur into
each other. The work ethic can be found in a form similar to the one originally
assumed in both data sets, although it is supported sometimes more strongly,
sometimes less strongly by further items from the commitment range.
The evaluations carried out here suggest that the separation between vocational
and organisational commitment should be reconsidered and, at the same time, some
items should be assigned differently. Thus, item (Sk)ID7 appears consistently as
part of vocational identity and item (Sk)BetID5 as part of the work ethic. The
other items, some of which have changed their affiliation to a scale, require a
justification in terms of vocational pedagogy. It should also be examined whether
the separation between the forms of commitment should be abandoned. It remains to
be seen whether there is just one type of commitment, as the factor analysis
consistently shows, or whether the scales provided so far are not suitable for
validly measuring the construct, in which case the problem lies in the construction
of the scales.
The analyses carried out show that the separation between organisational and
vocational commitment could not be confirmed on the basis of the item structure
analysed. This may indicate that both concepts are one concept and can be measured
within one construct. Equally, the result may mean that the separation of content
is justified, but the items are not able to measure the constructs. It must
therefore be justified in terms of vocational education whether a separation of the
two concepts can be assumed, what exactly the core points of the concepts are, and
at which points they differ from one another. Against this background, new items
can then be found and scales developed (→ 4.7: Tables 4.7 and 4.8).
Other results relate to the work ethic. So far, the scale contains only three items
with only mediocre reliability values. In the EFA, one or more items are
additionally assigned to the scale. It should therefore also be examined what
exactly constitutes the core of the work ethic and how it can be more clearly
distinguished from commitment. Then, further items can be found, and the existing
items can be modified (Table 4.9).
6.5 Validity and Interrater Reliability in the Intercultural Application of. . . 185
6.5.1 Method
Guillemin, Bombardier and Beaton (1993) point out that existing measuring instru-
ments must always be adapted to existing cultural and linguistic differences and
similarities when used in other cultures and in a different language (Table 6.5).
China and Germany show great differences in terms of the vocational training
system, language and culture. For this reason, the intercultural adaptation of
COMET measuring instruments and evaluation items to the vocational training
system/training in China has involved not only translation but also cultural
adaptation. The overall process included preparation, translation, cultural
adaptation and performance evaluation. In addition, appropriate counselling
training was conducted in China to ensure the reliability and validity of the
evaluation of task solutions for open test tasks.
Table 6.5 Different situations of intercultural adaptation (based on Guillemin et al., 1993)
Situation (differences and similarities between the languages and cultures of the target group of the new measurement and of the source measurement) | Translation needed | Cultural adaptation needed
No difference in language, culture and country | Not necessary | Not necessary
Same country and same language, but different culture (e.g. a group that immigrated to the country of the source measurement a long time ago) | Not necessary | Necessary
Different country and different culture, but same language | Not necessary | Necessary
Same country, but different language and culture (e.g. new immigrants in the country of the source measurement) | Necessary | Necessary
Different country, language and culture | Necessary | Necessary
The Chinese working group received the German version of the measuring concept,
the measuring instruments and the evaluation items (COMET Vols. I-IV) from the I:BB
of the University of Bremen and worked out the implementation concept for the
competence measurement in China together with the German colleagues. During
translation, the working group used the two-way (back-)translation method, in which
a text is translated into the foreign language and then translated back into the
source language, in order to achieve high semantic equivalence and to maintain the
content and meaning of the tasks in the original instruments. The translation work
was carried out jointly by the developers of the measuring instrument and concept
at the University of Bremen and by the scientific staff of Beijing Normal
University and the Beijing Academy of Education.
In the cultural adaptation, the formulations were adapted to Chinese usage. For
example, 'occupation' was replaced by 'subject/training course' (zhuanye) and
'training company' by 'practical training company'. Some questions that do not
correspond to Chinese circumstances were deleted. To ensure the international
comparability of the results, the original numbering of the test items was
retained: for example, questions 6 and 8 were deleted from the German context
questionnaire and simply skipped in the Chinese questionnaire (Rauner, Zhao, & Ji,
2010).
Before each test, the Chinese working group organised experienced teachers and
practitioners from companies to check and adapt the validity of the open tasks and
problem-solving areas proposed by the German side, especially from the perspective
of professional validity rather than teaching validity. It turned out that the
teachers were able to agree quite easily on the validity of the test task content,
owing to their acceptance of the measurement model for recording professional
competence. For example, the experts (including teachers and practical experts)
examined and discussed four test tasks proposed by Germany in the course of
measuring the vocational skills of trainees in electrical engineering subjects in
Beijing and quickly reached an agreement. Three of the four tasks did not need to
be changed and could be adopted directly as test tasks in China; the corresponding
problem-solving scopes did not have to be adapted either. Only the understanding of
one test task was discussed at length: at one point, the description of the
situation went beyond the scope of electrical engineering, so that the trainees
would also be required to work on the task from an interdisciplinary perspective.
At first, the experts disagreed as to whether this interdisciplinary element should
be retained. However, after carefully interpreting the task with reference to the
COMET competency model, a consensus emerged that the task and the model were in
agreement, and it was finally decided to include this task as a test task
(Rauner, Zhao, & Ji, 2010). Teachers and practical experts tested and adapted the test
tasks on the basis of the COMET competence model and professional practice in
China. This also happened in the subsequent tests for trainees in the automotive
service sector.
The practice of cultural adaptation of the COMET concept as well as of the test
tasks and the context survey shows that the basic idea of the work-process-systematic
curriculum and of the COMET competence model is accepted by the Chinese teachers
involved in the project. This indicates that the COMET competency model provides a
sound scientific basis for achieving corresponding educational goals and guiding
principles for international standards in vocational education and training, and
that an international comparison can thus take place.
In order to ensure the quality of the assessment and of the country comparison of
competence measurement, a training concept for raters to increase interrater
reliability was developed in the COMET project. The Chinese project group organised
rater training courses for all tests in which the teachers were involved in the
evaluation work. A very high interrater reliability was achieved with this rater
training. The process of rater training included:
• Presentation of the COMET model of vocational competence, the measuring
procedure and the evaluation procedure;
• Explanation of the eight evaluation criteria and 40 evaluation items for the
evaluation of the solution examples;
• Rating exercises using real examples, i.e., solutions of the trainees were
selected as case studies for each task and evaluated by the raters for the
exercise. Each exercise included three parts: an individual assessment, a group
rating and a plenary discussion. For the process of rater training and the test in
electrotechnical subjects, see COMET Vol. III, 4.2, Tables 4.5 and 4.6.
Based on the aforementioned processes and on the data from three tests, the
structural validity of the evaluation items was evaluated by means of an explorative
factor analysis and the reliability of the evaluation items by means of the internal
consistency coefficient. The three tests involved 831 trainees in electrical engineer-
ing subjects in 2009, 779 trainees in automotive engineering in 2011 and 724 trainees
in automotive engineering in 2012 (Zhuang & Zhao, 2012, 46 ff.).
6.5.6 Results
Effect of Rater Training on Increasing Interrater Reliability
The Chinese working group organised a very extensive rater training and came to the
following results:
• The first test ratings varied widely: the raters relied on their subjective
teaching experience and not predominantly on the solution scopes. After the
importance of the solution scopes as an interpretation aid for the rating had been
discussed in plenary, the degree of agreement increased sharply at the next
practice rating.
Table 6.6 Interrater reliability: rater training for the preparation of competence
measurement in electrical engineering subjects (2009). (Ratings took place on day 1,
day 2 morning/afternoon and day 3 morning/afternoon.)
Pb-Code | Task | Finn, first rating | Finn, second rating
H0282 | Signals | 0.41 | 0.82
H0225 | Signals | 0.54 | 0.79
H0176 | Drying space | 0.80 | 0.84
H0234 | Drying space | 0.75 | 0.80
H0265 | Skylight control | 0.84 | 0.82
H0102 | Skylight control | 0.82 | 0.83
H0336 | Pebble treatment | 0.86 | 0.85
H0047 | Pebble treatment | 0.79 | 0.79
Table 6.7 Interrater reliability: rater training for the preparation of competence
measurement in the field of automotive engineering (2011)
Case study (answer sheets) | Number of raters | Finnjust
Chinese trainees on oil consumption | 29 | 0.70
Chinese teachers on winter check | 30 | 0.76
Chinese trainees on winter check | 30 | 0.85
German trainees on winter check | 30 | 0.77
Table 6.8 Interrater reliability: rater training of the competence measurement of
trainees in the field of automotive engineering (2012)
Case study | Number of raters | Finnjust
Window lifter 1 | 25 | 0.64
Window lifter 2 | 25 | 0.79
Liquefied petroleum gas | 25 | 0.84
Classic car 1 | 25 | 0.78
Classic car 2 | 25 | 0.80
Glowing MIL | 25 | 0.84
Misfuelling | 25 | 0.83
Note: The interrater reliability (Finnjust) is satisfactory at a value >0.5 and good at a value >0.7
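The Finn coefficient used in these tables compares the observed rater disagreement with the disagreement expected under a uniform chance distribution over the rating scale. The following is a simplified, unadjusted sketch (the 'Finnjust' values above use an adjusted variant; the scale size and ratings below are invented):

```python
import numpy as np

def finn_coefficient(ratings, k_categories):
    """Unadjusted Finn coefficient.
    ratings: (n_items, n_raters) matrix of scores on a k-point scale.
    1 minus the ratio of the mean within-item rater variance to the
    variance of a discrete uniform distribution over k categories,
    which is (k^2 - 1) / 12."""
    ms_within = ratings.var(axis=1, ddof=1).mean()
    chance_var = (k_categories ** 2 - 1) / 12.0
    return 1.0 - ms_within / chance_var

# Perfect agreement: five raters give identical scores on a 4-point scale.
perfect = np.tile(np.array([[1], [2], [3], [4]]), (1, 5))
# One rater off by one point on each item lowers the coefficient.
noisy = perfect.copy()
noisy[0, 0] = 2
noisy[1, 1] = 1
noisy[2, 2] = 4
noisy[3, 3] = 3
```

Perfect agreement gives a coefficient of 1; the mildly noisy ratings still land above the 0.7 threshold that the note above classifies as good.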
The COMET competence model assumes that competence at a higher level includes
competence at a lower level. Here, the factor analysis is carried out at the level of
functional competence (assessment points 1–10), procedural competence (assess-
ment points 11–25) and shaping competence (assessment points 26–40)
(cf. Table 6.9).
From the result of the factor analysis, it can be deduced that the 10 evaluation
items under 'functional competence' can be combined into one factor and the
15 evaluation items under 'shaping competence' into two factors (the 10 evaluation
items under 'social compatibility' and 'environmental compatibility' into one
factor and the 5 evaluation items under 'creativity' into another). This ensures
good structural validity. The 15 evaluation items under 'procedural competence'
can be grouped into three factors: four evaluation items of each of the three
criteria 'sustainability', 'economic efficiency' and 'business and work process
orientation' can be combined into one factor of five evaluation items each.
Overall, the COMET evaluation items show a good structure and essentially
correspond to the theoretical hypothesis (Table 6.10).
Table 6.10 Factor analysis for the 15 evaluation items under ‘procedural competence’
Components
1 2 3
WM11 0.386 0.001 0.501
WM12 0.382 0.056 0.752
WM13 0.2 0.106 0.781
WM14 0.8 0.012 0.232
WM15 0.81 0.062 0.204
WM16 0.623 0.304 0.105
WM17 0.133 0.866 0.043
WM18 0.177 0.83 0.111
WM19 0.091 0.529 0.521
WM20 0.384 0.713 0.241
WM21 0.708 0.351 0.236
WM22 0.728 0.308 0.251
WM23 0.434 0.394 0.359
WM24 0.642 0.328 0.343
WM25 0.135 0.312 0.665
Extraction method: principal component analysis
Rotation method: orthogonal (varimax) rotation with Kaiser normalisation
Rotation converged after 5 iterations
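The grouping of the 15 'procedural competence' items into three factors can be read off Table 6.10 programmatically: assign each item to the component with the highest loading and flag items whose two highest loadings are close together (the 0.2 threshold is an illustrative choice, not taken from the text):

```python
import numpy as np

items = [f"WM{i}" for i in range(11, 26)]
# Varimax-rotated loadings from Table 6.10 (components 1-3).
loadings = np.array([
    [0.386, 0.001, 0.501], [0.382, 0.056, 0.752], [0.200, 0.106, 0.781],
    [0.800, 0.012, 0.232], [0.810, 0.062, 0.204], [0.623, 0.304, 0.105],
    [0.133, 0.866, 0.043], [0.177, 0.830, 0.111], [0.091, 0.529, 0.521],
    [0.384, 0.713, 0.241], [0.708, 0.351, 0.236], [0.728, 0.308, 0.251],
    [0.434, 0.394, 0.359], [0.642, 0.328, 0.343], [0.135, 0.312, 0.665],
])
# All loadings in this table are positive, so a plain argmax suffices
# (in general one would take absolute values first).
dominant = loadings.argmax(axis=1) + 1            # 1-based component index
top_two = np.sort(loadings, axis=1)[:, -2:]
ambiguous = top_two[:, 1] - top_two[:, 0] < 0.2   # cross-loading items

factors = {c: [it for it, d in zip(items, dominant) if d == c]
           for c in (1, 2, 3)}
```

Running this recovers the three-factor grouping described in the text and singles out items such as WM19 (0.529 vs. 0.521) as loading on two components almost equally.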
Reliability was analysed for the overall reliability of 'vocational competence',
for the three competence levels 'functional competence', 'procedural competence'
and 'shaping competence', and for the eight criteria. It was found that Cronbach's
α for the overall reliability of 'vocational competence' and for the three
competence levels is above 0.9, and that α for all eight criteria is above 0.8.
Overall, the measurement model has a high internal consistency (see also
Tables 6.11 and 6.12).
Table 6.11 Reliability analysis for the three assumed competence levels
α coefficient | Vocational competence: 0.956a | Functional competence: 0.953 | Procedural competence: 0.907 | Shaping competence: 0.924
a Item 20 was deleted from the evaluation scale of the 2009 test and item 3 in the 2011 and 2012 tests, which is why the overall reliability of vocational competence is the calculation result without evaluation item 3. Calculation without evaluation item 20 results in α = 0.971
Discussion
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 193
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_7
194 7 Conducting Tests and Examinations
In the project work, current topics from the operational activities of the respective field of
application or specialist area of the candidate are to be taken up, which should also be
usable for the company if possible (Borch & Schwarz, 1999, 24).
The term ‘holistic task’ is intended to express that the purpose of this form of
examination is to test the understanding and knowledge of context as well as the
ability to solve professional tasks in full. This form of examination, introduced in the
1990s, has already met with a predominantly positive response from both trainees
and companies at the very first attempt (Fig. 7.2).
The majority of the trainees—on average 70%—rated the examination part of company project work as practice-oriented, and they regarded the degree of difficulty of these tasks as appropriate. Approximately 30% consider this part of the
Fig. 7.2 Evaluation of the in-company project work by the trainees (Petersen & Wehmeyer, 2001)
A comparison of this ‘holistic’ evaluation concept with the COMET test procedure
reveals striking similarities at the concept level.
7.1 How Competence Diagnostics and Testing are Connected 197
Fig. 7.3 Evaluation of the assessment of the company project work in the final examination by the
trainees (Petersen & Wehmeyer, 2001)
‘function’ and ‘function orientation’ and stand for a variety of process-related issues
that shape vocational work and training.
The complexity of this guiding principle posed considerable difficulties for VET
planning and research in its implementation in training and examination regulations.
This became apparent when formulating and establishing a competency model as the
basis for structuring the examinations. The ‘extended examination’ with parts 1 and 2 replaced the traditional intermediate examination; part 1 of the final examination—taken half-way through training—has since then been included in the assessment of the final examination with a share of 30–40% (Fig. 7.5).
Part 1 of the final examination is a vocational examination that is tailored to the
various training occupations and refers to the qualifications that are taught in the first
18 months of dual vocational training. As an overriding reference, ‘the ability to
plan, carry out and control independently and to act in the overall operational
context’ is emphasised. ‘This ability must be demonstrated at a level appropriate to
the level of training achieved. The ‘complex work tasks’ examination format is used
to ‘focus’ on the qualifications required for the planning, creation and commission-
ing of technical systems or subsystems. In addition to assembly, wiring and connec-
tion, this also includes functional testing as well as the search for and elimination of
faults. Included are specific qualifications for assessing the safety of electrical
systems and equipment’ (ibid., 7).
The candidate receives a list of subtasks for the analysis and execution of the task
(see Fig. 7.6). The evaluation of the examination performance is based on this task
list. It is checked whether the sub-competences shown in the task list are ‘fully’, ‘partially’ or ‘not’ achieved. The situation description of this form of
complex work tasks differs only slightly from the form of the situation description
(scenario) of the COMET test tasks. It is striking that when explaining the concept of
Fig. 7.5 Structure of the final examination of industrial electrical professions (BMBF, 2006b, 6)
the ‘complex work task’, for the processing of which—as with COMET test tasks—
120 min is provided, a gradual reduction of the ‘typical’ (complex) work tasks is
made. In a first step, this takes the form of reducing complexity by means of
restrictive partial orders (BMBF, 28). In order to implement this concept of
reduced professional competence, an ‘examination hardware’ is specified (ibid.,
29). With an abstract test hardware ‘automation technology’, the typical character-
istics of the operational work processes evaporate. Questions about the usefulness
and practical value of a technical solution, its environmental and social compatibility
and the question of the creativity of the task solution no longer arise, or only to a very
limited extent.
For the construction of the ‘complex work task with reduced complexity’ in this
example, defined errors are to be installed ‘to ensure that all examinees meet the
same basic requirements for troubleshooting. In addition, the evaluators can carry
out the evaluation more easily’ (ibid., 30). Similar requirements are proposed for
other parts of the test (e.g. planning the wiring and program changes). It is intended
to transform the ‘complex work task’ into numerous subtasks.
The solution scope is limited mainly to the criteria or the partial competence of
the functionality. The essential elements of professional competence, as defined in the training regulations and in the COMET competence model, are no longer in view. The
operationalisation of the open situation description leads to an examination format
guided by subtasks, which does not make it possible to check the professional
competence defined in the training regulations on the basis of the assessments of
the examination results.
The BMBF’s handbook points out this problem: ‘Unfortunately, it is possible to
agree on a procedure for an evaluation that is highly objective and reliable, but still
Fig. 7.6 Work task (BMBF, 2006b, 31 f.): ‘Your supervisor instructs you to plan the changes,
carry out the changes in the control cabinet and test them at a pilot plant’
does not cover what is to be covered! It’s about the validity of a test. In this respect
one must be aware that one can record and evaluate very different performances of
an examinee. A central question always remains whether the examination perfor-
mance recorded is an appropriate indicator of what is to be examined. [. . .]’. It was
shown that examination requirements with a high affinity to the requirement dimen-
sion of the COMET competence model are defined in training regulations. When
implementing the training objectives and examination requirements in a manageable
examination procedure, the question arises as to the quality criteria of the new
examination practice.
In its evaluation report on the introduction of IT occupations, BIBB points to a
serious problem in the implementation of the new form of examination: ‘However,
the examination practice is quite different. In the first intermediate examination,
sixty tasks were set instead of four—a blatant disregard for the training regulations.
The ‘holistic tasks’ are also subdivided and partly programmed [multiple choice
tasks]—but in no case holistic. To date, neither the Federal Government nor the
Federal States, as supervisory bodies over the Chamber of Industry and Commerce,
have prevented the unlawful examination practice’ (Borch & Weißmann, 2002, 21).
These conclusions derived from the evaluation results for the implementation of
the IT reorganisation procedure (Petersen & Wehmeyer, 2001) were obviously taken
into account in the reorganisation procedures in the following years. Refraining from dissolving the ‘holistic’ tasks into a multitude of subtasks (according to Aristotle: ‘The whole is more than the sum of its parts’) brought with it a new
problem: the evaluation of the very different solutions to ‘holistic tasks’ or the
quality of the work results of the wide range of different ‘operational tasks’.
According to the authors of the IHK NRW handbook Der Umgang mit dem
Varianten-Modell of 4 February 2010, a ‘still unsolved basic problem of this form
of examination’ is that ‘the examination board must arrive at a substantiated
statement at the end of the examination on the basis of a written work assignment’
(IHK NRW, 2010, 6).
With the COMET test procedure, a task format and a rating procedure were developed with which all quality criteria—both a high degree of substantive validity and an equally high degree of objectivity and reliability—can be achieved. It therefore suggests itself to investigate whether the COMET test procedure can be applied to examinations and the ‘quality’ of the new examinations thus increased.
Fragmenting the complex task (a whole) into a structure of subtasks clearly limits the solution scope initially opened up by the situation description. As a consequence, this leads to a levelling of
the assessment of the examination performance. High-performance test subjects do
not have the possibility to make full use of the solution scope given by the open
situation description. The structuring of the task solution is precisely specified. Weak
trainees receive far-reaching support in solving the task through the question-guided
presentation of the task. There is a risk that the objectively given heterogeneity of the
competence characteristics of the subjects will be reduced (Fig. 7.7).
Since task-specific assessment criteria are applied in established examination
practice, the competence development in the form of competence profiles and
competence levels can no longer be compared across tasks. In addition, task-specific
evaluation items make it more difficult to qualify the examiners and to achieve a
sufficiently high degree of comparable examination results that depend on the
examiners. The COMET measuring method offers a solution. In addition to a
standardised measurement model (evaluation grid), the development of task-specific
solution spaces is a prerequisite (see above). These have the function of enabling the
raters (examiners) to interpret the rating items in a task-specific manner. After a rater
Fig. 7.7 Levelling effect of the tests of 453 test participants (COMET electronic technicians) (TS:
total point value COMET test group)
training, the raters are able to evaluate the entire range of highly diverse solutions on
the basis of the standardised rating scale even without using the solution scopes.
The application of the COMET evaluation procedure for evaluating examination performance in solving holistic work tasks simplifies examination procedures and quite decisively increases the reliability, accuracy and thus the comparability of examination results. At the same time, there is no need to divide the holistic tasks into a multitude of subtasks.
The consequence of introducing an objective examination procedure is that the
heterogeneity of professional competences is reliably and objectively recorded.
For some training occupations, the training companies are free to choose between the
‘operational order’ version of the examination and the standardised form of the
‘practical task’. Practical tasks are created nationwide and are a form of simulation of
real work processes. The relevant ‘implementation guides’ rarely emphasise in more
detail the fact that these are two fundamentally different types of examination. The
operational order is characterised by its embeddedness in the social context of a company, the specific competitive situation and the uncertainties of unforeseeable events. At the same time, this complicates a comparable evaluation of operational orders. It is precisely the central characteristic of
these orders that they are each singular events. This strengthens their authenticity.
However, this form of examination poses the challenge of developing selection and
evaluation criteria that ensure the comparability of this element of the examination.
Using the training regulations for the profession of electronics technician for auto-
mation technology as an example, the BMBF has presented an implementation guide
7.2 The Measurement of Professional Competence 203
that makes it easier for companies to decide for or against this examination element.
Initially, reference is made to the qualification requirements to be assessed:
‘The candidate should demonstrate an ability to
1. Evaluate technical documents, determine technical parameters, plan and coor-
dinate work processes, plan materials and tools,
2. Assemble, dismantle, wire, connect and configure subsystems, comply with safety,
accident prevention and environmental regulations,
3. Assess the safety of electrical systems and equipment, check electrical protective
measures,
4. Analyse electrical systems and check functions, locate and eliminate faults, adjust
and measure operating values,
5. Commission, hand over and explain products, document order execution, pre-
pare technical documents, including test reports’ (BMBF, 2006a, 2006b, 9).
The examination forms ‘operational project’ (IT occupations) and ‘operational
order’ (electrical occupations) are de facto identical examination forms, even if an
attempt is made, with extensive justifications, to construct an artificial distinction
between technical competence and process competence. The latter is assigned to the
operational order in the examination regulations for electrical occupations. ‘This
[. . .] should be a concrete order from the trainee’s field of application. What is
required is not a ‘project’ specially designed for the examination, but an original
professional action in everyday business life. However, the operational order must
be structured in such a way that the process-relevant qualifications required from
the candidate can be addressed and ascertained for evaluation using practice-
related documents in a reflecting technical discussion’ (BMBF, 2006a, 2006b, 6).
Arguments in favour of this form of examination say that this is about evaluating
process competence in the context of business processes and quality management in
the operational overall context. The evaluation encompasses the qualities with which
the coordination and decisions determining professional action can be perceived in
the work processes (ibid., 4). This serves to express that this is not a question of
checking for ‘professional competence’. A large number of implementation guides
and practical examples have since been used in an attempt to impart the separate
assessment of process and professional competence to examiners and examination
boards. This was not successful, because centuries of examination practice—from the journeyman’s piece to the operational order—speak against this regulation, and there is no other vocational-pedagogical justification for this concept.
For example, a recommendation for companies and examiners of North Rhine
Westphalia’s Chamber of Industry and Commerce responds to the question of
whether ‘technical questions’ are prohibited in the technical discussion following
the operational order by stating: ‘The central focus of the new examinations for the
operational order is on both the ‘processability’ and ‘technical character’, or
‘process competence’ and technical competence’. Although so-called ‘technical
questions’ are by no means prohibited during a technical discussion, they should
directly relate to the operational order’ (IHK NRW, 2010, 14 f.).
Fig. 7.9 Evaluation form for ‘operational orders’ (BMBF, 2006a, 2006b, 15 ff)
2. In the structure of the operational orders and the evaluation of their solution
according to the concept of complete learning and working. The basic theory
underlying the holistic solution of occupational tasks for the modelling of
vocational action and shaping competence therefore remains unconsidered. The
theory of complete action abstracts from the contents of professional work
processes, with the consequence that the individual competences to be taught in
modern vocational education and training are partly ignored. Professional com-
petence should enable people to solve professional tasks comprehensively. In this
case, ‘comprehensively’ also refers to the concept of complete action, but above
all to the theory of holistic problem solution. And the latter concerns the require-
ments dimension of professional competence and therefore its development as a
competence level and competence profile.
It is therefore evident that the further development of this examination concept should be based on the regulations for IT occupations (operational project). However, the criticism expressed in the evaluation studies that ‘artificial projects’ are very frequently developed solely for examination purposes—far removed from real operational practice—must be taken seriously. At the same time, it should be remembered that
the term ‘operational project’ also includes an element of prospectivity and therefore
points beyond existing practice. In contrast, operational ‘routine orders’ are oriented
to existing practice. All too easily, this could create a central idea of adaptation
qualification: qualifying for that which exists. However, this would contradict the
fundamental change of perspective introduced in VET in the 1980s with the concept
of ‘Shaping Competence’. It is therefore a question of examining professional
competence—differentiated according to the progressive competence levels of func-
tional, processual and holistic shaping competence.
A comparison of the qualification grid Q1 to Q4 with the COMET competence model shows that the highly differentiated qualification requirements Q1, Q2 and Q4 are reflected in the modelling of the requirements and action dimensions of the COMET competence model. The high affinity that exists between modern training regulations and the COMET competence and measurement model was documented in detail in
COMET Volume II.
In this respect, it remains to be examined whether and, if so, how Q3 ‘Executing
orders’ can be integrated into the COMET test procedure.
The following procedure seems suitable here:
1. The items of the rating procedure are suitable throughout not only for the
evaluation of the conceptual-planning solution of a test item, but also for the
evaluation of the execution of the operational orders. In examination practice, the
work result (product) is transferred to the client.
2. Through documentation and explanation of the order (result) as well as, for example, through explanations of how the result was handled and managed (e.g. in the form of user training).
3. In the form of an expert discussion (30 min.), during which the candidate has the
opportunity to justify the solution of his task.
4. In this regard, there is a parallel to the COMET test procedure. As is the case
during the technical discussion and during handover of their work results, the
trainees are asked to give comprehensive and detailed reasons for their proposed
solutions.
5. As the execution of an operational order (Q3) includes the examination of the
functional capability, and the complete implementation of all standards and
regulations (safety, health, environmental compatibility, etc.), the qualification
requirements encompass the identification of faults and defects as well as their
elimination. COMET competence diagnostics does not provide this implementa-
tion aspect, as this has so far been limited to measuring the conceptual-planning
solution/processing of tasks. It therefore makes sense to supplement the COMET
competence and measurement model with this partial competence of
‘implementing the plan’ (cf. Table 7.1).
It is a good idea to integrate these assessment criteria into an appropriately
modified assessment scale. This applies in particular to the sub-competences ‘clar-
ity/presentation’ and ‘functionality/professional solutions’ (Table 7.2).
Table 7.2 Adaptation of the assessment criteria to the rating or evaluation of operational orders/projects

Each requirement is rated as ‘in no way met’, ‘not met’, ‘partly met’ or ‘fully met’.

(1) Clarity/presentation
• Is the presentation form of the solution suitable for discussing it with the client?
• Is the solution presented appropriately for professionals?
• Was it possible to verify the solution’s customer friendliness?
• Is the documentation and presentation technically appropriate?
• Was it possible to arrange the handover to the client in a customer-oriented manner?

(2) Functionality/professional solutions
• Was the ‘state of the art’ taken into account during planning?
• Was it possible to react appropriately to obstacles and faults?
• Was it possible to implement the plan in practice?
• Was it possible to identify and, if necessary, to remedy the errors?
• Are the solution of the assignment and the procedure adequately justified?
The operational work assignment or the operational project work, as well as the holistic work assignment, represent competences or qualification requirements of the vocational fields of action.
The ability to work is assessed based on examination tasks which are developed
or selected according to the criteria of representativeness and exemplarity.
In the case of safety-relevant professional competences (e.g. mastery of the
relevant safety regulations for electrical installations), it makes sense to establish a
concept of competence diagnostics to accompany training. The vocational fields of
action and learning are suitable for the temporal structuring of an extended examination (cf. BMWi, 2005, p. 46). At the same time, the great advantage of such competence diagnostics accompanying training is an extended examination with a strong feedback structure. And this is particularly important in the development of vocational competence (cf. BMWi, 2005, pp. 9 and 46; Hattie & Yates, 2015, 61 ff.). This would be the first time that continuous training guidance based on the recording of vocational competence development would be regulated in a binding manner. Such a
procedure—involving the examination and testing practice of vocational schools—
would not only strengthen the quality of training but also significantly reduce the
burden on the time-consuming final examination process. This would make it easier
to justify a one-off final examination to measure the level and profile of competence
on the basis of characteristic and representative test or examination questions.
The objectivity, reliability and at the same time the validity of the content of the
examination can be realised on a high level based on the COMET competence and
measurement model, with the prerequisite of ensuring that the examiners are
instructed in the evaluation of examination results.
Comparability of the examination would be ensured by the complex and authentic (content-valid) examination tasks to be developed in accordance with the COMET competence model; the high quality criteria of the examination procedure would be ensured by a standardised rating procedure.
Recommendation E 158 applies here: ‘Part 1 of the CAP can therefore only deal with
competences which are already part of the professional competence to be consid-
ered in the final examination’ (cf. E 158, p. 11). This recommendation suggests that
the same examination format should be used for Parts 1 and 2 of the CAP.
E 158 talks about the ‘execution of a complex task typical for a profession’: ‘The
work/procedure and the result of the work are evaluated’ (p. 20). Regarding
the operational order, it says: ‘The work and procedure are evaluated. The results
of the work can also be included in the evaluation’. However, this is only possible if
the candidate not only documents, but also justifies, the results of their work and
procedure. In the test model, the CAP therefore comprises both Part 1 and Part 2 of
an operational order (or alternatively, a ‘practical task’) with the following exami-
nation parts (Table 7.3).
The planning of the operational order (OO)/Practical Task (PT)/as well as the
justification of the proposed solution and the planned procedure are evaluated by
the examination board on the basis of the standardised COMET rating scale (Appen-
dix B) in the form of a team rating.
Fig. 7.10 Evaluation of the planning and justification of an operational order/practical task
Expert Discussion
The expert discussion takes place on the basis of the preliminary assessment result
and the documentation of the OO/PT. The examiners are therefore able to check
whether the candidate ‘knows and can do more’ than the preliminary evaluation
result shows. The competence profile determined in the rating procedure and the
documentation of the OO/PT form the basis for the expert discussion (Fig. 7.10).
The preliminary evaluation result shows on which points the expert discussion
should concentrate. The weaknesses of the solution and the procedure identified in
the rating procedure are questioned again in the expert discussion. The subject of the
expert discussion is also the deviations between planning and execution of the OO or
PT. After the expert discussion, the examiners supplement their assessment with
regard to the criteria of the implementation of the plan on the basis of the
corresponding positions on the rating scale. In addition, they can correct ratings
from the perspective of the skills shown.
Practical Task
Holistic Tasks
The OO and, if applicable, the PT are evaluated using evaluation sheet A, and
the complex tasks are evaluated using evaluation sheet B (Appendix B). The
non-applicable evaluation criteria will be deleted for each task or OO/PT. The
examiners evaluate each of the remaining criteria according to the following grada-
tions (Table 7.4).
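A sketch of how such a gradation can be turned into criterion scores. The point values (0–3) and the handling of non-applicable items are assumptions made here for illustration; the authoritative weighting is that of the COMET evaluation grid (Appendix B):

```python
# Hypothetical point values for the four gradations of Table 7.2; the
# actual weighting is defined by the COMET evaluation grid, not here.
POINTS = {"in no way met": 0, "not met": 1, "partly met": 2, "fully met": 3}

def criterion_score(ratings):
    """Mean item score for one criterion; items marked non-applicable
    for this OO/PT (rated None) are deleted from the calculation."""
    applicable = [POINTS[r] for r in ratings if r is not None]
    return sum(applicable) / len(applicable)
```

Averaging only over the applicable items mirrors the rule that non-applicable evaluation criteria are deleted for each task or OO/PT.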
Application Process
Table 7.5 Brief description of the criteria for a complete task solution (industrial-technical professions)

Functionality: The criterion refers to instrumental professional competence and therefore to context-free expert knowledge. The ability to solve a task functionally is fundamental for all other demands placed on the solution of professional tasks.

Clarity/presentation: The result of professional tasks is anticipated in the planning and preparation process and documented and presented in such a way that the client (supervisor, customer) can communicate and evaluate the proposed solutions. It is therefore a basic form of vocational work and learning.

Sustainability/utility value orientation: Professional work processes and orders always refer to ‘customers’ whose interest is a high utility value as well as the sustainability of the task solution. In work processes with a high division of labour, the utility value and sustainability aspects of solving professional tasks often evaporate in the minds of employees. Vocational education and training counteract this with the guiding principle of sustainable problem solving.

Economy/effectiveness: In principle, professional work is subject to the aspect of economic efficiency. The context-related consideration of economic aspects in the solution of professional tasks distinguishes the competent action of experts.

Business and work process orientation: The criterion comprises solution aspects that refer to the upstream and downstream work areas in the operational hierarchy (the hierarchical aspect of the business process) and to the upstream and downstream work areas in the process chain (the horizontal aspect of the business process).

Social acceptability: The criterion primarily concerns the aspect of humane work design and organisation, health protection and, where appropriate, the social aspects of occupational work which extend beyond the occupational work context.

Environmental compatibility: A relevant criterion for almost all work processes, which is not about general environmental awareness, but about the occupational and subject-specific environmental requirements for occupational work processes and their results.

Creativity: An indicator that plays a major role in solving professional problems. This is also a result of the very different scope for design in the solution of professional tasks depending on the situation.
The assessment criteria (rating items) that are not relevant from the company’s
perspective are marked when applying for an OO.
• The solution space to be specified by the client must define the cornerstones for possible solutions; it is not an ideal solution proposal. Solution variants must be possible which have as high a utility value as possible in line with the situation description.
• The situation/order description also includes an overview of the technical and
business management options available in operation that are necessary for the
performance of the OO. This includes information on ordering and procurement
procedures.
• The applicant estimates the duration of the OO.
The total score results from the addition of the individual values for the eight
sub-competences. It is a rough indication of the level of competence achieved.
Candidates can compare their TS with that of their examination group/class to see where their performance stands relative to that of the other trainees. A corrected TS takes into account the degree of homogeneity of the competence profile (Fig. 7.13).
This example shows that, taking into account the competence profiles, the same
raw TS results in two different (corrected) TS(k). This means that the level of
competence of the two candidates is different. Therefore, two candidates with the
same raw TS of 42 can reach two different competence levels.
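How a homogeneity correction of this kind might be computed: the formula for TS(k) is not given at this point, so the discount by the coefficient of variation below is purely a hypothetical stand-in that reproduces the qualitative behaviour described, namely that the same raw TS yields different corrected scores for profiles of different homogeneity:

```python
from statistics import pstdev

def corrected_total_score(sub_scores, penalty=1.0):
    """Hypothetical TS(k): discount the raw total score by the
    heterogeneity (coefficient of variation) of the eight
    sub-competence scores, so that the more homogeneous profile
    receives the higher corrected score for the same raw TS."""
    raw_ts = sum(sub_scores)
    mean = raw_ts / len(sub_scores)
    cv = pstdev(sub_scores) / mean if mean else 0.0
    return raw_ts * (1 - penalty * cv / (1 + cv))
```

For a perfectly even profile the correction vanishes and TS(k) equals the raw TS; the more uneven the profile, the larger the deduction.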
Conclusion
Fig. 7.13 Correction of the raw TS values (comparison of the competence characteristics of two
commercial occupations)
4. This would also make possible the integrated assessment of in-company and school-based training, as the COMET competence model represents vocational training as a
whole. At the same time, the specific contributions of the learning locations to
achieving and verifying employability can be identified.
5. The examination results based on the COMET competence model also reveal the
strengths and weaknesses of the training performance of the learning locations.
The examination results therefore provide a good basis for educational guidance
and for quality assurance and quality development in vocational education and
training.
6. The evaluation of examination performance based on the COMET competence
and measurement model leads to the development of common evaluation stan-
dards. This examination practice should prove to be a form of informal rater
training for examiners and should be supported by introducing the examiners to
the new examination practice.
7. A high degree of interrater reliability (consistency of examiners in the assessment
of examination performance) can be achieved by dual or team rating—a proce-
dure tried and tested in COMET projects.
The informative value of examination results under this examination concept is considerably higher than that of conventional examinations.
The result reported is not only a score, but also
• The level of competence achieved
• The competence profile.
In addition, this examination form satisfies the established quality criteria of
competence diagnostics.
7.3 Interrelationship Analyses Between Examinations and Competence Diagnostics. . . 219
In the context of the feasibility study ‘COMET testing’ (implications of the COMET
test procedure for achieving a higher quality (validity, objectivity and reliability) of
final examinations according to the Vocational Training Act), which was carried out
in cooperation with the COMET project (NRW) and the Chamber of Industry and
Commerce (NRW), it made sense to carry out a case study in which the examination
results of 96 candidates (motor vehicle mechatronics technicians) were compared
with their test results in the COMET test.
A correlation-statistical interrelationship analysis was performed, which made it possible to map the relationship between two variables.
To investigate the interrelationships, differentiated scores are available both from
the final examinations and from the COMET test.
The following final examination values were used for a differentiated analysis:
• Mean value from the practical examination part
• Mean value from the written part of the examination
• The total examination score.
The COMET scores can be divided into the following areas according to the
competence model:
• Score for the competence level functional competence (FC)
• Score for the competence level procedural competence (PC)
• Score for the competence level (holistic) shaping competence (DC)
• Total score (TS).
Correlations can be used to map relationships between two characteristics, with the
correlation coefficient 'r' quantifying the strength of the relationship. Correlations
from r = 0.2 to r = 0.4 are considered weak, correlations between r = 0.4 and
r = 0.6 medium, and correlations above r = 0.6 strong (cf. Brosius, 2004).
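These conventions can be sketched in a short Python helper. The threshold values follow the Brosius classification quoted above; the function names and the paired sample scores are illustrative assumptions, not data from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

def strength(r):
    """Classify |r| following the conventions cited from Brosius (2004)."""
    a = abs(r)
    if a < 0.2:
        return "negligible"
    if a < 0.4:
        return "weak"
    if a < 0.6:
        return "medium"
    return "strong"

# Hypothetical paired scores: chamber examination vs. COMET total score
exam = [62, 71, 55, 80, 67, 74, 59, 88]
comet = [30, 35, 22, 41, 28, 40, 25, 44]
r = pearson_r(exam, comet)
print(f"r = {r:.2f} ({strength(r)})")
```

The same helper can be applied to any of the score pairings discussed in this section.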
For the data of the automotive mechatronics technicians, the two test instruments
were first examined separately. It is evident that the elements of the chamber
examination—practical part, written part and overall assessment—are strongly
interrelated and therefore coherent. The areas of the COMET test are also closely
related and therefore measure the same construct. Each test forms a coherent unit in
its own right. A comparison of the two tests revealed only a weak correlation of
r = 0.25 (p < 0.05).
The differentiated interrelationship analyses of individual elements of the two tests
yielded the following results:1
The score from the practical examination correlates
• strongly with the score from the written examination (r = 0.63; p < 0.01),
• not with the COMET total score (TS) (r = 0.17; not significant),
• weakly with functional competence (FC: r = 0.25, p < 0.05; PC: r = 0.2),
• not with the competence level of procedural competence (r = 0.17; not significant) and
• not with the competence level of shaping competence (DC) (r = 0.06; not
significant).
Although the result from the written part of the final examination correlates only
weakly, it correlates significantly with the COMET values for
• the TS (r = 0.29; p < 0.01),
• functional competence (FC) (r = 0.31; p < 0.01) and
• procedural competence (PC) (r = 0.28; p < 0.01).
The correlation between the score of the written examination and the score for
COMET shaping competence (DC) is very weak and statistically insignificant; the
calculated correlation coefficient r may reflect a chance relationship.
The overall examination result correlates at a low level with the COMET values for
• the TS (r = 0.25; p < 0.05),
• FC (r = 0.29; p < 0.01) and
• PC (r = 0.24; p < 0.05).
There is no demonstrable link between the overall result of the final examination
and the (holistic) shaping competence identified in the COMET competence model
(r = 0.12; not significant).
The weak positive correlation between the overall score of the final examination and
the COMET test result indicates that higher scores in the examination tend to be
accompanied by higher scores in the COMET test (see Fig. 7.14).
1 All calculations exclude extreme values, i.e. written part (chamber) > 0, practical part
(chamber) > 0, overall mark (chamber) > 5, functional competence > 0, procedural
competence > 0, shaping competence > 0 and TS > 5.
Fig. 7.14 Relationship between examination result (chamber test) and COMET total score, without
extreme values (TS > 5 and overall score > 5; r = 0.25, p < 0.05; R² = 0.09)
Practical Examination
It can first be assumed that this part of the examination hardly correlates with
the values of competence diagnostics, as COMET is limited to measuring
conceptual-planning competence. The correlations with the TS, r = 0.17 (not
significant), with the competence level of functional competence (FC), r = 0.21
(p < 0.05), and with procedural competence (PC), r = 0.17 (not significant), show that
there is almost no relationship. Due to the divergence in content between the two
examination formats (the practical chamber examination and the written COMET
test), interpreting the results only partly yields added value. In
the practical examinations of the chambers, the trainees are asked to implement
the theoretical problem solution, while the COMET test asks them to formulate the
solution in writing. This is where translation errors can occur. The purely cognitive
solution of a problem does not necessarily mean that the actions are performed
according to the calculated procedure.
Fig. 7.15 Relationship between the written examination (chamber examination) and functional
competence (COMET); r = 0.31, p < 0.01, R² = 0.1 (without extreme values)
Written Examination
As expected, the most pronounced correlations with the COMET test are found for this
part of the examination. This applies predominantly to functional competence
(FC), with r = 0.31 (p < 0.01). A high score in the written part of the final
examination therefore goes hand in hand with a high score in the functional
competence area of the COMET test.2 This weak but still significant correlation is
shown in Fig. 7.15.
The correlation with procedural competence (PC) is somewhat weaker, with r = 0.28
(Fig. 7.16).
In contrast, with r = 0.17 (not significant), there is no correlation with shaping
competence (DC) (Fig. 7.17).
2 Conversely, an upstream COMET test at the level of functional competence would be good
preparation for achieving high scores in the written part of the final examination.
Fig. 7.16 Relationship between the written examination (chamber examination) and procedural
competence (COMET); r = 0.28, p < 0.01, R² = 0.08 (without extreme values)
Fig. 7.17 Relationship between the written examination (chamber examination) and shaping
competence (COMET); r = 0.17, not significant, R² = 0.02
7.3.3 Conclusion
The higher the score for the written examination, the higher the score for functional
and procedural competence—and vice versa.
Higher scores in the written examination do not, however, indicate holistic shaping
competence; this is not covered by the examination.
This interrelationship analysis therefore confirms the findings of the empirical
surveys cited: the objectives of process-oriented and competence-oriented vocational
education and training anchored in the training regulations are not covered in
examination practice. With the COMET test format, this deficit can be eliminated for
the written part of the examinations. Applying the COMET examination procedure
to the entire examination requires a supplemented competence model.
The application of the COMET examination concept for implementing examina-
tions in vocational education and training has a number of advantages.
1. The COMET competence and measurement model is an interdisciplinary proce-
dure for selecting and formulating holistic examination tasks and organisational
orders. The content dimension of the competence model must in each case be
concretised by the vocational fields of action that are relevant for describing the
content of employability.
2. On this basis, examinations can also be made comparable on a supra-regional and
cross-occupational basis, even under the ground-breaking examination concept of
the company mandate.
3. This would introduce a scientifically based examination strategy that would
greatly facilitate the understanding of all those involved in VET.
4. The problem of the integrated assessment of in-company and school-based training
would thus be solved, since the COMET competence model represents vocational
training as a whole. At the same time, the specific contributions of the learning
locations to achieving and verifying employability can be identified.
5. The examination results based on the COMET competence model also reveal the
strengths and weaknesses of the training performance of the learning locations.
The examination results thus provide a good basis for training guidance and
quality assurance in training.
6. The evaluation of examination performance on the basis of the COMET compe-
tence and measurement model leads to the development of common evaluation
standards. This examination practice should prove to be a form of informal rater
training for examiners. This should be supported by introducing the examiners to
the new examination practice.
7. A high degree of interrater reliability (consistency of examiners in the assessment
of examination performance) can be achieved by dual rating (two examiners)—a
procedure tried and tested in COMET projects.
8. The informative value of the examination results under this examination
concept is significantly higher than that of conventional examinations. They
identify not only the score but also the level of competence achieved and the
competence profile.
In addition, this examination form satisfies the established quality criteria of
competence diagnostics.
7.4 Measuring the Test Motivation
The influence of test motivation on test behaviour and test results is the subject of
differing and sometimes contradictory research findings. As part of the first PISA test
(2000), an additional experimental study was conducted in Germany in order to gain
insights into the test motivation of different groups of pupils (distinguished by
school type) and into the influence of various external incentives on test motivation.
The overall result of the experiment is that the different experimental groups do not
differ in their willingness to make an effort (Baumert et al.: PISA 2000, 60). The
results of the study also suggest that external incentives need not be used, as their
effects are negligible. Differences in test motivation between 'Hauptschule' pupils
(general schools in Germany offering lower secondary education) and 'Gymnasium'
pupils (high schools in Germany offering higher secondary education) could not be
established in this study. Regardless of the type of school, the effort was relatively
high throughout. The various incentives had no significant influence on the test
results (ibid., p. 27 ff.).
In the first COMET project, a motivation test was therefore not included. How-
ever, test practice then suggested that the test motivation should be considered as a
context variable in the second test as part of the longitudinal study.
Fig. 7.18 Cross-over design for the use of test items in the longitudinal study (COMET Vol. I, 144 f.)
The test comprised a total of four complex test items (COMET Vol. I, 144 f.). Each
test participant had to solve two test items, with a maximum of 120 min available to
solve each test item. After the first test of the one-year longitudinal study, the
observation of the teachers involved in the test already indicated that the motivation
to work on the test items played a greater role than had initially been assumed. It was
therefore examined how the test time of 2 × 120 min was used by the test
participants and what proportion of them were test refusers. This provided first
indications of the test participants' test motivation and of how to record it.
In a pre-test, it was first examined whether there was a systematic drop in
performance when processing the second complex test item. In the evaluation of
the test results, a distinction was made between trainees in their second and third year
of training and students from technical colleges.
The experiences of the first phase of the study already suggested that the
motivation to complete the test items played a greater role than assumed among
the various groups of trainees. On the one hand, reports by teachers indicated that
test motivation among vocational school students varies. On the other hand, some of
the trainees only partially exhausted the test time of 2 × 120 min; some pupils did
not seriously complete the test items, and they form the group of test refusers.3
Based on these findings, the test motivation and test behaviour were recorded at
the second test time (March/April 2009), whereby the formulation of the questions is
based on PISA test practice (Kunter et al., 2002). In addition, the teachers conducting
the tests answered questions on test motivation in the classroom and on the working
atmosphere; the results can be used for comparison at class level. In addition, the
comparison of processing time and test result of the first and second test items allows
conclusions to be drawn about the course of motivation during the test.
Results of the Survey on Test Motivation (cf. COMET Vol. III, Sect. 7.7)
The general interest of trainees and students in the COMET test items varies greatly.
More than half of the electronics technicians for industrial engineering found the first
test item interesting to very interesting (55%). This figure is even higher for electronics
technicians specialising in energy and building technology (61%) and for technical
college students (60%).
Overall, all test groups worked on the first task in a concentrated (73%) and careful
(65%) manner. On the other hand, it is noticeable that almost every second elec-
tronics technician for industrial engineering reports being less motivated to work
on the second task than on the first; this is stated by only every fifth electronics
technician specialising in energy and building services engineering and every fifth
technical college student (Fig. 7.19).
3 A refuser is a participant who achieved a total score of less than five points or who
completed both test items together in less than 60 min. In Hesse in 2009, 24 participants were
identified as refusers by this definition, of which ten were E-EG trainees (7%) and
fourteen E-B trainees (6%).
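The refuser definition in the footnote above can be expressed as a small predicate; the function name and the example participants are hypothetical, not the Hesse data:

```python
def is_refuser(total_score, minutes_worked):
    """Flag a test refuser per the footnote definition: total score below
    five points, or both test items completed in under 60 min combined."""
    return total_score < 5 or minutes_worked < 60

# Hypothetical participants: (total score, combined processing minutes)
participants = [(3.2, 180), (28.0, 45), (26.5, 200)]
print([is_refuser(s, m) for s, m in participants])  # → [True, True, False]
```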
Fig. 7.19 Frequency distribution according to test groups: 'I am (a) less motivated to work on the
second test item, (b) similarly motivated, (c) more motivated than for the first test item'
In this context, by comparing the test results of the two test phases (2 × 120 min for
two test items), it is possible to examine whether and for which test groups there are
significant differences in the test results between the first and second test item. If the
test result of a test group is worse for the second test item, this can be interpreted as
an indication of decreasing test motivation.
This effect is not present for all test groups. For the apprentice electronics
technicians specialising in energy and building services engineering, there is no
difference between the results of the first and the second test item. This
may be because this group has a rather low overall test level. The electronic
technician trainees for industrial engineering achieve a significantly better result in
the first task than in the second: 15% achieve the highest level of competence in the
first test item and only 6% in the second test item (cf. Fig. 7.20). This also
corresponds to the lower motivation of this test group for the second test item
described above. In the case of the first task, the risk group is only 10%; in the
case of the second task, this figure rises to 23% (cf. Fig. 7.21). Here, too, a test for
Fig. 7.20 Competence level distribution of the group of electronics technicians for industrial
engineering (Hesse), comparison of the results for the first and second test item, 2009
Fig. 7.21 Competence level distribution of the group of technical college students (Hesse),
comparison of the results for the first and second test item, 2009
mean value differences4 shows that the average total score for the first task is
significantly higher than for the second task.
This reflects a considerable loss of motivation, especially among weaker
students. Higher-performing students improve from the first to the second test
item. The majority, however, do slightly worse than in the first task. Figure 7.22
illustrates this effect: each cross in the diagram represents a trainee, the axes show
4 t-test for dependent samples.
Fig. 7.22 Scatter diagram comparing the results for the first and second task (Hesse, electronics
technicians for industrial engineering, 2009, n = 297)
the total scores achieved for each of the two tasks; the horizontal and vertical lines
mark the average total score (26.5) for both tasks. The diagonal line divides the
graphic into two parts: the top part (A) shows the trainees who did better in the
second task than in the first, and the bottom part (B) those who did worse. Part B
contains significantly more participants (63%).
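The mean-value test named in footnote 4 (a t-test for dependent samples) can be sketched as follows; the paired scores are synthetic stand-ins for the Hesse data, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired scores: the same 200 trainees on the first and
# second test item (synthetic, with scores dropping by ~3 points on average)
first = rng.normal(30, 6, size=200)
second = first - rng.normal(3, 4, size=200)

# t-test for dependent samples: t = mean(d) / (sd(d) / sqrt(n))
d = first - second
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(f"t = {t:.2f} (df = {len(d) - 1})")

# With df = 199, |t| > ~1.97 indicates a significant difference at p < 0.05
if t > 1.97:
    print("mean score on the first item is significantly higher")
```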
Comparison of the Processing Time for the First and Second Test Item
The recording of the processing time also allows an assessment of the extent to
which the motivation decreases in the course of the test. For the first test item, the test
participants worked an average of 100 min and for the second test item only 83 min.
This can be interpreted as fatigue or decreasing motivation.
However, a shorter processing time cannot be attributed exclusively to a lack of
motivation from the start of the test. It must also be considered that an overly
demanding task can lead a participant to end the test early. This, however, is
contradicted by the fact that there is only a small correlation between the test result
and the processing time.
This pre-test on the relationship between test motivation, processing time and test
results led to the decision to reduce the test scope for each test participant to the
processing of a test item. Only in subsequent projects was test motivation included
more comprehensively in the context analysis as a determinant of competence
development.
When recording test motivation, a distinction is made between primary and second-
ary motivation aspects.
The primary motivational aspects are
• The assessment of the occupational relevance of the test items. It is assumed that
for test participants with a developed professional identity, the occupational rele-
vance of the test items has a motivating effect on the processing of the test items.
• The benefit of the test items. The evaluation of the benefit of the test items results
from the assessment of the test participants that participation in the test has a
positive effect on training.
• The interest in task processing represents another primary motivational aspect. It is
based, on the one hand, on the two other primary motivational aspects and, on the
other, on interest in the content of the tasks.
The secondary motivational aspects are
• Commitment
• Concentration
• Diligence
• Task-solving effort
(cf. the test motivational model in Fig. 7.23).
The primary motivational aspects represent the evaluation of the test items as
relevant for vocational education and training, without this already being associated
with a willingness to make an effort in processing the test items. If, for example, a
test is conducted just before a final examination, this may lead to a lack of interest in
the test, as it is perceived as a disruption in exam preparation. The test motivation is
then impaired without affecting the evaluation of the occupational relevance of the
test items and their basic benefit for vocational training. Interest in the test items,
and in the test itself, results from their occupational relevance and, at the same time,
from the benefit of the test for training and, where applicable, for examination
preparation.
The secondary motivational aspects result from the primary motivational dimen-
sion. The four secondary motivational aspects represent different aspects of the
willingness to make an effort.
The recording of the processing time for the solution of the test items can be
regarded as a dimension of the test motivation, as shown by the study cited above. At
the same time, it is immediately evident that the processing time is also an indicator
of the competence level. The test results show that more efficient test participants use
the available test time (120 min) to a greater extent than less efficient test partici-
pants. The processing time is therefore an indicator for both the competence level
and the test motivation.
Dear trainees,
We would like to hear from you how you assess the test task you have worked on. For this
purpose we would like to ask you for some information. Then please place this sheet in
the envelope provided for your task solution.
For things that are very important to you personally, you make a special effort and
give your best (e.g. sports, hobbies, ...).
In comparison, how much effort did you put into the test task?
(Please mark with a cross!)
1 2 3 4 5 6 7 8 9 10
minimum effort maximum effort
In the previous COMET projects, test motivation was analysed on the basis of the
individual items. An exploratory factor analysis was carried out to check whether
connections between the observable motivational aspects can be explained by
superordinate dimensions. This makes it possible to uncover non-observable (latent)
dimensions underlying the observable items.
The test motivational model is based on the hypothesis that perceived occupational
relevance, benefit and interest in the submitted test items lead to careful,
concentrated processing.
Sample
The data were collected during a COMET test of second- and third-year nursing
students from a total of six locations in Switzerland (locations: Aarau, Basel, Bern,
Lucerne, Solothurn, Zurich/Winterthur). A total of N = 477 persons took part in the
survey, 87% of whom were female (n = 417).
The items used in the motivation questionnaire (Table 7.7) were therefore subjected
to an exploratory factor analysis, which is intended to reveal the underlying structure
of the items. Due to the correlative character of the items, it is assumed that possible
factors also correlate with each other. Accordingly, an oblique (direct oblimin)
rotation is used for the factor analysis, which allows for a possible correlation
between the factors.
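A minimal numpy-only sketch of the extraction step (unrotated loadings from an eigendecomposition of the correlation matrix) may clarify the procedure. The seven synthetic items and the two correlated latent dimensions are assumptions mirroring the reported structure; a real analysis would add the oblique rotation on top of this:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 477  # sample size reported for the Swiss nursing data

# Synthetic stand-in for the seven motivation items: two correlated
# latent dimensions (cf. the reported factor correlation of r = 0.66)
latent = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.66], [0.66, 1.0]], size=n)
items = np.column_stack(
    [latent[:, 0] + rng.normal(0, 0.6, n) for _ in range(4)]    # investment-type items
    + [latent[:, 1] + rng.normal(0, 0.6, n) for _ in range(3)]  # meaningfulness-type items
)

# Unrotated extraction: eigendecomposition of the correlation matrix;
# loadings are the top eigenvectors scaled by the root of their eigenvalues
R = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1][:2]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
print(np.round(loadings, 2))
```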
As a result, two factors (motivational dimensions) can be extracted. The factor
loadings are listed in Table 7.8.
Factor 1 consists of the items Commitment, Diligence, Concentration and Effort.
The statements on Interest, Meaningfulness and Occupational relevance load on
factor 2. With regard to the content of the items (cf. Fig. 7.24),
factor 2 can be interpreted as meaningfulness (primary motivation). The factor describes
the benefits for the professional future identified in the test items and links an interest
Table 7.7 Test instruments for the first and second test time of the first COMET project
(COMET Vol. II, 41)

Testing instrument | Use from test time
Open test items | t1 (2008)
Context questionnaire | t1 (2008)
Questionnaire on test motivation for trainees | t2 (2009)
Teacher questionnaire on the test motivation of trainees | t2 (2009)
Rater questionnaire on the weighting of competence criteria | t2 (2009)
Test of basic cognitive ability (subtest 'figure analogies' of the cognitive ability test (CAT)) | t2 (2009)
Table 7.8 Results of the factor loadings of the motivational items on the extracted factors (data:
nursing staff Switzerland 2014, N = 477)

Item | M | SD | rit | Factor 1 | Factor 2
Commitment | 2.48 | 1.00 | 0.78 | 0.89 | 0.01
Diligence | 2.44 | 0.95 | 0.74 | 0.87 | 0.01
Concentration | 2.58 | 1.01 | 0.71 | 0.82 | 0.03
Effort | 4.86 | 2.22 | 0.67 | 0.74 | 0.04
Interest | 2.44 | 1.04 | 0.65 | 0.03 | 0.83
Meaningfulness | 2.37 | 1.02 | 0.60 | 0.02 | 0.77
Occupational reference | 2.99 | 1.11 | 0.36 | 0.02 | 0.49

Comments: factor loadings > 0.30 are marked bold; Bartlett test: χ² = 1709.19 (df = 21), p < 0.001;
Kaiser-Meyer-Olkin (KMO) measure = 0.86. N = sample size; M = mean value; SD = standard
deviation; rit = item-total correlation (selectivity).
Fig. 7.24 Results of the exploratory factor analysis (data: nursing staff Switzerland 2014,
N = 477). r = correlation coefficient; a = factor loading
with them. Factor 1 describes the behaviour when processing the test items and is
referred to as investment (secondary motivation). The term investment refers to the
motivational resources deployed during processing.
The two extracted factors correlate positively at r = 0.66 and together explain 62%
of the total variance.
The studies cited above already indicated a connection between test motivation and
performance. With regard to the content of the two factors, it is reasonable to
assume that meaningfulness functions as the primary motivational dimension and
investment as the secondary one, since people who consider the test meaningful
presumably also invest more in processing it. The mediator analysis described
below was carried out in order to investigate further the relationship between the
two extracted motivational dimensions and their relationship to test performance.
Based on the results of the factor analysis described above, it was examined whether
the two motivational dimensions make an explanatory contribution to the COMET
test performance (measured as the total score (TS)).
The following hypothesis was made for the statistical investigation of the
problem:
The motivation factors meaningfulness and investment causally explain the
competence performance in the COMET test procedure. As a mediator, the invest-
ment factor mediates the connection between the meaningfulness factor and the
performance factor (TS) in the COMET test procedure.
Method
To investigate the hypothesis statistically, a mediator analysis was carried out with
meaningfulness as the independent variable, investment as the mediator variable and
TS as the dependent variable. The analysis followed the four steps usual for a
mediator analysis (Preacher & Hayes, 2004), in which several linear regression
analyses were calculated. In step 1, the total score (TS) was regressed on
meaningfulness. In step 2, investment was regressed on meaningfulness; in step 3,
the total score was regressed on investment. In the last step, a multiple regression of
the total score on meaningfulness and investment was examined.
The Sobel test was also carried out to check the statistical significance of the
mediation effect found (Preacher & Hayes, 2004).
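The regression steps and the Sobel test can be sketched with plain least-squares fits. The data below are simulated under assumed effect sizes, and `ols` is a hypothetical helper, not the procedure actually used in the study:

```python
import numpy as np

def ols(y, X):
    """OLS with intercept; returns coefficient vector and standard errors."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

rng = np.random.default_rng(2)
n = 477
meaning = rng.normal(0, 1, n)                   # independent variable
invest = 0.6 * meaning + rng.normal(0, 0.8, n)  # mediator
score = 4.8 * invest + rng.normal(0, 8, n)      # dependent variable (TS)

# Path a (step 2): regress the mediator on meaningfulness
a_beta, a_se = ols(invest, meaning)
a, sa = a_beta[1], a_se[1]
# Path b (step 4): regress TS on both mediator and meaningfulness
b_beta, b_se = ols(score, np.column_stack([invest, meaning]))
b, sb = b_beta[1], b_se[1]

# Sobel test statistic for the indirect effect a*b; |z| > 1.96 ~ p < 0.05
z = a * b / np.sqrt(b ** 2 * sa ** 2 + a ** 2 * sb ** 2)
print(f"a = {a:.2f}, b = {b:.2f}, Sobel z = {z:.2f}")
```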
Outcomes
The results of the mediator analysis are shown in Fig. 7.25. The analysis showed that,
with a corrected R² = 0.043, the variable meaningfulness explains 4.3% of the
variance in the total score. Even if this proportion of variance is small, the regression
model from step 1 is significant, with F(1;475) = 22.13, p < 0.001. In this
model, the relationship between meaningfulness and total score is significant and
positive (b1 = 4.31; t = 4.70; p < 0.001). The results of the regression from step 2
show that, with F(1;477) = 210.48 and p < 0.001, the regression is significant. This
model explains 30.5% of the variance of investment. The relationship between
meaningfulness and investment is positive and significant (b2 = 0.60; p < 0.001).
The regression of the third analysis step showed a significant result, with F(1;475)
= 32.53 and p < 0.001. The model explains 6.4% of the variance of the total score.
The relationship between investment and total score is also significant and positive
(b3 = 4.79; p < 0.001).
The regression from step four indicates that the two variables meaningfulness and
investment together explain a significant 6.8% of the total variance of the total score.
Fig. 7.25 Mediator analysis to determine the relationship between primary and secondary moti-
vation and performance in the COMET test procedure (data: nursing staff Switzerland 2014;
N = 477). bi = regression coefficient; *p < 0.05; **p < 0.01
Discussion
The results of the mediator analysis indicate that investment completely mediates
the connection between meaningfulness and performance. The result of the Sobel
test confirms this. This means that people who see more benefit in the COMET
test procedure invest more and in this way achieve better performance. Meaning-
fulness can therefore be confirmed as the primary motivation and investment as the
secondary motivation. This confirms the assumption made in earlier COMET studies
that more motivated people also achieve better test results. However, the analysis
clarifies that the willingness to invest in the test items depends on how strongly the
benefit of the test is perceived. In future, therefore, test participants should be made
aware of the benefit of the COMET test procedure before the test is carried out, in
order to avoid poor test performance resulting from a poorly perceived benefit.
The relatively low explained variance of the overall model (6.8%) indicates that a
large part of the variance in performance remains unexplained by the present model:
in addition to the motivational components, numerous other factors evidently
influence the test performance.
Given the relatively low proportion of performance variance explained by the two
motivational factors in the mediator analysis, the variable processing duration could
increase the explained variance. Based on the hypothesis that a comprehensive,
reflected task solution with detailed justifications (as required by the COMET tasks)
inevitably takes longer, the processing time continued to be recorded even after the
COMET test had been shortened to one task, so that this aspect could be investigated
further as an indicator of test motivation.
The processing time for the test was recorded on a four-point scale. In this sample,
it is apparent that almost half of the trainees used the full processing time. The
correlation of the processing time with the total score, r = 0.53, is significant: there
is a medium-strong positive correlation between the processing time of the COMET
test and the total score achieved.
The question of the interaction between the motivational aspects of meaningful-
ness and investment, the processing time and the total score will now be examined in
greater depth. The research interest here is particularly aimed at investigating the
influence of processing time on the performance of the trainees in addition to the two
motivational dimensions.
For this purpose, the factors investment and meaningfulness as well as the
processing time are examined in a hierarchical regression analysis with the total
score as the dependent variable (Table 7.9).
Table 7.9 Summary of the hierarchical regression analysis for predicting the variable 'total score'
(n = 462)

Variable | B | SE | β | R² | Corrected R² | p
Step 1
Meaningfulness | 2.47 | 1.12 | 0.12 | 0.077 | 0.073 | 0.028
Investment | 3.62 | 1.03 | 0.19 | | | 0.001
Step 2
Meaningfulness | 1.30 | 0.99 | 0.06 | 0.296 | 0.291 | 0.187
Investment | 2.69 | 0.91 | 0.14 | | | 0.003
Processing time | 19.02 | 1.60 | 0.48 | | | <0.001

Note: dependent variable: total score; the variable duration has been dichotomised: (0) = 0–1 h,
(1) = 1–2 h
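The two-step hierarchical regression can be sketched as an R² comparison. The simulated coefficients below only loosely mirror Table 7.9 and are not the study data; `r_squared` is an illustrative helper:

```python
import numpy as np

def r_squared(y, X):
    """R² of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

rng = np.random.default_rng(3)
n = 462
meaning = rng.normal(0, 1, n)
invest = 0.6 * meaning + rng.normal(0, 0.8, n)
duration = (rng.random(n) < 0.5).astype(float)  # dichotomised: 0 = 0-1 h, 1 = 1-2 h
score = 2.5 * meaning + 3.6 * invest + 19.0 * duration + rng.normal(0, 10, n)

# Step 1: motivation factors only; step 2: add processing time
r2_step1 = r_squared(score, np.column_stack([meaning, invest]))
r2_step2 = r_squared(score, np.column_stack([meaning, invest, duration]))
print(f"R2 step 1 = {r2_step1:.3f}, step 2 = {r2_step2:.3f}, "
      f"delta = {r2_step2 - r2_step1:.3f}")
```

The increase in R² from step 1 to step 2 corresponds to the additional variance attributed to the processing time.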
The test motivation of nursing students is above average; on the criterion of
'commitment' it varies between locations (training centres) from 6.1 to 7.5 (on a
scale of 1–10) (Fig. 7.26).
This shows that the test participants' willingness to commit at the individual
training centres shifted over the course of a year.
If one combines the values of the primary and secondary motivational aspects
into a motivation profile, then four special features are highlighted (Fig. 7.27).
At many locations, students rate the occupational relevance of the test items as
high to very high.
It is therefore surprising that the benefit of the test is sometimes estimated to be
significantly lower. If one assumes that these students relate the benefit of the test
(the processing of the test items) to their educational situation, then this supports the
thesis that the students clearly distinguish between their education and the occupa-
tion to be learned. A clarification of this difference can be based in a first step on the
evaluation of the context data on training quality (see below).
The interest in the test seems to be fuelled by its professional relevance and
benefits: the values for the interest aspect of motivation therefore frequently lie
between the assessments of these two other primary motivation aspects.
The secondary motivational aspects obviously represent one and the same, largely
concurring motivational dimension (see below).
In the course of a year, there has been a significant increase in test motivation at
Training Centre A. In contrast, the test motivation at another training centre (L) has
dropped. Using the context data, the teachers were able to clarify this development in
a feedback workshop. Here, for example, conflicts within study programmes or
identification with the test procedure as the cause of the different test motivation
of the participants were discussed (Evaluation Workshop of Nursing Training
Switzerland, January 2015).
Fig. 7.27 Test motivation at the various locations, first main test
Based on the factor analysis described above, the test motivation results can be
summarised in a factor matrix (Fig. 7.28). The factor values indicate whether the test
group achieved above- or below-average scores in relation to the sample examined.
Values >0 represent above-average and values <0 below-average motivation levels.
Figure 7.28 shows that the test participants of the training centres E and C 2013 have
an above-average test motivation—in relation to both motivation dimensions.
Complementary to this, the test participants of the training centres F and B show
below-average test motivation in 2013.
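The reading of the factor matrix (values > 0 above-average, values < 0 below-average) can be illustrated with a small sketch. The centre names and mean ratings below are hypothetical; standardising the group means across the sample yields the sign-based classification described above.

```python
import numpy as np

# Hypothetical mean motivation ratings per training centre (1-5 scale),
# columns: [meaningfulness, investment]; the values are illustrative only.
centres = ["A", "B", "C", "E", "F"]
means = np.array([
    [3.6, 3.9],   # A
    [3.1, 3.2],   # B
    [4.1, 4.3],   # C
    [4.3, 4.4],   # E
    [3.0, 3.1],   # F
])

# Standardise each dimension across the sample: values > 0 mark
# above-average, values < 0 below-average motivation (as in Fig. 7.28).
z = (means - means.mean(axis=0)) / means.std(axis=0)

for name, (z_meaning, z_invest) in zip(centres, z):
    label = "above average" if z_meaning > 0 and z_invest > 0 else "below average/mixed"
    print(f"Centre {name}: meaningfulness {z_meaning:+.2f}, investment {z_invest:+.2f} ({label})")
```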
For the participating training centres, it is interesting to see how their position in
the factor matrix changed from the first to the second test time—in the course of one
year (Fig. 7.29). This shows that the heterogeneity of the test motivation has
decreased slightly overall and that the test motivation has changed significantly on
some occasions.
Within the scope of the first main test of the electronic engineers for industrial
engineering (E-B), as well as the electronic engineers for energy and building
Fig. 7.30 Comparison of E-B and E-EG according to interest and evaluation of benefit. Legend: 2 = rather agree / 3 = undecided / 4 = rather disagree (the categories 1 = fully agree and 5 = fully disagree were not used)
Fig. 7.31 Comparison of E-B and E-EG according to care and concentration. Legend: 2 = rather agree / 3 = undecided / 4 = rather disagree (the categories 1 = fully agree and 5 = fully disagree were not used)
technology (E-EG) (NRW), the test motivation was also recorded. If one compares
the test motivation of the two test groups with each other, there are clear differences
in the level of motivation (Figs. 7.30, 7.31 and 7.32).
Deviating from the widespread thesis that there is no significant correlation
between competence levels and test motivation, these two test groups (E-B:
N = 170; E-EG: N = 192) show a pronounced correlation between test motivation
and competence development. This connection is particularly clear in the case of the
E-EG group for the item ‘Interest in task processing’ (Fig. 7.30). The test motivation data
were available when interpreting the COMET test results. In the context of the
feedback workshops with the occupation-related project groups also mentioned in
this text, comparisons were made repeatedly between motivation during examina-
tions—on the one hand—and participation in COMET tests—on the other. Experi-
enced examiners justified their assumption that the examination motivation of the
trainees was significantly higher than the test motivation of the COMET test
Fig. 7.32 Comparison of E-B and E-EG according to commitment and effort in solving the test items. Legend: 2 = rather agree / 3 = undecided / 4 = rather disagree (the categories 1 = fully agree and 5 = fully disagree were not used)
procedures. After all, a good examination result would also benefit a professional
career, while participation in a COMET test would at best provide insights into the
state of competence development.
Figure 7.33 shows a striking difference between test and exam motivation.
The importance of the primary motivational aspects of occupational relevance
and benefit is (clearly) rated higher for the COMET test procedure than for exam-
inations. The large difference for the primary motivation criterion of occupational
relevance is particularly striking. The occupational reference is rated significantly
higher for the COMET test (MV = 4.2) than for the examinations (MV = 3.2). The
evaluation of the tests under the aspect of benefit is indifferent, with a mean value of
MV = 3.0. The final examination has neither a high occupational relevance nor a
high benefit (for the training and the concrete occupational activity) for the motor
vehicle trainees.
Fig. 7.33 Primary motivation aspects in the comparison of examinations and COMET tests
Fig. 7.34 Secondary motivational aspects in the comparison of examinations and COMET tests
The values for the secondary motivational aspects are complementary. In the case
of examinations, the willingness of the trainees to make an effort is significantly
higher than in the case of tests performed as part of competence diagnostics
(Fig. 7.34).
Overall, this results in a complementary motivation structure for COMET tests
and final examinations. The primary motivation factors (excluding interest) are rated
higher for the COMET test procedure. This is based on a slightly above-average
motivation regarding the secondary motivational factors. In other words, the test
motivation results from the assessment of the test participants that the test items have
high occupational relevance and are therefore highly beneficial for training.
The motivation profile for examinations is completely different. The secondary
motivation factors are highly rated: on average between MV = 4.2 and 4.5. These high
motivation values for the indicators care, concentration, effort and commitment
stand in clear contradiction to the indifferent ratings of the primary motivation
factors. The insight of the trainees that a good examination result is beneficial for
their professional career obviously characterises the high examination motivation,
which is not diminished by the fact that the professional reference and the benefit of
the exam are assessed as indifferent.
This case study in a highly sought-after training occupation supports the interest
of organisations in the working world in an application of the COMET competence
and measurement model for the procedures of final examinations.
The example of the COMET project South Africa (electronics technician) shows that
the national labour market and training structures—the cultural context—are a
decisive determinant of the test and training motivation of the trainees. The example
of COMET (electronics technician) South Africa confirms this thesis particularly
impressively, as all the courses involved have achieved only a low level of compe-
tence in an international comparison (Germany, China, South Africa) (COMET RSA
Study 2013). At the same time, the test participants were very highly motivated. In
summary, the project report states:
The test takers were highly motivated and interested in the test items. Still, the test results are
often below the level of functional competence. Professional and holistic shaping compe-
tence have rarely been reached. On the other hand, the South African learners were very
motivated to take the test and are very committed to their learning in general, as well
(ibid., 44).
Both the occupational orientation and the benefit of the test items were highly
rated by the test participants (Figs. 7.35 and 7.36).
When asked about their interest in processing the test items, 53% of the respon-
dents stated that they were very highly interested and 34% were highly interested.
Only 10% stated that they were less interested in the test items and a further 10%
were not at all interested.
The effort with which the test items were processed was correspondingly high
(Fig. 7.37).
These high values of the test motivation correlate with the identification with the
training companies (Fig. 7.38).
Fig. 7.35 Evaluation of the occupational relevance of the test items (COMET RSA, 2013)
Fig. 7.36 Evaluation of the benefit of the test items (COMET RSA, 2013)
The high youth unemployment rate in South Africa is regarded as the decisive
reason for the high organisational identity and the very high training motivation of
young people who have a training contract. This is also transferred to the test
motivation.
Fig. 7.37 Evaluation of the effort spent on the test items (COMET RSA, 2013)
Fig. 7.38 Identification with the companies providing in-company vocational training
7.4.6 Conclusion
The in-depth analysis of the individual motivational aspects makes it clear that the
connections between the individual motivational items can be explained by two
underlying dimensions: meaningfulness and investment.
It has been shown that the test performance of the trainees is determined by three
factors: estimating the meaningfulness (primary motivation) of the test items as high
usually leads to a committed processing of the test items (secondary motivation/
investment). Both motivational dimensions have a significant influence on the test
performance. Highly motivated test participants therefore achieve better results.
However, the motivational dimensions do not fully explain the variance of the test
performance. Another significant, strong influencing factor on the test performance
was the duration of the test processing.
The results discussed and the research on test motivation also open up the
possibility of investigating the job-specific differences in test motivation.
A comparison of the motivation regarding the processing of COMET test items
and examination tasks resulted in a contrary motivation picture. While the assess-
ment of the primary motivation aspects regarding the COMET task was (signifi-
cantly) higher among the trainees, the assessment of the secondary motivation
aspects with regard to the examination tasks was significantly higher in contrast.
The results suggest that the fact that examinations have an influence on the professional
career, while COMET testing has no such influence, works as an external incentive.
This extrinsically motivates trainees in the examination situation, while no such
motivation occurs in the course of the COMET test procedure. The extrinsic
motivation is expressed in the result that trainees invest heavily in the completion
of the examination task, although they tend to assess the occupational relevance and
the benefit of the examination task rather indifferently and show no particular
interest in the examination task. Accordingly, the inadequate inclusion of the
COMET test in the feedback structure at the vocational school poses a major
problem. This can also be a reason for the low test motivation. The effects of good
feedback on the ‘learning success’ of pupils/trainees/students by teachers, trainers or
lecturers, e.g. based on tests or other forms of evaluation of competence develop-
ment, are unanimously rated as very high in empirical educational research. Feed-
back in the form of learning and training counselling is rightly regarded as the
linchpin of a good learning culture (Weinert, 1996).
Although COMET test participants receive individual feedback on their test
results, they are aware that the test results are not grade relevant. Nevertheless,
there was consistent interest in feedback on the test results, which expresses a
fundamental interest in feedback. However, as long as this form of competence diagnostics is not
used in its potential for training and learning counselling and is not systematically
integrated into a new feedback culture, test motivation will remain low for some of
the test participants. This effect is reinforced by the fact that the trainees are fixated
on the two selective examination times (Part 1 and Part 2 of the examination) and
give significantly less weight to all other forms of assessment of their training
success. This also applies universally to all forms of school performance measure-
ment. Part 1 of the examination takes place after 18 months and part 2 at the end of
the training period—after three to three and a half years.
It can be assumed that the interest in this form of competence diagnostics will
increase if the test participants experience competence diagnostics as an evaluation
instrument which is of high diagnostic importance for the evaluation and therefore
also for the improvement of training quality.
In individual cases, however, the COMET competence survey is currently even
experienced as a disruption of regular education. The processing of test items is
associated with considerable effort if it is carried out in a concentrated and committed
manner. The vast majority of trainees quite obviously rate this as an additional
performance to be provided. The impact of this setting on the test result is closely
related to the timing of the test. A case study in which a group of trainees in their
fourth year of training were tested just before the final examination (Part 2) shows
that the motivation to take the test at this point was so low that the overall results
were significantly worse than could be expected from the advanced training. This is
confirmed by the information on the motivation of the trainees and is caused by
trainees concentrating on passing the examination well. A test that is not included in
the examination preparation or in the examination itself is therefore perceived as a
disruption of training in individual cases.
In addition to these aspects, the hypothesis also arose that context-related factors
such as the school climate have an influence on test motivation. This hypothesis
should be investigated in follow-up studies. An investigation of the relationship
between the development of occupational and organisational identity, work-related
commitment and test motivation is also still pending.
A new field of research in this context is the recording of test motivation and
training commitment under the conditions of different national training traditions
and cultures. For example, the unusually high training and test motivation of
South African trainees and students could be attributed to the fact that they belonged
to the minority of young people who could expect to be employed in the training
companies once they had successfully completed their training. A comparatively
high level of test motivation was also measured among Chinese trainees and students
at higher vocational schools. The Chinese COMET consortium’s hypothesis that this
can (also) be traced back to a social norm according to which ‘official’ surveys
express a positive view of the situation has not yet been empirically confirmed.
The approach presented here for recording test motivation, together with the test
results, shows that research into test motivation is a prerequisite for the analysis of
test results in competence diagnostics projects in vocational education and training.
Fig. 7.39 Comparison of the competence profiles of the second and third training years using the
example of the training occupation of electronics technician for industrial engineering (Interim
Report COMET Vol. I, 27)
may seem at first, they could not, however, be proven on the basis of the
empirical data.
The phenomenon of stagnation was again measured in the follow-up projects
Automotive Mechatronics Technician and Industrial Mechanic (Hesse) as well as in
the eight occupation-related subprojects of the COMET project NRW. The hypoth-
esis of a fundamental structural characteristic of dual vocational training could
therefore be assumed (Fig. 7.40).
In this example, the ex-post-facto experiment (Kerlinger, 1964) on which the comparative
measurement is based defines the training period (the second vs. the third year of
training) as the independent variable and competence development, in the form of the
distribution of the test participants among the four competence levels or the competence
profiles of the comparison groups, as the dependent variable. The formation of the two
comparison groups ensures a systematic variation of the independent variable, the
training period, as well as the measurement of the dependent variable, the competence
development.
The formation of the two comparison groups amounts to a dissolution of class
structures and therefore fulfils a central condition of experimental research: the
control of disturbance variables (Campbell & Stanley, 1963). What matters most in
the classroom, namely the teacher and the classroom-specific learning environment
that the teacher decisively shapes, is therefore not available as an indicator for
explaining more or less successful learning.
The stagnation hypothesis was reinforced by the comparison of the competence
profiles of 205 trainees (industrial mechanics) with 102 vocational school students
(of a comparable subject).
Fig. 7.40 Average competence profiles of industrial mechanics 2011, by training year
Fig. 7.41 Comparison of average profiles of trainees and students at vocational schools
A comparison of the competence profiles of the test groups (Fig. 7.41) shows that
the competence profiles of the trainees and the technical college students largely
coincide. At V = 0.18, the competence profile of the trainees is slightly more
homogeneous than that of the students at V = 0.25. The values of the trainees are
those of the second test time.
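If V is read as the coefficient of variation of the sub-competence scores within a profile (an assumption based on its usage here: a smaller V means a more homogeneous profile), the comparison can be reproduced as a short sketch. The point values below are invented for illustration; `profile_homogeneity` is a hypothetical helper, not part of the COMET toolkit.

```python
import numpy as np

def profile_homogeneity(subscores):
    """Coefficient of variation V = s / mean over the sub-competence scores;
    a smaller V indicates a more homogeneous (balanced) competence profile."""
    subscores = np.asarray(subscores, dtype=float)
    return subscores.std(ddof=1) / subscores.mean()

# Illustrative profiles over eight sub-competences (invented point values).
trainees = [18, 20, 17, 19, 16, 18, 20, 17]
students = [22, 14, 19, 11, 20, 13, 21, 15]

print(f"Trainees: V = {profile_homogeneity(trainees):.2f}")
print(f"Students: V = {profile_homogeneity(students):.2f}")
```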
This result indicates that the transition from dual vocational education and
training to vocational school studies may also lead to stagnation in competence
7.5 Planning and Executing COMET Projects 253
development. When interpreting the test results of the technical college students, it
was taken into account that all participating technical colleges are located in voca-
tional schools that had also taken part in the COMET project and that the teachers
generally teach both the trainees and the technical college students. The thesis that
the phenomenon of stagnation can be traced back to examination practice can be
ruled out for this comparison, as the forms of examination differ between technical
colleges and dual training programmes.
In the meantime, a wealth of results from numerous COMET projects from the
international research network is available, which make it possible to clarify the
phenomenon of stagnation in competence development and condense it into a
hypothesis.
At the second test time (longitudinal section), 69% (!) of the test subjects in the
COMET project Industrial Mechanics (Hesse) reached the highest level of competence.
In the previous year, as second-year trainees, only 38% of these test persons had
reached the highest competence level. Over the same period (1 year), the
proportion of risk students in this test group fell from 13% to only 5% (Figs. 7.41
and 7.42). A very high increase in competence from the second to the third year of
training was measured. The seemingly insurmountable hurdle for competence
development in the second half of the training period had, in this case, lost its
significance (Fig. 7.43).
Fig. 7.42 Comparison of the competence levels of industrial mechanics in the longitudinal average (second year 2011 and third year 2012)
Fig. 7.43 Comparison of the competence level distribution of industrial mechanics in their second year of training 2011 (n = 71) and third year of training 2012 (n = 133) in cross-section
If one compares the competence characteristics and development of second- and
third-year trainees at the second test point (cross-sectional study 2009), it becomes
very clear that the phenomenon of stagnation in competence development has
completely evaporated. In most follow-up projects with two testing periods of
1 year apart, the phenomenon of stagnation proved to be an obstacle in established
VET practice, which was quite obviously overcome by the introduction of the
COMET method of quality assurance and quality development.
If competence development is depicted in the form of competence profiles, it
becomes clear that the increase in competence affects all sub-competences
(Fig. 7.44).
The phenomenon of stagnation in competence development took a new turn with
the evaluation of data from longitudinal studies. The assumption that the established
examination structure in dual vocational education and training with its two exam-
ination times caused a stagnation in competence development in the second half of
training (the ‘resting on one’s laurels’ effect, i.e. resting on the successes of the
intermediate examination passed) could not be empirically confirmed.
Another factor that has a demonstrably high influence on competence develop-
ment came to the fore: the competence of the teachers/lecturers.
Due to the willingness of a larger group of Chinese vocational schoolteachers in
electronics and automotive mechatronics to participate in the COMET tests, it was
possible to systematically compare the competence profiles of these teachers and
lecturers with those of their pupils and students. The 2009 study already showed that
the competence profiles of a group of teachers specialising in electrical engineering/
electronics were very similar to those of their students at the Vocational Colleges in
Beijing (COMET Vol. III, 160 f.).
On a much larger data basis, the hypothesis of the transfer of the problem-solving
patterns of teachers/lecturers to their students was investigated in the COMET
project Auto Service (China). For the first time, this study confirms that teachers/
lecturers subconsciously transfer their problem-solving patterns and the technical
understanding they incorporate to their students (Fig. 7.45).
The project consortium attributed the fact that the trainees (and their teachers) at
the skilled worker schools in Guangzhou (GZ) have both a higher and more
homogeneous level of competence than the students (and their lecturers) at the
vocational universities to the fact that the skilled worker schools in Guangzhou
were involved in a pilot project aimed at introducing vocational training oriented
towards learning fields (Zhao & Zhuang, 2013).
Based on this insight, the phenomenon of stagnation in competence development
can now be condensed into a well-founded hypothesis: whenever teachers/trainers
have a more or less inhomogeneous understanding of the subject matter or of
problem-solving, this pattern also limits the competence development of their
pupils/students in the institutionalised forms of training, and vice versa. ‘Competent
teachers have competent pupils’ (Rauner, 2015a, 432; Zhao, 2015; Zhou, Rauner, &
Zhao, 2015, 396 ff.).
Fig. 7.45 Transfer of vocational problem-solving patterns from teachers to their pupils (Rauner,
2015a, Fig. 3.6)
The agreement on the project objectives is a key factor in determining the project
design.
Possible project goals are:
• Testing and introduction of the COMET methodology as an instrument of quality
assurance and development at the level of educational processes.
• Carrying out a pilot project to initiate innovations in curriculum development and
the didactics of vocational education and training (e.g. the introduction of the
learning field concept or new forms of examination).
• Designing the COMET project as a research and development project for the
further development of COMET methods and for the scientific qualification of
multipliers.
• Gaining knowledge for policy guidance and vocational training planning (e.g. for
the development and modernisation of occupations, vocational education and
training, the improvement of cooperation between learning locations and the
modernisation of framework curricula).
• Testing international cooperation in vocational education and training (e.g. in the
development of European job profiles and international standards for vocational
education and training).
The linchpin for the success of the project is the formation of the professionally
related project groups (teachers, trainers, theorists). It is their task to develop and test
the test items (including the solution areas) in cooperation with the scientific support,
to qualify as advisors, to carry out the rating and to implement the knowledge gained
in this process in their didactic actions and, if necessary, in curriculum development
and in the design of tests and examinations.
Together with the scientific support, the steering group takes over the steering of
the project within the framework of the specifications set by project planning. The
steering group also has the task of transferring the project results. Performing this
task is a crucial factor in determining whether pilot projects lead to organisational
development and a new quality of VET practice—or whether the results of pilot
projects disappear again under pressure from the established structures of existing
VET programmes. Research on such pilot projects shows that they often fail at the
stage of transferring their results (Rauner, 2004).
When the test item development is complete (pre-test), the next step is the final
determination of the test participants.
Representative test groups take part in large-scale competence diagnostics projects.
These represent populations of trainees/students at the level of
• Training occupations and study subjects,
• Forms of vocational education and training, such as dual vocational training,
vocational schools and vocational colleges,
• Regions and states, or
• International comparative projects.
Two criteria, assessed by the teachers/trainers/lecturers of the courses to be
involved, are decisive for the participation of the programmes:
1. The professional validity of the test items (Table 7.10).
The question to be answered is: ‘How highly do you rate the content validity of
the test items in relation to the occupation to be learned on a scale of 0–10?’
The content validity of the test items for an educational programme is given if the
teachers/lecturers of an educational programme rate the professional validity at least
as ‘high’. In this assessment, the curricular validity of the test items is initially
excluded.
If the test items are classified as valid in terms of content for an educational
programme, then the prerequisite for participation in the test is given.
2. The curriculum validity of the test items (Table 7.11).
The question is: ‘How highly do you rate the curricular validity of the test items
for your educational programme?’ Or: ‘Do the test items suit the curriculum/framework
curriculum of the educational programme?’
The question of the curricular validity of the test items has two functions.
1. It examines whether and to what extent the curricula aim to impart vocational
decision-making competence.
In the case of a (higher) school education programme, it is to be expected that
even if the teachers/lecturers highly assess the professional validity (the assess-
ment yardstick is the later occupation), the assessment of the curricular validity
will be lower, as the teachers/lecturers must take into account the fact that a phase
of familiarisation with the occupation is still required following the (higher)
school education programme.
The participation of degree programmes with a high proportion of practical
experience, on the other hand, is generally possible. This has been demonstrated
by the participation of ‘professionally qualifying’ higher education programmes
in COMET projects (Heeg, 2015). As a rule, the interest lies in finding out to what
extent (higher) scholastic training courses can convey employability to the stu-
dents (Heeg, 2015; Fischer, Piening, Heinemann, Hauschildt, & Frenzel, 2015;
Zhou, Rauner, & Zhao, 2015).
2. If the teachers/trainers assess the curricular validity of the test items as low or very
low, then—according to the quality criterion of fairness—it is not fair to involve
this test group in the test. Participation in such a test can only be justified if
trainees/students show great interest in this type of test item despite (very) weak
test results (e.g. in a pre-test) or if there are serious educational policy reasons to
assess the quality of national forms of education/training, e.g. in an international
comparison.
Table 7.12 Assessment (in %) of German and Chinese teachers/raters on (a) ‘To what extent can the four test items be used to record the training objective (cognitive potential for action) identified in the occupational profile?’ and on (b) ‘The content of the items corresponds to % of the framework curriculum for the 2nd/3rd year of training’. Survey 2009 (COMET Vol. III, 93)

Respondents                      (a) ‘Professional’ validity   (b) Learning field (curricular) validity
Teacher/rater Germany (n = 26)   72%                           2nd year: 47% / 3rd year: 63%
Teacher/rater Peking (n = 32)    78%                           2nd year: 39% / 3rd year: 57%
Representativeness
The test population of the PISA project (2003) in Germany was 884,358 fifteen-
year-olds. The sample size was 4660 test participants. In a two-stage procedure, the
schools to be involved were first identified as the primary sample unit. In a second
step, 25 fifteen-year-olds per school were randomly selected. Statistical representa-
tiveness could not be achieved according to this procedure. However, this two-stage
procedure for determining the sample is sufficient to achieve an approximate repre-
sentativeness. The random selection of test participants in the schools is decisive.
When selecting the number of schools (primary, higher secondary, lower secondary
and vocational), this procedure takes into account the distribution of fifteen-year-
olds among the school types. Given the small number of school locations in the
central and smaller federal states, however, it is hardly possible to map the regions
with their specific social structures.
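The two-stage procedure described above can be sketched as follows. The sampling frame, the number of schools drawn and the pupil identifiers are all hypothetical; only the structure (schools as primary sampling units, then 25 fifteen-year-olds drawn at random per selected school) mirrors the description.

```python
import random

random.seed(42)

# Hypothetical sampling frame: school id -> eligible fifteen-year-olds.
frame = {f"school_{s:03d}": [f"pupil_{s:03d}_{p:03d}" for p in range(120)]
         for s in range(200)}

# Stage 1: draw the schools as primary sampling units.
schools = random.sample(sorted(frame), k=30)

# Stage 2: draw 25 fifteen-year-olds at random within each selected school.
sample = [pupil for school in schools
          for pupil in random.sample(frame[school], k=25)]

print(len(schools), "schools,", len(sample), "pupils sampled")
```

Because whole schools are drawn first, pupils from the same school share a cluster; this is what makes the result only approximately, not strictly, representative.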
The random formation of representative test groups at the selected schools
contributes to the representativeness of the test results insofar as it relates to the
population of 15-year-old pupils. However, it is not possible to clarify the
heterogeneity of competence development within and especially between classes,
as these cannot be represented by the few randomly selected pupils. The test design
thus abstracts from the learning climate of the individual school classes, which
considerably restricts the informative value of the test results for the participating
schools and teachers.
This problem can be illustrated using the example of the COMET project Automo-
tive Mechatronics Engineer (NRW), as the professional competence development in
and between the subject classes—including the same school locations—is recorded.
A comparison of a group of second and third year trainees in a given sample of
trainees shows the phenomenon of stagnation of competence development (at the
first test time): the competence development of the two test groups does not differ,
or differs only marginally, from one another (Fig. 7.46). A comparison between the
same test subjects as trainees in their subject classes reveals an impressively high
degree of heterogeneity between the classes (Fig. 7.47).
This example illustrates that the learning climate of the classes has a very large
influence on competence development.
The category of representativeness must therefore be conceptualised in such a
way as to capture the situation-specific peculiarities that characterise real learning. In
the sense of situated learning, a school class represents a class-specific learning
climate that has a decisive influence on the competence development of the students
to be trained. Therefore, in the comparative analysis of competence development and
the learning climate, the subject classes play a fundamental role in learning
research.

262 7 Conducting Tests and Examinations

Fig. 7.46 Average competence profile per training year (automotive mechatronics technicians
NRW)

Fig. 7.47 Class comparison: highest vs. lowest TS (automotive mechatronics technicians NRW)
The new research field of competence diagnostics in vocational education and
training requires the investigation of problems and questions on the basis of well-
founded hypotheses. The formation of randomised comparison groups is also of
some importance. In the randomisation technique, situational ‘interference variables’
are usually neutralised (Bortz & Döring, 2002, 58). This, however, severely limits
the potential of educational research. Competence diagnostics requires the scientific
examination of situated learning, the singularity of vocational learning processes, as
is the case, for example, in the subject instruction of vocational school, technical
college and specialised school classes.

7.5 Planning and Executing COMET Projects 263

While the clarification of fundamental laws of
vocational learning must be abstracted from the specific conditions of learning
situations in classes, the clarification of class-specific effects on the analysis of
learning situations and learning environments shaped by teachers and trainers is
important. In the first case, the question of the representativeness of the sample of
test persons to be involved in an investigation must be clarified. In the second case,
abstracting from the learning climate of the classes would prevent the clarification of
essential factors for the competence development of the learners or classes.
In competence diagnostics in vocational education and training, the samples of
the occupations involved often represent both forms of representativeness.
Table 7.13 Distribution of trainees in the selected training programmes among the Länder
involved in the project (Federal Ministry of Education and Research (2006), Vocational Training
Report 2006 Part II and Annex; Federal Statistical Office 2006)

| Training occupation | Trainees (nationwide) 2005 | Trainees 2005 in Hesse (absolute and in percent) | Trainees 2005 in Bremen (absolute and in percent) |
| Electronics technician FR energy and building technology | 22,325 | 1,619 (7.3%) | 175 (0.8%) |
| Electronics technician for industrial engineering | 14,165 | 1,012 (7.1%) | 234 (1.7%) |
For reasons of research economy, a total survey was carried out in Bremen. The
training of apprentices as electronics technicians takes place at three vocational
schools.
The number of test participants at the first test time was n = 697, of which
371 were trainees from Hesse and 255 from the state of Bremen, as well as 65 students
at technical colleges (Hesse).
The sample at the second test time (1 year later), as the basis for a new cross-
sectional and longitudinal study, had the same size. The trainees in the second year
of training at the first test time, who formed the sample for the third year of training at
the second test time, took part in the longitudinal study.
The samples of all educational programmes involved in the COMET project
Electronics Technicians (Hesse) are representative in two ways. They represent
both the test population and the class population, as all participating VET schools
involved all second- and third-year classes of electronics technicians as well as the
technical school classes in the survey.
This form of representativeness of the project also made it possible to analyse the
class-related effects of competence development. In this respect, there is also a high
degree of situational representativeness, which formed the basis for the inclusion of
the COMET project in the schools’ internal quality assurance and development
procedures (Table 7.14).
• The scientific and didactic orientation of teachers and lecturers as well as trainers
often represents a considerable barrier to the development of test items in
accordance with the COMET competence model. One successful way for the
development of test items is the participation of one or more experienced
teachers/trainers who have already gained previous experience in the develop-
ment of test items in a COMET project of a related profession.
• The test items are not developed based on an existing curriculum, but in relation
to the fields of action of professional practice (following the practice of
International World Skills, IWS).
• Nevertheless, once the drafted test items have been tested and revised, it is
necessary to assess the curricular validity of the test items for all educational
programmes involved in a project.
• Modular structures in training and examination practice cannot be considered in
the development of test items because they contradict the requirements of the
COMET competence model (concept of (holistic) task solution).
• Conceptualisation of the context analysis.
Table 7.14 Overview of the number of test participants at the first recording time (Haasler,
Heinemann, Rauner, Grollmann, & Martens, 2009, Section 4.7). ‘Industry’ denotes the electronics
technician for industrial engineering; ‘handicraft’ denotes the electronics technician FR energy
and building technology.

| School | Industry, 2nd year | Industry, 3rd year | Handicraft, 2nd year | Handicraft, 3rd year | Total |
| Hesse: Heinrich Emanuel Merck School, Darmstadt | 50 | 22 | – | – | 72 |
| Hesse: Commercial schools of the Lahn-Dill district, Dillenburg | 16 | 18 | 6 | 13 | 53 |
| Hesse: Werner von Siemens School, Frankfurt am Main | 32 | 17 | 15 | 29 | 93 |
| Hesse: Ludwig Geissler School, Hanau | 32 | 27 | – | – | 59 |
| Hesse: Oskar von Miller School, Kassel | 17 | 26 | 13 | 18 | 74 |
| Hesse: Radko-Stöckl School, Melsungen | – | – | 11 | 9 | 20 |
| Hesse total | | | | | 371 |
| Bremen: Technical Training Centre Mitte (TBZ), Bremen | 22 | 30 | 27 | 41 | 120 |
| Bremen: Schulzentrum Sek. II Vegesack (vocational schools for metalworking and electrical engineering), Bremen | 28 | 16 | 0 | 8 | 52 |
| Bremen: Commercial educational institutions, Bremerhaven | 10 | 13 | 31 | 29 | 83 |
| Bremen total | | | | | 255 |
| Total | 207 | 169 | 103 | 147 | 626 |
The first step in the conceptualisation of the context analyses is to define the interest
in knowledge and the development goals in relation to the test groups involved in a
COMET project, as well as the educational programmes and systems which repre-
sent the test groups.
The COMET instruments are used to measure
1. The pedagogical-didactic potential of vocational training programmes and sys-
tems for the qualification of skilled workers on the basis of individual competence
characteristics and development
2. The commitment, identification with the occupation to be learnt and, in dual
training programmes, emotional attachment to the training companies. The
conceptualisation of recording the willingness to learn must—in contrast to
recording the vocational competence—take into account the structures of the
vocational training programmes and systems. When developing and applying the
scales for occupational and organisational identity and for occupational and
organisational commitment, for example, it must be considered whether and in
what form learning in the work process (the company as a learning location) is
integrated into vocational learning.
The context surveys provide data for the analysis and interpretation of the test
results. This research approach goes far beyond the established descriptive
analysis approaches in competence diagnostics. With the introduction of the
multidimensional ‘heterogeneity model’, the PISA project has succeeded in
partially overcoming the educational policy requirement for a descriptive project
design (PISA, 2003; Prenzel et al., 2004, 377 ff.).
Survey of Trainees/Students
Four classes of instruments can be distinguished which have so far been used to
determine the context characteristics of in-company learning:
• Instruments that focus on the objective conditions of the quality of in-company
learning within the framework of dual vocational training;
the special tasks in vocational schools. For example, in contrast to general schools,
parental work plays a minor role in vocational schools. In contrast, deviant behav-
iour and pupil orientation are also important determinants of school quality in
vocational schools. In addition to items from general school quality research, some
items were added to genuine vocational-educational quality dimensions such as
learning location cooperation and practical relevance of teaching (Pätzold, Drees,
& Thiele, 1998; Pätzold & Walden, 1995).
The hypothesis to be tested with regard to vocational competence development as
a function of the school learning environment would be that a school environment
characterised by high pedagogical quality and high work process orientation would
be conducive to a coherent competence development process with regard to all three
concepts of vocational competence (functionality; process functionality; utility value
and customer orientation; social relevance). The first step would be to determine the
impact and strength of individual conditions on the course of vocational competence
development. Moreover, the replacement of a ‘school learning concept’ by a ‘voca-
tional learning concept’ can be examined and differentiated over the course of
training (Bremer, 2004, 114 f.).
An online procedure is recommended for carrying out the context analysis, as this
considerably shortens the time between the implementation of the test and the
feedback of the test results. With this form of quantitative context analysis,
generalisable phenomena and insights can also be gained in large-scale projects.
The clarification of the learning and training situation of classes also requires
qualitative methods and, above all, dialogue between teachers and trainers about
‘their’ test results and the analysis results concerning them. A collegial interpretation
of class-specific training situations can therefore provide insight into the quality of
vocational training processes that goes far beyond the quantitative results. The
innovation potential in vocational education and training practice ultimately depends
on this.
Quality Diagrams
Eight scales were formed from the total items available for the survey of test
participants (Table 7.17).
The scales are arranged in a quality diagram in such a way that the three central
scales for evaluating in-company vocational training are assigned to the upper half of
the diagram:
• Business process orientation of training quality.
• Training quality.
• Training support (by the trainers).
Table 7.18 Items on the vocational school as learning location and on the scholastic self-concept
(cf. Piening, Frenzel, Heinemann, & Rauner, 2014 for details)

Didactic climate (Cronbach’s α = 0.71)
• Classmates frequently disturb the lessons. (recoded)
• Classmates have little consideration for other students.
• What we do in class, I usually find interesting.
• Students often bunk school.
• I feel comfortable at my school.

Teacher evaluation (Cronbach’s α = 0.84)
• Our teachers consider the interests of the students in their lessons.
• Our teachers make the lessons interesting.
• Our teachers take us students seriously.
• Our teachers have a good overview of company reality.
• Our teachers cooperate with instructors and master craftsmen from our company.
• Our teachers are well versed in the subject.

Didactic quality (Cronbach’s α = 0.83)
• Our teachers also take care of individual students.
• Our teachers coordinate the planning and execution of lessons with each other.
• Our teachers can also teach difficult topics in an understandable way.
• Our teachers give us the opportunity to solve problems independently and give us advice.
learning at school. The items were selected to facilitate the evaluation of the great
heterogeneity with which the trainees, vocational school pupils and technical college
students experience the learning climate of the different vocational institutions.
In dual vocational training, the scholastic learning climate is generally assessed
by trainees in competition with the training quality of in-company vocational
training. Their experience of cooperation with other trainees and specialists as well
as their relationship with their teachers is included in their assessments of the quality
of the scholastic learning climate (Table 7.18).
Teachers are the key factor in the quality of scholastic learning. This is shown
by the empirical data of the COMET projects, in accord with the state of the art in
relevant research. One strength of the scales is that they differentiate between items
with which the practical competence of the teachers can be evaluated: ‘Our teachers
have a good overview of company reality’. For many professions, these items are
particularly important, as passing the exam and therefore achieving employability
are the yardsticks by which learners assess their teachers and trainers. As the
trainees/students clearly distinguish between the occupation-related and the sub-
ject-related competence of their teachers, this is taken into account when selecting
the items.
Two scales for learning location cooperation are assigned to both learning
locations in the quality diagram. They are therefore arranged in the middle of the
diagram. A distinction is made between a scale based on the content and another
Table 7.19 Items for the differentiated evaluation of learning location cooperation

Learning location cooperation (structure) (Cronbach’s α = 0.64)
• My training company and the vocational school coordinate training with each other.
• Joint projects are carried out between our company and the vocational school.
• The training company is satisfied with the school’s work.
• My company attaches great importance to attending vocational school.

Learning location cooperation (contents) (Cronbach’s α = 0.89)
• Learning at vocational school and in the company is well matched.
• Vocational school lessons are oriented towards company practice.
• The lessons at vocational school help me to solve the tasks and problems of the work at the company.
• I can apply the content I learn in vocational school to my work.
• The work that I carry out in the company is also dealt with at vocational school.
based on the structure of the learning location cooperation. Both scales make it
possible to illustrate very clearly how well the two learning locations communicate
with each other (Table 7.19).
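The Cronbach’s α values reported for these scales can be computed from the raw item responses. The following is a minimal sketch using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of the total scores); the four-item response data are invented for illustration and do not come from the COMET surveys.

```python
def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of item-score columns
    (one list per item, rows = respondents), all of equal length."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    total_scores = [sum(col[r] for col in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(total_scores))

# Invented responses of five trainees to a four-item agreement scale (1-5)
responses = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
    [4, 3, 5, 2, 3],
]
print(round(cronbach_alpha(responses), 2))  # 0.91
```

Recoded (negatively worded) items, such as ‘Classmates frequently disturb the lessons’, must of course be reverse-scored before entering this computation.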
The quality diagram simplifies the illustration of training quality so that 57 individual
items can be reduced to eight quality criteria. This increases the informative value of
the analysis results in several respects.
This presentation form for context analyses illustrates the strengths and weaknesses
of the vocational training quality at a glance (in this case, from the pupil’s perspec-
tive). The selected examples are taken from the project ‘Recruitment and training
organisation’ (Piening et al., 2014). The size of the area in the diagram indicates the
level of training quality, while the homogeneity of the quality characteristics indicates
the quality of learning location cooperation and the preference that the trainees have
for the learning locations (Fig. 7.48).
In-company training is rated more positively than school-based training. Learning
location cooperation is the central weakness in this example; the cause of this
quality deficit appears to lie with the school as a learning location.
The quality values are derived from the scale’s mean values. The advantage of
this form of presentation is that the quality diagrams can also be compared with the
values of the evaluation for the individual items.
Illustration of Heterogeneity
This presentation form for context analyses also makes it possible to illustrate the
degree of homogeneous and inhomogeneous quality profiles and to quantify them
using a coefficient of variation. For example, the Qualified Groom shows not only a
slightly above-average quality profile, but also a relatively homogeneous one.
In contrast, the quality profile of the Specialist Warehouse Clerk is very inhomoge-
neous. Here, too, the overall very low quality of training is likely to be caused
primarily by the low level of training support (Fig. 7.49).
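The homogeneity measure used here can be sketched as follows: the quality level is the mean of the scale means, and the homogeneity is their coefficient of variation V (standard deviation divided by the mean; lower V means a more homogeneous profile). The scale names and values below are invented for illustration only.

```python
from statistics import mean, pstdev

def profile_summary(scale_means):
    """Return (quality level, coefficient of variation V) of a profile.
    Level = mean of the scale means; V = sd / mean (lower = more homogeneous)."""
    values = list(scale_means.values())
    level = mean(values)
    return level, pstdev(values) / level

# Invented quality profiles (scale means on a 1-5 agreement scale)
homogeneous = {"business process orientation": 3.6, "training quality": 3.7,
               "training support": 3.5, "cooperation (structure)": 3.4,
               "cooperation (contents)": 3.6, "didactic climate": 3.5,
               "teacher evaluation": 3.7, "didactic quality": 3.6}
# Same profile, but with two sharply lower scales -> inhomogeneous
inhomogeneous = {**homogeneous, "training support": 1.8,
                 "cooperation (structure)": 2.2}

level_h, v_h = profile_summary(homogeneous)
level_i, v_i = profile_summary(inhomogeneous)
print(round(v_h, 2), round(v_i, 2))  # the second profile has the larger V
```

The same coefficient V is used later in the chapter to compare the homogeneity of class competence profiles (e.g. V = 0.20 versus V = 0.28).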
Data Protection and Coding of the Personal Data of the Test Persons
For a neutral rating of the solutions, the personal assignment of a respondent to his
or her solution to the test item must be anonymised. The task solutions, identified by
name, were therefore first coded centrally before being forwarded to the evaluators
for rating. Consequently, a rater can determine neither the creator of a solution
variant nor the training year, training occupation or attended vocational school. All
information that could consciously or subconsciously influence a rater’s evaluation
was withheld.
In this design, the personal assignment of the data to the subjects can only be
carried out centrally by the scientific supervisor who leads the investigation.

Fig. 7.49 Quality profiles of the training occupations Qualified Groom and Specialist Warehouse
Clerk

The personal code of each subject is required both for the second survey in the
longitudinal section and for identification in the written survey on the context
characteristics.
The data from the two measurement points at which the test items were used are
compiled anonymously by the scientific support team with the help of a code word.
The collected individual data of the subjects are not passed on to third parties. All
pupils who took part in the survey receive individual feedback on their personal level
of competence.
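One way to implement the central coding described here is a keyed hash: the same subject always receives the same code at both test times, so longitudinal matching works, but raters cannot recover the identity without the centrally held key. This is a sketch only; the key, the identifying attributes and the code length are assumptions, not the COMET procedure’s actual scheme.

```python
import hashlib
import hmac

# Hypothetical key, held only by the scientific supervisor
SECRET_KEY = b"held-centrally-by-scientific-support"

def subject_code(name: str, birthdate: str) -> str:
    """Stable pseudonym for matching a subject across the two test times
    without revealing the identity to the raters."""
    message = f"{name}|{birthdate}".encode("utf-8")
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()[:10]

# The same subject yields the same code at both measurement points
assert subject_code("Max Mustermann", "2001-05-17") == \
       subject_code("Max Mustermann", "2001-05-17")
# Different subjects yield different codes
assert subject_code("Erika Musterfrau", "2002-11-03") != \
       subject_code("Max Mustermann", "2001-05-17")
```

Because only the key holder can recompute codes from names, the central office can also route the individual feedback back to each pupil, as described above.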
The use of test items in a cross-over design results in a problematic situation from a
professional pedagogical perspective: after the first use of the test items, the sub-
jects—quite rightly—expect feedback on their results. As all test items are used
again at the second recording time in order to confront each subject with all four test
items in the portfolio, the items cannot be intensively discussed and didactically used
in vocational school lessons for methodological reasons.
Participation in the study was left up to the vocational school students. However,
in the cover letter of the study to the subjects, it was pointed out that the test is part of
the training in order to substantiate the serious character of the survey (see cover
letter).
When informing test participants about the objectives and implementation of the
tests, a distinction must be made between the initial performance of a COMET test
and the repeated participation of test groups in a test, e.g. in a longitudinal study with
two test times.
As a rule, when participating in a COMET test for the first time, the competence
level of the corresponding population is examined. The ‘initial situation’ is measured
when it is intended to also use the COMET test procedure as an instrument for
quality development.
All test participants are informed in writing about the objectives, the procedure
and the evaluation of the test.
In order to ensure the objectivity of implementation, it is essential to avoid
teachers/trainers challenging their trainees/students to a special effort through a
motivating speech or other motivating activities (competitive situation). Equally
problematic are derogatory remarks with which the test is presented, for example,
as an annoying duty that has nothing to do with the actual training.
If a test group takes part in a COMET test for the second time (in a longitudinal
study), it should also be informed in writing about the test. It must be explained what
a longitudinal study is and that each individual test participant, each of the classes
involved, an education centre or even the test participants in a region learn whether
and how the quality of the training has changed and how it can be improved; why—
if necessary—a new context survey is carried out and how and when the test
participants are informed of the test results (see the example of a letter to the test
participants).
Test Items for Electronics Engineers
Dear Trainees,
As part of your vocational training, we would like to confront you today with two
typical tasks for electronics technicians.
In this project, the tests were carried out at the vocational school as learning location.
The subjects were confronted with the test items as part of the regular vocational
school curriculum. The first recording time at which the test items were applied
encompassed a period of 5 h. The test began at 08:00 hours with the instruction of the
test participants by the teacher, and the test ended at 13:00 hours. In order to achieve
a uniform approach of the subjects, the teachers were informed beforehand about the
exact course of the test by means of written instructions:
• Each student received a total of two test items for processing. Each task was
scheduled to take a maximum of 2 h to complete.
• The publication and processing of the two tasks were strictly separated in time.
There was a half-hour break between processing task 1 and task 2.
• One week before the test was due to take place, students were informed by their
teachers that a test was taking place and within what framework it was taking
place. A written instruction for the teachers was also prepared for this preliminary
information.
The results of the competence measurement in vocational education and training
suggest that the test results will also differ considerably due to the great heteroge-
neity of the trainees’ previous schooling and other personality traits. This is
reinforced by the different attractiveness of the occupations (their identification
potential) and the large differences in the commitment of the trainees. It was
therefore necessary to decide whether the survey of the trainees’ test motivation
was required in order to obtain additional data for the interpretation of the test results
if these differed greatly between different groups of pupils.
As a rule, either one-off tests or longitudinal examinations with two test points are
carried out. The decision for the temporal course of a project is directly related to the
project objectives. Longitudinal studies have the greater research and development
potential. The timing of COMET projects also depends on whether the transfer of
project results and experience is already systematically integrated into a project with
two test points. This particularly concerns the qualification of multipliers for the
implementation of COMET projects.
Test Scope
So far, two variants of the temporal scope of the test have been tested.
Variant 1 Each test participant works on a complex test item. The maximum
processing time is 120 minutes. The COMET test is not only a measurement
Online Rating
The processed test items (item solutions) are forwarded to the raters for dual online
rating in anonymised form. The online rating enables prompt feedback to the test
participants. The feedback documents contain the competence profile of the test
participant for the fields of action that were processed with the test items. The
responsible teachers/trainers explain the feedback documents to ‘their’ trainees—
comparable to the return of tests or exams. The feedback document also contains the
competence profile of the respective test group, so that each test participant can
classify his/her test result accordingly.
As the competence profiles of the individual test participants in a test group (e.g. a
vocational school or technical college class or a test group of a training company) are
generally similar to the competence profile of the overall group, the competence
profiles also show which partial competences in the training process were imparted
at which level of ‘success’.
The responsible teachers and trainers have the opportunity to reflect on the
strengths and weaknesses of the training with reference to competence profiles and
to take appropriate quality development measures.
In the feedback workshops, the researchers present their test results on the compe-
tence development of the test participants, structured according to the learning
groups involved (classes, companies, school locations) and present their interpreta-
tions based on analysis data. The focus is on the results of the questions:
• What competence levels do the test groups have?
• What are the differences between the different types of competence, differentiated
according to class, school location, training company, region and country?
• What is the degree of heterogeneity of the competence values within and between
the test groups?
• Which interpretations of the test results do the results of the context analysis
permit?
It would then be a good idea to give teachers and trainers the opportunity to
discuss the test results, and their interpretation by scientific support, on the basis of
their own teaching and training experience. In this case, it is particularly important to
identify the causes for the differences in test results between the test groups involved
(e.g. the classes).
The same procedure is used for the research results on the survey of vocational
and organisational identity as well as vocational and organisational commitment and
on the assessment of training quality by the test participants.
One of the guiding principles of the feedback workshops is: Learning from each
other.
In a final step, teachers/trainers report on the application of the COMET compe-
tence and measurement model as a didactic tool for the design, organisation and
evaluation of vocational training processes. Further points include the exchange of
experience on the changed learning behaviour of trainees, the application of the
rating scale for new forms of evaluation of teaching/training projects and the
handling of obstacles in the introduction of COMET-based forms of vocational
teaching and learning.
One aim of the feedback events is to use the test and discussion results to justify and
agree didactic and training organisational measures and to define a time frame in
which these can be implemented and their success verified.
Three overarching questions must be considered (see also Chap. 9):
1. How can the COMET competence model be translated into didactic action?
2. How can the development of competences be checked using a self-evaluation
concept based on the rating procedure?
3. Which competences are not covered by the COMET test procedure and how can
these also be promoted?
The primary benchmark for the justification and evaluation of innovations is the
competence of the test participants to solve professional tasks completely and at a
high level of knowledge. The competence profiles of the test participants are useful
for identifying the problem-solving patterns of the test participants. At the same
time, they represent the expertise of their teachers and trainers (Zhou et al., 2015,
401 ff.).
Example The importance of feedback from test results—above all in the form of
competence profiles—is very clearly demonstrated in a German-Chinese follow-up
project in automotive mechatronics (Zhou et al., 2015). Students from industrial and
comprehensive colleges in several provinces as well as trainees and master students
from technical colleges took part in the Chinese subproject. The latter had partici-
pated in a pilot project for the introduction of the learning field concept and in this
context also dealt with the concept of the holistic solution of professional tasks on
the basis of competence profiles of the electronics engineer project. This is clearly
reflected in the test results (Fig. 7.51).
Fig. 7.51 Competence distribution of the test groups China (Industrial Colleges, Comprehensive
Colleges, Technician Colleges), Hesse, NRW (second- and third-year trainees)
Of the Hessian trainees and master students, 64% reached the second and third
competence levels, and 18% the third level. Of the formally somewhat more highly
qualified (senior) trainees and master craftsmen studying at the skilled worker
schools in China, 61% reached the second and third competence levels, of which
43% reached the second and 18% the third. The share of the risk group (whose
competence is lower than the nominal competence) is 12% for Hessian trainees and
only 5.4% for Chinese trainees.
Of the NRW test group, slightly more than one-third of trainees (38%) reached
the second and third competence levels, of which 30% reached the second and 8%
the third.
The weakness of Chinese higher education (IC, CC) is evident in the very small
proportion of students who achieve the highest level of competence. This is only
1.7% of IC students and 7.4% of CC students.
The high degree of functional competence among Chinese CC students (65%)
and IC students (52%) is an indicator of a study with a subject-systematic orienta-
tion—and therefore with hardly any practical orientation. Only 22% of CC students
and 28% of IC students therefore have a professional work concept (procedural
competence).
At 38.24%, the proportion of risk students in the NRW test group is significantly
higher than in the comparable groups.
If one compares the best classes of the Hessian and Chinese trainees, it becomes
apparent that the class of Hessian trainees has a TS = 46.7 and is therefore above the
competence level of the best Chinese class with a TS = 42.3, but that the Chinese
class has a much more homogeneous competence profile (V = 0.20 versus V = 0.28)
(Fig. 7.52).
The skilled worker and master craftsman students in all three test groups

Fig. 7.52 Comparison of the competence profiles of the two best test groups (classes) in China and
in Germany
One of the most important steps in project planning is the development of a transfer
concept. In this regard, two transfer methods compete with each other: a quasi-
experimental project design and innovation projects with an integrated transfer
concept.
In a quasi-experimental project design, the focus of the project activities lies on
the verification of a pedagogical-didactic model by scientific support. A consistent
control of the framework conditions during the course of the project is the prereq-
uisite for securing the project result. Based on this result, the project-implementing
agency then decides on the implementation of the test model—e.g. in the form of
legal regulations, decrees and other administrative provisions.
The alternative transfer variant is to set up a pilot project from the outset as an
innovation project with a transfer component. This variant is widespread. Soon after
they were established—in the tradition of experimental research—the model test
programmes of the Bund-Länder Commission (BLK) and the so-called economic
model tests (controlled by BIBB) were increasingly defined as innovation projects
(Deitmer et al., 2004). However, the implementation of sustainable transfer methods
is considered a weakness of this model experiment variant (Rauner, 2004).
For the introduction of competence diagnostics in vocational education and
training as a method of quality assurance and quality development, there are
numerous possibilities for a successful transfer of project results. The challenge
here is to implement a multiple transfer concept. This requires close coordination of
the project activities between the actors involved in the innovation project in
vocational training practice, the steering and support systems and the scientists
involved.
A key function for the transfer of the COMET methodology is played by the final
examinations of vocational training programmes (Rauner, 2015a).
7.5 Planning and Executing COMET Projects 283
Final examinations are regarded as the ‘secret curricula’ and, in vocational education and
training, they require cooperation between experts in vocational education and
training planning and practice, therefore strengthening cooperation between learning
locations.
The individual test performance is determined on the basis of open, complex test
tasks with a processing time of up to 120 min. Each solution is evaluated by two
teachers independently of one another. Each of the eight criteria is operationalised
by five items. The raters must first decide which of the eight criteria, and which of
the 40 items, can be applied to the respective task. Test practice has shown that the
test tasks were formulated in such a way that almost all forty items are used. It was
specified that
• all eight criteria of the competence model’s requirement dimension are covered
by the one or two test task(s) to be processed by each test participant,
• at least two of the five items with which each competence criterion is
operationalised must be used; items that are insignificant for a test task are
deleted by the raters.
For each item, the raters indicate the degree to which it has been fulfilled
(Table 8.1).
The rating uses an interval scale with the numerical gradations 0–3. Four scale
levels were chosen in order to exclude neutral answers. The contextual justification
is provided by assigning the three successive levels of work process knowledge to
the numerical gradations. Table 8.2 exemplifies how the raters’ evaluations are
converted into scores.
For the functionality criterion, for example, an average value of 2.4 is calculated
from the item ratings. The average values are rounded to one decimal place and
multiplied by a factor of 10, which in this example yields a point value of
PF1 = 24.0.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 285
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_8
286 8 Evaluating and Presenting the Test Results
Table 8.3 Calculation of the scores for the three competence dimensions

Competence dimension               Competence criteria   Scores (criteria)   Scores (dimension)
Holistic shaping competence (DG)   8 PG1                 11                  PG 12.0
                                   7 PG2                 13
                                   6 PG3                 12
Procedural competence (DP)         5 PK1                 18                  PP 17.3
                                   4 PK2                 16
                                   3 PK3                 18
Functional competence (DF)         2 PF1                 24                  PF 22.5
                                   1 PF2                 21
Professional competence                                                      TS: 51.8
The scores for the competence dimensions are determined as arithmetic averages of
the competence criteria defining a competence dimension. The scores are rounded to
one decimal place (Table 8.3).
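The scoring rules described above can be sketched in a few lines of Python. The function names are illustrative (they are not part of any COMET software); the numbers reproduce the worked example of Table 8.3.

```python
def criterion_score(item_ratings):
    """Criterion score: mean of the applicable item ratings (0-3),
    rounded to one decimal place and multiplied by 10 (0-30 points)."""
    return round(sum(item_ratings) / len(item_ratings), 1) * 10

def dimension_score(criterion_scores):
    """Dimension score: arithmetic mean of its criterion scores,
    rounded to one decimal place."""
    return round(sum(criterion_scores) / len(criterion_scores), 1)

# Worked example with the criterion scores of Table 8.3:
PF = dimension_score([24, 21])      # functional competence   -> 22.5
PP = dimension_score([18, 16, 18])  # procedural competence   -> 17.3
PG = dimension_score([11, 13, 12])  # shaping competence      -> 12.0
TS = round(PF + PP + PG, 1)         # professional competence -> 51.8
```

A criterion rated 3, 2, 3, 2, 2 on its five items, for instance, yields `criterion_score([3, 2, 3, 2, 2]) = 24.0`, the value PF1 = 24.0 from the text.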
If each test participant processes two test items, the average of the results of the
two test items is used to calculate the results for each test participant.
The data on the individual test persons can be aggregated according to various
criteria such as subject classes, vocational school locations, federal states, countries
and characteristics of the test participants.
8.1 Classification of Individual Performance in Professional Competence Levels 287
According to the specifications described above, the value ‘fully met’ corresponds to
a score of 22.5 (out of a maximum of 30 points). In examination practice, a (sub-)
examination is considered passed if at least 50% of the attainable points are
achieved; for the uppermost result interval, the attainable points amount to 22.5.
According to the 50% rule, the minimum score for reaching a competence level is
therefore set at 11.3.
When assigning a sub-result to one of the three competence levels (competence
characteristics), the dual character of vocational competences as successive and
interrelated competence levels on the one hand and as independently existing
competence characteristics on the other (cf. COMET Vol. I, Sect. 4.3) is considered
by taking both aspects of competence into account when defining the criteria for
assigning individual test results to a competence level. In this respect, the
calculation of the minimum scores for the competence levels was defined on a
criterion-oriented basis.
Fig. 8.1 Example for 17 test persons and their test results
The allocation of individual performances to competence levels follows a
criterion-oriented interpretation of the task solutions. This is based on the
competence model (→ 4.2).
The competence level of a test person is determined in accordance with the
criteria of the operations he or she masters (Table 8.3).
Two evaluations are determined to assess the test results for individual
participants:
1. The number of points achieved,
2. The level of competence.
Within the framework of the test evaluation, it was examined whether the
competence level of functional competence can be assumed as basic competence
for the other competence levels. The results in Sect. 4.2 indicate that both interpre-
tations—successive and interrelated competence levels as well as independently
existing competence dimensions—are justified. As the results of the Latent Class
Analysis show, it is appropriate to assume an increasing difficulty of the competence
dimensions (→ 4.2). At the same time, the analyses have also shown that the
dimensions are independent of each other in terms of content, which justifies
describing them as different competence dimensions.
The COMET consortium has provided the following definitions for the assign-
ment of trainee performance to a competence level (Fig. 8.1).
Table 8.4 Rules for the compensation of missing functional competence scores

Achieved score for ‘Functional competence’ (PF)   Minimum score required for CP + CD to reach competence level 1 (∑(PP + PD))
> 11.2                                            –
10.3–11.2                                         3
9.3–10.2                                          6
8.3–9.2                                           9
≤ 8.2                                             No further compensation possible

Table 8.5 Rules for the compensation of missing functional competence scores in procedural
competence

Achieved score for ‘Procedural competence’ (PP)   Minimum score at CD to achieve competence level 2 (PD)
> 11.2                                            –
10.3–11.2                                         3
9.3–10.2                                          6
8.3–9.2                                           9
≤ 8.2                                             No further compensation possible
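The band structure of Tables 8.4 and 8.5 is identical, so the lookup can be sketched once in Python. The function name is illustrative and not part of any COMET software.

```python
def compensation_minimum(score):
    """Minimum score the complementary dimension(s) must contribute so
    that the competence level is still reached (Tables 8.4 and 8.5).
    Returns 0 if no compensation is needed and None if the deficit can
    no longer be compensated."""
    if score > 11.2:
        return 0       # level reached on its own, no compensation needed
    if score >= 10.3:
        return 3
    if score >= 9.3:
        return 6
    if score >= 8.3:
        return 9
    return None        # score <= 8.2: no further compensation possible
```

For a functional-competence score of 9.8, for example, competence level 1 is still reached if PP + PD together contribute at least 6 points.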
The test results can be displayed in different ways. The representation in Fig. 3.1
summarises the results of a school class.
In addition, a network diagram is created for each individual test participant
(Fig. 3.8). This representation, which includes not only the three competence
dimensions but also the eight competence criteria, emphasises the multidimensional
character of the competence model.
Fig. 8.2 The graphical representation of competence level distribution in a test group of vocational
school students, n = 27
The results of the analysis also show that the competence levels and components
are relatively independent (COMET Vol. II, 67 ff.). This dual structure of compe-
tence components, both as competence levels and as dimensions of competence
profiles, considerably expands the possibilities for evaluating test results. It should
be borne in mind that the two forms of representation, the competence levels and
the competence profiles of vocational competence, are complementary evaluation
perspectives, each of which in itself involves simplifications. For the practical
handling of the test results, it is therefore advisable to interpret both forms of
representation of individual results in context.
This is exemplified by the results of a test group of students from technical
colleges (Fig. 8.2).
Fig. 8.3 Comparison of competence level distribution in several test groups (results from 2009)
Fig. 8.2 shows that, at 41%, the first competence level is the most pronounced;
30% achieve the second and only 26% the third competence level. This could be
misread as if only 41% of the test persons reached the first competence level. Read
correctly, the diagram states that 96% of the test persons reach at least the first
competence level, 56% of them at least the second and 26% the highest competence
level. The risk group comprises 4%.
This means that, as a rule, all test persons who reach the second or third
competence level also possess functional competence, the basic competence
presupposed by the second and third competence levels. The same applies to the
relationship between the second and third competence levels: the third competence
level includes the skills of the first and second.
For the comparative representation of a larger number of test groups, a form of
representation is suitable in which the distribution of competences at competence
levels is represented as shares of 100% in a bar (Fig. 8.3).
This form of representation serves to clearly illustrate the differences between test
groups with regard to the competence levels achieved.
In the COMET measurement model and in the rating procedure, this is reflected in
the evaluation of the solution aspects on the basis of items that are rated according to
a four-stage interval scale (0–3) (Table 8.6).
8.2 Graphical Representation of the Test Results 293
Table 8.6 Assignment of the interval scale to the levels of work process knowledge

                                   Fully met   Partly met   Not met     In no way met
Interval scale 0–3                 3           2            1           0
Levels of work process knowledge   Know Why    Know How     Know That   –
Fig. 8.4 Distribution of overall scores for nominal, functional, procedural and holistic shaping
competence
The assignment of the scale values 1–3 to the levels of work process knowledge is
justified pragmatically. An item is evaluated as ‘fully met’ if the respective solution
aspect is not only considered but also ‘justified in detail’. Each test task therefore
states: ‘Justify your solution in full and in detail’. If this is completely successful,
this corresponds to the level of action-reflecting or ‘Know why’ knowledge. An
item is ‘not (yet) met’ if the rules underlying the complete solution of a task were
considered but could not be justified; this corresponds to the level of ‘Know that’,
or the value ‘1’. An item is ‘partly met’ if the corresponding solution aspect could
be justified in principle, but without adequately taking the situational context into
account.
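The mapping in Table 8.6 can be written down directly as a small lookup table; the dictionary name is illustrative, not part of any COMET software.

```python
# Rating values (interval scale 0-3) mapped to the levels of work
# process knowledge, following Table 8.6; the value 0 ('in no way met')
# carries no knowledge level.
RATING_TO_KNOWLEDGE = {
    3: "Know Why",   # fully met: solution aspect justified in detail
    2: "Know How",   # partly met: justified, situational context incomplete
    1: "Know That",  # not (yet) met: rules considered but not justified
    0: None,         # in no way met
}
```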
The definition of the three successive and interrelated competence levels on
which the COMET measurement model is based leads to (in test practice) relatively
large intervals in the overall scores (TS).
This means that it is possible for subjects with a higher overall score to be placed
at a lower level of competence. This happens whenever they reach this level at a
higher level of knowledge.
Figure 8.4 shows, for example, that a TS of 45 can mean that a test person or test
group has reached either the competence level ‘procedural competence’ (high) or
the competence level ‘shaping competence’ (low). This differentiating form of
evaluating and representing the competence characteristics depicts the reality of
vocational education and training much more validly and accurately than a score on
a one-dimensional scale.
Fig. 8.5 Standardised subdivision of competence levels into ‘low’, ‘medium’ and ‘high’
The representation of the same test results in the form of a network diagram
(Fig. 8.8) realistically illustrates the characteristics of all eight competence compo-
nents, but not the effect of the successive and interrelated competence levels. This is
why the pointers for the development of the competence levels also appear here as
independent of each other. In contrast, the length of the pointers here represents the
extent of the competence development in the form of scores. The average score for
functional competence (CF) is 14.6 points. The average value of procedural compe-
tence (CP) is 11.7 points and that of shaping competence (CD) is 9.2 points. These
values do not contradict the other forms of representation of occupational
competence.
Modelling the requirements dimension of the COMET competence model
(→ 4.2) is based on
1. the concept of the complete solution of professional tasks,
2. differentiation according to the three levels of successive and interrelated work
process knowledge: action-leading, action-explaining and action-reflecting.
In the scaling model for vocational education and training, the difficulties of the
individual test items are not assigned to the respective competence scale. Instead,
the probable skill values of the competence development in the dimensions of
functional, procedural and shaping competence form three overlapping
distributions. For example, medium functional competence can be defined as a
value between 0.33 and 0.66 on the CF scale (functional competence). Accordingly,
a medium to high ‘functional competence’ value can correspond to a low to
medium ‘procedural competence’ value. Based on more than 7000 test results, the
distribution for the test groups was determined in accordance with the competence
levels and the total score. This distribution forms the basis for the definition of the
levels ‘low’, ‘medium’ and ‘high’.
Fig. 8.8 Average competence profile of a test group of vocational school students (type ‘Vocational
education and training’), n = 27
The representation form of the competence profiles using network diagrams contains
two items of information: the three competence levels and the eight competence
components.
To this end, a measure of the homogeneity or variance of the competence profiles
can be determined independent of the level of the total score: the coefficient of
variation. It is quite possible that the test persons solve a test task at a relatively low
level, but nevertheless with equal consideration of all eight solution aspects. The
coefficient of variation V indicates whether the competence profile is more balanced
or imbalanced. A low coefficient of variation stands for relatively high homogeneity
of the competence profile, while high values stand for low homogeneity (Fig. 8.9).
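The coefficient of variation of a competence profile can be computed directly from the eight criterion scores. A minimal sketch follows; the profile values are invented for illustration.

```python
from statistics import mean, pstdev

def coefficient_of_variation(profile):
    """V = standard deviation / mean of the eight criterion scores.
    A low V indicates a homogeneous (balanced) competence profile,
    a high V an imbalanced one."""
    return pstdev(profile) / mean(profile)

balanced   = [12, 12, 12, 12, 12, 12, 12, 12]  # same level on every criterion
imbalanced = [20, 5, 18, 6, 19, 4, 17, 7]      # same mean, uneven profile

print(coefficient_of_variation(balanced))    # 0.0
print(coefficient_of_variation(imbalanced))  # ~0.55
```

Both profiles have the same mean score (12 points), so only the coefficient of variation distinguishes them.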
An analysis of the network diagrams shows that the criteria ‘environmental’ and
‘social compatibility’ in particular have the lowest significance in vocational train-
ing. In the world of work, however, this would have a considerable impact as,
depending on the task at hand, violations of environmental and social compatibility
regulations can have far-reaching consequences. In the demand for the professional
execution of a work order, ‘professional’ is often associated with the categories
‘specialist’ or ‘technical’ in the context of scholastic learning. In in-company
vocational training, on the other hand, the ‘professional’ category refers to ‘skilled’
work. If the vocational school succeeds in designing vocational education and
training from a work and business process-related perspective, this also entails a
change of perspective in the technical understanding (Bauer, 2006), in line with the
COMET concept of the complete solution of vocational tasks. If, in contrast,
‘professional’ is associated with the specialist sciences, work and business
processes are lost sight of. The consequence is that vocational education and
training moves away from its function of imparting vocational competence
(cf. KMK, 1996).
Conclusion The COMET test procedure enables the representation of competence
not only in the form of competence levels, but also of competence profiles. This is
mainly due to the didactic quality of the test procedure. Teachers and trainers as
well as trainees and students can read directly from the competence profiles which of
Fig. 8.9 Differentiation of the competence profiles according to the total score (ATS) and the
coefficient of variation: (a) E-B, class no. 7, n = 26; (b) E-B, class no. 5, n = 18; (c) E-B, class
no. 24, n = 19; (d) E-B, class no. 23, n = 17 (results COMET Electronics Engineers (Hesse) 2009)
The differences in the vocational school performance of trainees and students are a
well-known phenomenon. It is not uncommon for high-school graduates and young
people without a lower secondary school leaving certificate to learn the same
profession. This tends to occur more frequently in the craft trades where, for
example, a high-school graduate continues his or her training to become a master
craftsman after passing the journeyman’s examination in order to assume
responsibility in the parental company and to be able to act as a trainer. The
heterogeneity of performance in these vocational school classes is therefore very
high. In
occupations under the auspices of the IHK (German Chamber of Industry and
Commerce), the proportion of trainees with university entrance qualifications has
risen significantly in recent years, in line with the motto: ‘First learn a “real”
profession before tackling the jungle of new study courses’. In 2013, for example, 30%
of trainees in IHK occupations in their first year of training had a higher education
entrance qualification (cf. Report on Vocational Education and Training 2014, 28 f.).
The heterogeneous performance structure has less of an impact on trainees in the
training companies. Companies have the opportunity to select applicants who meet
their requirements. This informal selection leads, for example, to the fact that the
majority of apprentices in occupations such as media designer, industrial clerk and
IT occupations are high-school graduates (‘high-school graduate occupations’). In
these occupations, this form of informal selection also tends to reduce the heteroge-
neity of the performance structure in the classes of vocational schools.
In countries with school-based VET systems, a distinction is generally made
between two to three successive and interrelated school-based programmes: voca-
tional colleges, technical colleges, higher technical colleges and, more recently,
so-called ‘vocationally qualifying bachelor’s degree programmes’ based thereon.
If the admission requirements for the vocational track or the academic track for
higher education programmes are controlled by selection and admission regulations,
this reduces the spread of competence development among pupils and students.
The phenomenon of heterogeneity in vocational education and training, the extent
of which has so far been underestimated, will be presented below and analysed on
the basis of empirical results, whereby the degree of heterogeneity measured in
previous COMET projects is shown first. This is followed by an interpretation of the
causes for the heterogeneous performance structure in vocational education and
training programmes and considerations on ‘dealing with heterogeneity’.
The standard method for representing heterogeneity in the COMET project is the use
of percentile bands (Fig. 8.13).
The differences and dispersion in competence levels, determined by scores,
between test subjects or test groups formed according to different characteristics
such as occupations, states, age and prior schooling, provide information on the
degree of heterogeneity assumed in vocational education and training. The percentile
bands also used in the PISA studies can be used as a form of representation.
Representation using percentile bands makes it possible to graphically bundle
three different types of information on different groups (school locations, sectors,
years of training, educational programmes and education systems) (Fig. 8.14). The
centre mark (CM) shows the average value of the groups. Differences in average
performance become visible by comparing the different averages.
Whether these differences are significant can be seen from the grey area around
the average value on the bands, the confidence interval. With a 95% certainty, this is
where the ‘true’ average value lies, i.e. the projection from the respective group to
8.4 Heterogeneity of Professional Competence Development 303
Fig. 8.10 Percentage of trainee electronics technicians in the risk group (NRW project 2013/2014);
values from the figure: class 13 (n = 14, 2nd year): 43% (29.3); class 18 (n = 15, 2nd year): 60% (23.4)
Fig. 8.11 Proportion of trainee electronics technicians at the level of holistic shaping competence;
values from the figure: class 4 (n = 24, 2nd year): 46% (40.1); class 1 (n = 24, 3rd year): 39% (42.1);
3rd year (n = 76): 34% (36.3); 2nd year (n = 94): 21% (32.9); class 7 (n = 26, 2nd year): 15% (35.0)
Fig. 8.12 Comparison of the distribution of competences of industrial clerks (INK) and shipping
clerks (SPKA) (COMET NRW, 2013)
the population. Accordingly, differences between two groups are significant and
most likely not accidental if the average of one band is outside the grey area of
another.
The third important piece of information provided by the percentile bands concerns
the spread of the results, i.e. the distance between weaker and better test results. The
white areas represent the values for 25–50% and 50–75% of a group; this range
includes the values for the half of the test participants grouped around the average
value. Finally, the
outer grey areas contain those cases which form the lower (10–25%) or upper
(75–90%) range. The best and weakest 10% of the results are not captured by
the bands so as not to distort their width by individual outliers. The white part of
the bands (including the grey confidence interval) therefore indicates the range of the
average 50% of the test results. The entire band shows the range of results of 80% of
the participants. The 10% best or worst results are to the right or left of the band.
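The band boundaries described above can be computed from raw scores, for example with Python's statistics module; the function name is an assumption for illustration.

```python
from statistics import mean, quantiles

def percentile_band(scores):
    """The points that define a percentile band: the 10th/25th/75th/90th
    percentiles bounding the grey and white areas, and the group mean
    at the centre mark."""
    q = quantiles(scores, n=100, method="inclusive")  # q[k-1] = k-th percentile
    return {"p10": q[9], "p25": q[24], "mean": mean(scores),
            "p75": q[74], "p90": q[89]}

# Example with the scores 1..100: the band spans 10.9 to 90.1, the inner
# 50% spans 25.75 to 75.25, and the centre mark lies at 50.5.
band = percentile_band(list(range(1, 101)))
```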
To avoid major distortions in the representation of class results, test groups with
fewer than 15 participants are not included in the representation of percentile bands
in this report.
The total scores (TS) and the variation range of the percentile bands can be
represented in the form of learning times and learning time differences (LTD). As
vocational training with a duration of 3–3.5 years corresponds to a maximum of
approximately 70 points, a training duration of 1 year roughly corresponds to a score
of 20 points.
The following represents characteristic percentile bands for different vocational
education and training programmes.
For larger samples, the percentile bands are extended to the 5th and 95th
percentiles.
Example The spread of the competences of the second- and third-year test group of
electronic technicians in industrial engineering (E-B), building and energy engineer-
ing (E-EG), and of the technical college students (electronic technicians) of full-time
and part-time students (F-VZ and F-TZ) is striking in several respects and was
something the responsible project consortium did not expect in this form. This
mainly concerns
1. the extraordinarily wide range of variation (spread) of the competences of the test
participants in the classes. This often amounts to 40 and more points and therefore
corresponds to a learning time difference of two and more years.
2. the large differences in competence levels between the classes. Despite
comparable prior training of the E-B and E-EG trainees, the competence levels
differ, in some cases considerably, from one another. The E-B class with the
weakest performance differs from the one with the highest performance by a
learning time difference of almost 1 year.
3. The formal difference between the qualification levels for initial vocational
education and training and the level of vocational schooling apparently has hardly
any influence on the competence levels measured (Fig. 8.15).
Fig. 8.15 Percentile bands for professional competence across test groups at class level for trainees
(results from 2009)
The evaluation of all empirical values available so far on the spread of the compe-
tence of test persons and test groups results in values from 0 to a maximum of 80.
Theoretically, values of up to 90 are conceivable. In fact, however, values above
80 were only measured very rarely. This also applies to the value ‘0’. If the empirical
values of the spread are plotted as a learning time difference of 0–3 years on the
vertical line and the corresponding average values of the test groups on the horizon-
tal line, this results in test-group-specific patterns for the heterogeneity of the
competence development as well as a characteristic function, with which the depen-
dence of the learning time difference on competence levels can be described.
y = a_i − (a_i/b²) · (x − b)² (a₁ = 2.5; a₂ = 1.5; a₃ = 0.5: the maximum LTD of the
respective pattern; b = the value of x at which the maximum achievable LTD is reached).
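A plausible reading of this characteristic function, assuming an inverted parabola that reaches its maximum LTD a_i at the score x = b, can be sketched as follows (this reading is an assumption, not a verified transcription of the original formula):

```python
def heterogeneity_ltd(x, a, b):
    """Assumed characteristic function: an inverted parabola
    y = a - (a / b**2) * (x - b)**2, written in the algebraically
    equivalent form a * (1 - ((x - b) / b)**2). It reaches its maximum
    LTD `a` at x = b and falls to 0 at x = 0 and x = 2*b."""
    return a * (1 - ((x - b) / b) ** 2)

# With a = 2.5 (maximum LTD of the first pattern) and an assumed peak
# position b = 35 points, the LTD vanishes at the extremes:
print(heterogeneity_ltd(0, 2.5, 35), heterogeneity_ltd(35, 2.5, 35))
```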
Fig. 8.16 Heterogeneity diagram of various occupations (shipping clerks (SPKA), industrial clerks
(IK), electronics technicians in China)
Prior Schooling
The COMET projects confirm that prior schooling has a considerable influence on
the development of heterogeneous performance structures. The composition of
learning groups in VET programmes is a crucial determinant of heterogeneity.
This finding applies to large-scale studies (cf. Heinemann, Maurer, & Rauner,
2011, 150 ff.). On the other hand, this does not permit the conclusion that, for
example, a high proportion of trainees with a lower secondary school leaving
certificate in a class determines the level of competence or that a high degree of
heterogeneity in prior schooling in a class causes a correspondingly high degree of
heterogeneity in the level of competence. As could be shown in this report, classes
with a comparable structure of trainees in the same occupation and at the same
location can achieve very different levels of competence.
The more comprehensively and tightly regulated the admission requirements for
vocational training programmes, the more homogeneous the development of com-
petence structures. Characteristic examples are the vocational programmes of the
upper secondary level, post-secondary and tertiary vocational programmes of the
Chinese vocational training system. In China, admission to the general branch of
upper secondary education is decided by a nationwide test procedure. Students who
do not pass this test are referred to vocational programmes. Access to ‘higher
vocational education’ (at universities) is also regulated by a selection procedure.
These admission and access regulations contribute to the fact that the heterogeneity
of competence development in Chinese vocational education and training at all
qualification levels is low to medium.
According to the results of the COMET projects available to date, the teacher plays a
decisive role in the competence development of trainees and students. Apart from the
curricular structures specific to educational programmes, teachers and trainers are
the most influential factor in professional competence development. This is evident
above all in the fact that the competence levels of learning groups can be very
different despite the same prior schooling and the same training programmes,
without this having an effect on the spread of competence development, insofar as
that spread does not result from the competence level (see above). This
result points to the particular challenge teachers and trainers face in dealing with
heterogeneity.
The degree of heterogeneity depends on the competence level of the learning groups.
In learning groups with a low competence level (TS < 40), the degree of
heterogeneity tends to increase, irrespective of whether the learning groups are
homogeneous or heterogeneous in composition. In learning groups with a high
competence level (TS > 40), the degree of heterogeneity decreases again as the
competence level rises further, in accordance with the function underlying the
heterogeneity diagram.
The research interest in the commitment of trainees and employees consists mainly
in identifying, as clearly as possible, the different fields of reference to which
commitment relates. As depicted in the model description (→ 4.7), three main
factors can be taken into account here: emotional attachment to the organisation
(organisational identity), identification with the occupation (professional identity)
and an abstract willingness to perform that abstracts from concrete work content
(work ethics) (→ 4.7). Some fundamental assumptions and research questions are
reflected in the design logic of the instruments.
It cannot be assumed that such ‘types’ normally occur in pure form; rather, the
various forms of commitment interact with each other. Positive experiences in the
company influence professional commitment and work ethics, while, on the other
hand, it appears difficult to maintain professional commitment in the face of
disappointments with one’s own organisation. Relationships such as these can be
analysed with the help of the instruments.
In addition, these types do not exhaust the possible fields of reference of com-
mitment—the relationship to individual colleagues, teams, certain activities etc. can
also play a major role and must be surveyed separately.
It is of particular interest for vocational education research whether the process of
developing professional identity leads to shifts in the dominant field of motivational
reference. The development of professional identity depends on the subjective
willingness to develop it. Commitment can be generated from the affiliation to the
occupation or the enterprise, or even from the work as such. It can be shown that
these different normative fields of reference of commitment in turn have
repercussions on the development of competence and identity (→ 8.6).
I-C split scales are required to calculate the I-C diagrams. This form of representation
is suitable for a differentiation according to occupations.
Like the competence levels, the commitment scales are also differentiated into
low, medium and high (commitment split scales). To determine the limit values, the
33rd and 66th percentiles of each commitment scale were determined on the basis of
the Bremerhaven study (Heinemann, Maurer, & Rauner, 2009) (n = 1560).
8.5 Measuring Identity and Commitment 313
The commitment split scales are required to calculate the commitment lights. As a
rule, the lights are calculated per occupation and presented in a cross-occupational
diagram (Fig. 8.17).
Diagrams displaying all commitment scales of a profession can also be used
(Fig. 8.18).
Several line diagrams are generated for the commitment progressions. The average
scale values of the total sample are displayed in a line diagram. In five further
diagrams, the average values of the second and third training years are shown for
each commitment scale to enable a comparison in commitment between these two
training sections.
Example The progression of professional identity in industrial/technical
occupations
In the occupational group of the industrial-technical industry, industrial mechan-
ics stand out in comparison with other trainees due to their heterogeneous
314 8 Evaluating and Presenting the Test Results
Fig. 8.17 Example of a commitment light, here for professional commitment, COMET NRW 2014
The z-standardised commitment scales are required for the four-field matrices and
enable a comparison of the different professions. The average value of the total
sample is 0, and the standard deviation is 1. Values >0 therefore mean that the
subgroup with this value has an above-average result. Conversely, values <0 mean
that the subgroup with this value has a below-average result. The further away from
0 the value is, the greater the deviation from the average.
A total of two matrices are created, one for identities (professional and
organisational identity) and one for commitments (professional and organisational
commitment). Work ethics are not taken into account in this evaluation. The
z-standardised average values of these scales are calculated for each occupation.
The results are then presented in a diagram across all occupations (cf. Figs. 8.21 and
8.22).
The formation of a four-field matrix in which the two axes represent professional
and organisational identity results in four fields to which occupations with different
identity profiles can be assigned (Fig. 8.22).
This allows a distinction between occupations whose identity is predominantly
professional or predominantly organisational. This typological distinction also makes
it possible to identify those who identify with both work and company as a whole,
as opposed to the non-identifiers.
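Under the conventions described above (z-standardisation to mean 0 and standard deviation 1 over the total sample; quadrants defined by the signs of the two z-values), the assignment of occupations to the four fields can be sketched as follows. The occupation values are invented; only the standardisation and quadrant logic follow the text:

```python
import numpy as np

def z_standardise(values):
    # transform to mean 0 and standard deviation 1 over the total sample
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std()

def field(professional_z, organisational_z):
    """Four-field matrix: (I) consistently high, (II) professional,
    (III) organisational, (IV) weakly/non-developed identity."""
    if professional_z >= 0 and organisational_z >= 0:
        return "I"
    if professional_z >= 0:
        return "II"
    if organisational_z >= 0:
        return "III"
    return "IV"

# Invented occupation means on both identity scales:
professional = z_standardise([3.9, 3.1, 2.6, 3.4])
organisational = z_standardise([3.2, 3.8, 2.5, 3.0])
fields = [field(p, o) for p, o in zip(professional, organisational)]
```

Each occupation lands in exactly one quadrant; an occupation above the total-sample average on both scales falls into field I.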
The work-oriented trainees/employees have both a professional and an
organisational identity, although it is unclear which of the two is the primary
and dominant one. The graphic representation of the work-oriented trainees
shows the quantitative characteristics of both, i.e. the work-related identity potential
of the occupations and of the corresponding training relationships.
Fig. 8.21 Work-related identities of trainees from both locations by occupation: (I) consistently
high identity, (II) professional identity, (III) organisational identity, (IV) weakly/non-developed
identity (ibid. Fig. 7.26) (Tab. see Annex 5)
These trainees identify equally with their profession and with their company. Here,
occupations with a high potential for identification go hand in hand with training
conditions that promote close ties to the company.
Fig. 8.22 Work-related commitment of trainees from both locations by occupation: (I) consistently
committed, (II) professionally committed, (III) organisationally committed, (IV) weakly/non-
committed (ibid. Fig. 7.27) (Tab. see Annex 5)
The four-field matrix can also be applied to professional and organisational com-
mitment. Here, too, the matrix shows differences (Fig. 8.22). For example, the
occupation of carpenter can be assigned to the professional type of commitment
and the occupation of specialist for driving operations to the organisational type
of commitment.
Consistently Committed
Those who are consistently committed strongly identify with their profession and
their company and have an underlying professional and organisational commitment.
Professionally Committed
Organisationally Committed
The Weakly/Non-Committed
The non-committed type has neither a distinct professional identity nor an emotional
attachment to the company. In this case, the willingness to perform is based on
tightly controlled work implementation (according to detailed instructions) and on
the risks associated with an insufficient willingness to perform, in particular
the risk of losing one’s job and a low wage level. While the company cannot directly
change the identification potential of these occupations, it has numerous
opportunities to strengthen organisational commitment.
Based on an extensive survey of more than 3000 Saxon trainees from more than
70 occupations, the identification of the trainees with their apprenticeship occupa-
tions and companies and their willingness to perform were also examined. Identity
and commitment profiles were determined for different occupational groups. For
these forms of representation, the values were standardised in order to make them
directly comparable. Standardisation leads to a uniform average of zero and a
uniform standard deviation of one for the scale values. A positive average for an
occupational sample therefore means that this group’s values are more pronounced
than those of the overall sample of Saxon trainees. No other form of representation shows the
attractiveness of their occupations from the trainees’ perspective so clearly
(Fig. 8.23).
The I-C network diagrams illustrate both the level and the profile of the I-C
expression.
The I-C profile of the mechatronics engineers lies slightly above the average of the
total sample in almost all identity and commitment dimensions (with the
exception of organisational identity). In contrast, however, the I-C network diagram
of the plant mechanics, which is also balanced, is significantly smaller, as their
values are below average in all five identity and commitment dimensions. The
context data reveal the reasons for these weaknesses in industrial education.
If the values are based on representative surveys, then the I-C profiles display more
than just the attractiveness of the occupations for trainees. The experts from BIBB and
the social partners involved in career development therefore have quantitative and
qualitative data on the identification potential (IP) of occupations at their disposal for
the first time. The IP values indicate whether the career developers, under the
guidance of the responsible BIBB department, have succeeded in developing occu-
pations with which the trainees identify. If the IP values are below average, these
occupations do not have the potential to develop professional competence, motiva-
tion, responsibility and quality awareness (Fig. 8.24).
Fig. 8.23 I-C network diagrams of plant mechanics (n = 37) and mechatronics engineers (n = 108)
(ibid. Fig. 7.15)
Fig. 8.24 I-C network diagrams of glaziers (n = 62) and auto-mechatronics engineers (n = 114)
(ibid. Fig. 7.17)
Fig. 8.25 I-C network diagrams of warehouse logistics specialists (n = 21) and warehouse
specialists (n = 36) (ibid. Fig. 7.23)
Fig. 8.26 I-C profiles, group 6, commercial professions (ibid. Fig. 7.24)
Fig. 8.27 I-C network diagrams of office clerks (n = 315) and real estate agents (n = 57) (ibid.
Fig. 7.25)
The model concept of the connection between identity and commitment in voca-
tional education and training (→ 4.7, 6.4) requires empirical examination so that
trainers and teachers can rely on it in their didactic actions.
A striking but expected aspect is that there is a clear connection between
identities and commitments. As a rule, identification with the occupation is also
transferred to identification with the training company. The same applies to the
relationship between professional and organisational commitment. However, the
form of the four-field matrix shows that the differentiation between both forms of
identity and commitment is necessary to capture the characteristic identity-
commitment profiles.
The degree to which the two identity and commitment scales correspond with
each other can be seen in Fig. 8.28.
There are clear correlations between the identities and commitments. In the
weakest case, professional identity is explained to ‘only’ 32.5% by the values
of organisational identity; the strongest connection exists between professional
and organisational commitment, where the values of professional commitment
explain 47.6% of the values of organisational commitment.
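These "explained to X%" figures are shares of explained variance, i.e. squared Pearson correlations (r²). A short check; the r values below are back-calculated from the percentages in the text and are not reported in the source:

```python
import math

def explained_variance(r):
    """Share of variance one scale explains in another:
    the squared Pearson correlation r**2."""
    return r ** 2

# Back-calculated from the percentages above (not reported directly):
r_weakest = math.sqrt(0.325)    # organisational vs. professional identity
r_strongest = math.sqrt(0.476)  # professional vs. organisational commitment
```

An explained variance of 32.5% thus corresponds to a correlation of roughly r = 0.57, and 47.6% to roughly r = 0.69.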
How the factors of the identity-commitment model are explained in reality by the
simultaneous effect of several context factors is described below. Based
on the survey data, models were created that allow the various commitment aspects
to be explained by the context factors.¹
The identification with the learnt profession provides a motivational basis for getting
involved in one’s profession and acting competently. A high professional identity can
also help to overcome job-related disadvantages such as poor pay and shift
work. In the rationale for the identity-commitment model, it is assumed that the
bond with the learnt occupation can be promoted by both learning venues of
the dual system. While the reputation of the profession can help to create a strong
sense of self-esteem within vocational education and training programmes at
vocational school, the activities belonging to the occupational profile are another
influencing factor. Depending on the profession and values, either the breadth of the
profession or its speciality can help to create a bond with the profession. The level of
the activities could also lead to a highly developed professional identity for the
trainees when compared with their individual expectations.
¹ The following sections are based on the multiple regressions calculated with the SPSS statistics
software.
Fig. 8.28 Strongest and weakest correlations between the commitment scales
Based on the available data, professional identity is, as expected, determined by
the in-company characteristics of training quality (Fig. 8.29). Identification
with the profession is most strongly promoted by independent and autonomous
work. Similarly, training across the entire breadth of the profession,
appreciation of the trainee by employees, and the transfer of responsible tasks
contribute to a high level of professional identity. Other factors explaining
professional identity were found in the working and school climate and in the
content-related cooperation between learning venues. Both the assessments of
the working and school climate confirm the assumption that a professional reputation
or the significance of the profession in the organisational hierarchy of companies
contributes to an identification with the profession. The characteristics of the aspects
‘in-company training quality’ (e.g. training level, independent working and learning,
etc.), ‘working atmosphere’, ‘school atmosphere’ and ‘content-related learning
venue cooperation’ together account for 32% of the variability in the values for
professional identity.
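The figure captions note that these are corrected (adjusted) coefficients of determination, which penalise the number of predictors p relative to the sample size n. A sketch of the standard correction; n and p below are illustrative, not the study's actual values:

```python
def adjusted_r2(r2, n, p):
    """Corrected coefficient of determination:
    1 - (1 - R^2) * (n - 1) / (n - p - 1).
    Balances model fit against model parsimony."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Illustrative only: a raw R^2 of 0.33 with 5 predictors and n = 1000 cases
corrected = adjusted_r2(0.33, n=1000, p=5)
```

The correction always lowers the raw R²; the penalty shrinks as the sample grows relative to the number of predictors.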
A look at the results of the six occupational groups involved shows that the fit of
the model for explaining professional identity is greatest in the industrial-
technical sector, where the context factors explain as much as 46% of the variance
in professional identity. For commercial occupations, however, other
factors seem to influence the development of professional identity. For
example, the global model for Saxon trainees in the commercial sector explains
only 24% of the variance in professional identity, leaving 76% of the
variance unexplained (not shown).
8.6 Identity and Commitment as Determinants of Professional Development 325
Fig. 8.29 Observed values on the Y-axis, predicted by the determinants on the X-axis; determi-
nants of professional identity, R² = 32.1% (This is the corrected coefficient of determination R²,
which takes into account the number of variables included in the model equation. It combines the
requirements of model fit and parsimony.), predictors: quality of training (independence, scope of
the profession and demands of the tasks), working and school atmosphere, content-related learning
venue cooperation (The graphs in the following sections show the extent to which the given
characteristics would predict the commitment scale and what value was actually given by the
trainees. The greater the deviations from the calculated straight line, the less accurately the
characteristics (e.g. work climate, training quality, etc.) predict the corresponding commitment
scale.)
Fig. 8.30 Observed values on the Y-axis, predicted by the determinants on the X-axis; determi-
nants of organisational identity, R² (corrected) = 53.3%, predictors: company climate, training
support, training quality (breadth of profession, independence and demand/level of tasks), teacher
evaluation
The scholastic factor ‘teacher evaluation’ contributes only slightly to the prediction of
organisational identity and has the opposite effect; i.e., the higher the teacher
evaluation, the less pronounced the organisational identity. This surprising result
can be traced back to the fact that trainees may attest their teachers a high level of
subject expertise, but a significantly lower level of profession-related competence, which
relates to the reality of vocational work processes. This means that in the assessment
of training quality, the imparting of theoretical and practical knowledge is experi-
enced as competing points of reference in training: the appreciation of one of the two
poles reduces the appreciation of the opposite pole (Fig. 8.30).
The most important factor in predicting organisational identity is the working
atmosphere. A similarly high importance is attached to the factor of training support
(above all by the trainers). It is therefore the professional and social interactions
between the members of an organisation that lead to an emotional bond with the
company.
This model for explaining the variance of the values for organisational identity
applies particularly to the commercial-technical craft trades and the commercial pro-
fessions. For these occupational groups, the model explains 57% and 58% of the
variance respectively, leaving a comparatively small proportion unexplained. With
an explanatory value of 48%, the model offers the lowest R² (coefficient of determi-
nation) in the general trade sector.
Fig. 8.31 Observed values on the Y-axis, predicted by the determinants on the X-axis; determi-
nants of professional commitment, R² (corrected) = 47.2%, predictors: training quality (indepen-
dence), business process orientation, working atmosphere, training quality (demands and level of
tasks), learning venue cooperation in terms of content, school atmosphere and teacher evaluation
It is equally important for the development of personality that vocational training also
proves to be an ‘education in the medium of the profession’ (Blankertz). This
presupposes that the trainees acquire interrelated knowledge so that they learn to
integrate their training activities into the company’s business processes. This is the
only way to develop a sense of responsibility and quality.
For occupational research, the identification potential of the training professions
is an essential object of research. The results of this research are crucial for the
modernisation and development of professions, since a considerable proportion of
professions have a very low identification potential (Piening and Rauner,
2015a, b, c, d, e, f). This affects the quality of training in these professions.
Measuring professional and organisational identity and the commitment based
thereon are among the instruments of quality assurance and quality development
that can be used to enhance the attractiveness of dual vocational training.
Chapter 9
The Contribution of COMET Competence Diagnostics to Teaching and Learning Research
9.1.1 Introduction
With teaching/learning research, pedagogy—on its way into the system of disciplin-
ary sciences—has gained access to the methods of established research traditions.
Since then, teaching and learning research (cf. Straka, Meyer-Siever, & Rosendahl,
2006) has been shaped by both experimental and quasi-experimental research and
the reduction of the category of education to that of “learning” as a form of
behavioural change (cf. Skowronek, 1969).
For example, didactic research based on conceptual change research has pro-
duced results that have received much attention in educational practice. This mainly
applies to didactics in the natural sciences (cf. Carey, 1985; Posner, Strike, Hewson,
& Gertzog, 1982; Vosniadou, 2008). Waldemar Bauer has presented an analysis on
the significance of conceptual change research as a field of vocational training
research (cf. Bauer, 2013). He concludes that research into professional knowledge
during the process of professional competence development is of fundamental
importance for the didactic actions of teachers: “If TVET teachers or trainers
would have some knowledge with regard to the intuitive concepts of their students,
they would better estimate learning barriers and problems of knowledge acquisition”
(Bauer, 2013, 227 f.). It can be assumed that scientifically founded didactics of
vocational education and training that differentiate between occupations and occu-
pational fields are impossible without profession-specific knowledge research that
clarifies how learners acquire the knowledge of work processes incorporated into
vocational work in the course of their development.
A new field of teaching and learning research was developed in the Collaborative
Research Centre (CRC) 186 “Statuspassagen und Risikolagen im Lebensverlauf”
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_9
332 9 The Contribution of COMET Competence Diagnostics to Teaching and Learning. . .
(Status Passages and Risk Situations in Life Courses). For example, Martin Fischer
and Andreas Witzel show how the qualitative secondary analysis method can be
used to investigate the relationship between professional competence (development)
and professional identity (cf. Fischer & Witzel, 2008). They confirm the thesis that
skilled workers bundle their knowledge through processes of self-organisation
(cf. Heinz, 2002). The differentiation between different forms of professional iden-
tity and between professional and organisational identity leads to the realisation that
organisational identification contributes to limited learning behaviour (cf. Fischer &
Witzel, 2008, 42). This form of teaching-learning research predominantly contrib-
utes to the justification of hypotheses, which can only be tested if both professional
competence and the forms of professional and organisational identity can be empir-
ically recorded (! 7.1, 7.5). The methods developed in CRC 186 for extended
teaching and learning research have also succeeded in further developing the
vocational socialisation research represented above all by Wolfgang Lempert
(1995, 2000) (cf. Heinz, 2002).
The attempts to transfer methods of competence diagnostics to the measurement
of vocational competence based on test methods successfully applied in PISA
research (cf. Baethge, Achtenhagen, Babie, Baethge-Kinsky, & Weber, 2006;
Nickolaus, Gschwendtner, & Abele, 2009) have enriched teaching and learning
research in vocational education and training to the extent that a wide range of
experience has been gained on the limited scope of standard-oriented test methods in
vocational education and training. This is particularly evident at the level of testing
instruments, which ignore a central characteristic element of professional activity:
The solution of professional tasks is always confronted with a solution space defined
by the context of operational business processes, which facilitates a decision
between a—usually large—variety of possible solutions. All methods of empirically
recording occupational competence on the basis of standard-oriented test tasks are,
therefore, ruled out, as their solutions can only be evaluated as right or wrong
(cf. Rademacker, 1975). The basis for competence diagnostics in vocational educa-
tion and training are complex and open test tasks (cf. Haasler, Heinemann, Rauner,
Grollmann, & Martens, 2009, 103 ff.). In this context, Martens and Rost point out
that a fundamental distinction must be made between a difficulty model and a
capability model (cf. Martens & Rost, 2009).
learning field concept” (Scholz, 2015, 155; cf. the contributions in Chap. 2
“Erfassen beruflicher Kompetenz: Erwartungen, Ziele und Erfahrungen der
Berufsbildungspraxis” (Capturing vocational competence: expectations, goals
and experiences of vocational training practice) in: Fischer et al., 2015a and
Katzenmeyer et al., 2009).
The feasibility study on the application of COMET methods for the design of
vocational examinations (cf. Rauner, 2015b) and the first pilot studies carried out on
this subject (cf. Gäumann-Felix & Hofer, 2015) open up a field of teaching-learning
research with considerable potential for developing the quality of vocational educa-
tion and training. This assessment is based on the insight that examinations as the
“secret curriculum” determine the quality of vocational education and training like
no other element of control and organisation.
This is directly related to developing new concepts for improving cooperation
between learning locations (cf. Rauner, 2017, Ch. 4.3).
The verification of the hypothesis that a cross-venue competence model and the
examinations based thereon have the potential to improve cooperation between
learning venues poses a challenge for vocational education and training research
and planning, as a confirmation of this hypothesis could “solve”—or at least
alleviate—a problem that has been pending for decades. A comparison of dual-
cooperative vocational education and training in Switzerland with the dual voca-
tional education and training variant in Germany is possible on the basis of the data
from the COMET projects on nursing training in Switzerland and the European
“COMCARE” project (cf. Fischer, 2013; Fischer, Hauschildt, Heinemann, &
Schumacher, 2015; Gäumann-Felix & Hofer, 2015).
Fig. 9.2 COMET NRW 2013, Forwarding and logistics clerks, Task 3: Planning a warehouse
logistics project (TS = 43.33; V = 0.54)
The remarkable aspect of this result is that it was identified by the very teachers
whose own competence profile was the cause of the one-sided competence
profile of their pupils. The one-day training had enabled them to identify this
competence profile of their pupils, and therefore also their own. As the project
progressed, the new understanding of the subject among the teachers involved in the
evaluation also stabilised in the classroom. This example points to a new field of
teaching-learning research.
The results available so far on the connection between teaching and learning
initially only show that the new quality of research methods leads to a considerable
expansion of teaching-learning research. Furthermore, a differentiation of the obser-
vation and rating procedures, with which the didactic actions of teachers can be
validly and reliably recorded (Sects. 10.7 and 10.8), holds new possibilities for
teaching-learning research on the connection between vocational competence devel-
opment and teacher competence.
Trainees are interested in attractive professions, in other words, those with which
they can identify. This can be linked to a variety of hypotheses and questions
examined in studies on the vocational socialisation of trainees (cf. Brown, Kirpal,
& Rauner, 2007; Heinz, 1995; Lempert, 2000).
One of the guiding principles here is to go beyond the extensive research
activities on professional socialisation and the development of professional identity.
The strength of the previous research is impressively demonstrated by the richness and
quality of its findings. This is made clear by the discovery of social development tendencies, which
are reflected in changed structures of professional socialisation. For instance, Walter
Heinz identifies a fundamental structural change “from the practice of work ethics
through external control within the framework of regulated employment relation-
ships to the self-regulation of vocational learning processes within the framework of
risky employment processes” (Heinz, 2012, 324). And Baethge assumes a
structural weakening of occupation-based work to draw conclusions for vocational
training. Wolfgang Lempert develops methods to sensitise students for the teaching
profession at vocational schools to the processes of vocational socialisation. This
requires an examination of individual professional biographies and training courses
(cf. Lempert, 2006, Chap. 7).
What social science research on vocational socialisation and the development of
vocational identity has in common is that vocational training research, which is
concerned with career development and the design of training regulations and which,
in this context, deals with the identification potential of professions and their
attractiveness for trainees, plays a subordinate role. This is the central focus of
research and development tasks that deal with the connection between professional
and organisational identity and the related professional commitment.
Anglo-Saxon commitment research is oriented towards the interest in determin-
ing how operational commitment can be increased (cf. Heinemann & Rauner, 2008).
The survey of more than 4000 trainees shows that this form of vocational research
has the potential to address development-oriented issues to increase the attractive-
ness of training professions in view of the shortage of skilled workers in the
intermediate employment sector (cf. Rauner et al., 2015b). This includes questions
regarding
• The content and quality profiles of the professions
• The sense in differentiating occupational profiles according to specialisations and
training priorities
• The development of design criteria for occupational profiles and training
regulations
• The establishment of vocational fields as a basis for vocational specialisations as
study subjects for vocational schoolteachers
The empirical data on the development of professional competence and profes-
sional identity/professional commitment collected in numerous COMET projects
make it possible to examine how the development of competence and identity
correlate with one another on a job-specific basis, which shows that there is an
existing correlation as postulated by Blankertz. The investigative instrumentation
The data of the context surveys are available for the interpretation of the test results
on competence development, offering the opportunity to identify indicators of
competence development.
This field of research is primarily concerned with the elucidation of the funda-
mental theoretical contexts of learning. There is, therefore, a clear preference for
experimental and quasi-experimental research designs in the practice of teaching-
learning research.
If, for example, the influence of test motivation on the results of competence
diagnostics is investigated, the overall sample is usually used as the basis. Both the
class-specific and the occupation-specific characteristics of the test motivation are
then not taken into account. The competence of the test participants appears exclu-
sively as an individual quantity and not—which would actually be required—as a
quantity which is decisively shaped by the learning environments of the classes and
their teachers. If both categories were correlated on the basis of individual data, the
decisive class effect would be missing, an effect which explains 30–60% of the
variance in competence development.
If, on the other hand, the average competence development of the classes (of a
profession) and the average motivation of these classes are correlated, then this can
help to show whether and to what extent the two values are interrelated.
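The aggregation step proposed here, averaging to class level before correlating, can be sketched as follows. All records are invented; only the aggregate-then-correlate logic follows the text:

```python
import statistics
from collections import defaultdict

def class_means(records):
    """Average individual values per class so that the class-level
    effect is retained in the correlation."""
    groups = defaultdict(list)
    for class_id, value in records:
        groups[class_id].append(value)
    return {c: statistics.mean(v) for c, v in groups.items()}

def pearson(xs, ys):
    # plain Pearson correlation over paired class means
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented per-trainee (class, score) records:
competence = [("A", 40), ("A", 50), ("B", 55), ("B", 65), ("C", 70), ("C", 80)]
motivation = [("A", 2.0), ("A", 2.4), ("B", 3.0), ("B", 3.2), ("C", 3.8), ("C", 4.0)]

comp = class_means(competence)
moti = class_means(motivation)
classes = sorted(comp)
r = pearson([comp[c] for c in classes], [moti[c] for c in classes])
```

Correlating the class means keeps the class as the unit of analysis, which is exactly what is lost when individual-level data are pooled.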
Large-scale analyses are nevertheless necessary whenever fundamental relationships
are to be investigated. This was shown using the example of the transfer of problem-
solving patterns from teachers/lecturers.
The observation model has led to surprisingly high levels of interrater reliability among
teacher trainers (→ 10.7, 10.8).
For class observation, in which several teachers/lecturers always take part, this is
a prerequisite for teachers/lecturers to be able to assess at a high level of agreement.
This result suggests that the COMET competence and measurement model should be
introduced with a corresponding extension to include the observation model for the
second state examination.
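Interrater reliability can be quantified in several ways; one common chance-corrected statistic for two raters is Cohen's kappa, sketched below. The COMET studies report reliability for whole rater groups, so this two-rater version is only an illustrative simplification, and the ratings are invented:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(counts_a[k] * counts_b[k]
                   for k in set(counts_a) | set(counts_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented ratings of four solution items on a 0/1 scale:
rater_1 = [1, 1, 0, 0]
rater_2 = [1, 0, 0, 0]
kappa = cohens_kappa(rater_1, rater_2)
```

Identical rating vectors yield a kappa of 1; the single disagreement in this invented example yields 0.5.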
Perspectives
The I-C model and the measurement methods based on it already provide instruments
that make it possible to examine this second dimension of career development in
detail. In this case, the decisive context variable is the profession.
The occupation-specific values—for example in the form of I-C profiles—show the
identification potential of professions. The identification potentials are indicators for
the attractiveness of the professions. In connection with the I-C profiles, the experts
involved in career development, therefore, have evidence-based data on the strengths
and weaknesses of the occupational profiles (from the perspective of the trainees).
The question remains as to how the identification potential of professions is
influenced by the conditions of the respective training companies and whether
there are regional peculiarities.
Initial analyses of the relationship between professional profiles and training
quality show to what extent the development of professional identity and
In the mid-1980s, the ability to (co-)design work and technology in social, ecological
and economic responsibility was developed as a new guiding principle for voca-
tional education and training (cf. Expert Commission for Work and Technology
1988, Chap. III, 4.; Rauner, 1988) and tested in numerous pilot projects as the basis
for the organisation and design of vocational education and training processes.¹
Horst Kern and Michael Schumann supported this development with their book
project “Das Ende der Arbeitsteilung?” (“The End of the Division of Labour?”), in
which they summarised the results of the industrial sociological qualification
research of the Sociological Research Institute Göttingen (SOFI) (cf. Kern &
Schumann, 1984). The Enquete Commission of the German Bundestag (11th par-
liamentary term) “Future Education Policy—Education 2000” takes up these trends
in research and, in its final report, repeatedly emphasises the “change of perspective
from an overly narrow orientation towards adaptation to an active participation in
shaping the future society [...] and the world of work as a central education policy
orientation” (1990, 5, 20, 28). It, therefore, takes up the essential moments of the
expert hearing on the “structural change of work and occupation and its relationship
to education and training with special consideration of the flexibility aspect”
(15.02.1989) of the expert opinion on “design-oriented vocational training” submit-
ted by the ITB (15.02.1989). In this regard, “Design competence” is also expressly
¹ These primarily include the dual model experiment (for both learning locations) “Business and
work-oriented, dual-cooperative training in selected industrial professions with optional advanced
technical college entrance qualification (GAB)” (cf. Bremer & Jagla, 2000), the dual model
experiment “Design-oriented vocational training with the learning-venue network of SMEs and
vocational schools in the Wilhelmshaven region (GOLO)” (cf. Howe & Heermeier, 1999) and the
model experiment “Innovation at the Vocational School” (NRW) (cf. Heidegger, Adolph, & Laske,
1997).
9.2 Professional Competence and Work Ethic 343
Fig. 9.3 Four characteristics of lean production, including learning within the work process (ibid.,
82)
required for technical education (ibid., 30). The Commission proposes that the
Vocational Training Act should include a corresponding educational mandate.2
Only with the “International Motor Vehicle Program” (IMVP), the largest
research project on industrial development ever carried out by MIT (Massachusetts
Institute of Technology), has the change from scientific management (Taylorism) to
“lean production”—to an organisation of entrepreneurial processes oriented towards
operational business processes—been impressively demonstrated using the example
of the international automotive industry. The drama of this structural change is
highlighted by one research result in particular: the labour productivity of Japanese
automobile companies (which had developed and introduced the concept of lean
production) was twice as high as that of the US and European automobile industries
(cf. Womack, Jones, & Roos, 1990, 9). The central factors of lean production
are above all the transfer of competences and responsibility to the directly value-
adding work processes and the reduction of horizontal and vertical division of
labour, combined with the introduction of flat company hierarchies (Fig. 9.3).
This study was of fundamental importance for vocational education and training.
It marks with great clarity the replacement of industrial vocational training, which is
characterised by scientific management (scientifically founded and practically tested
by F. W. Taylor), by vocational training which is oriented towards the guiding principle
of co-designing the world of work. Lean production is characterised by a shift of
competences, responsibility and quality assurance tasks to directly value-adding
work processes and a clear reduction in the horizontal and vertical division of labour.
Some selected research results from the study presented by MIT in 1990 show the
great challenge for vocational education and training that is oriented towards helping
to shape the world of work. This particularly applies to the very large number of
suggestions for improvement made by employees and the reflection of work expe-
rience as a central element of learning in the work process (Fig. 9.3).
Vocational training research identified a close connection between the modern-
isation of companies through the introduction of business process-oriented
2 Anchoring this recommendation in the Vocational Training Act (BBiG) failed because of the
cultural sovereignty of the federal states. Therefore, the BBiG is also assigned to the “Economic
Constitution”.
344 9 The Contribution of COMET Competence Diagnostics to Teaching and Learning. . .
3 The KMK later extended this central idea to include the aspect of “economic” responsibility
(cf. KMK, 2000, 8).
(Figure labels: “Employability”, “Shaping Competence”)
“Learning within the work process” poses a riddle: How does one acquire profes-
sional skills without first acquiring the corresponding theoretical knowledge? The
answer: Beginners in a profession become experts by doing what they want to learn.
Trainers support learners by confronting them with work situations that present them
with a challenge, with which they have to cope.
Fig. 9.4 Competence profiles with different competence levels and degrees of homogeneity
Examples
In these examples, the trainees of the medical assistants (MFA) and the industrial
clerks (INK) achieve comparably high competence levels of TS = 54.9 and
TS = 46.5 respectively, in contrast to the shipping clerks (SPKA). Their competence
profiles are (very) homogeneous, with V = 0.18 for the MFAs and V = 0.19 for the
INKs. With V = 0.31, the competence profile of the SPKAs is, despite a similarly
high competence level of TS = 44.0, significantly less homogeneous. If we look at
the characteristics of the competence profiles, we can see the cause of the inhomo-
geneity of the SPKA competence profile: the sub-competences K6 (environmental
compatibility), K7 (social compatibility) and K8 (creativity) are underdeveloped
compared to the other sub-competences. The competence profiles of the INKs and
MFAs show that the trainees are not only able to solve complex professional tasks
completely, but also to consider all (!) solution criteria at a high level of knowledge.
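The two measures used here can be sketched in a few lines. This is an illustrative approximation, not the official COMET scoring code: the total score is simplified to the mean of the eight sub-competence scores, V is the coefficient of variation (sample standard deviation divided by the mean), and all score values below are invented.

```python
from statistics import mean, stdev

def profile_stats(scores):
    """Total score proxy (mean of the eight sub-competence scores K1..K8)
    and homogeneity V = sample standard deviation / mean."""
    ts = mean(scores)
    return ts, stdev(scores) / ts

# Invented profiles: one balanced, one with K6-K8 underdeveloped.
balanced = [52, 55, 57, 54, 56, 53, 55, 57]
unbalanced = [55, 54, 56, 53, 52, 25, 27, 30]

for label, profile in (("balanced", balanced), ("unbalanced", unbalanced)):
    ts, v = profile_stats(profile)
    print(f"{label}: TS = {ts:.1f}, V = {v:.2f}")
```

The unbalanced profile mirrors the SPKA pattern described above: roughly the same mean level, but a markedly higher V because three sub-competences lag behind the rest.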
All four INK classes participating in the test achieve (in the second main test)
significantly higher and more homogeneous levels of competence than the test
participants in the industrial and technical occupations.
This also refutes the widespread prejudice that commercial vocational training
requires subject-specific (semi-academic) training rather than training based on
learning fields. Now the opposite has come to light! Only the MFA trainees and
the nursing students at the Swiss colleges of higher education have demonstrated
such a consistent and successful implementation of the learning field concept as in
the sub-projects MFA and INK in the COMET NRW project. This also indicates that
these project groups have succeeded in consistently implementing the criteria for
(test) task development on the basis of the pre-test results. This becomes clear when
comparing the competence profiles of the INK pre-test participants (tasks 1, 4, 5, 6)
with the competence profiles of the same (revised) tasks in the main test (Fig. 9.5).
Within the framework of competence diagnostics in vocational training, the
pre-test has the function of testing the drafts for test items developed by the
vocational project groups (usually teachers) and of selecting and revising items
which are used in the test. In a one-day training session, the developers of the test
items acquire the ability to evaluate the item solutions of the test participants (rater
training). The review of interrater reliability—the degree to which their assessment is
consistent—shows that this is regularly successful.
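The interrater check can be illustrated with a minimal sketch. The assumptions are mine: two raters, invented ratings on a 0–3 scale for ten rating items, and Pearson's r as a simple stand-in consistency measure rather than the reliability statistics actually used in the COMET projects.

```python
# Hypothetical interrater reliability sketch: two raters score the same
# solutions; Pearson's r between their rating vectors indicates consistency.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

rater_1 = [3, 2, 2, 1, 3, 0, 2, 3, 1, 2]  # invented 0-3 item ratings
rater_2 = [3, 2, 1, 1, 3, 0, 2, 2, 1, 2]

print(f"interrater r = {pearson_r(rater_1, rater_2):.2f}")
```

A value near 1 would indicate that the rater training produced consistent assessments; in practice one would compute this over all rated solutions and all rater pairs.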
The drafts of the test items represent the technical understanding and problem-
solving patterns of the teachers at the beginning of a project (prior to rater training).
Fig. 9.5 Comparison of the competence profiles of the selected test items in the pre-test and in the
main test

Above all, the task profiles show whether the eight solution criteria were taken into
account by the development teams in the situation descriptions of the test tasks, and
if so, with what significance. The competence profiles of the test items, therefore,
represent not only the competence characteristics of the pre-test participants, but also
those of the teachers (developers of the test items) (Fig. 9.5).
Fig. 9.6 Correlation (r = 0.64) between V and TS in 28 E-B and E-EG classes of the (Hesse)
Electronics Technicians project
The reflected balancing of the criteria relevant for the solution or processing of a
professional task is always associated with value decisions: Sustainability, function-
ality, environmental and social compatibility must be balanced against each other in
relation to the situation. Professionals who plan and carry out their work tasks
competently are, therefore, inevitably involved in the responsible balancing of
values. Professional competence and professional work ethic are, therefore, inextri-
cably linked. Matthew Crawford explained this thesis with an example from his
motorcycle repair shop: “Rattling pistons (on a motorcycle) can indeed sound like
too much valve clearance, which is why a good mechanic must always be attentive
and keep in mind the possibility that he is pursuing the wrong hypothesis. This is an
ethical virtue” (Crawford, 2016, 132). In general, he comes to the conclusion: “In
contrast to the assessment of cognitive psychologists (or better said: outside the
scope of their discipline defined by them) this cognitive competence—to ponder
one’s own way of thinking—seems to spring from a moral quality” (ibid., 131).
The COMET competence model and the competence profiles recorded on this
basis illustrate what this insight means in detail.
The consideration of the highest possible functionality and at the same time an
equally high utility value in the task solution—within a given cost framework—as
well as the inclusion of the regulations for environmental and social compatibility
refer to the complex structure of responsibilities that professional specialists cannot
avoid. This ranges from informative consulting for customers all the way to dealing
with conflicts with the client, when specialists are confronted with the unrealistic or
irresponsible requirements of their customers.
Professional work ethic forms with the development of professional competence
and professional identity and leads to a certain tension between professional and
organisational identity. For example, a sense of professional responsibility—as an
expression of work ethic—can conflict with the company’s business interests and,
therefore, with its own organisational identity when it comes to carrying out a
business assignment. Professional competence at the level of action-reflecting
knowledge is, therefore, an essential prerequisite for responsible professional action
and, therefore, for professional shaping competence.
In this context, Helmut Heid rightly criticises the mediation of abstract, econom-
ically desirable, moral values—detached from the content of vocational education
and training: In the “personal and qualificational statements about the increasing
importance of moral components of desired qualifications [...]—provided that the
values are ‘purified’ or separated from their contents in prevailing debates—the
predominating abstracts include willingness to perform, sense of responsibility,
ability to adapt, criticise and cooperate and other ‘virtues’ apostrophised as simple
‘key qualifications’” (Heid, 2006, 40 f.). If one “cleanses” the values to be taken into
account when solving professional tasks, such as utility value, environmental and
social compatibility, etc., then one splits professional competence into pure profes-
sional competence and a moral education abstracted from the real work processes, as
it is justifiably established, for example, in Ethics as a school subject. The didactics
of moral action in profession and economy and the relevant research were based on
the moral psychology of Kohlberg (1996). With their categories and models
abstracting from the content of VET, they contribute to widening the distance
from the central idea of vocational education and training oriented towards learning
fields.
Alexander Lenger suggests the opposite approach—but for higher education. The
study of economics should be guided by the insights gained in the training of social
workers. He refers to Becker-Lenz and Müller-Hermann (2013), who advocate the
position that ethical competence is an integral part of technical competence
(cf. Lenger, 2016, 168).
If one follows the proposals for the vocationalisation of higher education as set
out in the Bologna reform,4 it would be consistent to regulate “higher vocational
education” in an amended Vocational Training Act. This would have the advantage
that the job descriptions and curricula would be developed with the significant
participation of experts from the social partners and the responsible federal minis-
tries. This highlights a previously rarely clarified weakness in the understanding of
science. In the chapter on “University and the Scientific System” written by Jürgen
Klüver in the Encyclopaedia of Educational Science, the basic principles of scientific
research and teaching are presented with rare clarity. The text, written in 1995, also
shows what remains of the scientific ideal of the 1990s.
“The special aspect of university when compared to all other educational institu-
tions is that it is fundamentally concerned with generating science. University
didactics must therefore be above all science didactics. [...] The educational function
of the university clearly recedes in the classical self-understanding of the university
[...] and can only be understood [...] in its specific form if it is understood as largely
determined by the respective scientific discipline.” Klüver then refers to the connec-
tion between research and teaching: “To this end, the student is introduced to the
most important sub-areas, basic concepts, procedures, theories and assured results of
this discipline” (Klüver, 1995, 78, 84).5
The German Research Foundation (DFG) and the German Science Council
(Wissenschaftsrat), as the entities responsible for implementing the Excellence
4 The USA has a long tradition of professionalising higher education. It is primarily fed by the
“College for All” policy (cf. Wyman, 2015).
5 A way out of the difficulty of integrating vocational and higher education into a modern
educational architecture and thereby solving the tiresome problem of permeability is offered by an
architecture of “parallel educational paths”, with an independent and consistent dual educational
path “from apprentice to journeyman to master (professional)” for the qualification of specialists
and managers and a parallel “excellent” academic-scientific educational path, which could again
devote itself to its actual task: the increase of scientific knowledge in the structure of the further
differentiating scientific disciplines (cf. Rauner, 2017, 2018a).
NRW project with its two commercial professions. This would also eliminate the
widespread practice of integrating business ethics into the university curriculum of
economics. Like the ethics of technology, business ethics is a subject of applied
philosophy (cf. Pies, 2016). In project studies, it is of course a good idea to interrelate
the findings and orientations of business ethics, economics and other subjects.
In vocational education and training, it could be possible to orient vocational
learning towards learning fields and vocational design competence on the basis of
vocational specialisations and vocational didactics as well as vocational scientific
research.
9.2.5 Conclusion
The COMET test procedure enables the representation of test results in the form of
competence profiles.
The eight-dimensional competence profiles, therefore, indicate the level at which
professional competence with its sub-competences is achieved and the technical
problem-solving patterns of the test participants. The competence profiles show
whether they have reached employability. The competence profiles represent
the weighting of the values incorporated in the eight sub-competences and, therefore,
the expression of the vocational competence and work ethic of the skilled workers.
The problem-solving patterns of the test participants are, therefore, also the patterns
of their professional ethics. The coefficient of variation V can be used to quantify the
degree of homogeneity of competence profiles and, therefore, also the extent of
professional ethics. This provides competence diagnostics with an instrument to very
precisely and vividly examine the extent to which vocational education and training
succeeds in implementing the guiding principle of vocational education and training:
“the ability to help shape the world of work in a socially, ecologically and
economically responsible manner”.
Table 9.2 Participation in the context survey by profession, COMET NRW second test time

Profession                                                  Number of participants (second test time)
Auto-mechatronics engineer                                  240
Industrial mechanic                                         n/a (insufficient number of participants)
Electronics technician for industrial engineering           64
Electronics technician for energy and building technology   128
Medical assistant                                           140
Carpenter                                                   62
Forwarding and logistics clerks                             89
Industrial clerks                                           63
On the basis of the data from the COMET NRW project (COMET Data Report
2015), it was examined how the I-C dimensions correlate with the development of
professional competence.
More than 1000 trainees took part in the COMET NRW project (at the second test
time). The majority of the test participants also took part in the context survey (Table 9.2).
The total scores (TS) of the competence measurement and the data from the I-C
survey of trainees from eight professions were selected as measurement variables.
The following data were available:
• The TS mean values for the vocational school classes of the eight training
occupations involved in the COMET NRW project
• The individual assessments of the trainees on the dimensions of the I-C model
The calculation of the correlation values is profession-specific. A cross-professional
calculation of the correlation coefficients between professional competence and the
dimensions of the I-C model is not possible because the competence characteristics
are specific to each profession. The training professions differ
in their level of demand, with the indicator primarily being the previous schooling of
the trainees. An equally important difference between professions lies in the
different skills that are required for successful training, such as linguistic,
mathematical-scientific or creative-artistic skills (cf. Gardner, 1999). It is, therefore,
not possible to determine a cross-professional level of difficulty for test items. For
the calculation of the correlation coefficients, this means determining the profession-
specific correlations. In a second step, the results can then be compared with each
other across different professions.
9.3 Professional Identity and Competence: An Inseparable Link 357

If the class-specific competence and identity/commitment values are correlated,
the points in a scatter diagram map the values of the competence characteristics (TS)
as the independent variables and the values of the dimensions of the I-C model as the
dependent variables. Each point of the scatter diagram represents the two dimensions
of a (school) class to be correlated and, therefore, also its specific learning environ-
ment. A database based on the competence and identity values of all test participants
would not allow the calculation of correlations, as the decisive determinant of
competence development, the learning environment of the specific classes, could
then not be taken into account.
The correlation values confirm the hypothesis that the development of profes-
sional competence and identity (as well as professional commitment) are more
or less closely interwoven.
Based on the class mean values, there are very significant medium to very high
correlations between professional competence development and professional and
organisational identity and the willingness to perform based thereon.
This connects to a number of other sub-results. Professions with below-average
identification potential are characterised by underdeveloped professional identity
and competence development and correspondingly low professional and
organisational commitment (Table 9.3).
If the two electronics professions are combined, there is a very significant high
correlation between identification with the profession and the training company
(r = 0.822; 0.832). Professional/organisational identity correlates highly with
organisational and professional commitment (r = 0.514; r = 0.8).
For electronics technicians, the hypothesis that there is a close connection
between competence and identity can be confirmed.
Car Mechatronics
Training to become a car mechatronics technician results in a different I-C pattern (Fig. 9.9).
The professional competence of car mechatronics trainees correlates, highly
significantly and at a medium to high level, with their professional commitment
(r = 0.473) and with their work ethic (r = 0.518). There is a rather low correlation
between professional competence and organisational identity.
For the medical assistants, as for the carpenters, medium to high correlations result
for all dimensions of the I-C model on the basis of the class averages. The highest
correlation between professional competence and professional identity is also found
here: r = 0.668.
The realisation that every vocational training always has two objectives, the devel-
opment of professional competence and the development of professional identity
and the willingness to perform based thereon, is all too often overlooked.
A high level of professional competence combined with a lack of professional
commitment is the result of vocational training that has fallen short of one of its
goals. Conversely, however, skilled workers with a very high level of occupational
commitment but a lack of professional competence are a risk in the work process. It
is, therefore, important to convey a balanced relationship between competence and commitment.
It is a great advantage for the development of professional identity if pupils are
given the opportunity from the beginning of their school years to prepare themselves
thoroughly for their career choice and if they then also have the chance to be
trained in their desired profession. Training in the desired profession strengthens the
development of professional identity and a sense of responsibility and quality.
In addition to the small group of virtually timeless, traditional craft occupations, the
future belongs above all to the broadband core professions (cf. Rauner, 1988; KMK,
1996) with their open-development professional profiles. The principle of speciali-
sation and differentiation of professional profiles according to specialisations should
be replaced by the principle of exemplarity. The application-oriented design of open
professional profiles is the responsibility of the regional vocational training dialogue,
so that the local and regional fields of application of companies can be better
integrated into vocational training.
The emotional bond between specialists and companies has diminished in recent
decades. This is the result of successful flexibilisation of the labour market. It is
precisely for this reason that a cooperative training and working atmosphere in
companies plays an outstanding role in successful in-company vocational training.
The high values measured in this study for work ethic in numerous training
professions require critical reflection in and with the training companies. Work
ethic is defined here as the willingness to follow the detailed instructions of superiors
without understanding or questioning the significance of the work tasks for the
company’s operations. These guiding principles of the tradition of Taylorist working
structures should be a thing of the past. With the operational organisational struc-
tures aligned to the operational business processes, the aim is to introduce flat
hierarchies in the employment structure and, therefore, also to shift competences
and responsibilities to the directly value-adding processes. This applies in particular
to the training of skilled workers. They learn to think in business processes so that
they understand what they are doing and develop the ability to take on quality
assurance tasks—according to the guiding principle of modern organisational devel-
opment: producing quality instead of controlling it.
The fact that the quality of vocational education and training has an impact on
competence development seems so obvious that it appears not to call for an
empirical examination of this correlation. If we ask more precisely, for
example, about quality criteria in vocational education and training that can be
used to distinguish the influence of learning venues, the differences between training
professions and the competing forms of training at the same level or at levels that
build on one another, then we discover only a few empirical findings. This is also due
to the weaknesses in competence diagnostics, which have only been overcome in
recent years (cf. Fisher et al., 2015a), and in research on the quality of vocational
training.
In the COMET projects, test participants are regularly questioned about their
assessment of training quality as part of the context analyses in order to obtain data
for interpreting the test results. An analysis of the relationship between the assess-
ment of the quality of dual vocational training by trainees and the development of
skills must take account of the fact that the quality of training is assessed from the
trainees’ subjective perspective. The importance of this factor results from the fact
that the trainees state how they experience the quality of the training. The objecti-
fying element of the survey is based on scales that were developed to assess the
quality of training at the two learning venues of dual vocational education and
training and on learning location cooperation (Tables 7.18 and 7.19). As it is
assumed that the subjective state of mind of learners in their training influences
their learning behaviour, this suggests that the connection between the subjective
assessment of training quality by trainees and the objective values of competence
measurement needs to be clarified. This does not rule out examining the quality of
training according to objective quality criteria.
These studies are based on the data on the development of professional competence
collected using COMET competence diagnostics methods within the framework of
the COMET NRW project. Extensive data on the quality of vocational education and
training—from the trainees’ perspective—were collected in the same project.
As part of the COMET (NRW) project, around 1000 second- and third-year trainees
from eight training professions in two consecutive training years (2013, 2014/15)
were asked about the quality of their training.
The assessment of training quality based on a survey of trainees and vocational
school students (dual vocational schools) refers to the two learning venues of dual
vocational training and learning venue cooperation (see methodological instruments
pp. 230–233).
9.4 Training Qualities and Competence Development 363
To illustrate whether and how test participants are able to objectively assess the
quality of vocational education and training, competence profiles of classes and
professions are compared with the corresponding quality profiles.
The competence profiles and quality diagrams of the eight professions of the
COMET NRW project (N = 700, second test time) are compared in Fig. 9.12.
The competence levels and the homogeneity of the competence profiles show
considerable differences between the professions. The total score differs between a
TS of 25.6 for electronics technicians for energy and building technology and a TS
of 54.9 for medical assistants. This result also confirms the finding that, with a high
probability, the degree of homogeneity of the competence profiles increases with an
increasing competence level (→ 9.2.3).
It is noticeable that the test participants—despite the large differences in the
competence characteristics—assess the quality of their training in the eight pro-
fessions in roughly the same order of magnitude. The quality diagrams differ only in
the weighting of individual quality criteria. The trainees tend to differentiate between
the eight quality criteria when weighing them up, so that profession-characteristic
diagrams are produced. This applies, for example, to the different assessment of
school-based and in-company training. For example, trainees in the industrial and
technical professions of electronics technician for industrial engineering and indus-
trial mechanic have a clear preference for the company as a place of learning. By
contrast, the differences in the assessment of learning venues are less pronounced
among trainees in the skilled trades and in the three service professions (the latter
with significantly higher competence values). There is broad consensus among the
trainees of all eight professions that learning venue cooperation is the decisive
weakness of dual vocational training. They make a very clear distinction between
the contextual and structural dimensions of learning venue cooperation.
Fig. 9.12 For comparison: Competence profiles and quality diagrams for eight occupations
(Rauner, 2018b, 66)

In the assessment of in-company training quality, the correlation values for professional
competence are significantly higher than those for school-based learning. A
characteristic feature of dual vocational education and training is that trainees are
unanimously convinced that learning venue cooperation—coordinated learning at
the company where training is provided and at the vocational school—hardly
contributes to the content-related quality of their training. On the other hand, they
are convinced that the low quality of the structure of learning venue cooperation
impairs their competence development (see Fig. 9.13).
Fig. 9.13 Correlation coefficient r (Pearson); **: correlation is significant at the 0.01 level
(two-sided), total score (competence level)—quality criteria

If, on the other hand, trainees succeed in gaining insights into the significance of
their activities for in-company business processes within the framework of
in-company training, this has a positive effect on their competence development
(r = 0.238).
According to the trainees, teachers and trainers contribute equally to their train-
ing. The higher the competence of the trainees, the more positively they assess their
trainers and teachers. However, the low correlation coefficient of r = 0.1 is a clear
indicator that profession- and class-specific differences can be expected.
The higher the average competence level of a profession, the more positively the
quality of in-company vocational training is assessed. The mean values vary
between CM = 3.5 (EEB) and CM = 4.0 (IM). If one compares the mean values
of the classes of professions, the positive correlation can also be seen for the
occupations IC, FLSC, J and EIE (Fig. 9.15).
The MA classes form an exception. The higher-performing MA classes rate the
quality of training less favourably (Fig. 9.16).
Training support in all professions is rated moderately positively. The mean values
vary between CM = 3.3 (E-EC) and CW = 3.8 (IM). The higher-performing classes
tend to experience the training slightly more positively. The MA classes also form an
exception here. The higher their competence level, the more critically they assess
their trainers.
Trainer Assessment
The average class values for all occupations vary between CW = 2.9 and CW = 3.7.
When assessing teacher competence, trainees clearly distinguish between the
(weaker) practical competence “Our teachers have a good overview of
organisational reality”—on the one hand—and their (stronger) specialist compe-
tence “Our teachers really know the subject well”—on the other.
Almost half of the E-B trainees (45.8%) attest that their teachers do not have an
overview of organisational reality. If, on the other hand, they are asked about
specialist competence, the values are more positive.
The profession-related mean values vary between CW = 2.9 (IM) and CW = 3.7
(MA).
9.4 Training Qualities and Competence Development 369
Here, too, a clear distinction can be drawn between commercial and technical
professions and service professions. The assessment of teaching quality is particu-
larly heterogeneous among electronics technicians and MAs (Figs. 6.9, 6.10, 9.21
and 9.22).
These values correlate with the pronounced heterogeneity of the competence
levels of the E-EC classes.
Between the professions, the assessment of the learning climate in schools varies
between CW = 2.8 (E-B) and CW = 3.7 (IC).
If one differentiates between classes in the professions, then the learning climate
shows a clear tendency towards critical assessment among the higher performing
trainees (classes) (Figs. 5.13, 5.14, 9.24 and 9.25).
9.4.4 Conclusion
The results of the survey of trainees from eight industrial, technical and service
professions on the quality of their training and how this affects competence devel-
opment can be summarised in four points:
1. The obvious assumption that trainees in professions in which they achieve a
higher level of competence also rate the quality of their training correspondingly
higher than trainees in professions with a (significantly) lower level of compe-
tence does not apply. The quality diagrams of all eight professions hardly differ in
their quantitative extent. This becomes particularly clear when comparing the two
in professional skills. This justifies their primary interest in the acquisition of the
action-guiding knowledge (know-that) on which their action competence is
based. The trainees in the service professions have a higher preference for the
professional expertise based on this at the levels of know-how and know-why.
This also explains the different positive assessments of the school as a learning
venue. An evaluation criterion that is independent of the training professions is
the scope of the Vocational Training Act (BBiG), which is anchored in law in the
German version of dual vocational training. The BBiG regulates the design and
management of in-company vocational training. The trainees, therefore, experi-
ence the vocational school as an institution accompanying their training with
fewer rights and duties.
3. If vocational competence (development) is measured with the methods of com-
petence diagnostics (COMET), it can be seen that school as the learning venue
has the higher potential to impart professional competence also at the levels of
action-explaining and action-reflecting knowledge. This also contributes to a
higher degree of homogeneity in competence profiles. However, only a part of
the teaching staff succeeds in exploiting this potential of the school as a learning
venue. In this case, the expansion of their specialist knowledge through active
participation in COMET projects has a positive effect. As the importance of a
high level of knowledge for the professional competence of trainees is not always
directly apparent from the context of vocational action, it is necessary to reflect on
the connection between knowledge and skills in the vocational education pro-
cesses of the school as the learning venue. For open test items in accordance with
the COMET test procedure, test participants are, therefore, requested to provide
comprehensive and differentiated reasons for their item solutions. Only those
with a high level of competence and knowledge are in a position to assume
responsibility for their professional actions and to weigh up alternative solutions
in a well-founded manner.
4. The summarising results available for the overall sample show the strengths and
weaknesses of dual vocational training. The trainees are very critical of the
structural and content-related weaknesses of the learning venue cooperation.
The federal states have (so far and with the exception of Baden-Württemberg) not
established a final examination for vocational school education; the vocational school
is thus the only type of secondary school without one. In Austria and Switzerland,
the final vocational school examination is considered part of the final examination of
professional competence.
This regulation significantly enhances vocational school learning among trainees
and challenges teachers to design and organise the review of vocational compe-
tence development as an essential element of vocational school learning.
9.5 The Training Potential of Vocational Schools 373
9.5.1 Introduction
The final examinations for trainees serve to check whether vocational training has
been successful. The yardstick is the employability that training is meant to impart. It is assumed that
learning at the two learning venues—the training company and the vocational
school—contributes to the training result (§ 38 BBiG). As no school-based final
examination is planned for vocational education and training in Germany to assess
the success of vocational school-based learning, the question to what extent and in
relation to which skills vocational competence development can be traced back to
school-based learning remains unanswered to this day. Even if a final school
examination is passed, questions remain unanswered concerning the specific contri-
bution of the learning venues to the development of the trainees’ competences.
As the quality of vocational education and training is characterised by the fact that
vocational competence is essentially based on reflected work experience and the
resulting knowledge of the work process (Boreham et al., 2002), it is a particular
challenge for competence diagnostics to examine the significance of learning venues
for competence development. The widespread formula used to explain the “secret”
of dual vocational training to outsiders: “The vocational school imparts the theory
and the company the practical skills” hardly contributes to clarifying the question of
the specific learning potentials of the learning venues—and how these can be used
effectively. Based on the data collected in the COMET projects (test and context
data), the thesis can be substantiated that the competence development of trainees is
essentially shaped by learning at vocational school and that, in contrast, the learning
potential of school as the learning venue is assessed by the trainees as lower than that
of the training companies.
The COMET test procedure is based on three components, which facilitate the
objective, reliable and valid recording and analysis of the competence development
of those to be qualified for a particular profession. (1) The first is the measured
competence development of the trainee/student and their identification with the
profession or the training company, as well as the occupational and organisational
commitment based thereon. (2) Secondly, the personal data allow the test results to
be differentiated according to, for example, the influence of previous schooling,
migration background and other personal characteristics. (3) The third resource is the data
and results of the context survey used to record the attitudes and assessments of the
test persons with regard to their training situation. This investigation is based on
these data.
Of the total of 14 scale descriptions for the context studies, three refer to school-
based learning (Table 7.18): the school learning climate (school climate), teacher
evaluation and teaching quality.
As school-based learning also influences the quality of learning venue coopera-
tion, the two scales for learning venue cooperation are included (Table 7.19).
Learning at school usually takes place in classes. The learning situations in the
classes, according to the largely concurring findings of educational research, are
primarily determined by the teachers. They are the decisive factor for the compe-
tence development of their pupils (cf. Hattie & Yates, 2015, Chap. 10). Students are
aware of this. They are, therefore, occasionally classified as “experts” with good
diagnostic competence (Guldimann & Zutavern, 1992). It is, therefore, now common
practice to interview pupils about their learning situation and, above all, about their
teachers within the framework of quality assurance procedures and in teaching and
learning research. There is a special feature for vocational schools, as the dual
organisation and design of vocational learning requires trainees/students to weigh
up the significance of the two learning venues for their training. A differentiated
questionnaire, which has been continuously evaluated and optimised since 2008 as
part of the COMET project, is used to ask test participants about their training
situation. Results are now available for more than 15 professions. The test partici-
pants’ assessment of the quality of the learning venues and of the competence and
commitment of the teachers and trainers can, therefore, be compared with the results
of the competence surveys. In addition, a survey on the development of identity and
commitment is taken into account, in which approximately 4000 trainees were
involved (cf. Rauner et al. 2015b).
In the COMET projects, the time interval between the start of the project and the first
main test is on average between 6 and 9 months. The test items are developed during
this period (Fig. 9.26).
Fig. 9.26 Phases of familiarisation with the COMET competence and measurement model:
Informing—Rater training/Rating—Analysis of pre-test results and feedback
Teachers play a central role in the test item development process. Designing test
items requires a brief introduction to the COMET competence and measurement
model as well as an examination of the criteria for developing test items.
After only four trial ratings (usually on 1 day), a high degree of agreement is
achieved in the evaluation of the task solution on the basis of all (!) evaluation
criteria. A value of Finn = 0.7 (Sect. 5.6.3) is already a good value. The quality
criterion of interrater reliability is, therefore, fulfilled.
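The Finn coefficient used throughout relates the observed variance of the raters' scores to the variance expected by chance on a k-point scale. A minimal sketch, assuming the common definition with uniform chance variance (k^2 - 1)/12; the ratings below are invented, not COMET data:

```python
# Finn coefficient: interrater agreement as 1 minus the ratio of observed
# rater variance to the variance expected by chance on a k-point scale.
# Assumes the common definition with uniform chance variance (k^2 - 1) / 12;
# the ratings below are invented, not COMET data.

def finn_coefficient(ratings, k):
    """ratings: one list of rater scores (1..k) per rated item."""
    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    observed = sum(sample_var(item) for item in ratings) / len(ratings)
    expected = (k ** 2 - 1) / 12  # variance of a uniform rating on 1..k
    return 1 - observed / expected

# three raters, two items, full agreement on a 4-point scale
print(finn_coefficient([[3, 3, 3], [2, 2, 2]], k=4))  # -> 1.0
```

Full agreement yields 1.0; the value falls towards (and below) 0 as the raters' scores scatter towards what chance alone would produce.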
The first rater training in a very extensive international COMET project (elec-
tronics engineer: Hesse—China) was intended to provide information about the
quality of the rater training. Not only was the development of the rater competence
of the Chinese rater team to be examined, but also whether the Chinese and German
rater teams achieve comparable values. In contrast to the previous national projects,
the rater training was not limited to 1 day, but extended to 3 days. To the surprise of
all concerned, the interrater reliability already reached unexpectedly high values in
the second example (Fig. 9.27).
The fact that the raters of the Chinese and South African electronics project
already reached or exceeded the very high value of Finn = 0.84 in the second trial
rating means that all raters almost completely agreed in their evaluation of the task
solution for all evaluation criteria—and that at the end of the training all three groups
of raters achieved high to very high Finn values.
The first trial rating is typical for international comparison projects: As a rule, the
Finn values are still far apart. It is, therefore, all the more surprising that at the end of
the training—as in these three cases—a Finn score > 0.7 was achieved by all
national rating groups.
As a rule, all rater training participants used the solution space of the test tasks
only for the first two trial ratings. They then internalised the profession-specific
interpretation of the evaluation of an item. This result also explains that the COMET
test procedure manages with a total of only three very similar evaluation scales for all
professional fields.
The rater team of the Chinese project was the source of a big surprise. Nobody in
the German-Chinese project consortium had expected that 30 Chinese teachers
from vocational schools, technical colleges and higher technical schools in the
Beijing region would assess the task solutions of German trainees selected for
the rater training at a very high level of agreement at the second trial rating
(including the rating results of the German project). This result was the first
proof that the COMET method of competence diagnostics for vocational
training can be used to carry out international comparative examinations
without any problems.
The results of two repeat trainings (Beijing, Hesse)—after 1 year—showed
that the once achieved competence of the raters—the new technical under-
standing—is maintained (COMET Vol. III 2011, 107).
Evaluating test and examination tasks, therefore, does not require lengthy further training. In a 1-day training
session, it is possible to convey this ability (Sect. 4.6.1). This can only be explained
by a “Eureka” effect, which is triggered by a spontaneous insight on the part of the
participants in the rater training, which does not require any lengthy justification:
Professional tasks must always be completely solved (“What else?”). If even one of
the solution criteria is not observed, this may entail incalculable risks for the
environment, the company or the employees themselves.
If outsiders are confronted with this method and the values of interrater reliability
it achieves, this usually triggers great astonishment. “I would not have thought it
possible,” said a vocational training expert at an IHK conference at which the results
of an international COMET project were presented.
The COMET test procedure has, therefore, reached the stage of professional
application. Now at the latest, the subject researchers (teachers/trainers)
actively involved in the project are in a position to apply the COMET concept as a
didactic model in their teaching. Thomas Scholz: “The discursive process among the
teaching staff that accompanies the project is just as complex and multi-layered as
the effect of COMET on teaching. Mutually influencing conversations occur at
different levels and in the associated social relationships. Meta-communication is
created between all participants. COMET has initiated a pedagogical-didactic dis-
cussion with us from the very beginning. However, it took another two years until
we understood the COMET concept in all its depth and were able to use it didacti-
cally” (Scholz, 2013, 28).
The subjective importance that the trainees attach to the vocational school and its
teachers for their competence development becomes particularly clear in the context
analyses in the BBiG professions, which refer to learning venue cooperation.
When weighting the importance of school and company as learning venues for
learning a profession, trainees trained in BBiG professions have a clear preference
for the company as a learning venue. This applies in particular to industrial and
technical training professions. In the COMET NRW project (cf. Piening, Frenzel,
Heinemann, & Rauner, 2014), 71.3% of industrial mechanic trainees and 65% of
electronics technicians for industrial engineering “fully agree” and “partly agree”
with the statement: “I learn much more at work than at vocational school”.
Trainees rate the importance of school-based learning as consistently low. A clear
majority of trainees in industrial and technical professions negate the statement
“Vocational school teaching helps me to solve the tasks and problems of
in-company work”. The assessment of the statement “The vocational school lessons
and my everyday work in the company have nothing to do with each other” is
similar. The fit between theoretical and practical learning content is obviously
limited.
It is noticeable that trainees differentiate between the relatively highly rated
technical and methodological competences of their teachers (Fig. 5.18) on the one
hand and their knowledge of company reality on the other. The latter is considered to
be rather low (Fig. 5.19). If one compares the assessment of the learning situation at
the vocational school with that of the companies, one can see that in-company
vocational training is valued much more highly. The fact that trainees can learn a
lot from their trainers is undisputed among those surveyed, irrespective of their
profession. Therefore, they also come to the conclusion that they learn much more at
work than at school (Figs. 9.28 and 9.29).
A clear picture emerges when summarising these assessments of school learning
and teachers by the trainees, who rate the significance of the vocational school and its
teachers for their competence development as rather low. They believe that they
learn significantly more for their profession in the company than in the vocational
school.
If one compares these training quality assessments of vocational school as the
learning venue by the trainees with the results of the competence survey, it can be
seen that the learning situation in the vocational school classes is the decisive
determinant of the development of the trainees’ competence. Almost every second
trainee in Class 5 (Fig. 8.11) to become an electronics technician for industrial
engineering achieves the highest level of competence. In Class 21, on the other
9.6 Teachers and Trainers Discover Their Competences: A “Eureka” Effect and. . . 379
Fig. 9.28 “Our teachers really know the subject well” (ibid., 115)
Fig. 9.29 “Our teachers have a good overview of organisational reality” (ibid., 115)
hand, none of the trainees reaches this level of competence. According to the state of
COMET research, the very pronounced spread of competence development between
classes can be attributed to the teacher factor (cf. Rauner & Piening, 2014).6
Trainees in professions with a high level of competence (IC, FLSC, MA and J)
rate the quality of teaching significantly higher—with mean values between
CW = 3.6 and CW = 3.9—than trainees in industrial and technical occupations
(E-B and IM) with their low competence levels of 27.2 (IM) and 28 (E-B). They rate
the quality of teaching as below average with CW = 2.5.
The assessment of teaching quality correlates with the assessment of teacher
competence. The E-B trainees rate the practice-related competence of their teachers
as below average (CW = 2.6), while the IC trainees rate their teachers as above
average (CW = 4.0).
In the following, a further scale on vocational school learning, the vocational
school learning environment, will be used to examine the thesis of a contradiction
between the empirically proven great influence of vocational school learning on the
one hand and, on the other, the low rating that trainees give to the school as a
learning venue and to its teachers' significance for their vocational competence
development.
The vocational school, as a partner of the training companies in the dual system of
vocational training, is involved in imparting professional competence and in prepar-
ing for the final examinations regulated by the BBiG, in which teachers take part as
examiners. However, in the German dual vocational training system, the results of
school-based learning are not recorded in the final examinations. Therefore, the
vocational school is experienced by the trainees as a learning venue of lesser
importance—as a “junior partner”. This also has an effect on the learning climate.
The evaluation of the statement, “I feel comfortable at school” gives a first
indication of the different perception of school learning.
While electronics technicians specialising in energy and building services engineering
and automotive mechatronics technicians largely feel comfortable at vocational
school (55.5% and 54.8%, respectively), this applies to only 28.4% (!) of
industrial mechanics.
The reasons given by the trainees concern the lack of cooperation in the
learning environment. More than half of the industrial mechanics complain that their
classmates have little consideration for other pupils (51.3%). This assessment is not
shared to the same extent by the other two occupational groups. The vocational
school has a compensatory function for trainees in craft trades. They also perceive
and value it as a learning venue that compensates for the weaknesses of their
in-company vocational training.
6 Also refer to the results of relevant learning research (e.g. that of Hattie & Yates, 2015).
The extent to which teaching plays a role in this is described in more detail below
(Fig. 9.30).
For the industrial mechanics, the factor “teaching disrupted by classmates” turns
out to be an influential quality aspect.
While the risk pupils (trainees who did not exceed the level of nominal competence
in the test) and the pupils with a low or very low level of competence do not
perceive "teaching disrupted by classmates" as a problem, the high-performing
pupils perceive these disruptions as a serious problem.
There are two possible explanations for the paradox of the high learning
potential of VET schools and their low assessment by trainees:
1. When learning within the work process, the trainees experience their competence
development directly, especially in the industrial and technical professions. The
development of their professional skills, which they experience within the work
process, is the yardstick for their assessment of the training quality of the learning
venues. How the acquisition of vocational work process knowledge in the
processes of school learning contributes to the development of professional skills
is not immediately apparent to many trainees. They, therefore, agree with the
statement that although their teachers are professionally competent, they are less
convinced of their knowledge of the realities of work. Only learners with a higher
level of competence are aware that they (can) acquire the action-explaining and
reflecting knowledge of the work process characterising employability, especially
at school.
This example shows that the management of dual vocational education and training
“from a single source” and the equivalence of learning venues leads to a much more
positive attitude among students towards learning in vocational schools: “Satisfac-
tion with training at the surveyed schools for health and nursing [...] is very high
overall. 79 % of respondents are more or less satisfied with their training” (Fischer,
2013, 219).
The teachers at the nursing schools are also rated positively. 71% confirm that
their teachers have a good overview of professional reality and 83% consider them to
be more or less technically competent and up to date (ibid., 222).
It is, therefore, no surprise that learning venue cooperation in dual nursing
training is rated significantly more positively than in vocational training regulated
under the BBiG. “[...] 70 % of the students are therefore of the impression that the
teachers of the technical schools cooperate with the practice instructors and nursing
services in the hospital more or less or completely—this statement does not apply to
only 3 % of the respondents” (ibid., 237). The students of Swiss nursing training rate
the learning venue cooperation “even more positively than the trainees [of the
German vocational schools]” (ibid., 238) (Fig. 9.31).
Renate Fischer concludes that the coordination between technical school and
practical training, which students at technical colleges assessed as positive, and the
good cooperation between teachers and practical instructors (e.g. also in joint
projects) have a "highly beneficial effect on the development of professional identity
and commitment" (ibid., 272).
9.6.4 Conclusion
The competence surveys carried out within the framework of COMET projects in
dual vocational training programmes show very clearly that school as a learning
venue and teachers are decisive determinants of professional competence develop-
ment. This applies above all to achieving the highest level of competence, as
provided for in the learning field concept: “the ability to help shape the world of
Fig. 9.31 Learning venue cooperation, comparison of trainees and students on the statement: “My
practical workplaces and the school coordinate their training”. (ibid., 239)
above all in the polarisation of the levels of competence achieved by the classes
taking part in the tests. Despite comparable profession-specific educational pre-
requisites of trainees and students, part of the classes (of a profession) regularly
reach a (very) high level of competence and another part a (significantly) lower level
of competence. This means that teachers exploit the educational potential of voca-
tional schools to a very different degree.
The quality of learning venue cooperation is of overriding importance for
exploiting the training potential of schools as a learning venue. Using the example
of nursing training in Germany and Switzerland, it has been demonstrated that
the management of dual vocational training “from a single source”—and therefore
the equal participation of vocational schools in dual vocational training—enhances
the quality of their training. This is reflected in the appreciation of vocational
(technical) schools by trainees/students.
A statistical comparison between the test results of the pre-test participants and the
participants of the first main test is possible, however, if both test groups represent
the test population and if comparable test items are used for both tests. If the pre-test
participants are distributed among the training centres participating in the test, then a
comparison of the pre-test results with the results of the first main test can be used to
examine whether and to what extent competence development has taken place.
Eighty-two second- and third-year trainees from two vocational colleges (VC) took
part in the first main test of the COMET Project NRW pilot study for industrial clerks
(cf. Stegemann et al., 2015; Tiemeyer, 2015). Fifty-two trainees from the same VCs
took part in the pre-test. The results are, therefore, not representative of the test
population of the federal state. The comparability of the pre-test and main test
participants is, however, given, as the number of participants in both tests hardly
differs. Both test groups are representative of the industrial clerk trainees at the two
vocational training centres (Fig. 9.32).
The result impressively shows that the competence level of the trainees has
increased significantly in a period of about 6 months between pre-test and the first
main test. The increase in the competence level of the test group is reflected above all
in a significant increase in the proportion of test participants who have reached the
highest competence level (Shaping Competence): from 21.8% in the pre-test to
69.5% in the first main test.
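Whether such a shift is statistically significant can be checked with a standard two-proportion z-test. This is a generic illustration using the reported figures, not the analysis performed in the COMET project itself:

```python
# Two-proportion z-test on the reported shares of trainees at the highest
# competence level: 21.8% of n = 52 in the pre-test vs. 69.5% of n = 82 in
# the first main test. Generic illustration, not the project's own analysis.
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z = two_prop_z(0.218, 52, 0.695, 82)
print(round(z, 2))  # z of about 5.4, far above the 1.96 threshold at the 5% level
```

Under these assumptions the increase is highly significant; the book's own wording ("increased significantly") is consistent with this.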
A similar effect can be seen in the COMET NRW Carpenter project. The high
Finn coefficient reached in the rater training is an indicator that
all raters mastered the COMET competence and measurement model following
Fig. 9.32 Distribution of competence in pre-test and main test for industrial clerks (INK-A)
rater training. A comparison of the pre-test and main test groups is also possible here,
as both test groups from two VET centres were involved in this pilot project.
77% of the test participants reach one of the two upper competence levels during
the first main test. In the pre-test, this was only 51.8%. The decline in the number of
risk students from 30% (pre-test) to 17% in the first main test is particularly marked.
The example of the training of nursing staff at higher technical colleges in Switzer-
land also shows a significant increase in the competence level of students at technical
colleges in the period between the pre-test and the first main test. The proportion of
students who reach the third (highest) competence level has increased significantly,
while the proportion of the risk group has decreased significantly (Fig. 9.33).
These three examples represent a development that has been demonstrated in
almost all COMET projects.
Fig. 9.33 Distribution of competence levels, COMET project nursing training, Switzerland: Pretest 2012 and first main test 2013 (n = 115)
9.6.5 Conclusion
The hypothesis that the active participation of vocational trainers in the development
of test items and their evaluation and optimisation within the framework of a pre-
test—including rater training—has a positive effect on their competence develop-
ment was confirmed.
This form of further training takes place as an implicit learning process, as the
feedback workshops show, in which the project groups, when interpreting the test
results, did not recognise the changed didactic actions of the teachers as a (decisive)
cause for the increase in competence of their students. The fact that vocational
trainers (also) implicitly transfer their specialist knowledge to their pupils/students
was proven in an extensive large-scale study in which 80 teachers/lecturers took part
in the student test (cf. Zhou, Rauner, & Zhao, 2015; Rauner, Piening, & Zhou, 2015
[A + B-Forschungsbericht Nr. 18/2014]).
Thomas Scholz sums up the experiences of the project group, which they
gathered and reflected on during the implementation of the COMET Industrial
Mechanic (Hesse) project, as follows: “With the experiences from the pre- and
the two main tests as well as the development of test tasks, the working groups
approached the design of learning tasks with a holistic solution approach. A
new dimension of task development opened up, tasks that highlighted
COMET’s influence on teaching change. The discussion about methodology
(continued)
and didactics with regard to COMET tasks in the classroom became the focus
of the working groups. The group of industrial mechanics decided to introduce
this new form of learning: the ability to solve tasks according to the COMET
competence model. The introduction of this new learning form, as suggested
by the learning field concept, had an impact on the test results. The more
advanced the new teaching practice is, the better the test results will be”
(Scholz, 2013, 25).
The didactic actions of the teaching staff are characterised by the tension between
their specialist knowledge, which is shaped by their university studies and develops
in their specialist studies on the one hand, and the knowledge of work processes
incorporated into their professional activities on the other (Bergmann, 2006; Fischer
& Rauner, 2002). With the acquisition of the COMET Competence Model, the
professional knowledge of action (the work process knowledge) moves into the
centre of didactic action and the scientific knowledge becomes rather a background
knowledge which retains its significance for the reflection of complex work and
learning situations. The theories and research traditions on which the COMET test
procedure is based (again) prove their fundamental significance in competence
diagnostics:
• Research into the knowledge of work processes (cf. Boreham, Samurçay, &
Fischer, 2002)
• The theory of multiple competence and the associated guiding principle of
holistic problem solving (cf. Connell, Sheridan, & Gardner, 2003; Rauner,
2004b; Freund, 2011)
• The novice-expert paradigm and the associated insight that one is always a
beginner when learning any profession and that the path to becoming an expert
follows the rule that one grows with one’s tasks (cf. Dreyfus, 1987; Fischer,
Girmes-Stein, Kordes, & Peukert, 1995)
• The theories of “situated learning” (cf. Lave & Wenger, 1991)
• The concept of practical knowledge (cf. Holzkamp, 1985; Rauner, 2004b)
• The theory of “developmental tasks” (cf. Gruschka, 1985; Havighurst, 1972) and
the related concept of paradigmatic work situations (cf. Benner, 1994)
• The theory of “Cognitive Apprenticeship” (cf. Collins, Brown, & Newman,
1989)
• The “Epistemology of Practice” (cf. Schoen, 1983)
Teachers/trainers who actively participate in the COMET projects as test item developers and as raters are able to assess the professional competence of trainees and students with a high level of interrater reliability after just one day of rater training.
Teachers and trainers change their understanding of the subject and their didactic actions in the sense of the COMET competence model by participating in the development of test and learning tasks, in rater training and in the rating of task solutions, as well as by reflecting on and interpreting the test results with their pupils, in their subject groups and with scientific support. This change in thinking and acting does not take place as laborious additional education, but rather casually, as a eureka effect and to one's own surprise: "Oh of course, it's as clear as day" or "I have the feeling that I've been in charge for years". The new or expanded understanding of the subject is reflected in the development of learners' competences. Above all, their competence profiles are an expression of the new quality of training. They challenge teachers and trainers and make it easier for them to reflect on and change the strengths and weaknesses of their own didactic actions. In a team, this creates a very effective form of learning from each other.
Chapter 10
Measuring Professional Competence of Teachers of Professional Disciplines (TPD)
The Conference of the Ministers of Education and Cultural Affairs of the Federal
States of Germany (KMK) published standards for teacher training (report of the
working group) in 2004. As an introduction, Terhart explains: “An [...] assessment
of the impact and effectiveness of teacher training based on competences and
standards is [...] the prerequisite for being able to introduce justified improvements
if necessary” (KMK, 2004a, 3). His observation that the professional competence of teachers ultimately depends on the quality of their teaching is also confirmed by the results of the project “Competence Diagnostics in Vocational Education and Training” (COMET). In addition to the teacher factor, the previous schooling of trainees or students at technical colleges and the in-company learning environment in dual vocational training have proven to be further determinants of professional competence development (COMET Vol. III, Chap. 8).
When measuring the professional competence of vocational school teachers, a distinction must be made between two aspects. It has proven useful to encourage teachers to take part in their students’ tests; this has been tested both in PISA studies and in the COMET project. Naturally, this alone is not enough to measure teacher competence, which requires the development of a dedicated competence and measurement model. The linchpin is the definition and operationalisation of the requirement dimensions as well as the justification of competence levels (cf. KMK, 2004b, 3).
An extensive quantitative study of occupational competence development revealed extraordinarily large and unanticipated differences in competence between the 40 test classes (Hesse/Bremen) of electronics technician trainees and technical college students, 31 of which were from Hesse. Figure 9.6 shows this for the third competence level (“Holistic Shaping Competence”).
The heterogeneity of competence development within the test groups (classes)
turned out to be just as unexpectedly large (Fig. 10.1).
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 389
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_10
Fig. 10.1 Percentage of test participants (Hesse) who reach the level of “Holistic Design Competence” per class: E-B = Electronics technician for industrial engineering, E-EG = Electronics technician for energy and building technology, F-TZ = Technical college students, part-time
measured. If this succeeds, it would be a major step forward for the quality
development of vocational education and training.
The aim of gaining deeper insights into the professional competence (development) of vocational school teachers with the methods of large-scale competence diagnostics can be justified on the following grounds:
(1) The results of empirical vocational training research, according to which only very limited success has been achieved to date in enabling vocational school teachers to implement the learning field concept, agreed by the KMK in 1996, in the development of framework curricula for vocational school programmes (cf. Przygodda & Bauer, 2004, 75 f.; Lehberger, 2013, Chap. 2)
(2) The high degree of heterogeneity found in the vocational competence surveys of trainees/students between test groups of comparable courses of study (see above)
(3) The large proportion of trainees and students who, at the end of their training, do not have the ability to complete professional tasks in a professional and holistic manner
In justifying the KMK standards for teacher training, the educational sciences are emphasised as the essential basis for the acquisition of teacher competences. These are above all the educational and didactic segments of the studies and the competences based thereon (KMK, 2004a, 4). From the perspective of general education, this restriction can possibly be justified, as, particularly in the tradition of humanistic pedagogy with its paradigm of exemplarity, the meaning of the contents of teaching and learning was reduced to the function of a medium in the educational process (cf. Weniger, 1957). In vocational education and training, on the other hand, training content is of constitutive importance. The job descriptions and training regulations prescribed by the Vocational Training Act (BBiG) specify the knowledge, skills and abilities to be mastered in an occupation. This applies in particular to the examination requirements, which form the basis for the examination of employability in the individual occupations. Skills and knowledge which are necessary (!) for the exercise of a profession are examined separately, as the right to exercise a profession is not infrequently acquired with a qualification. Germanischer Lloyd, for example, verifies the mastery of various welding techniques by industrial mechanics apprentices (specialising in ship mechanics) in accordance with its own quality standards. Whenever safety-, environmental- and health-related training contents are involved, vocational school teachers and trainers are particularly challenged to communicate the corresponding training contents and to check their safe mastery. It follows that a competence model for vocational school teachers must have a content dimension.
If one looks at the standards for initial and continuing teacher training, the descriptions of the skills that teachers must acquire are found to be largely identical, albeit differently structured. It is striking that, in the majority of training concepts, the dimension of specialist knowledge is often ignored. Fritz Oser (1997), for example, proposes 88 “standards” for teacher training: 12 higher-level training areas (“standard groups”), which are broken down into 88 detailed training objectives (standards). Remarkably, the content dimension of competence is missing from this compilation. In the training guidelines of the “National Board for Professional Teaching Standards” (NBPTS) quoted by Oser, on the other hand, one of the “five ideals for the collection and verification of teaching standards”, under (b), is “knowledge of the content that is learnt...” (cited from Oser, 1997, 28). In another current compilation of professionalisation standards for teacher training, the subject contents are even given special emphasis. The “Professionalisation Standards of the Pedagogical University of Central Switzerland” read:
1. The teacher has specialist knowledge, understands the contents, structures and central research methods of their subject areas and can create learning situations which make these subject-specific aspects significant for the learners (Professionalisation Standards of the Pedagogical University of Central Switzerland, 2011).
The analysis by Andreas Frey (2006) and Johannes König (2010) of methods and
instruments for the diagnosis of professional competences of teachers confirms that
competence diagnostics (teachers) is primarily aimed at recording interdisciplinary
pedagogical-didactic competences. Frey summarises his findings as follows: “The list [of 47 methods and instruments] shows that the social, methodological and personal competence classes are already well covered by instruments. However, the specialist competence class, in particular the various specialist disciplines, is insufficiently documented in the specialist literature. In this case there is a need for scientific research and development” (Frey, 2006, 42).¹
The project of the International Association for the Evaluation of Educational Achievement (IEA) to measure the competence of mathematics teachers (TEDS-M) focused on the subject-related and didactic competence of teachers (cf. Blömeke & Suhl, 2011). However, the format of the standard-based test items and a supplementary questionnaire limit the scope of this test procedure. The “professional competence” of teachers can, therefore, only be recorded to a very limited extent.
¹ Cf. also König (2010).
10.2 Fields of Action and Occupation for Vocational School Teachers
Oser, Gian, Curcio and Düggeli have developed and psychometrically evaluated a method for measuring competence in teacher training. Methodologically, the concept is based on situation films (multi-perspective shots and film vignettes generated from them) and an expert rating of the competence profiles of teachers (Oser, Curcio, & Düggeli, 2007).² Oser and his research group have good reasons for opting for this methodological middle course between direct observation and self-evaluation procedures, as neither procedure has yet led to the desired results. The method of direct observation is ruled out for reasons of test economy alone. Even if it were possible to develop a reliable rating procedure, it would not cover a decisive dimension of competence: the knowledge on which teacher behaviour is based. And even if one assumes that the action-guiding knowledge can be deduced from the observable action, this method leaves open to what extent teachers can reflect on their actions in a well-founded way or whether they act more intuitively and on the basis of practised “skills”. If one wants to capture teacher competence at the level of reflected professional knowledge and action, the observers cannot avoid reflecting on the observed behaviour together with the observed teachers.
There are narrow limits to decoding the competences incorporated in observable behaviour as a domain-specific cognitive disposition. This limitation also applies to the “advocatory procedure” proposed by Oser and his team, which is likewise based on observation. In forms of teacher training based on video documents, joint reflection with those observed is, therefore, an essential element: the observed teacher has the opportunity to explain why he or she behaved in this way and not differently in specific situations (video feedback). Without such reflection on the visual documents with the actors, video-based observation methods for measuring competence have only a limited reach.
The Oser approach of identifying competence profiles at a medium level of abstraction is interesting because it avoids merely determining competence levels. The profiles were developed and identified using the Delphi method. An example is competence profile A 2.3 (A 2, “forms of mediation”, is one of nine standard subgroups): “The teacher organises different types of group teaching...” (ibid., 16).
With the concept of competence profiles, it is possible to approach the quality of
teacher competence to be described and recorded. The project shows the empirical
effort involved in developing this method for measuring vocational school teacher
competence.
² The project was carried out with 793 teachers from vocational schools. It is, however, not limited to vocational training.
Here, Oser et al. rightly point to a very sensitive issue in competence diagnostics: the high validity of measurement procedures or test items can only be confirmed if one can prove how teacher competence affects the development of pupils’ competences. “However, whether or not the quality characteristics of a standard are actually recorded with the present diagnostic instrument must be checked with the aid of cross-validation” (ibid., 19). The planned procedure of an expert rating exhausts the possibilities offered by this approach as a whole. Ultimately, evidence must be provided as to whether pupil performance is attributable to the competence of their teachers.
Fig. 10.2 Example of an individual intelligence profile as well as two different intelligence profiles
and the different spaces for competence development they define (Connell et al., 2003, 138 and
140 f)
For each of these four training and task areas, a distinction is made between the
knowledge to be acquired in the course of study and the skills to be acquired or
mastered in teacher training and teacher work.
If the training and task areas (1) and (4) are combined into one field of action—“participation in school development” (see KMK, 2004b, 3; item 5)—and if it is assumed, in line with the concept of complete action, that teaching also includes the evaluation and review of learning processes and the assessment of student performance ((2) and (3))³, then two fields of action remain.
For vocational school teachers, the following four task fields can be justified.
The central task of every teacher at vocational schools is the design and evaluation of vocational training processes and the individual assessment of learners (cf. KMK, 2004b, 3). The COMET competence model is of particular importance in this context. Cooperation with other teachers (e.g. in the subject group) and with out-of-school cooperation partners (e.g. trainers) is, therefore, the rule.
Study labs and workshops are of particular didactic relevance for the design of
vocational training processes. Their quality is a decisive factor in the implementation
of “action-oriented” forms of learning. The study labs and their equipment are,
therefore, often a “trademark” of specialist departments or vocational schools.
How study labs and workshops can be designed under the conditions of technical
³ Teachers carry out their assessment and advisory duties in the classroom [...] (ibid., 3, No. 3).
With the shift of operational tasks of school development to the level of vocational training institutions, the participation of vocational school teachers in quality development and assurance has become one of their original tasks. The extraordinarily large diversity of professions and vocational training programmes and the regionally specific embedding of vocational training in economic structures call for a change from vocational schools to regional competence centres (BLK, 2002). It is foreseeable that the emphasis will increasingly shift to continuing vocational education and training. The international development towards “further education colleges” and “community colleges” (USA) is already further advanced here. In this context, the traditional concepts of school development are losing importance. The transformation of vocational schools into competence centres requires the development of new forms of organisational development and their institutionalisation alongside universities and general upper secondary education.
When developing test items for a project “Measuring the professional compe-
tence of vocational school teachers”, the four action fields should be represented by
at least one complex test item each.
In the teacher training discussion, reference is made to the different dimensions or sub-competences of professional teachers. This concerns the sub-competences of teaching, educating, counselling, evaluating and innovating, as already set out by the German Education Council (1970) and reformulated by the KMK Teacher Training Commission (1999). If these dimensions of teacher competence are “taught” in the form of modules as self-contained skills during the phase of familiarisation with teaching activity in seminars, there is a risk that these sub-competences will be misunderstood as mutually isolated characteristics of teacher action. For the standards of teacher training, this means that competence in counselling, teaching, education, etc. can only be demonstrated in the context of the domain-specific design and evaluation of (training) processes, something to which the KMK expressly refers in its standards: the educational task at school is closely linked to teaching, and assessment and counselling are likewise not isolated fields of action, but tasks that are integrated into teaching (KMK, 2004b, 3).
In teacher training, structured according to disciplines and modules, disciplinary
knowledge can be imparted and tested in exams. On the other hand, the professional
competence of teachers only becomes apparent in the domain-specific concrete fields
of action (see above).
A special feature of vocational education and training is its overriding guiding
objective: “Empowerment to help shape the world of work in a socially, ecologically and economically responsible manner” (KMK, 1991, 196). The central idea of
design-oriented vocational training has far-reaching consequences for the
professionalisation of vocational school teachers and the realisation of learning
environments in the institutions involved in vocational training: vocational schools,
training companies and inter-company training centres.
For vocational education and training, this means that its contents cannot be
obtained by means of the didactic transformation of scientific contents. Professional
work process knowledge has its own quality. When implementing an electrical lighting system in a residential or office space, a butcher’s shop or the vegetable department of a food discount store, the selection and arrangement of lighting fixtures in terms of brightness and colour temperature is extremely varied, taking into account the respective standards for workplaces, sales rooms, etc., and not least aesthetic criteria as well as ease of operation and repair. The decision for a low-voltage or normal-voltage solution is also a question of weighing
competing criteria. If the classes were propaedeutically geared to the basics of
electrical engineering, the focus would be on switching logic and the functionality
of the lighting fixtures. The content “electric lighting” would become one of the
applied natural sciences. The real world of work with its professional require-
ments—the actual contents of vocational education and training—as well as the
central idea of co-designing the world of work would then be excluded. In the world
of professional work, professionals are always faced with the task of exploiting the
respective scope for solutions and design—“with social and ecological responsibil-
ity” (KMK, 1999, 3, 8).
The implications of the learning field concept based on this central idea are
obvious for the fields of action of vocational school teachers. The quality of the
learning environments for vocational education and training must be measured by
whether they are designed according to the concept of the holistic solution of
vocational tasks. Donald Schoen, in his epistemological study “The Reflective Practitioner”, corresponding to the category of practical intelligence, has demonstrated the fundamental importance of practical competence and professional artistry as an independent competence not guided by theoretical (declarative) knowledge. At the same time, this leads to a critical evaluation of academic (disciplinary) knowledge as the cognitive prerequisite for competent action. Schoen summarises his research results in the insight:
I have become convinced that universities are not devoted to the production and distribution
of fundamental knowledge in general. They are institutions committed, for the most part, to a
particular epistemology, a view of knowledge that fosters selective inattention to practical
competence and professional artistry (Schoen, 1983, p. VII).
In this context, he cites from a study of medical expertise: “85% of the problems a doctor sees in his office are not in the book”. Schoen sees the deeper cause of the education system’s inability to impart knowledge that establishes vocational competences in disciplinary, subject-systematic knowledge.
The systematic knowledge base of a profession is thought to have four essential properties. It
is specialized, firmly bounded, scientific and standardized. This last point is particularly
10.3 The “TPD” (Vocational School Teacher) Competence Model
Fig. 10.3 On the relationship between the objectives and theories of vocational education and
training, the initial and continuing training of TPD and the design, evaluation and measurement of
their competences
The central task of teachers at vocational schools is to empower pupils and students to help shape the world of work and society in a socially and ecologically responsible manner (KMK, 1991/1999). The guiding ideas and objectives of vocational education and training as well as of teacher training and teacher activity form the explanatory framework for a “TPD” competence model (Figs. 10.3 and 10.4).
The development of a “TPD” competence and measurement model is needed to mediate between the guiding principles, goals and theories of vocational education and training and teacher training, and to develop test tasks and describe their solution spaces. The didactic relevance of the competence model can be seen above all in the fact that it is also suitable as a guide—among other things—for TPD training. The “TPD” competence model comprises the usual dimensions of competence modelling
Functional Competence
Procedural Competence
Vocational school teachers with procedural competence are in a position to apply their vocational knowledge in vocational training practice in a manner appropriate to the situation, to reflect on it and to develop it further. A characteristic feature of this level of competence is the ability to design and organise vocational training processes under the real conditions of school or training practice. These teachers have a vocational-pedagogical work concept. They are part of professional group practice.
Shaping Competence
Building on the previous levels, the highest level of competence represents the ability to solve vocational-pedagogical tasks holistically (completely). This includes the criteria of socially compatible teaching as well as the ability to embed vocational training processes socio-culturally. The level of holistic competence includes the ability, with a certain amount of creativity, to weigh up the various demands placed on the holistic task solution in a situation-specific way: for example, between the requirements of the curriculum, the available resources and the greatest possible individual support for learners. The teacher is familiar with the relevant professional and pedagogical-didactic innovations in their field.
Nominal Competence
teacher training and activities. There is, therefore, an urgent need for further training for teachers whose cognitive domain-specific performance disposition (competence) is below the first competence level (functional competence) and who nevertheless work as teachers. The scope and content of such training can be identified relatively precisely using the COMET test procedure.
When justifying the behavioural dimension of the competence model, reference can be made to the justification of the COMET competence model. The concept of complete working and learning action applies to teacher action in particular. Separating out the review of training success, as provided for by the Vocational Training Act in the form of intermediate and final examinations (or Part 1 and Part 2 of the examination), impairs the professional design of the feedback structure and practice for the vocational school as a learning venue (cf. COMET Vol. 3, 222 et seq.). The operationalisation of the behavioural dimension, therefore, includes the examination of competence development during training.
The behavioural dimension of the competence and measurement model takes up the concept of “complete tasks” from occupational research (Ulich, 1994, 168)⁴:
⁴ Ulich refers to Hellpach (1922), Tomaszewski (1981), Hacker (1986) and Volpert (1987).
10.4 The Measurement Model
Fig. 10.5 The professional competence of teachers with professional disciplines: levels,
sub-competences (criteria) and dimensions
procedure with which the conceptual planning competence of teachers can be measured within the framework of large-scale projects (rating scale A). For the evaluation of teaching, a modified variant of the measurement model must be developed with which both the teaching design and the teaching itself can be evaluated—for example, within the framework of demonstration lessons (rating scale B).
The contextual reference point for the design of vocational learning processes is the characteristic vocational work processes/tasks and the work process knowledge incorporated therein. This knowledge has an objective component, given by the technical/scientific connections, as well as a pronounced subjective component, given by the action-guiding, action-explaining and action-reflecting knowledge (cf. Lehberger, 2013). A particular challenge for teachers at vocational schools is the distinction between long-lived structural knowledge on the one hand and, on the other, the “surface knowledge” found at the surface of technical-economic development. The decision as to whether professional knowledge has to be acquired in advance or whether what matters is the ability to acquire this knowledge situationally using the corresponding media requires a high level of professional competence from teachers.
• Are the professional contexts presented correctly?
• Does the solution or design space correspond to professional reality?
• Is the objective and subjective knowledge of the work process taken into account?
• Is the functionality of the task solution adequately challenged?
• Is a distinction made between appropriation and research when dealing with
professional knowledge?
The concept of professional action competence and the professional profiles defined
in the training regulations require a targeted approach in the selection and application
of forms of learning and mediation. For example, the acquisition of safety-related
skills requires different forms of teaching and learning than the more general aspects
of vocational education and training. Equally important is the consideration of the
10.4.8 Sustainability
Teaching and training always aim at the sustainable acquisition of skills. This is most likely achieved through a high degree of self-organised learning and when learning is accompanied by strong feedback (cf. Hattie & Yates, 2015, 61 ff.). In project-based learning, the success of the project, the presentation of the results and the experience that a teaching project has “achieved” something are decisive for the acquisition of lasting knowledge as well as of the basic skills that underpin the ability to act in a variety of situations.
• Is superficial learning of professional work/knowledge (Know That) avoided?
• Is the aspect of “prospectivity” (future possibilities of professional skilled work)
taken into account?
• Is competence regarded as cognitive disposition (cognitive potential)—and not
only as a qualification requirement?
• Are forms of valid and reliable quality assurance used?
• Is the aspect of developing professional identity taken into account?
10.4.9 Efficiency
The optimal use of resources is a particular challenge for the design and organisation of vocational education and training. This concerns, for example, the equipment and use of the study labs. Cooperation between teachers, and between teachers and trainers, offers the possibility of increasing not only the quality of teaching and training but also the efficiency of its planning, implementation and evaluation.
• Is the time and effort required for the preparation of the teaching project
appropriate?
• Are the opportunities for teamwork used?
• Are the individual learning outcomes (competence development) achieved by the learners verified?
In vocational education and training, the teaching and training organisation places
particularly high demands on teachers and trainers. This concerns above all the
interaction between theoretical and practical learning in dual vocational training and
the organisation and design of practicums in vocational and technical schools. The
educational contents must be coordinated with each other, and joint projects require
a high degree of coordination. With the introduction of learning fields in vocational
schools, the demands on cooperation between teachers have increased significantly.
When classes are formed, it must be decided whether company-specific or mixed
classes are to be established in dual vocational training.
• Are the premises and equipment resources of the school used appropriately?
• Are the opportunities for learning venue cooperation exploited?
• Are the opportunities for cooperation between teachers used?
• Is Internet access secured for teachers and learners/students?
• Is feedback on learning outcomes adequately established?
Social compatibility in teaching refers above all to humane work design, health protection and the social aspects of teacher activity that extend beyond the immediate professional work context (e.g. dealing with the varied interests of school management, the education administration, parents, companies and trainees).
• To what extent does the didactic action of the teachers (planning, teaching,
follow-up of the lessons) correspond to the criteria of humane work design?
• Are aspects of health protection and safety at work (for teachers and learners)
taken into account?
• Is the aspect of creating a good learning climate considered?
• Is handling of disturbances and conflicts (organisation, school, pupils, col-
leagues) taken into account?
• Does the teaching team consider lesson planning and design as a “common
cause”?
Teaching is increasingly confronted with questions of the cultural and social context
of vocational training. On the one hand, this concerns the family situation of pupils
and trainees (e.g. single parents) and the economic situation (e.g. poverty) of
learners. The migration background of pupils and students (language, social
norms, religion, etc.) is a central aspect of teaching, especially in cities and
conurbations.
• Are the anthropogenic and socio-cultural preconditions of the participants in the
lessons taken into account in lesson planning?
• Are the circumstances of the social environment taken into account?
• To what extent is the currently expected role behaviour (learning guide, moder-
ator, consultant, organiser, role model) taken into account?
• To what extent is the potential for conflict arising from the learners’ socio-cultural
background taken into account?
• Is the economic situation of the learners taken into account?
10.4.13 Creativity
Since this test procedure measures not teacher behaviour but teacher competence
as a cognitive disposition, the test items for the action field of teaching refer to
the conceptualisation of teaching as it takes place in practice in the form of lesson
preparation. It remains an open question whether teachers with a high level of competence
in lesson planning also have a high level of competence in teaching. The clarification
of this connection requires a special empirical investigation. This restriction does not
apply to the action fields “Development of educational programmes” and “Planning,
development and design of the learning environment”, as in these tasks the planning
activities determine the quality of the result.
Each test item comprises:
The maximum processing time for test items is 180 minutes. The test items are
designed in such a way that this time is sufficient to process the items without time
pressure.
412 10 Measuring Professional Competence of Teachers of Professional Disciplines. . .

Within the framework of a pilot project with a group of student teachers from the
professional fields of electrical and metal engineering, two test tasks were used, one
each for the action field of teaching and the action field of designing learning
environments.
Sufficient time was available to process a test item. The prescribed time frame of
3 h proved to be appropriate. Double rating of the task solutions was carried out by
two experienced technical managers with extensive experience in rating. The very
high degree of agreement of the ratings allowed a precise recording of the compe-
tence development of the test participants.
The most important result of this pilot study was certainly the willingness of the
student teachers to take part in this test. The development of the test tasks for
teachers of different professional disciplines was relatively simple in that the voca-
tional fields of action for teachers of all professional disciplines are the same (see
above). The didactic actions of the teachers of different professional disciplines,
therefore, do not differ in the structure of the actions, but in their content. All
professions, for example, are concerned with the design of study labs (see the third
field of action) as “learning venues” for experimental and action-oriented learning.
The occupation-specific teaching and learning contents, however, differ. This can
easily be seen in the example of the study lab to be set up. Herein lies the
fundamental difference from competence diagnostics for professionals in different
occupations, whose fields of action differ.
Comparable overriding pedagogical-didactic criteria apply to the study lab to be
set up (Fig. 10.6) for metal or electrical professions. This also applies to the action
field of teaching. This is of great advantage for the execution of tests and above all
for the interdisciplinary comparability of the results.
Test Results
Fig. 10.6 Example of a COMET test task for vocational school teachers
model were first evaluated didactically. This was based on an analysis of the
vocational tasks and fields of action of vocational school teachers using the method
of expert workshops for professionals (Kleiner et al., 2002; Spöttl, 2006). The fields
of action on which the competence model is based were confirmed, but their content
was further differentiated (Zhao & Zhuang, 2012). With reference to this result, the
complex test tasks were modified without changing their core (Fig. 10.8).
10.6 State of Research 415
Pretest (China)
Main Test
The test was attended by 321 teachers of metal technology and automotive service
(technology) from 35 vocational training centres—a representative selection for
China. The number of raters was increased to 13.
The test results proved to be very informative (Fig. 10.9).
By far the highest level of competence in China is held by lecturers at technical
colleges (Technicians Colleges). Nearly 85% of these teachers achieve the highest
level of competence. The competence level of teachers at vocational colleges is
significantly lower, but still high: 61% reach the second level of competence. By
contrast, the competence level of teachers in the professional branch at high schools
(vocational schools) is very low: 43% have no professional competence.

Fig. 10.9 (Total) Distribution of competences of TPD in metal and automotive service
For the first time, the competence profiles provide a very precise picture of the
competences of Chinese vocational school teachers in the various vocational training
programmes (Fig. 10.10).
Test Reliability
When verifying test reliability, values above 0.5 are considered acceptable and
values of 0.9 and higher very high. The internal consistency achieved in the
psychometric evaluation of the competence and measurement model is 0.983, and
the split-half reliability is 0.974 (ibid., 17). High values were also achieved in the
verification of empirical validity (Fig. 10.11 and Table 10.2).
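The two coefficients reported here can be reproduced in a few lines. The following sketch is purely illustrative (an invented score matrix, not the study's data); it assumes the familiar Cronbach's alpha as the internal-consistency measure and a Spearman-Brown-corrected odd/even split for the split-half value:

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal consistency for a (persons x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

def split_half(scores):
    """Odd/even split-half reliability with Spearman-Brown correction."""
    scores = np.asarray(scores, dtype=float)
    odd = scores[:, 0::2].sum(axis=1)
    even = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Invented ratings: 3 test persons, 4 items
scores = [[1, 2, 1, 2], [2, 3, 2, 3], [3, 4, 3, 4]]
print(cronbach_alpha(scores), split_half(scores))
```

Both coefficients approach 1.0 as the items rank the test persons consistently, which is what the reported values of 0.983 and 0.974 indicate for the COMET instrument.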
Referring to COMET teachers’ professional model, the assessment model and its opera-
tional definition on professional competence, we construct a basic factor model, i.e., a first-
order 9-factor model consisting of 9 indexes and 45 items (as shown in Fig. 9.10) (. . .) In
summary, these results show that the professional competence test exhibited high empirical
validity, discriminating teachers with exceptional skills from those with average skills
(ibid., 22).
Fig. 10.10 Competence profiles of Chinese vocational school teachers, by training courses
Fig. 10.11 Basic data model for TPD professional competence. (Source: Own compilation)
With the participation of more than 100 teachers in student tests (electronics
technicians and automotive mechatronics technicians) within the framework of
Chinese COMET projects (cf. Zhuang & Li, 2015; Zhao, Rauner, & Zhou, 2015;
Zhao, 2015), a new quality in teacher training was achieved, according to the
assessment of the test participants and their school principals. The measured com-
petence development of the teachers in the form of competence profiles is of very
high diagnostic significance with regard to the technical understanding underlying
the teaching activity (Figs. 10.9 and 10.10). If teachers take part in student tests, in
rater training and in the rating of student solutions or if they use the COMET
competence and measurement model as a didactic instrument for the design and
evaluation of teaching, then they acquire the concept of the complete (holistic)
solution of professional tasks in a relatively short time. This has a formative effect
on the design and organisation of vocational training processes.
On the basis of this recognition that teachers/lecturers transfer their professional
understanding and problem-solving patterns to their students, it is now possible to
ascertain the professional competence and problem-solving patterns of teachers/
lecturers in VET from the competence profiles of their students.
10.7 Evaluation of Demonstration Lessons in the Second Phase of Training. . . 419
As part of the state examination, the candidate (TPD-C) prepares a teaching draft
for each demonstration lesson in the vocational specialisation as well as in a general
subject or a further focal point of a vocational specialisation, and justifies the
embedding of these lessons in the respective educational programme. In dual
education programmes, the aspect of learning venue cooperation should be taken
into account. The examination regulations limit the scope of these elaborations,
since the aim is a realistic examination effort that can be extended to a justifiable
extent under the framework conditions of an examination.
The lesson plans should be sent to the members of the examination board for
evaluation a few days before the demonstration lesson. According to the examina-
tion procedure outlined here, the examiners evaluate the lesson draft on the basis of
the evaluation sheet variant A (Appendix B). A double rating (two examiners) is
useful here. They create a group rating on the basis of their individual ratings. This
rater practice contributes to the fact that ultimately high to very high values of the
agreement (interrater reliability) are reached. The marked items on the rating scale
are only used during class observation.
Two rating scales are used for the evaluation of teaching: the evaluation sheet variant
A used for the evaluation of the teaching design and the evaluation sheet variant B,
which consists of the rating scale A and modified rating items. Furthermore, the
impression of the observed lesson facilitates the correction of evaluations of the
teaching project on the basis of the lesson plan.
The main focus of class observation lies on the evaluation of the social-
communicative competence of the TPD-C. The reflections of the Kollegiums des
Studienseminars für das Lehramt an Berufskollegs [collegium of the study seminar
for the teaching profession at vocational colleges] in Hagen (NRW) on “character-
istics of good teaching” have been incorporated into the extension of the competence
model by an essential component: the social-communicative competence of the teachers.
The examination board evaluates the examination performance on the basis of its
ratings (evaluation sheets variants B and C); the examiners then agree on a group rating.
The “weight” with which the individual examination sections are included in an
overall assessment of the second state examination is laid down in the examination
regulations (Fig. 10.12).
Fig. 10.12 COMET rating for the assessment of the performance of teacher candidates (TPD-C) in
the context of the second state examination
The heads of the study seminar for the teaching profession at vocational colleges in
Hagen (NRW) have named characteristics and items for successful teaching in a
vocational pedagogical discussion process with reference to the relevant vocational
pedagogical research as well as their extensive teaching and training experiences,
from which a model for the sub-competence “social-communicative competence of
teachers” was developed. The large number of heads of department involved and the
scope of this discussion process form the basis for the only possible methodological
validation of this model in the form of discursive validity (cf. Kelle, Kluge, & Prein,
1993, 49 ff.; Kleemann et al., 2009, 49).
In a second step, the reliability of the rating scale and the competence criteria was
examined on the basis of a two-stage rating training. The rating was based on two
video recordings of demonstration lessons. After the individual rating of the first
video recording, the rating groups agreed on a group rating. It was expected that the
values of interrater reliability achieved in the rating of the second video recording
would increase. The results of this rating procedure yielded high interrater-reliability
values for both groups. With this extension of the COMET model for recording
teacher competence (TPD), a set of instruments is available both for competence
diagnostics and for teacher training and further education, which has the potential to
increase the quality of the didactic actions of these teachers.
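The interrater-reliability values mentioned above can be computed with standard agreement coefficients. As one simple option for ratings on a fixed scale (chosen here for illustration; the text does not prescribe a particular coefficient), Finn's r compares the observed disagreement between raters with the disagreement expected under purely random rating:

```python
import numpy as np

def finn_r(ratings, scale_points):
    """Finn's agreement coefficient for an (items x raters) matrix of
    ratings on a 1..scale_points scale: 1 = perfect agreement, 0 = chance."""
    ratings = np.asarray(ratings, dtype=float)
    observed = ratings.var(axis=1, ddof=1).mean()   # mean rater variance per item
    chance = (scale_points ** 2 - 1) / 12.0         # variance of a uniform random rating
    return 1.0 - observed / chance

# Invented example: 4 rating items, 2 raters, 4-point scale
ratings = [[2, 2], [3, 3], [1, 2], [4, 4]]
print(round(finn_r(ratings, scale_points=4), 3))   # prints 0.9
```

A group rating, as used in the procedure described above, drives the observed per-item variance towards zero and thus pushes the coefficient towards 1.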
10.9 Outlook
curriculum or national standards for the training of vocational school teachers. The
international connectivity of the COMET competence model for vocational school
teachers was demonstrated with the project described above.
The psychometric evaluation of the competence model was a decisive step in the
research process “Competence Diagnostics for Vocational School Teachers”. The
concept of open complex test tasks includes high demands on the test methodology.
The COMET project was able to show which psychometric evaluation methods are
suitable for this research (Martens & Rost, 2009, 96 ff.; Erdwien & Martens, 2009,
62 ff.; Haasler & Erdwien, 2009, 142 ff.).
In its introductory standards for teacher training, the KMK rightly emphasises: “The
professional quality of teachers is determined by the quality of their teaching”
(KMK, 2004b, 3). This fundamental insight requires an exploration of this connec-
tion. Measuring teacher competence with the methods of large-scale competence
diagnostics, and thereby reducing the abilities of teachers to a domain-specific
cognitive disposition (e.g. to lesson planning), is meaningful only if the relationship
between the measured teacher competence and the quality of teaching can be
empirically proven. Only then is the measured level of competence an indicator of
the vocational competence of teachers. An external criterion for checking the
validity of the content of the test items in teaching is the competence development
of the pupils. Once it has been empirically shown that the competence profiles of the
teachers correspond to a considerable degree with those of their pupils, and that the
competence profiles of the pupils can, therefore, be traced back to the problem-solving
patterns of their teachers, the thesis "Good teachers train competent pupils" becomes
highly plausible (Zhao, 2015, 443).
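The correspondence between teacher and pupil competence profiles on which this thesis rests can be quantified, for example, as a correlation across the competence criteria. A minimal sketch with invented profile values (both the number of criteria and the scores are illustrative assumptions, not data from the study):

```python
import numpy as np

# Invented profile scores, one value per competence criterion
teacher_profile = np.array([3.0, 2.0, 2.0, 1.0, 3.0, 2.0, 1.0, 2.0])
pupil_profile   = np.array([2.0, 2.0, 1.0, 1.0, 3.0, 2.0, 1.0, 2.0])

# Pearson correlation across criteria as a simple correspondence measure
r = np.corrcoef(teacher_profile, pupil_profile)[0, 1]
print(f"profile correspondence r = {r:.2f}")
```

The higher this correlation across a cohort, the stronger the case that the pupils' problem-solving patterns mirror those of their teachers.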
Chapter 11
The Didactic Quality of the Competence
and Measurement Model
For decades, vocational education was torn between two basic guiding principles:
science orientation (pure education) versus qualification to suit the needs of the
labour market (utilitarianism). The central idea of pure education goes back to
Wilhelm von Humboldt. Heinrich Heine sums it up particularly frankly and briefly:
“Real education is not education for any purpose, but, like all striving for perfection,
finds its meaning within itself”. For the implementation of this guiding principle, the
orientation of pure education towards the sciences—towards pure scientific exper-
tise—appeared to be the adequate path to be pursued by all education. The German
Education Council has, therefore, elevated science orientation to a fundamental
didactic principle of all education. For vocational education and training, this
promised to cast off the stigma of utilitarianism, that is, education aimed at useful-
ness. This, however, posed a new problem for vocational education and training.
Attempts to derive vocational knowledge from (academic) scientific knowledge, to
use it to develop systematically structured educational plans and to establish voca-
tional competence led to a dead end. The success story of the science system can be
seen in the exponential multiplication of generalisable disciplinary knowledge,
based on a system of scientific disciplines with a high division of labour. Scientific
knowledge is regarded as pure; hence the affinity between genuine, pure education
and the pure knowledge produced by the sciences. However, this
ignores the fundamental realisation that the historically grown world can only be
understood as a process of objectifying purposes and the underlying interests and
needs. The world in which we live and work, therefore, inevitably means interacting
and dealing with values and responsibility.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2_11
In the 1980s, the insight described above was taken into account with the
central idea of empowering those who are to be professionally educated to help
shape the world of work in a socially and ecologically responsible manner.
Therefore, it is not the scientific abstract knowledge that forms the basis for
the development of professional competence, but the knowledge of the work
process as the basis for competent and responsible professional action
(Rauner, 1988, 32–51; Heidegger, Adolph, & Laske, 1997; KMK, 1991,
590–593; KMK, 1999).
The world in which we live and work, and in whose development we participate
every day in all spheres of society, consciously or subconsciously, on a large scale
and a small one (as consumers through our purchasing decisions, as producers of
utility values, as voters or as members of social movements), is not a pure world.
There are no cars, no buildings, no furnishings, no services that are pure or
without a purpose.
The learning field concept turns the historically grown (working) world as an
objectification of purposes and goals as well as the interests incorporated
therein, that is, as a world with values, into the object of vocational education
and training. It is a matter of understanding and exploiting the scope for
creativity in order to help shape a world of work that is increasingly dependent
on participation.
Patricia Benner’s project from the “Nursing” faculty of the prominent Uni-
versity of California (Berkeley) is still regarded as groundbreaking in the
international vocational pedagogical discussion for training nurses in accor-
dance with the novice-expert paradigm. She describes the “significant” work
situations of nurses, which she and her team empirically identify as the basis
for curriculum development, as “paradigmatic work situations” (Benner,
1997). Successfully coping with these work situations triggers competence
development. Paradigmatic work situations have the quality of development
tasks. Successfully "passing" as well as reflecting on a paradigmatic work
situation teaches trainees to see their working world from a broader perspective
and to take a recognisable step in their competence development. This is the
basis for the KMK’s change of perspective from an objectivistic-systematic
learning tradition to a subject-related structuring of vocational development
and learning processes. The development of the ability to solve vocational
tasks—more precisely to solve them completely—becomes the yardstick
when logically structuring the contents of vocational training programmes.
The risk of failing with the ambitious goal of introducing the groundbreaking idea
of vocational education and training structured according to learning fields has not
been averted. Under the conditions of accelerated social change, the willingness and
ability to try out new things is seen as an indicator of innovative competence in
individuals and institutions. Following the rather cumbersome process of introduc-
ing learning fields to structure educational plans and processes in vocational educa-
tion and training, the orientation towards the COMET competence model should
open a new approach to the learning field concept. This is a challenging project, for
the COMET model is also a competence model suitable for mediating between
educational goal and learning task.
The learning field concept is characterised by a number of key terms which
should be recalled in order to avoid conceptual misunderstandings in further
explanations.
1. The learning field concept is based on the orientation of vocational training
processes towards work situations whose potential for professional competence
development is assessed as “significant” by experts in the respective profession.
2. In principle, competence-promoting work situations and tasks are the linchpin for
the design and organisation of vocational learning, that is, the imparting of
vocational action and shaping competence. The KMK manual on the learning
field concept, therefore, refers to them as “situations that are significant for the
exercise of a profession”.
3. The description of work and learning tasks as effective forms of in-company and
school learning, therefore, requires both a description of the competence-
promoting (significant) work situations and the respective work assignments. It
is only through this linkage that work and learning tasks challenge targeted
vocational action and learning.
4. The distinction between action and learning fields points to the fundamental
difference between working and learning and to the fact that both—in vocational
education and training—are constitutive for each other. The didactic reference
point for the learning fields are the vocational action fields. At the same time,
learning fields—prospectively—point beyond vocational practice. While the
action fields are concerned with the professional execution of a company order,
the learning fields are concerned exclusively with learning. Within the learning
fields, it is, therefore, possible and, pursuant to the educational objective of
co-designing in social and ecological responsibility, also necessary for the
description of learning tasks to go beyond the limited operational framework of
the work situation in accordance with the formulated characteristics of a learning
task (refer to p. 505). In nursing or commercial professions, case situations or case
studies are often used which are characterised by a stronger link between learning
and action fields (Fig. 11.1).
The term “learning task” is not used in the learning field concept and, therefore,
requires classification. The learning field concept has produced the blurred term
“learning situations”, which take up “professional tasks and courses of action” and
“didactically and methodically prepare them for implementation in teaching” (KMK
11.1 The Learning Field Concept Provides Vocational Education and Training with. . . 427
certification systems and assessment methods based thereon, such as the British
system of National Vocational Qualifications (NVQ), promise stronger ground
under the feet of those who are looking for tried and tested recipes. In contrast to
the seemingly diffuse learning field concept, which after almost two decades of its
introduction appears as a ruin of innovation, competence-based learning promises a
handy formula which, it seems, is also in line with the EU projects of the European
Qualifications Framework and the ESCO project (European Skills, Competences
and Occupations).
One problem with both EU initiatives is the programmatic formula that voca-
tional education and training is defined as a process of acquiring qualifications
“irrespective of place and time”. In this context, vocational curricula and developed
methods of vocational learning are regarded as input factors—and, therefore, as
yesterday’s methods. From this perspective, vocational training programmes appear
to be a considerable disruption potential that stands in the way of establishing a
profitable and flexible service sector (in line with a relevant GATS recommendation)
(Drexel, 2005).
It looks as if the educational policy and planning reception of this qualification
concept in Germany is meeting with considerable resistance and that dual vocational
training is being rediscovered internationally, above all as a means of combating
youth unemployment. At their meeting at the end of September 2011 in Paris, the
G-20 employment ministers emphasised the introduction of dual vocational training
systems in their catalogue of recommendations for action to combat youth unem-
ployment. Modern vocational training (Sennett, 1998) based on the concept of
European core occupations (Rauner, 2005), vocational training structured according
to learning fields and competence diagnostics based on vocational shaping compe-
tence (Rauner et al., 2011) are gaining in importance in this context—also interna-
tionally. There is, therefore, much to suggest that the learning field concept is still
proving to be a highly innovative reform project for vocational education and
training.
The working world for which vocational training prepares its trainees teaches us that a
heating or lighting specialist, a specialist in the retail trade or a specialist in education
is always faced with the challenge of balancing a variety of possible solutions and
procedures when solving a professional task. The amount of time available, the
variety of professionally possible solutions, their practical value and sustainability,
their environmental and social compatibility and not least their economic feasibility
are criteria that must be weighed against each other in every situation.
world. True education enables us to answer this question: Why are the realities
of the working world (and society) like this and not like that? And: Is there
another way? True education enables us to help shape the (working) world,
which inevitably means facing up to the responsibility associated with it.
Fig. 11.2 Identification and determination of training and teaching content in terms of vocational
qualification requirements and educational objectives (Rauner, 2000)
The concepts of active learning pose a mystery: How do beginners become experts
without first acquiring the corresponding knowledge? This may sound paradoxical,
but it corresponds exactly to what vocational pedagogy understands by active
learning. Beginners in a profession become experts by doing what they want to
learn. Trainers support them by confronting learners with work situations that are
challenging to master. At the same time, it is also true that professional skills are
based on professional knowledge.
With the introduction of the learning field concept, the formula "professional
knowledge first, then professional action" is a thing of the past.
Gottfried Adolph (1984, 40 f.) reported on an informative example from his
teaching practice.
inserting and removing the light, the phenomenon is repeated over and over again, as
if one needed repetitive [...] confirmation of what is intrinsically “impossible”
(exclamation of a student: “This is impossible!”).
Gottfried Adolph comments on this typical event: “... Everything that happened
was not expected by the students, who expected that a ‘correctly’ connected light
would also light up. If it does not, then it is ‘broken’. It is expected that twisting a
light in and out of its socket will influence that light and not on the other”.
He, therefore, concludes: “The preceding theoretical teaching on the series
connection of resistors has not changed the expectations expected in practice—the
school theory has not reached the personal, secret theory [...]. It turns out that the still
widely used organisational model (first so-called ‘theory teaching’... followed by
‘practising application’... is wrong in its approach.” (ibid., 41).
If the teacher had asked the pupils to experiment with the series connection of
lights of different wattages instead, then the pupils, possibly supported by the
teacher, would have finally understood in a process of testing and experimenting
(in line with experimental cognitive activity) not only the laws according to which a
series connection works, but also the important aspects of connecting lights in series.
The decisive point for this form of acquisition of professional knowledge, however,
is that the pupils would not only be taught formulae for calculating the series
connection of ohmic resistors, but that they would be challenged to experiment
and acquire these findings themselves. If these technical findings are taught by the
teacher, then their value for practical tasks is not only limited, but the teacher has
missed an important learning opportunity, namely, the acquisition of the ability to
gain knowledge by experimenting.
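Adolph's classroom example can be made quantitative with exactly the kind of calculation the pupils could discover by experimenting. Treating each lamp as an ohmic resistor estimated from its rating (R = U²/P; the temperature dependence of the filament is ignored, and the lamp values are invented for illustration), a series connection gives most of the voltage, and hence the visible brightness, to the lower-wattage lamp:

```python
# Two incandescent lamps of different rated power in series on a 230 V supply.
# Resistance is estimated from the rating (R = U**2 / P); the lamp with the
# LOWER rated power has the HIGHER resistance and therefore drops most of
# the voltage, which is why it glows while the other lamp stays dark.
V_SUPPLY = 230.0
ratings = {"lamp_25W": 25.0, "lamp_100W": 100.0}   # rated power in watts

resistance = {name: V_SUPPLY ** 2 / p for name, p in ratings.items()}
current = V_SUPPLY / sum(resistance.values())       # same current through both lamps

voltages = {name: current * r for name, r in resistance.items()}
powers = {name: current ** 2 * r for name, r in resistance.items()}

for name in ratings:
    print(f"{name}: U = {voltages[name]:.0f} V, P = {powers[name]:.1f} W")
```

Run as-is, the sketch reproduces the "impossible" observation: the 25 W lamp drops 184 V and dissipates 16 W, so it lights up, while the 100 W lamp receives only 46 V and 4 W and appears dark.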
Only work tasks that encompass a potential for the learner’s competence develop-
ment have the quality of “development tasks” and can be transferred to learning
tasks. Learning tasks can be completed in a few hours if—as with open test tasks—
they are restricted to conceptual planning. This distinguishes learning tasks from
projects. Projects always have two results:

• a "product"
• a learning outcome

The learning outcome is the main concern of a project within training and must,
therefore, not be lost sight of. It is important to exchange ideas at the beginning of a
project and to ascertain what can be learnt in the planning and implementation of a
project.

11.2 Designing Vocational Education Processes in Vocational Schools
Learning situations aim at professional competence development. They belong to
the project-based forms of learning, as the intrinsic learning tasks are realistic and
complex. They are, therefore, also based on the concept of a complete task solution.
If learning tasks are also solved practically, then it is useful to speak of “work and
learning tasks”.
Learning situations offer a practical advantage. Project-based learning is
maintained, especially if the didactic concept of complete task solution is observed.
However, the organisational and temporal framework conditions for the implemen-
tation of learning situations are uncomplicated. This also means that learning tasks
can be worked on and justified in varying depth and breadth.
The following explanations serve as orientation for the step-by-step design of
learning tasks on the basis of work situations/tasks (Fig. 11.4).
trainees gain their work experience and are trained practically in these company-
specific contexts.
In many cases, it will make sense to conduct a more in-depth investigation of the
work situation or task together with the trainees in order to explore the company or
the company’s expertise. For this purpose, a detailed questionnaire or an exploration
grid with the most important aspects to be considered should be developed. Only
such an instrument turns an unsystematic inspection into a targeted exploration.
Despite thorough preparation, however, the operational events can never be fully
anticipated.
Table 11.2 Checklist for verifying the suitability of in-company work tasks for training purposes
Trainees
• Do the trainees have sufficient previous knowledge and practical experience to cope with the
task?
• Can the trainees learn anything while working on the tasks in line with their training?
• Is the time and organisational effort required to complete the work task clear and manageable for
the trainees?
Trainers and teachers
• Do the trainers and teachers possess the necessary technical, social and methodological-didactic competences, or can they acquire missing competences?
Companies
• Is it possible to reconcile work task processing by trainees with the interests of the training companies?
• Is there any benefit for the training companies or for the learning venue network?
• Are the burdens for the training companies distributed fairly?
• Can the production or the service be taken out of the company's time-critical process for training purposes?
• Do those responsible in the company agree?
• Is there enough time available for the part-time trainers?
Vocational school
• Do those responsible at the school agree?
• Are the colleagues whose lessons may also be affected informed, and do they agree?
Resources
• Are the necessary resources available or can they be procured?
• Are there suitable learning and work locations available for processing the work task?
Framework curricula
• Can a reference be made to the framework curricula?
• Is the work task relevant for examinations?
Possibilities for design and potential
• Does processing the task allow alternative approaches and solutions?
Skilled work/craftsmanship
• Does the processing of the work task place exemplary demands on the trainees in terms of skilled work or craft work?
Financing
• Can any necessary funding be raised?
The preceding checklist for the selection of suitable work tasks (Table 11.2) contains selection criteria that can be checked by means of partial questions answerable with "Yes" or "No".
Basically, work processes are always learning processes. Trainees—but also all
professionals—gain experience, gain confidence in handling specific professional
tasks (exercise effect), learn how to deal with mistakes and solve unforeseeable problems, and work together with colleagues, superiors and other trainees. They are usually also concerned with the consequences of their own actions regarding
• The superordinate work result
• The clients
• The team
In this respect, work tasks are always associated with work experience. Whether and to what extent this work experience is reflected upon depends on the in-company training—the trainers and the company practice group.
These questions all arise within the context of operational circumstances and the scope for design. Nevertheless, reflection on the operational work and the exchange of ideas with the operational actors are a first step towards the transformation of a work task into a process of generalising the situational work experience.
The result is knowledge that detaches itself from the work process and opens up the
possibility of dealing prospectively with the specific work processes in technical
discussions with colleagues, trainers and teachers: What could be improved in the
implementation of the work processes?
At school, the relationship between work and learning—between the work and
learning task—is fundamentally changing. It is no longer a matter of professionally
carrying out a work task—embedded in a company work process. It is exclusively
about learning. In this respect, it is consistent that we are talking here about learning
tasks and learning situations. The term "work and learning tasks", which is occasionally used, is intended to remind us that the learning tasks are directly related to
concrete work tasks. That would speak for this designation. It should, however, be
reserved for projects which are carried out in cooperation between schools and
companies and which are embedded in real work processes. School-based learning
tasks, on the other hand, have as their reference point “significant work situations” or
work tasks and processes which teachers consider to be characteristic of the profession and adequate for the respective situation of the learner's competence
development.
For learning tasks, it is, therefore, not important that they are based on the
subjective experience of the trainees, but that the trainees are able to build on
their own work experience by working on a learning task in the process of
school learning.
The following six design features can be derived from the COMET competence model and the theoretical integration of the learning field concept for the design of learning tasks. The first of these is transcending professional reality: prospectivity.

Prospectivity
Trainees from different companies have similar or different experiences in the same professional action field. Taken together, these experiences point beyond the problem-solving horizon of individual companies. The school, therefore, has the potential to think
and experiment prospectively and beyond the current company situation. When
designing the learning situations, it is, therefore, very important to make full use
of the experimental possibilities for a prospective interpretation of the learning tasks.
To this end, study labs must be equipped accordingly. Unfortunately, they rarely have this quality: they are usually designed for experimentally comprehending and applying theoretically acquired knowledge.
Learning tasks allow, and indeed suggest, highlighting certain aspects of work situations and neglecting other, less learning-relevant aspects, as long as the authenticity and objectivity of the work situation are not affected. This achieves a certain dramatisation of the work situation/task, which strengthens the learners' motivation to deal with the given task with commitment.
Learning tasks are formulated with reference to realistic work situations from the perspective of "customers". The learners are, therefore, challenged to conduct a problem analysis based on the customer's description and ultimately to develop a professional procedure and a solution for the task. This concept of open tasks requires a more or less wide scope for design through the form of the situation description, taking into account the criteria of the concept of the complete task solution.
Representativity
Learning tasks represent work situations that are typical for the profession and contain problems with adequate learning and development potential. They have the quality of development tasks. Focal points of operational organisational development, for which there are no fixed solutions, are also suitable.
Competence Development
The following structure has proved to be useful for the description of learning
tasks that are intended to challenge the complete solution of tasks (Lehberger, 2015,
67):
• Specification of the learning task, which shows the reference to the action
• A description of the situation which relates to a typical and problematic professional work situation (if necessary, with illustrations), which is formulated from a
customer perspective and which is open to alternative solutions—in line with
professional practice
• A task that clarifies the perspective from which the situation is to be viewed and
from which the objective of the action is to be derived
Experience from the COMET projects shows that active use is made of the option of publishing tried and tested learning situations/tasks via the Internet—for example, using net-based groupware. As these are open learning situations/learning tasks and not conventional teaching designs, the teaching-learning processes are not standardised: the situation-specific peculiarities remain unaltered. In this respect, there is every reason to establish such platforms.
However, one condition should be fulfilled before learning tasks are "published": each learning task should include a solution space, so that teachers can recognise the learning potential of a learning task from the perspective of the developers. In principle, solution spaces cannot be complete. However, they outline the possible solutions with respect to all aspects of the solution. In the course of time, the solution spaces are therefore expanded by new users.
With some practice, experienced teachers are able to develop learning tasks virtually "on a continuous basis". Practice shows, however, that the quality suffers considerably whenever learning tasks have not been tested but have merely been thought up by their authors (cf. Lehberger, 2015, 213 f.).
Learning tasks that are placed on the Internet and published should always be
tested in class and include a description of the solution space.
It is recommended to develop learning tasks in a team. According to all
experience, this increases the quality of the tasks.
Fig. 11.6 Distribution of competence levels across two locations (shipping clerks)
Developing Competences
The COMET Competence Model offers a solution that is oriented towards the
guiding principle of imparting professional competence by working on and solving
work tasks that demonstrate the quality of development tasks. The overarching
educational objective, the ability to completely solve work tasks, cannot be called
into question because incompletely solved work tasks entail risks to a greater or
lesser extent. Empirical competence research shows that the great heterogeneity
within and between the test groups (e.g. classes) persists even if the teacher succeeds
in raising the competence level of their class (Fig. 11.8).
If one depicts the professional competence (development) of trainees or technical
college students in the form of competence profiles (Fig. 11.9), then learners and
teachers can answer important questions such as
• Has the trainee/student completely solved the work/learning task?
• If not, which aspects of the solution were not or insufficiently considered?
• Is the level of competence similar in all learners?
Teachers as "Development Supporters" and "Learning Consultants"
• Trainees grow into a profession by learning to solve increasingly complex professional tasks completely and responsibly: the professional skills as well as the understanding of and responsibility for what one does form an indissoluble connection.
• Therefore, the potential of the learning task to trigger competence development is particularly important. Professional competence arises from reflected work experience.
• The degree to which prospective professionals (trainees/students) are able to exploit the scope for solutions or design given by vocational tasks and justify their solutions is the indicator for the development of professional competence.

Teachers as "Teaching System"
• Teachers define the learning objectives for their lessons: target-learning behaviour of the pupils = lesson planning.
• They organise learning by the optimal arrangement of learning steps: this is about the attempt to achieve the target-learning behaviour of the pupils = lesson organisation; then the teacher checks whether the pupil Sch has become a pupil Sch' as a result of learning: learning control.
• The didactic action of the teacher is based on the type of purposeful rational action and corresponding didactics, as expressed, for example, in the concept of programmed learning.
• If the level of competence is similar, then the teacher is challenged to promote the sub-competences developed, for example, by means of corresponding learning tasks.
• On what level of knowledge were the task solutions based?
The competence profiles of the trainees/students are a good basis for the design of
differentiated teaching (individual support).
This form of diagnostics (evaluation) also shows the level of knowledge at which trainees/students can solve work tasks/learning tasks: at the level of action-guiding, action-explaining or action-reflecting knowledge of the work process.
Fig. 11.8 Percentile bands for professional competence across test groups at class level for apprentices (results 2009)
Fig. 11.9 (a) Average competence profile of a test group of vocational school students (type "VET"), n = 27 and (b) differentiation of the competence profiles according to the total score (TS) and the coefficient of variation: (a) E-B, Class No 7, n = 26; (b) E-B, Class No 5, n = 18; (c) E-B, Class No 24, n = 19; (d) E-B, Class No 23, n = 17 (results 2009).
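The differentiation in Fig. 11.9 rests on two simple statistics: a test group's mean total score and its coefficient of variation, i.e. the standard deviation divided by the mean, which indicates how heterogeneous the group is. A minimal sketch of this calculation follows; the scores are invented for illustration and are not COMET data:

```python
from statistics import mean, pstdev

def coefficient_of_variation(scores):
    """Relative dispersion of a test group's total scores: std dev / mean."""
    return pstdev(scores) / mean(scores)

# Invented total scores for two hypothetical classes (not COMET data).
class_a = [32, 35, 30, 33, 34, 31]   # scores close together: homogeneous group
class_b = [12, 45, 20, 50, 15, 48]   # scores widely spread: heterogeneous group

print(f"class A: CV = {coefficient_of_variation(class_a):.2f}")
print(f"class B: CV = {coefficient_of_variation(class_b):.2f}")
```

A low coefficient of variation signals a homogeneous group; a high one signals the kind of heterogeneity within test groups discussed above.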
The planning and preparation of project-based forms of learning are confronted with
a dilemma. Detailed planning largely determines the goals, contents and course of
didactic action. However, lesson planning is only good if it opens up leeway for the learners/students. A first hint on how to deal with the described dilemma is
provided by the training paradox already considered: Professional beginners
become experts by doing what they want or should learn. The teacher is, therefore,
not allowed to spoon-feed the students what they want to learn. This is where the
new role of teachers comes into play, namely, opening up design and decision-
making spaces for trainees/students. The saying “from knowledge mediator to
learning process facilitator” is first made concrete here.
The following, therefore, deals with the design and organisation of teaching-
learning processes which enable the individual learner/student to deal with learning
tasks which have a suitable learning potential. In order for them to learn something,
it is particularly important that they can contribute their work experience and
determine where they need to learn something in order to be able to work on the
tasks they have derived from the learning tasks. The individual discussion of a
learning task includes cooperation with other trainees/students if the work or learning process requires it.
The selection of suitable customer orders and their formulation as learning tasks play a very central role in the design and organisation of vocational training processes. Two sources for the selection of customer orders have already been
described in the form of the works survey and the job portal. Based on the selected
assignments, the teacher can create learning tasks with corresponding situation
descriptions. If there is a job portal, then of course the learners/students can also
choose learning tasks that correspond to their level of development and that have the
right potential for something novel. The question from the learner’s point of view is:
Where do I see the challenges that the task holds in store for me? The learner will
only find an answer to the question of what they can actually learn by dealing with
the new situation once the task has been solved. At this point, the teacher has a
special responsibility to ensure that the learners are able to learn something while
completing the learning task. In order to fulfil this responsibility, the teacher must clarify very precisely what experiences the (individual) learners have already had and what knowledge they have acquired. Only then can a challenge be described through whose accomplishment they can gain new work experience.
In current teaching practice, it can be observed time and again that learners are
usually able to master learning tasks through the use of their existing work and
learning experience. Very often nothing new is learnt!
The selection of a suitable learning task is not critical in view of the heterogeneity
between learners or between different learning groups, as this form of learning leaves
open the depth and breadth to which individual learners or learning groups work on
the task. There is, therefore, not only a task-specific learning potential or a learning
potential related to a competence level that can be summarised in "learning objectives". Just as in sports, an improvement in the long jump from, for example, 4.20 m
to 4.40 m may be a great personal success for some, while the 5.20 m mark is not a
success for others if they have already jumped 5.40 m.
Learning tasks with their possible solutions leave open the level at which they
can be solved. They have an individual development potential for the trainees/
students.
What teachers should bear in mind when taking this first step:
• The learning task must be selected so that it has an appropriate learning potential
for the learning group and all trainees/students on their way to achieving employability (also refer to the job profiles and vocational curricula).
• The learning task should be a challenge for both the weak and the strong learners
and offer correspondingly challenging solutions.
• The learning task must be described from the customer's perspective (refer to p. 541 and p. 549). In the case of extensive tasks, the question arises as to a division of labour in groups. This form of learning organisation is very demanding because the coordination of division-of-labour learning involves cooperation between groups, and all participants should benefit from the learning processes and outcomes:
• The combination of sub-solutions and new knowledge must be carefully planned.
• How should the groups inform each other about what they have learnt (refer to p. 508)?
In this step, it is particularly important that the teacher succeeds in getting the
trainees/students to adopt the respective learning task as their own. For this purpose,
they first clarify the customer’s (externally or internally) formulated requirements
and wishes on the basis of the situation description. This analysis gives the trainee/student an initial orientation regarding what the result of processing the learning task should be from a technical perspective and what needs to be done (tasks) in order to achieve appropriate solutions (technical specification). At this point, it is also a matter of identifying possible solutions and deciding which approaches to solutions "remain in play" for the time being, that is, which will be pursued further.
All learning tasks are described from the customer’s perspective. The task of the
learners—as prospective professionals—is then to:
• Check the customers’ requirements for their feasibility
• Check whether individual requirements contradict each other and how these
contradictions can be resolved by comparing all requirements in their weighting
• Check whether the customer has overlooked requirements that are technically
feasible, for example
The most important thing then is to incrementally translate the customer’s wishes
into a specification.
It remains to be seen whether the specification formulated in a first step will prove
feasible and whether the further steps of the task solution will result in new insights
and “better” solution options. It is likely that an initially formulated specification will
only take its final form when the procedure and its justification are documented.
If a learning task is in the form of a specification given by the teacher, then the
trainee/student becomes the executor of the detailed solution, as shown by the
following example:
How teachers and trainers can hinder the process of competence development:
• If they formulate situation descriptions that do not reflect the customer's requirements and wishes
• If they give the trainee/student learning tasks in the form of specifications, thereby telling them exactly what they have to do
In practice, teachers frequently set tasks without considering what their trainees/
students can learn. Their professional task concept may be based on a
misconception, at least if it is the task concept of operational work preparation
(WP) that ensures through detailed specifications that the task solution is
implemented as planned. This unintentionally destroys the learning opportunities
of trainees/students.
A misconception: the aim of the lesson is achieved not simply when the learning tasks have been solved, but when learning tasks with a learning potential identified in advance by the teacher are solved and when the trainees/students "learn" to pursue this potential while reflecting on their work experience.
It is, therefore, part of the professionalism of the didactic actor to estimate the
degree of difficulty of learning tasks so that the trainee/student is not over- or
underchallenged. With heterogeneous learning groups, it might be difficult to find
the “right level of difficulty”. Here, the concept of the “open learning task” requires a
rethink. It is not important to adjust the level of difficulty of a learning task, as there
cannot be an appropriate level of difficulty for all learners in a learning group!
Rather, the teacher formulates realistic learning tasks that have development poten-
tial in learning a profession. These are learning tasks that
• Are appropriate for the “level of training” (beginner, advanced beginner, etc.)
• Do not restrict the scope for design
• Enable trainees/students to base their task solutions on a level of knowledge corresponding to their individual competence development
The solution variants of the individual learners and those of the working group, as
well as the depth and breadth of their justifications then represent the competence
level and the competence profiles of the trainees. When learners give their best, there
is no underchallenging of the stronger. The challenge for the teacher is to provide
“process-related help”, so that the weaker also find a solution to the problem.
After the analysis and technical specification of the learning task, the trainees/
students are able to reflect on their learning opportunities together with the teacher.
With the learning task analysed in this way, the trainees/students now link the two
types of objectives: “learning objectives” and “work objectives”. They have been
concretised to such an extent that they represent the orientation basis for working on
the task—alone or in a team.
Of course, the trainees/students can only answer the preceding questions if the
analysis of the situation description is successful: Have the customer’s requirements
and wishes become clear to them and could they make an initial technical specification? In tuition practice, however, it is not uncommon for trainees/students to have
difficulty understanding the description of the situation. They then have no access to
the learning situation: “I don’t understand the task and don’t know what to do.” The
challenge for the teachers now is to help the trainees/students without depriving
them of the chance to find access themselves. This is where process-related help is an
obvious option, for example, in the form of questions and requests that open up the
learner’s own access to the task at hand.
Once it has been clarified what approximate result the solution of the tasks or the processing of a project should yield, and which alternative solutions and procedures must be weighed up, it is necessary to define the evaluation criteria for the task solution. The
COMET rating procedure can be used as an orientation framework here. The
didactic benefit of this step is obvious: learners now know exactly what is important
when developing a task solution (Table 11.3).
The results of empirical teaching research show that the development of evaluation criteria (and their application in the self-evaluation of work and learning outcomes) increases learning success in terms of
• The scope and evaluation of alternative solutions
• The possibilities for the design and organisation of the task solution (work
process)
• The reflection of what was learnt and the learning process
As the evaluation criteria describe not only the expectations of the result, but also
of the task solution process, they are a good basis for reflecting on what has been
learned and the quality of the task solution.
In this phase of teaching, teachers are challenged to become aware of their
expectations of the learners’ individual competence development and to assess the
learning potential of the learning task on the basis of the following questions:
Table 11.3 Evaluation criteria for task solution, approach and competence

Criteria for evaluating the task solution
• Does the task solution have an appropriate utility value for the "customer" (client)?
• Was the task solved completely?
• Was there a well-founded balance between alternative solutions?
• Was the presentation of the results successful (for whom)?
• How is the quality of the task solution—based on the evaluation criteria—assessed?

Criteria for evaluating the method
• Has the planned procedure proven worthwhile?
• Was it possible to translate the situation description into technical specifications?
• Was it necessary to deviate from the client's requirements—if so, why?
• In which steps did prior knowledge not suffice to solve the task?
• What aids were used to solve the task?
• Which errors and dead ends occurred and how were they corrected?

Criteria for evaluating the acquisition of new competences
• What work experience and knowledge could be drawn on?
• What knowledge and skills had to be acquired in order to solve the task?
• Where and how was the knowledge and know-how of the teacher used?
• What means were used to solve the task (reference books, internet, etc.)?
• Did the know-how of individual pupils (pupils learn from pupils) help?
• What role did trying out and experimenting play in the acquisition of new competences?
the processing of learning tasks and in the execution of projects. This “gradual
approach” (Böhle, 2009) is an explanation for the dissolution of the described
training paradox in connection with action learning (refer to p. 495) (Fig. 11.11).
When observing current teaching practice, it is noticeable that in the practical
implementation of the theory of complete action this very “gradual approach” is
often ignored as a possibility of knowledge acquisition. Instead, the phases of the complete action are used to structure the entire work and learning process, and it
is assumed that the entire knowledge required for planning can be made available in
advance via the one-off procurement of information. This practice takes the concept
of action learning ad absurdum, because it only draws on objectively available knowledge and excludes learning through reflection on experience. The preceding
explanations naturally do not exclude the possibility that at the beginning of the
discussion with a task, initial planning decisions can be made and approaches to
solutions developed by accessing the available knowledge. The development of
professional competence is not only about obtaining missing information, but
especially about developing concepts for:
• Professional learning
• Professional work
• Professional cooperation (Bremer, 2006, 293 ff.)
It usually takes some time for learners to understand how work and learning are interrelated and that they are two sides of the same coin. Teachers and trainees/
students are challenged in vocational education and training to understand what
distinguishes work process knowledge and the vocational skills based thereon. The
concept of collegial cooperation is based on experiences of cooperation in opera-
tional work processes.
The possibilities for dealing with a new challenge, which initially appears to be an
insurmountable hurdle to solving a problem, are manifold. First of all, the reflection
of the learning experience when solving new tasks—under the guidance of the
teacher—is an essential part of tuition. This is about the development of a
professional learning concept. This concept should not emerge merely by chance; rather, trainees/students should become aware of their possibilities of learning on the path to employability.
The support provided by the teachers should be process-related and not product-
related. References to sources of information, methods of learning, experimental
possibilities, software tools or even mathematical methods belong to the process-
related aids which give the trainees the opportunity to solve the task themselves.
Process-related support also includes requests or questions expressed by teachers to
learners. Product-related support, on the other hand, is aimed directly at solving a
task or a problem.
In the school learning processes, the trainees/students tie in with their own
experiences or those of their fellow learners. It is, therefore, important to understand
that in vocational education and training, “group work” for teachers and trainees/
students is not a question of changing the social form, as is often the case in
textbooks.
It is not unusual for trainees/students to exclaim “not group work again!” when
teachers prescribe group work according to the principle of method change in order
to practice the ability to work together. If cooperation in a working or learning group is also to be lived and experienced subjectively as meaningful, then working and learning on a common cause is a prerequisite.
adopted the corresponding learning task as their own, then it is also a question of
how the learning process can be shaped together.
This is where the teacher comes in, who can draw on the results of learning
research regarding the organisation and design of group work. In in-company
vocational training, trainees are assigned different functions—consciously or subconsciously—namely, that of
• Minions
• Spectators, observers
• Worker’s assistants
• Employees or colleagues
These functions can ultimately result in stable roles with a lasting impact on the
success or failure of training.
Trainees who remain in the role of the assistant for too long and become
accustomed to someone always telling them what to do and how to do it run
the risk of not achieving the objective of vocational training “professional
action competence”.
Very similar traps lurk in the implementation of learning situations and projects at
school. Teachers and trainers, therefore, have the important task of making trainees
aware of these traps.
Cooperative learning
Especially in the practical manual on "Cooperative Learning" by Brüning and Saum (2006), the methods of group work are presented vividly and in
detail on the basis of extensive international experience and research. Some
central elements of cooperative learning should, therefore, be pointed out here.
The most important principle in advance: “Individual work is a core element of
cooperative learning” (Fig. 11.12).
Fig. 11.12 Principles of cooperative learning according to Brüning and Saum (2006, 15)
Table 11.4 Comparing the advantages and disadvantages of homogeneous and heterogeneous learning groups (Bauer, 2006, 208)

Homogeneous learning groups
Advantages:
• Tendency to favour high-performing learners.
• Classroom-style teaching is "sufficient", therefore less pedagogical effort.
• Reduced complexity.
• Teachers feel less overwhelmed.
• High resistance to pedagogical errors.
Disadvantages:
• Real existing differences threaten to be ignored.
• Learners with learning disabilities miss out.
• Pupils with high social status are favoured/supported more strongly.
• Development inequality is promoted and consolidated.
• Sustainable Pygmalion effects, early fixation on a certain performance level.
• Fear of loss of control.
• Teacher-centred.

Heterogeneous learning groups
Advantages:
• Favouring of learners with learning disabilities.
• More equal opportunities.
• Better support of individual personality development.
• Familiarisation with different perspectives and life plans.
• Confrontation with other perspectives.
• Promotion of social learning, formation of social competences.
• Reflection on own positions.
• Improved preparation for modern social challenges.
• Basis for the use of versatile methods.
• Learning-centred.
Disadvantages:
• Not to be mastered by classroom-style teaching.
• Higher pedagogical effort.
• Less resistance to pedagogical errors.
This sharing phase requires the establishment of rules. "If no rules are introduced and the discussion and evaluation process remains unregulated, then the speaking-time slots allotted to the contributions are often distributed very unequally among the members of the group. Some tend to be more reluctant to contribute to discussions, others 'talk faster than they think'. Therefore, rules that allow for a balanced exchange are important. The eloquent group members learn to listen and to take a back seat, and the reserved ones are given the (learning) chance to argue, discuss and present" (Rauner & Piening, 2014, 43).
Good experiences have been made with so-called talking chips, a group-work
method tested in Canada (Johnson & Johnson, 1999, 69). Each student receives an
equal number of talking chips. In the exchange phase, the rule applies that members
of the group may only speak if they hand in one of their chips. If a group member's
supply of chips is used up, they can only participate again in a new round of talks
(with new chips).
Talking chips ensure “that the speaking-time slots of group members balance
themselves, while they also have an educative effect, as both the reserved and
the eloquent pupils very quickly become aware of their speaking behaviour”
(Brüning & Saum, 2006, 34).
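The talking-chips rule can be sketched as a small state machine. The class below is a hypothetical illustration: the member names, the chip count and the reset rule for a new round are assumptions made for the sketch, not part of the published method.

```python
# Illustrative sketch of the talking-chips rule: a group member may only
# speak by handing in a chip; once the chips are used up, the member must
# wait for a new round. Chip count and names are invented for illustration.

class TalkingChips:
    def __init__(self, members, chips_per_member=3):
        # Every member starts with the same number of chips.
        self.chips = {m: chips_per_member for m in members}

    def may_speak(self, member):
        return self.chips[member] > 0

    def speak(self, member):
        # Speaking costs exactly one chip.
        if not self.may_speak(member):
            raise ValueError(f"{member} must wait for the next round")
        self.chips[member] -= 1

    def new_round(self, chips_per_member=3):
        # A new round of talks: everyone receives a fresh supply of chips.
        for m in self.chips:
            self.chips[m] = chips_per_member

group = TalkingChips(["Ana", "Ben", "Cem"], chips_per_member=2)
group.speak("Ana")
group.speak("Ana")
print(group.may_speak("Ana"))  # → False (Ana's chips are used up)
print(group.may_speak("Ben"))  # → True
```

The sketch makes the educative point of the method visible: speaking time is bounded per member, and only a new round restores it.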
Finding the solution to the problems covers only half the distance in processing
the learning tasks. Now it is important to evaluate the quality of the
task solution. It generally turns out that different solutions were developed by
individual trainees/students or working groups. The competence profiles
(Fig. 11.13) show where the differences lie. This is where the teacher comes into
play, who can show how the solution space given for a learning task has been
exploited with the individual solutions.
The results obtained in different ways are to be evaluated and assessed by the
learners with the help of the agreed evaluation criteria. The primary focus here is on
the utility value of the results for the customer: it is about developing a professional
work concept.
Assessment forms have proven useful as a tool for self-assessment in teaching
practice (Table 11.5). If the tasks are not solved to the satisfaction of the learners
themselves (not OK), the task-solving procedure must be reconsidered. The assess-
ments then lead to corrections or additions within the planned and implemented
462 11 The Didactic Quality of the Competence and Measurement Model
Fig. 11.13 Different exploitation of the solution space (Final Report 2010, Hesse)
work and learning processes, if necessary. The trainees/students also decide to what
extent their results meet the requirements of a complete task solution. Only when the
tasks have been satisfactorily solved (OK) does it make sense to reflect on the work
and learning processes carried out in context.
Both the learners (self-assessment) and the teachers (external assessment) can
use the assessment form specially developed in the COMET project for use in
teaching as a tool for evaluating and assessing the task solution.
Only documented facts can be evaluated (don’t “read between the lines”)!
The assessment form can be modified for use in a comparison of self-assessment
and external assessment. The assessment results can be transferred into a network
diagram for illustrative presentation in the plenum. The result, which is judged to be
sustainable by the learners, is then prepared for presentation in the plenum.
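The transfer of assessment results into a network diagram can be sketched numerically: each rating level of the assessment form is coded as a point value and the ratings are averaged per criterion group, yielding one value per axis of the diagram, so that self-assessment and external assessment can be compared on the same axes. The numeric coding and the example ratings below are illustrative assumptions, not part of the COMET manual.

```python
# Hypothetical numeric coding of the four rating levels of the form.
SCALE = {"fully met": 3, "partly met": 2, "not met": 1, "in no way met": 0}

# Example self-assessment, grouped as in the assessment form
# (the ratings are invented for illustration).
ratings = {
    "clarity":       ["fully met", "partly met", "fully met", "partly met"],
    "functionality": ["partly met", "fully met", "partly met", "not met"],
    "utility value": ["partly met", "not met", "partly met", "partly met"],
}

def group_scores(ratings):
    """Average the numeric ratings for each criterion group."""
    return {group: sum(SCALE[r] for r in rs) / len(rs)
            for group, rs in ratings.items()}

scores = group_scores(ratings)
print(scores)
```

Plotting these per-group averages for a self-assessment and an external assessment on the same axes yields the comparison diagram described above.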
Table 11.5 Assessment form for use in class (e.g. electronics technician) (Katzenmeyer et al.,
2009, 202). Each criterion/indicator is rated on a four-level scale (fully met / partly met /
not met / in no way met), with space for comments.

CLARITY
1. Presentation appropriate for client? For example: description, operating instructions, cost plan, component list
2. Representation appropriate for specialists? For example: circuit diagrams, installation diagrams, terminal diagram, cable diagram, programme printout with comments
3. Solution illustrated? For example: technology scheme, site plan, sketches
4. Structured and clear? For example: cover page, table of contents, page numbers, company contact info, customer contact info

FUNCTIONALITY
5. Functional capability? For example: dimensioning/calculation o.k., fuse protection, necessary interlocks, limit switch
6. Practical feasibility considered? For example: electrical and mechanical design possible?
7. Are presentations and explanations correct and is the state of the art considered?
8. Solution complete? For example: are all required and necessary functions in place?

UTILITY VALUE
9. Utility value for client? Are useful and helpful functions considered? For example: automatic error detection, interventions and changes
10. User friendliness? Operability, operator guidance, clarity, alarm and operating displays
11. Low susceptibility to faults considered? For example: preventive error information, redundancy, partial running capacity, are material properties optimal for application?
12. Longer-term usability and expansion options considered?
(continued)
The presentation and evaluation of task solutions, work and learning processes as
well as learning outcomes are of high didactic value. Among other things, this helps
to
• Clarify the satisfaction/dissatisfaction of the “customer” with the offered task
solution
• Assess the ability of the expert listeners (co-learners, teachers, trainers and
similar) to solve the problem
• Clarify questions as to whether the procedure (including the methods used) has
proven itself, where there have been problems and how it can be optimised for use
in the next learning situations/projects
• Evaluate explanations, justifications and considerations of alternative solutions
and procedures according to whether they are professional and conclusive
• Exchange experiences with different learning methods and work-organisation
structures
• Describe and evaluate gains in knowledge, new experience and new abilities and
consider the question: "What else did you learn?" In this case, it all depends on
  – The methodical procedure
  – The capacity for cooperation
  – The reflection on conflict settlement
It is also an opportunity to reflect on the importance of vocational learning for the
non-working world.
The evaluation is based on the evaluation criteria defined at the beginning (refer
to Table 2.1, p. 568). It may also turn out that the evaluation concept contains
weaknesses that can be avoided in the next learning situation/project.
In this tuition phase, teachers must decide on the role they want to play.
• Do they leave the assessment of the work results to the trainees/students?
• Do they assume a governing role?
• Do they evaluate the work results themselves?
• In the case of group work, it also makes sense for the working groups to evaluate
their documentation mutually.
When working on tasks in small groups, all group members should be involved in
presentation and reporting. A prerequisite is that a group member takes over the
moderation of the presentation and that the roles in the presentation are precisely
agreed upon. This also includes the form of presentation. “Ad hoc” reports and
presentations should be avoided as they tend to discredit project learning. When
presenting learning and work results, students with weaker language skills should be
given the opportunity to present and demonstrate their work in a practical manner.
The documentation and presentation of project results should meet high formal
standards. They should be presentable to “customers”. The more successful
this is, the more likely the participants are to identify with their learning
outcomes. This strengthens the self-confidence and motivation of the learners.
In the case of outstanding projects, the public exhibition of project results at
school or in public is also an option. The experience that pupils proudly show
presentable learning and work results to family members and friends is an
indication that the form of documentation and presentation (in addition to the
learning and work results themselves) contributes considerably to the devel-
opment of professional identity and, therefore, also to the strengthening of
self-esteem. This technical aspect of task/project learning is, therefore, of
considerable social-pedagogical importance.
The preceding remarks again refer to the requests and requirements of the customer,
which are expressed here in satisfaction or dissatisfaction with the presented
task solution. This again points to the high formal standards that the
presentation has to meet. At this point, it is only about the result—the objective
dimension of the learning process and its evaluation (product-related presentation).
The other learners and the teacher can take on the role of “customer” in this phase of
the presentation and give feedback from this role to the presenter(s).
In the other phases of the presentation, the other learners and the teachers are
addressed as “experts”. This deals with the
• Completeness of the task solution
• Technically sound justifications (knowledge-explaining action) and conclusive
balancing between different solution variants (knowledge-reflecting action)
• Working and learning concepts and concepts for cooperation and reflection on the
experience gained in their use
• Unresolved questions or questions that arose during the presentation, and finally
the question: What did I/we learn?
The second phase of the presentation (process-related presentation) deals with
learning and the competences acquired—the subjective dimension of the learning
process.
The co-learners and the teacher, therefore, take on the role of the teacher. In terms
of content, they refer to knowledge of the individual work process and the previously
agreed evaluation criteria (refer to Table 11.3, p. 520). The teacher also has their own
described solution space in mind.
The question "What have I/we learnt?" has a special meaning, because the students'
understanding of "learning" is shaped by general schooling.
While the presentation of the work and learning outcomes reveals the individual
work-process knowledge of the trainee/student in relation to the current learning
task, this phase of tuition is about the vocational school's specific task of
generalising this knowledge, knowledge that can be traced back to reflection on
the experiences gained while working on the learning task.
Generalisation is about uncoupling the work experience gained from the concrete
learning task and the task solution achieved in order to make it available for
subsequent customer orders. It is now up to the teachers to ensure that their pupils
become aware of their broader understanding of the subject and are able to use it in
their thinking, acting and skills in a professional and appropriate manner.
Practical experience with numerous learning groups shows that the absence of
the generalisation described above among learners/students means that the use
of the developed task-solving approaches is limited to the learning task for
which they were developed. As a result, the subsequent customer orders are often
not considered in the light of previous experience but are treated as completely
new challenges.
The experience that technical terms which are already known and action concepts
which are already available gain extended significance, and that connections
between initially independent concepts become conscious, characterises the
profession-related extension of the fields of meaning of action-relevant concepts.
In their sum and combination, these fields constitute work-process knowledge
(Lehberger, 2013) and the technical language that develops on this basis.
For example, a nurse at the beginning of his/her training expands his/her prior
understanding of how to put on a bandage with ever new aspects of meaning in
dealing with the diversity of bandages in equally diverse and always different
individual cases. “Bandaging” as a semantic field quickly develops into a compre-
hensive and professional concept of acting, thinking and ability.
The rudimentary prior understanding of a tool mechanic apprentice of the surface
quality of tools—and how to achieve this quality—is expanded by the alternation of
reflected work experience and the expansion of the semantic field of the concept
As certain subjects of work and only certain aspects are taken into consideration
depending on the situation and task, the aim of this phase of tuition is to convey that
the development of work process knowledge is a process of subjective development
of vocational concepts with their semantic fields, which the learners have to place
within their work-process knowledge. The dimensions of working and learning refer
to a possibility of systematisation oriented towards the vocational work process.
In this phase of tuition, the teacher must steer the learning process so that the
learners feel challenged to realise the processes of generalisation and systematisation
in such a way that their individual professional concepts are further developed. This
also applies to processes of social learning within the framework of teamwork
(Fig. 11.14).
In the Swiss educational landscape, there are currently two options for training in the
professional care and assistance for people. On the one hand, studies can take place
at a higher technical college (HF). In addition to a successful entrance examination,
the admission requirements are a vocational or school-leaving certificate at
secondary level II¹. The second option is to study at a university of applied sciences (FH).
The admission requirement is a Matura degree (university entrance qualification) or
completed HF training. Switzerland, therefore, has two equivalent variants of higher
(tertiary) continuing vocational education and training for nurses, one more practice-
oriented and one more academically oriented.
¹ After compulsory schooling (9 years), young people enter upper secondary education. Secondary
level II can be subdivided into general education (grammar schools and technical secondary
schools) and vocational training (learning a trade in a training company with supplementary
schooling).
11.3 COMET as a Didactic Concept in Nursing Training at Higher Technical. . . 471
² http://www.sbfi.admin.ch/bvz/hbb/index.html?lang=de&detail=1&typ=RLP&item=17
³ Practical training covers six fields of work in the care and support of: (1) people with long-term
illnesses/(2) children, adolescents, families and women/(3) mentally ill people/(4) people in reha-
bilitation/(5) somatically ill people/(6) people at home
⁴ https://www.bz-gs.ch/bildungszentrum/lernen-am-bz-gs-1/konkrete-kompetenzen
⁵ The Bildungszentrum Gesundheit und Soziales [health and social education centre] (BZ-GS) is a
part of the Berufsbildungszentrum Olten (BBZO). The BBZO is a regional vocational training
centre with over 4200 apprentices and students in 28 professions. Refer to http://www.bbzolten.so.
ch/startseite/ and http://www.bbzolten.so.ch/bz-gs-olten/
In 2012, the BZ-GS, together with five other Swiss educational centres⁶ in the health
and social sectors, launched the first Swiss COMET project under the title “Survey-
ing and imparting vocational competence, professional identity and professional
commitment in the training occupations of nursing in Switzerland” (cf. Gäumann-
Felix & Hofer, 2015).
This example from the first year of an HF training course in somatics shows a rather
simple case description to introduce the students to the COMET method and to give
them an understanding of the criteria and items of the competence dimensions. After
a brief introduction to the understanding of the eight competence criteria
(sub-competences), the students received the case description from
Ms. G. (Table 2.2). In a first step, they analysed the case study in working groups
with the aid of instructions in which the competence criteria and their breakdown
into five items are presented. They tried to assign the information in the case
description to the competence criteria and to discuss the first possible interventions
(Table 11.6).
After initial processing of the case in groups, the results were presented and
discussed in the plenum. The following questions were examined:
• What are the family, social and cultural factors influencing Ms. G.?
• What influence do they have on her current state of health?
• What would be interventions that support sustainable exit planning?
• What other services should be included?
• How does the student manage the situation, as human resources are scarce due to
absenteeism?
• How does the student set priorities? What kind of interventions are necessary?
During this expert discussion, the COMET competence criteria were gradually
filled with content. The holistic view of the patient’s situation became increasingly
clear and this triggered important “Eureka!” moments among the students.
Due to the various starting points and possibilities provided by the situation
description, it was also possible to consider the heterogeneous previous education
of the students in the educational programme. All were able to build on their current
state of knowledge and experience and reflect on their individual competence
development.
⁶ In addition to Solothurn, the Cantons of Aargau, Basel, Berne, Lucerne/Central Switzerland and
Zurich.
In the setting described, the competence criteria and items were used as work aids
with the help of a simplified representation (Fig. 11.15). The learning outcome was
discussed together. As the aim was to enable an initial examination of the COMET
competence model, no written evaluation was carried out. It turned out that all
students benefited from this setting in several ways. On the one hand, they learnt to
view a situation holistically with the help of the competence criteria and to recognise
“blind spots” in their knowledge and skills. On the other hand, they were able to
build on their current state of knowledge and experience and identify topics for
further in-depth study in subsequent lessons.
This form of tuition has meanwhile been tested in a variety of ways. For example,
in an open-book⁷ test, like the test tasks in the COMET project, a situation was
processed and assessed with slightly adapted rating items.
⁷ Open-book examinations allow students to use all available documents during the examination.
They have free access to their own documents and books. They have free Internet access with their
laptops and, therefore, also access to the online library and other resources. The only thing
forbidden is mutual exchange.
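How such slightly adapted rating items can feed a written evaluation may be sketched as follows: item ratings are averaged per competence criterion, and criterion means below a threshold mark the "blind spots" mentioned above. The criterion names used here, the 0–3 item scale and the 1.5 threshold are illustrative assumptions, not the official COMET scoring.

```python
# Sketch: averaging item ratings (assumed 0-3 scale) per competence
# criterion and flagging criteria whose mean falls below a threshold
# as "blind spots". All figures are invented for illustration.

item_ratings = {
    "clarity/presentation":     [3, 2, 3, 2, 3],
    "functionality":            [2, 2, 3, 2, 2],
    "sustainability":           [1, 1, 2, 1, 1],
    "work-process orientation": [2, 3, 2, 2, 3],
}

def competence_profile(item_ratings):
    """Mean item rating per competence criterion."""
    return {c: sum(v) / len(v) for c, v in item_ratings.items()}

def blind_spots(profile, threshold=1.5):
    """Criteria whose mean falls below the (assumed) threshold."""
    return [c for c, mean in profile.items() if mean < threshold]

profile = competence_profile(item_ratings)
print(blind_spots(profile))  # → ['sustainability']
```

A profile like this makes visible at a glance which criteria a learner has not yet covered, which is exactly the diagnostic use of the criteria described in the teaching examples.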
Fig. 11.15 Simplified presentation of criteria and items (this representation of the requirements
dimension of the COMET competence model, adapted to nursing training, was simplified
linguistically and graphically for the students; cf. Fischer, Hauschildt, Heinemann, & Schumacher, 2015)
This example is set in the third academic year, 6 months before the diploma
examinations. During a whole week, the students dealt with the topics: Nursing
relatives, Caring and Palliative Care. The initial situations were 20-minute encounter
sequences, which the students carried out with simulation patients⁸. In a fully
equipped (teaching) hospital room, an initial contact was simulated with a bedridden
cancer patient and his wife, who had taken care of him at home until the current
emergency occurred. The students were confronted with this simulation without long
preparation time. Table 3.1 shows which preliminary information the students
received shortly before their assignment. As there was little preparation time avail-
able, they had to rely on their previous knowledge and experience. The 20-min
simulation sequence was recorded on video and then handed over to the students.
Each student received a video documentation of their own sequence on a USB stick
⁸ At the BZ-GS we use amateur actors to simulate ("play") certain given roles. Depending on the
lesson, they receive a more or less detailed script. Within this teaching setting, they orient themselves
to a basic starting position, supplemented with possible questions and topics.
for further processing and reflection during the week of the event. The questions
described in the initial situation for the simulation patients already show how the
sequences were designed. The following teaching days were finally oriented towards
the eight competence criteria of the COMET competence model. The simulated case
study, which the students experienced very realistically, was viewed, analysed and
discussed from different perspectives. The lack of specialist knowledge was also
dealt with and the students were able to reflect on their own recorded situation. At the
end of the week, a similar sequence was played and again recorded on video. The
students were impressed by the increase in their competence in “Nursing caring
relatives and Palliative Care” during this week: how they learnt to analyse the
simulated cases in all their complexity and to derive conclusions for their caring
actions from their analyses.
Conclusion After this week, the students reported a significant amount of learning
progress. It was precisely because this teaching took place at a time shortly before
the diploma examinations that it was important for the students to be able to assess
their own level of knowledge and experience. Because the settings with the simulation
patients were always experienced as very real, and thanks to the video recordings, the
identification of competences that still needed to be acquired was multi-layered. With the help
of video recordings, they were able to reflect on their appearance and behaviour
(appearance, interaction with patient and wife, reproduction of information, com-
munication, technical language, facial expressions, gestures, etc.). They were always
confronted with their expertise (Which questions was I able to answer? Where did I
lack expertise?). In addition, the targeted holistic approach based on the eight
COMET criteria drew their attention to other problems that they would otherwise
not have “discovered”. Through this continuous process of reflection throughout the
week, a variety of interrelated topics could be explored in depth.
The differences between the video recordings at the beginning and end of the
week were impressive. As central to their learning process, the students stressed that
they did not work on unfamiliar learning examples, but that examples they had
experienced themselves formed the basis for the teaching week (Tables 11.7 and 11.8).
Resuscitation courses are regular units during the 3 years of study. As a rule, the
resources required for basic life support (BLS) and advanced life support (ALS) are
offered together with other topics over 2 days. The instructions for the resuscitation
measures are primarily characterised by flow charts and algorithms. This suggests
that there can be no doubt about what is right and what is wrong. However, the
dimension of ethical decision-making already clarifies the fact that resuscitation is
also about standard-oriented, clever solutions. It quickly became apparent to us that
the COMET dimensions were also suitable structuring aids in this area.
Here is an example of one of three BLS/ALS units in the second academic year:
The students were provided with a document with two pictures. The pictures are
starting points for teaching about shock management, cardiac arrhythmias, cardiac
output and resuscitation measures. The students were given the task of forming
groups of a maximum of four people and then describing a realistic story, a situation
they had experienced, which matched the pictures. We hoped, among other things,
that the narratives would explicate “implicit” knowledge. The students also had the
opportunity to use their portfolios or the patient documentation tool to draw on a
concrete situation that they had already described. Then the students had to choose
one of the stories to work on. The questions shown in Table 3.3, most of which were
based on the COMET dimensions and items, were to be dealt with (Table 11.9).
After the discussion within the groups, the examples, answers and findings were
discussed and further deepened. Very soon, it became clear that the at first glance
rather "technical" situation of a resuscitation covers all dimensions in a broad and
complex manner. Various questions were, therefore, discussed and transferred to the
dimensions. As an example, it became clear how important it is within the dimension of
work-process orientation to argue with the involved services (e.g. medical service)
using the correct technical terms.
After this sequence, the students reported great learning success at various levels.
11.3.5 Examinations
The examples described above show a wide range of possibilities for integrating
COMET into exam settings. Two variants are explained here as examples, which
demonstrate very well the creative use of the basic COMET principles.
The first example is an oral synthesis examination in the second year of study. The
basis was the tuition of the entire 12-week school block as well as the knowledge
imparted since the beginning of studies. Based on a real complex patient situation⁹,
the students dealt with the COMET competence model during the preparation period
and described their thoughts and assessments with concrete reference to the compe-
tence criteria (see information on preparation, general conditions and assessment
criteria for the students in Table 3.4). For example, the synthesis expert discussion
focused on work-process orientation, social and environmental compatibility, sus-
tainability, economy and other competence criteria. The discussions revealed broad
thematic diversity, which was summatively assessed with the help of the items
(Table 11.10).
With this and similar examination settings over the course of the three academic
years, we introduce the examination interviews that will take place at the end of
training.
⁹ The written 3-page patient situation contains information on medical history, diagnosis, admission
reason and situation, procedure, medication, treatment plan, nursing diagnoses and previous course
of hospitalisation. The personal data are anonymised for data protection reasons, but originate from
a real situation, which is why the content is not reproduced here.
The final examination interview is explained here as a second example. This was
certainly one of the most decisive moments for the training construct: demonstrating
the stringency and, ultimately, the credibility of our competence-oriented training.
The oral interview is one of three elements of the final qualification process¹⁰,
lasts 40 min and takes place in the last 12 weeks of the last year of training. The
training companies are also involved in the examination interview and its evaluation
by an expert.
The basis for the interview at our school is a real patient situation from the field of
work of the person to be tested. This creates the framework within which the
candidate can give a broad presentation of his or her planning and reasoning skills.
In this form of final examination, complexity is not trivialised, and knowledge
content is not atomised and broken down into subjects. The nature of the final
qualification methods will probably determine what those teaching will teach and
what those learning will learn. This is also where the stringency of a genuinely
competence-oriented education becomes apparent in the final analysis.
The dimensions that are evaluated:
• The description of the situation contains the essential information and is presented
systematically.
¹⁰ The other two parts are a practice-oriented written diploma or project thesis on the one hand and
the practical training qualification on the other: The final practical assessment is conducted by the
training company in the second half of the last practical training period.
• Nursing problems, focal points and objectives are identified and justified.
• Natural and social science problems, focal points and objectives are identified and
justified.
• Activating, preventive and/or health-promoting measures are identified and a
position is taken on their possibilities and limits.
• Professional policy problems, focal points and objectives are identified and
justified.
• Management problems, focal points and objectives are identified and justified.
• Ethical aspects are critically reflected upon.
• Concepts, models and theories are used for analysis, planning and justification,
including evidence.
• Linguistic expression is differentiated, and professional terminology is used
correctly.
If we compare our items with the COMET model’s criteria of the complete
(holistic) solution of professional tasks, we can assign all evaluation groups to the
COMET criteria. We have described the solution space with “fulfilment standards”.
Consequently, we can also use the COMET criteria to evaluate the final
examinations.
Examinations: Conclusion
The two examples above and our experience with examination settings during the
course of training and in the oral diploma examinations at the end of the three-year
course of study in HF nursing confirm that the COMET competence criteria and the
corresponding items meet the quality criteria of validity, objectivity and reliability to
a high degree. Our experience with the quality criteria is thus clearly in line with the
statements made by Rauner et al. (2015a) in their “Feasibility study on the use of the
COMET test procedure for examinations in vocational education and training”.
Based on our experience, we also clearly agree with the conclusion written by
Rauner et al. (cf. Rauner et al., 2015a, 31–32). The inclusion of the basic idea of the
COMET model confirms our conviction that our examination settings can compre-
hensively capture the competencies of our students.
It should be mentioned that the teaching team is also convinced that the acqui-
sition of factual knowledge is an indispensable part of competence development.
However, competence-oriented teaching should ideally not only ask for factual
knowledge in exam questions, since holistic problem solutions require valid open
exam tasks. Consequently, there cannot simply be a right or wrong, but rather
acceptable justifications and strategies within a defined solution space. We implement this
continuously and successfully in our understanding and design of examinations.
As can be seen from the examples, the COMET method is not only a scientific
evaluation tool but also a didactic model for learning what the term "the whole"
means in these discussions. Since the start of the project in 2012
(Gäumann-Felix & Hofer, 2015), the COMET competence model has also found
its way into the didactic concept for lesson planning, as the following example
shows. The “suggestion for lesson preparation” is composed as follows:
• Personal reference
– My reference, my resources and competences in relation to the topic/problem
• Meaningfulness of schooling for learners/students
– What relevance does the topic/problem have in the specific occupational field?
– What relevance does the topic/problem have in general?
– Significance of the topic/problem
In the past
Currently
In the future
From the practical perspective
From the theoretical perspective
• General conditions
– Curricular guidelines and focal areas
– What needs to be tested?
• The situation
– Which current and concrete situation/problem fits?
With regard to the work area
With regard to the learners
• Key points of the situation and the solution space (according to COMET incl. the 40 items).
– With regard to the:
– Clarity/presentation
– Functionality/professional solutions
– Sustainability
– Efficiency/cost-effectiveness
– Work-process orientation
– Social and environmental compatibility
– Family/sociocultural context
– Creativity
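The eight criteria above are operationalised in the COMET rating procedure through 40 items, five per criterion. As a minimal sketch of how individual item ratings aggregate into criterion scores, the following Python fragment uses flat averaging and a 0–3 rating scale; both are simplifying assumptions for illustration, not the full scoring procedure described in this manual:

```python
# The eight competence criteria as named in the lesson-preparation scheme above.
CRITERIA = [
    "Clarity/presentation",
    "Functionality/professional solutions",
    "Sustainability",
    "Efficiency/cost-effectiveness",
    "Work-process orientation",
    "Social and environmental compatibility",
    "Family/sociocultural context",
    "Creativity",
]

def criterion_scores(item_ratings):
    """Aggregate 40 item ratings (five consecutive items per criterion)
    into one mean score per competence criterion."""
    if len(item_ratings) != len(CRITERIA) * 5:
        raise ValueError("COMET rating sheets use 40 items")
    return {
        name: sum(item_ratings[5 * i: 5 * i + 5]) / 5
        for i, name in enumerate(CRITERIA)
    }

scores = criterion_scores([2] * 40)  # a rater who gives every item a 2
print(scores["Creativity"])          # 2.0
```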
In the above examples, it has not yet been mentioned that the BZ-GS works
intensively with portfolios managed by the students and the electronic patient
documentation tool. Both are instruments based on real-life situations and patient
examples. Here, too, there are innumerable variants for integrating COMET.
The aim of our patient documentation tool is to provide teachers and students with
an instrument with which real patient situations can be recorded, processed, further
developed and reflected upon.
The patient documentation tool is available at all learning venues (school & LTT
school, practice & LTT practice) and can be used in various fields (at the BZ-GS
Canton of Solothurn, specifically in acute somatics, psychiatry, long-term care and
Spitex).
The password-protected tool facilitates:
• Recording of new cases
• Processing and further developing existing cases
• Reflection on individual steps of the process and making considerations regarding
the individual steps transparent for others
• The design of examinations: module assessments as well as the final qualification
procedure
• Feedback: teachers/students or "peer-to-peer"
In this project phase, the competence criteria are integrated into the COMET tool.
This makes it possible to illuminate real or fictitious patient situations with the help
of the corresponding criteria. Here, too, COMET enables quality assurance in the
sense of a holistic approach to patient situations.
Two real extracts show how the competence criteria are examined in the context of
the documentation of real practical situations for the final oral expert discussions,
which form part of the diploma examination (Table 11.12).
11.3.7 Conclusion
The examples illustrate how the COMET competence model can be used for the
planning, implementation and evaluation of teaching and examinations. Students
learn to look at situations from different perspectives, ones that they had previously
often overlooked or neglected and which at first glance do not appear to be common
"everyday topics" in student practice.
Teachers experience the COMET method as an ideal supplement for the prepa-
ration and implementation of vocational training based on the guiding principle of
holistic education, which contributes to not losing sight of the interrelationships
between complex professional tasks. The teachers further emphasise that the
COMET method prevents the complexity of professional fields from being
trivialised and the knowledge content from being atomised and broken down into
subjects. Facts are not taken out of context. This is also ensured by the concrete
and authentic working and learning situations that form the centre of the lessons.
The results of the competence measurements carried out within the framework of
the COMET project prove the success of the strategy documented in the teaching
examples: to take the complexity of the work and learning processes seriously and to
understand the heterogeneity of the educational programmes as a resource and a
challenge. For teachers, this also means meeting the contextual requirements of
educational practice with a high degree of didactic creativity and flexibility.
The results of the COMET project confirmed our competence orientation, which
has been anchored in our school for years. Consequently, it is not surprising that
even after completion of the project, the COMET model remains an integral part of
our everyday practice. With the eight competence criteria, there is an optimally
suitable reference standard for the "holism" construct, which can guide many
processes at a school (cf. Gäumann-Felix & Hofer, 2015).
Appendix A: The Four Developmental Areas
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2
At this second level of vocational learning, the basic concept of the occupation
formulated at the first level and the integrated professional knowledge can lead
to a reflected professional identity when the educational potentials of the
corporate work environment are exploited.
Developmental area 3: Problem-oriented special work tasks—knowledge
of details and functions
The professional knowledge for orientation and overview, the integrated
knowledge and the ability to solve tasks systematically enable the trainees at
the third level to work on problem-oriented special work tasks. The solution of
these tasks is no longer possible on the basis of pre-defined rules and patterns.
The task includes some novelty that is not fully covered by the problem-
solving strategies applied to former tasks. The trainees need to analyse the task
first and to identify the problem in order to plan their activities.
The paradigm of the holistic and complex work activity, which was developed
in the 1980s, and the associated capacity for independent planning,
implementation, control and evaluation of professional work tasks correspond
to the third step of the logical structuring of vocational education. At this level,
professional identity leads to professional responsibility as a condition for
performance (intrinsic motivation) and to quality awareness as an essential
condition for the fulfilment of complete work tasks in problematic work contexts.
Developmental area 4: Unpredictable work tasks—experiential and
systematic in-depth knowledge
When the trainees have developed a sufficient understanding of the tasks of
professional work, they can gain experience in handling non-routine
situations and problems. Unpredictable work tasks that are too complex to be
fully analysed in the concrete work situation, and therefore cannot simply be
mastered systematically, place high demands on the trainees on their way to the
level of competent professionals. Competence in this case is based on knowledge
of previous tasks with at least similar constellations, on the anticipation
of possible strategies, on theoretical knowledge and practical skills, as well
as on intuition. Problems are solved situatively, without the need to
calculate the activity with all its preconditions and consequences in detail.
The aim at the fourth level of this model of vocational education is to
integrate reflected professionalism with subject-specific competence in order
to open the opportunity for higher education. The aptitude for higher education
emerges from an extended self-conception, which is not so much rooted in a
narrowly defined occupational profile, but rather in a career path that is
associated with this occupation.
The four developmental areas according to which vocational training courses can
be arranged in a developmentally logical manner
Appendix B: Rating Scale
Appendix C: Examples for Test Tasks

Example Millwright
Sketch: a rail track crossing a road (level crossing with light signals).
• The control unit draws 5 W continuously.
• For each passage of a train: 5 seconds of yellow light, then ca. 1 minute of red light.
• Power consumption of each light: 60 W.
Alternatively:
• Standard module with 40 cells (10 × 10 cm each), output approximately
50 W under favourable conditions.
• A desired maximum output of 125 W would require three modules, plus one
additional module in order to allow for series circuits with two modules each.
• Input with uninterruptible power supply or generator (here, the voltage
difference must be considered as well as the separation of circuits).
• The power supply delivers a maximum of 250 W.
• Appropriate protective means were selected (difference between network-
powered and solar-powered operation).
• Appropriate control sensors were selected and adequately placed, for
example, induction loops for detecting the axles (minimal distance
s = 80 km/h × 30 s ≈ 667 m).
• Functionality of the control system is guaranteed.
• Disturbances are taken into consideration.
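The figures in this solution space can be checked with a few lines of Python. This is a rough sanity check only; the 80 km/h train speed, 30 s warning time, 50 W module output and 125 W target are taken from the task above, while the function names and the pairing rule for series circuits are ours:

```python
import math

def loop_distance_m(speed_kmh, warning_time_s):
    """Minimum distance of the induction loop from the crossing:
    the distance the fastest train covers during the warning time."""
    return speed_kmh / 3.6 * warning_time_s

def modules_needed(target_w, module_w, series_group=2):
    """PV modules needed for a target output, rounded up to a multiple
    of the series-group size so that equal series pairs can be wired."""
    n = math.ceil(target_w / module_w)
    return math.ceil(n / series_group) * series_group

print(round(loop_distance_m(80, 30)))  # 667 (m), as in the solution space
print(modules_needed(125, 50))         # 4: three modules for 125 W, one extra for pairs
```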
Indicator 3: Utility
• The system can be adapted to other situations (e.g. time adjustment,
transferability to other systems)
• Maintenance is possible by client staff, instructions of use were provided
• Malfunctions are signalled appropriately
Indicator 4: Economy
• The system requires little maintenance.
• Standard components are used.
• Procurement/production costs, operational costs, maintenance costs,
follow-up costs.
Indicator 5: Work and business process
• It is clearly indicated what needs to be arranged with the client for the
installation (At what time can the work be done? Who is responsible for the
safety? Will the railroad be closed during the installation works?)
• Does the work plan take into account more than one worker?
• Third-party suppliers were included, for example, for preparing the
foundations.
• There are suggestions for a maintenance plan.
Indicator 6: Social compatibility
• The safety of the workers is guaranteed (protective equipment, posts).
• Appropriate technical equipment, for example, hoists, barriers
• Is the status of the system communicated to the train driver?
• Were alarms taken into consideration?
Indicator 7: Environmental compatibility
• No hazardous material (e.g. battery acid, fuel) can leak into the
environment.
• If hazardous material is used, instructions are given for safe use and
disposal.
• Application of LED instead of light bulbs
Indicator 8: Creativity
• The system is equipped with an automatic signalling and alarm system.
• The design and structure of the system suit the surroundings.
• The system detects when a train is stopping on the railway crossing.
Example Electrician
Fig. C.2 Close-up of a skylight and sketch of the assembly hall.
– "The skylights ought to be opened and closed centrally."
– "When the temperature in the workspace within the hall gets too high, the skylights have to open."
– "There is an enlargement of the assembly hall scheduled for the next year."
(continued)
Appendix C: Examples for Test Tasks 503
• 1 temperature sensor
• 1 controller
• 4 motors with clockwise and counterclockwise rotation (possibly with
automatic stop, in which case end sensors are unnecessary)
• 4 relays for clockwise rotation
• 4 relays for counterclockwise rotation
• Wiring material
• Installation material
• Fuses or circuit breakers as necessary
The control can be installed in the existing distribution board.
Wiring (example):
• Distribution board to skylight motors (4 wires)
• Wind sensor (on the roof) to distribution board (4 wires)
• Rain sensor (on the roof) to distribution board (4 wires)
• Temperature sensor (at a representative place in the hall) to distribution
board (3 wires)
• Button panel (near the door) to distribution board (9 wires)
When a programmable logic controller is used, the integrated time switch
can be used for closing the skylights at the end of the working day.
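The behaviour described above can be summarised as a small piece of control logic. The following Python sketch only illustrates the priority of the safety conditions (wind, rain, end-of-day time switch) over the temperature rule; the sensor names and the temperature threshold are our assumptions, not part of the task text:

```python
def skylight_command(temp_c, wind_alarm, rain, end_of_day,
                     temp_threshold_c=26.0):
    """Return 'open' or 'close' for the skylight motors. Wind, rain and
    the end-of-day time switch override the temperature rule."""
    if wind_alarm or rain or end_of_day:
        return "close"
    return "open" if temp_c > temp_threshold_c else "close"

print(skylight_command(30.0, wind_alarm=False, rain=False, end_of_day=False))  # open
print(skylight_command(30.0, wind_alarm=True, rain=False, end_of_day=False))   # close
```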
• Would a proposed skylight control be operative from a technical point of
view?
• Are the explanations and diagrams correct from a technical point of view?
• Have the stop switches been implemented correctly?
• Have the sensors (wind, temperature) been implemented correctly?
• Is it possible to open and close the skylights?
Indicator 3: Utility
• Easy operation, easy adaptation to changing requirements by programmable
controller, choice and placement of sensors, instructions for maintenance
(e.g. for the motors)
• Can the explanations and diagrams be understood by a non-expert as well?
• How convenient is the operation of the skylights for the user?
• Are there status signals and alert signals?
• Is a time switch (integrated into the controller) used for changing between
the weekday/Saturday/Sunday modes?
Indicator 4: Economy
• Easy expansibility of the system in the event of an enlargement of the
assembly hall, cost-efficient use of a programmable logic controller, capacity
of the control system, application of standard sensors, use of existing
equipment
• Were costs and workload of different control systems taken into account?
• Is the solution economical?
• Was the cost-benefit ratio taken into account?
Indicator 5: Work and business process
• Compliance with the requirements of the management, coordination with
master/foreman, use of existing equipment.
• Have the requirements of the client been taken into account?
• Does the proposal refer to particular circumstances of the installation
(e.g. installation during holidays)?
• Does the proposal foresee the involvement of professionals from other
departments in the installation works (e.g. installation of the motors by
in-house mechanics)?
• Has the handing over to the client been planned?
• Is there a time schedule?
Indicator 6: Social compatibility
• Consideration of work safety, automatic opening of the skylights in case of
high temperature.
• Does the proposal comply with particular work safety regulations, for
example, with regard to the installation of components on the roof?
• Does the proposal comply with safety regulations for electrical equipment?
• Is there a kill switch?
• Is there a feature for closing the skylights in case of fire?
Indicator 7: Environmental compatibility
• Saving energy by appropriate opening and closing of the skylights.
• Does the proposal refer to environment-friendly materials (e.g. wires with-
out PVC or halogen)?
• Does the proposal consider energy-saving measures (e.g. opening the
skylights only for a short time when the outdoor temperature is below
zero)?
Indicator 8: Creativity
• Proposals for the extension of the control system, for example, integration
of the rolling gate, projected enlargement of the hall, integration of the
heating control
• Ideas that go beyond the assignment, for example, to use the roof for the
installation of solar panels
• Did the students come up with special functions for the control system?
Example Welder
6. Social Responsibility
• Does the explanation of the failure of the original design address
safety implications?
• Will the solution be safe (i.e. hold under load and allow secure
attachment of the lifting gear)?
• Does the solution specify (or make mention of the need to specify) the
load capacity of the lug?
• Is there provision for safely testing the new design?
• Does the solution discuss safety precautions while making the lug?
• Does the solution comply with or make reference to all relevant codes
and standards?
7. Environmental Responsibility
• Does the solution consider responsible use and disposal of materials?
• Does the solution consider recycling/reuse?
8. Creativity
• In examining the reasons for the failure of the first design, has the
candidate explored multiple options?
• Does the solution consider a wide range of options?
– Processes
– Materials
– Designs
• Does the solution include original aspects that go beyond the solution
space?
Appendix D: Four-Field Matrix (Tables)
Profession   Occupational commitment   Organisational commitment
Plant mechanic 0.58 0.48
Industrial mechanic 0.26 0.08
Mechatronic 0.27 0.07
Process mechanic 0.24 0.03
Cutting machine operator 0.20 0.15
Car mechatronic 0.21 0.15
Surface coater 0.28 0.34
Vehicle painter 0.18 0.02
Glass constructor 0.42 0.45
Office clerk 0.11 0.02
Management assistant in hotel and hospitality 0.11 0.01
Management assistant in real estate 0.36 0.30
Salesman 0.06 0.12
Retail dealer 0.05 0.05
Gardener 0.12 0.29
Farmer 0.18 0.30
Fully qualified groom 0.31 0.38
Warehouse operator 0.79 0.50
Specialist employee for bathing establishments 0.02 0.04
Warehouse logistics specialist 0.06 0.14
Specialist for hospitality industry 0.07 0.14
Specialist for vehicle operations 0.13 0.36
Hairdresser 0.21 0.03
Cook 0.23 0.21
Carpenter 0.20 0.13
Painter and varnisher 0.18 0.07
Stonecutter 0.40 0.07
Duct builder 0.12 0.09
Occupational and organisational commitment: List of professions in the four-field matrix
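The four-field matrix assigns each profession to one of four fields according to whether its occupational and organisational commitment lie above or below a cut-off. A minimal sketch in Python; the cut-off of 0.25 is purely illustrative and is not the field boundary used in the study:

```python
def four_field(occupational, organisational, cutoff=0.25):
    """Place a profession in the four-field matrix by comparing both
    commitment values against the cut-off."""
    occ = "high" if occupational >= cutoff else "low"
    org = "high" if organisational >= cutoff else "low"
    return (occ, org)

print(four_field(0.58, 0.48))  # plant mechanic -> ('high', 'high')
print(four_field(0.11, 0.02))  # office clerk   -> ('low', 'low')
```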
Appendix E: Correlation Values for the
Correlation Between Occupational Competences
and I-C Averages
Correlations: carpenter
* Correlation is significant at the 0.05 level (two-sided)
** Correlation is significant at the 0.01 level (two-sided)
List of References

Chapter   Reference
2 COMET IV: 1.1; 1.4
3 COMET I: 2
COMET III: 1
COMET IV: 2.1
4 COMET I: 3
COMET III: 2
4.7 A + B 01/2016;
Rauner, F.; Frenzel, J; Piening, D.; Bachmann, N. (2015): Engagement und
Ausbildungsorganisation. Einstellungen sächsischer Auszubildender zu ihrem Beruf
und ihrer Ausbildung. Eine Studie im Rahmen der Landesinitiative Steigerung der
Attraktivität, Qualität und Rentabilität der dualen Berufsausbildung in Sachsen
(QEK). Bremen: Universität Bremen, I:BB.
5.1 Kleiner, M.; Rauner, F.; Reinhold, M.; Röben, P. (2002): Curriculum design I:
Identifizieren und Beschreiben von beruflichen Arbeitsaufgaben, Arbeitsaufgaben
für eine neue Beruflichkeit. In: Berufsbildung und Innovation – Instrumente und
Methoden zum Planen, Gestalten und Bewerten, Band 2. Koblenz: Christiani.
5.2 COMET IV: 3.2; 3.3.3
5.3 COMET IV: 2.4
5.5.1 COMET III: 4.2–4.3; COMET IV: S. 53–56
5.6 COMET IV: 2.5
5.7 COMET IV: 2.5; S. 67–74
6 COMET IV: Abb. 16 (S. 64)
6.1 COMET I: 3.5
6.2 COMET I: 5.1; COMET III: 4.2
6.3 COMET III: 4.3
6.4 Kalvelage, J.; Heinemann, L.; Rauner, F.; Zhou, Z. (2015): Messen von Identität und
Engagement in beruflichen Bildungsgängen. In: M. Fischer, F. Rauner, Z. Zhao
(Hg.). Münster: LIT, 305–326.
6.5 Zhuang, R.; Li, J. (2015): Analyse der interkulturellen Anwendung der COMET-
Kompetenzdiagnostik. In: M. Fischer, F. Rauner, Z. Zhao (Hg.). Münster:
LIT, S. 341–350.
7.1–7.3 Rauner, F.; Frenzel, J.; Piening, D. (2015): Machbarkeitsstudie: Anwendung des
KOMET-Testverfahrens für Prüfungen in der beruflichen Bildung. Bremen:
Universität Bremen, I:BB.
A + B 16/2015
7.4 A + B 18/2014
7.5.3 COMET III: S. 53–56
8.1–8.2 COMET II: 3; A + B 14/2014; COMET IV: 78–91
8.3 COMET III: 6.2; A + B 14/2014
8.4 COMET IV: 6.3; A + B 15/2014
8.5 A + B 01/2016; COMET III: 3.5
8.7 A + B 01/2016
9.1 A + B 18/2015
Fischer, M.; Rauner, F.; Zhao, Z. (2015): Kompetenzdiagnostik in der beruflichen
Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET
auf dem Prüfstand. Berlin: LIT.
Rauner, F. (2015): Messen beruflicher Kompetenz von Berufsschullehrern. In:
Fischer, M.; Rauner, F.; Zhao, Z. (Hg.): Kompetenzdiagnostik in der beruflichen
Bildung – Methoden zum Erfassen und Entwickeln beruflicher Kompetenz.
COMET auf dem Prüfstand. Münster: LIT.
9.5 Piening, D.; Frenzel, J.; Heinemann, L.; Rauner, F. (2014): Berufliche Kompetenzen
messen—Das Modellversuchsprojekt KOMET NRW. 1. und 2. Zwischenbericht.
9.6 A + B 11/2013
COMET III: 4.2
10.1–10.4 A + B 11/2013
10.5 Rauner, F. (2015): Messen beruflicher Kompetenz von Berufsschullehrern. In:
Fischer, M.; Rauner, F.; Zhao, Z. (Hg.) (2015): Kompetenzdiagnostik in der
beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher
Kompetenz. COMET auf dem Prüfstand. Münster: LIT, S. 413–436.
Zhao, Z. (2015): Schritte auf dem Weg zu einer Kompetenzentwicklung für Lehrer
und Dozenten beruflicher Bildung in China. In: Ebd., S. 437–450.
11 Lehberger, J.; Rauner, F. (2014): Berufliches Lernen in Lernfeldern. Ein Leitfaden
für die Gestaltung und Organisation projektförmigen Lernens in der Berufsschule.
Bremen: Universität Bremen, I:BB.
Bibliography
sozialen Arbeit. Standpunkte, Kontroversen, Perspektiven (3rd ed., pp. 203–229). Wiesbaden:
VS Verlag für Sozialwissenschaften.
Benner, P. (1984). From novice to expert. Excellence and power in clinical nursing practice. Menlo
Park: Addison-Wesley.
Benner, P. (1994). Stufen der Pflegekompetenz. From novice to expert. Bern u. a. O.: Huber.
Benner, P. (1997). Stufen zur Pflegekompetenz. From novice to expert. (2. Nachdruck). Bern u.a.:
Huber.
Bergmann, J. R. (1995). “Studies of work” – Ethnomethodologie. In U. Flick, E. von Kardorff,
H. Keupp, & L. von Rosenstiel (Eds.), Handbuch Qualitative Sozialforschung. Grundlagen,
Konzepte, Methoden und Anwendungen (pp. 269–272). Weinheim: Beltz.
Bergmann, J. R. (2006). Studies of work. In F. Rauner (Ed.), Handbuch Berufsbildungsforschung
(2nd ed., pp. 640–646). Bielefeld: wbv.
Blankertz, H. (1972). Kollegstufenversuch in Nordrhein-Westfalen – das Ende der gymnasialen
Oberstufe und der Berufsschulen. DtBFsch, 68(1), 2–20.
Blankertz, H. (1983). Einführung in die Thematik des Symposiums. In: Benner, D., Heid, H.,
Thiersch, H. (Hg.) Beiträge zum 8. Kongress der Deutschen Gesellschaft für
Erziehungswissenschaften vom 22–24. März 1982 in der Universität Regensburg. Zeitschrift
für Pädagogik, 18. Beiheft. 139–142.
Blankertz, H. (Ed.). (1986). Lernen und Kompetenzentwicklung in der Sekundarstufe
II. Abschlussbericht der wissenschaftlichen Begleitung Kollegstufe NW. 2 Bde. Soest: Soester
Verlagskontor.
BLK. (2002). Kompetenzzentren – Kompetenzzentren in regionalen Berufsbildungsnetzwerken
Rolle und Beitrag der beruflichen Schulen, BLK-Fachtagung am 3/4. Dezember 2001 in
Lübeck. Heft 99. Bonn.
BLK (Bund-Länder-Kommission für Bildungsplanung und Forschungsförderung). (1973).
Bildungsgesamtplan, Bd. 1. Stuttgart: Klett-Cotta.
Blömeke, S., & Suhl, U. (2011). Modellierung von Lehrerkompetenzen. Nutzung unterschiedlicher
IRT-Skalierungen zur Diagnose von Stärken und Schwächen deutscher Referendarinnen und
Referendare im internationalen Vergleich. Zeitschrift für Erziehungswissenschaft, 13, 473–505.
Böhle, F. (2009). Weder rationale Reflexion noch präreflexive Praktik – erfahrungsgeleitet-
subjektivierendes Handeln. Wiesbaden: Springer.
Böhle, F., & Rose, H. (1992). Technik und Erfahrung. Arbeit in hochautomatisierten Systemen.
Frankfurt a. M., New York: Campus.
Borch, H., & Schwarz, H. (1999). Zur Konzeption und Entwicklung der neuen IT-Berufe. In
Bundesinstitut für Berufsbildung (Ed.), IT-Best-Practise, Gestaltung der betrieblichen
Ausbildung. Bielefeld: W. Bertelsmann.
Borch, H., & Weißmann, H. (2002). IT-Berufe machen Karriere. Zur Evaluation der neuen Berufe
im Bereich Information und Telekommunikation. In Bundesinstitut für Berufsbildung (Ed.),
IT-Best-Practise, Gestaltung der betrieblichen Ausbildung. Bielefeld: W. Bertelsmann.
Boreham, N. C., Samurçay, R., & Fischer, M. (Eds.). (2002). Work process knowledge. London,
New York: Routledge.
Bortz, J., & Döring, N. (2003). Forschungsmethoden und Evaluation für Human- und
Sozialwissenschaftler (3. Auflage). Berlin, Heidelberg: Springer.
Bozdogan, H. (1987). Model selection and Akaike’s information criterion (AIC): The general
theory and its analytical extensions. Psychometrika, 52(3), 345–370.
Bozdogan, H., & Ramirez, D. E. (1988). FACAIC: Model selection algorithm for the orthogonal
factor model using AIC and CAIC. Psychometrika, 53(3), 407–415.
Brand, W., Hofmeister, W., & Tramm, F. (2005). Auf dem Weg zu einem Kompetenzstufenmodell
für die berufliche Bildung. Erfahrungen aus dem Projekt ULME. In: bwp@ Berufs- und
Wirtschaftspädagogik – online, 8 (Juli 2005).
Brater, M. (1984b). Künstlerische Übungen in der Berufsausbildung. In Projektgruppe
Handlungslernen (Hg.), Handlungslernen in der beruflichen Bildung (pp. 62–86). Wetzlar:
W.-von-Siemens-Schule, Projekt Druck.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for
research. Chicago: Rand McNally.
Carey, S. (1985). Conceptual change in childhood. MIT Press.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale:
Erlbaum.
Cohen, A. (1991). Career stage as a moderator of the relationship between organizational commit-
ment and its outcomes: A meta-analysis. Journal of Occupational Psychology, 64, 253–268.
Cohen, A. (2007). Dynamics between occupational and organizational commitment in the context
of flexible labour market: A review of the literature and suggestions for a future research
agenda. Bremen: ITB-Forschungsbericht 26/2007.
Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of
reading, writing and mathematics. In L. B. Resnick (Ed.), Knowing, learning and instruction
(pp. 453–494). Hillsdale, NJ: Erlbaum.
COMET-Konsortium (in Zusammenarbeit mit dem Hessischen Kultusministerium und der
Senatorin für Bildung und Wissenschaft der Freien Hansestadt Bremen). (2010). Berufliche
Kompetenzen messen: Das Projekt KOMET der Bundesländer Bremen und Hessen. Zweiter
Zwischenbericht der wissenschaftlichen Begleitung – Ergebnisse 2009. Forschungsgruppe I:
BB: Universität Bremen.
Connell, M. W., Sheridan, K., & Gardner, H. (2003). On abilities and domains. In R. J. Sternberg &
E. L. Grigorenko (Eds.), The psychology of abilities, competencies and expertise (pp. 126–155).
Cambridge: Cambridge University Press.
Cooley, M. (1988). Creativity, skill and human-centred systems. In B. Göranzon & J. Josefson
(Eds.), Knowledge, skill and artificial intelligence (pp. 127–137). Berlin, Heidelberg,
New York: Springer.
Corbett, J. M., Rasmussen, L. B., & Rauner, F. (1991). Crossing the border. The social and
engineering design of computer integrated manufacturing systems. London u. a. O.: Springer.
Crawford, M. (2010). Ich schraube, also bin ich: Vom Glück, etwas mit den eigenen Händen zu
schaffen. Berlin: Ullstein.
Crawford, M. B. (2016). Die Wiedergewinnung des Wirklichen. Eine Philosophie des Ichs im
Zeitalter der Zerstreuung. Berlin: Ullstein.
Dehnbostel, P. (1994). Erschließung und Gestaltung des Lernorts Arbeitsplatz. Berufsbildung in der
wissenschaftlichen Praxis, 23(1), 13–18.
Dehnbostel, P. (2005). Lernen-Arbeiten-Kompetenzentwicklung. Zur wachsenden Bedeutung des
Lernens und der reflexiven Handlungsfähigkeit im Prozess der Arbeit. In G. Wiesner &
A. Wolter (Eds.), Die lernende Gesellschaft. Weinheim: Juventa.
Deitmer, L., Fischer, M., Gerds, P., Przygodda, K., Rauner, F., Ruch, H., et al. (2004). Neue
Lernkonzepte in der dualen Berufsausbildung. Bilanz eines Modellversuchsprogramms der
Bund-Länder-Kommission (BLK). Reihe: Berufsbildung, Arbeit und Innovation (Vol. 24).
Bielefeld: W. Bertelsmann.
Dengler, K., Matthes, B. (2015). Folgen der Digitalisierung für die Arbeitswelt.
Substituierungspotentiale von Berufen in Deutschland. IAB-Forschungsbericht 11/2015.
Deutsche Forschungsgemeinschaft (DFG). (1998). Sicherung guter wissenschaftlicher Praxis.
Denkschrift. Empfehlungen der Kommission “Selbstkontrolle in der Wissenschaft”. Weinheim:
WILEY-VCH. (ergänzende Auflage 2013).
Deutscher Bundestag (11. Wahlperiode). (1990). Berichte der Enquête-Kommission „Zukünftige
Bildungspolitik – Bildung 2000“. Drucksache 11/7820. Bonn.
Dewey, J. (1916). Democracy and education. The middle works of John Dewey 1899–1924 (Vol.
9). Edwardsville: Southern Illinois University Press.
Dörner, D. (1983). Empirische Psychologie und Alltagsrelevanz. In G. Jüttemann (Ed.),
Psychologie in der Veränderung (pp. 13–30). Weinheim: Beltz.
Drescher, E. (1996). Was Facharbeiter können müssen: Elektroinstandhaltung in der vernetzten
Produktion. Bremen: Donat.
Drexel, I. (2005). Das Duale system und Europa. Ein Gutachten im Auftrag von ver.di und IG
Metall. Berlin: Hausdruck.
Dreyfus, H. L., & Dreyfus, S. E. (1987). Künstliche Intelligenz. Von den Grenzen der
Denkmaschine und dem Wert der Intuition. Reinbek bei Hamburg: Rowohlt.
Dybowsky, G., Haase, P., & Rauner, F. (1993). Berufliche Bildung und betriebliche
Organisationsentwicklung. Reihe: Berufliche Bildung (Vol. 15). Bremen: Donat.
Efron, B., & Tibshirani, R. J. (1994). An introduction to the Bootstrap. Boca Raton: Chapman &
Hall/CRC.
Embretson, S. E., & Reise, S. P. (2013). Item response theory for psychologists. Hoboken: Taylor
and Francis.
Emery, F. E., & Emery, M. (1974). Participative design. Canberra: Centre for Continuing Educa-
tion. Australian National University.
Erdwien, B., & Martens, T. (2009). Die empirische Qualität des Kompetenzmodells und des
Ratingverfahrens. In Rauner, F. u. a.: Messen beruflicher Kompetenzen. Bd. II. Ergebnisse
COMET 2008. Reihe Bildung und Arbeitswelt. Münster: LIT.
Erpenbeck, J. (2001). Wissensmanagement als Kompetenzmanagement. In G. Franke (Ed.),
Komplexität und Kompetenz. Ausgewählte Fragen der Kompetenzforschung (pp. 102–120).
Bielefeld: W. Bertelsmann.
Euler, D. (2011). Kompetenzorientiert prüfen – eine hilfreiche Version? In E. Severing & R. Weiß
(Eds.), Prüfungen und Zertifizierungen in der beruflichen Bildung. Anforderungen –
Instrumente – Forschungsbedarf (pp. 55–66). Bielefeld: W. Bertelsmann.
Fischer, M. (2000a). Arbeitsprozesswissen von Facharbeitern – Umrisse einer forschungsleitenden
Fragestellung. In J.-P. Pahl, F. Rauner, & G. Spöttl (Eds.), Berufliches Arbeitsprozesswissen.
Ein Forschungsgegenstand der Berufsfeldwissenschaften (pp. 31–47). Baden-Baden: Nomos.
Fischer, M. (2000b). Von der Arbeitserfahrung zum Arbeitsprozesswissen. Rechnergestützte
Facharbeit im Kontext beruflichen Lernens. Opladen: Leske + Budrich.
Fischer, M. (2002). Die Entwicklung von Arbeitsprozesswissen durch Lernen im Arbeitsprozess –
theoretische Annahmen und empirische Befunde. In M. Fischer & F. Rauner (Eds.), Lernfeld:
Arbeitsprozess. Ein Studienbuch zur Kompetenzentwicklung von Fachkräften in gewerblich-
technischen Aufgabenbereichen (pp. 53–86). Baden-Baden: Nomos.
Fischer, M., & Witzel, A. (2008). Zum Zusammenhang von berufsbiographischer Gestaltung und
beruflichem Arbeitsprozesswissen. In M. Fischer, & G. Spöttl (Hg.), Im Fokus:
Forschungsperspektiven in Facharbeit und Berufsbildung. Strategien und Methoden der
Berufsbildungsforschung (pp. 24–47). Frankfurt a. M.: Peter Lang.
Fischer, R. (2013). Berufliche Identität als Dimension beruflicher Kompetenz. Entwicklungsverlauf
und Einflussfaktoren in der Gesundheits- und Krankenpflege. Reihe Berufsbildung, Arbeit und
Innovation (Vol. 26). Bielefeld: wbv.
Fischer, B., Girmes-Stein, R., Kordes, H., & Peukert, U. (1995). Entwicklungslogische
Erziehungsforschung. In H. Haft & H. Kordes (Eds.), Methoden der Erziehungs- und
Bildungsforschung. Band 2 der Enzyklopädie Erziehungswissenschaft (pp. 45–79). Stuttgart:
Klett.
Fischer, R., Hauschildt, U., Heinemann, L., & Schumacher, J. (2015). Erfassen beruflicher
Kompetenz in der Pflegeausbildung europäischer Länder. In M. Fischer, F. Rauner, &
Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und
Entwickeln beruflicher Kompetenz (pp. 375–392). Münster: LIT.
Fischer, M., Jungeblut, R., & Römmermann, E. (1995). “Jede Maschine hat ihre eigenen
Marotten!” Instandhaltungsarbeit in der rechnergestützten Produktion und Möglichkeiten
technischer Unterstützung. Bremen: Donat.
Fischer, M., & Rauner, F. (Eds.). (2002). Lernfeld: Arbeitsprozess. Ein Studienbuch zur
Kompetenzentwicklung von Fachkräften in gewerblich-technischen Aufgabenbereichen.
Reihe: Bildung und Arbeitswelt (Vol. 6). Baden-Baden: Nomos.
526 Bibliography
Fischer, M., Rauner, F., & Zhao, Z. (2015). Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand.
Münster: LIT.
Fischer, M., & Röben, P. (2004). Arbeitsprozesswissen im Fokus von individuellem und
organisationalem Lernen. Ergebnisse aus Großbetrieben in vier europäischen Ländern.
Zeitschrift für Pädagogik, 2(2004), 182–201.
Flick, U. (1995). Handbuch qualitative Sozialforschung: Grundlagen, Konzepte, Methoden und
Anwendung. Weinheim: Beltz.
Frank, H. (1969). Kybernetische Grundlagen der Pädagogik. Baden-Baden: Kohlhammer.
Frei, F., & Ulich, E. (Eds.). (1981). Beiträge zur psychologischen Arbeitsanalyse. Bern: Huber.
Freund, R. (2011). Das Konzept der multiplen Kompetenz auf den Analyseebenen Individuum,
Gruppe, Organisation und Netzwerk. Hamburg: Verlag Dr. Kovac.
Frey, A. (2006). Methoden und Instrumente zur Diagnose beruflicher Kompetenzen von
Lehrkräften – eine erste Standortbestimmung zu bereits publizierten Instrumenten. In
C. Allemann-Ghionda & E. Terhart (Eds.), Kompetenzen und Kompetenzentwicklung von
Lehrerinnen und Lehrern: Ausbildung und Beruf. Zeitschrift für Pädagogik, 51. Beiheft, 30–46.
Frieling, E. (1995). Arbeit. In U. Flick et al. (Eds.), Handbuch Qualitative Sozialforschung (2nd ed.,
pp. 285–288). Weinheim: Beltz.
Ganguin, D. (1992). Die Struktur offener Fertigungssysteme in der Fertigung und ihre
Voraussetzungen. In G. Dybowski, P. Haase, & F. Rauner (Eds.), Berufliche Bildung und
betriebliche Organisationsentwicklung (pp. 16–33). Bremen: Donat.
Ganguin, D. (1993). Die Struktur offener Fertigungssysteme in der Fertigung und ihre
Voraussetzungen. In G. Dybowsky, P. Haase, & F. Rauner (Eds.), Berufliche Bildung und
betriebliche Organisationsentwicklung (pp. 16–33). Bremen: Donat.
Gardner, H. (1991). Abschied vom IQ: die Rahmentheorie der vielfachen Intelligenzen. Stuttgart:
Klett-Cotta.
Gardner, H. (1999). Intelligence reframed: multiple intelligences for the 21st century. New York,
NY: Basic Books.
Gardner, H. (2002). Intelligenzen. Die Vielfalt des menschlichen Geistes. Stuttgart: Klett-Cotta.
Garfinkel, H. (1967). Studies in Ethnomethodology. Englewood Cliffs, N.J.: Prentice-Hall.
Garfinkel, H. (1986). Ethnomethodological Studies of Work. London u. a.: Routledge & Kegan
Paul.
Gäumann-Felix, K., & Hofer, D. (2015). COMET in der Pflegeausbildung Schweiz. In M. Fischer,
F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum
Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand (pp. 93–110).
Münster: LIT.
Georg, W., & Sattel, U. (1992). Einleitung: Von Japan lernen? In Dies (Ed.), Von Japan lernen?
Aspekte von Bildung und Beschäftigung in Japan (p. 7ff). Weinheim: Deutscher Studien Verlag.
Gerecht, M., Steinert, B., Klieme, E., & Döbrich, P. (2007). Skalen zur Schulqualität:
Dokumentation der Erhebungsinstrumente. Pädagogische Entwicklungsbilanzen mit Schulen
(PEB). Frankfurt/Main: Gesellschaft zur Förderung Pädagogischer Forschung. Deutsches
Institut für Internationale Pädagogische Forschung.
Gerstenmaier, J. (1999). Situiertes Lernen. In C. Perleth & A. Ziegler (Eds.), Pädagogische
Psychologie. Bern: Huber.
Gerstenmaier, J. (2004). Domänenspezifisches Wissen als Dimension beruflicher Entwicklung. In
F. Rauner (Ed.), Qualifikationsforschung und Curriculum (pp. 151–163). Bielefeld:
W. Bertelsmann.
Giddens, A. (1972). In A. Giddens (Ed.), Introduction: Durkheim’s writings in sociology and social
psychology (pp. 1–50). Cambridge: Cambridge University Press.
Girmes-Stein, R., & Steffen, R. (1982). Konzept für eine entwicklungsbezogene Teilstudie im
Rahmen der Evaluation des Modellversuchs zur Verbindung des Berufsvorbereitungsjahres
(BVJ) mit dem Berufsgrundschuljahr (BGJ) an berufsbildenden Schulen des Landes NW.
Münster: Zwischenbericht.
Glaser, B., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative
research. Chicago: Aldine Publishing Company.
Granville, G. (2003). “Stop making sense”: Chaos and coherence in the formulation of the Irish
qualifications framework. Journal of Education and Work, 16(3), 259–270.
Gravert, H., & Hüster, W. (2001). Intentionen der KMK bei der Einführung von Lernfeldern. In
P. Gerds & A. Zöller (Eds.), Der Lernfeldansatz der Kultusministerkonferenz (pp. 83–97).
Bielefeld: W. Bertelsmann.
Griffin, P., Gillis, S., & Calvitto, P. (2007). Standards-referenced assessment for vocational
education and training in schools. Australian Journal of Education, 51(1), 19–38.
Grob, U., & Maag Merki, K. (2001). Überfachliche Kompetenzen: Theoretische Grundlegung und
empirische Erprobung eines Indikatorensystems. Bern u. a. O.: Peter Lang.
Grollmann, P. (2003). Professionelle Realität beruflichen Bildungspersonals im institutionellen
Kontext ausgewählter Bildungssysteme. Eine empirische Studie anhand ausgewählter Fälle aus
den USA, Dänemark und Deutschland. Bremen: Institut Technik und Bildung der Universität.
Grollmann, P. (2005). Professionelle Realität von Berufspädagogen im internationalen Vergleich:
eine empirische Studie anhand ausgewählter Beispiele aus Dänemark, Deutschland und den
USA. Berufsbildung, Arbeit und Innovation (Vol. 3). Bielefeld: W. Bertelsmann.
Grollmann, P., Kruse, W., & Rauner, F. (2003). Scenarios and Strategies for VET in Europe (Vol.
130). Dortmund: Landesinstitut Sozialforschungsstelle Dortmund.
Grollmann, P., Kruse, W., & Rauner, F. (Eds.). (2005). Europäisierung beruflicher Bildung.
Bildung und Arbeitswelt (Vol. 14). Münster: LIT.
Grollmann, P., & Rauner, F. (Eds.). (2007). International perspectives on teachers and lecturers in
technical and vocational education. Dordrecht: Springer.
Grollmann, P., Spöttl, G., & Rauner, F. (2006). Europäisierung Beruflicher Bildung – eine
Gestaltungsaufgabe. Reihe: Bildung und Arbeitswelt (Vol. 16). Münster: LIT.
Grollmann, P., Spöttl, G., & Rauner, F. (Eds.). (2007). Europäisierung beruflicher Bildung – eine
Gestaltungsaufgabe. Münster: LIT.
Gruber, H., & Renkl, A. (2000). Die Kluft zwischen Wissen und Handeln: Das Problem des trägen
Wissens. In G. H. Neuweg (Ed.), Wissen – Können – Reflektion. Ausgewählte
Verhältnisbestimmungen (pp. 155–174). Innsbruck: Studien-Verlag.
Grünewald, U., Degen, U., & Krick, H. (1979). Qualifikationsforschung und berufliche Bildung.
Ergebnisse eines Colloquiums des Bundesinstituts für Berufsbildung (BIBB) zum
gegenwärtigen Diskussionsstand in der Qualifikationsforschung. Heft 2. Berlin: BIBB.
Gruschka, A. (1983). Fachliche Kompetenzentwicklung und Identitätsbildung im Medium der
Erzieherausbildung – über den Bildungsgang der Kollegschule und zur Möglichkeit der
Schüler, diesen zum Thema zu machen. In D. Benner, H. Herd, & H. Thiersch (Eds.), Zeitschrift
für Pädagogik 18 (pp. 142–152). Beiheft: Beiträge zum 8. Kongreß der Deutschen Gesellschaft
für Erziehungswissenschaft.
Gruschka, A. (Ed.). (1985). Wie Schüler Erzieher werden. Studie zur Kompetenzentwicklung und
fachlichen Identitätsbildung. (2 Bände). Wetzlar: Büchse der Pandora.
Gruschka, A. (2005). Bildungsstandards oder das Versprechen, Bildungstheorie in empirischer
Bildungsforschung aufzuheben. In L. A. Pongratz, R. Reichenbach, & M. Wimmer (Eds.),
Bildung - Wissen - Kompetenz (pp. 9–29). Bielefeld: Janus Presse.
Guillemin, F., Bombardier, C., & Beaton, D. (1993). Cross-cultural adaptation of health-related
quality of life measures: literature review and proposed guidelines. Journal of Clinical Epide-
miology, 46(12), 1417–1432.
Guldimann, T., & Zutavern, M. (1992). Schüler werden Lernexperten. Arbeitsberichte.
Forschungsstelle der Pädagogischen Hochschule des Kantons St. Gallen. Band 9. Pädagogische
Hochschule St. Gallen.
Haasler, B. (2004). Hochtechnologie und Handarbeit – Eine Studie zur Facharbeit im
Werkzeugbau der Automobilindustrie. Bielefeld: W. Bertelsmann Verlag.
Haasler, B., & Erdwien, B. (2009). Vorbereitung und Durchführung der Untersuchung. In
F. Rauner, B. Haasler, L. Heinemann, & P. Grollmann (Eds.), Messen beruflicher Kompetenzen.
Bd. 1. Grundlagen und Konzeption des KOMET-Projekts. Reihe Bildung und Arbeitswelt.
Münster: LIT.
Haasler, B., Heinemann, L., Rauner, F., Grollmann, P., & Martens, T. (2009). Testentwicklung und
Untersuchungsdesign. In F. Rauner, B. Haasler, L. Heinemann, & P. Grollmann (Eds.), Messen
beruflicher Kompetenzen. Bd. I. Grundlagen und Konzeption des KOMET-Projektes (2. Aufl.)
(pp. 103–140). Bielefeld: W. Bertelsmann.
Haasler, B., & Rauner, F. (2012). Lernen im Betrieb. Konstanz: Christiani.
Hacker, W. (1973). Allgemeine Arbeits- und Ingenieurspsychologie. Bern: Huber.
Hacker, W. (1986). Arbeitspsychologie. Psychische Regulation von Arbeitstätigkeiten. Bern:
Huber.
Hacker, W. (1992). Expertenkönnen – Erkennen und Vermitteln. Göttingen: Verlag für Angewandte
Psychologie.
Hacker, W. (1996). Diagnose von Expertenwissen. Von Abzapf-(Broaching-) zu Aufbau-([Re-]
Construction-)Konzepten. In Sitzungsberichte der sächsischen Akademie der Wissenschaften zu
Leipzig. Bd. 134. Heft 6. Berlin: Akademie-Verlag.
Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory.
Organizational Behavior and Human Performance, 16, 250–279.
Hastedt, H. (1991). Aufklärung und Technik. Grundprobleme einer Ethik der Technik. Frankfurt/
Main: Suhrkamp.
Hattie, J. A. (2003). Teachers make a difference: What is the research evidence? Australian
Council for Educational Research annual conference on Building Teacher Quality.
Hattie, J. A. (2011). Influences on students’ learning. www.arts.auckland.ac.nz/education/staff.
Hattie, J., & Yates, C. R. (2015). Lernen sichtbar machen aus psychologischer Perspektive.
Hohengehren: Schneider.
Hauschildt, U., Brown, H., Heinemann, L., & Wedekind, V. (2015). COMET Südafrika. In
M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf den Prüfstand
(pp. 353–374). Berlin: LIT.
Havighurst, R. J. (1972). Developmental Tasks and Education. New York: David McKay.
Heeg, F. J. (2015). Stellenwert des COMET-Kompetenzmodells für duale Ingenieurstudiengänge.
In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf den Prüfstand
(pp. 111–126). Berlin: LIT.
Heid, H. (1999). Über die Vereinbarkeit individueller Bildungsbedürfnisse und betrieblicher
Qualifikationsanforderungen. ZfPäd, 45(2), 231–244.
Heid, H. (2006). Werte und Normen in der Berufsbildung. In R. Arnold & A. Lipsmeier (Eds.),
Handbuch der Berufsbildung (2nd ed., pp. 33–43). Wiesbaden: VS Verlag für
Sozialwissenschaften.
Heidegger, G., Adolph, G., & Laske, G. (1997). Gestaltungsorientierte Innovation in der
Berufsschule. Bremen: Donat.
Heidegger, G., Jacobs, J., Martin, W., Mizdalski, R., & Rauner, F. (1991). Berufsbilder 2000.
Soziale Gestaltung von Arbeit, Technik und Bildung. Opladen: Westdeutscher Verlag.
Heidegger, G., & Rauner, F. (1997). Reformbedarf in der Beruflichen Bildung für die industrielle
Produktion der Zukunft. Düsseldorf: Ministerium für Wirtschaft und Mittelstand, Technologie
und Verkehr NRW.
Heinemann, L., Maurer, A., & Rauner, F. (2011). Modellversuchsergebnisse im Überblick. In
F. Rauner, L. Heinemann, A. Maurer, L. Ji, & Z. Zhao (Eds.), Messen beruflicher Kompetenz.
Bd. III. Drei Jahre KOMET Testerfahrung (pp. 150–209). Münster: LIT.
Heinemann, L., & Rauner, F. (2008). Identität und Engagement: Konstruktion eines Instruments
zur Beschreibung der Entwicklung beruflichen Engagements und beruflicher Identität. A+B
Forschungsberichte 1. Universität Bremen: IBB.
Heinz, W. R. (1995). Arbeit, Beruf und Lebenslauf. Eine Einführung in die berufliche Sozialisation.
München: Juventa.
beruflicher Kompetenzen. Bd. II. Ergebnisse COMET 2008. Reihe Bildung und Arbeitswelt
(pp. 161–205). Münster: LIT.
Kelle, U., Kluge, S., & Prein, G. (1993). Strategien der Geltungssicherung in der qualitativen
Sozialforschung. Zur Validitätsproblematik im interpretativen Paradigma. Arbeitspapier Nr. 24.
Hg. Vorstand des Sfb 186. Universität Bremen.
Kern, H., & Sabel, C. F. (1994). Verblasste Tugenden. Zur Krise des Deutschen
Produktionsmodells. In N. Beckenbach & W. v. Treeck (Eds.), Umbrüche gesellschaftlicher
Arbeit. Soziale Welt, Sonderband 9 (pp. 605–625). Göttingen: Schwartz.
Kern, H., & Schumann, M. (1970). Industriearbeit und Arbeiterbewusstsein. Eine empirische
Untersuchung über den Einfluss der aktuellen technischen Entwicklung auf die industrielle
Arbeit und das Arbeiterbewusstsein (Vol. I, II). Frankfurt/Main: Europäische Verlagsanstalt.
Kern, H., & Schumann, M. (1984). Das Ende der Arbeitsteilung? Rationalisierung in der
industriellen Produktion. München: Beck.
Kleemann, F., Krähnke, U., & Matuschek, I. (2009). Interpretative Sozialforschung. Eine
praxisorientierte Einführung. Wiesbaden: VS Verlag für Sozialwissenschaften.
Kleiner, M. (2005). Berufswissenschaftliche Qualifikationsforschung im Kontext der
Curriculumentwicklung. Studien zur Berufspädagogik 18. Hamburg: Dr. Kovac Verlag.
Kleiner, M., Meyer, K., & Rauner, F. (2001). Berufsbildungsplan für den Industriemechaniker.
ITB-Arbeitspapier Nr. 32. Bremen: ITB.
Kleiner, M., Rauner, F., Reinhold, M., & Röben, P. (2002). Curriculum-Design I. Arbeitsaufgaben
für eine moderne Beruflichkeit – Identifizieren und Beschreiben von beruflichen
Arbeitsaufgaben. In: Berufsbildung und Innovation – Instrumente und Methoden zum Planen,
Gestalten und Bewerten (Vol. 2). Konstanz: Christiani.
Kliebard, H. (1999). Schooled to Work. Vocationalism and the American Curriculum, 1876–1946.
New York, NY: Teachers College Press.
Klieme, E., Avenarius, H., Blum, W., Döbrich, P., Gruber, H., Prenzel, M., et al. (2003). Zur
Entwicklung nationaler Bildungsstandards: Eine Expertise. Berlin: Bundesministerium für
Bildung und Forschung.
Klieme, E., & Hartig, J. (2007). Kompetenzkonzepte in den Sozialwissenschaften und im
empirischen Diskurs. Zeitschrift für Erziehungswissenschaft. Sonderheft, 08, 11–29.
Klieme, E., & Leutner, D. (2006). Kompetenzmodelle zur Erfassung individueller Lernergebnisse
und zur Bilanzierung von Bildungsprozessen. Beschreibung eines neu eingerichteten
Schwerpunktprogramms der DFG. Zeitschrift für Pädagogik, 53(6), 876–903.
Klotz, V. K., & Winther, E. (2012). Kompetenzmessung in der kaufmännischen Berufsausbildung:
Zwischen Prozessorientierung und Fachbezug. Eine Analyse der aktuellen Prüfungspraxis.
bwp@-Ausgabe Nr. 22. Juni 2012. Universität Paderborn. URL: http://www.bwpat.de/
ausgabe22/klotz_winther_bwpat22.pdf (Stand: 03.09.2014).
Klüver, J. (1995). Hochschule und Wissenschaftssystem. In L. Huber (Ed.), Enzyklopädie
Erziehungswissenschaft. Bd. 10. Ausbildung und Sozialisation in der Hochschule (pp. 78–91).
KMK – Kultusministerkonferenz. (1999). Handreichungen für die Erarbeitung von
Rahmenlehrplänen der Kultusministerkonferenz (Köln) für den berufsbezogenen Unterricht in
der Berufsschule und ihre Abstimmung mit Ausbildungsordnungen des Bundes für anerkannte
Ausbildungsberufe, Bonn (Stand: 05.02.1999).
KMK – Kultusministerkonferenz. (2004a). Standards für die Lehrerbildung –
Bildungswissenschaften, Bonn (Stand: 16.12.2004).
KMK – Kultusministerkonferenz. (2004b). Argumentationspapier Bildungsstandards der
Kultusministerkonferenz, Bonn (Stand: 16.12.2004).
KMK – Kultusministerkonferenz. (2005). Bildungsstandards im Fach Physik (Chemie/Biologie)
für den mittleren Schulabschluss. München: Luchterhand.
KMK – Sekretariat der Ständigen Konferenz der Kultusminister der Länder in der Bundesrepublik
Deutschland. (1991). Rahmenvereinbarung über die Berufsschule. Beschluss der
Kultusministerkonferenz vom 14./15.3.1991. ZBW, 7, 590–593.
KMK – Sekretariat der Ständigen Konferenz der Kultusminister der Länder in der Bundesrepublik
Deutschland (Hg.) (1996). Handreichungen für die Erarbeitung von Rahmenlehrplänen der
Kultusministerkonferenz für den berufsbezogenen Unterricht in der Berufsschule und ihre
Abstimmung mit Ausbildungsordnungen des Bundes für anerkannte Ausbildungsberufe, Bonn.
Kohlberg, L. (1969). Stage and sequence: The developmental approach to moralization.
New York: Holt.
König, J. (2010). Lehrerprofessionalität – Konzepte und Ergebnisse der internationalen und
nationalen Forschung am Beispiel fächerübergreifender und pädagogischer Kompetenz. In
J. König & B. Hoffmann (Eds.), Professionalität von Lehrkräften – was sollen Lehrkräfte im
Lese- und Schreibunterricht wissen und können (pp. 40–105). Berlin: DGLS.
Kruse, W. (1976). Die Qualifikation der Arbeiterjugend. Eine Studie über die gesellschaftliche
Bedeutung ihrer Veränderung. Frankfurt/Main: Campus.
Kruse, W. (1986). Von der Notwendigkeit des Arbeitsprozeßwissens. In J. Schweitzer (Ed.),
Bildung für eine menschliche Zukunft (pp. 188–193). Weinheim, Basel: Juventa Verlag.
Kunter, M., Schümer, G., Artelt, C., Baumert, J., Klieme, E., Neubrand, M., et al. (2003). PISA
2000 – Dokumentation der Erhebungsinstrumente. Berlin: MPI für Bildungsforschung.
Kurtz, T. (2001). Aspekte des Berufs in der Moderne. Opladen: Leske + Budrich.
Kurtz, T. (2005). Die Berufsform der Gesellschaft. Weilerswist: Velbrück Wissenschaft.
Lamnek, S. (1988/89). Qualitative Sozialforschung. Bde. 1/2. Methodologie. München.
Laur-Ernst, U. (Ed.). (1990). Neue Fabrikstrukturen – veränderte Qualifikationen. Ergebnisse eines
Workshops des Bundesinstituts für Berufsbildung. Berlin: BIBB.
Lave, J., & Wenger, E. (1991). Situated Learning. Legitimate Peripheral Participation. New York:
Cambridge University Press.
Lechler, P. (1982). Kommunikative Validierung. In G. L. Huber & H. Mandl (Eds.), Verbale Daten
(pp. 243–258). Weinheim: Beltz.
Lehberger, J. (2013). Arbeitsprozesswissen – didaktisches Zentrum für Bildung und Qualifizierung.
Ein kritisch-konstruktiver Beitrag zum Lernfeldkonzept. Berlin: LIT.
Lehberger, J. (2015). Berufliches Arbeitsprozesswissen als eine Dimension des COMET-
Messverfahrens. In M. Fischer, F. Rauner, & Z. Zhao (Hrsg.), Kompetenzdiagnostik in der
beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET
auf dem Prüfstand. Bildung und Arbeitswelt (Bd. 30, pp. 209–224). Münster: LIT.
Lehberger, J., & Rauner, F. (2014). Berufliches Lernen in Lernfeldern. Ein Leitfaden für die
Gestaltung und Organisation projektförmigen Lernens in der Berufsschule. Bremen:
Universität Bremen: I:BB.
Lehmann, R. H., & Seeber, S. (Eds.). (2007). ULME III. Untersuchungen von Leistungen, Moti-
vation und Einstellungen der Schülerinnen und Schüler der Berufsschulen. Hamburg: Behörde
für Bildung und Sport.
Lempert, W. (1995). Berufliche Sozialisation und berufliches Lernen. In R. Arnold & A. Lipsmeier
(Eds.), Handbuch der Berufsbildung. Opladen: Leske + Budrich.
Lempert, W. (2000). Berufliche Sozialisation oder was Berufe aus Menschen machen. Eine
Einführung (2nd ed.). Baltmannsweiler: Schneider Verlag.
Lempert, W. (2006). Berufliche Sozialisation. Persönlichkeitsentwicklung in der betrieblichen
Ausbildung und Arbeit. Baltmannsweiler: Schneider Verlag.
Lempert, W. (2007a). Vom “impliziten Wissen” zur soziotopologisch reflektierten Theorie.
Ermunterung zur Untertunnelung einer verwirrenden Kontroverse. Zeitschrift für Berufs- und
Wirtschaftspädagogik, 103(4), 581–596.
Lempert, W. (2007b). Nochmals: Beruf ohne Zukunft? Berufspädagogik ohne Beruf? Postskriptum
zur Diskussion des Buches von Thomas Kurtz “Die Berufsform der Gesellschaft”. Zeitschrift für
Berufs- und Wirtschaftspädagogik, 103(3), 461–467.
Lenger, A. (2016). Der ökonomische Fachhabitus – professionstheoretische Konsequenzen für das
Studium der Wirtschaftswissenschaften. In G. Minnameier (Ed.), Ethik und Beruf.
Interdisziplinäre Zugänge (pp. 157–176). Bielefeld: wbv.
Lenk, H., & Ropohl, G. (Eds.). (1987). Technik und Ethik. Stuttgart: Reclam.
Lenzen, D., & Blankertz, H. (1973). Didaktik und Kommunikation: Zur strukturalen Begründung
der Didaktik und zur didaktischen Struktur sprachlicher Interaktion. Frankfurt am Main:
Athenäum.
Lüdtke, G. (1974). Harmonisierung und Objektivierung von Prüfungen. PAL-Schriftenreihe Bd. 1.
Konstanz: Christiani.
Lutz, B. (1988). Zum Verhältnis von Analyse und Gestaltung der sozialwissenschaftlichen
Technikforschung. In F. Rauner (Ed.), “Gestaltung” – eine neue gesellschaftliche Praxis.
Bonn: Neue Gesellschaft.
Martens, T. (2015). Wie kann berufliche Kompetenz gemessen werden? Das Beispiel COMET. In
M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem Prüfstand.
Berlin, Münster: LIT.
Martens, T., Heinemann, L., Maurer, A., Rauner, F., Ji, L., & Zhao, Z. (2011). Ergebnisse zum
Messverfahren [COMET]. In F. Rauner et al. (Eds.), Messen beruflicher Kompetenzen. Bd. III.
Drei Jahre COMET-Testerfahrung (pp. 90–126). Münster: LIT.
Martens, T., & Rost, J. (1998). Der Zusammenhang von wahrgenommener Bedrohung durch
Umweltgefahren und der Ausbildung von Handlungsintentionen. Zeitschrift für Experimentelle
Psychologie, 45(4), 345–364.
Martens, T., & Rost, J. (2009). Zum Zusammenhang von Struktur und Modellierung beruflicher
Kompetenzen. In F. Rauner, B. Haasler, L. Heinemann, & P. Grollmann (Eds.), Messen
beruflicher Kompetenzen. Bd. I. Grundlagen und Konzeption des COMET-Projekts
(pp. 91–95). Münster: LIT.
Mayring, P. (1988). Qualitative Inhaltsanalyse. Grundlagen und Techniken (2. Auflage).
Weinheim: Deutscher Studien Verlag.
McCormick, E. (1979). Job analysis. Methods and applications. New York: Amacom.
Meyer, P. J., & Allen, N. J. (1991). A three-component conceptualization of organizational
commitment. Human Resource Management Review, 1, 61–89.
Meyer-Abich, K. N. (1988). Wissenschaft für die Zukunft. Holistisches Denken in ökologischer und
gesellschaftlicher Verantwortung. München: Beck.
Minnameier, G. (2001). Bildungspolitische Ziele, wissenschaftliche Theorien und methodisch-
praktisches Handeln – auch ein Plädoyer für “Technologieführerschaft” im Bildungsbereich.
In H. Heid, G. Minnameier, & E. Wuttke (Eds.), Fortschritte in der Berufsbildung? ZBW.
Beiheft 16 (pp. 13–29). Stuttgart: Steiner.
Monseur, C., Baye, A., Lafontaine, D., & Quittre, V. (2011). PISA test format assessment and the
local independence assumption. IERI Monographs Series. Issues and Methodologies in Large-
Scale Assessments. 4, 131–158. http://hdl.handle.net/2268/103137
Müller, W. (1995). Der Situationsfilm – Ein Medium partizipativer Organisationsentwicklung. In
G. Dybowski, H. Pütz, & F. Rauner (Hg.), Berufsbildung und Organisationsentwicklung.
„Perspektiven, Modelle, Forschungsfragen“ (pp. 333–344). Bremen: Donat.
Müller-Fohrbroth, G. (1973). Wie sind Lehrer wirklich? Ideale Vorurteile Fakten. Stuttgart: Klett.
National Automotive Technicians Education Foundation. (1996). ASE certification for automobile
technician training programs. Herndon, VA.
Nehls, H., & Lakies, T. (2006). Berufsbildungsgesetz. Basiskommentar. Frankfurt: Bund.
Neuweg, G. H. (1999). Könnerschaft und implizites Wissen. Münster: Waxmann.
Neuweg, G. H. (Ed.). (2000). Wissen – Können – Reflexion. Ausgewählte Verhältnisbestimmungen.
Innsbruck, Wien, München: Studien-Verlag.
Nickolaus, R., Gschwendtner, T., & Abele, S. (2009). Die Validität von Simulationsaufgaben am
Beispiel der Diagnosekompetenz von Kfz-Mechatronikern. Stuttgart: Institut für
Berufspädagogik.
Nida-Rümelin, J. (2011). Die Optimierungsfalle. Philosophie einer humanen Ökonomie. München:
Irisiana.
Norton, R. E. (1997). DACUM handbook. The national centre on education and training for
employment. Columbus/Ohio: The Ohio State University.
OECD. (2009). Länderbericht zur Berufsbildung in der Schweiz. Learning for Jobs, OECD Studie
zur Berufsbildung Schweiz. http://www.bbt.admin.ch/themen/internationales/01020/index.
html?lang=de (Zugriff 11.01.2016).
Oser, F. (1997). Standards der Lehrerbildung. Teil 1. Berufliche Kompetenzen, die hohen
Qualitätsmerkmalen entsprechen. Beiträge zur Lehrerbildung, 15(1), 26–37.
Oser, F., Curcio, G. P., & Düggeli, A. (2007). Kompetenzmessung in der Lehrerbildung als
Notwendigkeit – Fragen und Zusammenhänge. Beiträge zur Lehrerbildung, 25(1), 14–26.
Ott, B. (1998). Ganzheitliche Berufsbildung. Theorie und Praxis handlungsorientierter
Techniklehre in Schule und Betrieb (2nd ed.). Stuttgart: Steiner.
Pätzold, G. (1995). Vermittlung von Fachkompetenz in der Berufsbildung. In R. Arnold &
A. Lipsmeier (Eds.), Handbuch der Berufsbildung (pp. 157–170). Opladen: Leske + Budrich.
Pätzold, G., Drees, G., & Thiele, H. (1998). Kooperation in der beruflichen Bildung. Zur
Zusammenarbeit von Ausbildern und Berufsschullehrern im Metall- und Elektrobereich.
Baltmannsweiler: Schneider Verlag Hohengehren. Wirtschaft und Berufserziehung, 4, 89/98.
Pätzold, G., & Walden, G. (Eds.). (1995). Lernorte im dualen System der Berufsbildung. Reihe:
Berichte zur beruflichen Bildung, Heft 177. Hg. vom BIBB. Bielefeld: W. Bertelsmann.
Petermann, W. (1995). Fotographie und Filmanalyse. In U. Flick, E. von Kardoff, H. Keupp, L. von
Rosenstiel, & S. Wolff (Hg.), Handbuch qualitative Sozialforschung. Grundlagen, Konzepte,
Methoden und Anwendungen (2. Aufl, pp. 269–272). Weinheim: Beltz.
Petersen, A. W., & Wehmeyer, C. (2001). Evaluation der neuen IT-Berufe. Forschungskonzepte
und Ergebnisse der bundesweiten BiBB-IT-Studie. In A. W. Petersen, F. Rauner, & F. Stuber
(Eds.), IT-gestützte Facharbeit. Gestaltungsorientierte Berufsbildung. Reihe: Bildung und
Arbeitswelt (Vol. 4, pp. 283–310). Baden-Baden: Nomos.
Piaget, J. (1973). Äquilibration der kognitiven Strukturen. Stuttgart: Klett.
Piening, D., Frenzel, J., Heinemann, L., & Rauner, F. (2014). Berufliche Kompetenzen messen. Das
Modellversuchsprojekt KOMET NRW. Zweiter Zwischenbericht (Juli 2014). IBB, Universität
Bremen. http://www.ibb.uni-bremen.de.
Piening, D., & Rauner, F. (2010). Umgang mit Heterogenität. Eine Handreichung des Projektes
KOMET. Bremen: Universität Bremen I:BB.
Piening, D., & Rauner, F. (2014). Kosten, Nutzen und Qualität der Berufsausbildung. Berlin: LIT.
Pies, I. (2016). Individualethik versus Institutionenethik? – Zur Moral (in) der Marktwirtschaft. In
G. Minnameier (Ed.), Ethik und Beruf. Interdisziplinäre Zugänge (pp. 17–39). Bielefeld:
Bertelsmann Verlag.
Polanyi, M. (1966a). The tacit dimension. London: Routledge & Kegan Paul.
Polanyi, M. (1966b). The tacit dimension. Garden City: Doubleday & Company.
Polanyi, M. (1985). Implizites Wissen. Frankfurt/Main: Suhrkamp (orig.: The Tacit Dimension.
1966).
Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of a
scientific conception: towards a theory of conceptual change. Science Education, 66(2),
201–227.
Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in
simple mediation models. Behavior Research Methods, Instruments, & Computers, 36(4),
717–731.
Prenzel, M., Baumert, J., Blum, W., Lehmann, R., Leutner, D., Neubrand, M., et al. (Eds.). (2004).
PISA 2003. Der Bildungsstand der Jugendlichen in Deutschland – Ergebnisse des zweiten
internationalen Vergleichs. Münster: Waxmann.
Przygodda, K., & Bauer, W. (2004). Ansätze berufswissenschaftlicher Qualifikationsforschung im
BLK-Programm “Neue Lernkonzepte in der dualen Berufsausbildung”. In F. Rauner (Ed.),
Qualifikationsforschung und Curriculum. Analysieren und Gestalten beruflicher Arbeit und
Bildung. Reihe: Berufsbildung, Arbeit und Innovation (Vol. 25, pp. 61–79). Bielefeld:
W. Bertelsmann.
Rademacker, H. (1975). Analyse psychometrischer Verfahren der Erfolgskontrolle und der
Leistungsmessung hinsichtlich ihrer didaktischen Implikationen. In Programmierte Prüfungen:
Problematik und Praxis. Schriften zur Berufsbildungsforschung (Vol. 25, pp. 63–100). Hanno-
ver: Schroedel.
Randall, D. M. (1990). The consequences of organizational commitment. Administrative Science
Quarterly, 22, 46–56.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (2nd ed.).
Chicago: University of Chicago Press.
Rauner, F. (1986). Elektrotechnik Grundbildung. Überlegungen zur Techniklehre im Schwerpunkt
Elektrotechnik der Kollegschule. Soest: Landesinstitut für Schule und Weiterbildung.
Rauner, F. (1988). Die Befähigung zur (Mit)Gestaltung von Arbeit und Technik als Leitidee
beruflicher Bildung. In G. Heidegger, P. Gerds, & K. Weisenbach (Eds.), Gestaltung von Arbeit
und Technik – Ein Ziel beruflicher Bildung (pp. 32–51). Frankfurt/Main, New York: Campus.
Rauner, F. (1995). Gestaltung von Arbeit und Technik. In R. Arnold & A. Lipsmeier (Eds.),
Handbuch der Berufsbildung (pp. 50–64). Opladen: Leske + Budrich.
Rauner, F. (1997). Automobil-Service im internationalen Vergleich. In F. Rauner, G. Spöttl, &
W. Micknass (Eds.), Service, Qualifizierung und Vertrieb im internationalen Automobil-Sektor:
Ergebnisse des Automobil-Welt-Congresses am 15. und 16. Oktober 1996 in München
(pp. 35–47). Bremen: Donat-Verlag.
Rauner, F. (1999). Entwicklungslogisch strukturierte berufliche Curricula: Vom Neuling zur
reflektierten Meisterschaft. Zeitschrift für Berufs- und Wirtschaftspädagogik (ZBW), 95(3),
424–446. Stuttgart: Franz Steiner Verlag.
Rauner, F. (2000). Zukunft der Facharbeit. In J.-P. Pahl, F. Rauner, & G. Spöttl (Eds.), Berufliches
Arbeitsprozesswissen (pp. 49–60). Baden-Baden: Nomos.
Rauner, F. (2002a). Qualifikationsforschung und Curriculum. In M. Fischer & F. Rauner (Eds.),
Lernfeld: Arbeitsprozess (pp. 317–339). Baden-Baden: Nomos.
Rauner, F. (2002b). Berufliche Kompetenzentwicklung – vom Novizen zum Experten. In
P. Dehnbostel, U. Elsholz, J. Meister, & J. Meyer-Henk (Eds.), Vernetzte
Kompetenzentwicklung. Alternative Positionen zur Weiterbildung (pp. 111–132). Berlin: Edi-
tion Sigma.
Rauner, F. (2004). Eine transferorientierte Modellversuchstypologie – Anregung zur
Wiederbelebung der Modellversuchspraxis als einem Innovationsinstrument der
Bildungsreform (Teil 2). Zeitschrift für Berufs- und Wirtschaftspädagogik, 100, 424–447.
Rauner, F. (2004a). Qualifikationsforschung und Curriculum. Analysieren und Gestalten
beruflicher Arbeit und Bildung. In Berufsbildung, Arbeit und Innovation (Reihe). Band
25 Forschungsberichte. Bielefeld: W. Bertelsmann Verlag.
Rauner, F. (2004b). Praktisches Wissen und berufliche Handlungskompetenz. Reihe:
ITB-Forschungsberichte, Nr. 14. Universität Bremen: ITB.
Rauner, F. (2005). Offene dynamische Kernberufe als Dreh- und Angelpunkt für eine europäische
Berufsbildung. In P. Grollmann, W. Kruse, & F. Rauner (Eds.), Europäische Berufliche Bildung
(pp. 17–31). Münster: LIT.
Rauner, F. (2006). Qualifikations- und Ausbildungsordnungsforschung. In F. Rauner (Hg.),
Handbuch Berufsbildungsforschung. 2. aktualisierte Auflage (pp. 240–247). Bielefeld: W.
Bertelsmann.
Rauner, F. (2007). Praktisches Wissen und berufliche Handlungskompetenz. In Europäische
Zeitschrift für Berufsbildung. Nr. 40, 2007/1 (pp. 57–72). Thessaloniki: Cedefop – Europäisches
Zentrum für die Förderung der Berufsbildung.
Rauner, F. (2015a). Messen beruflicher Kompetenz von Berufsschullehrern. In M. Fischer,
F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung – Methoden
zum Erfassen und Entwickeln beruflicher Kompetenz. KOMET auf dem Prüfstand
(pp. 413–436). Münster: LIT.
Rauner, F. (2015b). Machbarkeitsstudie. Anwenden des KOMET-Testverfahrens für Prüfungen in
der beruflichen Bildung. (Unter Mitarbeit von Klaus Bourdick, Jenny Frenzel, Dorothea
Piening). Bremen: Universität Bremen, I:BB.
Bibliography 535
Rauner, F. (2017). Grundlagen der beruflichen Bildung. Mitgestalten der Arbeitswelt. Bielefeld:
wbv.
Rauner, F. (2018a). Der Weg aus der Akademisierungsfalle. Die Architektur paralleler
Bildungswege. Münster: LIT.
Rauner, F., & Piening, D. (2014). Heterogenität der Kompetenzausprägung in der beruflichen
Bildung. A+B-Forschungsberichte Nr. 14/2014. Bremen: Universität Bremen, I:BB.
Rauner, F., Bourdick, H., Frenzel, J., & Piening, D. (2015). Anwendung des COMET-
Testverfahrens für Prüfungen in der beruflichen Bildung. Machbarkeitsstudie. Bremen: I:BB.
Rauner, F., & Bremer, R. (2004). Bildung im Medium beruflicher Arbeitsprozesse. Die
berufspädagogische Entschlüsselung beruflicher Kompetenzen im Konflikt zwischen
bildungstheoretischer Normierung und Praxisaffirmation. In: Bildung im Medium beruflicher
Arbeit. Sonderdruck. ZfPäd, 50(2), 149–161.
Rauner, F., Frenzel, J., Piening, D., & Bachmann, N. (2015). Engagement und
Ausbildungsorganisation. Einstellungen Sächsischer Auszubildender zu ihrem Beruf und ihrer
Ausbildung. Eine Studie im Auftrage der Landesinitiative “Steigerung der Attraktivität, Qualität
und Rentabilität der dualen Berufsausbildung in Sachsen”. Bremen: Universität Bremen I:BB.
Rauner, F., Grollmann, P., & Martens, T. (2007). Messen beruflicher Kompetenz(entwicklung).
ITB-Forschungsbericht 21. Bremen: Institut Technik und Bildung.
Rauner, F., Schön, M., Gerlach, H., & Reinhold, M. (2001). Berufsbildungsplan für den
Industrieelektroniker. ITB-Arbeitspapiere 31. Bremen: Universität Bremen, ITB.
Rauner, F., & Spöttl, G. (2002). Der Kfz-Mechatroniker – Vom Neuling zum Experten. Bielefeld:
Bertelsmann.
Rauner, F., Zhao, Z., & Ji, L. (2010). Empirische Forschung zum Messen Beruflicher Kompetenz
der Auszubildenden und Studenten. Beijing: Verlag Tsinghua Universität.
Reckwitz, A. (2003, August). Grundelemente einer Theorie sozialer Praktiken: eine
sozialtheoretische Perspektive. Zeitschrift für Soziologie, 32(4), 282–301.
Ripper, J., Weisschuh, B., & Daimler Chrysler, A. G. (1999). Ausbildung im Dialog: das
ganzheitliche Beurteilungsverfahren für die betriebliche Berufsausbildung. Konstanz:
Christiani.
Röben, P. (2004). Kompetenzentwicklung durch Arbeitsprozesswissen. In K. Jenewein, P. Knauth,
P. Röben, & G. Zülch (Eds.), Kompetenzentwicklung in Arbeitsprozessen (pp. 11–34). Baden-
Baden: Nomos.
Röben, P. (2006). Berufswissenschaftliche Aufgabenanalyse. In F. Rauner (Ed.), Handbuch
Berufsbildungsforschung. 2. aktualisierte Aufl (pp. 606–611). Bielefeld: W. Bertelsmann.
Rost, J. (1999). Was ist aus dem Rasch-Modell geworden? Psychologische Rundschau, 50(3),
171–182.
Rost, J. (2004a). Lehrbuch Testtheorie - Testkonstruktion (2nd ed.). Bern: Huber.
Rost, J. (2004b). Psychometrische Modelle zur Überprüfung von Bildungsstandards anhand von
Kompetenzmodellen. Zeitschrift für Pädagogik, 50(5), 662–678.
Rost, J., & von Davier, M. (1994). A conditional item-fit index for Rasch models. Applied
Psychological Measurement, 18(2), 171–182.
Roth, H. (1971). Pädagogische Anthropologie. Bd. II: Entwicklung und Erziehung. Grundlagen
einer Entwicklungspädagogik. Hannover: Schroedel.
Sachverständigenkommission Arbeit und Technik. (1986). Forschungsperspektiven zum
Problemfeld Arbeit und Technik. Bonn: Verlag Neue Gesellschaft.
Sachverständigenkommission Arbeit und Technik. (1988). Arbeit und Technik. Ein Forschungs-
und Entwicklungsprogramm. Bonn: Verlag Neue Gesellschaft.
Schecker, H., & Parchmann, I. (2006). Modellierung naturwissenschaftlicher Kompetenz.
Zeitschrift für Didaktik der Naturwissenschaften (ZfDN), 12, 45–66.
Scheele, B. (1995). Dialogische Hermeneutik. In U. Flick, E. von Kardoff, H. Keupp, L. von
Rosenstiel, & S. Wolff (Hg.), Handbuch Qualitativer Sozialforschung (pp. 274–276).
Weinheim: Beltz.
Schein, E. (1973). Professional education. New York: McGraw-Hill.
Schelten, A. (1994). Einführung in die Berufspädagogik (2nd ed.). Stuttgart: Franz Steiner.
Schelten, A. (1997). Testbeurteilung und Testerstellung. Stuttgart: Franz Steiner.
Schmidt, H. (1995). Berufsbildungsforschung. In R. Arnold & A. Lipsmeier (Eds.), Handbuch der
Berufsbildung (pp. 482–491). Opladen: Leske+Budrich.
Schoen, D. A. (1983). The reflective practitioner. How professionals think in action. New York:
Basic Books.
Scholz, T. (2013). Beitrag des Koordinators der Industriemechaniker-Arbeitsgruppe. In:
Forschungsgruppe Berufsbildungsforschung (I:BB): Berufliche Kompetenzen messen – das
Modellversuchsprojekt KOMET (Metall). Abschlussbericht. Bremen: Universität Bremen, I:BB.
Scholz, T. (2015). Warum das KOMET-Projekt “Industriemechaniker (Hessen)” eine so
unerwartete Dynamik entfaltete. In M. Fischer, F. Rauner, & Z. Zhao (Eds.),
Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln
beruflicher Kompetenz: KOMET auf dem Prüfstand (pp. 149–161). Berlin: LIT.
Schreier, N. (2000). Integration von Arbeiten und Lernen durch eine arbeitsprozessorientierte
Qualifizierungskonzentration beim Einsatz tutorieller Diagnosesysteme im Kfz-Service. In:
Pahl, J.-P., Rauner, F., Spöttl, G. (Hg.) Berufliches Arbeitsprozesswissen. Ein
Forschungsgegenstand der Berufsfeldwissenschaften (pp. 289–300). Baden-Baden: Nomos.
Sennett, R. (1998). Der flexible Mensch. Die Kultur des neuen Kapitalismus (Originalausgabe: The
Corrosion of Character. New York). Berlin.
Sennett, R. (2008). Handwerk. Berlin: Berlin-Verlag (aus dem Amerikanischen übersetzt). The
Craftsman. New Haven and London: Yale University Press.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability.
Psychological Bulletin, 86(2), 420–428.
Skowronek, H. (1969). Lernen und Lernfähigkeit. München: Juventa.
Skule, S., & Reichborn, A. N. (2002). Learning-conducive work. A survey of learning conditions in
Norwegian workplaces. Luxembourg: Office for Official Publications of the European
Communities.
Spöttl, G. (2006). Experten-Facharbeiter-Workshops. In F. Rauner (Ed.), Handbuch
Berufsbildungsforschung (2nd ed., pp. 611–616). Bielefeld: W. Bertelsmann.
Stegemann, C., von Eerde, K., & Piening, D. (2015). KOMET Nordrhein-Westfalen: Erste
Erfahrungen in einem kaufmännischen Berufsfeld. In M. Fischer, F. Rauner, & Z. Zhao
(Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und
Entwickeln beruflicher Kompetenz: KOMET auf dem Prüfstand (pp. 127–136). Berlin: LIT.
Sternberg, R. J., & Grigorenko, E. L. (Eds.). (2003). The psychology of abilities, competencies, and
expertise. Cambridge: Cambridge University Press.
Steyer, R., & Eid, M. (2003). Messen und Testen. Berlin: Springer.
Straka, A., Meyer-Siever, K., & Rosendahl, J. (2006). Laborexperimente und Quasi-Experimente.
In F. Rauner (Ed.), Handbuch Berufsbildungsforschung. 2. aktual. Aufl (pp. 647–652). Biele-
feld: wbv.
Stuart, M. (2010). The national skills development handbook 2010/11. Rainbow SA.
Suppes, P., & Zinnes, J. L. (1963). Basic measurement theory. In R. D. Luce et al. (Eds.), Handbook
of mathematical psychology. I (pp. 1–76). New York: Wiley.
Taylor, I. A. (1975). An emerging view of creative actions. In I. A. Taylor & J. W. Getzels (Eds.),
Perspectives in creativity (pp. 297–325). Chicago, IL: Aldine.
Tenorth, H.-E. (2009). Ideen und Konzepte von Bildungsstandards. In R. Wernstedt & M. John-
Ohnesorg (Eds.), Bildungsstandards als Instrument schulischer Qualitätsentwicklung
(pp. 13–16). Berlin: Friedrich-Ebert-Stiftung.
Terhart, E. (1998). Lehrerberuf. Arbeitsplatz, Biografie und Profession. In H. Altrichter et al. (Eds.),
Handbuch der Schulentwicklung (pp. 560–585). Innsbruck, Weinheim: Studienverlag.
Tiemeyer, E. (2015). Nordrhein-Westfalen klinkt sich ein. Ziele und erste Erfahrungen mit einem
ambitionierten COMET-Projekt. In M. Fischer, F. Rauner, & Z. Zhao (Eds.),
Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln
beruflicher Kompetenz: KOMET auf dem Prüfstand (pp. 73–91). Berlin: LIT.
Young, M. (2007). Auf dem Weg zu einem europäischen Qualifikationsrahmen: Einige kritische
Bemerkungen. In P. Grollmann, G. Spöttl, & F. Rauner (Eds.), Europäisierung beruflicher
Bildung – eine Gestaltungsaufgabe. Hamburg: LIT.
Young, M. (2009). National qualification frameworks: Their feasibility for effective implementation
in developing countries. Skills Working Paper No. 22. Geneva: ILO.
Zentralverband der Elektrotechnischen Industrie (ZVEI). (1973). Ausbildungs-Handbuch für die
Stufenausbildung elektrotechnischer Berufe (Vol. 7, 2nd ed.). Frankfurt/Main: ZVEI-
Schriftenreihe.
Zhao, Z. (2014). KOMET-China: Die Schritte auf dem Weg zu einem nationalen Schlüsselprojekt
der Qualitätssicherung in der Beruflichen Bildung. Zeitschrift für Berufs- und
Wirtschaftspädagogik, 110(3), 442–448.
Zhao, Z. (2015). Schritte auf dem Weg zu einer Kompetenzentwicklung für Lehrer und Dozenten
beruflicher Bildung in China. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik
in der beruflichen Bildung (pp. 437–449). Münster: LIT.
Zhao, Z., Rauner, F., & Zhou, Y. (2015). Messen von beruflicher Kompetenz von Auszubildenden
und Studierenden des Kfz-Servicesektors im internationalen Vergleich: Deutschland – China. In
M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand
(pp. 393–410). Berlin: LIT.
Zhao, Z., Zhang, Z., & Rauner, F. (2016). KOMET based professional competence assessment for
VET teachers in China. In M. Pilz (Ed.), Youth in transition from school to work – vocational
education and training (VET) in times of economic crises. Dordrecht: Springer.
Zhao, Z., & Zhuang, R. (2012). Research and development of the curriculum for the secondary
teachers’ qualification. Education and Training, 5, 12–15.
Zhao, Z., & Zhuang, R. (2013). Messen beruflicher Kompetenz von Auszubildenden und
Studierenden berufsbildender (Hoch)Schulen in China. Zeitschrift für Berufs- und
Wirtschaftspädagogik, 109(1), 132–140.
Zhou, Y., Rauner, F., & Zhao, Z. (2015). Messen beruflicher Kompetenz von Auszubildenden und
Studierenden des Kfz-Servicesektors im internationalen Vergleich: Deutschland – China. In
M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung.
Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem Prüfstand
(pp. 393–410). Münster: LIT.
Zhuang, R., & Ji, L. (2015). Analyse der interkulturellen Anwendung der COMET-
Kompetenzdiagnostik. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in
der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz:
COMET auf dem Prüfstand (pp. 341–352). Berlin: LIT.
Zhuang, R., & Zhao, Z. (2012). Empirische Forschung zum Messen Beruflicher Kompetenz der
Auszubildenden und Studenten. Peking: Verlag Tsinghua Universität.
Zimmermann, M., Wild, K.-P., & Müller, W. (1999). Das “Mannheimer Inventar zur Erfassung
betrieblicher Ausbildungssituationen” (MIZEBA). Zeitschrift für Berufs- und
Wirtschaftspädagogik, 95(3), 373–402.
Zöller, A., & Gerds, P. (Eds.). (2003). Qualität sichern und steigern. Personal- und
Organisationsentwicklung als Herausforderung beruflicher Schulen (pp. 333–355). Bielefeld:
Bertelsmann.
A+B 01/2008 Heinemann, L., Rauner, F. „Identität und Engagement: Konstruktion eines Instru-
ments zur Beschreibung der Entwicklung beruflichen Engagements und beruflicher Identität“
A+B 01/2016 Rauner, F., Frenzel, J., Heinemann, L., Kalvelage, J., Zhou, Y. (2016). „Identität und
Engagement: ein Instrument zur Beschreibung und zum Messen beruflicher Identität und
beruflichen Engagements“. A+B Forschungsberichte (2. vollständig überarbeitete Auflage des
01/2006)
A+B 02/2009 Rauner, F., Heinemann, L., Haasler, B. „Messen beruflicher Kompetenz und
beruflichen Engagements“
A+B 04/2009 Maurer, A., Rauner, F., Piening, D. „Lernen im Arbeitsprozess – ein nicht
ausgeschöpftes Potenzial dualer Berufsausbildung“
A+B 10/2012 Rauner, F. „Multiple Kompetenz: Die Fähigkeit der holistischen Lösung beruflicher
Aufgaben“
A+B 11/2012 Rauner, F. „Messen beruflicher Kompetenz von Berufsschullehrern“
A+B 12/2013 Rauner, F. „Überprüfen beruflicher Handlungskompetenz. Zum Zusammenhang von
Prüfen und Kompetenzdiagnostik“
A+B 14/2014 Rauner, F., Piening, D. „Heterogenität der Kompetenzausprägung in der beruflichen
Bildung“
A+B 15/2014 Fischer, M., Huber, K., Mann, E., Röben, P. „Informelles Lernen und dessen
Anerkennung aus der Lernendenperspektive – Ergebnisse eines Projekts zur Anerkennung
informell erworbener Kompetenzen in Baden-Württemberg“
A+B 16/2014 Rauner, F., Piening, D. „Kontextanalysen im KOMET-Forschungsprojekt: Erfassen
der Testmotivation“
A+B 17/2014 Rauner, F., Piening, D., Frenzel, J. „Der Lernort Schule als Determinante beruflicher
Kompetenzentwicklung“
A+B 18/2014 Rauner, F., Piening, D., Zhou, Y. „Stagnation der Kompetenzentwicklung – und wie
sie überwunden werden kann“
A+B 19/2015 Rauner, F., Piening, D., Scholz, T. „Denken und Handeln in Lernfeldern. Die
Leitidee beruflicher Bildung – Befähigung zur Mitgestaltung der Arbeitswelt – wird konkret“
A+B 20/2015 Rauner, F., Piening, D. „Die Qualität der Lernortkooperation“
A+B Forschungsberichte: Forschungsgruppe Berufsbildungsforschung (I:BB) (Hg.), Universität
Bremen. KIT – Karlsruher Institut für Technologie, Institut für Berufspädagogik und
Allgemeine Pädagogik. Carl von Ossietzky Universität Oldenburg, Institut für Physik/
Technische Bildung. Pädagogische Hochschule Weingarten, Professur für Technikdidaktik
Blömeke, S., & Suhl, U. (2011). Modellierung von Lehrerkompetenzen. Nutzung unterschiedlicher
IRT-Skalierungen zur Diagnose von Stärken und Schwächen deutscher Referendarinnen und
Referendare im internationalen Vergleich. Zeitschrift für Erziehungswissenschaft, 13(2011),
473–505.
Brüning, L., & Saum, T. (2006). Erfolgreich unterrichten durch Kooperatives Lernen. Strategien
zur Schüleraktivierung. Essen: Neue Deutsche Schule Verlagsgesellschaft mbH.
Fischer, M., Rauner, F., & Zhao, Z. (Eds.). (2015b). Kompetenzdiagnostik in der beruflichen
Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem
Prüfstand. Münster: LIT.
Hellpach, W. (1922). Sozialpsychologische Analyse des betriebstechnischen Tatbestandes
„Gruppenfabrikation“. In R. Lang, & W. Hellpach (Hg.), Gruppenfabrikation (pp. 5–186).
Berlin: Springer.
Lehberger, J. (2013). Arbeitsprozesswissen - didaktisches Zentrum für Bildung und Qualifizierung.
Ein kritisch-konstruktiver Beitrag zum Lernfeldkonzept. Münster: LIT.
Lehberger, J. (2015). Berufliches Arbeitsprozesswissen als eine Dimension des COMET-
Messverfahrens. In M. Fischer, F. Rauner, & Z. Zhao (Hrsg.), Kompetenzdiagnostik in der
beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET
auf dem Prüfstand. Bildung und Arbeitswelt (Bd. 30, pp. 209–224). Münster: LIT.
Kleiner, M., Rauner, F., Reinhold, M., & Röben, P. (2002). Curriculum-design I. Konstanz:
Christiani.
Kleemann, F., Krähnke, U., & Matuschek, I. (2009). Interpretative Sozialforschung. Eine
praxisorientierte Einführung. Wiesbaden: VS Verlag für Sozialwissenschaften.
Rauner, F. (2018b). Berufliche Kompetenzdiagnostik mit COMET. Erfahrungen und
Überraschungen aus der Praxis. Bielefeld: wbv.
COMET-Berichte
Bortz, J., & Döring, N. (2003). Forschungsmethoden und Evaluation für Human- und
Sozialwissenschaftler (3. Auflage). Berlin, Heidelberg: Springer.
Brown, H. (2015). Competence measurement in South Africa: Teachers' reactions to feedback on
COMET results. In E. Smith, P. Gonon, & A. Foley (Eds.), Architectures for apprenticeship.
Achieving economic and social goals (pp. 91–95). North Melbourne: Australian Scholarly
Publishing.
Bundesministerium für Bildung und Forschung (BMBF) (Hg.). (2006). Umsetzungshilfen für die
Abschlussprüfungen der neuen industriellen und handwerklichen Elektroberufe. Intentionen,
Konzeption und Beispiele (Entwicklungsprojekt). Stand: 30.12.2005. (Teil 1 der
Abschlussprüfung); Stand: 09.01.2006. (Teil 2 der Abschlussprüfung). Manuskript.
Dreyfus, H. L., & Dreyfus, S. E. (1987). Künstliche Intelligenz. Von den Grenzen der
Denkmaschine und dem Wert der Intuition. Reinbek bei Hamburg: Rowohlt.
Fischer, M., & Witzel, A. (2008). Zum Zusammenhang von berufsbiographischer Gestaltung und
beruflichem Arbeitsprozesswissen. In M. Fischer, & G. Spöttl (Hg.), Im Fokus:
Forschungsperspektiven in Facharbeit und Berufsbildung. Strategien und Methoden der
Berufsbildungsforschung (pp. 24–47). Frankfurt a. M.: Peter Lang.
Fischer, R., & Hauschildt, U. (2015). Internationaler Kompetenzvergleich und Schulentwicklung.
Das Projekt COMCARE bietet neue Ansatzmöglichkeiten. PADUA Fachzeitschrift für
Pflegepädagogik, Patientenedukation und -bildung, 10(4), 233–241.
Forschungsgruppe Berufsbildungsforschung (I:BB). (2015). KOMET NRW – Ein ambitioniertes
Projekt der Qualitätssicherung und -entwicklung in der dualen Berufsausbildung. Bericht der
Wissenschaftlichen Begleitung. Bremen: Universität Bremen, I:BB.
Hauschildt, U. (2015). Me siento bien en mi centro de formación – I feel good at my training
institution: Results of an international competence assessment in nursing. In E. Smith, P. Gonon,
& A. Foley (Eds.), Architectures for apprenticeship. Achieving economic and social goals
(pp. 100–104). North Melbourne: Australian Scholarly Publishing.
Hauschildt, U., Brown, H., & Zungu, Z. (2013). Competence measurement and development in
TVET: Results of the first COMET test in South Africa. In S. Akoojee, P. Gonon, U. Hauschildt,
& C. Hofmann (Eds.), Apprenticeship in a globalised world. Premises, promises and pitfalls
(pp. 177–184). Münster: LIT.
Hauschildt, U., & Heinemann, L. (2013). Occupational identity and motivation of apprentices in a
system of integrated dual VET. In L. Deitmer, U. Hauschildt, F. Rauner, & H. Zelloth (Eds.),
The architecture of innovative apprenticeship. Technical and vocational education and train-
ing: Issues, concerns and prospects 18 (pp. 177–192). Dordrecht: Springer.
Hauschildt, U., & Piening, D. (2013). Why apprentices quit: A German case study. In S. Akoojee,
P. Gonon, U. Hauschildt, & C. Hofmann (Eds.), Apprenticeship in a globalised world. Premises,
promises and pitfalls (pp. 199–202). Münster: LIT.
Hauschildt, U., & Schumacher, J. (2014). COMCARE: Measurement and teaching of vocational
competence, occupational identity and organisational commitment in health care occupations
in Spain, Norway, Poland and Germany. Test instruments and documentations of results.
Bremen: Universität Bremen, I:BB.
Heinemann, L., & Rauner, F. (2011). Measuring vocational competences in electronic engineering:
Findings of a large scale competence measurement project in Germany. In Z. Zhao, F. Rauner,
& U. Hauschildt (Eds.), Assuring the acquisition of expertise. Apprenticeship in the modern
economy (pp. 221–224). Peking: Foreign Language Teaching and Research Press.
Heinemann, L., Maurer, A., & Rauner, F. (2011). Modellversuchsergebnisse im Überblick. In F.
Rauner, L. Heinemann, A. Maurer, L. Ji, & Z. Zhao (Eds.), Messen beruflicher Kompetenz. Bd.
III. Drei Jahre KOMET Testerfahrung (pp. 150–209). Münster: LIT.
Ji, L., Rauner, F., Heinemann, L., & Maurer, A. (2011). Competence development of apprentices
and TVET students: A Chinese-German comparative study. In Z. Zhao, F. Rauner, &
U. Hauschildt (Eds.), Assuring the acquisition of expertise. Apprenticeship in the modern
economy (pp. 217–220). Peking: Foreign Language Teaching and Research Press.
Kunter, M., et al. (2002). PISA 2000 – Dokumentation der Erhebungsinstrumente. Berlin: Max-
Planck-Institut für Bildungsforschung.
Piening, D., Frenzel, J., Heinemann, L., & Rauner, F. (2014b). Berufliche Kompetenzen messen –
Das Modellversuchsprojekt KOMET NRW. 1. Zwischenbericht. Bremen: Universität Bremen, I:
BB.
Piening, D., Frenzel, J., Heinemann, L., & Rauner, F. (2014c). Berufliche Kompetenzen messen –
Das Modellversuchsprojekt KOMET NRW. 2. Zwischenbericht. Bremen: Universität Bremen, I:
BB.
Piening, D., & Rauner, F. (2015a). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Elektroniker/-in/Abschlussbericht. Bremen: Universität Bremen, I:
BB.
Piening, D., & Rauner, F. (2015b). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Industriemechaniker/-in/Abschlussbericht. Bremen: Universität
Bremen, I:BB.
Piening, D., & Rauner, F. (2015c). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Kaufmann/-frau für Spedition und Logistikdienstleistung und
Industriekaufmann/-frau/Abschlussbericht. Bremen: Universität Bremen, I:BB.
Piening, D., & Rauner, F. (2015d). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Kfz-Mechatroniker/-in/Abschlussbericht. Bremen: Universität Bre-
men, I:BB.
Piening, D., & Rauner, F. (2015e). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Medizinische/-r Fachangestellte/-r/Abschlussbericht. Bremen:
Universität Bremen, I:BB.
Piening, D., & Rauner, F. (2015f). Messen und Entwicklung von beruflicher Kompetenz in NRW
(KOMET NRW). Teilprojekt Tischler/-in/Abschlussbericht. Bremen: Universität Bremen, I:BB.
Piening, D., & Rauner, F. (2015g). Umgang mit Heterogenität. Eine Handreichung des Projektes
KOMET. Bremen: Universität Bremen I:BB.
Rauner, F. (2013). Applying the COMET competence measurement and development model for
VET teachers and trainers. In S. Akoojee, P. Gonon, U. Hauschildt, & C. Hofmann (Eds.),
Apprenticeship in a globalised world. Premises, promises and pitfalls (pp. 181–184). Münster:
LIT.
Rauner, F. (2014). Berufliche Kompetenzen von Fachschulstudierenden der Fachrichtung Metall-
Technik – eine KOMET-Studie (Hessen). Abschlussbericht. Bremen: Universität Bremen, I:BB.
Rauner, F., Frenzel, J., & Piening, D. (2015a). Machbarkeitsstudie: Anwendung des KOMET-
Testverfahrens für Prüfungen in der beruflichen Bildung. Bremen: Universität Bremen, I:BB.
Rauner, F., Frenzel, J., Piening, D., & Bachmann, N. (2015b). Engagement und
Ausbildungsorganisation. Einstellungen Sächsischer Auszubildender zu ihrem Beruf und ihrer
Ausbildung. Eine Studie im Auftrage der Landesinitiative „Steigerung der Attraktivität, Qualität
und Rentabilität der dualen Berufsausbildung in Sachsen“. Bremen: Universität Bremen I:BB.
Rauner, F., Heinemann, L., & Hauschildt, U. (2013). Measuring occupational competences:
Concept, method and findings of the COMET project. In L. Deitmer, U. Hauschildt,
F. Rauner, & H. Zelloth (Eds.), The architecture of innovative apprenticeship. Technical and
vocational education and training: Issues, concerns and prospects 18 (pp. 159–176). Dor-
drecht: Springer.
Rauner, F., Piening, D., & Bachmann, N. (2015). Messen und Entwicklung von beruflicher
Kompetenz in den Pflegeberufen der Schweiz (KOMET Pflegeausbildung Schweiz):
Abschlussbericht. Bremen: Universität Bremen, I:BB.
Rauner, F., Piening, D., Fischer, R., & Heinemann, L. (2014). Messen und Entwicklung von
beruflicher Kompetenz in den Pflegeberufen der Schweiz (COMET Pflege Schweiz): Ergebnis
der 1. Testphase 2013. Bremen: Universität Bremen, I:BB.
Rauner, F., Piening, D., Heinemann, L., Hauschildt, U., & Frenzel, J. (2015). KOMET NRW – Ein
ambitioniertes Projekt der Qualitätssicherung und -entwicklung in der dualen
Berufsausbildung. Abschlussbericht: Zentrale Ergebnisse. Bremen: Universität Bremen, I:BB.
Scholz, T., & Heinemann, L. (2013). COMET learning tasks in practice – how to make use of
learning tasks at vocational schools. In S. Akoojee, P. Gonon, U. Hauschildt, & C. Hofmann
(Eds.), Apprenticeship in a globalised world. Premises, promises and pitfalls (pp. 107–110).
Münster: LIT.
Index
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 543
F. Rauner, Measuring and Developing Professional Competences in COMET,
Technical and Vocational Education and Training: Issues, Concerns and Prospects
33, https://doi.org/10.1007/978-981-16-0957-2
Connell, M.W., 22, 54, 55, 387, 394, 395
Cooley, M., 20
Corbett, J.M., 20
Cramer, H., 70
Crawford, M.B., 351
Curcio, G.P., 393

D
Daimler Chrysler, A.G., 267
Davier, M. von, 150, 156, 162, 163
Degen, U., 6, 48
Dehnbostel, P., 46, 471
Deitmer, L., 282, 344
Dengler, K., 345
Deutscher Bundestag, 40, 342, 344, 424
Dewey, J., 347
DFG, 14, 61, 335, 352
Dick, M., 50
Döbrich, P., 267
Döring, N., 128, 134, 151, 262
Dörner, D., 250
Drees, G., 268
Drexel, I., 428
Dreyfus, H.L., 32, 43, 44, 73, 387
Düggeli, A., 393
Dürrenberger, G., 81
Dybowski, G., 344

E
Efron, B., 150
Emery, F.E., 21
Emery, M., 21
Erdwien, B., 118, 120, 131, 148, 150–154, 161, 291, 349, 405, 422
Erpenbeck, J., 47
Eugster, B., 267
Euler, D., 13
Eyd, M., 156

F
Fischer, B., 387
Fischer, M., 10, 46, 49, 70, 83, 312, 332, 346, 387
Fischer, R., 1, 249, 257, 259, 333, 335, 344, 345, 382, 474
Fleiss, J.L., 115, 116, 153
Flick, U., 250
Frank, H., 8
Frei, F., 23
Frenzel, J., 257, 313, 315, 339, 374, 378, 390
Freund, R., 387
Frey, A., 392
Frieling, E., 23, 34

G
Gablenz-Kollakowicz, S., 23
Ganguin, D., 6, 72, 344
Gardner, H., 10, 22, 52, 54, 55, 140, 356, 362, 387, 394, 395, 424
Garfinkel, H., 18, 49–52
Gäumann-Felix, K., 104, 144, 257, 336, 470–485
Georg, W., 84
Gerds, P., 344
Gerecht, M., 267
Gerlach, H., 44
Gerstenberger, F., 5
Gerstenmaier, J., 11
Giddens, A., 16
Gille, M., 312
Gillis, S., 345
Girmes-Stein, R., 70, 387
Glaser, B., 249
Granville, G., 7
Gravert, H., 7
Griffin, P., 345
Grigorenko, E.L., 126
Grob, U., 11
Grogoll, T., 23
Grollmann, P., 7, 53, 63, 84, 148, 249, 265, 332, 411, 421
Gruber, H., 66
Grünewald, U., 6
Gruschka, A., 43, 54, 79, 355, 387, 425
Gschwendtner, T., 332
Guillemin, F., 185
Guldimann, T., 374

H
Haase, P., 344
Haasler, B., 54, 70, 150–154, 265, 332, 422
Hacker, W., 6, 47, 72, 346, 402
Hackman, J.R., 21
Hartig, J., 54
Hastedt, H., 24
Hattie, J.A., 208, 336, 374, 380, 390, 407
Hauschildt, U., 69, 257, 259, 336, 474
Havighurst, R.J., 43, 70, 249, 425
Hayes, A.F., 234
Heeg, F.J., 129
Heermeiner, R., 342
W
Walden, G., 268
Wallbott, H.G., 116, 152, 161
Weber, M., 17
Weber, S., 2, 332
Wedekind, V., 69
Wehmeyer, C., 194, 196, 197, 201
Wehner, T., 50
Weinert, F.E., 15, 62, 70, 247
Weiß, R., 136
Weisschuh, B., 267
Weißmann, H., 194, 201
Wenger, E., 20, 43, 44, 70, 387

Z
Zhang, Z., 78, 414, 415
Zhao, Z., 1, 78, 100, 187, 188, 249, 255, 259, 280, 335, 340, 344, 345, 362, 386, 412, 414, 415, 418, 422
Zhou, Y., 86, 87, 100, 171–178, 250, 255, 259, 280, 386, 418
Zhuang, R., 150, 185, 188, 255, 418
Zimmermann, M., 267
Zinnes, J. L., 147, 156
Zöller, A., 344
Zutavern, M., 374
ZVEI, 81
Subject Index
A
Ability
  implicit, 32
  professional, 10, 11, 18, 49, 65, 131, 134
Action
  artistic, 12, 74
  competence, 83, 193, 194, 372, 383, 406, 410, 420, 458
  complete, 6, 55, 59, 72, 73, 396, 402, 406, 455
  professional, 6, 16, 28, 51, 57, 59, 71–73, 96, 100, 102, 103, 105, 109, 128, 131, 137, 142, 203, 264, 352, 383, 402, 410, 412, 414, 425, 426, 431, 438, 458, 469
  types, 73–74
  vocational, 55, 57, 72, 73, 102, 213, 216, 224, 426, 430, 451
Applied Academics, 52
Apprenticeship, 18, 70, 100, 126, 250, 320, 321, 387, 433, 446, 471
Architecture of parallel educational paths/pathways, 18, 19, 352
Assignment
  company, 136, 196, 197
  work, 92, 195, 208
Attractiveness, 276, 320, 329, 338, 339, 341

B
BIBB, 134, 209, 282, 320
Bologna reform, 18, 352
Business process orientation, 66, 147, 161, 169, 268, 269, 327–329

C
Capability model, 332
Career aspirations, 80
Certification systems, 7, 15, 427
China, 100, 150, 185–187, 192, 244, 255, 260, 280, 281, 308, 309, 375, 376, 412–416, 520
Chinese teachers, 187, 189, 260, 376
Classification systems, 18, 260
Coefficient of variation, 216, 217, 272, 299, 300, 349, 354, 445
COMET, see Competence development and assessment in TVET (COMET)
Commitment
  occupational, 85, 88, 248, 328, 360, 517
  organisational, 85, 88, 184, 313, 319, 328, 358
  professional, 311, 314, 319, 327, 328, 333, 358–360
  research, 82, 84, 85, 312, 338, 339
  vocational, 184, 355
Communicativity, 51
Community of practice, 12, 69, 70, 80, 83
Company project work, 193–195, 197
Comparative projects, 109, 127, 129, 144, 257, 258, 519
Competence
  to act, 41, 53, 77, 78, 157
  in action, 32
  assessment, 8, 15
  conceptual-planning, 136, 193, 221, 403, 410
126–131, 135, 136, 142, 143, 145, 147–149, 154, 156–158, 171, 187, 190, 193–196, 199, 202, 203, 205, 206, 208–210, 213–215, 250, 260, 261, 275, 277, 286–290, 298, 299, 307, 310–312, 320, 331–335, 338–342, 346, 351, 352, 354–360, 362, 363, 368, 372, 380, 382, 387, 389–422, 424–426, 428, 429, 433, 442, 444, 446, 455
Professional concept, 17, 468, 469
Professional development, 3, 15, 21, 92, 93, 137, 249, 328, 333, 425
Professional ethics, 17, 81–83, 353, 354
Professional expertise, 48, 93, 148, 169, 372, 489
Professional identity, 1, 3, 15, 16, 79–81, 83, 85, 174, 230, 269, 311–313, 315, 317, 319, 323–326, 332, 333, 338–342, 346, 351, 354, 355, 357–360, 382, 407, 467, 472, 488
Professionalism
  modern, 360
  open dynamic, 41
Professional knowledge, 10, 11, 18, 30, 32, 48, 49, 52, 53, 57, 194, 328, 331, 387, 393, 405, 406, 430–432, 454, 487, 488
Professional learning, 11, 430, 432, 446, 455, 456
Professional role, 80, 83, 311, 312, 355
Professional skills, 2, 10, 29, 40, 41, 49, 52, 63, 98, 99, 132, 134, 154, 193, 194, 196, 346, 372, 381, 431, 433
Professional typology, 316–319
Project design, 257–264, 266, 282
Project objectives, 257–258, 276
Project organisation, 258
Psychometric modelling, 158

Q
Qualification frameworks, 102
Qualification levels, 97, 102, 103, 107, 129, 138, 140, 144, 257, 306, 309, 310
Qualification requirements, 5, 6, 9, 10, 19, 20, 25–27, 37, 41, 48, 49, 59, 62, 98, 103, 110, 129, 138, 141, 194, 203, 206–208, 397, 407, 429
Qualification research, 30, 41, 42, 44, 48, 51, 54, 59, 71, 103, 342, 344
Quality assurance, 1, 4, 142, 218, 224, 254, 257, 260, 264, 267, 282, 329, 335, 342, 343, 361, 374, 407, 484
Quality competition, 72
Quality control, 41, 77, 129, 211, 215, 269, 457
Quality criteria, 4, 16, 70, 126, 128, 135, 200, 201, 218, 224, 259, 269, 271, 361–365, 375, 482
Quality diagram, 268–272, 363, 364, 370, 371
Quality profile, 272, 273, 339, 363
Questionnaires
  context, 186, 232

R
Rasch model, 155, 159, 161, 162
Rater training, 79, 105, 110–115, 117, 118, 120, 127, 128, 144, 145, 150, 153, 160, 188–190, 192, 201, 218, 224, 256, 278, 337, 341, 348, 375–377, 384–388, 418
Rating procedure, 10, 78, 79, 112, 127, 150, 159–160, 162, 201, 206, 209, 212, 279, 292, 299, 338, 341, 393, 403, 421, 453
Rating results, 78, 79, 112–114, 117–122, 216, 341, 376
Rating scale, 68, 110, 112, 151, 161, 202, 211–213, 257, 279, 341, 404, 405, 410, 415, 419, 421, 489–496
Rating training, 79, 105, 111, 145, 162, 377, 421
Re-engineering, 17
Reflection on and in action, 53
Reliability, 78, 79, 105, 111, 120–122, 126, 127, 135, 136, 150–153, 156, 159–162, 178, 182–186, 188, 191, 192, 201, 202, 209, 212, 219, 277, 335, 416, 421, 482
  analysis, 120–122, 178, 183, 191, 192
  calculations, 151, 153, 376
Requirement dimension, 56, 63, 200, 285, 394, 404, 405
Research
  designs, 249, 250, 336, 340
  hypothesis-driven, 249, 250
  hypothesis-led, 250
  strategies, 248–256
Risk group, 67, 227, 281, 292, 298, 302, 303, 385, 401
Role distance, 80

S
Safety
  occupational, 58, 99, 135, 345, 464, 482, 505
  works, 345, 408, 490, 492, 498, 501, 504
Scale properties, 151
School climate, 248, 269, 324, 327, 374
Scope for design, 99, 102, 104, 138, 214, 410, 437, 439, 451, 465, 481
Scope of the task/test, 201, 225, 276, 411
Selectivity index, 132–135
Shaping competence, 7, 39, 65–67, 75, 97, 121, 122, 146, 166, 169, 190–192, 206, 220, 222–224, 244, 286, 287, 289–291, 293, 297, 298, 304, 346, 351, 384, 389, 401, 405, 418, 425, 426, 428, 446, 469
Shaping the working world, 7, 41, 143
Shapiro-Wilk test, 151
Situated learning, 261, 262, 387
Situativity, 27, 51
SK Arbeit und Technik, 40
Skills
  implicit, 10, 49
  practical, 9, 10, 49, 52, 187, 488
  professionals, 9, 10, 468
  social, 49
  technical, 10, 33, 381
  vocational, 109
Social compatibility, 28, 39, 58, 106, 120–122, 147, 161, 162, 169, 190, 192, 199, 299, 348, 351, 352, 404, 408, 428, 464, 490, 492, 500, 504
Social responsibility, 42, 505
Solution spaces, 15, 59, 102, 104, 105, 107, 109–113, 144, 201, 214, 215, 277, 278, 299, 332, 333, 347, 376, 399, 402, 440, 441, 454, 461, 462, 467, 482, 483, 490, 492, 497, 499, 502, 506, 508
South Africa, 118, 150, 244, 245, 490
Specialisation, 17, 19, 103, 339, 354, 361, 419
S-R (behavioural) theory, 41
Stagnation, 250–256, 261, 336, 341
Stagnation hypothesis, 251, 253
Standard
  educational, 13, 61, 427, 442
Studies
  hypothesis-led, 44
  professionals, 25, 37, 44
  of works, 52
Subject areas, 155, 392, 480
Subject didactics, 3, 61, 249
Sustainability, 39, 57, 59, 66, 68, 76, 106, 121, 122, 147, 161, 162, 169, 190, 192, 214, 283, 351, 404, 407, 428, 465, 478, 479, 481, 483, 484, 490, 491

T
TA, see Technical colleges (TA)
Tacit knowledge, 10, 11, 32, 49–50
Tacit skills, 10, 32
Tasks
  holistic, 55, 59, 63, 99, 105, 138, 147, 149, 195, 201, 202, 212, 215, 249, 264, 401, 406, 438
  solutions, 1, 28, 56, 57, 59, 66, 76, 97–99, 102, 104, 105, 107, 110, 111, 115, 123, 127, 128, 143–145, 147–149, 151, 159–166, 168, 169, 199, 201, 212–216, 229, 236, 249, 256, 264, 272, 277, 282, 288, 298, 341, 351, 375–377, 388, 401, 406, 409, 411, 412, 433, 438–439, 443, 444, 450, 451, 453, 461–462, 466–468, 478, 491, 492
Taylorisation, 6
Taylorism, 82, 343
Teacher evaluation, 326, 327, 374
Teachers assessment, 138, 269
Teaching-learning process, 427, 447–459
Teaching-learning research, 332–338, 340, 342
Teaching quality, 269, 368–370, 374, 380
Technical colleges (TA), 104, 144, 225, 226, 253, 264, 280, 291, 301, 376, 382, 385, 389, 415
Technicians mechatronics, 17, 101, 219–224, 242, 251, 261, 262, 380, 396, 418
Technological and economic change, 197
Technology assessment, 23
Technology design, 23, 24
Technology genetics research, 23
Technology impact assessment, 23
Test arrangements, 3, 101–104, 107, 109, 257, 395
Test concept, 70, 99, 143, 144
Test design, 128, 261
Test format, 1, 96, 99, 102, 104, 123, 132, 224, 335
Test group
  primary, 102, 110, 129, 257
  secondary, 257
Test motivation, 225–248, 276, 277, 340
Test participants, 64, 98, 102, 111, 112, 117, 122–124, 127, 129, 130, 138, 142–144, 186, 202, 213, 225, 226, 229, 230, 235, 238–240, 244, 247, 248, 251, 258, 261, 264, 265, 268, 274–280, 283, 285–287, 290, 300, 306, 340, 347, 348, 354, 356, 357, 362, 363, 372, 374, 376, 384, 385, 390, 410, 412, 418
Test population, 104, 120, 123, 125, 143, 144, 261, 264, 266, 384
Test quality criteria, 10, 126, 128, 132
Test results
  interpretation of, 64, 268, 276, 279, 340
  representation of, 290, 297, 354
Test scope, 229, 276–277
Test tasks
  authenticity/reality reference, 98
  criteria-oriented, 99, 102, 136, 137
  developing open, 91–146
  difficulty of, 64, 98, 102, 110, 120, 123, 124, 135, 138, 142–144, 146
  evaluation and choice of test tasks, 105–125
  norm-based, 102
  representativeness of, 72, 98
  revision of, 110, 124, 125
  selection and development, 72
Test theory
  classical, 134, 156
  probabilistic, 134, 156
Total point values, 117, 138, 139, 202
Total score (TS), 120, 130, 138, 139, 202, 216, 218–221, 226, 228, 229, 234–237, 262, 281, 286, 293, 294, 297, 299, 300, 306, 308, 310, 337, 348–350, 356, 357, 363, 365, 371, 445, 517
Training
  objectives, 67, 109, 192, 200, 260, 392, 451
  paradox, 346, 430–432, 447, 455
  practical, 18, 30, 45, 49, 53, 57, 63, 76, 79, 100, 140, 151, 158, 187, 326, 373, 382, 383, 398, 408, 428, 433, 435
  programmes, 9, 100, 101, 103, 104, 253, 263, 265–267, 282, 301, 306, 309, 310, 323, 335, 382, 396, 397, 416, 425, 428
  qualities, 238, 248, 268–271, 279, 324–327, 333, 334, 341, 362, 363, 365, 366, 371, 378, 381
  regulations, 9, 20, 25, 29, 71, 80, 101, 120, 142, 194, 199–202, 206, 207, 213, 224, 300, 339, 391, 396, 406, 433
  support, 268, 269, 272, 325, 326, 328, 329, 367, 368
Typology of occupations, 86

U
Understanding of (the) context, 20, 22, 100, 299
Utility values, 57, 76, 77, 213, 214, 268, 345, 351, 352, 424, 453, 454, 461, 463

V
Validity
  consensus, 91, 187
  constructs, 128, 130, 131, 134, 154
  contents, 69–72, 97, 98, 101, 102, 110, 112, 125, 128–130, 135, 136, 155, 157, 159, 160, 166, 168, 169, 187, 209, 258, 335, 406, 422
  curricular, 9, 19, 169, 259, 260, 264, 406
  occupational, 110, 259
  professionals, 98, 112, 134, 142, 258, 259, 406
  vocational, 101
VDI, 98, 99
VET, see Vocational and educational training (VET)
Vocational and educational training (VET), 1, 22, 45, 62, 63, 68, 69, 71, 144, 155, 157, 160, 169, 192, 198, 206, 224, 254, 258, 260, 264, 300, 301, 309, 333, 334, 337, 344, 352, 381, 385, 418, 442, 445, 446
Vocational development, 355, 425
Vocational education, 1, 3–7, 11, 13–19, 24–26, 28, 30, 40–42, 44, 45, 49, 50, 59, 61–67, 69, 70, 72, 73, 80, 82, 126, 128, 130, 131, 134, 136, 141–145, 154–157, 163, 169, 171, 184, 185, 187, 197, 206, 214, 216, 218, 224, 230, 248, 249, 257, 258, 260, 262, 263, 268, 275, 276, 278, 282, 283, 293, 296–302, 306, 309–311, 323, 331–333, 335, 336, 338, 342–346, 351, 352, 354, 355, 361–363, 371–373, 383, 389, 391, 397–399, 403, 406–408, 423–428, 438, 441, 442, 446, 455, 457, 462, 470, 482, 488
Vocational identity, 1, 62, 80, 172, 174, 181, 183, 184, 323, 333, 339, 354, 355
Vocationalisation of higher education, 18, 352
Vocational learning, 4, 16, 18, 25, 32, 43, 46, 48, 56, 63, 126, 262, 263, 266, 268, 298, 333, 339, 344, 346, 354, 374, 396, 405, 426–428, 466, 488
Vocational pedagogy, 29, 402, 431
Vocational research, 17, 80, 339, 360
Vocational schools, 7, 39, 40, 101, 102, 104, 123, 144, 151, 195, 208, 226, 247, 248, 251–253, 256, 258, 263, 264, 267–272, 274–276, 278, 286, 291, 297, 299, 301, 324, 339, 344, 356, 362, 364, 372–374, 376, 378, 380–384, 391, 393, 396–399, 402, 405, 408, 416, 433, 436, 445, 468, 471
Vocational school teachers, 138, 186, 389, 391, 393, 394, 396–398, 401, 402, 406, 411, 413, 414, 416, 417, 421, 422
Vocational skills, 8, 16, 55, 110, 132, 151, 187, 194, 207, 455
Vocational tasks, 55, 59, 128, 298, 398, 411, 414, 425, 438, 444
Vocational training practice, 49, 143, 260, 282, 336, 344, 401
Vocational training systems, 2, 19, 71, 88, 89, 141, 185, 309, 342, 380, 428

W
Work
  culture, 84
  design, 6, 16, 20, 59, 68, 72, 403, 408
  design and organisation, 58, 59, 214
  ethics, 17, 81–83, 85–89, 172, 184, 311, 313, 315, 319, 339, 346, 351–355, 357, 358, 361
  experiences, 27, 28, 30, 45, 50, 51, 67, 80, 91, 343, 373, 430, 431, 434, 437, 447, 449, 451, 453, 464, 468, 471, 490
  industrial technical, 5, 51, 74, 91, 193
  morale, 81–82, 174
  organisation, 6, 11, 20, 21, 129, 466, 487, 490
  organised skilled, 16
  process, 6, 9, 18, 25, 27, 30–37, 39, 41, 44, 46, 57, 58, 66, 72, 73, 106, 121, 122, 129, 142, 157, 162, 186, 187, 190, 193, 197, 199, 202, 203, 206, 214, 266–268, 343, 346, 347, 352, 360, 381, 433, 436, 437, 453, 457, 469, 471, 477, 478, 481, 483, 490–492, 507
  process analyses, 39
  process knowledge, 30, 32, 45–48, 50, 51, 56, 65, 66, 102, 136, 142, 207, 256, 285, 293, 294, 297, 298, 346, 381, 387, 398, 405, 406, 429, 438, 454, 455, 464, 468, 469
  sample, 158
  secondary technical, 12
  situations, 10, 20, 31, 34–37, 39, 42, 44, 46, 48, 50, 51, 98, 346, 387, 406, 425–427, 431–435, 437–440, 488
  systemic, 344
  tasks, 9, 11, 20–24, 26–28, 31, 33, 40, 41, 43, 44, 48, 51, 56, 58, 59, 65, 71, 73, 80, 85, 91, 93–95, 98, 101, 128–130, 136, 198–200, 202, 207, 351, 361, 429, 432–433, 436, 437, 441–443, 487, 488
  unpredictable, 488
Work activity
  complete, 23
  incomplete, 23
Work and Technology, 6, 20, 24, 40, 41, 72, 197, 312, 342, 429, 487
Working contexts, 22, 26, 28, 36, 55, 100, 438