Abstract
The lack of standards for objectively assessing the quality of teaching has opened a new path of research. Teaching involves many different tasks and activities that should be explored; consequently, when talking about the quality of teaching, it makes sense to look at teaching as a process and to assess its maturity. This contribution briefly reviews existing approaches and introduces the idea of a teaching maturity model (TeaM) for school and university teachers. Such a framework, even though it proves helpful from a measurement perspective, might not be accepted by teachers, so this paper presents the results of a study testing the TeaM model with respect to its usability and acceptability with informatics lecturers at the Alpen-Adria-Universität Klagenfurt. The results show the interest of our teachers in the model, but also some of the impediments that have to be dealt with when applying the model on a larger scale.
1 Introduction
Quality assurance in the educational system is an active path of research that aims to provide standards for assessing quality. Researchers have already presented models within this scope, but these models typically assess quality by covering only one or two teaching factors (such as teachers or curricula). Studies by Chen et al. [1] emphasise that better teaching quality is achieved when the whole teaching process is managed. Their work is based on the concept of a maturity model from the Software Engineering Institute (SEI) of Carnegie Mellon University. The SEI addresses the quality of software development by assessing and managing the process for producing that software. The process is defined by a framework called Capability Maturity Model Integration (CMMI) [2]. In this model, levels of maturity are assigned to processes based on their performance. The model of Chen et al. is based on CMMI, and, like Chen et al., we also believe that quality is related to the management of the teaching process. Spurred by their results and the concept behind CMMI, a Teaching Maturity Model (TeaM) covering all educational levels was created. The TeaM model differs from the work of Chen et al. in that it considers not only university teachers but primary and secondary teachers as well.
The basic components of the TeaM were constructed following the strategy of the SEI. The TeaM's practices and its other specific elements were created by observing experts in teaching and by collecting best practices. The TeaM was additionally assessed by a CMMI expert. All our evaluations so far show that the TeaM seems to be consistent and to cover all aspects of the teaching process [3].
The introduction of the model in the educational domain, however, raises the question of how to integrate it into one's daily (teaching) life. Another issue is how to integrate the TeaM into educational institutions so that teachers can use it for assessing the quality of teaching in their lectures. This requires testing the model, improving it based on the feedback we receive, and, with that, also looking at its usability and acceptability. The objective of this paper is to describe our results in checking the applicability of the TeaM.
Within the scope of the paper, we collected the opinions of informatics lecturers at the Alpen-Adria-Universität Klagenfurt who used the model, and the paper reports on the most important findings.
The rest of the paper is organised as follows: Sect. 2 describes related work by giving an overview of how models like CMMI-Services and others relate to the educational system with respect to their usability and acceptability. A detailed description of how the TeaM was tested, together with the feedback from the lecturers, is presented in Sect. 3. In Sect. 4, the results of the study with respect to our model are discussed. The findings and future work are described in Sect. 5.
2 Background
This section gives a short description of related models. It briefly introduces how CMMI is structured and discusses the applicability and usability of related models in practice.
2.1 Related Work
Traditional forms of addressing the quality of teaching, such as student evaluations, feedback, peer evaluation and inspections, are seen as quite subjective. This opened up a path of research into assessment models that rely on standards. Here, many authors address the quality of teaching by focusing mainly on teachers (preparation, communication, engagement), on pupils/students, on course content, or on the environment. Taking a closer look at existing work, these models can be divided into several groups.
There are models that address the quality of teaching by focusing only on teachers. The AQRT model assesses the quality of teachers' teaching practices [4]; Chen et al. applied it in thirty physical education lessons with nine elementary physical education teachers, and the results emphasised the applicability of the model. The competence-based model is another model that assesses teaching quality through teacher-licensure tests [5]; Mehrens's study is more an investigation and analysis of licensure and teachers' competency tests. A similar approach is a competence-based model describing what teachers should know about how to teach [6], against which quality is assessed.
Other approaches consider pupils/students and their interactions with teachers when addressing quality. The CEM model is one of them; it assesses teacher quality based on students' outcomes [7]. Azam and Kingdon applied their model to compare students' examination results from the tenth grade to the twelfth grade, and based on the results (improved or not), the teacher's contribution was estimated. The National Education Association uses a standards-based learning and assessment system to show how student learning standards can be connected with teacher education and assessment [8]; although there is no concrete implementation in practice, this is how they suggest measuring the quality of teaching. The assessment of teacher competences and students' learning and feelings is integrated into another model presented by Snook et al. [9], who ran an investigation in the New Zealand school system. The Angebots-Nutzungs-Modell is another model used to address quality based on teacher-student interaction (results, feelings, and environment) [10], while TEQAS is a model in which quality is addressed by assessing teacher education [11]. Dilshad showed the applicability of the latter model by covering five quality variables through a questionnaire-based survey of 350 students on MEd programmes.
Furthermore, there is the TALIS model, which assesses quality based on the working conditions of teachers and the learning environment [12]. In this OECD technical report, the model was applied in a successful pilot test with five volunteering countries: Brazil, Malaysia, Norway, Portugal and Slovenia.
Beyond the traditional forms and assessment methods mentioned above, some maturity models based on CMMI principles have been created. Researchers in the field of computer science education adapted and created maturity models to assess and improve curricula or the institution itself [13,14,15]. In these cases, validation of the models is deferred to a later stage, and so far no results have been published. Ling et al. applied their model in a case study at a private institution of higher learning (IHL) in Malaysia and mentioned that a larger set of IHLs will be involved in the future for a better validation of the model [15].
The adaptation of CMMI to the educational domain is also seen in course design, either in a classroom environment [16] or online [17, 18]. The model of Petrie has not been validated yet [16], but Neuhauser did validate the model in relation to usability: the answers from the questionnaires revealed that 88% of the respondents found themselves in a cell within each process area [18]. Similarly, Marshall and Mitchell validated the processes and the model in the analysis of an e-learning module at a New Zealand university [17].
Likewise, in primary and secondary schools, some CMMI-like implementation models focus on the institutional level or on the syllabus [19,20,21]. Montgomery applied her model in six schools to determine the level of computer and technology use; the model provided goals and practices for making improvements [19]. Solar et al. conducted a pilot study to test the validity of their model and its associated web-support tool [20]. They tested the applicability of the model in different schools and obtained positive feedback.
Only Chen et al. established a maturity model for observing the teaching process. The model is limited to a subset of possible process areas and focuses on tertiary teachers only [1]. In their paper, Chen et al. mention the implementation of a model for primary and secondary schools, but to the best of our knowledge, such a model has not yet been implemented or published.
We believe that the quality of teaching is more than just focusing on the teacher or on the students, and more than just looking at the institution or the course content. It is rather a process that includes all of the above and more. So, unlike the aforementioned models and like Chen et al., we address the quality of teaching by looking at the teaching process as a whole. However, in contrast to Chen et al., our model considers not only tertiary teachers but primary and secondary teachers as well. A more elaborate (tabular) overview of the differences between the CMMI, TeaM and T-CMM models can be found in the work of Reçi and Bollin [3, p. 7], where the authors compare the process areas tackled by and included in the respective models.
2.2 Maturity Model in Practice
The application of maturity models is straightforward for engineers, but for teachers, such an assessment might be new. This section describes the application of a maturity model in practice, and briefly discusses usability and acceptability concerns.
The Capability Maturity Model Integration (CMMI) stemmed from the need to assess and improve the quality of products. After many years of research, the SEI collected and grouped relevant tasks and activities into so-called Process Areas (PAs). Within a PA, the essential tasks are named Specific Goals (SGs) and their related activities Specific Practices (SPs); these are unique to each PA. For the generalisation and standardisation of processes, general tasks (named Generic Goals (GGs)) and related general activities (named Generic Practices (GPs)) were also defined; these are common to all PAs [2]. Assessing the process for producing a product (software, a service, etc.) with this model has a twofold meaning: the assessment can focus on individual PAs and determine at which Capability Level (CL) the related specific tasks and activities are fulfilled, or it can check the fulfilment of tasks and activities for a predefined group of PAs that corresponds to a Maturity Level (ML). Such outputs reveal how mature the process for producing a product is. Further improvement of the process means fulfilling the group of PAs corresponding to a higher ML [2].
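To make this vocabulary concrete, the following Python sketch models the element hierarchy just described. It is a minimal illustration only: the class names and the simplified capability-level rule are our own, not the official CMMI appraisal method [2].

```python
from dataclasses import dataclass, field

@dataclass
class Practice:
    pid: str                 # e.g. a specific practice "SP 1.1"
    satisfied: bool = False  # appraisal outcome for this practice

@dataclass
class Goal:
    gid: str                                   # e.g. "SG 1" or "GG 2"
    practices: list = field(default_factory=list)

    def is_satisfied(self) -> bool:
        # A goal holds when all of its practices are implemented.
        return all(p.satisfied for p in self.practices)

@dataclass
class ProcessArea:
    name: str
    specific_goals: list = field(default_factory=list)
    generic_goals: list = field(default_factory=list)  # shared across PAs

    def capability_level(self) -> int:
        # Toy rule: CL 0 until all specific goals hold, then one extra
        # level per satisfied generic goal (a simplification of CMMI).
        if not all(g.is_satisfied() for g in self.specific_goals):
            return 0
        return 1 + sum(g.is_satisfied() for g in self.generic_goals)

pa = ProcessArea(
    name="Example PA",
    specific_goals=[Goal("SG 1", [Practice("SP 1.1", True), Practice("SP 1.2", True)])],
    generic_goals=[Goal("GG 2", [Practice("GP 2.1", True)])],
)
print(pa.capability_level())  # -> 2
```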
Naturally, the question arises of what an assessment with a maturity model looks like. For conducting the assessment, CMMI provides specific appraisal methods consisting of implementation steps, and the assessment is conducted by a CMMI Institute certified assessor. The steps start with an analysis of the requirements, which determines which processes (sectors) a company wants to assess. This is followed by the development of an appraisal plan and the selection and preparation of an assessment team. The PAs are selected and a catalogue of questions is prepared. CMMI-Services contains a total of 24 PAs, each with corresponding goals and practices, which means that interviewees have to work through a catalogue of many questions. This requires considerable time, and the quality and quantity of the questions matter, as they can influence the ranking of the company at the appropriate maturity level. In the last steps, artifacts are obtained and the appraisal is conducted [2].
One major problem when discussing the quality of maturity models is the time consumed in planning, answering and conducting the appraisal. Further issues are the quality and quantity of the questions and, consequently, the fact that a maturity rating might affect the company (in terms of money, success, etc.). However, the published "appraisal results directory" from the CMMI Institute demonstrates the usability and applicability of the CMMI model [22].
The Software Engineering Institute has put much effort into producing a consistent version of CMMI, involving a long process of studies and improvements over the last 30 years. Although the model is applied in practice, parts of it are still being improved: it is a continuous improvement process. The same holds for the TeaM model; several studies are required to produce a better version of it.
3 Validating the TeaM
The TeaM was built out of the need for standards to address the quality of teaching. The particularity of the model is that it addresses quality by considering the teaching process as a whole, with regard to teachers at universities as well as primary and secondary schools. Using the model then either helps the educational institution to evaluate and improve its quality of teaching (by producing a ranking when required), or it helps teachers to evaluate and improve their teaching process on their own. Within the TeaM, the teaching process is composed of four phases:
- Initialisation, where administrative issues are managed;
- Preparation, where the course is planned and prepared by teachers;
- Enactment, where the implementation of the teaching unit takes place;
- Quality and Incident Control, where possible incidents and the teaching process itself are observed, analysed and refined.
For each of these phases, factors related to the quality of teaching are determined; in TeaM terminology they are called Process Areas (PAs). Each PA contains a collection of goals and activities (practices), and implementing these goals and practices indicates whether a PA is satisfied; in the TeaM this is called "reaching a Capability Level". When a predefined group of PAs is satisfied up to the maximum Capability Level, a Maturity Level is also reached, which expresses how mature the teaching process is. Achieving a higher Maturity Level (and thereby improving the teaching process) means satisfying all the PAs associated with that Maturity Level.
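As an illustration of this staged logic, the following Python sketch derives a maturity level from a set of satisfied PAs. The level logic follows the description above, but the PA names and their grouping into maturity levels are hypothetical placeholders; the actual assignment is defined in the TeaM description [3].

```python
# Hypothetical appraisal output: the PAs found to be satisfied for a course.
satisfied_pas = {"Course Planning", "Unit Delivery"}

# Hypothetical staged grouping of PAs (placeholder names, not from [3]):
ml_groups = [
    {"Course Planning", "Unit Delivery"},         # all satisfied -> ML 2
    {"Incident Handling", "Process Refinement"},  # additionally  -> ML 3
]

def team_maturity_level(satisfied: set, groups: list) -> int:
    level = 1  # the initial level every teaching process starts at
    for ml, group in enumerate(groups, start=2):
        if group <= satisfied:  # every PA in this group is satisfied
            level = ml
        else:
            break               # higher levels require all lower groups too
    return level

print(team_maturity_level(satisfied_pas, ml_groups))  # -> 2
```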
A detailed description of the TeaM and its PAs can be found in the paper by Reçi and Bollin [3], where a first assessment of its consistency is also presented. For the validation presented in this study, another survey (including two questionnaires and one interview) was conducted. For this, the practices of the TeaM were mapped to 76 questions in the first questionnaire (comparable to CMMI appraisals), helping us to assess the quality of the model. The second questionnaire (containing 7 questions) then focused on applicability considerations.
3.1 Study Objectives
Having the size of the TeaM and its time requirements (when applied in practice) in mind, the objective of the study was to test the TeaM in terms of usability and acceptability with teachers at the University of Klagenfurt. In the context of this paper, we tried to answer the following question: how do lecturers at the Alpen-Adria-Universität Klagenfurt perceive the applicability of the TeaM?
To address this objective and answer the question, a structured interview accompanied by a questionnaire was conducted.
3.2 Research Settings
A survey (including a questionnaire and interviews) was used as the research instrument to assess the applicability of the TeaM in practice. The assessment was planned in a similar way to CMMI appraisals: at first, we identified potential lectures and lecturers at our university. Thirty informatics courses from our bachelor and master programmes at the Alpen-Adria-Universität were selected at random. The experimental subjects were the lecturers of these courses, who were then interviewed. Of the 30 lecturers whose courses were selected, only 13 participated and answered the questionnaire. The lecturers varied in their teaching experience from 3 to 25 years. Only one lecturer was female, but all of them specialised in the field of informatics and teach in the bachelor and master programmes.
In comparison to CMMI, the TeaM has a total of 12 PAs with 31 related goals and 76 practices. From the practices of each PA, a catalogue of questions was derived, containing 76 "yes/no" questions representing the 76 practices of the TeaM. For instance, the practice "SP1.2.1.2 Arrange the Classroom Atmosphere" is mapped to the question (translated to English): "7. Do you attempt to provide an adequate atmosphere in the classroom?" The same strategy was applied to all the other practices. To support the appraisal process, the 76 questions were provided in electronic form using Google Forms. This makes the questions public and accessible to anyone interested in using the model, and participation remains anonymous as no personal data are collected. The link to the questionnaire is maintained on the website of our department (in the project section under the name "TeaM model"), together with a file containing the detailed description of TeaM Version 1.6 (including the 76 practices) [23]. Teachers and educators are invited to join the project and to report on their personal experience with it.
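The mapping from practices to questions can be pictured as a simple lookup from practice IDs to question texts, with "yes" answers marking satisfied practices. In the sketch below, the first entry quotes the practice and question from this section; the second practice ID and question are invented for illustration.

```python
# Practice-to-question catalogue (first entry from the paper, second hypothetical).
practice_to_question = {
    "SP1.2.1.2": "Do you attempt to provide an adequate atmosphere in the classroom?",
    "SP1.3.1.1": "Do you define learning goals for each teaching unit?",  # hypothetical
}

# One interviewee's yes/no responses, keyed by practice ID.
answers = {"SP1.2.1.2": "yes", "SP1.3.1.1": "no"}

# A practice counts as satisfied when its question was answered "yes".
satisfied = {pid for pid, ans in answers.items() if ans == "yes"}
print(f"{len(satisfied)} of {len(practice_to_question)} practices satisfied")
```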
For performing the appraisal, two non-expert assessors (members of the Informatics Didactics Department of the Alpen-Adria-Universität Klagenfurt) conducted the interviews. During the interviews, the teachers were given two questionnaires. The first contained the 76 questions related to the 76 practices of the TeaM; this was necessary in order to introduce the model to the teachers (by applying it in practice). The second questionnaire (with 7 questions) then focused on the two dimensions of usability and acceptability, and it was given to the teachers after applying the TeaM. The questions focused on:
- (Q1) the time to fill out the TeaM questionnaire;
- (Q2) the understandability of the questions;
- (Q3) how much they liked filling out the questionnaire;
- (Q4) the assumed benefit of the model in the future;
- (Q5) the relevance of the model for assessing the quality of teaching;
- (Q6) whether the model would criticise the teachers' way of teaching;
- (Q7) other observations or ideas to share.
The results are presented in detail in Sect. 3.3; the presentation of the results from the first questionnaire (the TeaM assessment) is not in the scope of this paper. In a next step, however, the results from applying the TeaM in practice will be analysed to see whether there is a correlation between the TeaM maturity levels generated for each course and the feedback provided in the ZEUS system at the University of Klagenfurt, as sketched below.
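As a sketch of what such a correlation analysis could look like, the snippet below computes a Spearman rank correlation between per-course maturity levels and evaluation scores. All numbers are invented stand-ins for real TeaM and ZEUS data, and Spearman is our assumption of a method suited to the ordinal maturity scale; the paper does not prescribe one.

```python
from scipy.stats import spearmanr

maturity_levels = [2, 3, 2, 4, 3]            # hypothetical TeaM levels per course
zeus_feedback   = [3.1, 4.0, 2.8, 4.4, 3.6]  # hypothetical ZEUS evaluation means

# Rank correlation between process maturity and course feedback.
rho, p = spearmanr(maturity_levels, zeus_feedback)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```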
3.3 Study Results
The 13 lecturers participating in the survey worked through all the "yes/no" questions about the practices of the TeaM and, at the end, gave their opinion about missing or relevant practices of the model. Additionally, the questionnaire with 7 questions was given to them to better understand their perceptions of the usability and acceptability of the model.
(Q1) The first question was related to the time required to fill out the questionnaire. On average, answering the 76 questions took 30 min. Only one interview lasted longer (56 min), because the assessor read out the questions and the interviewee then read them himself once more.
(Q2) The second question dealt with the understandability of the questions in the first questionnaire; we were looking for ambiguities. Five questions needed explanation from the assessor because their structure misled the interviewees: these questions combined clauses with "and/or" conjunctions, which confused the interviewees. Examples of such questions were: "Do you consider other requirements that might come from students/pupils (like explanation of a new term, repetition of an exercise, etc.), OR administration (like substituting a colleague in one teaching hour because she/he is sick)?"; "Do you consider AND document problems during units' delivery?" Another problem was a set of questions related to existing curricula: as some courses are not based on a single curriculum, a correct answer was impeded as well.
(Q3) The third question produced a rating of the process of filling out the questionnaire, from unpleasant (1) to wonderful (10). The interviewees rated it with a 6. This was related to the unclear structure of some sentences and to the fact that they had to think about their teaching process for the first time. This created a little tension: they tried to explain why their answers were "no" or why "bad" things had happened in their courses. The assessors think that the TeaM questionnaire might work better without the presence of an assessor. Nevertheless, the interviewees expressed deep interest in the model.
(Q4) The fourth question concerned the future benefit of using the model. The interviewees liked that thinking about the questions helped to improve their teaching, so they saw an advantage in using the model. The only problem identified was related to the documentation practices required by the model.
(Q5) The fifth question examined whether the TeaM is relevant or appropriate for assessing the quality of teaching. None of the interviewees raised a concern that any of the questions was unrelated to the quality of teaching; they saw the model as a good collection of standards to follow when addressing quality.
(Q6) The sixth question looked more closely at the interviewees' fear that such an assessment could criticise their way of teaching. In a way, the answers were "yes". They had already expressed this in question 3, where they voiced worries about questions they could only answer with "no", mostly in relation to the documentation practices.
(Q7) Last but not least, they were asked for other observations or ideas to share. They thought that providing more information with the questions, so that no assessor has to participate during the assessment, would let them answer with less tension. Most questions were well understandable and also interesting to think about; the very process of trying to answer the questions and reflecting on their own teaching process was felt to be worthwhile.
4 Discussion
Analysing the collected feedback, it is noticeable that the model somewhat surprised the interviewees: it made them think (maybe for the first time) about teaching as a process. Going back to the questionnaire, it is obvious that, in comparison to the CMMI question catalogue, answering the TeaM questions does not take much time (referring to Q1). This matters when thinking about the model as part of assessing and improving one's own work.
Based on the results of Q2, we see that the TeaM needs to be improved regarding the structure of its "and/or" sentences, even though splitting them will yield a slightly larger number of questions and consequently higher time consumption.
Answering the main question related to the objective of this paper: the TeaM is perceived as interesting from the general point of view of the lecturers at the Alpen-Adria-Universität. Providing an improved version of the model (with clearer questions and without an assessor) will further motivate teachers to use it in practice. Clearly, at least within the scope of the study, the model is applicable by the teachers at the Alpen-Adria-Universität Klagenfurt. Another benefit to consider: just by introducing the TeaM, the idea of seeing their own teaching process in more detail was planted in the heads of the participants. If the TeaM is perceived more as a self-assessment framework than as a ranking generator, its uptake in the educational domain could be greater.
5 Summary and Future Work
The TeaM is an ongoing project at the Alpen-Adria-Universität Klagenfurt. At first sight, it can be seen as a model for ranking, which might create doubt among teachers as to whether to use it. However, the main aim of the TeaM is not to create a ranking between teachers or educational institutions (even though one might do so); it aims at providing a framework that helps teachers to assess the quality of their teaching and tells them how to improve.
After extensive theoretical work, the TeaM is now consistent, and its applicability in practice has been tested for the first time. Based on the results presented in this paper, it seems that teachers can use it to assess their teaching process.
As future work, we plan to test the model in other courses at the university and in schools, and to produce stable maturity levels based on the results. Further future work is the extension of the TeaM by an advisory framework: the practices of the model will then be presented in the form of a checklist, clearly defined and annotated, and future users will not need the presence of an assessor to conduct the appraisal.
References
1. Chen, C.Y., Chen, P.C., Chen, P.Y.: Teaching quality in higher education: an introductory review on a process-oriented teaching-quality model. Total Qual. Manag. Bus. Excell. 25, 36–56 (2014)
2. Forrester, E.C., Buteau, B.L., Shrum, S.: CMMI for Services: Guidelines for Process Integration and Product Improvement, 2nd edn. Pearson Education Inc., Fort Worth (2011)
3. Reçi, E., Bollin, A.: Managing the quality of teaching in computer science education. In: Pieterse, V., van Eekelen, M., Giannakos, M. (eds.) Proceedings of CSERC 2017: The 6th Computer Science Education Research Conference, pp. 38–47. Helsinki, Finland (2017)
4. Chen, W., Mason, S., Stainszewski, C., Upton, A., Valley, M.: Assessing the quality of teachers' teaching practices. Educ. Assess. Eval. Account. 24(1), 25–41 (2012)
5. Mehrens, W.A.: Assessing the quality of teacher assessment tests. In: Assessment of Teaching: Purposes, Practices and Implications for the Profession, pp. 77–136. The Buros-Nebraska Series on Measurement and Testing, DigitalCommons, University of Nebraska, Lincoln, NE (1990)
6. Sekretariat der Ständigen Konferenz der Kultusminister der Länder in der Bundesrepublik Deutschland: Standards für die Lehrerbildung: Bildungswissenschaften. Beschluss der Kultusministerkonferenz. Ständige Konferenz der Kultusminister der Länder in der Bundesrepublik Deutschland, Germany (2004)
7. Azam, M., Kingdon, G.: Assessing the teaching quality in India. IZA Discussion Paper (2014). https://ssrn.com/abstract=2512933
8. National Education Association: Framework for Transforming Education Systems to Support Effective Teaching and Improve Student Learning (2010). http://www.nea.org/home/41858.htm
9. Snook, I., Neill, J., Birks, S., Church, J., Rawlins, P.: The Assessment of Teacher Quality: An Investigation into Current Issues in Evaluating and Rewarding Teachers. Institute of Education, Massey University, Auckland, New Zealand (2013)
10. Helmke, A.: Studienbrief Unterrichtsdiagnostik. Projekt EMU (Evidenzbasierte Methoden der Unterrichtsdiagnostik) der Kultusministerkonferenz. Universität Koblenz-Landau, Landau (2011)
11. Dilshad, R.M.: Assessing quality of teacher education: a student perspective. Pak. J. Soc. Sci. 30, 85–97 (2010)
12. OECD: TALIS Technical Reports. Teaching and Learning International Survey (2008). http://www.oecd.org/education/talis
13. Lutteroth, C., Reilly, A., Dobbie, G., Hamer, J.: A maturity model for computing education. In: Mann, S., Simon (eds.) Proceedings of the 9th Australasian Conference on Computing Education, vol. 66, pp. 107–114. Australian Computer Society, Ballarat (2007)
14. Duarte, D., Martins, P.: A maturity model for higher education institutions. J. Spat. Organ. Dyn. 1(1), 25–44 (2013)
15. Ling, T., Jusoh, Y., Abdullah, R., Hayati Alwi, N.: A review study: applying capability maturity model in curriculum design process for higher education. J. Adv. Sci. Arts 3(1), 46–55 (2012)
16. Petrie, M.L.: A model for assessment and incremental improvement of engineering and technology education in the Americas. In: Second LACCEI International Latin American and Caribbean Conference for Engineering and Technology, Miami, FL (2004). https://www.researchgate.net/publication/254888917
17. Marshall, S., Mitchell, G.: Applying SPICE to e-learning: an e-learning maturity model? In: Lister, R., Young, A. (eds.) Proceedings of the Sixth Australasian Conference on Computing Education, vol. 30, pp. 185–191. Australian Computer Society, Ballarat (2004)
18. Neuhauser, C.: A maturity model: does it provide a path for online course design? J. Interact. Online Learn. 3(1), 1–17 (2004)
19. Montgomery, B.: Developing a technology integration capability maturity model for K-12 schools. Diploma thesis, Concordia University, Montreal, Canada (2003)
20. Solar, M., Sabattin, J., Parada, V.: A maturity model for assessing the use of ICT in school education. J. Educ. Technol. Soc. 16(1), 206–218 (2013)
21. White, B., Longenecker, H., Leidig, P., Yarbrough, D.: Applicability of CMMI to the IS curriculum: a panel discussion. Presented at the Information Systems Education Conference, EDSIG, San Diego, CA (2003). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.586.2876&rep=rep1&type=pdf
22. CMMI Institute: Published Appraisal Results (no date). https://sas.cmmiinstitute.com/pars/pars.aspx
23. IID: Research projects (2018). http://iid.aau.at/bin/view/Main/Projects