Abstract
Previous studies in our laboratory have shown the benefits of immediate feedback on cognitive performance for pathology residents using an intelligent tutoring system (ITS) in pathology. In this study, we examined the effect of immediate feedback on metacognitive performance, and investigated whether other metacognitive scaffolds would support metacognitive gains when immediate feedback was faded. Twenty-three participants were randomized into intervention and control groups. For both groups, periods working with the ITS under varying conditions were alternated with independent computer-based assessments. On day 1, a within-subjects design was used to evaluate the effect of immediate feedback on cognitive and metacognitive performance. On day 2, a between-subjects design was used to compare the use of other metacognitive scaffolds (intervention group) against no metacognitive scaffolds (control group) on cognitive and metacognitive performance, as immediate feedback was faded. Measurements included learning gains (a measure of cognitive performance), as well as several measures of metacognitive performance, including the Goodman–Kruskal gamma correlation (G), bias, and discrimination. For the intervention group, we also computed metacognitive measures during tutoring sessions. Results showed that immediate feedback in an intelligent tutoring system had a statistically significant positive effect on learning gains, G, and discrimination. Removal of immediate feedback was associated with decreasing metacognitive performance, and this decline was not prevented when students used a version of the tutoring system that provided other metacognitive scaffolds. Results obtained directly from the ITS suggest that other metacognitive scaffolds do have a positive effect on G and discrimination as immediate feedback is faded. We conclude that immediate feedback had a positive effect on both metacognitive and cognitive gains in a medical tutoring system. Other metacognitive scaffolds were not sufficient to replace immediate feedback in this study. However, results obtained directly from the tutoring system are not consistent with results obtained from assessments. To facilitate transfer to real-world tasks, further research will be needed to determine the optimal methods for supporting metacognition as immediate feedback is faded.
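The three calibration measures named in the abstract (the Goodman–Kruskal gamma G, bias, and discrimination) can all be computed from paired confidence judgments and answer correctness. The sketch below is illustrative only: the function names, the [0, 1] confidence scale, and the binary correctness coding are our assumptions, not the instrumentation used in the study.

```python
def gamma(confidence, correct):
    """Goodman-Kruskal gamma: (concordant - discordant) / (concordant + discordant)
    over all item pairs, ignoring ties. Measures relative metacognitive accuracy."""
    concordant = discordant = 0
    n = len(confidence)
    for i in range(n):
        for j in range(i + 1, n):
            d = (confidence[i] - confidence[j]) * (correct[i] - correct[j])
            if d > 0:
                concordant += 1
            elif d < 0:
                discordant += 1
    if concordant + discordant == 0:
        return 0.0  # all pairs tied: gamma undefined, report 0 by convention
    return (concordant - discordant) / (concordant + discordant)

def bias(confidence, correct):
    """Mean confidence minus mean accuracy: positive values indicate
    overconfidence, negative values underconfidence."""
    n = len(confidence)
    return sum(confidence) / n - sum(correct) / n

def discrimination(confidence, correct):
    """Mean confidence on correct items minus mean confidence on incorrect
    items: higher values mean confidence separates right from wrong answers."""
    right = [c for c, k in zip(confidence, correct) if k == 1]
    wrong = [c for c, k in zip(confidence, correct) if k == 0]
    if not right or not wrong:
        return 0.0  # undefined when all answers are correct or all incorrect
    return sum(right) / len(right) - sum(wrong) / len(wrong)
```

For example, confidence ratings [0.9, 0.8, 0.3, 0.2] against correctness [1, 1, 0, 0] yield G = 1.0 (every informative pair is concordant), a small positive bias (about 0.05), and a discrimination of 0.6.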
Acknowledgments
Work on this project was supported by a grant from the National Library of Medicine (R01 LM007891). The work was conducted using the Protégé resource, which is supported by grant LM007885 from the United States National Library of Medicine. We thank Lucy Cafeo for editorial assistance.
Appendices
Appendix 1: metacognitive pseudo-dialog
During case
1. Having a strategy usually helps in solving any problem. What sentence would best describe your strategy at the beginning of a case?
   a. I try to place the case first in a general category of disease.
   b. I try to remember a similar case.
   c. I try to look at the whole slide and then focus on finding features.
   d. I like to hypothesize and then confirm or dispute my hypothesis.
2. At any given time in problem solving, it is usually helpful to have a strategy in place. What sentence would best describe your strategy while viewing a case?
   a. I try to look for a pathognomonic feature that would narrow the differential diagnosis as early as possible.
   b. I have a systematic approach that narrows the differential diagnosis in a gradual, step-by-step fashion.
   c. I identify all the features in the order I see them until I come up with a differential diagnosis.
   d. I ask for help to have an idea of features I should be looking for in the case.
   e. I try to place the case in a general schema (visual representation in my head) of concepts and diagnoses.
3. What is the specific purpose for the strategy you are using?
   a. This strategy has worked in the past.
   b. This strategy is effective in this case.
   c. I don't know of another strategy.
   d. I don't have any strategy.
4. Did you set specific goals before viewing this case? What are they?
5. What features are most important to identify in this case? Why are they important?
6. Did you ask yourself questions about the case before you began? What were these questions?
7. What sentence would best describe your goals at this point?
   a. I have a hypothesis but I am not able to confirm or dispute it.
   b. I am trying to locate all the features on the slide before making a hypothesis.
   c. I am just looking around the slide trying to find something familiar.
   d. I am completely lost and I need help.
8. Did you slow down when you encountered important features like <random_feature>?
9. What features are possibly associated with the <random_hypothesis>?
10. One way that expertise can be acquired is to develop a schema or representation of concepts such as visual features and their relationships to different diagnoses. Do you have a schema (visual representation) in your head that helps you integrate the information you learn after each case?
11. Studies show the amount you learn is related to your motivation. How motivated are you to learn something new from the tutor?
12. It is important to identify one's intellectual strengths and build on them to account for one's weaknesses. What sentence would best describe your intellectual strengths and weaknesses?
   a. I am better at viewing the slide when I have studied and know the educational concepts for a case.
   b. I can learn how to diagnose the case even without knowing anything about the subject.
   c. I like to read a textbook before attempting to view cases.
   d. I cannot learn about a case until I have been taught the educational concepts for a case.
13. Locking onto a salient feature at initial presentation and failing to shift from your first impression of the case is a common heuristic error in medical decision making. Which of the following heuristics does this describe?
   a. Anchoring
   b. Pseudo-diagnosticity
   c. Satisficing
   d. Representativeness
   e. I do not know
14. A common heuristic error is the tendency not to search for other possible diagnoses once a satisfactory solution has been reached, leading to premature diagnostic closure of the case. Which of the following heuristics does this describe?
   a. Anchoring
   b. Pseudo-diagnosticity
   c. Satisficing
   d. Representativeness
   e. I do not know
15. Seeking features that confirm an initial diagnosis but not seeking features that support a competing diagnosis is known as which of these options?
   a. Anchoring
   b. Pseudo-diagnosticity
   c. Satisficing
   d. Representativeness
   e. I do not know
16. A heuristic error commonly made through categorization of cases depending on one prototypical feature is known as which of these options?
   a. Anchoring
   b. Pseudo-diagnosticity
   c. Satisficing
   d. Representativeness
   e. I do not know
End of case
1. Are you consciously focusing your attention on important features? What are these features?
2. It is a good strategy to stop and review a case to help understand important relationships between features and diagnoses. How frequently are you reviewing the case to help understand these relationships?
3. Now that you have reached <random_diagnosis>, what sentence would best describe what you did?
   a. I think it would have been easier to reach the diagnosis if I had asserted a hypothesis earlier.
   b. I think I should have asked for more help.
   c. I think I should have studied the slide more carefully and identified more features before I attempted a hypothesis.
   d. I worked through the case efficiently and should not have done anything differently.
4. What sentence would best describe your asking for help?
   a. I ask for help only when I need it.
   b. I frequently ask for help to be sure of my work.
   c. I never ask for help.
   d. I only ask for help if everything else fails.
5. It is a good strategy to link relationships between features and diagnoses in different cases. Is a <random_feature> related to what you have already seen in previous cases? Explain.
6. It is a good strategy to stop and reevaluate your actions to ensure that they are consistent with your goal. How did you reevaluate <random_hypothesis>?
7. What sentence would best describe how you learn?
   a. I have a schema (visual representation in my head) for features and their relationships with a diagnosis.
   b. I just take each case individually.
   c. I try to remember previous cases and link them to what is in the slide.
   d. I try to remember what I read about the subject.
8. What sentence would best describe how correct you are in identifying the features and reaching a diagnosis?
   a. I am always sure when I am correct and when I am incorrect.
   b. Most of the time, I am sure when I am correct.
   c. Most of the time, I am sure when I am incorrect.
   d. I am never sure how correct or incorrect I am.
9. What sentence would best describe how much you learned in comparison to what the tutor expected you to learn in this case?
   a. I think I learned most of what the tutor wanted me to learn.
   b. I don't think I learned what the tutor wanted me to learn.
   c. I think I got some of what the tutor wanted me to learn.
   d. I don't think the tutor was teaching me anything new.
10. Did you ever lock onto a salient feature at initial presentation and fail to shift from your first impression of the case? This is a common heuristic error made in the medical field and is known as anchoring.
11. Did you consider multiple hypotheses for the case? A common heuristic error known as satisficing is the tendency not to search for other possible diagnoses once a satisfactory solution has been reached, leading to premature diagnostic closure of the case.
12. When trying to reach your definitive diagnosis from the list of hypotheses, did you seek both confirming and disputing findings? A common heuristic error known as pseudo-diagnosticity occurs when data are sought that confirm one hypothesis but not its competitors.
13. Do you try to identify multiple features to support a hypothesis? Representativeness is a heuristic error commonly made through categorization of cases depending on one prototypical feature.
Inspectable student model (knowledge explorer) questions at end of case
1. Using the Knowledge Explorer, can you tell if you were right or wrong about <wrong_diagnosis_or_hypothesis>? If you were wrong, what error did you make?
2. Looking at the Knowledge Explorer, what things have you seen but haven't learned?
3. Click on the Self Check Summary tab. For items you were wrong or unsure about (refer to the self check column), how much knowledge does the tutor think you have about them?
Appendix 2: foil dialog
Early in case
1. Why do you need high power for the diagnosis of this case?
2. Why do you need low/medium power for the diagnosis of this case?
3. Do you have a set of feature(s) you look for in this kind of case? If yes, please list them.
4. Why did you feel feature X was important in coming to a diagnosis?
5. What other feature(s) can you confuse with feature X?
6. What is the differential diagnosis you think of when you see feature X?
7. What are some important features related to hypothesis X?
8. Does the attribute Z of feature X have other values?
9. What are the hypotheses supported by feature X?
Later in case (after at least one diagnosis has been suggested):
1. What feature(s) do you think are the most crucial in coming to the diagnosis of this case?
2. What are some important features that you have learned in diagnosing diagnosis X?
3. What do you think is the stain used in this slide? When you sign out, would you like to have more stains done on the same specimen?
4. What are the features easily identified by using H&E?
5. What are the features easily identified by using PAS-D?
6. What are the features easily identified by using colloidal iron?
7. If you had to sign out this case, is there any additional information, or are there any tests, that you would like to have completed? If so, why?
8. Are there any additional features that you would have liked to identify in coming to the diagnosis that were not in the tutor?
9. The diagnosis X is supported by the presence of what features?
Cite this article
El Saadawi, G.M., Azevedo, R., Castine, M. et al. Factors affecting feeling-of-knowing in a medical intelligent tutoring system: the role of immediate feedback as a metacognitive scaffold. Adv in Health Sci Educ 15, 9–30 (2010). https://doi.org/10.1007/s10459-009-9162-6