Abstract
This paper argues for a revival of a mentalist approach to modeling human intelligence. The fields of artificial intelligence and natural language processing have over the past two decades been dominated by empirical approaches based on analogical reasoning, distributional semantics, machine learning and what is today called Big Data. This has led to a variety of gradual technological advances. True advances, however, are predicated on developing and testing explanatory theories of human behavior. This latter activity must include accounts of “directly unobservable” phenomena, such as human beliefs, emotions, intentions, plans and biases. This task is addressed by the field of cognitive systems. It is extraordinarily complex but unavoidable if the goal is success in modeling complex—and not entirely rational—human behavior.
Notes
1. Unfortunately, this complexity was not sufficiently understood even by many of the early AI researchers, who on numerous occasions overplayed their hands by claiming imminent success of advanced applications—from machine translation to expert systems—that never materialized. Note, however, that this does not mean that the scientific paradigm in which they worked is invalid.
2. Minsky believes that “a clear definition can make things worse, until we are sure that our ideas are right” (op. cit., p. 95).
3. This opinion may be a manifestation of the exposure effect cognitive bias described by Kahneman (2011): “A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth” (ibid., p. 62).
References
Bryson, J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions. Amsterdam: John Benjamins.
Campbell, M., Hoane, A. J., & Hsu, F. H. (2002). Deep Blue. Artificial Intelligence, 134, 57–83.
Christian, B. (2011). Mind vs. machine. The Atlantic.
Economist. (2011). Indolent or aggressive: A computerized pathologist that can outperform its human counterparts could transform the field of cancer diagnosis. The Economist, December 3, 2011.
Erman, L. D., Hayes-Roth, F., Lesser, V. R., & Reddy, D. R. (1980). The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainty. ACM Computing Surveys, 12(2), 213–253.
Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., et al. (2010). Building Watson: An overview of the DeepQA project. AI Magazine.
Hayes-Roth, B. (1985). A blackboard architecture for control. Artificial Intelligence, 26(3), 251–321.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Landauer, T. K., Foltz, P. W., & Laham, D. (1998). Introduction to latent semantic analysis. Discourse Processes, 25, 259–284.
Langley, P., & Choi, D. (2006). A unified cognitive architecture for physical agents. Proceedings of the Twenty-First National Conference on Artificial Intelligence. Boston: AAAI Press.
Manning, C. (2004). Beyond the Thunderdome. Proceedings of CoNLL-2004.
Minsky, M. (2006). The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind. New York: Simon & Schuster.
Nisbett, R. E., & Wilson, T. D. (1977). The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35(4), 250–256.
Traum, D. R., Swartout, W., Marsella, S., & Gratch, J. (2005). Virtual humans for non-team interaction training. Proceedings of the AAMAS Workshop on Creating Bonds with Embodied Conversational Agents.
Weizenbaum, J. (1966). ELIZA - a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.
Wilks, Y. (2010). On being a Victorian companion. In Y. Wilks (Ed.), Close engagements with artificial companions. Amsterdam: John Benjamins.
Acknowledgements
Many thanks to Marge McShane for productive criticism, apt suggestions and editorial help and to Pat Langley for critiquing an earlier draft of this paper.
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this chapter
Nirenburg, S. (2015). Cognitive Systems as Explanatory Artificial Intelligence. In: Gala, N., Rapp, R., Bel-Enguix, G. (eds) Language Production, Cognition, and the Lexicon. Text, Speech and Language Technology, vol 48. Springer, Cham. https://doi.org/10.1007/978-3-319-08043-7_4
DOI: https://doi.org/10.1007/978-3-319-08043-7_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-08042-0
Online ISBN: 978-3-319-08043-7