Abstract
The concept of “common sense” (“commonsense”) has had a visible role in the history of artificial intelligence (AI), primarily in the context of reasoning and what has been referred to as “symbolic knowledge representation.” Much of the research on this topic has claimed to target general knowledge of the kind needed to ‘understand’ the world, stories, complex tasks, and so on. The same cannot be said about the concept of “understanding”: although the term does appear in the discourse of various sub-fields (primarily “language understanding” and “image/scene understanding”), no major schools of thought, theories or undertakings can be discerned for understanding in the way they can for common sense. It is no surprise, therefore, that the relation between these two concepts remains unclear. In this review paper we discuss their relationship and examine some of the literature on the topic, as well as the systems built to explore them. We agree with the majority of authors addressing common sense on its importance for artificial general intelligence. However, we claim that while in principle the phenomena of understanding and common sense manifested in natural intelligence may share a common mechanism, the large majority of efforts to implement common sense in machines have taken an approach orthogonal to understanding proper, with different aims, goals and outcomes from what could be said to be required for an ‘understanding machine.’
Sponsored in part by the School of Computer Science at Reykjavik University and by a Centers of Excellence Grant from the Science and Technology Policy Council of Iceland.
Notes
1. http://www.cyc.com/platform/, accessed Apr. 29 2017.
2. This bears a relation to McCarthy’s (1998) concept of “elaboration tolerance”: micro-malleability is a way to imbue causal-relational models with elaboration tolerance.
3. For a thorough overview of this theory see Thórisson et al. (2016).
4. http://www.cyc.com/platform/, accessed Apr. 29 2017.
5. In a demo of Cyc given to one of the authors of this paper (Thórisson) in 1998 (around 200 images instead of 20), unexplained inconsistencies surfaced, albeit different ones from those reported by Pratt (1994).
6. This number may have originated at the MIT AI Lab (Minsky and Papert 1970); however, neither its origin nor an argument for why this number rather than some other was chosen is given in the respective publications.
7. The “symbols” in such systems have no meaning for their manipulator, and can thus only be considered tokens in a simulation whose meaning can be discerned only by their human author.
References
Bieger, J., Thórisson, K.R., Steunebrink, B.: Evaluating understanding. In: IJCAI Workshop on Evaluating General-Purpose AI, Melbourne, Australia (2017, to appear)
Cambria, E., Olsher, D., Kwok, K.: Sentic activation: a two-level affective common sense reasoning framework. In: Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, Canada (2012)
Grimm, S.R.: The value of understanding. Philos. Compass 7(2) (2012)
Grimm, S.R.: Understanding as knowledge of causes. In: Fairweather, A. (ed.) Virtue Epistemology Naturalized. Springer, New York City (2014)
Lenat, D.B.: CYC: a large-scale investment in knowledge infrastructure. Commun. ACM 38(11), 33–38 (1995)
Lenat, D.B., Guha, R.V., Pittman, K., Pratt, D., et al.: CYC: toward programs with common sense. Commun. ACM 33(8), 30–49 (1990)
Liu, H., Singh, P.: ConceptNet - a practical commonsense reasoning tool-kit. BT Technol. J. 22(4), 211–226 (2004)
McCarthy, J.: Programs with common sense. In: Proceedings of the Symposium on Mechanisation of Thought Processes. Her Majesty’s Stationery Office, London (1959)
McCarthy, J.: Situations, actions, and causal laws. Stanford Artificial Intelligence Project, Memo No. 2 (1963)
McCarthy, J.: Elaboration tolerance. In: Common-Sense 1998 (1998)
Minsky, M.: The Emotion Machine. Simon and Schuster, New York (2006)
Minsky, M., Papert, S.: Proposal to ARPA For Research on Artificial Intelligence at M.I.T., 1970–1971. Artificial Intelligence Memo No. 185, Massachusetts Institute of Technology A.I. Lab (1970)
Nivel, E., Thórisson, K.R., Steunebrink, B.R., Dindo, H., et al.: Bounded recursive self-improvement. Reykjavik University School of Computer Science Technical Report, RUTR-SCS13006 (2013). arXiv:1312.6764 [cs.AI]
Nivel, E., Thórisson, K.R., Steunebrink, B.R., Dindo, H., et al.: Autonomous acquisition of natural language. In: Proceedings of IADIS International Conference on Intelligent Systems and Agents 2014 (ISA 2014). IADIS Press, Lisbon (2014)
Nivel, E., Thórisson, K.R., Steunebrink, B., Schmidhuber, J.: Anytime bounded rationality. In: Bieger, J., Goertzel, B., Potapov, A. (eds.) AGI 2015. LNCS, vol. 9205, pp. 121–130. Springer, Cham (2015). doi:10.1007/978-3-319-21365-1_13
Panton, K., Matuszek, C., Lenat, D., Schneider, D., Witbrock, M., Siegel, N., Shepard, B.: Common sense reasoning – from CYC to intelligent assistant. In: Cai, Y., Abascal, J. (eds.) Ambient Intelligence in Everyday Life. LNCS, vol. 3864, pp. 1–31. Springer, Heidelberg (2006). doi:10.1007/11825890_1
Poria, S., Gelbukh, A., Cambria, E., Hussain, A., et al.: EmoSenticSpace: a novel framework for affective common-sense reasoning. Knowl. Based Syst. 69, 108–123 (2014)
Potter, V.G.: On Understanding Understanding: A Philosophy of Knowledge. Fordham University Press, New York (1994)
Pratt, V.: CYC Report (1994). http://boole.stanford.edu/cyc.html. Accessed 29 Apr 2017
Thórisson, K.R., Kremelberg, D., Steunebrink, B.R., Nivel, E.: About understanding. In: Steunebrink, B., Wang, P., Goertzel, B. (eds.) AGI 2016. LNCS, vol. 9782, pp. 106–117. Springer, Cham (2016). doi:10.1007/978-3-319-41649-6_11
Thórisson, K.R., Helgason, H.P.: Cognitive architectures and autonomy: a comparative review. J. Artif. Gen. Intell. 3(2), 1–30 (2012)
Thórisson, K.R., Nivel, E.: Achieving artificial general intelligence through peewee granularity. In: Proceedings of the Second Conference on Artificial General Intelligence (AGI 2009). Atlantis Press, Paris, France (2009)
Wang, P.: From NARS to a thinking machine. In: Artificial General Intelligence Research Institute Workshop, Washington DC, May 2006 (2006)
Wang, P.: Non-Axiomatic Logic - A Model of Intelligent Reasoning. World Scientific Publishing Co. Inc., Hackensack (2012)