Abstract
Human behavior is conditioned by codes and norms that constrain action. Rules, “manners,” laws, and moral imperatives are examples of classes of constraints that govern human behavior. These systems of constraints are “messy”: individual constraints are often poorly defined, the constraints relevant to a particular situation may be unknown or ambiguous, constraints interact and conflict with one another, and determining how to act within the bounds of the relevant constraints can be a significant challenge, especially when rapid decisions are needed. General, artificially intelligent agents must be able to navigate the messiness of systems of real-world constraints in order to behave predictably and reliably. In this paper, we characterize sources of complexity in constraint processing for general agents and describe a computational-level analysis of such constraint compliance. We identify key algorithmic requirements based on the computational-level analysis and outline a limited, exploratory implementation of a general approach to constraint compliance.
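To make the problem concrete, the sketch below illustrates one simple way an agent might rank candidate actions against a set of soft, possibly conflicting, situation-dependent constraints. This is a hypothetical illustration only, not the paper's implementation; the constraint names, weights, and penalty-sum scoring rule are assumptions introduced for exposition.

```python
# Hypothetical sketch: ranking candidate actions under soft, possibly
# conflicting constraints. Names, weights, and scoring rule are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Constraint:
    name: str
    weight: float                              # importance; higher = costlier to violate
    relevant: Callable[[Dict], bool]           # is this constraint active in this situation?
    satisfied: Callable[[Dict, str], bool]     # does the action comply, given the situation?

def rank_actions(situation: Dict, actions: List[str],
                 constraints: List[Constraint]) -> List[Tuple[float, str]]:
    """Order actions by total penalty from violated, relevant constraints."""
    scored = []
    for action in actions:
        penalty = sum(c.weight for c in constraints
                      if c.relevant(situation) and not c.satisfied(situation, action))
        scored.append((penalty, action))
    return sorted(scored)

# Example: a driving agent weighing a legal constraint against a safety norm.
constraints = [
    Constraint("obey_speed_limit", 2.0,
               relevant=lambda s: "speed_limit" in s,
               satisfied=lambda s, a: not (a == "accelerate" and s["speed"] >= s["speed_limit"])),
    Constraint("keep_following_distance", 3.0,
               relevant=lambda s: s.get("car_ahead", False),
               satisfied=lambda s, a: not (a == "accelerate" and s.get("gap_m", 1e9) < 20)),
]

situation = {"speed": 55, "speed_limit": 55, "car_ahead": True, "gap_m": 15}
print(rank_actions(situation, ["accelerate", "maintain", "brake"], constraints))
# -> [(0.0, 'brake'), (0.0, 'maintain'), (5.0, 'accelerate')]
```

Treating constraints as weighted penalties rather than hard filters is one way to accommodate the "messiness" discussed above: conflicting constraints do not make the decision problem unsatisfiable, and constraints whose relevance conditions do not hold impose no cost.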
Notes
1. More recent approaches to constraints extend the coverage of classical approaches but do not span all the forms of messiness we consider [18].
2. Some newer cars offer an indicator for following too closely. Thus, with a different embodiment, this constraint no longer requires active measurement.
References
Arkin, R.C., Ulam, P., Wagner, A.R.: Moral decision making in autonomous systems: enforcement, moral emotions, dignity, trust, and deception. Proc. IEEE 100(3), 571–589 (2011)
Barto, A., Mirolli, M., Baldassarre, G.: Novelty or surprise? Front. Psychol. 4, 907 (2013)
Bubic, A., von Cramon, D.Y., Schubotz, R.I.: Prediction, cognition and the brain. Front. Hum. Neurosci. 4, 25 (2010)
Dechter, R.: Constraint Processing. Morgan Kaufmann, Burlington (2003)
García, J., Fernández, F.: A comprehensive survey on safe reinforcement learning. J. Mach. Learn. Res. 16(42), 1437–1480 (2015)
Gershman, S.J.: Context-dependent learning and causal structure. Psychon. Bull. Rev. 24, 557–565 (2017)
Giancola, M., Bringsjord, S., Govindarajulu, N.S., Varela, C.: Ethical reasoning for autonomous agents under uncertainty. In: International Conference on Robot Ethics and Standards (ICRES), pp. 1–16. Taipei (2020)
Gigerenzer, G.: Fast and frugal heuristics: tools of bounded rationality. In: Handbook of Judgment and Decision Making, pp. 62–88. Blackwell, Malden (2004)
Kahneman, D.: Thinking, Fast and Slow. Doubleday, New York (2011)
Kirk, J.R., Laird, J.E.: Learning hierarchical symbolic representations to support interactive task learning and knowledge transfer. In: IJCAI 2019 (2019)
Laird, J.E.: The Soar Cognitive Architecture. MIT Press, Cambridge, MA (2012)
Lynce, I., Ouaknine, J.: Sudoku as a SAT problem. In: AI&M (2006)
Mani, G., Chen, F., et al.: Artificial intelligence’s grand challenges: past, present, and future. AI Mag. 42(1), 61–75 (2021)
Marr, D.: Vision. Freeman and Company, New York (1982)
Meseguer, P., Rossi, F., Schiex, T.: Soft constraints. In: Foundations of Artificial Intelligence, vol. 2, pp. 281–328. Elsevier (2006)
Mininger, A.: Expanding Task Diversity in Explanation-Based Interactive Task Learning. Ph.D. Thesis, University of Michigan, Ann Arbor (2021)
Pearl, J.: Reasoning under uncertainty. Ann. Rev. Comput. Sci. 4(1), 37–72 (1990). https://doi.org/10.1146/annurev.cs.04.060190.000345
Rossi, F., Mattei, N.: Building ethically bounded AI. In: 33rd AAAI Conference (2019)
Simon, H.A.: Models of Man; Social and Rational. Wiley, Oxford, England (1957)
Weidinger, L., Mellor, J., et al.: Ethical and social risks of harm from language models. arXiv:2112.04359 (2021)
Wray, R.E., Laird, J.E.: Incorporating abstract behavioral constraints in the performance of agent tasks. In: ICAI. Springer, Las Vegas, NV (2021)
Acknowledgment
This work was supported by the Office of Naval Research, contract N00014-22-1-2358. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of Defense or Office of Naval Research. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. We thank the anonymous reviewers for substantive comments and suggestions.