Abstract
Argumentation is a promising approach to handle inconsistent knowledge bases, based on the justification of plausible conclusions by arguments. Because of inconsistency, however, arguments may be defeated by counterarguments (or defeaters). The problem is thus to select the most acceptable arguments. In this paper we investigate preference-based acceptability. The basic idea is to accept undefeated arguments and also arguments that are preferred to their defeaters. We say that these arguments defend themselves against their defeaters. We define argumentation frameworks based on that preference-based acceptability. Finally, we study associated inference relations for reasoning with inconsistent knowledge bases.
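The acceptability criterion described above can be sketched in a few lines of code: an argument is accepted when it is undefeated, or when it is strictly preferred to every one of its defeaters and so "defends itself". This is a minimal illustrative sketch, not the paper's formal framework; the names `acceptable`, `defeats`, and `prefers`, and the rank-based preference, are assumptions for the example.

```python
def acceptable(arguments, defeats, prefers):
    """Return the preference-based acceptable arguments.

    arguments: iterable of argument names.
    defeats:   set of (attacker, target) pairs.
    prefers:   prefers(a, b) -> True iff a is strictly preferred to b.
    """
    result = set()
    for a in arguments:
        defeaters = {b for (b, target) in defeats if target == a}
        # Vacuously true when a has no defeaters (undefeated arguments).
        if all(prefers(a, b) for b in defeaters):
            result.add(a)
    return result


# Toy example: c defeats a, but a is preferred to c, so a defends itself;
# a defeats b and b is not preferred to a, so b is rejected;
# c is undefeated, so c is accepted.
rank = {"a": 2, "b": 1, "c": 1}  # higher rank = more preferred (assumed ordering)
print(sorted(acceptable({"a", "b", "c"},
                        {("c", "a"), ("a", "b")},
                        lambda x, y: rank[x] > rank[y])))
```

In the toy example the set of acceptable arguments is `{a, c}`: `a` because it defends itself against `c`, and `c` because it is undefeated.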
Cite this article
Amgoud, L., Cayrol, C. Inferring from Inconsistency in Preference-Based Argumentation Frameworks. Journal of Automated Reasoning 29, 125–169 (2002). https://doi.org/10.1023/A:1021603608656