Several forms of iterable belief change exist, differing in the kind of change and its strength: some operators introduce formulae, others remove them; some add formulae unconditionally, others only as additions to the previous beliefs; some only relative to the current situation, others in all possible cases. A sequence of changes may involve several of them: for example, the first step is a revision, the second a contraction and the third a refinement of the previous beliefs. The ten operators considered in this article are all shown to be reducible to three: lexicographic revision, refinement, and severe withdrawal. In turn, these three can be expressed in terms of lexicographic revision at the cost of restructuring the sequence. This restructuring need not be done explicitly: an algorithm that works on the original sequence is shown. The complexity of mixed sequences of belief change operators is also analyzed. Most of them require only a polynomial number of calls to a satisfiability checker; some are even easier.
1 Introduction
New information comes in different forms. At one end of the spectrum, it is believed to be always true (lexicographic revision [60, 76]); at the far opposite, it is made unknown (contraction [1, 73]). Middle cases exist: it may be believed only as far as the current situation is concerned (natural revision [12, 47]), or it may be believed only as long as it does not contradict the previous beliefs (refinement [10, 63]). Sequences of changes are hardly all of the same form; that would be like someone who embraces every single new theory with all of his heart, or who is so skeptical as to refuse every piece of information that contradicts what is known.
Natural revision, then lexicographic revision. Not two lexicographic revisions, not two natural revisions. A mixed sequence of revisions is what this example calls for. As Booth and Meyer [8] put it: “It is clear that an agent need not, and in most cases, ought not to stick to the same revision operator every time that it has to perform a revision.”
Other forms of revisions exist. They may mix in any order. A lexicographic revision may be followed by a natural revision, a restrained revision and a radical revision. Such a sequence is used as a running example:
The sequence begins with \(\emptyset\), the complete lack of knowledge. The first information acquired is y, and is considered valid in all possible situations; it is a lexicographic revision. The next is \(\lnot x\), but is deemed valid only in the current conditions; this is a natural revision. The next is \(x \wedge z\), but is only accepted as long as it does not contradict what is currently known; it is a refinement. Finally, \(\lnot z\) is so firmly believed that all cases where it does not hold are excluded; this is a radical revision.
No kind of revision is better than the others. Natural revision has its place. As well as lexicographic revision. As well as refinement, radical revision, severe antiwithdrawals and severe revision. None of them is the best. Each is right in the appropriate conditions and wrong in the others. For example, a sequence of severe revisions is problematic, because it coarsens the plausibility degree of different scenarios [28, 67]; yet, it is a common experience that sometimes new information makes unlikely conditions as likely as others. A truck full of parrots to be sold as pets crashed nearby, freeing thousands of exotic animals; a red bird is now as likely to be stumbled upon as a local wild animal. New information may raise the plausibility of a situation to the level of another, making them equally likely. This is a consequence of the new information, not a fault of the revision operator. The problem comes when using severe revisions only, without other revisions that separate the likeliness of different conditions [28, 67].
The solution is not a new framework that encompasses all possible cases, but to deal with mixes of different revision kinds.
Which form of revision is used at each step is assumed known in this article. As in the example, it comes together with the revising formula itself: that there is a red bird is only believed in the current situation, making it a natural revision; that hunting red animals is forbidden is always the case, calling for lexicographic revision. In other scenarios, the form of revision to use may depend on the time point or the sequence of previous revisions. The first approach is exemplified by the alternating sequences of actions and observations by Hunter and Delgrande [40, 41]. The second by reconstructing the epistemic states from the outcome of past revisions [9, 56]. The issue is outside the scope of the present article.
The problem considered here is to determine the outcome of a sequence of mixed revisions. Ten belief change operators are considered: lexicographic revision, refinement, severe withdrawal, natural revision, restrained revision, plain severe revision, severe revision, moderate severe revision, very radical revision, and full meet revision. They all change the plausibility of the possible situations, each in its own way. The possible situations are formalized by propositional models, their plausibility by an ordering among propositional models. Other approaches not considered here involve action histories [9, 56], fixed orderings [2], simultaneous revisions [45], revisions by orderings [60], and bidimensional revisions [69].
The ten considered change operators modify the order of plausibility of the models. Each does it in its own way. For example, natural revision promotes some models to the maximal plausibility; lexicographic revision makes some models more plausible than others. Keeping in memory a complete description of these orderings is unfeasible even in the propositional case, as the number of models is exponential in the number of variables. Several operators such as lexicographic revision, refinement, and restrained revision can generate orderings in which no two models compare equal, making every representation that is linear in the number of equivalence classes of the ordering exponentially large, such as the ordered sequence of formulae by Rott [68] and the partitions by Meyer, Ghose, and Chopra [59]. Many distance-based one-step revisions suffer from a similar problem: the result of even a single revision may be exponential in the size of the involved formulae [14]. Iterated revisions typically do not employ distances, and the problem can be overcome:
•
the ten belief change operators considered in this article (lexicographic revision, refinement, severe withdrawal, natural revision, restrained revision, plain severe revision, severe revision, moderate severe revision, very radical revision, full meet revision) can be reduced to three: lexicographic revision, refinement, and severe withdrawal; these reductions are local: they replace an operator without changing the rest of the sequence before and after it;
•
refinement and severe withdrawal can be reduced to lexicographic revision; this, however, requires restructuring the sequence of belief change operators; the result is a sequence that behaves like the original on subsequent changes;
•
this restructuring need not be done explicitly; an algorithm that works on the original sequence is shown; it does not change the sequence, but behaves as if it were restructured; apart from the calls to a satisfiability checker, the running time is polynomial.
This mechanism determines the result of an arbitrary sequence of these revisions from an initial ordering that reflects a total lack of information. This is not a limitation when using the ten considered revision forms, as an arbitrary ordering can be created by a sequence of lexicographic revisions [9].
During its execution, the algorithm calculates some partial results called underformulae, which can be used when the sequence is extended by further revisions. The need for a satisfiability checker is unavoidable, given that belief change operates on propositional formulae. However, efficient solvers have been developed [3, 5]. Restricting to a less expressive language [16] such as Horn logic may also reduce the complexity of the problem, as is generally the case for one-step revisions [21, 52, 58, 62], since satisfiability in this case can be solved efficiently.
Some complexity results are proved: some imply the ones announced without proofs in a previous article [55], but extend them to the case of mixed sequences of revisions. Entailment from a sequence of lexicographic, natural, restrained, very radical and severe revisions, refinements, and severe antiwithdrawals is \(\Delta ^p_{2}\)-complete. Two groups of belief change operators are relevant to complexity. The first is called lexicographic-finding and comprises the ones that behave like lexicographic revision on consistent sequences of formulae; lexicographic and moderate severe revisions are in this group. The second is called bottom-refining, as it includes the revisions that separate the most likely scenarios when some are consistent with the new information; natural revision, restrained revision, and severe revision are in this group. Entailment from a sequence of operators all of the first kind or all of the second is \(\Delta ^p_{2}\)-complete. Three revision operators require a separate analysis. Entailment from a sequence of very radical revisions is \(\Delta ^p_{2}[\log n]\)-complete. The same complexity comes from sequences of plain severe and full meet revisions only. Intermixing these operators with the others is \(\Delta ^p_{2}[\log n]\)-complete in one case and \(\Delta ^p_{2}\)-complete in the others.
The rest of this article is organized as follows: Section 2 introduces the main concepts of total preorder, lexicographic revision, refinement, and severe withdrawal; Section 3 shows how to reduce the other change operators to these three; Section 4 shows an algorithm for computing the result of a sequence of revisions; Section 5 presents the computational complexity results; Section 6 investigates the properties of severe antiwithdrawal, especially in light of its central role in iterated belief changes; Section 7 discusses the results in this article, compares them with other work in the literature, and presents the open problems.
2 Preliminaries
A propositional language over a finite alphabet is assumed. Given a formula F, its set of models is denoted \(M\!od(F)\), while a formula having S as its set of models is denoted \(F\!orm(S)\). The symbol \(\top\) denotes a tautology, a formula satisfied by all models over the given alphabet.
A base is a propositional formula denoting what is believed at a certain moment. Historically, revision was defined as an operator that modifies a base in the light of new information; an ordering was employed to make choices when this integration can be done in multiple ways, which is usually the case. Assuming this ordering to be fixed or to depend only on the base is the AGM model of revision [1, 30]. Iterated revision is problematic under this approach; the solution is to reverse the roles of the base and the ordering. Instead of being a supporting element, the ordering becomes the protagonist. The base derives from it as what holds in the most plausible models [18, 50]. Such plausibility information can be formalized in several equivalent ways: epistemic entrenchments [26, 31], systems of spheres [32, 35], rankings [46, 77, 80], and KM preorders [44, 64].
2.1 Total Preorders
Katsuno and Mendelzon [44] proved that AGM revision can be reformulated in terms of a total preorder over the set of models, where the models of the base are exactly the minimal ones according to the ordering. Iterated revision can be defined by demoting the base from primary information to derived one. Instead of revising a base using the ordering as a guide, the ordering itself is modified. The base is taken to be just the set of formulae implied by all most plausible models.
Such an ordering can be depicted as a stack, the top boxes containing the most plausible models. This is equivalent to a reflexive, transitive, and total relation, but it makes for simpler definitions and proofs about iterated revisions.
A KM total preorder is the same as a partition by Meyer, Ghose, and Chopra [59], who use a formula for each class in place of its set of models. In turn, such a partition is similar to the system for expressing such orderings in possibilistic logic [4], and corresponds to a sequence of formulae by Rott [68], \(h_1 \prec \cdots \prec h_m\) via \(C(0) \cup \ldots \cup C(i) = M\!od(h_{i+1} \wedge \cdots \wedge h_m)\), and to an epistemic entrenchment [1].
Being a partition, \(C=[C(0),\ldots ,C(m)]\) contains all models. As a result, every model is in a class. No model is “inaccessible,” or excluded from consideration when performing revision. Revisions producing such models could still be formalized by giving a special status to the last class \(C(m)\), as the set of such inaccessible models, but they are not studied in this article. Their analysis is left as an open problem.
Classes are allowed to be empty, even class zero \(C(0)\). The base represented by a total preorder \(C=[C(0),\ldots ,C(m)]\) cannot therefore be defined as \(F\!orm(C(0))\), but rather in terms of the minimal models according to C, denoted \(\min (C,\top)\).
More generally, given a formula P, the notation \(\min (C,P)\) indicates the set of minimal models of P according to the ordering C. Formally, if i is the lowest index such that \(C(i) \cap M\!od(P)\) is not empty, then \(\min (C,P)=C(i) \cap M\!od(P)\). Several iterated revisions depend on such an index i and its corresponding set of models \(C(i) \cap M\!od(P)\).
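For illustration, a total preorder can be stored as a list of classes and \(\min (C,P)\) computed by scanning them. The following Python sketch uses an arbitrary representation chosen only for illustration (a model as the frozenset of the atoms it makes true); it is not part of the article's formalism.
\begin{verbatim}
# A total preorder over the alphabet {x, y}: a list of classes, most
# plausible first; a model is the frozenset of the atoms it makes true.
C = [{frozenset({"x", "y"})},                   # class 0
     {frozenset({"x"}), frozenset({"y"})},      # class 1
     {frozenset()}]                             # class 2

def minimum(C, P):
    """min(C, P): the P-models of the lowest-index class containing any."""
    return next(k & P for k in C if k & P)

P = {frozenset(), frozenset({"y"})}             # Mod(not x)
assert minimum(C, P) == {frozenset({"y"})}
\end{verbatim}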
Another consequence of allowing empty classes is that two total preorders may be different yet compare models in the same way. For example, \([M\!od(\top)]\) and \([\emptyset ,M\!od(\top)]\) both place all models in the same class, which is class zero for the former and class one for the latter. They are in this sense equivalent. They coincide when removing the empty classes. The minimal models of every formula are the same [63].
Since no other kind of equivalence between preorders is considered in this article, revision-equivalence is shortened to equivalence.
Revising by the same formula changes revision-equivalent orderings into revision-equivalent orderings. This holds for all revision semantics considered in this article.
The amount of information an ordering carries can be informally identified with its ability to tell the relative plausibility of two models. Ideally, an ordering should have a single minimal model, representing what is believed to be the state of the world, and a single model in each class, allowing one to unambiguously decide which of two possible states of the world is the more likely. Most revisions indeed refine the ordering by splitting its classes. At the other end of the spectrum, the total preorder \(\emptyset =[M\!od(\top)]\) carries no information: not only does its base comprise all models and is therefore tautological, but all models are also considered equally plausible. Studies on the practical use of revision [9, 55] assume an initial empty ordering that is then revised to obtain a more discriminating one. Equivalently, an ordering can be expressed as a suitable sequence of revisions applied to the empty total preorder.
Not all operators considered in this article are revisions, only the ones that produce an ordering whose base implies the revising formula. Some other operators just split classes (like refinement) or merge them (like severe withdrawal). The result of an operator \(\mathrm{ope}\) modifying a total preorder C by a formula P is defined by the infix notation \(C \mathrm{ope}(P)\). This is a new total preorder whose base entails P if \(\mathrm{ope}\) is a revision operator. More specifically, AGM revisions produce a base out of the minimal models of P in C:
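\begin{equation*} \min (C \mathrm{ope}(P),\top) = \min (C,P), \end{equation*}
the standard condition for revision operators defined on total preorders.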
Several iterated belief revision operators are considered. These can be all expressed in terms of three of them: lexicographic revision, refinement, and severe withdrawal. Intuitively, this is because each of these three embodies a basic operation that can be performed on an ordering: moving, splitting, and merging classes. The correspondence is not exact, as lexicographic revision performs both moving and splitting, but it can be made to move a single class from one position of the sequence to another.
These three operators are defined in this section. The others are introduced in the next one, and immediately proved to be reducible to these three. This allows the computational analysis to concentrate on the three basic operators only.
2.2.1 Lexicographic Revision.
Lexicographic revision is one of the two earliest iterated belief revision operators [76]. While its authors initially rejected it, later research has reconsidered it [55, 60, 61, 66]. The tenet of this operator can be summarized as: revising by P means that P is true regardless of everything else. Technically, all models satisfying P become more plausible than every other model.
Alternatively, lexicographic revision can be defined directly on the sequence of classes; the following formulation is consistent with the description above:
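\begin{equation*} C \mathrm{lex}(P) = [C(0) \cap M\!od(P),\ \ldots ,\ C(m) \cap M\!od(P),\ C(0) \backslash M\!od(P),\ \ldots ,\ C(m) \backslash M\!od(P)]. \end{equation*}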
This definition does not exactly coincide with the previous one, differing only by some empty classes; the two therefore produce equivalent total preorders. A graphical representation of revising a total preorder by a formula P is the following one:
In words, the models of P are “cut out” from the ordering and shifted together to the top. Their relative ordering is not changed, but they are made more plausible than every model of \(\lnot P\). By construction, \(\min (C \mathrm{lex}(P),\top)\) is equal to \(\min (C,P)\), making this operator a revision.
2.2.2 Refinement.
Contrary to lexicographic revision, refinement [10, 63] is not a revision. It is still a basic form of belief change in which belief in a formula P is strengthened, but never so much as to contradict previous information. Technically, the models of every class are split depending on whether they satisfy P or not. This way, two models are separated only if they were previously considered equally plausible, and only if one satisfies P and the other does not.
Alternatively, refinement can be defined directly on partitions:
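\begin{equation*} C \mathrm{ref}(P) = [C(0) \cap M\!od(P),\ C(0) \backslash M\!od(P),\ \ldots ,\ C(m) \cap M\!od(P),\ C(m) \backslash M\!od(P)], \end{equation*}
a formulation consistent with the class-splitting description above.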
Some of these classes may be empty, and can therefore be removed while preserving preorder equivalence. Graphically, refining a total preorder C by a formula P can be seen as follows:
2.2.3 Severe Antiwithdrawal.
While this operator was defined [27, 70] as a form of contraction, it is technically cleaner to use it in reverse, with the negated formula. Removing a formula \(\lnot P\) is the same as creating consistency with P, but the second formulation has been advocated as a more direct formalization of the actual process of belief change [34].
In the specific case of severe antiwithdrawal, consistency with P is created by merging all classes of the ordering whose index is lower than or equal to that of the class containing the minimal models of P. This is motivated by the principle of equals-treated-equally when applied to the plausibility of models: to make P consistent, some models of P have to become minimal; but the models of \(\lnot P\) that are in lower classes have the same plausibility or greater, so they should not be excluded.
Severe antiwithdrawal can also be defined in terms of sequences. If i is the lowest index such that \(C(i) \cap M\!od(P) \not= \emptyset\), then
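\begin{equation*} C \mathrm{sev}(P) = [C(0) \cup \cdots \cup C(i),\ C(i+1),\ \ldots ,\ C(m)], \end{equation*}
a formulation consistent with the graphical description below.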
Graphically, severe antiwithdrawal merges all classes of index lower or equal to the minimal class intersecting \(M\!od(P)\):
This way, the base of the revised preorder \(\min (C \mathrm{sev}(P),\top)\) is guaranteed to contain some models of P, which means that it has been made consistent with P. At the same time, the relative plausibility of two models is never reversed: a model that is more plausible than another according to C is never made less plausible than it according to \(C \mathrm{sev}(P)\).
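The three basic operators can be sketched in the same list-of-classes representation used earlier. The following Python illustration (again an arbitrary representational choice, not the article's formalization) acts on a total preorder given as a list of classes, most plausible first.
\begin{verbatim}
from itertools import combinations

ATOMS = ("x", "y")

def all_models(atoms=ATOMS):
    """A model is the frozenset of the atoms it makes true."""
    return {frozenset(c) for r in range(len(atoms) + 1)
            for c in combinations(atoms, r)}

def strip_empty(classes):
    """Removing empty classes preserves revision-equivalence."""
    return [k for k in classes if k]

def lex(C, P):
    """Lexicographic revision: the models of P keep their relative order
    but become more plausible than every model not satisfying P."""
    return strip_empty([k & P for k in C] + [k - P for k in C])

def ref(C, P):
    """Refinement: every class is split into its P-part (more plausible)
    and its non-P part."""
    return strip_empty([part for k in C for part in (k & P, k - P)])

def sev(C, P):
    """Severe antiwithdrawal: merge all classes up to and including the
    lowest-index one that contains a model of P."""
    i = next(j for j, k in enumerate(C) if k & P)
    return [set().union(*C[:i + 1])] + C[i + 1:]

empty = [all_models()]                     # the empty ordering: no information
P = {m for m in all_models() if "y" in m}  # Mod(y)
C1 = lex(empty, P)                         # [Mod(y), Mod(not y)]: base is y
\end{verbatim}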
3 Reductions
Many belief change operators exist. Many of them are expressible in terms of the three presented in the previous section: lexicographic revision, refinement, and severe antiwithdrawal. The reductions do not affect what is before or after the replaced operator applications, which is not the case for the transformations shown in the next section.
3.1 Natural Revision
This revision was first considered and discarded by Spohn [76], and later independently reintroduced by Boutilier [12]. Among revisions, it can be considered the furthest from lexicographic revision, in that a formula P is made true by a minimal change to the ordering. This amounts to making the minimal models of P the new class zero of the ordering and changing nothing else.
Formally, if \(\min (C,P)=C(i) \cap M\!od(P),\) then
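\begin{equation*} C \mathrm{nat}(P) = [C(i) \cap M\!od(P),\ C(0),\ \ldots ,\ C(i-1),\ C(i) \backslash M\!od(P),\ C(i+1),\ \ldots ,\ C(m)], \end{equation*}
a formulation consistent with the graphical description below.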
Graphically, \(\min (C,P)\) is “cut out” from the total preorder C and moved to the beginning of the sequence, making it the new class zero \(C \mathrm{nat}(P)(0)\).
Since by definition \(\min (C,P)\) is not empty, it holds \(\min (C \mathrm{nat}(P),\top) = C \mathrm{nat}(P)(0) = \min (C,P)\), meaning that it is an AGM revision operator.
This transformation does not just tell how to compute the propositional result of natural revision, that is, the base \(F\!orm(C \mathrm{nat}(P)(0))\) of the revised ordering. To the contrary, it requires it, as \(M\!od(K) = \min (C,P) = C \mathrm{nat}(P)(0)\). After K has been calculated, \(\mathrm{lex}(K)\) produces the same exact preorder as \(\mathrm{nat}(P)\) when applied to the same preorder, not just two preorders having the same base. This means that all subsequent revisions are unaffected by the replacement. In other words, for every initial preorder C and every sequence of previous and future belief changes, it holds
The other reductions in this section all have this property: an operator application in whichever position of a sequence can be replaced without affecting the final ordering. Natural revision requires the minimal models of P to be calculated; some other operators do not. Natural revision is replaced by a single lexicographic revision; the others may require several lexicographic revisions, refinements, and severe antiwithdrawals.
Some other operators are reduced to natural revision, which can in turn be reduced to lexicographic revision. For example, restrained revision is a refinement followed by natural revision (or vice versa). The above theorem shows that it can be further reformulated as a refinement and a lexicographic revision.
3.2 Restrained Revision
Restrained revision [8] can be seen as a minimal modification of refinement to turn it into a form of revision. Indeed, refining a total preorder by a formula P does not generally make P entailed by the refined total preorder. This is the case only if \(\min (C,\top)\) contains some models of P.
Restrained revision can be seen as an intermediate form of revision: while natural revision changes the preorder in a minimal way to make the revising formula entailed and lexicographic revision makes the formula preferred in all possible cases, restrained revision makes it preferred only when this is consistent with previous beliefs, and makes it entailed by a minimal change in the ordering.
Restrained revision is defined as follows, where \(\min (C,P)=C(i) \cap M\!od(P)\):
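In partition form, a formulation consistent with this description and with the graphical account below refines every class by P and additionally moves \(\min (C,P)\) to the front:
\begin{equation*} C \mathrm{res}(P) = [C(i) \cap M\!od(P),\ C(0) \cap M\!od(P),\ C(0) \backslash M\!od(P),\ \ldots ,\ C(i) \backslash M\!od(P),\ \ldots ,\ C(m) \cap M\!od(P),\ C(m) \backslash M\!od(P)], \end{equation*}
where \(C(i) \cap M\!od(P)\) occurs only at the front; the intersections \(C(j) \cap M\!od(P)\) with \(j < i\) are empty and can be removed.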
Graphically, each intersection \(C(j) \cap M\!od(P)\) is lifted just above \(C(j) \backslash M\!od(P)\), except for \(\min (C,P)\), which is lifted to the top.
The following quite obvious theorem is proved only for the sake of completeness, its statement being almost a direct consequence of the definition.
This reduction is applied to the running example.
3.3 Very Radical Revision
Irrevocable revision [75] formalizes hypothetical reasoning by excluding from consideration all models that do not satisfy the assumption. Formally, these models are made inaccessible to revision, which cannot therefore recover them (hence the name). The scope of this article is limited to revisions that consider all models. While irrevocable revision excludes some models, the very radical revision variant by Rott [68] does not. Formally, it is defined as follows:
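\begin{equation*} C \mathrm{rad}(P) = [C(0) \cap M\!od(P),\ \ldots ,\ C(m) \cap M\!od(P),\ M\!od(\lnot P)], \end{equation*}
a formulation that differs from the original one only by possibly empty classes, as noted next.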
The original definition has the first part \(C(0) \cap M\!od(P),\ldots ,C(m) \cap M\!od(P)\) only for the classes that intersect \(M\!od(P)\). The difference is inessential, since the other classes are empty and empty classes are irrelevant.
Graphically, the models of P maintain the same relative order, the others are merged in a single class at the end of the sequence.
Very radical revision can be expressed in terms of a sequence of a lexicographic revision, a severe antiwithdrawal, and a second lexicographic revision. Intuitively, this is because very radical revision merges the classes not satisfying P, which can be achieved by making them minimal with a lexicographic revision by \(\lnot P\) and then merging them with a severe antiwithdrawal by P; a further lexicographic revision is needed to restore the correct ordering.
The reduction is applied to the running example.
3.4 Severe Revisions
The Levi identity [36] allows constructing a revision operator from a contraction. This can be applied to severe withdrawal, leading to the definition of severe revision. However, the Levi identity only specifies the base of the revised ordering, the set of its minimal models. The rest of the ordering can be obtained in at least three different ways, leading to different revision operators [68].
The first definition is called just “severe revision.” Since the symbol \(\mathrm{sev}\) is already taken for severe antiwithdrawal, \(\mathrm{sevr}\) is used for this revision.
Graphically, the models of P in the first class that intersects \(M\!od(P)\) are moved to the beginning of the sequence; the rest of that class is merged with the previous ones.
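Spelled out on partitions, with i the lowest index such that \(C(i) \cap M\!od(P) \not= \emptyset\), a formulation consistent with this description is
\begin{equation*} C \mathrm{sevr}(P) = [C(i) \cap M\!od(P),\ (C(0) \cup \cdots \cup C(i)) \backslash M\!od(P),\ C(i+1),\ \ldots ,\ C(m)]. \end{equation*}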
This operator can be shown to be reducible to a severe antiwithdrawal followed by a natural revision, the latter being reducible to lexicographic revision as proved above.
Moderate severe revision mixes severe withdrawal with the changes lexicographic revision makes to a preorder. It will indeed be proved to be equivalent to a sequence of a severe antiwithdrawal and a lexicographic revision.
The difference with severe revision is that all of \(M\!od(P)\) is shifted to the beginning of the sequence.
Moderate severe revision can be proved to be equivalent to a severe antiwithdrawal followed by a lexicographic revision.
The last variant of severe revision is plain severe revision.
Plain severe revision differs from severe revision in that it also merges the first non-empty class following the first that intersects \(M\!od(P)\).
Plain severe revision can be reformulated in terms of severe antiwithdrawal and lexicographic revision.
The following theorem shows that plain severe revision is not able to increase the number of levels over two. It also links it with full meet revision, to be defined in the next section.
3.5 Full Meet Revision
Full meet revision was created for a single step of revision [1, 30] and then considered in iteration [2, 37, 54]. It was originally defined as the result of disjoining all possible ways of minimally revising a propositional theory, formalizing both the assumption of minimal change and that of a complete lack of knowledge about the plausibility of the various choices. The initial plausibility ordering is not used other than for its set of minimal models. The resulting ordering only distinguishes models into two classes: the base and the others.
The models of P in the first class that intersects \(M\!od(P)\) are moved to the beginning of the sequence; all others are merged in a single, giant class.
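In partition form, a formulation consistent with this description is
\begin{equation*} C \mathrm{full}(P) = [\min (C,P),\ M\!od(\top) \backslash \min (C,P)]. \end{equation*}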
Due to its simplicity, full meet revision can be expressed in a number of ways in terms of the other operators. For example, it is equivalent to a sequence made of a lexicographic revision followed by a severe antiwithdrawal and another lexicographic revision.
An alternative reduction is \(C \mathrm{full}(P) = C \mathrm{lex}(\lnot F\!orm(\lbrace I\rbrace)) \mathrm{sev}(F\!orm(\lbrace I\rbrace)) \mathrm{lex}(K)\), where I is an arbitrary propositional interpretation. Indeed, the proof relies on \(C \mathrm{lex}(\lnot F) \mathrm{sev}(F) = [M\!od(\top)]\), which holds for every formula F such that \(M\!od(F)\) is contained in a single class of C. This is the case for \(\min (C,P)\) but also for every formula having only one model.
Yet another reduction is \(C \mathrm{full}(P) = \emptyset \mathrm{lex}(K)\) where \(K = F\!orm(\min (C,P))\). This is, however, not used, because contrary to the other reductions it affects the previous sequence of revisions. For example, \(C \mathrm{nat}(N) \mathrm{full}(P)\) is turned into \(\emptyset \mathrm{lex}(K)\), thereby making the initial natural revision disappear.
The previous revisions are instead preserved by the reduction \(C \mathrm{full}(P) \equiv C \mathrm{rad}(K)\) where \(K=F\!orm(\min (C,P))\), which however requires the calculation of the minimal models of P in the total preorder C before being applied. The very radical revision can then be expressed in terms of two lexicographic revisions and a severe antiwithdrawal.
Starting from an empty ordering, full meet revision and plain severe revision behave in exactly the same way. Formally, for every sequence of formulae \(P_1,\ldots ,P_n\), it holds
This means that a sequence of mixed plain severe and full meet revisions can be turned into one containing only one type of revision. This fact is a consequence of how they change an ordering comprising one or two classes: they both produce an ordering containing the class \(\min (C,P)\) and the class containing all other models. For full meet revision, this is the definition and holds in all cases. For plain severe revision, this is proved in Theorem 7.
4 The Algorithm
The previous section shows that every considered belief change operator can be reduced to a sequence of lexicographic revisions, refinements, and severe antiwithdrawals. As a result, every sequence of operators can be turned into one made of these three only. This section presents an algorithm for computing the base of the ordering at every time step of such a sequence.
This is done by first proving that refinements and severe antiwithdrawals can be removed by suitably modifying the sequence. This differs from the reductions in the previous section, which only modify the sequence locally: nothing is changed before or after the operator that is replaced. Removing refinements instead requires introducing lexicographic revisions at other points of the sequence, and removing severe antiwithdrawals requires changing the previous lexicographic revisions.
The algorithm that computes the bases of a sequence of lexicographic revisions is then modified to work on the original sequence. The detour to the sequence of lexicographic revisions is necessary to prove that the final algorithm works. In particular, it is shown to do the same as the original algorithm on the simplified sequence.
All sequences are assumed to start with the empty ordering \(\emptyset\). Every other ordering \(C=[C(0),\ldots ,C(m)]\) is the result of a sequence of lexicographic revisions applied to the empty ordering: \(\emptyset \mathrm{lex}(F\!orm(C(m))) \ldots \mathrm{lex}(F\!orm(C(0)))\).
4.1 Simplification
A sequence of lexicographic revisions ending in either a refinement or a severe antiwithdrawal can be turned into a sequence of lexicographic revisions that has exactly the same final ordering when applied to the same original ordering. As a result, a sequence containing any of these three operators can be scanned from the beginning until the first operator that is not a lexicographic revision is found. The initial part of the sequence is then turned into a sequence containing only lexicographic revisions, and the process is restarted.
Removal of the refinements is done thanks to the following theorems, which prove that a refinement can be moved to the beginning of a sequence of lexicographic revisions, and then turned into a lexicographic revision itself.
This proves that \(\emptyset \mathrm{lex}(L_1) \ldots \mathrm{lex}(L_{n-1}) \mathrm{lex}(L_n)~\mathrm{ref}(R)\) is equal to \(\emptyset \mathrm{lex}(L_1) \ldots \mathrm{lex}(L_{n-1})\)\(\mathrm{ref}(R) \mathrm{lex}(L_n)\). Iteratively applying commutativity produces \(\emptyset ~\mathrm{ref}(R) \mathrm{lex}(L_1) \ldots \mathrm{lex}(L_n)\). Since \(\emptyset =[M\!od(\top)]\), by definition \(\emptyset \mathrm{ref}(R) = [M\!od(R),M\!od(\lnot R)]\), and this is also the total preorder \(\emptyset \mathrm{lex}(R)\). As a result, the whole sequence is equivalent to \(\emptyset \mathrm{lex}(R) \mathrm{lex}(L_1) \ldots \mathrm{lex}(L_n)\).
If a sequence contains \(\mathrm{lex}\) and \(\mathrm{ref}\), then every \(\mathrm{ref}(R)\), in order, can be moved to the beginning of the sequence and then replaced by \(\mathrm{lex}(R)\). What results is a sequence containing only lexicographic revisions.
This transformation can be used to prove the folklore theorem linking lexicographic revision with maxcons, usually defined for sets of formulae and extended to sequences by Booth and Nittka [9]:
The theorem establishes that \(\mathrm{maxcon}\) can be used to determine the minimal models of a formula in the ordering resulting from a sequence of lexicographic revisions. The proof is included here for the sake of completeness.
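As an illustration, the maxcon construction can be sketched in a few lines of Python; formulas are represented directly by their sets of models, so that consistency of a conjunction is just non-emptiness of an intersection (an illustrative choice, not the article's formalization).
\begin{verbatim}
def maxcon(P, formulas):
    """Models of maxcon(P, L_n, ..., L_1) for the sequence
    'empty lex(L_1) ... lex(L_n)': conjoin the revising formulas from the
    most recent one backwards, skipping any that would cause inconsistency.

    P and every element of formulas = [L_1, ..., L_n] are sets of models."""
    M = set(P)
    for L in reversed(formulas):
        if M & L:              # consistent: conjoin
            M &= L             # otherwise: skip and continue
    return M

# Two variables x, y; a model is the frozenset of atoms it makes true.
xy, x, y, none = (frozenset({"x", "y"}), frozenset({"x"}),
                  frozenset({"y"}), frozenset())
TOP = {xy, x, y, none}                      # Mod(true)
L1 = {xy, x}                                # Mod(x)
L2 = {y}                                    # Mod(not x and y)
assert maxcon(TOP, [L1, L2]) == {y}         # base of 'empty lex(L1) lex(L2)'
\end{verbatim}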
In a sequence of lexicographic revisions, refinements, and severe antiwithdrawals, if the first operator of the sequence that is not a lexicographic revision is a refinement, then it can be turned into a lexicographic revision and moved to the beginning of the sequence. If it is a severe antiwithdrawal, then a more complex change needs to be applied to the sequence.
As the previous refinements can be turned into lexicographic revisions, the previous belief change operators can all be assumed to be lexicographic revisions. In other words, the considered sequence consists of lexicographic revisions except for the last operator, which is a severe antiwithdrawal. Such a sequence can be modified as follows, where \(B=\mathrm{under}(S;L_n,\ldots ,L_1)\) is a formula defined below:
\begin{equation*} \emptyset \mathrm{lex}(L_1) \ldots \mathrm{lex}(L_n) \mathrm{sev}(S) = \emptyset \mathrm{lex}(L_1 \vee B) \ldots \mathrm{lex}(L_n \vee B) \mathrm{lex}(B). \end{equation*}
Intuitively, B is constructed so that it collects all models that are in the same class of the minimal ones of S or in lower classes. Disjoining every revising formula with B ensures that these models remain in class zero over each revision. The claim therefore requires two proofs: first, that B actually comprises these models; second, that modifying the lexicographic sequence this way does not change the resulting total preorder.
Informally, this construction includes as alternatives the formulae that are excluded from \(\mathrm{maxcon}(S,L_n,\ldots ,L_1)\) because they are inconsistent with the partially built maxcon M. Starting from \(M=S\), the procedure of maxcon construction adds \(L_i\) to M if \(M \wedge L_i\) is consistent. Otherwise, \(L_i\) is skipped. This procedure results in the minimal models of S. If \(M \wedge L_i\) is inconsistent only because of S, then its models are in lower classes than all models of S in the final total preorder. Disjoining M with \(L_i\) gathers all such models. This is obtained in the last case of the definition: \(L_i\) is disjoined with the underformula but not added to M. This intuition is formalized by the following theorem.
It is now shown that \(\mathrm{under}(S,L_n,\ldots ,L_1)\) allows rewriting the sequence without affecting the final total preorder.
These theorems tell how to turn a sequence into one that only contains lexicographic revisions: starting from the beginning, the first operator that is not \(\mathrm{lex}\) can be \(\mathrm{ref}(R)\) or \(\mathrm{sev}(S)\); in the first case, it is turned into \(\mathrm{lex}(R)\) and moved to the beginning of the sequence; in the second case, the underformula B of the previous revisions (which are all lexicographic by assumption) is used to replace \(\mathrm{sev}(S)\) with \(\mathrm{lex}(B)\) and is disjoined with all previous revising formulae. This part of the sequence now contains only lexicographic revisions, and the process can therefore be repeated for the next \(\mathrm{ref}\) or \(\mathrm{sev}\) operator. The final result is a sequence of lexicographic revisions applied to the empty preorder.
Such a sequence not only has the correct value \(\min (C,P)\) at each step but also the same final preorder as the original sequence. This implies that it is equivalent to it even with respect to subsequent revisions.
The only apparent drawback of this procedure is that every \(\mathrm{sev}(S)\) requires the underformula B to be disjoined with all previous formulae. This causes B to be included in the underformula of the next \(\mathrm{sev}(S^{\prime })\). This problem is solved by leaving the sequence as it is and processing it as if the transformation had been done.
4.2 Algorithm
A sequence containing only lexicographic revisions and refinements applied to the empty ordering can be turned into a sequence of lexicographic revisions by moving all refinements to the beginning. After this change, the minimal models of a formula P can be calculated using the maxcon construction. Since the refinements are moved to the start of the sequence in the order in which they are encountered, they end up there in reverse order. As a result, the maxcon can be calculated from the original sequence following the order that would result from the simplification:
The following figure shows how the algorithm proceeds when computing a formula equivalent to the set of the minimal models of P. Every formula encountered following the arrows is conjoined with it if that does not result in contradiction.
The back and forth algorithm works because it builds the maxcon starting from P and proceeding in the same order as if the refinements were moved to the start of the sequence. Its correctness is therefore proved by Corollary 1. For the same reason, a similar mechanism determines an underformula instead of a maxcon.
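For sequences containing only lexicographic revisions and refinements, this back and forth scan can be sketched as follows; formulas are again represented by their sets of models (an illustrative choice), and the handling of severe antiwithdrawals through underformulae, described next, is omitted.
\begin{verbatim}
def back_and_forth(sequence, P):
    """Minimal models of P after 'empty op_1(F_1) ... op_n(F_n)', where every
    op_i is "lex" or "ref" and every F_i is a set of models.

    Backward pass: conjoin the lexicographic formulas, most recent first.
    Forward pass: conjoin the refinement formulas in the order they appear.
    A conjunction is skipped whenever it would be inconsistent (empty)."""
    M = set(P)
    for op, F in reversed(sequence):
        if op == "lex" and M & F:
            M &= F
    for op, F in sequence:
        if op == "ref" and M & F:
            M &= F
    return M

# The earlier two-variable example with an interleaved refinement:
xy, x, y, none = (frozenset({"x", "y"}), frozenset({"x"}),
                  frozenset({"y"}), frozenset())
TOP = {xy, x, y, none}
seq = [("lex", {xy, x}), ("ref", {xy, y}), ("lex", {xy, none})]
assert back_and_forth(seq, TOP) == {xy}   # minimal models of the final ordering
\end{verbatim}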
This is important because a sequence may contain lexicographic revisions, refinements, and severe antiwithdrawals. Assuming that the underformulae for the latter have all been determined, at each severe antiwithdrawal encountered while going back, if its underformula B is consistent with the current maxcon \(M\), then M is turned into \(M \wedge B\) and the procedure “bounces” into the forward direction. This is because if \(M \wedge B\) is consistent, then every previous \(\mathrm{lex}(L)\) in the original sequence would be turned into \(\mathrm{lex}(L \vee B)\) in the modified sequence. As a result, M is consistent with all of them, but their addition is irrelevant, because M is already conjoined with B.
The fourth point can be omitted by placing \(\mathrm{sev}(\top)\) at the very beginning of the sequence. This marker signals the algorithm to bounce forward without the need to verify whether the sequence is at its start. At the end, the models of M are the minimal models of the final ordering.
The algorithm works because it builds the same formula that would have been produced when creating the maxcon of the modified sequence that only contains lexicographic revisions.
The following figure shows how the algorithm moves in a segment of the sequence. When it reaches \(\mathrm{sev}(F)\), if the formula that is currently being built is consistent with the underformula of this severe antiwithdrawal, then the algorithm bounces forward to \(\mathrm{ref}(H)\), otherwise it keeps going back, to \(\mathrm{lex}(D)\).
This construction produces a maxcon. The underformula of each severe antiwithdrawal is built similarly, by going back to the start of the sequence and coming back.
The algorithm can be adapted to work with the other considered revisions without replacing them with \(\mathrm{lex}\), \(\mathrm{ref}\), and \(\mathrm{sev}\), since each of them can be “locally” replaced with a sequence of these three. As a result, while going forward or backward, it suffices to behave in the same way as if the replacement had been done. This is done in the complexity analysis, in the next section.
5 Complexity
The reductions shown in the previous sections prove that each considered belief change operator can be turned into a lexicographic revision, possibly by first calculating a maxcon or an underformula. Both operations can be done by a polynomial number of calls to a propositional satisfiability solver. Therefore, the complexity of a sequence of arbitrary and mixed belief change operators is in the complexity class \(\Delta ^p_{2}\), which contains all problems that can be solved by a polynomial number of calls to an NP oracle.
The problem is also easily shown to be hard for the same class even if all operators are lexicographic revisions. This was previously published without proof [55].
Since severe antiwithdrawal turns an empty total preorder into an empty total preorder, a sequence comprising only this operator has low complexity: entailment is the same as validity, which is coNP-complete.
Sequences of mixed belief change operators are now considered. Two classes can be shown to be \(\Delta ^p_{2}\)-hard: operators that can produce a lexicographically maximal model and operators that can refine the lowest class of an ordering. In both cases, the alternation of operators does not matter, as long as they have the given behavior.
5.1 Lexicographic-finding Revisions
Entailment from a sequence of lexicographic revisions is \(\Delta ^p_{2}\)-hard by Theorem 13. Some other belief change operators can be intermixed without changing complexity. These are the ones that produce the same results when applied after a sequence of revisions whose formulae are consistent.
Moderate severe revision satisfies the premise of this theorem: it coincides with lexicographic revision on all consistent sequences.
A consequence of the two theorems above is that entailment from a sequence of moderate severe revisions is \(\Delta ^p_{2}\)-hard. The same holds even if lexicographic and moderate severe revisions are mixed.
5.2 Bottom-refining Revisions
A revision operator is bottom-refining if it “refines” the lowest-index class of the ordering that has models of the revising formula.
Removing empty classes, the minimal models of \(\top\) according to \(C \mathrm{rev}(P)\) are class zero of \(C \mathrm{rev}(P)\). The minimal models of P according to C are the non-empty intersection \(C(i) \cap M\!od(P)\) of minimal i. Equality between these two classes means that a revision operator makes this intersection the new class zero. No constraint is imposed on the other classes.
The new class zero is \(C(0) \cap M\!od(P)\) if this intersection is not empty. Bottom-refining revisions add an additional constraint in this case: revising C by P splits \(C(0)\) based on P.
The name bottom-refining derives from the way the “bottom” class \(C(0)\) is partitioned (refined) into the part satisfying P and the part not satisfying P. How the other classes are changed is not constrained.
The complexity of arbitrary sequences of bottom-refining revisions is established by the following theorem.
5.3 Very Radical Revision
Very radical revision is neither lexicographic-finding nor bottom-refining. It is indeed easier, as the classes of \(\emptyset \mathrm{rad}(R_1) \ldots \mathrm{rad}(R_n)\) are relatively easy to determine.
This theorem tells how to determine \(\min (\emptyset \mathrm{rad}(R_1) \ldots \mathrm{rad}(R_n),\top)\): by conjoining \(R_n\) with \(R_{n-1}\), then with \(R_{n-2}\), and so on for as long as the conjunction remains consistent.
A longest sequence is a simplified form of maxcon: the maxcons conjoin formulae in order, skipping every one that would create an inconsistency; the longest sequences stop altogether at the first such formula. A sequence of very radical revisions from the empty ordering can be reformulated in terms of this definition.
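The difference from the maxcon construction can be illustrated by the following Python sketch (same set-of-models representation as before; the argument-order convention is an illustrative assumption).
\begin{verbatim}
def longest(formulas):
    """Minimal models after 'empty rad(R_1) ... rad(R_n)', where every R_i is
    consistent and given as its nonempty set of models, formulas = [R_1, ...,
    R_n]: conjoin the revising formulas from the most recent one backwards,
    stopping altogether at the first inconsistency (maxcon would skip it)."""
    M = set(formulas[-1])
    for R in reversed(formulas[:-1]):
        if not (M & R):
            break              # first inconsistency: all earlier formulas ignored
        M &= R
    return M
\end{verbatim}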
By this theorem, the complexity of \(\mathrm{longest}(L_1,\ldots ,L_n) \models Q\) is the same as that of inference from a sequence of very radical revisions from the empty ordering. This problem is investigated under the condition that each formula \(L_i\) is consistent.
Since a sequence of very radical revisions from the empty ordering is exactly the same as the longest consistent conjunction, and all formulae are consistent by assumption, the problem of entailment for very radical revision is \(\Delta ^p_{2}[\log n]\)-complete in the general case and BH\(_{2n-1}\)-complete for constant n.
5.4 Plain Severe Revision and Full Meet Revision
On total preorders comprising at most two classes, plain severe and full meet revision coincide, and always generate an ordering of at most two classes. As a result, when the initial ordering is empty, sequences of plain severe and full meet revisions coincide:
More generally, mixed sequences of plain severe and full meet revisions applied to an ordering comprising at most two classes are equivalent to sequences of full meet revisions only and to sequences of plain severe revisions only.
These two revisions are neither lexicographic-finding nor bottom-refining. A lexicographic-finding sequence of revisions \(x_1 \wedge x_3, \lnot x_1, x_2\) produces \(\lnot x_1 \wedge x_2 \wedge x_3\), but the same sequence of full meet revisions instead produces \(\lnot x_1 \wedge x_2\). A sequence of bottom-refining revisions \(x_1, x_2\) produces an ordering with a class one equal to \(x_1 \wedge \lnot x_2\), but the same sequence of full meet revisions instead gives \(\lnot x_1 \vee \lnot x_2\). They are also different from very radical revision, as seen from the sequence of revisions \(x_1, x_2, \lnot x_1\), where very radical revision produces \(x_2 \wedge \lnot x_1\) while full meet produces \(\lnot x_1\).
5.5 Alternating Sequences
The core of this article is that revisions may not all be of the same kind. Yet, the previous theorems show the hardness of sequences of similar revisions: all lexicographic-finding, all bottom-refining, all very radical, all full meet revisions. Does complexity change when they are intermixed? It depends on the specific alternation. Bottom-refining revisions maintain their complexity even when alternating with other revisions.
Some generic revisions are computationally easier than bottom-refining ones. Yet, intermixing them does not lower complexity. The same holds for lexicographic and very radical revision.
The above two theorems show two cases where complexity does not decrease when intermixing hard and easy revisions. This is however not a general phenomenon. A counterexample is the alternation of lexicographic and full meet revisions. It is easier than lexicographic revisions alone: \(\Delta ^p_{2}[\log n]\)-complete. This result is based on rewriting such alternations as sequences of full meet revisions only.
Every alternation of lexicographic and full meet revisions reduces to a sequence of full meet revisions:
Each pair \(\mathrm{lex}(L_i) \mathrm{full}(F_i)\) is the same as either \(\mathrm{full}(L_i \wedge F_i)\) or \(\mathrm{full}(F_i)\), depending on the satisfiability of \(L_i \wedge F_i\). The satisfiability of these conjunctions can be checked independently of each other; such a round of non-adaptive tests is equivalent to a logarithmic number of adaptive satisfiability tests. Checking entailment from a sequence of full meet revisions only is already known to require another logarithmic number of satisfiability tests.
Same hard revision, lexicographic. Different easy revisions, very radical and full meet. Different complexity: \(\Delta ^p_{2}\)-complete and \(\Delta ^p_{2}[\log n]\)-complete. Complexity differs when alternating the same hard revision with different easy ones. In one case, it is as hard as the hard revision. In the other, it is as easy as the easy one. The same for alternating full meet revisions with bottom-refining or with lexicographic revisions. Hard in one case, easy in the other. No general pattern emerges, like complexity being the same as the hardest of two revisions. It depends on the specific operators.
5.6 Arbitrarily Intermixed Sequences
Sequences of similar revisions, sequences of alternating revisions. Still, specific sequences of revisions. What happens when arbitrarily intermixing sequences?
Theorem 12 gives an upper bound: no matter how iterated revisions are intermixed, entailment is in \(\Delta ^p_{2}\). Theorems 23, 24, and 25 negate an equally general hardness result: complexity depends on which revisions the sequence contains.
Some results easily extend to linear fractions of the same revision. A sequence that contains at least a linear fraction of bottom-refining revisions among other revisions is \(\Delta ^p_{2}\)-hard. Alternating each bottom-refining revision with a number of other revisions of the same kind is the same as a simple alternation if these other revisions are by the same formula; this is the case for the considered revision operators (not in general, for example, it is not when iterating an improvement operator until success [49, 78]). This trick turns the sequence that proves the hardness of simple alternations into a sequence that proves the hardness of the one-two alternations.
Revising multiple times by the same formula may look like cheating. It turns a quite general scheme (bottom-refining revisions and other revisions) into a specific one (where each subsequence of other revisions is by the same formula). Yet, specific schemes are the nature of hardness results. For example, the proof of hardness for lexicographic revision employs a sequence of revisions by single variables. Such sequences are specific. In general, a sequence may contain arbitrary complex formulae.
Hardness proofs are existential proofs. Every instance of an already known \(\Delta ^p_{2}\)-hard problem turns into a sequence of lexicographic revisions. Every instance turns into a sequence. Not the other way around: some sequences are not outcomes of the translation.
The question “Are all sequences hard?” is not meaningful. Its obvious answer is always “No.” Some sequences are easy, like the repetitions of the same revisions by the same formula. Some are hard, like the ones produced by the \(\Delta ^p_{2}\)-hard reductions.
The question that makes sense is whether a specific group of sequences is hard or not. The sequences of lexicographic revisions are \(\Delta ^p_{2}\)-hard. They are because they contain all results of reducing all instances of lexicographically maximal model inference. They contain, they do not coincide.
All hardness results are of this kind. Iterated revisions are not an exception. Hardness reductions prove that certain groups of sequences are hard. A reduction that produces only sequences of a given group proves the group hard, such as the group of all sequences alternating lexicographic and very radical revisions. All supersets of such a group are equally hard, including the group that contains all possible sequences.
6 A Case for Severe Antiwithdrawal
Severe antiwithdrawal is one of the three belief changing operators all others reduce to. While refinements are lexicographic revisions done in the opposite order, severe antiwithdrawal requires a different computation, that of the underformula. Not only is it central, it is also in a way irreplaceable. Essential.
What makes it so important? One of the referees of this article asked for an analysis of its properties. Being just severe withdrawal with the negated formula, its basic properties for a single step are the same [27, 70]. Postulates for iteration have been proposed by Darwiche and Pearl [18], with variations by Boutilier [11] and Jin and Thielscher [43].
Postulate C1 by Darwiche and Pearl [18] states that \(O \mathrm{sev}(A) \mathrm{sev}(B)\) is the same as \(O \mathrm{sev}(B)\) whenever B entails A. This is a postulate for revision. Revision by A ensures that A is entailed; the same for B. The principle of C1 is that if A is achieved (is entailed) whenever B is achieved (is entailed), then a first revision by A is irrelevant if followed by a revision by B. Ubi major, minor cessat.
Yet, severe antiwithdrawal is not a form of revision. Achieving A is not believing A. It is accepting the possibility it may be true. It is “giving a hearing to it,” as advocated by Glaister [33]. Achieving A is making A consistent, not entailed.
Does C1 make sense for antiwithdrawal? Its essence is: If A is achieved whenever B is achieved, then antiwithdrawing A is irrelevant if followed by antiwithdrawing B. Antiwithdrawing is making a formula acceptable. It is ensuring consistency with it. Achieving A is creating consistency with A. Achieving B is creating consistency with B.
Rewriting C1 for antiwithdrawal gives: \(O \mathrm{sev}(A) \mathrm{sev}(B)\) is the same as \(O \mathrm{sev}(B)\) whenever consistency with B implies consistency with A. The premise is that every formula that is consistent with B is also consistent with A. This is only possible if every model of B is also a model of A. In other words, B entails A. Technically, this is the same property as C1 for revision. It comes from the same principle and arrives at the same condition, but via a different route, where consistency takes the place of entailment.
Postulate C2 is: \(O \mathrm{sev}(A) \mathrm{sev}(B)\) is the same as \(O \mathrm{sev}(B)\) whenever B entails the negation of A. The premise is that entailing B makes A impossible. Achieving B negates achieving A. Achieving is consistency for antiwithdrawal. Consistency with B negates consistency with A. Every model of B falsifies A. This is the same as B entailing \(\lnot A\). Again, same technical condition, but coming from a different route.
The premise of C3 is \(O \mathrm{sev}(B) \models A\): achieving B in the present case also achieves A. Achieving is entailing for revision. For antiwithdrawal, it is ensuring consistency. The consistency of \(O \mathrm{sev}(B)\) with A. This formalizes as \(O \mathrm{sev}(B) \not\models \lnot A\). This is the premise of C3 for antiwithdrawal. Its consequent is the same for \(O \mathrm{sev}(A) \mathrm{sev}(B)\). For antiwithdrawal, \(O \mathrm{sev}(A) \mathrm{sev}(B) \not\models \lnot A\). Overall, C3 for antiwithdrawal is: if \(O \mathrm{sev}(B) \not\models \lnot A\), then \(O \mathrm{sev}(A) \mathrm{sev}(B) \not\models \lnot A\). Contrary to C1 and C2, it differs from its version for revision. Yet, it coincides with C4 for revision.
Adapting C4 to antiwithdrawal has the same effect: it turns it into C3.
Severe antiwithdrawal enjoys C1, C3, and C4 and does not enjoy C2.
C1
The premise is that all models of B are also models of A. If k is the minimal class of models of A and \(k^{\prime }\) that of B, then \(k \le k^{\prime }\). Antiwithdrawing A merges all levels up to k. Antiwithdrawing B further merges all levels up to \(k^{\prime }\). Since \(k^{\prime }\) is greater than or equal to k, the second merge subsumes the first.
C2
A counterexample is \(O = [C(0), C(1)]\) where \(M\!od(B)\) intersects \(C(0)\) while \(M\!od(A)\) does not. Antiwithdrawing B leaves the same order. Antiwithdrawing A instead merges the two classes into one; antiwithdrawing B after that does not split classes.
C3
The premise is \(O \mathrm{sev}(B) \models A\). The minimal class \(C(k)\) intersecting \(M\!od(B)\) contains only models satisfying A. Antiwithdrawing A leaves the ordering unaffected. Not only is A entailed by the sequence of antiwithdrawing A and then B; this sequence is the same as antiwithdrawing B only.
C4
The premise is that the result of antiwithdrawing B is consistent with A. If \(C(k^{\prime })\) is the first class that intersects \(M\!od(B)\), then some models of \(C(0) \cup \cdots \cup C(k^{\prime })\) satisfy A. Therefore, the minimal k such that \(C(k)\) intersects \(M\!od(A)\) is at most \(k^{\prime }\). Antiwithdrawing A merges the classes from 0 to k; further antiwithdrawing B merges all classes up to \(k^{\prime }\), with \(k^{\prime } \ge k\), incorporating all changes done by the first antiwithdrawal.
While C2 is not met by severe antiwithdrawal, its two weakenings CB and Ind might be [11, 43]. The premise of CB is \(O \mathrm{sev}(A) \models \lnot B\). How to recast this property in terms of achievements? Achieving A makes B impossible. Making B impossible is different from negating the achievement of B, which would be \(O \mathrm{sev}(A) \not\models B\). It is a different property, with consistency in place of entailment. The other weakening Ind has a similar premise: \(O \mathrm{sev}(B) \not\models \lnot A\), still involving consistency in addition to achievements.
Severe antiwithdrawal satisfies C1, C3, and C4. It does not satisfy C2. The weakened versions CB and Ind do not seem relevant to antiwithdrawal. All of this tells something about the individual properties of severe antiwithdrawal, but does not answer the root question: What is severe antiwithdrawal, exactly?
Antiwithdrawal is a shift in mind to accommodate the belief that something is possible. Formally, it minimally and rationally changes the ordering among possibilities to ensure mutual consistency with a formula. This is what Glaister [33] calls “giving a hearing to” and separates from contraction, “not believing.” The present article further supports this operation from the technical point of view, as a central belief changing mechanism.
However, the specific form of antiwithdrawal that is severe antiwithdrawal exhibits a serious drawback. In order to accept that something may be true, it also accepts everything else currently judged equally unlikely. This is too great a shift in mind. When the hunter accepts that a red bird may be in the thicket, he still maintains that no black boar is in the field next to the river.
Yet, if the red bird is indeed there, then the black boar is no longer so unlikely. The possibility that some exotic animal wanders around the place increases the likeliness that some other does as well. Something unlikely becoming a possibility opens the door to something else equally unlikely. At the same time, opening a door is not walking through it. Severe antiwithdrawal is one element in this process: it raises the threshold of believable unlikeliness. Some restriction is then necessary to ensure that not everything is believed, only what is supported by some kind of evidence. This use of severe antiwithdrawal is supported by the technical analysis as well, which fruitfully employs it as a building block for translating other belief change operations.
7 Conclusions
This article advocates and studies mixed sequences of belief change operators, in which revisions, refinements, and withdrawals may all occur. With some exceptions [7, 19, 29, 42, 48, 53], the semantics for iterated belief revision mostly work on objects that are equivalent to total preorders, which makes it possible to apply different kinds of changes at different times. Even memoryless operators such as full meet revision [1] and distance-based revision [17, 65] can be embedded in this framework: they produce a plausibility order that depends on the previous one only through its zero class.
The main technical result of this article is a method for computing the result of a mixed sequence of revisions. It works directly on sequences of lexicographic revisions, refinements, and severe antiwithdrawals, which may result from translating an arbitrary sequence of lexicographic revisions, refinements, severe withdrawals, and natural, severe, plain severe, moderate severe, and very radical revisions, alternating in every possible way. The requirement of being able to solve propositional satisfiability problems is not too demanding, given the current efficiency of SAT algorithms [3, 5] and given that belief revision is built upon propositional logic, whose basic operations of satisfiability and validity are at the first level of the polynomial hierarchy [21, 58].
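As a point of comparison only, the result of such a sequence can also be computed by brute force over an explicitly enumerated set of models, with no SAT solver involved. The following sketch assumes the usual semantics from the literature for the three base operators (lexicographic revision moves all models of the formula ahead of the others, preserving their relative order; refinement splits every class into its satisfying and non-satisfying parts; severe antiwithdrawal merges all classes up to the first one intersecting the models of the formula); it is not the article's algorithm, which works on formulae rather than on enumerated models:

from itertools import product

ATOMS = ("x", "y", "z")

def all_models():
    # Each model is the frozenset of atoms it makes true.
    return [frozenset(a for a, v in zip(ATOMS, vals) if v)
            for vals in product([False, True], repeat=len(ATOMS))]

def mod(f):
    # Models (over ATOMS) satisfying the predicate f.
    return {m for m in all_models() if f(m)}

def lex(order, mod_a):
    # Lexicographic revision: models of A ahead of all others, relative order preserved.
    inside = [cls & mod_a for cls in order if cls & mod_a]
    outside = [cls - mod_a for cls in order if cls - mod_a]
    return inside + outside

def ref(order, mod_a):
    # Refinement: split every class into its A-part followed by its non-A-part.
    return [part for cls in order for part in (cls & mod_a, cls - mod_a) if part]

def sev(order, mod_a):
    # Severe antiwithdrawal: merge all classes up to the first one meeting Mod(A).
    k = next(i for i, cls in enumerate(order) if cls & mod_a)
    return [set().union(*order[:k + 1])] + list(order[k + 1:])

order = [set(all_models())]                      # total ignorance
order = lex(order, mod(lambda m: "x" in m))      # lexicographic revision by x
order = ref(order, mod(lambda m: "y" in m))      # refinement by y
order = sev(order, mod(lambda m: "x" not in m))  # severe antiwithdrawal of not x
print(order[0])                                  # current beliefs: the minimal class

Such enumeration is exponential in the number of atoms, which is precisely what a SAT-based method working on formulae avoids.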
Other articles explored the translations from different belief change operators into a single formalism. Rott has shown that severe withdrawal, irrevocable and irrefutable revision can be expressed in terms of revision by comparison [67], natural and lexicographic in terms of bounded revision [69]. Several single-step revisions can be recast in some forms of circumscription [57].
Several computational complexity results about belief revision are known. Eiter and Gottlob [22] proved that most distance-based and syntax-based approaches are \(\Pi ^p_{2}\)-complete in the single-step case. In a further article [23], the same authors proved (among other results) that the same applies to positive right-nested counterfactuals, which are equivalent to a form of iterated revision. Nebel [62] proved a number of results, the most relevant to the present article being that one-step syntactic-lexicographic revision is \(\Delta ^p_{2}\)-complete. This operator can encode lexicographic revision as defined in the iterated case by placing each formula in a separate priority class. Other iterated revisions have a similar degree of complexity [55].
A number of problems are left open. The algorithm requires a SAT solver, which is unavoidable given that the underlying language is propositional logic and SAT expresses its basic problems of satisfiability, mutual consistency, and entailment. However, some restricted languages such as Horn and Krom allow checking satisfiability in polynomial time [74]. As a result, it makes sense to investigate their computational properties under iterated change. The analysis would not be obvious, because underformulae include both disjunctions and conjunctions, which may result in formulae that are neither Horn nor Krom. The Horn restriction has been studied for single-step revisions by Eiter and Gottlob [20], and has recently been considered as a contributor to the semantics of revision [16].
Some iterated belief change operators, such as radical revision (as opposed to the very radical revision considered in this article), make some models “inaccessible” [25, 75]. In terms of total preorders, this amounts to shifting from a partition into ordered classes to a sequence of non-overlapping subsets; the models that are not in any of them are the inaccessible ones. Alternatively, the highest-level class is given the special status of container of the inaccessible models. These operators have not been considered in this article, but the analysis could be extended to them.
Other operators not considered in this article include the ones based on numerical rankings [43, 72, 76, 80] and the bidimensional ones [15, 28, 69]. They allow specifying the strength of a revision either by a number or indirectly, by referring to that of another formula. Either way, revision is by a pair: a formula and an expression of its strength. A preliminary analysis suggests that at least one form of bidimensional change, revision by comparison, can be recast in terms of lexicographic revision and severe antiwithdrawal, at the cost of first determining an underformula and a maxcon of the previous lexicographic revisions. Other two-place operators may be amenable to such reductions. Other recent work includes iterated contraction [6, 50, 73] and operators where conditions on the result are specified, rather than mandating a mechanism for obtaining them [38].
Memoryless revision operators [17, 71] may be treated as if they had memory: this is the case of full meet revision, which is indeed oblivious to the previous history of revisions. The ordering it generates is always \([M\!od(K),M\!od(\lnot K)]\). In spite of its simplicity, it is still useful for characterizing a tabula rasa step of reasoning: forgetting all previously acquired data to start over from a single, simple piece of information.
Operators with full memory [19, 29, 48, 53] require a different analysis, since they work from the complete history of revisions rather than from a total preorder that is modified at each step. The same applies to operators working on structures more complex than total preorders over models [7, 42].
Finally, given that a revision may be performed using different operators, a question is how to decide which one. This is related to merging and non-prioritized revision. An answer may be to use the history of previous revisions to estimate the credibility of a source [56], which affects the kind of incorporation. For example, trustworthy sources produce lexicographic revisions, plausible but not very reliable sources produce natural revisions, and the others produce refinements. Better still, sources providing information that turned out to be valid after all subsequent changes are best treated by lexicographic revisions; sources providing information that turned out to be specific to the current case are formalized by natural revisions. As an alternative, every piece of new information may initially be treated as a natural revision; if observations suggest its generality, it is then promoted to a lexicographic revision.
References
[1] C. E. Alchourrón, P. Gärdenfors, and D. Makinson. 1985. On the logic of theory change: Partial meet contraction and revision functions. J. Symbol. Logic 50 (1985), 510–530.
C. Areces and V. Becher. 2001. Iterable AGM functions. In Frontiers in Belief Revision, M. Williams and H. Rott (Eds.). Springer Netherlands, 261–277.
T. Balyo, M. J. H. Heule, and M. Järvisalo. 2017. SAT competition 2016: Recent developments. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI’17). AAAI Press/The MIT Press, 5061–5063. Retrieved from http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14977.
S. Benferhat, D. Dubois, and H. Prade. 2001. A computational model for belief change and fusing ordered belief bases. In Frontiers in Belief Revision, M. Williams and H. Rott (Eds.). Springer, 109–134.
R. Booth and J. Chandler. 2020. On strengthening the logic of iterated belief revision: Proper ordinal interval operators. Artific. Intell. 285 (2020), 103289.
R. Booth and A. Nittka. 2008. Reconstructing an agent’s epistemic state from observations about its beliefs and non-beliefs. J. Logic Comput. 18, 5 (2008), 755–782.
R. Booth, T. A. Meyer, and K. Wong. 2006. A bad day surfing is better than a good day working: How to revise a total preorder. In Proceedings of the 10th International Conference on Principles of Knowledge Representation and Reasoning (KR’06). AAAI Press/The MIT Press, 230–238.
M. Cadoli, F. M. Donini, P. Liberatore, and M. Schaerf. 2000. Space efficiency of propositional knowledge representation formalisms. J. Artific. Intell. Res. 13, 1 (2000), 1–31.
M. Dalal. 1988. Investigations into a theory of knowledge base revision: Preliminary report. In Proceedings of the Seventh National Conference on Artificial Intelligence (AAAI’88). 475–479.
J. P. Delgrande, D. Dubois, and J. Lang. 2006. Iterated revision as prioritized merging. In Proceedings of the 10th International Conference on Principles of Knowledge Representation and Reasoning (KR’06). 210–220.
T. Eiter and G. Gottlob. 1992. Complexity Results for Disjunctive Logic Programming and Application to Nonmonotonic Logics. Technical Report CD-TR 92/41. Technische Universität Wien, Vienna, Austria, Christian Doppler Labor für Expertensysteme.
T. Eiter and G. Gottlob. 1992. On the complexity of propositional knowledge base revision, updates and counterfactuals. Artific. Intell. 57 (1992), 227–270.
T. Eiter and G. Gottlob. 1996. The complexity of nested counterfactuals and iterated knowledge base revisions. J. Comput. Syst. Sci. 53, 3 (1996), 497–512.
T. Eiter and G. Gottlob. 1997. The complexity class \(\Theta _2^p\): Recent results and applications in AI and modal logic. In Proceedings of the 11th International Symposium on Fundamentals of Computer Theory (FCT’97). Springer, 1–18.
D. M. Gabbay, G. Pigozzi, and J. Woods. 2003. Controlled revision—An algorithmic approach for belief revision. J. Logic Comput. 13, 1 (2003), 3–22.
P. Gärdenfors and D. Makinson. 1988. Revision of knowledge systems using epistemic entrenchment. In Proceedings of the 2nd Conference on Theoretical Aspects of Reasoning about Knowledge (TARK’88). 83–95.
M. Girlando, B. Lellmann, N. Olivetti, and G. L. Pozzato. 2017. Hypersequent calculi for Lewis’ conditional logics with uniformity and reflexivity. In Proceedings of the International Conference on Automated Reasoning with Analytic Tableaux and Related Methods. 131–148.
S. O. Hansson. 2011. Logic of belief revision. In The Stanford Encyclopedia of Philosophy, E. N. Zalta (Ed.). Metaphysics Research Lab, Stanford University.
T. I. Aravanis, P. Peppas, and M.-A. Williams. 2019. Observations on Darwiche and Pearl’s approach for iterated belief revision. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI’19). 1509–1515.
G. Kern-Isberner and D. Huvermann. 2017. What kind of independence do we need for multiple iterated belief change? J. Appl. Logic 22 (2017), 91–119.
S. Konieczny and R. Pino Pérez. 2008. Improvement operators. In Proceedings of the 11th International Conference on Principles of Knowledge Representation and Reasoning (KR’08). AAAI Press/The MIT Press, 177–187.
S. Konieczny and R. Pino Pérez. 2017. On iterated contraction: Syntactic characterization, representation theorem and limitations of the Levi identity. In Proceedings of the 11th International Conference on Scalable Uncertainty Management (SUM’17). Springer, 348–362.
M. Langlois, R. H. Sloan, B. Szörényi, and G. Turán. 2008. Horn complements: Towards Horn-to-Horn belief revision. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI’08). 466–471.
D. Lehmann. 1995. Belief revision, revised. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI’95). 1534–1540.
P. Liberatore. 1997. The complexity of belief update. In Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI’97). 68–73.
P. Liberatore. 1997. The complexity of iterated belief revision. In Proceedings of the 6th International Conference on Database Theory (ICDT’97). 276–290.
T. Meyer, A. Ghose, and S. Chopra. 2002. Syntactic representations of semantic merging operations. In Proceedings of the 7th Pacific Rim International Conference on Artificial Intelligence (PRICAI’02). 620.
B. Nebel. 1998. How hard is it to revise a belief base? In Belief Change—Handbook of Defeasible Reasoning and Uncertainty Management Systems, Vol. 3, D. Dubois and H. Prade (Eds.). Kluwer Academic.
O. Papini. 2001. Iterated revision operations stemming from the history of an agent’s observations. In Frontiers in Belief Revision. Applied Logic Series, Vol. 22. Springer, 279–301.
P. Peppas, A. M. Fotinopoulos, and S. Seremetaki. 2008. Conflicts between relevance-sensitive and iterated belief revision. In Proceedings of the 18th European Conference on Artificial Intelligence (ECAI’08). IOS Press, 85–88.
P. Peppas and M. A. Williams. 2016. Kinetic consistency and relevance in belief revision. In Proceedings of the 15th European Conference on Logics in Artificial Intelligence (JELIA’16). 401–414.
H. Rott. 2003. Coherence and conservatism in the dynamics of belief II: Iterated belief change without dispositional coherence. J. Logic Comput. 13, 1 (2003), 111–145.
H. Rott. 2006. Revision by comparison as a unifying framework: Severe withdrawal, irrevocable revision and irrefutable revision. Theoret. Comput. Sci. 355, 2 (2006), 228–242.
H. Rott. 2009. Shifting priorities: Simple representations for twenty-seven iterated theory change operators. In Towards Mathematical Philosophy, D. Makinson, J. Malinowski, and H. Wansing (Eds.). Trends in Logic, Vol. 28. Springer Netherlands, 269–296.
K. Satoh. 1988. Nonmonotonic reasoning by minimal belief revision. In Proceedings of the International Conference on Fifth Generation Computer Systems (FGCS’88). 455–462.
K. Sauerwald, J. Haldimann, M. von Berg, and C. Beierle. 2020. Descriptor revision for conditionals: Literal descriptors and conditional preservation. In Proceedings of KI-2020: Advances in Artificial Intelligence—43rd German Conference on AI. Springer, 204–218.
K. Sauerwald, G. Kern-Isberner, and C. Beierle. 2020. A conditional perspective for iterated belief contraction. In Proceedings of the 24th European Conference on Artificial Intelligence (ECAI’20). IOS Press, 889–896.
W. Spohn. 1988. Ordinal conditional functions: A dynamic theory of epistemic states. In Causation in Decision, Belief Change, and Statistics. Kluwer Academic, 105–134.
W. Spohn. 1999. Ranking functions, AGM style. In Internet Festschrift for Peter Gärdenfors, B. Hansson, S. Halldén, N.-E. Sahlin, and W. Rabinowicz (Eds.).
F. R. Velázquez-Quesada. 2017. On subtler belief revision policies. In Proceedings of the 6th International Workshop on Logic, Rationality, and Interaction (Lecture Notes in Computer Science, Vol. 10455). Springer, 314–329.
M. Williams. 1994. Transmutations of knowledge systems. In Proceedings of the 4th International Conference on the Principles of Knowledge Representation and Reasoning (KR’94). 619–629.