Area-driven Boolean bi-decomposition by function approximation

Published: 08 November 2024

Abstract

Bi-decomposition rewrites logic functions as the composition of simpler components. It is related to Boolean division, where a given function is rewritten as the product of a divisor and a quotient, but bi-decomposition can be defined for any Boolean operation of two operands. The key questions are how to find a good divisor and then how to compute the quotient. In this article, we select the divisor by approximation of the original function and then characterize by an incompletely specified function the full flexibility of the quotient for each binary operator. We target area-driven exact bi-decomposition, and we apply it to the bi-decomposition of Sum-of-Products (SOP) forms. We report experiments that exhibit significant gains in literals of SOP forms when rewritten as bi-decompositions with respect to the product operator. This suggests the application of this framework to other logic forms and binary operations, both for exact and approximate implementations.

1 Introduction

Bi-decomposition [10, 24, 30, 37, 46] is a well-known form of decomposition, where a Boolean function \(f\) is rewritten as \(f = g \ {\texttt {op}}\ h\) and op is a two-input binary operator. When the operator is conjunction, bi-decomposition can be seen as a special case of Boolean division with an exact quotient [45]: \(f = g \cdot h\), where \(f, g, h\) are respectively the dividend, divisor, and quotient functions. Bi-decomposition may be applied with different objectives: a common objective is size optimization (e.g., measured by literals) as in Reference [6], or delay optimization as in Reference [10] (either technology-independent, measured by the depth of the Boolean network, or timing optimization after technology mapping). When targeting size optimization, we can view bi-decomposition as a way to explore the implementations of a given logic expression by introducing one more level in the circuit and so exploring a larger solution space. For example, the logic expression \(x_1 x_2 x_4 + x_2 x_3 x_4\) of the form AND-OR, with two products and six literals, can be bi-decomposed as \(x_2 x_4 \cdot (x_1 + x_3)\) of the form AND-OR-AND with four literals. In general, by bi-decomposition a \(k\)-level form can be rewritten as a \((k+1)\)-level form with a potentially smaller size, due to more opportunities for factorization and restructuring.
The challenge for bi-decomposition, as for Boolean division, is the following: given \(f\), how to find a good divisor \(g\) and then how to determine the quotient \(h\) (for conjunction, \(f \subseteq g\) and \(f \subseteq h\) are necessary conditions on \(g\) and \(h\)). A common technique for finding divisors has been to search for kernels as factors, which restricts the operation to algebraic division. Moreover, for bi-decomposition the problem has one more degree of freedom, since the goal is to rewrite a given Boolean function \(f\) as \(f = g \ {\texttt {op}}\ h\), where op is a two-input binary operator, of which there are 10 non-trivial ones. So, different choices of op yield different versions of the problem.
While algebraic techniques work well in many applications, Boolean techniques promise to search a larger solution space at a higher computational price. When op is conjunction (conjunctive bi-decomposition), it was suggested in Reference [10] to choose as divisors over-approximations \(g\) of \(f\), and to derive the associated conjuncts \(h\) by Boolean minimization using the don’t care set (dc-set) derived from the divisors \(g\). In this way, one could obtain a sequence of decompositions \(f = g_i \cdot h_i, i = 0, \dots , n\), by pairs of divisors and quotients, in which the logic is redistributed between \(g_i\) and \(h_i\), from \(g_0 = f, h_0 = 1\) to \(g_n = 1, h_n = f\), choosing the best tradeoff according to some objective function.
Following this hint, we cast the problem of bi-decomposition \(f = g \ {\texttt {op}}\ h\) as selecting the divisor \(g\) by an approximation of \(f\), where the binary operation op used in the decomposition determines whether to use over-approximations or under-approximations of \(f\) as divisors \(g\). More precisely, according to the specific op, \(g\) must be an approximation of \(f\) or of its complement \(\overline{f}\), introducing only one type of error: either 0 to 1 complementations of the output bits, or 1 to 0 complementations. The only exceptions are the bi-decompositions based on the XOR operation, where both types of errors can be introduced. Finding good divisors \(g\) from approximations of \(f\) is only part of the task; the next challenge is to compute the quotient: given a function \(f\), an operator op, and a divisor \(g\) obtained as an approximation of \(f\), characterize the complete flexibility of the quotient function \(h\). In Reference [6], we expressed the full quotient by an incompletely specified function \(h\) with the smallest on-set and the largest dc-set such that \(f = g \ {\texttt {op}}\ h\).
The idea behind the full quotient is that, according to the operation used in the bi-decomposition of \(f\), the on-set or the off-set of \(g\), together with the dc-set of \(f\), becomes the dc-set of the function \(h\). This defines the full flexibility available to obtain a more compact representation of \(f\): the larger the on-set (or off-set) of \(g\), the higher the flexibility in the implementation of \(h\). In other words, \(h\) corrects the errors introduced by the approximation \(g\), so that \(g \ {\texttt {op}}\ h\) is an exact alternative representation of \(f\). The methodology of deriving divisors as over- or under-approximations of \(f\) and then computing the full quotient can serve either to explore different exact bi-decompositions of \(f\) for a given type of logic expression, such as AND-OR or XOR-AND-OR forms (to find the best one, e.g., in terms of size, say, literals), or to explore different approximate bi-decompositions of \(f\), to be implemented approximately within a tolerable error rate.
Targeting the first objective of exact bi-decompositions of \(f\), in Reference [6] we studied the bi-decomposition of EXOR-AND-OR forms with op = AND or \(\not\Rightarrow\); here, instead, we report on the bi-decomposition of AND-OR (Sum-of-Products (SOP)) forms with op = AND.
In Section 2, we introduce some basic notation, and in Section 3 we state and prove in detail all the results on the full quotient flexibility for all 10 non-trivial two-input Boolean operators (divided into three classes: AND-like, OR-like, XOR-like). Sections 4 and 5 discuss subtle points of approximations for bi-decomposed forms and their application to SOP synthesis. Experiments for SOP synthesis are discussed in Section 6. Section 7 summarizes conclusions and future work.
This article is an extended version of a previous conference paper [6]. The novelties are as follows: Section 2 on previous work is mostly new; Section 3 on the full quotient for all 10 binary operators has been extended with the complete proofs omitted in Reference [6] for lack of space; Sections 4 and 5 are completely new, as is Section 6, which reports the implementation of SOP bi-decomposition with op = AND. Section 7 concludes with a summary of where we stand and the expected future work.

2 Preliminaries and Related Work

2.1 Approximation

Let \(f\) be an incompletely specified function, depending on \(n\) binary variables, and let \(g\) be a completely specified approximation of \(f\). We denote by \(f^{\it \,on}\) and \(g^{\it \,on}\) the on-set of the functions \(f\) and \(g\), respectively. Moreover, let \(f^{\it \,off}\) and \(g^{\it \,off}\) denote the off-set of the two functions. Finally, \(f^{\it \,dc}\) denotes the don’t care-set of the incompletely specified function \(f\).
We can classify the approximation \(g\) depending on the errors introduced, as follows.
Definition 2.1.
The function \(g\) is a \(0 \rightarrow 1\) approximation (over-approximation) of \(f\) if it is derived by a 0 to 1 complementation of some output bits of \(f\), i.e., by moving some off-set minterms of \(f\) to the on-set, while there are no restrictions on the don’t cares of \(f\). In this case, it holds that \(f^{{\it \,on}} \subseteq g^{{\it \,on}}\).
Definition 2.2.
The function \(g\) is a \(1 \rightarrow 0\) approximation (under-approximation) of \(f\), if it is derived by a 1 to 0 complementation of some output bits of \(f\), i.e., by moving some on-set minterms of \(f\) to the off-set, while there are no restrictions on the don’t cares of \(f\). In this case, it holds that \(g^{{\it \,on}} \subseteq f^{{\it \,on}} \cup f^{{\it \,dc}}\) and, therefore, \(f^{{\it \,off}} \subseteq g^{{\it \,off}}\).
Definition 2.3.
The function \(g\) is a \(0 \leftrightarrow 1\) approximation of \(f\), if it is derived by both 0 to 1 and 1 to 0 complementations of some output bits of \(f\).
Examples of \(0 \rightarrow 1\) approximation and \(1 \rightarrow 0\) approximation are depicted in Figures 1 and 2, respectively.
Fig. 1. The completely specified approximation \(g\) is a \(0 \rightarrow 1\) approximation of the incompletely specified function \(f\).
Fig. 2. The completely specified approximation \(g\) is a \(1 \rightarrow 0\) approximation of the incompletely specified function \(f\).
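To make Definitions 2.1–2.3 concrete, the following minimal sketch (illustrative only, not part of the implementation discussed later) classifies a completely specified \(g\) against an incompletely specified \(f\), with both functions represented as explicit sets of minterms encoded as bit tuples.

```python
# Illustrative sketch: classify a completely specified g w.r.t. an incompletely
# specified f, both given as sets of minterms (bit tuples).

def classify_approximation(f_on, f_off, g_on, universe):
    """Return '0->1', '1->0', '0<->1', or 'exact' (Definitions 2.1-2.3)."""
    g_off = universe - g_on
    raised = f_off & g_on       # off-set minterms of f complemented from 0 to 1
    lowered = f_on & g_off      # on-set minterms of f complemented from 1 to 0
    if not raised and not lowered:
        return "exact"
    if not lowered:
        return "0->1"           # over-approximation: f_on is contained in g_on
    if not raised:
        return "1->0"           # under-approximation: f_off is contained in g_off
    return "0<->1"

# Tiny example over (x1, x2): f_on = {11}, f_off = {00, 01}, f_dc = {10}.
universe = {(0, 0), (0, 1), (1, 0), (1, 1)}
f_on, f_off = {(1, 1)}, {(0, 0), (0, 1)}
g_on = {(1, 1), (0, 1)}          # the off-set minterm 01 is raised to 1
print(classify_approximation(f_on, f_off, g_on, universe))   # prints "0->1"
```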
A \(0 \rightarrow 1\) approximation method in the context of approximate two-level logic synthesis has been proposed in Reference [39], where the authors designed an algorithm with the objective of synthesizing a SOP circuit with fewer literals under a constrained error rate. The main idea of the algorithm is to identify heuristically output values that can be complemented from 0 to 1 to expand products and reduce the number of literals in the final SOP representation (see Section 5 for more details). This approach has been generalized to three-level logic synthesis in Reference [4], where a similar \(0 \rightarrow 1\) approximation heuristic for synthesizing three-level EXOR-AND-OR forms, with fan-in-2 EXOR gates, has been discussed and experimentally evaluated.
A strategy for the approximate synthesis of multi-level circuits is presented in Reference [38]. The method is based on the idea of inserting a stuck-at fault at a node of the circuit and simplifying the circuit by propagating this redundancy; it provides a \(0 \leftrightarrow 1\) approximation of the target function.
Recently, another heuristic for \(0 \rightarrow 1\) two-level approximate logic synthesis has been proposed in Reference [40]. Again, the fundamental operation used to introduce errors is to complement the outputs of some input combinations from 0 to 1, and then to search for an optimal set of input combinations for \(0 \rightarrow 1\) output complementation.
Several other methods have been proposed in the literature; see for instance References [8, 21, 23, 28, 29, 31, 43, 44], mainly for \(0 \leftrightarrow 1\) approximations.
In this analysis of previous works on approximate logic synthesis, we did not find general techniques for deriving under-approximations of functions, i.e., \(1 \rightarrow 0\) approximation heuristics. This fact might be a consequence of the analysis discussed in Reference [39], at least in the context of approximate two-level logic synthesis. Indeed, the authors pointed out that with one \(1 \rightarrow 0\) complement, at most one product can be removed from the original SOP cover. On the contrary, a \(0 \rightarrow 1\) complement can expand many products in the original cover; moreover, an expanded product may entirely cover other products that become redundant and hence can be removed from the SOP cover for the approximate function. Only in Reference [8] is a \(1 \rightarrow 0\) approximation heuristic designed for deriving a D-reducible (Dimension-reducible) approximation of a target Boolean function, i.e., an approximation satisfying certain structural properties.

2.2 Bi-decomposition

A relevant bi-decomposition minimization algorithm was given by Malik, Harrison, and Brayton in Reference [26]. They studied three-level networks of the form
\begin{equation*} f=g_{1}\circ g_{2}, \end{equation*}
where \(g_{1}\) and \(g_{2}\) are SOP forms and \(\circ\) denotes a binary operation. Such an expression can be implemented by a Programmable Logic Device consisting of two Programmable Logic Arrays (PLAs) with a two-input gate (implementing the operation \(\circ\)) at the outputs.
The case of \(\circ =AND\) (called AND-OR-AND networks) was presented in Reference [26], and a faster algorithm was given in Reference [16].
The case of \(\circ =EXOR\) (called AND-OR-EXOR or EX-SOP) has been widely studied (see for details References [11, 12, 13, 14, 15, 17, 18, 19, 33, 35]). An EX-SOP network has a simple three-level architecture, since it contains only a single two-input EXOR gate at the output.
An algorithm for exact minimization of EX-SOP networks is described in Reference [13], limited to functions with up to five variables. Some interesting heuristics are described in References [15, 19, 35], and upper bounds on the number of products of EX-SOP forms are studied in Reference [17] and Reference [11]. An estimation metric that measures whether an input function is suitable for EX-SOP minimization is also developed in Reference [19]. Furthermore, there exist very few examples of exact minimization for three-level networks in the literature [13, 32]. Indeed, the research on three-level logic synthesis has not yet reached results equivalent to the ones obtained for two-level logic synthesis. Finally, the OR-AND-OR networks are described in Reference [34], and later studied in Reference [36].
More recently, some papers [2, 3, 9, 22, 25] study several new aspects of bi-decomposition and decomposition such as approximation or relations with And-Inverter Graphs (AIGs).

3 Bi-decomposition with Approximation

Let \(f : \lbrace 0,1\rbrace ^n \rightarrow \lbrace 0,1,-\rbrace\) be the function that must be synthesized. Let \(g\) be a (completely specified) approximation of \(f\) or of its complement \(\overline{f}\). Note that we define the complement of \(f\) as the function \(\overline{f}\) such that
\[\begin{eqnarray*} \overline{f}^{{\it \,on}} = f^{{\it \,off}}\\ \overline{f}^{{\it \,off}} = f^{{\it \,on}}\\ \overline{f}^{{\it \,dc}} = f^{{\it \,dc}}\,. \end{eqnarray*}\]
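For illustration, an incompletely specified function can be stored as three disjoint minterm sets; the following sketch (a simplified representation, not the BDD-based one used in our implementation in Section 6) shows that the complement defined above simply swaps the on-set and the off-set while leaving the dc-set unchanged.

```python
# Illustrative sketch: an incompletely specified function (ISF) as three
# disjoint minterm sets, with the complement defined as above.
from dataclasses import dataclass

@dataclass(frozen=True)
class ISF:
    on: frozenset
    off: frozenset
    dc: frozenset

    def complement(self):
        return ISF(on=self.off, off=self.on, dc=self.dc)

f = ISF(on=frozenset({(1, 1)}), off=frozenset({(0, 0)}), dc=frozenset({(0, 1), (1, 0)}))
assert f.complement().on == f.off and f.complement().dc == f.dc
```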
Given \(f\) and \(g\), and a two-input Boolean operator op, we want to compute an incompletely specified Boolean function \(h\) such that \(f\) can be represented in bi-decomposed form as \(f = g \ {\texttt {op}}\ h\).
In our analysis, we only consider the 10 (of 16) binary operations depending on both input variables, described in Table 1. Thus, we will not consider the two constant operations, as well as the four degenerate operations depending on only one input.
Operator | Bi-decomposed form
AND | \(f = g \cdot h\)
\(\not\Leftarrow\) | \(f = \overline{g} \cdot h\)
\(\not\Rightarrow\) | \(f = g \cdot \overline{h}\)
NOR | \(f = \overline{g + h} = \overline{g} \cdot \overline{h}\)
OR | \(f = g + h\)
\(\Rightarrow\) | \(f = \overline{g} + h\)
\(\Leftarrow\) | \(f = g + \overline{h}\)
NAND | \(f = \overline{g\cdot h} = \overline{g} + \overline{h}\)
XOR | \(f = g \oplus h\)
XNOR | \(f = \overline{g \oplus h} = g\, \overline{\oplus }\, h\)
Table 1. The 10 Binary Operations Depending on Both Input Variables
Applying De Morgan’s laws as shown in the table, we can see that the 10 operations naturally divide into three sets as follows:
the set of the four operations based on the binary AND applied to \(g\), \(h\) or to their complements (AND, \(\not\Leftarrow\), \(\not\Rightarrow\), NOR);
the set of the four operations based on the binary OR applied to \(g\), \(h\) or to their complements (OR, \(\Rightarrow\), \(\Leftarrow\), NAND);
the set containing the two operations based on the exclusive OR (XOR, XNOR).
We now describe how to approximate \(f\) with \(g\), and how to derive the quotient function \(h\) for each set of operations.
In the following exposition, we use \(f^{{\it \,on}}, g^{{\it \,on}}\), and \(h^{{\it \,on}}\) to denote the on-sets of the three functions \(f\), \(g\), and \(h\), \(f^{{\it \,off}}\), \(g^{{\it \,off}}\), and \(h^{{\it \,off}}\) to denote their off-sets, and \(f^{{\it \,dc}}\) and \(h^{{\it \,dc}}\) to denote the dc-sets of the incompletely specified functions \(f\) and \(h\) (observe that \(g\) is completely specified).

3.1 Decompositions based on AND, \(\not\Leftarrow\), \(\not\Rightarrow\), NOR

Let us first consider the AND binary operation. To represent \(f\) as \(g \cdot h\), it is necessary that
\(f^{{\it \,on}} \subseteq g^{{\it \,on}}\),
\(f^{{\it \,on}} \subseteq h^{{\it \,on}}\) .
In fact, both \(g\) and \(h\) must be equal to 1 on the on-set minterms of \(f\).
Moreover, \(h\) can be equal to 1 on all off-set minterms of \(g\), while it must get the value 0 where \(f\) evaluates to 0 and \(g\) to 1. Finally, \(h\) can get any value on the dc-set minterms of \(f\), independently of the value of \(g\). To maximize the don’t care set of \(h\), we have that
(i)
\(g\) must be a \(0 \rightarrow 1\) approximation of \(f\), so that \(f^{{\it \,on}} \subseteq g^{{\it \,on}}\), while there are no restrictions on the don’t cares of \(f\).
(ii)
\(h\) is the incompletely specified function whose on-set is equal to the on-set of \(f\), while the dc-set contains all off-set minterms of \(g\) and all dc-set minterms of \(f\), i.e.,
\[\begin{eqnarray*} &\ &h^{{\it \,on}} = f^{{\it \,on}}\\ &\ &h^{{\it \,dc}} = g^{{\it \,off}} \cup f^{{\it \,dc}}\\ &\ &h^{{\it \,off}} = g^{{\it \,on}} \cap f^{{\it \,off}}\,. \end{eqnarray*}\]
Note that \(h^{{\it \,off}}\) coincides precisely with the set of minterms on which \(f^{\it \,on}\) and \(g^{\it \,on}\) differ, i.e., it describes the error introduced by the approximation, as shown in Figure 3. This implies that the more accurate is the approximation \(g\), the smaller is the off-set of the function \(h\) and the larger is \(h^{\it \,dc}\), thus providing a high flexibility that can be exploited to get a compact representation for the function \(h\) and, consequently, for the target function \(f\). We prove the correctness of our analysis in the following lemma.
Fig. 3. The completely specified function \(g\) and the incompletely specified function \(h\) such that \(f=g \cdot h\).
Lemma 3.1.
Let \(f\) be an incompletely specified function depending on \(n\) binary variables, let \(g\) be a completely specified \(0 \rightarrow 1\) approximation of \(f\), and let \(h\) be an incompletely specified function satisfying \(h^{{\it \,on}} = f^{{\it \,on}}\) and \(h^{{\it \,dc}} = g^{{\it \,off}} \cup f^{{\it \,dc}}\). Then \(f = g \cdot h\).
Proof.
We first show that for any minterm \(w \in f^{\it \,on}\), \(g(w) \cdot h(w) = 1\). Observe that \(f^{{\it \,on}} \subseteq g^{{\it \,on}}\), as \(g\) is a \(0 \rightarrow 1\) approximation of \(f\). Moreover \(h^{{\it \,on}} = f^{{\it \,on}}\) by hypothesis. Thus, \(w \in g^{\it \,on}\) and \(w \in h^{\it \,on}\), and we immediately have \(g(w) \cdot h(w) = 1\).
Now suppose that \(w \in f^{\it \,off}\). If \(w \in g^{\it \,on}\), then \(w\) belongs to the off-set of \(h\) by construction (as it can be neither in \(h^{\it \,on}\) nor in \(h^{\it \,dc}\)), and we have \(g(w) \cdot h(w) = 0\). Otherwise, if \(w \in g^{\it \,off}\), then, independently of the value of \(h\) on \(w\), we have \(g(w) \cdot h(w) = 0\), and the thesis follows. □
From the proof of this lemma it follows that \(h\) is the quotient function that guarantees the maximum flexibility in the decomposition:
Corollary 3.2.
Given an incompletely specified function \(f\) and an over-approximation \(g\) for \(f\), the function \(h\) with on-set \(h^{{\it \,on}} = f^{{\it \,on}}\) and dc-set \(h^{{\it \,dc}} = g^{{\it \,off}} \cup f^{{\it \,dc}}\) is the quotient function with the biggest dc-set satisfying \(f = g \cdot h\), for the given function \(g\).
Example 3.3.
To provide more intuition for the proposed approach, we discuss a simple example for the AND bi-decomposition \(f = g \cdot h\) of the function \(f\) represented by the Karnaugh map in Figure 4(a). A minimal SOP representation of \(f\) is given by \(f^{SOP} = x_1 x_2 x_4 + x_2 x_3 x_4\), which has six literals. A \(0 \rightarrow 1\) approximation of \(f\) can be obtained by simply adding the minterm \(\overline{x}_1 x_2 \overline{x}_3 x_4\) to the on-set of \(f\). In this way we obtain the function \(g\) depicted in Figure 4(b), which has a SOP representation \(g^{SOP} = x_2x_4\) containing only two literals. Applying Lemma 3.1, we then derive the function \(h\), described in Figure 4(c). Thanks to the large dc-set of \(h\), we can obtain a very compact SOP representation for \(h\), with only two literals: \(h^{SOP} = x_1 + x_3\). The overall bi-decomposed form is given by \(f = g \cdot h = x_2 x_4 \cdot (x_1 + x_3)\), with four literals.
Fig. 4. Karnaugh maps for the function \(f=x_1 x_2 x_4 + x_2 x_3 x_4\) (a), for its approximation \(g = x_2x_4\) (b), and for the function \(h = x_1 + x_3\) such that \(f = g \cdot h\) (c).
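The following sketch reproduces Example 3.3 with explicit minterm sets (a simplified illustration; our implementation uses BDDs, as discussed in Section 6): it builds the quotient \(h\) of Lemma 3.1 from \(f\) and the over-approximation \(g = x_2 x_4\), checks that the cover \(x_1 + x_3\) is consistent with \((h^{\it \,on}, h^{\it \,dc})\), and verifies \(f = g \cdot h\) on all 16 minterms.

```python
# Illustrative check of Example 3.3 over minterms (x1, x2, x3, x4).
from itertools import product

universe = set(product((0, 1), repeat=4))

f_on  = {m for m in universe if (m[0] and m[1] and m[3]) or (m[1] and m[2] and m[3])}
f_off = universe - f_on                    # f is completely specified here (empty dc-set)
g_on  = {m for m in universe if m[1] and m[3]}   # g = x2 x4, adds only the minterm 0101

# Full quotient for op = AND (Lemma 3.1 / Table 2).
h_on  = f_on
h_dc  = universe - g_on                    # g_off (f_dc is empty)
h_off = g_on & f_off                       # the single error minterm {(0, 1, 0, 1)}

def h(m):                                  # the chosen cover h = x1 + x3
    return m[0] or m[2]

assert all(m in h_on or m in h_dc for m in universe if h(m))       # h never covers h_off
assert all(h(m) for m in h_on)                                     # h covers all of h_on
assert all((m in f_on) == bool((m in g_on) and h(m)) for m in universe)  # f = g . h
print("f = (x2 x4) . (x1 + x3) verified on all 16 minterms")
```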
The function \(h\) for the bi-decomposition of \(f\) with respect to the \(\not\Leftarrow\), \(\not\Rightarrow\), and NOR operations can be derived in a similar way, applying the previous considerations to \(\overline{g}\) and \(h\), \(g\) and \(\overline{h}\), and to \(\overline{g}\) and \(\overline{h}\), respectively. The definitions of the on-, off-, and dc-set of the function \(h\) for all three cases are reported in Table 2, and their correctness is proved as follows.
Operator | Bi-decomposed form | Approximation function \(g\) | \(h^{{\it \,on}}\) | \(h^{{\it \,dc}}\) | \(h^{{\it \,off}}\)
AND | \(f = g \cdot h\) | \(0 \rightarrow 1\) approximation of \(f\) (i.e., \(f^{{\it \,on}} \subseteq g^{{\it \,on}}\)) | \(f^{{\it \,on}}\) | \(g^{{\it \,off}}\cup f^{\it \,dc}\) | \(g^{{\it \,on}}\cap f^{{\it \,off}}\)
\(\not\Leftarrow\) | \(f = \overline{g} \cdot h\) | \(1 \rightarrow 0\) approximation of \(\overline{f}\) (i.e., \(f^{{\it \,on}} \subseteq g^{{\it \,off}}\)) | \(f^{{\it \,on}}\) | \(g^{{\it \,on}}\cup f^{\it \,dc}\) | \(g^{{\it \,off}}\cap f^{{\it \,off}}\)
\(\not\Rightarrow\) | \(f = g \cdot \overline{h}\) | \(0 \rightarrow 1\) approximation of \(f\) (i.e., \(f^{{\it \,on}} \subseteq g^{{\it \,on}}\)) | \(g^{{\it \,on}}\cap f^{{\it \,off}}\) | \(g^{{\it \,off}}\cup f^{\it \,dc}\) | \(f^{{\it \,on}}\)
NOR | \(f = \overline{g} \cdot \overline{h}\) | \(1 \rightarrow 0\) approximation of \(\overline{f}\) (i.e., \(f^{{\it \,on}} \subseteq g^{{\it \,off}}\)) | \(g^{{\it \,off}}\cap f^{{\it \,off}}\) | \(g^{{\it \,on}}\cup f^{\it \,dc}\) | \(f^{{\it \,on}}\)
OR | \(f = g + h\) | \(1 \rightarrow 0\) approximation of \(f\) (i.e., \(f^{{\it \,off}} \subseteq g^{{\it \,off}}\)) | \(g^{{\it \,off}}\cap f^{{\it \,on}}\) | \(g^{{\it \,on}}\cup f^{\it \,dc}\) | \(f^{{\it \,off}}\)
\(\Rightarrow\) | \(f = \overline{g} + h\) | \(0 \rightarrow 1\) approximation of \(\overline{f}\) (i.e., \(f^{{\it \,off}} \subseteq g^{{\it \,on}}\)) | \(g^{{\it \,on}}\cap f^{{\it \,on}}\) | \(g^{{\it \,off}}\cup f^{\it \,dc}\) | \(f^{{\it \,off}}\)
\(\Leftarrow\) | \(f = g + \overline{h}\) | \(1 \rightarrow 0\) approximation of \(f\) (i.e., \(f^{{\it \,off}} \subseteq g^{{\it \,off}}\)) | \(f^{{\it \,off}}\) | \(g^{{\it \,on}}\cup f^{\it \,dc}\) | \(g^{{\it \,off}}\cap f^{{\it \,on}}\)
NAND | \(f = \overline{g} + \overline{h}\) | \(0 \rightarrow 1\) approximation of \(\overline{f}\) (i.e., \(f^{{\it \,off}} \subseteq g^{{\it \,on}}\)) | \(f^{{\it \,off}}\) | \(g^{{\it \,off}}\cup f^{\it \,dc}\) | \(g^{{\it \,on}}\cap f^{{\it \,on}}\)
XOR | \(f = g \oplus h\) | \(0 \leftrightarrow 1\) approximation of \(f\) | \((g^{{\it \,on}} \cap f^{{\it \,off}})\cup (g^{{\it \,off}} \cap f^{{\it \,on}})\) | \(f^{\it \,dc}\) | \((g^{{\it \,off}} \cap f^{{\it \,off}})\cup (g^{{\it \,on}} \cap f^{{\it \,on}})\)
XNOR | \(f = g\, \overline{\oplus }\, h\) | \(0 \leftrightarrow 1\) approximation of \(\overline{f}\) | \((g^{{\it \,off}} \cap f^{{\it \,off}})\cup (g^{{\it \,on}} \cap f^{{\it \,on}})\) | \(f^{\it \,dc}\) | \((g^{{\it \,on}} \cap f^{{\it \,off}})\cup (g^{{\it \,off}} \cap f^{{\it \,on}})\)
Table 2. Functions \(g\) and \(h\) Occurring in the Bi-decomposed Forms Based on the 10 Binary Operations Depending on Both Inputs
Lemma 3.4.
Let \(f\) be an incompletely specified function depending on \(n\) binary variables, and let \(g\) be a completely specified approximation of \(f\).
(1)
If \(g\) is a \(1 \rightarrow 0\) approximation of \(\overline{f}\) and \(h\) is an incompletely specified function satisfying \(h^{{\it \,on}} = f^{{\it \,on}}\) and \(h^{{\it \,dc}} = g^{{\it \,on}} \cup f^{{\it \,dc}}\), then \(f = \overline{g} \cdot h\), i.e., \(f= (g \not\Leftarrow h)\).
(2)
If \(g\) is a \(0 \rightarrow 1\) approximation of \(f\) and \(h\) is an incompletely specified function satisfying \(h^{{\it \,on}} = g^{\it \,on}\cap f^{{\it \,off}}\) and \(h^{{\it \,dc}} = g^{{\it \,off}} \cup f^{{\it \,dc}}\), then \(f = g \cdot \overline{h}\), i.e., \(f= (g \not\Rightarrow h)\).
(3)
If \(g\) is a \(1 \rightarrow 0\) approximation of \(\overline{f}\) and \(h\) is an incompletely specified function satisfying \(h^{{\it \,on}} = g^{\it \,on}\cap f^{{\it \,off}}\) and \(h^{{\it \,dc}} = g^{{\it \,on}} \cup f^{{\it \,dc}}\), then \(f = \overline{g} \cdot \overline{h}\), i.e., \(f= g \mbox{ NOR } h\).
Proof.
We only prove the correctness of the decomposition based on \(\not\Leftarrow\). The correctness of the other two decompositions can be proved in a similar way.
First, suppose that \(w \in f^{\it \,on}\). The fact that \(g\) is a \(1 \rightarrow 0\) approximation of \(\overline{f}\) implies that some on-set minterms of \(\overline{f}\) have been moved to its off-set, so that \(g^{\it \,on}\subseteq \overline{f}^{\it \,on}\cup \overline{f}^{\it \,dc}\), i.e., \(g^{\it \,on}\subseteq f^{\it \,off}\cup f^{\it \,dc}\), which is equivalent to \(f^{\it \,on}\subseteq g^{\it \,off}\) (recall that \(g\) is completely specified). This in turn implies that \(w \in g^{\it \,off}= \overline{g}^{\it \,on}\). Moreover, since \(h^{\it \,on}= f^{\it \,on}\), we have that \(w \in h^{\it \,on}\), and we immediately derive that \(\overline{g}(w) \cdot h(w) = 1\).
Now suppose that \(w \in f^{\it \,off}\). Since \(g^{\it \,on}\subseteq f^{\it \,off}\cup f^{\it \,dc}\), \(w\) might belong either to \(g^{\it \,on}\) or to \(g^{\it \,off}\). In the first case, we immediately have that \(\overline{g}(w) \cdot h(w) = 0\), independently of the value of \(h\) on \(w\). Otherwise, if \(w \in g^{\it \,off}\), then \(w\) belongs to \(h^{\it \,off}\) by construction. In fact, \(w\) can be neither in \(h^{\it \,on}\), which is equal to \(f^{\it \,on}\), nor in \(h^{\it \,dc}\), which is equal to \(g^{{\it \,on}} \cup f^{{\it \,dc}}\). Thus, we have \(\overline{g}(w) \cdot h(w) = 0\), and the thesis follows. □
As before, the functions \(h\) used in the bi-decompositions guarantee the maximum flexibility thanks to the definition of their dc-sets:
Corollary 3.5.
The functions \(h\) defined as in Lemma 3.4 are the functions with the biggest dc-set satisfying the bi-decompositions with respect to the \(\not\Leftarrow\), \(\not\Rightarrow\), and NOR operations, for a given incompletely specified function \(f\) and its approximations \(g\).
From Table 2 we can observe that, depending on the specific operation used in the decomposition, \(h^{\it \,on}\) or \(h^{\it \,off}\) describe the differences between \(f\) or \(\overline{f}\) and their approximation \(g\). Thus, the more accurate is the approximation \(g\), the smaller will be \(h^{\it \,on}\) or \(h^{\it \,off}\).

3.2 Decompositions Based on OR, \(\Rightarrow\), \(\Leftarrow\), NAND

We first consider the OR binary operation. To represent \(f\) as \(g + h\) it is necessary that \(f^{{\it \,off}} \subseteq g^{{\it \,off}}\) and \(f^{{\it \,off}} \subseteq h^{{\it \,off}}\). In fact, both \(g\) and \(h\) must be equal to 0 on the off-set minterms of \(f\); otherwise, \(g+h\) would be equal to 1. Moreover, \(h\) must be equal to 1 on all on-set minterms of \(f\) where the approximation \(g\) is equal to 0, while it can get any value on the dc-set minterms of \(f\), independently of the value of \(g\). To maximize the don’t care set of \(h\), we have that
(i)
\(g\) is a \(1 \rightarrow 0\) approximation of \(f\), so that \(f^{{\it \,off}} \subseteq g^{{\it \,off}}\). Moreover, \(g\) can take either the value 0 or the value 1 on both the on-set and the dc-set of \(f\).
(ii)
\(h\) is the incompletely specified function whose on-set contains all on-set minterms of \(f\) where \(g\) is equal to 0, while the dc-set contains all on-set minterms of \(g\) and all dc-set minterms of \(f\), i.e.,
\[\begin{eqnarray*} &\ &h^{{\it \,on}} = g^{{\it \,off}} \cap f^{{\it \,on}} \\ &\ &h^{{\it \,dc}} = g^{{\it \,on}} \cup f^{\it \,dc}\\ &\ &h^{{\it \,off}} = f^{{\it \,off}}\,. \end{eqnarray*}\]
Observe that, in this case, \(h^{{\it \,on}}\) describes the error introduced by the approximation \(g\), while \(h^{\it \,off}\) coincides with the off-set of the target function \(f\), as depicted in Figure 5. So, if the quality of the approximation is good, \(h\) will have a limited number of on-set minterms, and a dc-set bigger than \(f^{\it \,dc}\). We prove the correctness of our analysis in the following lemma.
Fig. 5. The completely specified function \(g\) and the incompletely specified function \(h\) such that \(f=g + h\).
Lemma 3.6.
Let \(f\) be an incompletely specified function depending on \(n\) binary variables, let \(g\) be a completely specified \(1 \rightarrow 0\) approximation of \(f\), and let \(h\) be an incompletely specified function satisfying \(h^{{\it \,on}} = g^{\it \,off}\cap f^{{\it \,on}}\) and \(h^{{\it \,dc}} = g^{{\it \,on}} \cup f^{{\it \,dc}}\). Then \(f = g + h\).
Proof.
We first show that for any minterm \(w \in f^{\it \,on}\), \(g(w) + h(w) = 1\). Observe that \(f^{{\it \,off}} \subseteq g^{{\it \,off}}\), i.e., \(g^{{\it \,on}} \subseteq f^{{\it \,on}} \cup f^{{\it \,dc}}\), as \(g\) is a \(1 \rightarrow 0\) approximation of \(f\). Moreover, \(h^{{\it \,on}} = g^{\it \,off}\cap f^{{\it \,on}}\) by hypothesis. Thus, either \(w \in g^{\it \,on}\) or \(w \in h^{\it \,on}\), and we immediately have \(g(w) + h(w) = 1\). Now, suppose that \(w \in f^{\it \,off}\). Then, \(w \in g^{\it \,off}\), since \(f^{{\it \,off}} \subseteq g^{{\it \,off}}\). Moreover by construction, \(w\) belongs to the off-set of \(h\), as it can be neither in \(h^{\it \,on}\) (which is equal to \(g^{\it \,off}\cap f^{{\it \,on}}\)), nor in \(h^{\it \,dc}\) (equal to \(g^{{\it \,on}} \cup f^{{\it \,dc}})\). Thus, \(g(w) + h(w) = 0\) and the thesis follows. □
As in the previous case, the proof of the lemma implies that \(h\) is the function that guarantees the maximum flexibility in the decomposition:
Corollary 3.7.
The function \(h\) defined as in Lemma 3.6 is the function with the biggest dc-set satisfying the bi-decompositions with respect to the OR operation, for a given incompletely specified function \(f\) and its under-approximations \(g\).
The function \(h\) for the bi-decomposition of \(f\) with respect to the \(\Rightarrow\), \(\Leftarrow\), and NAND operators can be derived in a similar way, applying the previous considerations to \(\overline{g}\) and \(h\), \(g\) and \(\overline{h}\), and to \(\overline{g}\) and \(\overline{h}\), respectively. The definitions of the on-, off-, and dc-set of the function \(h\) for these cases are shown in Table 2, and the correctness is proved in the following lemma.
Lemma 3.8.
Let \(f\) be an incompletely specified function depending on \(n\) binary variables, and let \(g\) be a completely specified approximation of \(f\).
(1)
If \(g\) is a \(0 \rightarrow 1\) approximation of \(\overline{f}\) and \(h\) is an incompletely specified function satisfying \(h^{{\it \,on}} = g^{\it \,on}\cap f^{{\it \,on}}\) and \(h^{{\it \,dc}} = g^{{\it \,off}} \cup f^{{\it \,dc}}\), then \(f = \overline{g} + h\), i.e., \(f= (g \Rightarrow h)\).
(2)
If \(g\) is a \(1 \rightarrow 0\) approximation of \(f\) and \(h\) is an incompletely specified function satisfying \(h^{{\it \,on}} = f^{{\it \,off}}\) and \(h^{{\it \,dc}} = g^{{\it \,on}} \cup f^{{\it \,dc}}\), then \(f = g + \overline{h}\), i.e., \(f= (g \Leftarrow h)\).
(3)
If \(g\) is a \(0 \rightarrow 1\) approximation of \(\overline{f}\) and \(h\) is an incompletely specified function satisfying \(h^{{\it \,on}} = f^{{\it \,off}}\) and \(h^{{\it \,dc}} = g^{{\it \,off}} \cup f^{{\it \,dc}}\), then \(f = \overline{g} + \overline{h}\), i.e., \(f= g \mbox{ NAND } h\).
Proof.
We only prove the correctness of the decomposition based on the \(NAND\). The correctness of the other two decompositions can be proved in a similar way.
Suppose that \(w \in f^{\it \,on}\). The fact that \(g\) is a \(0 \rightarrow 1\) approximation of \(\overline{f}\) implies that some off-set minterms of \(\overline{f}\) have been moved to its on-set, so that \(\overline{f}^{\it \,on}\subseteq g^{\it \,on}\), i.e., \(f^{\it \,off}\subseteq g^{\it \,on}\). Thus, \(w\) might belong either to \(g^{\it \,on}\) or to \(g^{\it \,off}\). If \(w \in g^{\it \,on}\), then \(w\) belongs to the off-set of \(h\) by construction, as it can be neither in \(h^{\it \,on}\), which is equal to \(f^{\it \,off}\), nor in \(h^{\it \,dc}\), which is equal to \(g^{{\it \,off}} \cup f^{{\it \,dc}}\). Thus, we have \(\overline{g(w)} + \overline{h(w)} = 1\), and the thesis follows. Otherwise, if \(w \in g^{\it \,off}\), then \(\overline{g(w)} + \overline{h(w)} = 1\), independently of the value of \(h\) on \(w\). Now, suppose that \(w \in f^{\it \,off}\). Since \(f^{\it \,off}\subseteq g^{\it \,on}\), \(w\) belongs to \(g^{\it \,on}\). Moreover, \(w\) also belongs to the on-set of \(h\), as by construction \(h^{\it \,on}= f^{\it \,off}\). Thus, we have \(\overline{g(w)} + \overline{h(w)} = 0\), and the thesis follows. □
As before, the functions \(h\) guarantee the maximum flexibility thanks to the definition of their dc-sets:
Corollary 3.9.
The functions \(h\) defined as in Lemma 3.8 are the functions with the biggest dc-set satisfying the bi-decompositions with respect to the \(\Rightarrow\), \(\Leftarrow\), and NAND operations.

3.3 Decompositions Based on XOR and XNOR

If we want to represent \(f\) as the XOR between its approximation \(g\) and the function \(h\), so that \(f = g \oplus h\), then the linearity of the XOR operator implies that \(h\) must be defined as \(h = g \oplus f\). Indeed, where \(f\) and \(g\) assume the same (specified) value, \(h\) must evaluate to 0, while \(h\) must be equal to 1 where \(f\) and \(g\) differ. Finally, \(h\) can get any value on the dc-set of \(f\), independently of the value of \(g\). Thus,
(i)
\(g\) can be any approximation of \(f\), derived by both \(0 \rightarrow 1\) and \(1 \rightarrow 0\) complementations of some output bits of \(f\).
(ii)
\(h^{\it \,on}\) describes the error introduced by the approximation:
\[\begin{eqnarray*} &\ &h^{{\it \,on}} = (g^{{\it \,on}} \cap f^{{\it \,off}})\cup (g^{{\it \,off}} \cap f^{{\it \,on}})\\ &\ &h^{{\it \,dc}} = f^{\it \,dc}\\ &\ &h^{{\it \,off}} = (g^{{\it \,off}} \cap f^{{\it \,off}})\cup (g^{{\it \,on}} \cap f^{{\it \,on}})\,. \end{eqnarray*}\]
Analogously, we can observe that to represent \(f\) as \(g \mbox{ XNOR } h\), it is necessary that
\[\begin{eqnarray*} &\ &h^{{\it \,on}} = (g^{{\it \,off}} \cap f^{{\it \,off}})\cup (g^{{\it \,on}} \cap f^{{\it \,on}})\\ &\ &h^{{\it \,dc}} = f^{\it \,dc}\\ &\ &h^{{\it \,off}} = (g^{{\it \,on}} \cap f^{{\it \,off}})\cup (g^{{\it \,off}} \cap f^{{\it \,on}})\,. \end{eqnarray*}\]
In this case, \(g\) is a \(0 \leftrightarrow 1\) approximation of \(\overline{f}\), whose errors are described by \(h^{\it \,off}\), as shown in Figure 6.
Fig. 6. The completely specified function \(g\) and the incompletely specified function \(h\) such that \(f=g \oplus h\).
We conclude by proving the correctness of our approach for these last two binary operations.
Lemma 3.10.
Let \(f\) be an incompletely specified function depending on \(n\) binary variables, and let \(g\) be a completely specified approximation of \(f\).
(1)
If \(g\) is a \(0 \leftrightarrow 1\) approximation of \(f\), and \(h\) is an incompletely specified function s.t. \(h^{{\it \,on}} = (g^{{\it \,on}} \cap f^{{\it \,off}})\cup (g^{{\it \,off}} \cap f^{{\it \,on}})\) and \(h^{{\it \,dc}} = f^{{\it \,dc}}\), then \(f = g \oplus h\), i.e., \(f= g \mbox{ XOR } h\).
(2)
If \(g\) is a \(0 \leftrightarrow 1\) approximation of \(\overline{f}\), and \(h\) is an incompletely specified function satisfying \(h^{{\it \,on}} = (g^{{\it \,off}} \cap f^{{\it \,off}})\cup (g^{{\it \,on}} \cap f^{{\it \,on}})\) and \(h^{{\it \,dc}} = f^{{\it \,dc}}\), then \(f = g \ \overline{\oplus }\ h\), i.e., \(f= g \mbox{ XNOR } h\).
Proof.
Let us first consider the decomposition based on the XOR. Suppose that \(w \in f^{\it \,on}\). Then, the fact that \(g\) is a \(0 \leftrightarrow 1\) approximation of \(f\) implies that \(w\) might belong either to \(g^{\it \,on}\) or to \(g^{\it \,off}\). If \(w \in g^{\it \,on}\), then \(w\) belongs to the off-set of \(h\) by construction, and we have \({g(w)} \oplus {h(w)} = 1 \oplus 0 = 1\). If instead \(w \in g^{\it \,off}\), then \(w\) belongs to the on-set of \(h\), and we have \({g(w)} \oplus {h(w)} = 0 \oplus 1 = 1\). Now, suppose \(w \in f^{\it \,off}\); as before \(w\) might belong either to \(g^{\it \,on}\) or to \(g^{\it \,off}\). If \(w \in g^{\it \,on}\), then \(w\) belongs to the on-set of \(h\) by construction, and we have \({g(w)} \oplus {h(w)} = 1 \oplus 1 = 0\). If instead \(w \in g^{\it \,off}\), then \(w\) belongs to the off-set of \(h\), and we have \({g(w)} \oplus {h(w)} = 0 \oplus 0 = 0\). Thus, the thesis follows. Let us now consider the decomposition based on the XNOR. Suppose that \(w \in f^{\it \,on}\). If \(w \in g^{\it \,on}\), then \(w\) belongs to the on-set of \(h\), and we have \({g(w)}\ \overline{\oplus }\ {h(w)} = 1 \ \overline{\oplus }\ 1 = 1\). If instead \(w \in g^{\it \,off}\), then \(w\) belongs to the off-set of \(h\), and we have \({g(w)} \overline{\oplus }{h(w)} = 0 \ \overline{\oplus }\ 0 = 1\). Now, let \(w \in f^{\it \,off}\). If \(w \in g^{\it \,on}\), then \(w\) belongs to \(h^{{\it \,off}}\), and we have \({g(w)}\ \overline{\oplus }\ {h(w)} = 1\ \overline{\oplus }\ 0 = 0\). Finally, if \(w \in g^{\it \,off}\), then \(w\in h^{\it \,on}\), and we have \({g(w)} \ \overline{\oplus }\ {h(w)} = 0 \ \overline{\oplus }\ 1 = 0\). Thus, the thesis follows. □
The overall results of this analysis, together with the definitions of the on-, off-, and dc-set of the function \(h\) are summarized in Table 2.
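The rows of Table 2 translate directly into set operations on the on-, off-, and dc-sets. The following sketch (a simplified illustration using explicit minterm sets and hypothetical operator names; our implementation in Section 6 performs the same operations on BDDs) returns the full quotient \((h^{\it \,on}, h^{\it \,dc}, h^{\it \,off})\) for each of the 10 operators, given an admissible approximation \(g\):

```python
# Illustrative sketch of Table 2: f is given by (f_on, f_off, f_dc) and the
# completely specified approximation g by (g_on, g_off), all as minterm sets.
# The operator names are just labels for the rows of Table 2.

def full_quotient(op, f_on, f_off, f_dc, g_on, g_off):
    """Return (h_on, h_dc, h_off) such that f = g op h, assuming g satisfies
    the approximation condition of the corresponding row of Table 2."""
    table = {
        "AND":            (f_on,         g_off | f_dc, g_on & f_off),   # g: 0->1 approx. of f
        "NOT_IMPLIED_BY": (f_on,         g_on | f_dc,  g_off & f_off),  # f = not(g) . h
        "NOT_IMPLIES":    (g_on & f_off, g_off | f_dc, f_on),           # f = g . not(h)
        "NOR":            (g_off & f_off, g_on | f_dc, f_on),
        "OR":             (g_off & f_on,  g_on | f_dc, f_off),          # g: 1->0 approx. of f
        "IMPLIES":        (g_on & f_on,  g_off | f_dc, f_off),          # f = not(g) + h
        "IMPLIED_BY":     (f_off,        g_on | f_dc,  g_off & f_on),   # f = g + not(h)
        "NAND":           (f_off,        g_off | f_dc, g_on & f_on),
        "XOR":  ((g_on & f_off) | (g_off & f_on), f_dc, (g_off & f_off) | (g_on & f_on)),
        "XNOR": ((g_off & f_off) | (g_on & f_on), f_dc, (g_on & f_off) | (g_off & f_on)),
    }
    return table[op]
```

Any completely specified cover of \(h\) whose on-set contains \(h^{\it \,on}\) and avoids \(h^{\it \,off}\) then yields an exact bi-decomposition \(f = g \ {\texttt {op}}\ h\) on the specified minterms of \(f\).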

4 A Discussion on the Bi-decomposed Forms

In this section we discuss several interesting aspects of approximation and bi-decomposed forms.
First, we notice that we cannot simply exploit a \(1 \rightarrow 0\) approximation to mimic exactly the minimization based on a \(0 \rightarrow 1\) approximation, and vice-versa, even if we use the negation of the considered Boolean function. We explain this point through a simple example. Consider, for instance, the Boolean function \(f\) depicted in Figure 7(a). The minimal SOP \(0 \rightarrow 1\) approximated form is shown in Figure 7(c), with an error at the point 0101; the corresponding algebraic SOP form is \(x_2 x_4\). If we negate \(f\), then we obtain the function \(\overline{f}\) depicted in Figure 7(b). If we minimize \(\overline{f}\) using a \(1 \rightarrow 0\) approximation, then we obtain the solution depicted in Figure 7(d), with an error at the point 1001 and algebraic SOP form \(\overline{x}_1 \overline{x}_3\). We can notice that the two solutions do not correspond. This means that we cannot exploit a \(1 \rightarrow 0\) approximation on \(\overline{f}\) to mimic exactly a minimization based on a \(0 \rightarrow 1\) approximation. This example shows that, even when \(\overline{f}\) is easy to obtain, one kind of approximation cannot simply be implemented with the other one. Of course, we could consider a different approximation (for example, the one given by Figure 7(d)) and then complement the given solution. This is possible only in the cases where the computation of the complemented function \(\overline{f}\) is easy. Moreover, the resulting solution for \(f\) is given by the negation of the final approximated circuit (of \(\overline{f}\)), resulting in an additional level of logic. For this reason, in this article we always distinguish the cases where a \(0 \rightarrow 1\) approximation or a \(1 \rightarrow 0\) approximation is required.
Fig. 7. \(0 \rightarrow 1\) approximation of \(f\) and \(1 \rightarrow 0\) approximation of \(\overline{f}\).
Another aspect of the proposed technique worth highlighting is the number of don’t cares that are generated to enhance the exact minimization (i.e., \(h^{dc}\)). This is the key idea of the proposed method. In fact, the higher the number of don’t cares, the higher the flexibility in the synthesis. For this reason, we notice that this method is not very effective for the XOR and XNOR operators. In fact, in Table 2 we can observe that the don’t care set of \(h\) (i.e., the column labeled with \(h^{dc}\)) in all cases but XOR and XNOR contains the union of the don’t care set \(f^{dc}\) of the given function \(f\) and of the on-set or off-set of the approximation function \(g\).

5 Approximation Techniques for SOP Synthesis

The proposed bi-decomposition method is very general and can be applied to any approximation technique. As a case study in this article, we consider the standard synthesis of Boolean functions in two-level Sum of Products or SOP forms, to derive bi-decompositions \(f = g \ {\texttt {op}}\ h\), where both the approximation \(g\) and the quotient function \(h\) are represented in SOP forms.
In the previous section, we observed that bi-decompositions based on the binary AND and OR operators generate more don’t cares than bi-decompositions based on EXOR, and therefore might provide more interesting results.
In the literature, we found many techniques for over-approximation, i.e., for a \(0 \rightarrow 1\) approximation, which can be used for the AND-based bi-decomposition, while we did not find general heuristics for \(1 \rightarrow 0\) approximation, required to implement the bi-decomposition based on the OR function. As pointed out in Reference [39], a possible reason might be that, in the context of approximate SOP synthesis, approximate functions obtained by complementing a minterm in the off-set (i.e., by a \(0 \rightarrow 1\) complement) usually contain the smallest number of literals. Indeed, with one \(1 \rightarrow 0\) complement, we can remove at most one product in the original SOP cover. Instead, a \(0 \rightarrow 1\) complement can expand many products in the original cover, and in turn, expanded products may entirely cover other products that become redundant and hence can be removed from the SOP cover for the approximate function. For these reasons, we have chosen to implement a \(0 \rightarrow 1\) approximation heuristic inspired by References [5, 39] to derive and experimentally validate the bi-decomposition \(f = g \cdot h\) based on the AND operation.
The heuristic proposed in Reference [39] identifies minterms that can be complemented to maximally reduce the number of literals in the final SOP form, for a given error rate threshold \(r\), defined as the percentage of input vectors on which the output computed by the circuit can be different from the exact one. In particular, for an error rate \(r\) and a target function \(f\) depending on \(n\) binary variables, let \(M_t = 2^n \times r/100\) denote the number of minterms on which the output computed by the circuit can be different, on at least one bit, from the exact one. The heuristic exploits a so-called Assisting Expansion procedure, which consists in the removal of a single literal from a prime implicant in an initial SOP representation \(S\) of the target function \(f\), and optimizes the resulting representation by means of redundancy removal techniques and heuristic minimizers. If the expansion due to the literal removal implies the complementation of a number of minterms greater than the chosen error threshold \(M_t\), then the expansion is discarded; otherwise, the generated product is subjected to further evaluation and possibly added to the new approximated SOP form. Note that since this heuristic involves only the removal of literals from the implicants, the only modified minterms are part of the off-set \(f^{{\it \,off}}\) of the function, so that the heuristic produces a \(0 \rightarrow 1\) approximation of \(f\).
We have implemented the Assisting Expansion heuristic of Reference [39] in the following way (a code sketch is given after the description): Let \(S\) be an initial SOP representation of a function \(f\), and let \(P\) be the set of all products in \(S\).
For each product \(p_i \in P\), \(p_i = e_{i,1} \cdot e_{i,2} \cdots e_{i,k_i}\), we generate all expansions \(p^j_i =e_{i,1} \cdots e_{i,j-1} \cdot e_{i,j+1}\cdots e_{i,k_i}\) by removing a single literal \(e_{i,j}\), for all \(1 \le j \le k_i\).
The cardinality of the intersection between \(p^j_i\) and \(f^{{\it \,off}}\) gives the number of minterms that must be moved from \(f^{{\it \,off}}\) to \(f^{{\it \,on}}\), which represents the cost \(c_i^j\) of the expansion.
If \(c^j_i\) is less than or equal to the threshold \(M_t\), then the product \(p^j_i\) is inserted in a maximum priority queue, with weight \(w_i^j/c^j_i\), where \(w_i^j\) is the number of products in the initial SOP \(S\) that are covered by the new expanded product (gain of the expansion): \(w_i^{\,j} = |\lbrace p_k \in S \ |\ p_k \subset p_i^{\,j} \rbrace |\,.\) The priority defined in this way allows us to determine the expansions that cover the greatest number of products by complementing the smallest number of minterms.
If the cost exceeds the threshold \(M_t\), then the expanded product \(p_i^{\,j}\) is discarded.
This procedure is repeated for all products in \(P\). Once all products have been expanded in all possible directions, we must select the subset \(I\) of expanded products that maximizes the overall gain within the minterm threshold \(M_t\). This final step essentially consists in solving an instance of the well-known Knapsack problem [20], which is NP-hard. For this reason, we perform the step heuristically, by means of a greedy selection of the expanded products, as done for three-level forms in Reference [5]. More precisely, all products are extracted from the priority queue and then selected until the number of 0 to 1 complements introduced in the approximate version of \(f\) reaches the threshold \(M_t\). Whenever an expanded product \(p_i^{\,j}\) has been selected and inserted in \(I\), all other expansions of \(p_i\), i.e., all \(p_i^k\) with \(k \ne j\), are discarded from the priority queue. The reason is that choosing another expansion \(p_i^k\) of the same product \(p_i\) is never convenient, as it would introduce one more product in the final algebraic form.
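A compact sketch of the expansion-and-selection phase just described is given below (an illustration under simplifying assumptions: products are dictionaries mapping a variable index to its required value, \(f^{\it \,off}\) is an explicit set of minterms, the further evaluation of candidate products mentioned above is omitted, and all function names are illustrative).

```python
import heapq

def covers(cube, minterm):
    # A cube covers a minterm if every literal of the cube agrees with it.
    return all(minterm[v] == val for v, val in cube.items())

def cube_covers_cube(big, small):
    # Every minterm of `small` lies in `big` iff big's literals are a subset of small's.
    return all(item in small.items() for item in big.items())

def select_expansions(products, f_off, n_vars, error_rate):
    M_t = int((2 ** n_vars) * error_rate / 100)               # minterm budget
    heap = []
    for i, p in enumerate(products):
        for v in p:                                           # drop one literal at a time
            expanded = {u: b for u, b in p.items() if u != v}
            errors = frozenset(m for m in f_off if covers(expanded, m))
            if len(errors) > M_t:                             # expansion discarded
                continue
            gain = sum(cube_covers_cube(expanded, q) for q in products)
            priority = gain / len(errors) if errors else float("inf")
            heapq.heappush(heap, (-priority, i, tuple(sorted(expanded.items())), errors))
    chosen, done, complemented = [], set(), set()
    while heap:                                               # greedy selection
        _, i, exp_items, errors = heapq.heappop(heap)
        if i in done:                                         # one expansion per product
            continue
        if len(complemented | errors) > M_t:                  # would exceed the error budget
            continue
        chosen.append(dict(exp_items))
        done.add(i)
        complemented |= errors
    return chosen, complemented
```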
Once the set \(I\) has been determined, it is possible to proceed with the construction of the final SOP representation in the following way (see the sketch after the list):
(1)
Copy all products in the original SOP \(S\) into a new set \(P^{\prime }\)
(2)
Remove all products in \(P^{\prime }\) covered by the individual expanded products in \(I\)
(3)
Insert all products in \(I\) in \(P^{\prime }\)
(4)
Remove from \(P^{\prime }\) all redundant products, i.e., all products covered by the union of other products in \(P^{\prime }\)
(5)
Return the SOP \(S^{\prime }\) obtained by minimizing the disjunction of all products left in \(P^{\prime }\).
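Continuing the sketch above (same cube representation, illustrative names), the reconstruction of the approximated SOP could look as follows; the final minimization of step (5), performed with Espresso in our flow, is left abstract.

```python
from itertools import product as bit_patterns

def covers(cube, minterm):
    return all(minterm[v] == val for v, val in cube.items())

def cube_covers_cube(big, small):
    return all(item in small.items() for item in big.items())

def cube_minterms(cube, n_vars):
    # Enumerate the minterms of a cube by assigning all values to its free variables.
    free = [v for v in range(n_vars) if v not in cube]
    for bits in bit_patterns((0, 1), repeat=len(free)):
        assignment = dict(cube)
        assignment.update(zip(free, bits))
        yield tuple(assignment[v] for v in range(n_vars))

def rebuild_cover(original_products, chosen_expansions, n_vars):
    cover = [q for q in original_products                     # steps (1)-(2)
             if not any(cube_covers_cube(e, q) for e in chosen_expansions)]
    cover.extend(chosen_expansions)                           # step (3)
    for q in list(cover):                                     # step (4): greedy redundancy removal
        others = [o for o in cover if o is not q]
        if others and all(any(covers(o, m) for o in others)
                          for m in cube_minterms(q, n_vars)):
            cover.remove(q)
    return cover    # step (5): the disjunction of these products is then minimized (e.g., Espresso)
```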
Observe that the SOP \(S^{\prime }\) satisfies the following property.
Proposition 5.1.
Let \(S\) be an SOP form, and let \(L_S\) and \(P_S\) denote the number of literals and the number of products in \(S\). Let \(S^{\prime }\) be the approximated SOP derived from \(S\) applying the Assisting Expansion heuristic. Then \(L_{S^{\prime }} \le L_S \quad\ \text{and }\ \quad P_{S^{\prime }} \le P_S \,.\) Moreover, if the number of complemented minterms is greater than 0, then \(L_{S^{\prime }} \lt L_S\).
Proof.
The claim follows since the Assisting Expansion guarantees that for each cube at most one expansion is added and that the original cube is then removed, since it is covered by the expanded one. □

6 Experimental Results

In this section we report the experiments conducted to evaluate bi-decomposition by the approximation strategy discussed in the previous sections. We consider completely specified and incompletely specified benchmarks in PLA form (from the Espresso and LGSynth’89 benchmark suites [47]), and we study the performance of the proposed approach by comparing the size (in terms of number of literals) of the SOP form of a given benchmark \(f\) vs. the size of the AND-based bi-decomposition form \(f = g \cdot h\), where \(g\) and \(h\) are represented as SOPs. The choice of limiting the analysis to the AND-based bi-decomposition is due to the following facts:
First, OR-based bi-decompositions require \(1 \rightarrow 0\) approximation heuristics and we did not find in the literature techniques for deriving these under-approximations. As pointed out in Reference [39], and discussed in Section 5, a possible reason might be that in the context of approximate SOP synthesis, over-approximated functions usually contain the smallest number of literals. Indeed, a \(0 \rightarrow 1\) complement can expand many products in the original cover, and in turn, expanded products may entirely cover other products that can be removed from the SOP cover for the approximate function. Instead, a \(1 \rightarrow 0\) complement can remove at most one product in the original SOP cover.
Second, XOR-based bi-decompositions do not generate new don’t cares. In fact, in Table 2 we can observe that the don’t care set of \(h\) for the XOR and XNOR operators coincides with the original don’t care set \(f^{dc}\) of the given function \(f\). This represents a serious limitation of the proposed approach, whose key idea is that of exploiting the flexibility due to the insertion of new don’t care conditions in the synthesis process. Thus, we expect XOR-based bi-decompositions to be less effective than other functional decompositions based on the XOR operator (see Section 2.2 for some literature).
To derive the over-approximation \(g\), we have implemented the heuristic described in Section 5 in C, using the CUDD library to represent and manipulate Boolean functions with BDDs, e.g., to compute the unions and intersections between \(f\) and \(g\) that define the quotient \(h\). Sum-of-Products forms have been minimized using the standard SOP minimizer Espresso [27]. The experiments have been run on a Linux machine with an Intel Core i5-8250U CPU and 8 GB of RAM.
Finally, the choice of benchmarks for our experiments is due to the fact that we consider SOP expressions to optimize with tools such as Espresso, which require inputs represented in PLA form. The benchmarks available in other standard sets (such as EPFL benchmark suite [41, 42]) are, unfortunately, not given in PLA form. We further discuss this point in the concluding section.
In the experiments, the bi-decomposition of \(f\) follows Algorithm 1 and consists of the following steps (a sketch of this flow is given after the list):
(1)
the over-approximation \(g\) of \(f\) is computed following the strategy described in Section 5 for a given error rate \(r\), then \(g\) is minimized in SOP form, using Espresso [27] in heuristic mode;
(2)
the on-set and dc-set of \(h\) are computed by the formulas in Table 2 for the AND function, with OBDD operations;
(3)
\(h\) is minimized using Espresso [27] in heuristic mode;
(4)
the bi-decomposition of \(f\) is computed as the AND of the two SOPs for \(g\) and \(h\).
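Under the same simplifying assumptions as the sketches of Section 5, the overall flow of Algorithm 1 can be summarized by the following skeleton; `approximate_0_to_1` and `espresso_minimize` are placeholders for the approximation heuristic of Section 5 and for the external minimizer, not actual CUDD or Espresso calls.

```python
from itertools import product

# Skeleton of Algorithm 1 for op = AND; helper names are placeholders.

def and_bidecomposition(f_on, f_dc, n_vars, error_rate,
                        approximate_0_to_1, espresso_minimize):
    universe = set(product((0, 1), repeat=n_vars))
    f_off = universe - f_on - f_dc
    # Step (1): over-approximate f within the error-rate budget, then minimize g.
    g_on = approximate_0_to_1(f_on, f_off, n_vars, error_rate)
    g_sop = espresso_minimize(on_set=g_on, dc_set=set())
    # Step (2): full quotient for op = AND (Table 2): h_on = f_on, h_dc = g_off U f_dc.
    h_on = set(f_on)
    h_dc = (universe - g_on) | f_dc
    # Step (3): minimize h exploiting its dc-set.
    h_sop = espresso_minimize(on_set=h_on, dc_set=h_dc)
    # Step (4): the bi-decomposed form is the AND of the two SOPs.
    return g_sop, h_sop       # f = g_sop . h_sop
```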
Table 3 reports the results of the experiments on multi-output benchmarks (we report only a significant subset of the benchmarks as representative indicators of our experiments). We consider different error rates for the construction of \(g\), ranging from 1% to 20%. The first two columns report the name of the benchmarks, the number of their inputs and outputs, and the number of literals in their initial SOP representation (minimized heuristically by Espresso). The remaining groups of two columns show the overall number of literals in the bi-decomposed forms \(g \cdot h\) obtained for each benchmark \(f\), for seven different error rates allowed in the approximation \(g\), together with the percentage gain in the number of literals. Whenever the bi-decomposition method produces forms with a higher number of literals, the benchmark is left unchanged (i.e., we set \(g=f\) and \(h=1\)), and the gain is set to 0. The last two rows of the table report the average gain for the whole set of benchmarks, and the average gain computed only on the subset of benchmarks that benefit from the bi-decomposition, i.e., which present a strictly positive gain. For each benchmark, the best result in Table 3 corresponds to the error rate with the highest gain in its row.
 |  | r = 1% | r = 2% | r = 5% | r = 10% | r = 12% | r = 15% | r = 20%
Bench (in/out) | # L | # L / gain | # L / gain | # L / gain | # L / gain | # L / gain | # L / gain | # L / gain
add6 (12/7) | 2196 | 1790 / 18.49 | 1709 / 22.18 | 1887 / 14.07 | 1676 / 23.68 | 1737 / 20.90 | 1479 / 32.65 | 1525 / 30.56
addm4 (9/8) | 1531 | 1430 / 6.60 | 1361 / 11.10 | 1377 / 10.06 | 1364 / 10.91 | 1472 / 3.85 | 1531 / 0.00 | 1462 / 4.51
adr4 (8/5) | 340 | 320 / 5.88 | 291 / 14.41 | 272 / 20.00 | 302 / 11.18 | 312 / 8.24 | 336 / 1.18 | 326 / 4.12
alcom (15/38) | 211 | 211 / 0.00 | 211 / 0.00 | 194 / 8.06 | 198 / 6.16 | 199 / 5.69 | 191 / 9.48 | 211 / 0.00
alu1 (12/8) | 41 | 41 / 0.00 | 41 / 0.00 | 41 / 0.00 | 41 / 0.00 | 41 / 0.00 | 41 / 0.00 | 41 / 0.00
bcc (26/45) | 8705 | 8705 / 0.00 | 8705 / 0.00 | 8705 / 0.00 | 8705 / 0.00 | 8705 / 0.00 | 8705 / 0.00 | 8705 / 0.00
ex7 (16/5) | 754 | 654 / 13.26 | 637 / 15.52 | 493 / 34.62 | 472 / 37.40 | 464 / 38.46 | 509 / 32.49 | 541 / 28.25
mainpla (27/54) | 87397 | 78586 / 10.08 | 80759 / 7.60 | 84035 / 3.85 | 84352 / 3.48 | 85332 / 2.36 | 87397 / 0.00 | 87397 / 0.00
max1024 (10/6) | 2669 | 2474 / 7.31 | 2427 / 9.07 | 2236 / 16.22 | 2293 / 14.09 | 2299 / 13.86 | 2370 / 11.20 | 2530 / 5.21
misg (56/23) | 180 | 159 / 11.67 | 134 / 25.56 | 180 / 0.00 | 180 / 0.00 | 180 / 0.00 | 180 / 0.00 | 180 / 0.00
newapla1 (12/7) | 78 | 63 / 19.23 | 60 / 23.08 | 68 / 12.82 | 78 / 0.00 | 78 / 0.00 | 78 / 0.00 | 78 / 0.00
pdc (16/40) | 3466 | 2387 / 31.13 | 2434 / 29.77 | 2282 / 34.16 | 2223 / 35.86 | 2201 / 36.50 | 2016 / 41.83 | 2110 / 39.12
t1 (21/23) | 731 | 731 / 0.00 | 731 / 0.00 | 731 / 0.00 | 731 / 0.00 | 703 / 3.83 | 710 / 2.87 | 731 / 0.00
t2 (17/16) | 415 | 415 / 0.00 | 415 / 0.00 | 415 / 0.00 | 398 / 4.10 | 396 / 4.58 | 403 / 2.89 | 399 / 3.86
t4 (12/8) | 108 | 108 / 0.00 | 108 / 0.00 | 108 / 0.00 | 102 / 5.56 | 108 / 0.00 | 108 / 0.00 | 108 / 0.00
tms (8/16) | 1804 | 1249 / 30.76 | 1311 / 27.33 | 1352 / 25.06 | 1476 / 18.18 | 1647 / 8.70 | 1409 / 21.90 | 1465 / 18.79
xparc (41/73) | 49901 | 49530 / 0.74 | 49901 / 0.00 | 49901 / 0.00 | 49901 / 0.00 | 49630 / 0.54 | 48630 / 2.55 | 46729 / 6.36
Average benchmark suite |  | 4.95 | 5.87 | 6.80 | 7.43 | 7.14 | 7.30 | 6.90
Average benchmark suite (\(gain \gt 0\)) |  | 9.31 | 10.88 | 14.78 | 15.36 | 14.27 | 14.59 | 15.26
Table 3. Experimental Evaluation of the AND Bi-decomposition for Multi-output Benchmarks, with Different Error Rates for the Approximation \(g\)
The average results show how the bi-decomposed forms provide gains of about 5–7% in the number of literals, if we consider the whole set of benchmarks. However, considering only benchmarks that benefit from the bi-decomposition approach, which are about 75% of the total, we can observe how the average gain increases up to 15.36%. In both cases, the gains initially grow as the error rate \(r\) increases, and then stabilize when \(r\) exceeds \(5\%\). A possible explanation for this trend could be that, when the error rate \(r\) increases, we obtain a more compact representation for the approximation \(g\), but the flexibility in the implementation of the quotient function \(h\) decreases, so that the size of \(h\) increases. Therefore, these two trends offset each other and the gains stabilize.
Regarding the overall CPU time required for the synthesis of bi-decomposed forms, which includes the computation of both the approximation function \(g\) and the quotient function \(h\), we have noticed from our experiments that the running times do not exhibit significant variations as the error rate changes. The average CPU time is quite short, about 0.18–0.20 s, for all the different values of the error rate \(r\) chosen for the construction of \(g\), with variances ranging from 0.8 to 1.14 s, and maximum values of about 10 seconds for benchmarks with a high number of inputs and outputs.
Table 4 shows the average results obtained studying each benchmark’s output independently of the others, as if it were a single-output function. We notice that in this case the gain is higher for low error rates (\(r = 1\%\)–\(2\%\)); it then decreases and stabilizes at about 5%. On the contrary, if we consider only the outputs presenting a positive gain, the average gain increases with the error rate and is maximum for \(r = 20\%\), reaching 17.17%.
Average gain (%) | r = 1% | r = 2% | r = 5% | r = 10% | r = 12% | r = 15% | r = 20%
Average benchmark suite | 5.52 | 6.35 | 5.76 | 5.01 | 4.87 | 4.90 | 4.56
Average benchmark suite (gain > 0) | 15.13 | 15.92 | 16.20 | 16.76 | 16.81 | 17.02 | 17.17
Table 4. Average Gains for the AND Bi-decomposition of Single Benchmark Outputs with Different Error Rates for the Approximation \(g\)
Figure 8 shows how the gain varies as the error rate increases, for both sets of experiments, on multi-output and single-output functions. The plot on the left reports the average gains over the entire set of benchmarks, while the plot on the right considers only the average gains computed on the subset of benchmarks that benefit from the bi-decomposition. When the outputs of the benchmarks are studied independently, we obtain slightly higher gains if we consider only outputs with a positive gain. Conversely, the average gains computed over all benchmarks are generally higher for multi-output bi-decompositions.
Fig. 8. Variation of the average gain as the error rate increases from 1% to 20%, for multi-output and single-output functions: (a) average gain for all benchmarks and (b) average gain for benchmarks showing a positive gain.
In Table 5, we report the average gains obtained by choosing, for each benchmark or benchmark output, the most convenient bi-decomposed form, i.e., the form obtained with the error rate that provides the highest gain for that particular benchmark, including the trivial bi-decomposition where \(g = f\) and \(h = 1\). The first row of the table reports the averages obtained over all benchmarks, considering multi-output functions as a whole; the averages in the second row are obtained considering each output function independently. The second column of the table reports the averages computed considering only bi-decompositions with a strictly positive gain. The first column (overall average) decreases from the first row to the second, whereas the second column (average over positive gains) increases; this is likely because, in the single-output case, there are more instances where the algorithm chooses the trivial decomposition (\(g = f\), \(h = 1\)) with \(gain = 0\). These results suggest that applying the algorithm with multiple error rates is generally more convenient, as expected.
 | Average | Average (gain > 0)
Multi-output | 12.13 | 16.26
Single-output | 10.54 | 19.25
Table 5. Average Gains Obtained Choosing for Each Benchmark, or Benchmark Output, the Most Convenient Bi-decomposed Form (i.e., the Form with the Least Number of Literals among Those Obtained with Different Values of the Error Rate)
As a last test, considering only multi-output bi-decomposition, we chose for each output of a benchmark the bi-decomposed form with the smallest number of literals, independently of the error rate, and then merged these forms back together to obtain a bi-decomposed multi-output representation of the whole benchmark, equivalent to the original one. A subset of the results of this last experimental evaluation is reported in Table 6. The first two columns report the name of each benchmark, its numbers of inputs and outputs, and the number of literals in its initial SOP representation (minimized heuristically). The next two columns show the overall number of literals in the final bi-decomposed forms \(f = g \cdot h\) obtained by choosing the best bi-decomposed form for each benchmark output, together with the overall percentage gain in the number of literals. The last two rows report the average gain over the whole set of benchmarks, and the average gain computed only on the subset of benchmarks that benefit from the bi-decomposition. In this test, the average gain over all 124 benchmarks studied is about 27%; moreover, considering only the 99 benchmarks showing a positive gain, the average reduction in the number of literals increases to about 34%. Thus, the size reduction is substantial, especially considering that the overall synthesis time is very short.
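A minimal Python sketch of this per-output selection is shown below; the function name and data layout are hypothetical, and the literal counts in the example are invented for illustration only.

def best_forms_per_output(literal_counts):
    """Pick, for each output, the bi-decomposed form with the fewest literals.

    literal_counts maps each output name to a dictionary
    {error_rate_or_'trivial': literal count of the corresponding form},
    where 'trivial' stands for the unchanged form g = f, h = 1.
    Returns the chosen (error rate, literal count) per output and the total
    literal count of the merged multi-output representation."""
    chosen, total = {}, 0
    for out, forms in literal_counts.items():
        rate, count = min(forms.items(), key=lambda item: item[1])
        chosen[out] = (rate, count)
        total += count
    return chosen, total

# Hypothetical two-output benchmark: o1 improves most at r = 5%, o2 never
# improves, so its trivial form is kept.
counts = {
    "o1": {"trivial": 120, 0.01: 110, 0.05: 95, 0.20: 130},
    "o2": {"trivial": 40, 0.01: 48, 0.05: 52, 0.20: 60},
}
print(best_forms_per_output(counts))
# -> ({'o1': (0.05, 95), 'o2': ('trivial', 40)}, 135)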
Benchmark (in/out) | # L (SOP) | # L (bi-dec.) | % gain | # L [2, 3] | % gain w.r.t. [2, 3]
al2 (16/47) | 545 | 306 | 43.85 | 399 | 23.31
alcom (15/38) | 211 | 176 | 16.59 | 256 | 31.25
amd (14/24) | 1,521 | 875 | 42.47 | 1,018 | 14.05
b2 (16/17) | 8,749 | 5,152 | 41.11 | 5,902 | 12.71
b3 (32/20) | 4,003 | 2,771 | 30.78 | 3,241 | 14.50
b4 (33/23) | 832 | 703 | 15.50 | 885 | 20.56
dk17 (10/11) | 197 | 197 | 0.00 | 145 | -35.86
dk48 (15/17) | 127 | 127 | 0.00 | 123 | -3.25
ex7 (16/5) | 754 | 430 | 42.97 | 709 | 39.35
exep (30/63) | 1,175 | 1,175 | 0.00 | 1,282 | 8.35
in3 (35/29) | 1,815 | 1,091 | 39.89 | 1,422 | 23.28
in4 (32/20) | 4,277 | 2,919 | 31.75 | 3,377 | 13.56
in5 (24/14) | 1,952 | 1,336 | 31.56 | 1,412 | 5.38
jbp (36/57) | 1,536 | 1,006 | 34.51 | 1,231 | 18.28
mainpla (27/54) | 87,397 | 23,797 | 72.77 | 39,287 | 39.43
misg (56/23) | 180 | 134 | 25.56 | 145 | 7.59
mp2d (14/14) | 245 | 194 | 20.82 | 362 | 46.41
rckl (32/7) | 1,896 | 960 | 49.37 | 758 | -26.65
shift (19/16) | 399 | 398 | 0.25 | 431 | 7.66
spla (16/46) | 7,939 | 4,201 | 47.08 | 3,592 | -19.04
t1 (21/23) | 731 | 477 | 34.75 | 526 | 9.32
t2 (17/16) | 415 | 359 | 13.49 | 360 | 0.28
t3 (12/8) | 217 | 175 | 19.35 | 167 | -4.79
t4 (12/8) | 108 | 108 | 0.00 | 98 | -10.20
tial (14/8) | 5,218 | 4,502 | 13.72 | 3,617 | -24.47
vg2 (25/8) | 804 | 665 | 17.29 | 1,272 | 47.72
vtx1 (27/6) | 964 | 835 | 13.38 | 940 | 11.17
x1dn (27/6) | 398 | 398 | 0.00 | 432 | 7.87
x6dn (39/5) | 1,388 | 1,066 | 23.20 | 1,281 | 16.78
x9dn (27/7) | 1,138 | 1,009 | 11.34 | 1,336 | 24.48
xparc (41/73) | 49,901 | 20,775 | 58.37 | 18,682 | -11.20
Average | | | 27.31 | | 14.47
Average (gain > 0) | | | 34.13 | | 27.08
Table 6. Experimental Evaluation of the Multi-output Bi-decomposition Obtained Choosing for Each Benchmark’s Output the Form with the Smallest Number of Literals among Those Obtained for Different Values of the Error Rate and Comparison with the AND-based Decomposition Approach Discussed in References [2, 3]
The last two columns of Table 6 also provide a comparison with the bi-decomposition approach discussed in References [2, 3], which introduces a bounded-level form called complemented circuits. A complemented circuit is a special type of decomposition that represents a Boolean function \(f\) as \(\overline{f}_0 \ op \ f_1\), where \(op\) is a two-input gate, and \(f_0\) and \(f_1\) are related to the off-set and on-set of \(f\). In particular, we consider the complemented circuits defined in terms of the AND gate, heuristically minimized using Boolean relations [1]. Although in the original papers [2, 3] the cost of complemented circuits is measured in terms of the number of products, for the current comparison we refer to the number of literals, which we believe is a more accurate cost metric. Thus, we report the number of literals of the AND-based complemented circuits in the next-to-last column, and the percentage gain (or loss) in the number of literals provided by AND-based bi-decompositions compared to complemented circuits in the last column.
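For clarity, the percentage figures in the last column appear consistent with the same relative-reduction metric used in the rest of the table: denoting by \(L_{cc}\) the literal count of the AND-based complemented circuit and by \(L_{bd}\) that of the bi-decomposed form, the reported value is

\[ \mathit{gain}_{[2,3]} = 100 \cdot \frac{L_{cc} - L_{bd}}{L_{cc}}, \]

so that negative entries mark the benchmarks for which the complemented circuit remains smaller.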
The results show that AND-based bi-decomposed forms are in general more compact than complemented circuits, with some exceptions (see, for example, \(dk17\) and \(tial\)). In summary, the average gain over all the benchmarks in this comparison is about 14%; moreover, considering only the benchmarks showing a positive gain, the average reduction in the number of literals guaranteed by the new approach increases to 27%.
Finally, we refer the reader to the conference version of this article [6] for the experimental evaluation of a similar approach in the framework of three-level EXOR-AND-OR forms (i.e., SPP expressions).

7 Conclusions

We presented an approach to the bi-decomposition of a function \(f = g \ {\texttt {op}}\ h\), where the divisor \(g\) is selected as an approximation of \(f\) (\(0 \rightarrow 1\) or \(1 \rightarrow 0\) or both, according to the type of \({\tt op}\)). Then, given a selection of the divisor \(g\), we provided the full flexibility of the quotient \(h\), for each operator.
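For the AND case with a \(0 \rightarrow 1\) approximation, this flexibility follows directly from \(f = g \cdot h\) and \(f \subseteq g\): the quotient \(h\) must be 1 on the on-set of \(f\), must be 0 where \(g = 1\) and \(f = 0\), and is a don't care where \(g = 0\). The truth-table-level Python sketch below is only an illustration of this characterization (the function names and the naive completion of the quotient are ours; in an actual flow the incompletely specified \(h\) would be minimized, e.g., by a two-level minimizer).

from itertools import product

def and_quotient_flexibility(f, g, n):
    """Characterize the incompletely specified quotient h for f = g AND h.

    f, g : Boolean functions given as callables over n binary inputs,
           with g an over-approximation of f (g(x) = 1 whenever f(x) = 1).
    Returns the ON-, OFF-, and DC-sets of h: any completion of the DC points
    yields a quotient h with g AND h = f."""
    on, off, dc = set(), set(), set()
    for x in product((0, 1), repeat=n):
        if f(*x):
            on.add(x)       # g(x) = 1 here, so h must be 1
        elif g(*x):
            off.add(x)      # g(x) = 1 but f(x) = 0, so h must be 0
        else:
            dc.add(x)       # g(x) = 0 masks h: free for minimization
    return on, off, dc

# Tiny hypothetical example: f = x1*x2, over-approximated by g = x1 (0 -> 1 errors).
f = lambda x1, x2: x1 and x2
g = lambda x1, x2: x1
on, off, dc = and_quotient_flexibility(f, g, 2)

# Any completion works; take the crudest one (1 on the ON-set, 0 elsewhere)
# and check f = g * h exhaustively.
h = lambda x1, x2: (x1, x2) in on
assert all(bool(g(*x) and h(*x)) == bool(f(*x)) for x in product((0, 1), repeat=2))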
Bi-decomposition can be applied to area-driven or delay-driven optimization flows. In the case of area, we can target exact or approximate optimizations. In Reference [6] we reported an application of this approach to EXOR-AND-OR forms, for the two operators \({\tt op} =\) AND and \({\tt op} = \not\Rightarrow\). In this article we applied area-driven exact bi-decomposition to SOP forms for the operator \({\tt op} =\) AND with a \(0 \rightarrow 1\) approximation. The experiments show significant gains in literals.
Future work includes, but is not limited to, the following:
Design approximation schemes for the \(1 \rightarrow 0\) and \(0 \leftrightarrow 1\) cases.
For a given logic form, consider exact bi-decompositions with respect to all binary operators and all approximations: \(0 \rightarrow 1\), \(1 \rightarrow 0\), and \(0 \leftrightarrow 1\).
Explore more logic forms, besides EXOR-AND-OR and AND-OR.
Explore approximate bi-decompositions, in which \(h\) is approximate too, i.e., it corrects only some of the errors introduced by the approximate divisor \(g\).
Study new methods for deriving bi-decomposed expressions starting from other representations of the target functions, as for instance AIGs. This would allow us to run experiments on benchmarks from other sets, such as the EPFL benchmark suite [41, 42].
Investigate bi-decompositions that derive the closest regular approximation of a given Boolean function (see References [7, 8]), where the quotient \(h\) must correct the large number of errors introduced by the divisor \(g\) when targeting regularity.
Consider cost metrics other than the number of literals, for instance area after technology mapping for FPGAs/ASICs, to evaluate the area reduction obtained through bi-decomposition.
Finally, we note that in the literature on approximate synthesis it is common to find \(0 \rightarrow 1\) over-approximations but not \(1 \rightarrow 0\) under-approximations, since the latter produce worse results than the former (see Reference [39]). In bi-decomposition, however, the quotient \(h\) can correct the errors introduced by the divisor, so even though \(1 \rightarrow 0\) under-approximations may not be helpful for approximation alone, they can still be advantageous for exact or approximate bi-decompositions.
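To make this point explicit with a hedged illustration (our own derivation by the same reasoning as in the AND case, not a statement quoted from Reference [39]): for an OR bi-decomposition \(f = g + h\) with an under-approximating divisor \(g \subseteq f\), the quotient only has to cover the minterms dropped by the approximation, namely

\[ h = 1 \ \text{on } f \cdot \overline{g}, \qquad h = 0 \ \text{on } \overline{f}, \qquad h \ \text{is a don't care on } g . \]

Hence every \(1 \rightarrow 0\) error introduced by \(g\) is repaired by an appropriate completion of \(h\).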

References

[1]
David Bañeres, Jordi Cortadella, and Michael Kishinevsky. 2009. A recursive paradigm to solve boolean relations. IEEE Trans. Comput. 58, 4 (2009), 512–527.
[2]
Anna Bernasconi, Robert K. Brayton, Valentina Ciriani, Gabriella Trucco, and Tiziano Villa. 2015. Bi-decomposition using boolean relations. In Proceedings of the Euromicro Conference on Digital System Design, DSD. IEEE Computer Society, 72–78.
[3]
Anna Bernasconi, Robert K. Brayton, Valentina Ciriani, Gabriella Trucco, and Tiziano Villa. 2018. Synthesis of complemented circuits. In Further Improvements in the Boolean Domain. Cambridge Scholars Publishing, 214–239.
[4]
Anna Bernasconi and Valentina Ciriani. 2014. 2-SPP approximate synthesis for error tolerant applications. In Proceedings of the 17th Euromicro Conference on Digital System Design (DSD’14). 411–418.
[5]
Anna Bernasconi and Valentina Ciriani. 2014. 2-SPP approximate synthesis for error tolerant applications. In Proceedings of the 17th Euromicro Conference on Digital System Design (DSD’14). 411–418.
[6]
Anna Bernasconi, Valentina Ciriani, Jordi Cortadella, and Tiziano Villa. 2020. Computing the full quotient in bi-decomposition by approximation. In Proceedings of the Design, Automation & Test in Europe Conference & Exhibition (DATE’20). IEEE, 580–585.
[7]
Anna Bernasconi, Valentina Ciriani, and Tiziano Villa. 2019. Approximate logic synthesis by symmetrization. In Proceedings of the Design, Automation & Test in Europe Conference & Exhibition (DATE’19). 1655–1660.
[8]
Anna Bernasconi, Valentina Ciriani, and Tiziano Villa. 2022. Exploiting symmetrization and D-Reducibility for approximate logic synthesis. IEEE Trans. Comput. 71, 1 (2022), 121–133.
[9]
Mihir Choudhury and Kartik Mohanram. 2010. Bi-decomposition of large Boolean functions using blocking edge graphs. In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD’10). 586–591.
[10]
Jordi Cortadella. 2003. Timing-driven logic bi-decomposition. IEEE Trans. CAD Integr. Circ. Syst. 22, 6 (2003), 675–685.
[11]
D. Debnath and T. Sasao. 1997. An optimization of AND-OR-EXOR three-level networks. In Proceedings of the Asia and South Pacific Design Automation Conference. 545–550.
[12]
D. Debnath and T. Sasao. 1997. Exclusive-OR of two sum-of-products expressions: Simplification and an upper bound on the number of products. In Proceedings of the 3rd International Workshop on the Applications of the Reed-Muller Expansion in Circuit Design. 45–60.
[13]
D. Debnath and T. Sasao. 1997. Minimization of AND-OR-EXOR three-level networks with AND gate sharing. IEICE Trans. Inf. Syst. E80-D, 10 (1997), 1001–1008.
[14]
D. Debnath and T. Sasao. 1998. A heuristic algorithm to design AND-OR-EXOR three-level networks. In Proceedings of the Asia and South Pacific Design Automation Conference. 69–74.
[15]
D. Debnath and T. Sasao. 1999. Multiple–valued minimization to optimize PLAs with output EXOR gates. In Proceedings of the IEEE International Symposium on Multiple-Valued Logic. 99–104.
[16]
E. Dubrova and P. Ellervee. 1999. A fast algorithm for three-level logic optimization. In Proceedings of the International Workshop on Logic Synthesis. 251–254.
[17]
E. Dubrova, D. Miller, and J. Muzio. 1995. Upper bounds on the number of products in AND-OR-XOR expansion of logic functions. Electr. Lett. 31, 7 (1995), 541–542.
[18]
E. Dubrova, D. Miller, and J. Muzio. 1997. AOXMIN: A three-level heuristic AND-OR-XOR minimizer for boolean functions. In Proceedings of the 3rd International Workshop on the Applications of the Reed-Muller Expansion in Circuit Design. 209–218.
[19]
E. Dubrova, D. Miller, and J. Muzio. 1999. AOXMIN-MV: A heuristic algorithm for AND-OR-XOR minimization. In Proceedings of the 4th International Workshop on the Applications of the Reed Muller Expansion in circuit Design. 37–54.
[20]
M. R. Garey and D. S. Johnson. 1979. Computer and Intractability: A Guide to the Theory of NP-completeness. W.H. Freeman & Company.
[21]
Soheil Hashemi, Hokchhay Tann, and Sherief Reda. 2019. Approximate logic synthesis using Boolean matrix factorization. In Approximate Circuits, Methodologies and CAD, Sherief Reda and Muhammad Shafique (Eds.). Springer, 141–154.
[22]
Victor N. Kravets and Alan Mishchenko. 2009. Sequential logic synthesis using symbolic bi-decomposition. In Proceedings of the Design, Automation and Test in Europe (DATE’09). IEEE, 1458–1463.
[23]
Yung-An Lai, Chia-Chun Lin, Chia-Cheng Wu, Yung-Chih Chen, and Chun-Yao Wang. 2018. Efficient synthesis of approximate threshold logic circuits with an error rate guarantee. In Proceedings of the Design, Automation & Test in Europe Conference & Exhibition (DATE’18). 773–778.
[24]
Ruei-Rung Lee, Jie-Hong R. Jiang, and Wei-Lun Hung. 2008. Bi-decomposing large boolean functions via interpolation and satisfiability solving. In Proceedings of the 45th Design Automation Conference (DAC’08). 636–641.
[25]
Lucas Machado and Jordi Cortadella. 2017. Boolean decomposition for AIG optimization. In Proceedings of the Great Lakes Symposium on VLSI. ACM, 143–148.
[26]
A. A. Malik, D. Harrison, and R. K. Brayton. 1991. Three-level decomposition with application to PLDs. In Proceedings of the IEEE International Conference on Computer Design: VLSI in Computer & Processors (ICCD’91). 628–633.
[27]
P. McGeer, J. Sanghavi, R. Brayton, and A. Sangiovanni-Vincentelli. 1993. Espresso-signature: A new exact minimizer for logic functions. IEEE Trans. VLSI 1, 4 (1993), 432–440.
[28]
Jin Miao, A. Gerstlauer, and M. Orshansky. 2013. Approximate logic synthesis under general error magnitude and frequency constraints. In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD’13). 779–786.
[29]
Jin Miao, A. Gerstlauer, and M. Orshansky. 2014. Multi-level approximate logic synthesis under general error constraints. In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD’14). 504–510.
[30]
A. Mishchenko, B. Steinbach, and M. Perkowski. 2001. An algorithm for bi-decomposition of logic functions. In Proceedings of the ACM/IEEE 38th Design Automation Conference (DAC’01). 103–108.
[31]
Ghasem Pasandi, Shahin Nazarian, and Massoud Pedram. 2019. Approximate logic synthesis: A reinforcement learning-based technology mapping approach. In Proceedings of the 20th International Symposium on Quality Electronic Design (ISQED’19). IEEE, 26–32.
[32]
Marek Perkowski. 1990. A program for exact synthesis of three-level NAND networks. In Proceedings of the IEEE International Symposium on Circuits and Systems. 1118–1121.
[33]
Marek Perkowski. 1995. A new representation of strongly unspecified switching functions and its application to multi-level AND/OR/EXOR synthesis. In Proceedings of the IFIP WG 10.5 Workshop on Applications of the Reed-Muller Expansion. 143–151.
[34]
T. Sasao. 1981. Multiple-Valued decomposition of generalized boolean functions and the complexity of programmable logic arrays. IEEE Trans. Comput. 30, 9 (1981), 635–643.
[35]
T. Sasao. 1995. A design method for AND-OR-EXOR three level networks. In Proceedings of the International Workshop on Logic Synthesis. 8:11–8:20.
[36]
T. Sasao. 1996. OR-AND-OR three-level networks. In Representation of Discrete Functions, T. Sasao and M. Fujita (Eds.). Kluwer Academic.
[37]
Tsutomu Sasao and Jon T. Butler. 1997. On bi-decompositions of logic functions. In Proceedings of the International Workshop on Logic Synthesis (IWLS’97).
[38]
Doochul Shin and S. K. Gupta. 2011. A new circuit simplification method for error tolerant applications. In Proceedings of the Design, Automation Test in Europe Conference Exhibition (DATE’11). 1–6.
[39]
Doochul Shin and Sandeep K. Gupta. 2010. Approximate logic synthesis for error tolerant applications. In Proceedings of the Design, Automation Test in Europe Conference Exhibition (DATE’10). 957–960.
[40]
Sanbao Su, Chen Zou, Weijiang Kong, Jie Han, and Weikang Qian. 2020. A novel heuristic search method for two-level approximate logic synthesis. IEEE Trans. Comput.-Aid. Des. Integr. Circ. Syst. 39, 3 (2020), 654–669.
[41]
Eleonora Testa, Mathias Soeken, Luca G. Amarù, and Giovanni De Micheli. 2019. Reducing the multiplicative complexity in logic networks for cryptography and security applications. In Proceedings of the 56th Annual Design Automation Conference (DAC’19). 74.
[42]
E. Testa, M. Soeken, H. Riener, L. Amaru, and G. D. Micheli. 2020. A logic synthesis toolbox for reducing the multiplicative complexity in logic networks. In Proceedings of the Design, Automation Test in Europe Conference Exhibition (DATE’20). 568–573.
[43]
Swagath Venkataramani, Kaushik Roy, and Anand Raghunathan. 2013. Substitute-and-simplify: A unified design paradigm for approximate and quality configurable circuits. In Proceedings of the Design, Automation and Test in Europe, (DATE’13). 1367–1372.
[44]
S. Venkataramani, A. Sabne, V. Kozhikkottu, K. Roy, and A. Raghunathan. 2012. SALSA: Systematic logic synthesis of approximate circuits. In Proceedings of the 49th ACM/EDAC/IEEE Design Automation Conference (DAC’12). 796–801.
[45]
Tiziano Villa, Robert K. Brayton, and Alberto L. Sangiovanni-Vincentelli. 2010. Synthesis of multi-level boolean networks. In Boolean Methods and Models in Mathematics, Computer Science and Engineering, Encyclopedia of Mathematics and its Applications 134, Yves Crama and Peter L. Hammer (Eds.). Cambridge University Press, 675–722.
[46]
S. Yamashita, H. Sawada, and A. Nagoya. 1998. New methods to find optimal non-disjoint bi-decompositions. In Proceedings of the Asia and South Pacific Design Automation Conference. 59–68.
[47]
S. Yang. 1991. Logic Synthesis and Optimization Benchmarks User Guide Version 3.0. User Guide. Microelectronics Center of North Carolina (MCNC).
