Abstract
Dodis, Kalai and Lovett (STOC 2009) initiated the study of the Learning Parity with Noise (LPN) problem with (static) exponentially hard-to-invert auxiliary input. In particular, they showed that under a new assumption (called Learning Subspace with Noise) the above is quasi-polynomially hard in the high (polynomially close to uniform) noise regime.
Inspired by the “sampling from subspace” technique by Yu (eprint 2009/467) and Goldwasser et al. (ITCS 2010), we show that standard LPN can work in a mode (reducible to itself) where the constant-noise LPN (by sampling its matrix from a random subspace) is robust against sub-exponentially hard-to-invert auxiliary input with comparable security to the underlying LPN. Plugging this into the framework of [DKL09], we obtain the same applications as considered in [DKL09] (i.e., CPA/CCA secure symmetric encryption schemes, average-case obfuscators, reusable and robust extractors) with resilience to a more general class of leakages, improved efficiency and better security under standard assumptions.
As a main contribution, under constant-noise LPN with certain sub-exponential hardness (i.e., \(2^{\omega (n^{1/2})}\) for secret size n) we obtain a variant of the LPN with security on poly-logarithmic entropy sources, which in turn implies CPA/CCA secure public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. Prior to this, basing PKE and OT on constant-noise LPN had been an open problem since Alekhnovich’s work (FOCS 2003).
1 Introduction
Learning Parity with Noise. The computational version of the learning parity with noise (LPN) assumption with parameters \(n\in \mathbb {N}\) (length of secret) and \(0<\mu <1/2\) (noise rate) postulates that for any \(q=\mathsf {poly}(n)\) (number of queries) it is computationally infeasible for any probabilistic polynomial-time (PPT) algorithm to recover the random secret \(\mathbf x\xleftarrow {\$} \{0, 1\}^{n} \) given \((\mathbf A,~\mathbf {A}\cdot \mathbf {x}+{\mathbf e})\), where \(\mathbf A\) is a random \(q{\times }n\) Boolean matrix, \(\mathbf e\) follows \(\mathcal {B}_\mu ^q=(\mathcal {B}_\mu )^q\), \(\mathcal {B}_\mu \) denotes the Bernoulli distribution with parameter \(\mu \) (i.e., \(\Pr [\mathcal {B}_\mu =1]=\mu \) and \(\Pr [\mathcal {B}_\mu =0]=1-\mu \)), ‘\(\cdot \)’ denotes matrix-vector multiplication over GF(2) and ‘\(+\)’ denotes bitwise addition over GF(2). The decisional version of LPN simply assumes that \((\mathbf A,~\mathbf {A}\cdot \mathbf {x}+{\mathbf e})\) is pseudorandom. The two versions are polynomially equivalent [4, 8, 34].
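As a concrete illustration of the notation (our own sketch, not part of the construction), an LPN instance \((\mathbf A,~\mathbf {A}\cdot \mathbf {x}+{\mathbf e})\) can be sampled as follows; the parameter values n, q and \(\mu \) are arbitrary toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def lpn_sample(n, q, mu):
    """Sample an LPN instance (A, A·x + e) over GF(2).

    A is a uniform q×n Boolean matrix, x a uniform secret, and
    e ~ B_mu^q is i.i.d. Bernoulli noise with Pr[e_i = 1] = mu.
    """
    x = rng.integers(0, 2, size=n)
    A = rng.integers(0, 2, size=(q, n))
    e = (rng.random(q) < mu).astype(int)
    b = (A @ x + e) % 2        # '·' and '+' are over GF(2)
    return A, b, x, e

A, b, x, e = lpn_sample(n=128, q=256, mu=0.125)
# b disagrees with the noiseless A·x exactly on the positions where e = 1.
assert np.array_equal((b + A @ x) % 2, e)
```

The computational problem asks to recover x from (A, b); the decisional one asks to distinguish (A, b) from (A, uniform).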
Hardness of LPN. The computational LPN problem represents the well-known NP-complete problem of “decoding random linear codes” [6], whose worst-case hardness is well investigated. LPN has also been extensively studied in learning theory, and it was shown in [21] that an efficient algorithm for LPN would allow one to learn several important function classes such as 2-DNF formulas, juntas, and any function with a sparse Fourier spectrum. Under a constant noise rate, the best known LPN solvers [9, 39] require time and query complexity both \(2^{O(n/\log {n})}\). The time complexity goes up to \(2^{O(n/\log \log {n})}\) when restricted to \(q=\mathsf {poly}(n)\) queries [40], or even \(2^{O(n)}\) given only \(q=O(n)\) queries [42]. Under low noise rate \(\mu =n^{-c}\) (for constant \(0<c<1\)), the best attacks [5, 7, 12, 38, 48] solve LPN with time complexity \(2^{O(n^{1-c})}\) and query complexity \(q=O(n)\) or moreFootnote 1. Low-noise LPN is commonly believed to be a stronger assumption than constant-noise LPN. In the noise regime \(\mu =O(1/\sqrt{n})\), LPN can be used to build public-key encryption (PKE) schemes and oblivious transfer (OT) protocols (more discussion below). Quantum algorithms are not known to have any advantage over classical ones in solving LPN, which makes LPN a promising candidate for “post-quantum cryptography”. Furthermore, LPN enjoys simplicity and is more suited for weak-power devices (e.g., RFID tags) than other quantum-secure candidates such as LWE [46].
Cryptography in minicrypt . LPN has been used as a basis for building lightweight authentication schemes against passive [29] and even active adversaries (e.g., [32, 34]; see [1] for a more complete literature overview). Kiltz et al. [37] and Dodis et al. [18] constructed randomized MACs from LPN, which imply a two-round authentication scheme with man-in-the-middle security. Lyubashevsky and Masny [41] gave a more efficient three-round authentication scheme from LPN (without going through the MAC transformation), and recently Cash, Kiltz, and Tessaro [13] reduced the round complexity to two rounds. Applebaum et al. [3] used LPN to construct efficient symmetric encryption schemes with certain key-dependent message (KDM) security. Jain et al. [30] constructed an efficient perfectly binding string commitment scheme from LPN. We refer to a recent survey [45] on the current state of the art for LPN.
Cryptography beyond minicrypt . Alekhnovich [2] constructed the first (CPA secure) public-key encryption scheme from LPN with noise rateFootnote 2 \(\mu =1/\sqrt{n}\). By plugging the correlated products approach of [47] into Alekhnovich’s CPA secure PKE scheme, Döttling et al. [20] constructed the first CCA secure PKE scheme from low-noise LPN. After observing that the complexity of the scheme in [20] was hundreds of times worse than Alekhnovich’s original scheme, Kiltz et al. [36] proposed a neat and more efficient CCA secure construction by adapting the techniques from LWE-based encryption in [44] to the case of LPN. More recently, Döttling [19] constructed a PKE with KDM security. All the above schemes are based on LPN of noise rate \(O(1/\sqrt{n})\). To see why noise rate \(1/\sqrt{n}\) seems essential for PKE, we illustrate the (weakly correct) single-bit PKE protocol by Döttling et al. [20] in Fig. 1, which is inspired by the counterparts based on LWE [23, 46]. First, the decisional \(\mathsf {LPN}_{\mu ,n}\) assumption implies that \((\mathbf A, \mathbf {Ax}+ \mathbf e)\) is pseudorandom even when \(\mathbf x\) is drawn from \(X{\sim }\mathcal {B}_\mu ^n\) (instead of \(X{\sim }U_n\)), which can be shown by a simple reduction [20]. Second, the passive security of the protocol is straightforward, as \((pk,\mathbf c_1)\) is pseudorandom even when concatenated with the Goldreich-LevinFootnote 3 hardcore bit \(\mathbf {s}_1^\mathsf{T}{\cdot }\mathbf b\) (replacing \(\mathbf b\) with \(U_n\) by a hybrid argument). The final and most challenging part is correctness, i.e., \(m'\) needs to correlate with m at least noticeably. It is not hard to see that for \(n\mu ^2=O(1)\) and \(\mathbf e, \mathbf s\leftarrow \mathcal {B}_\mu ^n\) we have \(\Pr [\langle \mathbf e, \mathbf s\rangle =0]\ge {1/2}+\varOmega (1)\), and thus noise rate \(\mu =O(1/\sqrt{n})\) seems an inherent barrierFootnote 4 for the PKE to be correct.
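The correctness claim can be sanity-checked numerically (a sketch of ours, via the standard piling-up lemma): each product bit \(e_is_i\) equals 1 independently with probability \(\mu ^2\), so the inner product is 0 with probability exactly \(1/2+\frac{1}{2}(1-2\mu ^2)^n\):

```python
def inner_product_zero_prob(n, mu):
    """Pr[<e, s> = 0] for independent e, s ~ B_mu^n, via the piling-up lemma.

    Each product bit e_i * s_i equals 1 independently with probability mu^2,
    so the XOR of the n product bits is 0 with probability
    1/2 + (1/2) * (1 - 2*mu^2)^n.
    """
    return 0.5 + 0.5 * (1 - 2 * mu ** 2) ** n

n = 10_000

# With mu = 1/sqrt(n), the advantage (1 - 2/n)^n / 2 tends to e^{-2}/2,
# a constant: the weak PKE decrypts correctly noticeably more often than 1/2.
assert inner_product_zero_prob(n, n ** -0.5) > 0.5 + 0.05

# With a constant noise rate mu = 1/8, the advantage vanishes exponentially
# in n, which is why mu = O(1/sqrt(n)) looks like an inherent barrier here.
assert inner_product_zero_prob(n, 0.125) < 0.5 + 1e-9
```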
The scheme is “weak” in the sense that correctness is only \(1/2+\varOmega (1)\), and it can be transformed into a standard CPA scheme (that encrypts multiple-bit messages with overwhelming correctness) using standard techniques (e.g., [15, 20]). Notice that a correct PKE scheme (with certain properties) also yields a (weak form of) 2-round oblivious transfer protocol against an honest-but-curious receiver. Suppose that Alice has a choice \(i\in \{0, 1\}^{} \), and she samples \(pk_i\) with trapdoor \(\mathbf s\) (as described in the protocol) and a uniformly random \(pk_{1-i}\) without trapdoor. Upon receiving \(pk_0\) and \(pk_1\), Bob uses the scheme to encrypt two bits \(\sigma _0\) and \(\sigma _1\) under \(pk_0\) and \(pk_1\) respectively, and sends them to Alice. Alice can then recover \(\sigma _i\) but learns nothing about \(\sigma _{1-i}\). David et al. [16] constructed a universally composable OT under LPN with noise rate \(1/\sqrt{n}\). Therefore, basing PKE (and OT) on LPN with noise rate \(\mu =n^{-1/2+\epsilon }\) (and ideally a constant \(0<\mu <1/2\)) has remained an open problem for the past decade.
LPN with auxiliary input. Despite being only sub-exponentially secure, LPN is known to be robust against any constant fraction of static linear leakages, i.e., for any constant \(0<\alpha <1\) and any \(f(\mathbf x; \mathbf Z)=(\mathbf Z, {\mathbf Z}{\mathbf x})\) it holds that

$$\begin{aligned} \big (\mathbf A,~\mathbf A\cdot \mathbf x+\mathbf e,~f(\mathbf x;\mathbf Z)\big ) ~{\mathop \sim \limits ^{c}} ~\big (\mathbf A,~U_{q},~f(\mathbf x;\mathbf Z)\big ) \end{aligned}$$

(1)
where \(\mathbf Z\) is any \((1-\alpha ){n}\times {n}\) matrix (that can be sampled in polynomial time, independently of \(\mathbf A\)). The above can be seen by a change of basis, so that the security is reducible from the LPN assumption with the same noise rate on a uniform secret of size \(\alpha {n}\). Motivated by this, Dodis, Kalai and Lovett [17] further conjectured that LPN is secure against any polynomial-time computable f such that 1) \(\mathbf x\) given \(f({\mathbf x})\) has average min-entropy \(\alpha {n}\); or even 2) any f that is \(2^{-\alpha {n}}\)-hard-to-invert for PPT algorithms (see Definition 2 for a formal definition). Note the distinction between the two types of leakages: the former f is a lossy function while the latter can even be injective (the leakage \(f(\mathbf x)\) may already determine \(\mathbf x\) in an information-theoretic sense). However, they did not manage to prove the above claim (i.e., LPN with auxiliary input) under standard LPN. Instead, they introduced a new assumption called Learning Subspace with Noise (LSN) as below, where the secret to be learned is the random subspace \(\mathbf V\).
Assumption 1
(The LSN Assumption [17]). For any constant \(\beta >0\), there exists a polynomial \(p = p_\beta (n)\) such that for any polynomial \(q=\mathsf {poly}(n)\) the following two distributions are computationally indistinguishable:
where \({\mathbf V} \sim U_{n\times \beta {n}}\) is a random \(n\times \beta {n}\) matrix, \(\mathbf a_1\), \(\cdots \), \(\mathbf a_q\) are vectors i.i.d. to \(U_{\beta {n}}\), and \(E_1\), \(\cdots \), \(E_q\) are Boolean variables (determining whether the respective noise is uniform randomness or nothing) i.i.d. to \(\mathcal {B}_{1-\frac{1}{p}}\).
Then, the authors of [17] showed that LSN with parameters \(\beta \) and \(p_\beta =p_\beta (n)\) implies that the decisional LPN (as in (1)) under noise rate \(\mu =(\frac{1}{2}-\frac{1}{4p_\beta })\) holds with \(2^{-\alpha {n}}\)-hard-to-invert auxiliary input (for any constant \(\alpha >\beta \)). Further, this yields many interesting applications such as CPA/CCA secure symmetric encryption schemes, average-case obfuscators for the class of point functions, and reusable and robust extractors, all of which remain secure with exponentially hard-to-invert auxiliary input (see [17] for more details). We note that [17] mainly established the feasibility of cryptography with auxiliary input, and there remain issues to be addressed or improved. First, to counteract \(2^{-\alpha {n}}\)-hard-to-invert auxiliary input one needs to fix in advance the noise rate \(1/2-1/4p_\beta \) (recall the constraint \(\beta <\alpha \)). Second, Raz showed that for any constant \(\beta \), \(p_\beta =n^{\varOmega (1)}\) is necessary (otherwise LSN can be broken in polynomial time), and even with \(p_\beta =n^{\varTheta (1)}\) there exist quasi-polynomial attacks (see the full version of [17] for more discussion of Raz’s attacks). Therefore, the security reduction in [17] is quite loose. As the main end result of [17], one needs a high-noise LPN with \(\mu =1/2-1/\mathsf {poly}(n)\) (and thus low efficiency, due to the redundancy needed to make a correct scheme) only to achieve quasi-polynomial security (due to Raz’s attacks) against \(2^{-\alpha {n}}\)-hard-to-invert leakage for some constant \(\alpha \) (i.e., not arbitrary exponentially hard-to-invert leakage). Third, LSN is a new (and less well-studied) assumption, and it was left as an open problem in [17] whether the aforementioned cryptographic applications can be based on the hardness of standard LPN, ideally admitting a more general class of leakages, such as sub-exponentially or even quasi-polynomially hard-to-invert auxiliary input.
The main observation. Yu [49] introduced the “sampling from subspace” technique to prove the above “LPN with auxiliary input” conjecture under standard LPN, but the end result of [49] was invalid due to a flawed intermediate step. A similar idea was also used by Goldwasser et al. [26] in the setting of LWE, where the public matrix was drawn from a (noisy) random subspace. Informally, the observation (in our setting) is that the decisional LPN with constant noise rate \(0<\mu <1/2\) implies that for any constant \(0<\alpha <1\), any \(2^{-2n^{\alpha }}\)-hard-to-invert f and any \(q'=\mathsf {poly}(n)\) it holds that

$$\begin{aligned} \big (\mathbf A',~\mathbf A'\cdot \mathbf x+\mathbf e,~f(\mathbf x)\big ) ~{\mathop \sim \limits ^{c}} ~\big (\mathbf A',~U_{q'},~f(\mathbf x)\big ) \end{aligned}$$

(2)
where \(\mathbf x\sim {U_n}\) Footnote 5, \(\mathbf e\sim \mathcal {B}_\mu ^{q'}\), and \(\mathbf A'\) is a \(q'\times {n}\) matrix with rows sampled from a random subspace of dimension \(\lambda =n^{\alpha }\). Further, if the underlying LPN is \(2^{\omega (n^{\frac{1}{1+\beta }})}\)-hardFootnote 6 for any constant \(\beta >0\), then by setting \(\lambda =\log ^{1+\beta }{n}\), (2) holds for any \(q'=\mathsf {poly}(n)\) and any \(2^{-2\log ^{1+\beta }{n}}\)-hard-to-invert f. The rationale is that distribution \(\mathbf A'\) can be considered as the multiplication of two random matrices \(\mathbf A\xleftarrow {\$} \{0, 1\}^{q'\times {\lambda }} \) and \(\mathbf V\xleftarrow {\$} \{0, 1\}^{{\lambda }\times {n}} \), i.e., \(\mathbf A'\sim {(\mathbf A{\cdot }\mathbf V)}\), where \(\mathbf V\) constitutes the basis of the \(\lambda \)-dimensional random subspace and \(\mathbf A\) is the random coin for sampling from \(\mathbf V\). Unlike the LSN assumption whose subspace \(\mathbf V\) is secret, the \(\mathbf V\) and \(\mathbf A\) in (2) are public coins (implied by \(\mathbf A'\), see Remark 1). We have by the associative law \(\mathbf A'{\cdot }\mathbf x={\mathbf A}({\mathbf V}\cdot {\mathbf x})\) and by the Goldreich-Levin theorem \({\mathbf V}\cdot {\mathbf x}\) is a pseudorandom secret (even conditioned on \(\mathbf V\) and \(f(\mathbf x)\)), and thus (2) is reducible from the standard decisional LPN on noise rate \(\mu \), secret size \(\lambda \) and query complexity \(q'\). Concretely, assume that the LPN problem is \(2^{\omega (n^{3/4})}\)-hard then by setting \(\lambda =n^{2/3}\) (resp., \(\lambda =\log ^{4/3}n\)) we have that (2) is \(2^{\varOmega (n^{1/2})}\)-secure (resp., \(n^{\omega (1)}\)-secure) with any auxiliary input that is \(2^{-2n^{2/3}}\)-hard (resp., \(2^{-2\log ^{4/3}n}\)-hard) to invert. 
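The mechanics of the subspace sampling are easy to verify. The following sketch (ours; the parameter values are toy choices) samples \(\mathbf A'\sim (\mathbf A{\cdot }\mathbf V)\) and checks the associativity \(\mathbf A'{\cdot }\mathbf x={\mathbf A}({\mathbf V}\cdot {\mathbf x})\) that drives the reduction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, q = 64, 16, 128      # secret size n, subspace dimension lambda, queries q'

A = rng.integers(0, 2, size=(q, lam))    # public random coins for sampling
V = rng.integers(0, 2, size=(lam, n))    # basis of the lambda-dimensional subspace
A_prime = (A @ V) % 2                    # every row of A' lies in the row space of V

x = rng.integers(0, 2, size=n)           # the (possibly leaky) n-bit secret
y = (V @ x) % 2                          # the compressed lambda-bit secret

# Associativity over GF(2): A'·x = A·(V·x). The instance on the n-bit secret x
# therefore collapses to a standard LPN instance on the lambda-bit secret y.
assert np.array_equal((A_prime @ x) % 2, (A @ y) % 2)
```

By the Goldreich-Levin theorem, y remains pseudorandom even given V and the leakage, which is what makes the collapsed instance a genuine LPN instance.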
Plugging (2) into the framework of [17] we obtain the same applications (CPA/CCA secure symmetric encryption schemes, average-case obfuscators for point functions, reusable and robust extractors) under standard (constant-noise) LPN with improved efficiency (as the noise is constant rather than polynomially close to uniform) and tighter security against sub-exponentially (or even quasi-polynomially) hard-to-invert auxiliary input.
PKE from Constant-Noise LPN. More surprisingly, we show a connection from “LPN with auxiliary input” to “basing PKE on (constant-noise) LPN”. The feasibility can be understood via the single-bit weak PKE in Fig. 1 with some modifications: assume that LPN is \(2^{\omega (n^{\frac{1}{2}})}\)-hard (i.e., \(\beta =1\)); then for \(\lambda =\log ^2n/4\) we have that (2) holds for any \(\mathbf x\sim {X}\) with min-entropy \({{\mathbf {H}}_{\infty }}(X)\ge \log ^2n/2\). Therefore, by replacing the uniform matrix \(\mathbf A\) with \(\mathbf A'\sim ({U_{n{\times }\lambda }}{\cdot }U_{\lambda {\times }n})\), and sampling \({\mathbf s}\),\({\mathbf s_1}\leftarrow {X}\) and \({\mathbf e}\),\({\mathbf e_1} \leftarrow {\mathcal {B}_\mu ^n}\) for constant \(\mu \) and \(X\sim \chi _{\log {n}}^n\) Footnote 7, we get that \({\mathbf s_1^\mathsf{T}}{\mathbf e}\) and \({\mathbf e_1^\mathsf{T}}{\mathbf s}\) are both \((1/2+1/\mathsf {poly}(n))\)-biased to 0 independently, and thus the PKE scheme has noticeable correctness. We then transform the weak PKE into a full-fledged CPA secure scheme, where the extension is not trivial (it requires more than a straightforward parallel repetition plus error-correcting codes). In particular, neither \(X\sim \chi _{\log {n}}^n\) nor \(X\sim \mathcal {B}_{\log {n}/n}^n\) can guarantee security and correctness simultaneously, and thus additional ideas are needed (more details are deferred to Sect. 4.3).
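The claimed \((1/2+1/\mathsf {poly}(n))\)-bias can again be checked with the piling-up lemma (our own calculation, not from the paper): for a secret of Hamming weight \(\log {n}\), only \(\log {n}\) of the constant-rate noise bits enter the inner product, giving an inverse-polynomial rather than negligible advantage:

```python
import math

def sparse_inner_zero_prob(weight, mu):
    """Pr[<s, e> = 0] when s is a fixed vector of Hamming weight `weight`
    and e ~ B_mu^n: only the `weight` selected noise bits matter, and
    piling them up gives 1/2 + (1/2) * (1 - 2*mu)^weight."""
    return 0.5 + 0.5 * (1 - 2 * mu) ** weight

mu = 0.125                       # a constant noise rate
c = math.log2(1 / (1 - 2 * mu))  # the resulting advantage is n^{-c} / 2

for n in (2 ** 10, 2 ** 20, 2 ** 30):
    w = int(math.log2(n))        # secrets from chi_{log n}^n have weight log n
    adv = sparse_inner_zero_prob(w, mu) - 0.5
    # Inverse-polynomial (not negligible) advantage, as claimed above.
    assert abs(adv - 0.5 * n ** -c) < 1e-12
    assert adv > 0
```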
PKE with CCA Security. Once we have a CPA scheme based on constant-noise LPN, we can easily extend it to a CCA one by using the techniques in [20], thus suffering from the same performance slowdown as [20]. A natural question is whether we can construct a simpler and more efficient CCA scheme as in [36]. Unfortunately, the techniques in [36] do not immediately apply to the case of constant-noise LPN. The reason is that, in order to employ the ideas from the LWE-based encryption scheme [44], the scheme in [36] has to use a variant of LPN (called knapsack LPN), and the corresponding decryption key is exactly the secret of some knapsack LPN instances. Even though there is a polynomial-time reduction [43] from the LPN problem to the knapsack LPN problem, such a reduction maps the noise distribution of the LPN problem into the secret distribution of the knapsack LPN problem. If we directly applied the techniques in [36], the resulting scheme would not have any guarantee of correctness, because the corresponding decryption key follows the Bernoulli distribution with constant parameter \(\mu \). Recall that for the correctness of our CPA secure PKE scheme, the decryption key cannot simply be chosen from either \(\chi _{\log {n}}^n\) or \(\mathcal {B}_{\log {n}/n}^n\). Fortunately, based on several new observations and some new technical lemmas, we manage to adapt the ideas of [36, 44] to construct a simpler and more efficient CCA secure PKE scheme from constant-noise LPN.
OT from constant-noise LPN. PKE and OT are incomparable in general [24]. But if the PKE scheme in question has some additional properties, then we can build an OT protocol from it in a black-box way [24]. Gertner et al. [24] showed that if the public key of a CPA secure PKE scheme can be indistinguishably sampled (without knowing the corresponding secret key) from the public-key distribution produced by honestly running the key generation algorithm, then we can use it to construct an OT protocol with honest parties (which can then be transformed into a standard OT protocol by using zero-knowledge proofs). It is easy to check that our CPA secure PKE scheme satisfies this property under the LPN assumption. Besides, none of the techniques used in transforming Alekhnovich’s CPA secure PKE scheme into a universally composable OT protocol [16] prevents us from obtaining a universally composable OT protocol from our CPA secure PKE scheme. In summary, our results imply that there exist (universally composable) OT protocols under the constant-noise LPN assumption. We omit the details and refer to [16, 24] for more information.
2 Preliminaries
Notations and definitions. We use capital letters (e.g., X, Y) for random variables and distributions, standard letters (e.g., x, y) for values, and calligraphic letters (e.g., \(\mathcal {X}\), \(\mathcal {E}\)) for sets and events. Vectors are used in column form and denoted by bold lower-case letters (e.g., \(\mathbf a\)). We treat a matrix as the set of its column vectors and denote it by a bold capital letter (e.g., \(\mathbf A\)). The support of a random variable X, denoted by Supp(X), refers to the set of values on which X takes with non-zero probability, i.e., \(\{x:\Pr [X=x]>0\}\). For a set \(\mathcal {S}\) and a binary string s, \(|\mathcal {S}|\) denotes the cardinality of \(\mathcal {S}\) and |s| refers to the Hamming weight of s. We use \(\mathcal {B}_\mu \) to denote the Bernoulli distribution with parameter \(\mu \), i.e., \(\Pr [\mathcal {B}_\mu =1] = \mu \), \(\Pr [\mathcal {B}_\mu = 0] = 1 - \mu \), while \(\mathcal {B}_\mu ^q\) denotes the concatenation of q independent copies of \(\mathcal {B}_\mu \). We use \(\chi _i^n\) to denote the uniform distribution over \(\{\mathbf e\in \{0, 1\}^{n} :|\mathbf e|=i \}\). We denote by \(\mathcal {D}^{n_1{\times }n}_\lambda \mathop {=}\limits ^{\text {def}}(U_{n_1\times \lambda }{\cdot }U_{\lambda \times {n}})\) the matrix distribution induced by multiplying two random matrices. For \(n,q\in \mathbb {N}\), \(U_n\) (resp., \(U_{q\times n}\)) denotes the uniform distribution over \( \{0, 1\}^{n} \) (resp., \( \{0, 1\}^{q\times n} \)), independent of any other random variables in consideration, and \(f(U_n)\) (resp., \(f(U_{q\times n})\)) denotes the distribution induced by applying function f to \(U_n\) (resp., \(U_{q\times n}\)). \(X{\sim }D\) denotes that random variable X follows distribution D. We use \(s\leftarrow {S}\) to denote sampling an element s according to distribution S, and let \(s\xleftarrow {\$}{\mathcal {S}}\) denote sampling s uniformly from set \(\mathcal {S}\).
Entropy notions. For \(0<\mu <1/2\), the binary entropy function is defined as \({{\mathbf {H}}}(\mu )\mathop {=}\limits ^{\text {def}}\mu \log (1/\mu )+(1-\mu )\log (1/(1-\mu ))\). The Shannon entropy and min-entropy of a random variable X are defined respectively as

$$\begin{aligned} {{\mathbf {H}}_{1}}(X)\mathop {=}\limits ^{\text {def}}\mathbb {E}_{x\leftarrow {X}}\Big [\log \frac{1}{\Pr [X=x]}\Big ],\qquad {{\mathbf {H}}_{\infty }}(X)\mathop {=}\limits ^{\text {def}}\min _{x\in \mathsf {Supp}(X)}\log \frac{1}{\Pr [X=x]}. \end{aligned}$$
Note that \({{\mathbf {H}}_{1}}(\mathcal {B}_\mu )={{\mathbf {H}}}(\mu )\). The average min-entropy of a random variable X conditioned on another random variable Z is defined as

$$\begin{aligned} {{\mathbf {H}}_{\infty }}(X|Z)\mathop {=}\limits ^{\text {def}}-\log \Big (\mathbb {E}_{z\leftarrow {Z}}\big [2^{-{{\mathbf {H}}_{\infty }}(X|Z=z)}\big ]\Big ). \end{aligned}$$
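For intuition, both notions can be computed directly on small distributions. The following sketch (our own toy example, not part of the paper) implements the two definitions for a 2-bit uniform secret whose first bit leaks:

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2 of the largest point probability of X."""
    return -math.log2(max(dist.values()))

def avg_min_entropy(joint):
    """Average min-entropy H_inf(X | Z) = -log2 sum_z max_x Pr[X=x, Z=z],
    for a joint distribution given as {(x, z): probability}."""
    best = {}
    for (x, z), p in joint.items():
        best[z] = max(best.get(z, 0.0), p)
    return -math.log2(sum(best.values()))

# X uniform over 2 bits; Z leaks the first bit of X.
joint = {((b0, b1), b0): 0.25 for b0 in (0, 1) for b1 in (0, 1)}

marginal = {}
for (x, _), p in joint.items():
    marginal[x] = marginal.get(x, 0.0) + p

assert min_entropy(marginal) == 2.0      # 2 bits of min-entropy to start with
assert avg_min_entropy(joint) == 1.0     # one bit left after the 1-bit leak
```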
Indistinguishability and statistical distance. We say that random variables X and Y are (t,\(\varepsilon \))-computationally indistinguishable, denoted by \(X ~{\mathop \sim \limits _{(t,\varepsilon )}} ~Y\), if for every probabilistic distinguisher \({\mathcal {D}}\) of running time t it holds that

$$\begin{aligned} \big |\Pr [{\mathcal {D}}(X)=1]-\Pr [{\mathcal {D}}(Y)=1]\big |~\le ~\varepsilon . \end{aligned}$$
The statistical distance between X and Y, denoted by \(\mathsf {SD}(X,Y)\), is defined by

$$\begin{aligned} \mathsf {SD}(X,Y)\mathop {=}\limits ^{\text {def}}\frac{1}{2}\sum _{x}\big |\Pr [X=x]-\Pr [Y=x]\big |. \end{aligned}$$
Computational/statistical indistinguishability is defined with respect to distribution ensembles (indexed by a security parameter). For example, \(X\mathop {=}\limits ^\mathsf{def}\{X_n\}_{n\in \mathbb {N}}\) and \(Y\mathop {=}\limits ^\mathsf{def}\{Y_n\}_{n\in \mathbb {N}}\) are computationally indistinguishable, denoted by \(X ~{\mathop \sim \limits ^{c}} ~Y\), if for every \(t=\mathsf {poly}(n)\) there exists \(\varepsilon =\mathsf{negl}(n)\) such that \(X ~{\mathop \sim \limits _{(t,\varepsilon )}} ~Y\). X and Y are statistically indistinguishable, denoted by \(X ~{\mathop \sim \limits ^{s}} ~Y\), if \(\mathsf {SD}(X,Y)=\mathsf{negl}(n)\).
Simplifying Notations. To simplify the presentation, we use the following simplified notations. Throughout, n is the security parameter and most other parameters are functions of n, and we often omit n when clear from the context. For example, \(q=q(n)\in \mathbb {N}\), \(t=t(n)>0\), \(\epsilon =\epsilon (n)\in (0,1)\), and \(m=m(n)=\mathsf {poly}(n)\), where \(\mathsf {poly}\) refers to some polynomial.
Definition 1
(Learning Parity with Noise). The \({\varvec{decisional}}\) \({\mathbf {\mathsf{{LPN}}}}_{\mu ,n}\) problem (with secret length n and noise rate \(0<\mu <1/2\)) is hard if for every \(q=\mathsf {poly}(n)\) we have

$$\begin{aligned} (\mathbf A,~\mathbf A\cdot \mathbf x+\mathbf e) ~{\mathop \sim \limits ^{c}} ~(\mathbf A,~U_{q}) \end{aligned}$$

(3)

where \(q\times {n}\) matrix \(\mathbf A~{\sim }~U_{q\times n}\), \(\mathbf x\sim {U_n}\) and \(\mathbf e\sim \mathcal {B}_\mu ^q\). The \({\varvec{computational}}\) \({\mathbf {\mathsf{{LPN}}}}_{\mu ,n}\) problem is hard if for every \(q=\mathsf {poly}(n)\) and every PPT algorithm \({\mathcal {D}}\) we have

$$\begin{aligned} \Pr [{\mathcal {D}}(\mathbf A,~\mathbf A\cdot \mathbf x+\mathbf e)=\mathbf x]=\mathsf {negl}(n) \end{aligned}$$

(4)
where \(\mathbf A~{\sim }~U_{q \times n}\), \(\mathbf x\sim {U_n}\) and \(\mathbf e\sim \mathcal {B}_\mu ^q\).
LPN with specific hardness. We say that the decisional (resp., computational) \(\mathsf {LPN}_{\mu ,n}\) problem is T-hard if for every \(q{\le }T\) and every probabilistic adversary of running time T the distinguishing (resp., inverting) advantage in (3) (resp., (4)) is upper bounded by 1/T.
Definition 2
(Hard-to-Invert Function). Let n be the security parameter and let \(\kappa =\omega (\log {n})\). A polynomial-time computable function \(f : \{0, 1\}^{n} \rightarrow \{0, 1\}^{l} \) is \(2^{-\kappa }\)-hard-to-invert if for every PPT adversary \({\mathcal {A}}\)

$$\begin{aligned} \Pr _{\mathbf x\leftarrow {U_n}}[{\mathcal {A}}(f(\mathbf x))=\mathbf x]~\le ~{2^{-\kappa }}. \end{aligned}$$
Lemma 1
(Union Bound). Let \(\mathcal {E}_1\), \(\cdots \), \(\mathcal {E}_l\) be any (not necessarily independent) events such that \(\Pr [\mathcal {E}_i]\ge (1-\epsilon _i)\) for every \(1{\le }i{\le }l\). Then we have

$$\begin{aligned} \Pr [\mathcal {E}_1\wedge \mathcal {E}_2\wedge \cdots \wedge \mathcal {E}_l]~\ge ~1-(\epsilon _1+\epsilon _2+\cdots +\epsilon _l). \end{aligned}$$
We will use the following (essentially Hoeffding’s) bound on the Hamming weight of a high-noise Bernoulli vector.
Lemma 2
For any \(0<p<1/2\) and \(\delta \le (\frac{1}{2}-p)\), we have

$$\begin{aligned} \Pr \big [~|\mathcal {B}_p^q|>(p+\delta )q~\big ]~<~e^{-2\delta ^2q}. \end{aligned}$$
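Such a Hoeffding-style tail can be checked exactly for small parameters (our own computation; the parameter values below are arbitrary toy choices satisfying \(\delta \le \frac{1}{2}-p\)):

```python
import math

def binom_upper_tail(q, p, k):
    """Exact Pr[Bin(q, p) > k], summing the binomial pmf."""
    return sum(math.comb(q, i) * p ** i * (1 - p) ** (q - i)
               for i in range(k + 1, q + 1))

q, p, delta = 200, 0.25, 0.1
exact = binom_upper_tail(q, p, int((p + delta) * q))
bound = math.exp(-2 * delta ** 2 * q)

# The exact tail probability is dominated by the Hoeffding-style bound
# exp(-2 * delta^2 * q).
assert exact < bound
```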
3 Learning Parity with Noise with Auxiliary Input
3.1 Leaky Sources and (Pseudo)randomness Extraction
We define below two types of leaky sources and recall two technical lemmas for (pseudo)randomness extraction from the respective sources, where \(\mathbf x\) for TYPE-II source is assumed to be uniform only for alignment with [17] (see Footnote 3).
Definition 3
(Leaky Sources). Let \(\mathbf x\) be any random variable over \( \{0, 1\}^{n} \) and let \(f: \{0, 1\}^{n} \rightarrow \{0, 1\}^{l} \) be any polynomial-time computable function. \((\mathbf x, f(\mathbf x))\) is called an (n,\(\kappa \)) TYPE-I (resp., TYPE-II) leaky source if it satisfies condition 1 (resp., condition 2) below:
-
1.
Min-entropy leaky sources. \({{\mathbf {H}}_{\infty }}(\mathbf x|f(\mathbf x))~{\ge }~\kappa \) and \(f(\mathbf x)\) is polynomial-time sampleable.
-
2.
Hard-to-invert leaky sources. \(\mathbf x\sim {U_n}\) and f is \(2^{-\kappa }\)-hard-to-invert.
Lemma 3
(Goldreich-Levin Theorem [25]). Let n be a security parameter, let \(\kappa =\omega (\log {n})\) be polynomial-time computable from n, and let \(f: \{0, 1\}^{n} \rightarrow \{0, 1\}^{l} \) be any polynomial-time computable function that is \(2^{-\kappa }\)-hard-to-invert. Then, for any constant \(0<\beta <1\) and \(\lambda =\lceil \beta \kappa \rceil \), it holds that

$$\begin{aligned} \big (\mathbf V,~\mathbf V\cdot \mathbf x,~f(\mathbf x)\big ) ~{\mathop \sim \limits ^{c}} ~\big (\mathbf V,~U_{\lambda },~f(\mathbf x)\big ) \end{aligned}$$
where \(\mathbf x\sim {U_n}\) and \(\mathbf V \sim U_{\lambda \times {n}}\) is a random \(\lambda \times {n}\) Boolean matrix.
Lemma 4
(Leftover Hash Lemma [28]). Let \((X,Z)\in \mathcal {X}\times {\mathcal {Z}}\) be any joint random variable with \({{\mathbf {H}}_{\infty }}(X|Z)\ge {k}\), and let \(\mathcal {H}=\{h_{\mathbf V}:\mathcal {X}\rightarrow \{0, 1\}^{l} ,\mathbf V\in \{0, 1\}^{s} \}\) be a family of universal hash functions, i.e., for any \( x_1\ne { x_2}\in \mathcal {X}\), \(\Pr _{\mathbf V\xleftarrow {\$} \{0, 1\}^{s} }[h_{\mathbf V}(x_1)=h_{\mathbf V}(x_2)]\le {2^{-l}}\). Then, it holds that

$$\begin{aligned} \mathsf {SD}\big (~(\mathbf V,~h_{\mathbf V}(X),~Z),~(\mathbf V,~U_{l},~Z)~\big )~\le ~\frac{1}{2}\sqrt{2^{l-k}} \end{aligned}$$
where \(\mathbf V\sim {U_{s}}\).
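The random linear maps used throughout this work, \(h_{\mathbf V}(\mathbf x)=\mathbf V\cdot \mathbf x\) over GF(2), form such a universal family: for fixed \(x_1\ne x_2\) the difference \(d=x_1+x_2\) is nonzero, so \(\mathbf V\cdot d\) is uniform as \(\mathbf V\) varies. The following exhaustive check (ours, with toy parameters) confirms the collision probability is exactly \(2^{-l}\):

```python
import itertools
import numpy as np

n_bits, l = 4, 2
x1 = np.array([1, 0, 1, 0])
x2 = np.array([1, 1, 0, 0])   # x1 != x2, so d = x1 + x2 is nonzero

collisions = 0
total = 0
for bits in itertools.product([0, 1], repeat=l * n_bits):
    V = np.array(bits).reshape(l, n_bits)
    total += 1
    if np.array_equal((V @ x1) % 2, (V @ x2) % 2):
        collisions += 1

# Exactly a 2^-l fraction of the keys collide, matching the universality
# condition Pr_V[h_V(x1) = h_V(x2)] <= 2^-l required by the lemma.
assert collisions / total == 2 ** -l
```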
3.2 The Main Technical Lemma and Immediate Applications
Inspired by [26, 49], we state a technical result below, where the main difference is that we sample from a random subspace of sublinear dimension (rather than one of linear dimension [49], or a noisy subspace in the LWE setting [26]).
Theorem 1
(LPN with Hard-to-Invert Auxiliary Input). Let n be a security parameter and let \(0<\mu <1/2\) be any constant. Assume that the decisional \(\mathsf {LPN}_{\mu ,n}\) problem is hard. Then, for every constant \(0<\alpha <1\), \(\lambda =n^{\alpha }\), \(q'=\mathsf {poly}(n)\), and every \((n,2\lambda )\) TYPE-I or TYPE-II leaky source (\(\mathbf x,f(\mathbf x)\)), we have

$$\begin{aligned} \big (\mathbf A',~\mathbf A'\cdot \mathbf x+\mathbf e,~f(\mathbf x)\big ) ~{\mathop \sim \limits ^{c}} ~\big (\mathbf A',~U_{q'},~f(\mathbf x)\big ) \end{aligned}$$

(5)
where \(\mathbf e\sim \mathcal {B}_\mu ^{q'}\), and \(\mathbf A'\sim \mathcal {D}_{\lambda }^{q'{\times }n}\) is a \(q'\times {n}\) matrix, i.e., \(\mathbf A'\sim {(\mathbf A{\cdot }\mathbf V)}\) for random matrices \(\mathbf A\xleftarrow {\$} \{0, 1\}^{q'\times {\lambda }} \) and \(\mathbf V\xleftarrow {\$} \{0, 1\}^{{\lambda }\times {n}} \).
Furthermore, if the \(\mathsf {LPN}_{\mu ,n}\) problem is \(2^{\omega (n^{\frac{1}{1+\beta }})}\)-hard for some constant \(\beta >0\) (with any superconstant function hidden by \(\omega (\cdot )\)), then the above holds for any \(\lambda =\varTheta (\log ^{1+\beta }{n})\), any \(q'=\mathsf {poly}(n)\) and any \((n,2\lambda )\) TYPE-I/TYPE-II leaky source.
Remark 1
(Closure Under Composition). The random subspace \(\mathbf V\) and the random coin \(\mathbf A\) can be public as well, as is seen from the proof below but omitted from (5) to avoid redundancy (since they are implied by \(\mathbf A'\)). That is, there exists a PPT \(\mathsf {Simu}\) such that (\(\mathbf A'\), \(\mathsf {Simu}(\mathbf A')\)) is \(2^{-\varOmega (n)}\)-close to (\(\mathbf A'\), (\(\mathbf A\),\(\mathbf V\)) ). Therefore, (5) can be written in an equivalent form that is closed under composition, i.e., for any \(q'=\mathsf {poly}(n)\) and \(l=\mathsf {poly}(n)\)

$$\begin{aligned} \big (\mathbf V,~\{\mathbf A_i,~\mathbf A_i\cdot (\mathbf V\cdot \mathbf x)+\mathbf e_i\}_{i=1}^{l},~f(\mathbf x)\big ) ~{\mathop \sim \limits ^{c}} ~\big (\mathbf V,~\{\mathbf A_i,~U_{q'}\}_{i=1}^{l},~f(\mathbf x)\big ) \end{aligned}$$
where \(\mathbf A_1,\cdots ,\mathbf A_l\xleftarrow {\$} \{0, 1\}^{q'\times {\lambda }} \), \(\mathbf e_1,\cdots ,\mathbf e_l\sim {\mathcal {B}_\mu ^{q'}}\) and \(\mathbf V\xleftarrow {\$} \{0, 1\}^{{\lambda }\times {n}} \). This will be a useful property for constructing symmetric encryption schemes w.r.t. hard-to-invert auxiliary input (see more details in [17]).
Proof of Theorem 1. We have by the assumption on \((\mathbf x,f(\mathbf x))\) and Lemma 3 or Lemma 4 that

$$\begin{aligned} \big (\mathbf V,~\mathbf V\cdot \mathbf x,~f(\mathbf x)\big ) ~{\mathop \sim \limits ^{c}} ~\big (\mathbf V,~\mathbf y,~f(\mathbf x)\big ) \end{aligned}$$
where \(\mathbf y{\sim }U_{\lambda }\). Next, consider the T-hard decisional \(\mathsf {LPN}_{\mu ,\lambda }\) problem on a uniform secret \(\mathbf y\) of length \(\lambda \) (instead of n), which postulates that for any \(q'{\le }T\)

$$\begin{aligned} (\mathbf A,~\mathbf A\cdot \mathbf y+\mathbf e) ~{\mathop \sim \limits _{(T,1/T)}} ~(\mathbf A,~U_{q'}) \end{aligned}$$
Under the LPN assumption with standard asymptotic hardness (i.e., \(T=\lambda ^{\omega (1)}\)) and by setting parameter \(\lambda =n^{\alpha }\) we have \(T=n^{\omega (1)}\), which suffices for our purpose, since for any \(q'=\mathsf {poly}(n)\) any PPT adversary wins the above distinguishing game with advantage no greater than \(n^{-\omega (1)}\). In case that \(\mathsf {LPN}_{\mu ,\lambda }\) is \(2^{\omega (\lambda ^{\frac{1}{1+\beta }})}\)-hard, substitution of \(\lambda =\varTheta (\log ^{1+\beta }{n})\) into \(T=2^{\omega (\lambda ^{\frac{1}{1+\beta }})}\) also yields \(T=n^{\omega (1)}\). Therefore, in both cases the above two ensembles are computationally indistinguishable in security parameter n. The conclusion then follows by a triangle inequality. \(\square \)
A comparison with [17]. The work of [17] proved results similar to Theorem 1. In particular, [17] showed that the LSN assumption with parameters \(\beta \) and \(p=\mathsf {poly}_\beta (n)\) implies LPN with \(2^{-\alpha {n}}\)-hard-to-invert auxiliary input (for constant \(\alpha >\beta \)), noise rate \(\mu =1/2-1/4p\) and quasi-polynomial security (in essentially the same form as (5) except for a uniform matrix \(\mathbf A'\)). In comparison, by sampling \(\mathbf A'\) from a random subspace of sublinear dimension \(\lambda =n^{\alpha }\) (for \(0<\alpha <1\)), constant-noise LPN implies that (5) holds with \(2^{-{\varOmega (n^{\alpha }})}\)-hard-to-invert auxiliary input, constant noise and comparable security to the underlying LPN. Furthermore, assuming constant-noise LPN with \(2^{\omega (n^{\frac{1}{1+\beta }})}\)-hardness (for constant \(\beta >0\)), (2) holds for \(2^{-\varOmega (\log ^{1+\beta }{n})}\)-hard-to-invert auxiliary input, constant noise and quasi-polynomial security.
Immediate applications. This yields the same applications as considered in [17], such as CPA/CCA secure symmetric encryption schemes, average-case obfuscators for point functions, and reusable and robust extractors, all under standard (constant-noise) LPN with improved efficiency (by bringing down the noise rate) and tighter security against sub-exponentially (or even quasi-polynomially) hard-to-invert auxiliary input. The proofs simply follow the route of [17] and can be informally explained as follows: the technique (sampling from a random subspace) implicitly applies pseudorandomness extraction (i.e., \(\mathbf y=\mathbf V\cdot \mathbf x\)) so that the rest of the scheme is built upon the security of \((\mathbf A,\mathbf A{\mathbf y}+\mathbf e)\) on secret \(\mathbf y\) (which is pseudorandom even conditioned on the leakage), and thus the task is essentially to obtain the aforementioned applications from standard LPN (without auxiliary input). In other words, our technique allows us to transform any application based on constant-noise LPN into its counterpart with auxiliary input under the same assumption. Therefore, we only sketch some applications in the full version of this work and refer to [17] for the remaining details.
4 CPA Secure PKE from Constant-Noise LPN
We show a more interesting application, namely, to build public-key encryption schemes from constant-noise LPN, which has been an open problem since the work of [2]. We refer to Appendix A.2 for standard definitions of public-key encryption schemes, correctness and CPA/CCA security.
4.1 Technical Lemmas
We use the following technical tool to build a PKE scheme from constant-noise LPN. It would have been an immediate corollary of Theorem 1 for sub-exponentially hard LPN on squared-logarithmic min-entropy sources (i.e., \(\beta =1\)), except for the fact that the leakage is also correlated with noise. Notice that we lose the “closure under composition” property by allowing leakage to be correlated with noise, and thus our PKE scheme will avoid relying on this property.
Theorem 2
(LPN on Squared-Log Entropy). Let n be a security parameter and let \(0<\mu <1/2\) be any constant. Assume that the computational \(\mathsf {LPN}_{\mu ,n}\) problem is \(2^{\omega (n^{\frac{1}{2}})}\)-hard (for any superconstant hidden by \(\omega (\cdot )\)), then for every \(\lambda =\varTheta (\log ^2n)\), \(q'=\mathsf {poly}(n)\), and every polynomial-time sampleable \(\mathbf x\in \{0, 1\}^{n} \) with \({{\mathbf {H}}_{\infty }}(\mathbf x)\ge {2\lambda }\) and every probabilistic polynomial-time computable function \(f: \{0, 1\}^{n+q'} \times {\mathcal {Z}}\rightarrow \{0, 1\}^{O(\log {n})} \) with public coin Z, we have
where noise vector \(\mathbf e\sim \mathcal {B}_\mu ^{q'}\) and \(q'\times {n}\) matrix \(\mathbf A'\sim {\mathcal {D}_{\lambda }^{q'{\times }n}}\).
Proof sketch
It suffices to adapt the proof of Theorem 1 as follows. First, observe that (by the chain rule of min-entropy)
For our convenience, write \(\mathbf A'\sim (\mathbf A\cdot {\mathbf V})\) for \(\mathbf A{\sim }U_{q'\times {\lambda }}\), \(\mathbf V\sim {U}_{{\lambda }\times {n}}\), and let \(\mathbf y,\mathbf r\sim U_\lambda \). Then, we have by Lemma 4
Next, the \(2^{\omega (\lambda ^{\frac{1}{2}})}\)-hard computational \(\mathsf {LPN}_{\mu ,\lambda }\) problem with secret size \(\lambda \) postulates that for any \(q'{\le }2^{\omega (\lambda ^{\frac{1}{2}})}=n^{\omega (1)}\) (recall \(\lambda =\varTheta (\log ^2{n})\)) and any probabilistic \({\mathcal {D}}\), \({\mathcal {D}}'\) of running time \(n^{\omega (1)}\)
where the first implication is trivial since Z is independent of (\(\mathbf A\),\(\mathbf y\),\(\mathbf e\)) and any \(O(\log {n})\) bits of leakage affect unpredictability by a factor of at most \(\mathsf {poly}(n)\), the second step is the Goldreich-Levin theorem [25] with \(\mathbf r\sim {U_\lambda }\), and the third implication uses the sample-preserving reduction from [4] and is reproduced as Lemma 18. The conclusion follows by a triangle inequality.
We will use Lemma 5 to estimate the noise rate of an inner product between Bernoulli-like vectors.
Lemma 5
For any \(0<\mu {\le }1/8\) and \(\ell \in \mathbb {N}\), let \(E_1\), \(\cdots \), \(E_\ell \) be Boolean random variables i.i.d. to \(\mathcal {B}_{\mu }\), then \(\Pr [~\bigoplus _{i=1}^{\ell }E_i=0~]>\frac{1}{2}+2^{-(4\mu \ell +1)}\).
Proof
We complete the proof by combining Fact 1 and Fact 2: Fact 1 gives \(\Pr [~\bigoplus _{i=1}^{\ell }E_i=0~]=\frac{1}{2}+\frac{1}{2}(1-2\mu )^\ell \), and Fact 2 (applied with \(x=2\mu \le 1/4\)) gives \((1-2\mu )^\ell >2^{-4\mu \ell }\).
Fact 1
(Piling-Up Lemma). For \(0<\mu \,<\,1/2\) and random variables \(E_1\), \(E_2\), \(\cdots \), \(E_\ell \) that are i.i.d. to \(\mathcal {B}_\mu \) we have \(\bigoplus _{i=1}^{\ell }E_i~\sim ~\mathcal {B}_{\sigma }\) with \(\sigma =\frac{1}{2}(1-(1-2\mu )^\ell )\).
Fact 2
(Mean Value Theorem). For any \(0<x{\le }1/4\) we have \(1-x>2^{-2x}\).
We recall the following facts about the entropy of Bernoulli-like distributions. In general, there is no closed-form formula for binomial coefficients, but an asymptotic estimate like Fact 3 already suffices for our purposes, where the binary entropy function can be further bounded by Fact 4 (see also Footnote 11).
Fact 3
(Asymptotics for Binomial Coefficients, e.g., [27], p. 492). For any \(0<\mu <1/2\) and any \(n\in \mathbb {N}\) we have \({n\atopwithdelims (){\mu {n}}}~=~2^{n{{\mathbf {H}}}(\mu )-\frac{\log {n}}{2}+O(1)}\).
Fact 4
For any \(0<\mu <1/2\), we have \(\mu \log (1/\mu )<{{\mathbf {H}}}(\mu )<\mu (\log (1/\mu )+\frac{3}{2})\).
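Fact 4's sandwich bound on the binary entropy function can likewise be verified numerically; the sketch below (with an illustrative grid over \(0<\mu <1/2\)) is not part of the proof, only a sanity check.

```python
import math

def bin_entropy(mu):
    # binary entropy H(mu) = -mu*log2(mu) - (1-mu)*log2(1-mu), in bits
    return -mu * math.log2(mu) - (1 - mu) * math.log2(1 - mu)

# Fact 4: mu*log2(1/mu) < H(mu) < mu*(log2(1/mu) + 3/2) on 0 < mu < 1/2
for i in range(1, 500):
    mu = i / 1000
    lower = mu * math.log2(1 / mu)
    upper = mu * (math.log2(1 / mu) + 1.5)
    assert lower < bin_entropy(mu) < upper
```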
4.2 Weakly Correct 1-bit PKE from Constant-Noise LPN
As stated in Theorem 2, for any constant \(0<\mu <1/2\), \(2^{\omega (n^{\frac{1}{2}})}\)-hard \(\mathsf {LPN}_{\mu ,n}\) implies that \((\mathbf A'{\cdot }{\mathbf x}+{\mathbf e})\) is pseudorandom conditioned on \(\mathbf A'\) for \({\mathbf x}{\sim }X\) with squared-log entropy, where the leakage due to f can be omitted for now as it is only needed for CCA security. If there exists X satisfying the following three conditions at the same time then the 1-bit PKE as in Fig. 1 instantiated with the square matrix \(\mathbf A'\leftarrow \mathcal {D}_\lambda ^{n{\times }n}\), \({\mathbf s}\),\({\mathbf s_1} \leftarrow {X}\) and \({\mathbf e}\),\({\mathbf e_1} \leftarrow {\mathcal {B}_\mu ^n}\) will be secure and noticeably correct (since \({\mathbf s_1^\mathsf{T}}{\mathbf e}\) and \({\mathbf e_1^\mathsf{T}}{\mathbf s}\) are both \((1/2+1/\mathsf {poly}(n))\)-biased to 0 independently).
1. (Efficiency) \(X\in \{0, 1\}^{n} \) can be sampled in polynomial time.
2. (Security) \({{\mathbf {H}}_{\infty }}(X)=\varOmega (\log ^2n)\) as required by Theorem 2.
3. (Correctness) \(|X|=O(\log {n})\) such that \(\Pr [\langle {X},\mathcal {B}_\mu ^n\rangle =0]{\ge }1/2+1/\mathsf {poly}(n)\).
Note that any distribution \(X\in \{0, 1\}^{n} \) satisfying \(|X|=O(\log {n})\) has \({{\mathbf {H}}_{\infty }}(X)=O(\log ^2n)\) (as the set \(\{\mathbf x\in \{0, 1\}^{n} :|\mathbf x|=O(\log {n})\}\) is of size \(2^{O(\log ^2n)}\)), so the job is to maximize the entropy of X under the constraint \(|X|=O(\log {n})\). The first candidate seems to be \(X\sim \mathcal {B}_{\mu '}^n\) for \(\mu '=\varTheta (\frac{\log {n}}{n})\), but it does not meet the security condition because the noise rate \(\mu '\) is so small that a Chernoff bound only ensures (see Lemma 19) that \(\mathcal {B}_{\mu '}^n\) is \((2^{-O(\mu '{n})}=1/\mathsf {poly}(n))\)-close to having min-entropy \(\varTheta (n{{\mathbf {H}}}(\mu '))=\varTheta (\log ^2{n})\). In fact, we can avoid the lower-tail issue by letting \(X\sim \chi _{\log {n}}^n\), namely, the uniform distribution over vectors of Hamming weight exactly \(\log {n}\), which is of min-entropy \(\varTheta (\log ^2n)\) by Fact 3. Thus, \(X\sim \chi _{\log {n}}^n\) is a valid option to obtain a single-bit PKE with noticeable correctness.
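A sampler for \(\chi _{\log {n}}^n\) is easy to write down; the minimal sketch below (the choice \(n=1024\) is illustrative) draws a weight-\(\log n\) vector and computes the min-entropy \(\log _2\binom{n}{\log n}=\varTheta (\log ^2 n)\).

```python
import math
import random

def sample_chi(n, w):
    # uniform n-bit vector of Hamming weight exactly w (the distribution chi_w^n)
    x = [0] * n
    for i in random.sample(range(n), w):
        x[i] = 1
    return x

n = 1024
w = int(math.log2(n))                     # w = log n = 10
x = sample_chi(n, w)
assert sum(x) == w

# min-entropy of chi_w^n is log2 C(n, w) = Theta(log^2 n); ~78 bits here
min_entropy = math.log2(math.comb(n, w))
```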
4.3 CPA Secure PKE from Constant-Noise LPN
Unlike [20] where the extension from the weak single-bit PKE to a fully correct scheme is almost immediate (by a parallel repetition and using error correcting codes), it is not trivial to amplify the noticeable correctness of the single-bit scheme to an overwhelming probability, in particular, the scheme instantiated with distribution \(X\sim \chi _{\log {n}}^n\) would no longer work. To see the difficulty, we define below our CPA secure scheme \(\varPi _X=(\mathsf {KeyGen},\mathsf {Enc},\mathsf {Dec})\) that resembles the counterpart for low-noise LPN (e.g., [15, 20]), where distribution X is left undefined (apart from the entropy constraint).
-
Distribution X: X is a polynomial-time sampleable distribution satisfying \({{\mathbf {H}}_{\infty }}(X)=\varOmega (\log ^2{n})\) and we set \(\lambda =\varTheta (\log ^2n)\) such that \(2\lambda \le {{\mathbf {H}}_{\infty }}(X)\).
-
\(\mathsf {KeyGen}(1^n)\): Given a security parameter \(1^n\), it samples matrix \(\mathbf A\sim \mathcal {D}_{\lambda }^{n{\times }n}\), column vectors \(\mathbf s\sim {X}\), \(\mathbf e\sim \mathcal {B}_\mu ^n\), computes \(\mathbf b=\mathbf {As}+\mathbf e\) and sets \((pk,sk):= ((\mathbf A,\mathbf b), \mathbf s)\).
-
\(\mathsf {Enc}_{pk}(\mathbf m)\): Given the public key \(pk=(\mathbf A,\mathbf b)\) and a plaintext \(\mathbf m\in \{0, 1\}^{n} \), \(\mathsf {Enc}_{pk}\) chooses
$$ \mathbf S_1\sim (X^{(1)},\cdots ,X^{(q)})\in \{0, 1\}^{n{\times }q} , \mathbf E_1 \sim \mathcal {B}_\mu ^{n{\times }q} $$where \(X^{(1)},\cdots ,X^{(q)}\) are i.i.d. to X. Then, it outputs \(C=(\mathbf C_1,\mathbf c_2)\) as ciphertext, where
$$ \begin{array}{ll} \mathbf C_1: = \mathbf A^\mathsf{T}\mathbf S_1+ \mathbf E_1 &{} \in \{0, 1\}^{n{\times }q} ,\\ \mathbf c_2~: = \mathbf S_1^\mathsf{T}\mathbf b+\mathbf G{\cdot }\mathbf m &{} \in \{0, 1\}^{q} , \end{array} $$and \(\mathbf G\in \{0, 1\}^{q\times {n}} \) is a generator matrix for an efficiently decodable code (with error correction capacity to be defined and analyzed in Sect. 4.4).
-
\(\mathsf {Dec}_{sk}(\mathbf C_1,\mathbf c_2)\): On secret key \(sk=\mathbf s\), ciphertext \((\mathbf C_1,\mathbf c_2)\), it computes
$$ \tilde{\mathbf c}_0: = \mathbf c_2 - \mathbf C_1^\mathsf{T}\mathbf s~=~\mathbf G{\cdot }\mathbf m+\mathbf S_1^\mathsf{T}\mathbf e-\mathbf E_1^\mathsf{T}\mathbf s $$and reconstructs \(\mathbf m\) from the error \(\mathbf S_1^\mathsf{T}\mathbf e- \mathbf E_1^\mathsf{T}\mathbf s\) using the error correction property of \(\mathbf G\).
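The decryption identity above is pure GF(2) algebra and can be checked mechanically. The minimal sketch below uses toy dimensions and uniform bits as stand-ins for the \(X\)-samples, the \(\mathcal {D}_\lambda \)-matrix factors and the Bernoulli noise (so it exercises only the algebra, not the entropy, noise or error-correction conditions of the scheme).

```python
import random

random.seed(1)
n, q, lam = 32, 8, 8

def rmat(r, c): return [[random.randrange(2) for _ in range(c)] for _ in range(r)]
def rvec(k): return [random.randrange(2) for _ in range(k)]
def T(M): return [list(col) for col in zip(*M)]
def mm(A, B):  # matrix product over GF(2)
    Bt = T(B)
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in Bt] for row in A]
def mv(M, v):  # matrix-vector product over GF(2)
    return [sum(a * b for a, b in zip(row, v)) % 2 for row in M]
def vadd(u, v): return [(a + b) % 2 for a, b in zip(u, v)]
def madd(A, B): return [vadd(ra, rb) for ra, rb in zip(A, B)]

# A ~ D_lambda^{n x n}: A = U * V for random U in {0,1}^{n x lam}, V in {0,1}^{lam x n}
A = mm(rmat(n, lam), rmat(lam, n))

s, e = rvec(n), rvec(n)        # stand-ins for s ~ X and e ~ B_mu^n
b = vadd(mv(A, s), e)          # public key: b = A s + e

m = rvec(n)                    # plaintext
G = rmat(q, n)                 # stand-in generator matrix (a real scheme needs a decodable code)
S1, E1 = rmat(n, q), rmat(n, q)

C1 = madd(mm(T(A), S1), E1)            # C1 = A^T S1 + E1
c2 = vadd(mv(T(S1), b), mv(G, m))      # c2 = S1^T b + G m

# decryption identity: c2 - C1^T s = G m + S1^T e - E1^T s  (over GF(2), '-' is '+')
lhs = vadd(c2, mv(T(C1), s))
rhs = vadd(vadd(mv(G, m), mv(T(S1), e)), mv(T(E1), s))
assert lhs == rhs
```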
We can see that the CPA security of \(\varPi _X\), for any X with \({{\mathbf {H}}_{\infty }}(X)=\varOmega (\log ^2n)\), follows from applying Theorem 2 twice (once for replacing the public key \(\mathbf b\) with uniform randomness, and again together with the Goldreich-Levin theorem for encrypting a single bit) and a hybrid argument (to encrypt many bits).
Theorem 3
(CPA Security). Assume that the decisional \(\mathsf {LPN}_{\mu ,n}\) problem is \(2^{\omega (n^{\frac{1}{2}})}\)-hard for any constant \(0<\mu <1/2\), then \(\varPi _X\) is IND-CPA secure.
4.4 Which X Makes a Correct Scheme?
\(X\sim \chi _{\log {n}}^n\) may not work. To make a correct scheme, we need to upper bound \(|\mathbf S_1^\mathsf{T}\mathbf e-\mathbf E_1^\mathsf{T}\mathbf s|\) by \(q(1/2-1/\mathsf {poly}(n))\), but in fact we do not have any useful bound even for \(|\mathbf S_1^\mathsf{T}\mathbf e|\). Recall that \(\mathbf S_1^\mathsf{T}\) is now a \(q{\times }n\) matrix, and parse \(\mathbf S_1^\mathsf{T}\mathbf e\) as Boolean random variables \(W_1,\cdots ,W_q\). First, although every \(W_i\) satisfies \(\Pr [W_i=0]{\ge }1/2+1/\mathsf {poly}(n)\), they are not independent (they are correlated through \(\mathbf e\)). Second, if we fix any \(\mathbf e\) with \(|\mathbf e|=\varTheta (n)\), all \(W_1\), \(\cdots \), \(W_q\) become independent conditioned on \(\mathbf e\), but then we can no longer guarantee that \(\Pr [W_i=0|\mathbf e]\ge 1/2+1/\mathsf {poly}(n)\), as \(\mathbf S_1\) follows \((\chi _{\log {n}}^n)^q\) rather than \((\mathcal {B}_{\log {n}/n}^n)^q\). In other words, condition #3 (as in Sect. 4.2) is not sufficient for overwhelming correctness. We therefore introduce a tailored version of the Bernoulli distribution (with upper/lower tails chopped off).
Definition 4
(Distribution \(\widetilde{\mathcal {B}}_{\mu _1}^n\) ). Define \(\widetilde{\mathcal {B}}_{\mu _1}^n\) to be distributed as \(\mathcal {B}_{\mu _1}^n\) conditioned on \((1-\frac{\sqrt{6}}{3})\mu _1{n}\) \({\le }\) \(|\mathcal {B}_{\mu _1}^n|\) \({\le }\) \(2\mu _1{n}\). Further, we define an \(n{\times }q\) matrix distribution, denoted by \((\widetilde{\mathcal {B}}_{\mu _1}^{n})^q\), where every column is i.i.d. to \(\widetilde{\mathcal {B}}_{\mu _1}^n\).
\(\widetilde{\mathcal {B}}_{\mu _1}^n\) is efficiently sampleable. \(\widetilde{\mathcal {B}}_{\mu _1}^n\) can be sampled in polynomial time with exponentially small error: simply sample \(\mathbf e\leftarrow \mathcal {B}_{\mu _1}^n\) and output \(\mathbf e\) if \((1-\frac{\sqrt{6}}{3})\mu _1{n}{\le }|\mathbf e|{\le }2\mu _1{n}\). Otherwise, repeat the above until such \(\mathbf e\) within the Hamming weight range is obtained, or until the experiment has failed a predefined number of times (e.g., n), in which case output \(\bot \).
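The rejection sampler just described can be sketched as follows; the parameters \(n\) and \(\alpha \) are illustrative, and None stands in for the \(\bot \) symbol.

```python
import math
import random

random.seed(7)

def sample_trunc_bernoulli(n, mu1, tries=None):
    # Rejection-sample B~_{mu1}^n: draw e ~ B_{mu1}^n and accept iff
    # (1 - sqrt(6)/3)*mu1*n <= |e| <= 2*mu1*n; after `tries` failures
    # (default n), give up and return None (the bot symbol).
    lo, hi = (1 - math.sqrt(6) / 3) * mu1 * n, 2 * mu1 * n
    for _ in range(tries if tries is not None else n):
        e = [1 if random.random() < mu1 else 0 for _ in range(n)]
        if lo <= sum(e) <= hi:
            return e
        # each try fails with probability <= 2*exp(-mu1*n/3) by Chernoff
    return None

n = 4096
mu1 = 4 * math.log2(n) / n     # mu1 = alpha * log(n) / n with alpha = 4
e = sample_trunc_bernoulli(n, mu1)
assert e is not None and sum(e) <= 2 * mu1 * n
```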
\(\widetilde{\mathcal {B}}_{\mu _1}^n\) is of min-entropy \(\varOmega (\log ^2n)\). For \(\mu _1=\varOmega (\log {n}/n)\), it is not hard to see that \(\widetilde{\mathcal {B}}_{\mu _1}^n\) is a convex combination of \(\chi _{(1-\frac{\sqrt{6}}{3})\mu _1{n}}^n\), \(\cdots \), \(\chi _{2\mu _1{n}}^n\), and thus of min-entropy \(\varOmega (\log ^2n)\) by Fact 3.
Therefore, \(\varPi _{X}\) when instantiated with \(X\sim \widetilde{\mathcal {B}}_{\mu _1}^n\) is CPA secure by Theorem 3, and we proceed to the correctness of the scheme.
Lemma 6
For constants \(\alpha >0\), \(0<\mu {\le }1/10\) and \(\mu _1=\alpha \log {n}/n\) , let \(\mathbf S_1\sim (\widetilde{\mathcal {B}}_{\mu _1}^{n})^q\), \(\mathbf e\sim \mathcal {B}_\mu ^n\), \(\mathbf E_1\sim \mathcal {B}_{\mu }^{n{{\times }q}}\) and \(\mathbf s \sim \widetilde{\mathcal {B}}_{\mu _1}^n\), we have
Proof
It is more convenient to consider \(\big |\mathbf S_1^\mathsf{T}\mathbf e-\mathbf E_1^\mathsf{T}\mathbf s\big |\) conditioned on \(|\mathbf e|\le 1.01{\mu }n\) (which fails only for a \(2^{-\varOmega (n)}\)-fraction of \(\mathbf e\)) and \(|\mathbf s|\le 2\mu _1{n}\) (which holds by definition of \(\widetilde{\mathcal {B}}_{\mu _1}^n\)). We have by Lemmas 7 and 8 that \(\mathbf S_1^\mathsf{T}\mathbf e\) and \(\mathbf E_1^\mathsf{T}\mathbf s\) are identically distributed to \(\mathcal {B}_{\delta _1}^q\) and \(\mathcal {B}_{\delta _2}^q\) respectively, where \(\delta _1{\le }1/2-n^{-\alpha /2}\) and \(\delta _2{\le }1/2-n^{-\alpha }/2\). Thus, \((\mathbf S_1^\mathsf{T}\mathbf e- \mathbf E_1^\mathsf{T}\mathbf s)\) follows \(\mathcal {B}_\delta ^q\) for \(\delta {\le }1/2-n^{-3\alpha /2}\) by the Piling-up lemma, and then we complete the proof with Lemma 2.
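The bias computation in the proof above can be reproduced directly: writing the two biases as \(\delta _1=1/2-n^{-\alpha /2}\) and \(\delta _2=1/2-n^{-\alpha }/2\), the Piling-up lemma gives \(1/2-\delta =2(1/2-\delta _1)(1/2-\delta _2)=n^{-3\alpha /2}\). A minimal sketch with illustrative \(n\) and \(\alpha \):

```python
# Piling-up of the two bias terms in Lemma 6's proof: for independent bits
# W ~ B_{delta1} and W' ~ B_{delta2}, their XOR is B_delta with
# 1/2 - delta = 2 * (1/2 - delta1) * (1/2 - delta2).
n, alpha = 1024, 1.0
delta1 = 0.5 - n ** (-alpha / 2)          # worst-case bias of S1^T e (Lemma 7)
delta2 = 0.5 - n ** (-alpha) / 2          # worst-case bias of E1^T s (Lemma 8)
delta = delta1 + delta2 - 2 * delta1 * delta2
assert abs((0.5 - delta) - 2 * (0.5 - delta1) * (0.5 - delta2)) < 1e-15
assert delta <= 0.5 - n ** (-1.5 * alpha)  # delta <= 1/2 - n^{-3*alpha/2}
```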
Concrete parameters. \(\mathsf {Enc}_{pk}\) simply uses a generator matrix \(\mathbf G\in \{0, 1\}^{q{\times }n} \) that efficiently corrects up to a \((1/2-n^{-3\alpha /2}/2)\)-fraction of bit flipping errors, which exists for \(q=O(n^{3\alpha +1})\) (e.g., [22]). We can now conclude the correctness of the scheme: every ciphertext is correctly decrypted with overwhelming probability, and thus so are polynomially many of them simultaneously (even when they are not independent, see Lemma 1).
Theorem 4
(Correctness). Let \(0<\mu \le {1/10}\) and \(\alpha >0\) be any constants, let \(q=\varTheta (n^{3\alpha +1})\) and \(\mu _1=\alpha \log {n}/n\), and let \(X\sim \widetilde{\mathcal {B}}_{\mu _1}^n\). Assume that the decisional \(\mathsf {LPN}_{\mu ,n}\) problem is \(2^{\omega (n^{\frac{1}{2}})}\)-hard, then \(\varPi _X\) is a correct scheme.
Lemma 7
For any \(0<\mu {\le }1/10\), \(\mu _1=O(\log {n}/n){\le }1/8\) and any \(\mathbf e\in \{0, 1\}^{n} \) with \(|\mathbf e|\le {1.01}\mu {n}\),
Proof
Denote by \(\mathcal {E}\) the event \((1-\frac{\sqrt{6}}{3})\mu _1{n}{\le }|\mathcal {B}_{\mu _1}^n|{\le }2\mu _1{n}\), and thus \(\Pr [\mathcal {E}]\ge 1-2e^{-\mu _1{n}/3}\) by the Chernoff bound. We have by Lemma 5
For \(0<\mu \le 1/10\), \(\Pr [\langle \widetilde{\mathcal {B}}_{\mu _1}^n,\mathbf e\rangle =0]\ge {1/2}+2^{-(4.04\mu \mu _1{n}+1)}-2e^{-\mu _1{n}/3}>1/2+2^{-\mu _1n/2}\).
Lemma 8
For any \(0<\mu {\le }1/8\), \(\mu _1=O(\log {n}/n)\), and any \(\mathbf s\in \{0, 1\}^{n} \) with \(|\mathbf s|\le {2}\mu _1{n}\), we have by Lemma 5
5 CCA-Secure PKE from Constant-Noise LPN
In this section, we show how to construct CCA-secure PKE from constant-noise LPN. Our starting point is the construction of a tag-based PKE against selective tag and chosen ciphertext attacks from LPN, which can be transformed into a standard CCA-secure PKE by using known techniques [11, 35]. We begin by first recalling the definitions of tag-based PKE.
5.1 Tag-Based Encryption
A tag-based encryption (TBE) scheme with tag-space \(\mathcal {T}\) and message-space \(\mathcal {M}\) consists of three PPT algorithms \(\mathcal {TBE} =(\mathsf {KeyGen}, \mathsf {Enc},\) \( \mathsf {Dec})\). The randomized key generation algorithm \(\mathsf {KeyGen}\) takes the security parameter n as input and outputs a public key pk and a secret key sk, denoted as \((pk,sk)\leftarrow \mathsf {KeyGen}(1^n)\). The randomized encryption algorithm \(\mathsf {Enc}\) takes pk, a tag \(\mathbf t\in \mathcal {T}\) and a plaintext \(\mathbf m\in \mathcal {M}\) as inputs, and outputs a ciphertext C, denoted as \(C\leftarrow \mathsf {Enc}(pk,\mathbf t,\mathbf m)\). The deterministic algorithm \(\mathsf {Dec}\) takes sk, a tag \(\mathbf t\) and C as inputs, and outputs a plaintext \(\mathbf m\) or a special symbol \(\perp \), denoted as \(\mathbf m\leftarrow \mathsf {Dec}(sk,\mathbf t,C)\). For correctness, we require that for all \((pk,sk)\leftarrow \mathsf {KeyGen}(1^n)\), any tag \(\mathbf t\), any plaintext \(\mathbf m\) and any \(C\leftarrow \mathsf {Enc}(pk,\mathbf t,\mathbf m)\), the equation \(\mathsf {Dec}(sk,\mathbf t,C) = \mathbf m\) holds with overwhelming probability.
We consider the following game between a challenger \(\mathcal {C}\) and an adversary \(\mathcal {A}\) given in [35].
-
Init. The adversary \(\mathcal {A}\) takes the security parameter n as inputs, and outputs a target tag \(\mathbf t^*\) to the challenger \(\mathcal {C}\).
-
KeyGen. The challenger \(\mathcal {C}\) computes \((pk,sk)\leftarrow \mathsf {KeyGen}(1^n)\), gives the public key pk to the adversary \(\mathcal {A}\), and keeps the secret key sk to itself.
-
Phase 1. The adversary \(\mathcal {A}\) can adaptively make polynomially many decryption queries on pairs \((\mathbf t,C)\), with the restriction that \(\mathbf t\ne \mathbf t^*\), and the challenger \(\mathcal {C}\) returns \(\mathbf m\leftarrow \mathsf {Dec}(sk,\mathbf t,C)\) to \(\mathcal {A}\) accordingly.
-
Challenge. The adversary \(\mathcal {A}\) outputs two equal length plaintexts \(\mathbf m_0,\mathbf m_1\in \mathcal {M}\). The challenger \(\mathcal {C}\) randomly chooses a bit \(b^*\xleftarrow {\$} \{0, 1\}^{} \), and returns the challenge ciphertext \(C^*\leftarrow \mathsf {Enc}(pk,\mathbf t^*,\mathbf m_{b^*})\) to the adversary \(\mathcal {A}\).
-
Phase 2. The adversary can make more decryption queries as in Phase 1.
-
Guess. Finally, \(\mathcal {A}\) outputs a guess \(b\in \{0, 1\}\). If \(b=b^*\), the challenger \(\mathcal {C}\) outputs 1, else outputs 0.
-
Advantage. \(\mathcal {A}\)’s advantage is defined as \(\mathrm {Adv}^{ind-stag-cca }_{\mathcal {TBE},\mathcal {A}}(1^n) \mathop {=}\limits ^{\text {def}}|\Pr [b=b^*]-\frac{1}{2}|.\)
Definition 5
(IND-sTag-CCA). We say that a TBE scheme \(\mathcal {TBE}\) is IND-sTag-CCA secure if for any PPT adversary \(\mathcal {A}\), its advantage \(\mathrm {Adv}^{ind-stag-cca }_{\mathcal {TBE},\mathcal {A}}(1^n)\) is negligible in n.
For our convenience, we will use the following corollary, which is essentially a q-fold (transposed) version of Theorem 2 with \(q'=n\) and 2 bits of linear leakage (rather than \(O(\log {n})\) bits of arbitrary leakage) per copy. Following [36], the leakage is crucial for the CCA security proof.
Corollary 1
Let n be a security parameter and let \(0<\mu <1/2\) be any constant. Assume that the computational \(\mathsf {LPN}_{\mu ,n}\) problem is \(2^{\omega (n^{\frac{1}{2}})}\)-hard (for any superconstant hidden by \(\omega (\cdot )\)). Then, for every \(\mu _1=\varOmega ({\log }n/n)\) and \(\lambda =\varTheta (\log ^2n)\) such that \(2\lambda \le {{\mathbf {H}}_{\infty }}(\widetilde{\mathcal {B}}_{\mu _1}^n)\), and every \(q=\mathsf {poly}(n)\), we have
where the probability is taken over \(\mathbf S_0 \sim (\widetilde{\mathcal {B}}_{\mu _1}^{n})^q\), \(\mathbf E_0\sim \mathcal {B}_\mu ^{n\times q}\), \(\mathbf A\sim \mathcal {D}^{n{\times }n}_\lambda \), \(U_{q{\times }n}\), \(\mathbf s\leftarrow {\widetilde{\mathcal {B}}_{\mu _1}^n}\), \(\mathbf e\leftarrow {\mathcal {B}_\mu ^n}\) and internal random coins of the distinguisher.
5.2 Our Construction
Our construction is built upon previous works in [36, 44]. A couple of modifications are made to adapt the ideas of [36, 44], which seems necessary due to the absence of a meaningful knapsack version for our LPN (with poly-log entropy and non-uniform matrix). Let n be the security parameter, let \(\alpha >0\), \(0<\mu {\le }1/10\) be any constants, let \(\mu _1=\alpha \log {n}/n\), \(\beta =(\frac{1}{2}-\frac{1}{n^{3\alpha }})\), \(\gamma =(\frac{1}{2}-\frac{1}{2n^{3\alpha /2}})\) and choose \(\lambda =\varTheta (\log ^2n)\) such that \(2\lambda \le {{\mathbf {H}}_{\infty }}(\widetilde{\mathcal {B}}_{\mu _1}^n)\). Let the plaintext-space \(\mathcal {M} = \{0, 1\}^{n} \), and let \(\mathbf G\in \{0, 1\}^{q\times n} \) and \(\mathbf G_2\in \{0, 1\}^{\ell \times n} \) be the generator matrices that can correct at least \(\beta {q}\) and \(2\mu \ell \) bit flipping errors in the codeword respectively, where \(q=O(n^{6\alpha +1})\), \(\ell =O(n)\) and we refer to [22] and [33] for explicit constructions of the two codes respectively. Let the tag-space \(\mathcal {T}= \mathbb {F}_{2^n}\). We use a matrix representation \(\mathbf H_{\mathbf t} \in \{0, 1\}^{n\times n} \) for finite field elements \(\mathbf t\in \mathbb {F}_{2^n}\) [10, 14, 36] such that \(\mathbf H_{\mathbf 0} = \mathbf 0\), \(\mathbf H_{\mathbf t}\) is invertible for any \(\mathbf t\ne \mathbf 0\), and \(\mathbf H_{\mathbf t_1} + \mathbf H_{\mathbf t_2} = \mathbf H_{\mathbf t_1+\mathbf t_2}\). Our TBE scheme \(\mathcal {TBE}\) is defined as follows:
-
\(\mathsf {KeyGen}(1^n)\): Given a security parameter n, first sample matrices \(\mathbf A\xleftarrow {\$}\mathcal {D}_{\lambda }^{n{\times }n},\mathbf C\xleftarrow {\$}\mathcal {D}_{\lambda }^{\ell \times n}\), \(\mathbf S_0,\mathbf S_1 \xleftarrow {\$}(\widetilde{\mathcal {B}}_{\mu _1}^{n})^q\) and \(\mathbf E_0,\mathbf E_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n\times q}\). Then, compute \(\mathbf B_0 = \mathbf S_0^\mathsf{T}\mathbf A + \mathbf E_0^\mathsf{T}, \mathbf B_1= \mathbf S_1^\mathsf{T}\mathbf A + \mathbf E_1^\mathsf{T}\in \{0, 1\}^{q\times n} \), and set \((pk, sk) = ((\mathbf A, \mathbf B_0, \mathbf B_1, \mathbf C), (\mathbf S_0,\mathbf S_1))\).
-
\(\mathsf {Enc}(pk,\mathbf t,\mathbf m)\): Given the public key \(pk=(\mathbf A,\mathbf B_0,\mathbf B_1,\mathbf C)\), a tag \(\mathbf t\in \mathbb {F}_{2^n}\), and a plaintext \(\mathbf m\in \{0, 1\}^{n} \), randomly choose
$$ \mathbf s \xleftarrow {\$}\widetilde{\mathcal {B}}_{\mu _1}^{n}, \mathbf e_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n}, \mathbf e_2 \xleftarrow {\$}\mathcal {B}_\mu ^{\ell }, \mathbf S_0',\mathbf S_1'\xleftarrow {\$}(\widetilde{\mathcal {B}}_{\mu _1}^{n})^q, \mathbf E_0',\mathbf E_1'\xleftarrow {\$}\mathcal {B}_\mu ^{n\times q} $$and define
$$ \begin{array}{ll} \mathbf c: = \mathbf A \mathbf s + \mathbf e_1 &{} \in \{0, 1\}^{n} \\ \mathbf c_0: = (\mathbf {GH}_{\mathbf t} + \mathbf B_0)\mathbf s + (\mathbf S_0')^\mathsf{T}\mathbf e_1 - (\mathbf E_0')^\mathsf{T}\mathbf s &{} \in \{0, 1\}^{q} \\ \mathbf c_1:= (\mathbf {GH}_{\mathbf t} + \mathbf B_1)\mathbf s + (\mathbf S_1')^\mathsf{T}\mathbf e_1 - (\mathbf E_1')^\mathsf{T}\mathbf s &{} \in \{0, 1\}^{q} \\ \mathbf c_2:= \mathbf {Cs} + \mathbf e_2 + \mathbf G_2 \mathbf m &{} \in \{0, 1\}^{\ell } . \end{array} $$Finally, return the ciphertext \(C=(\mathbf c,\mathbf c_0,\mathbf c_1,\mathbf c_2)\).
-
\(\mathsf {Dec}(sk,\mathbf t,C)\): Given the secret key \(sk=(\mathbf S_0,\mathbf S_1)\), tag \(\mathbf t\in \mathbb {F}_{2^n}\) and ciphertext \(C=(\mathbf c,\mathbf c_0,\mathbf c_1,\mathbf c_2)\), first compute
$$ \tilde{\mathbf c}_0: = \mathbf c_0 - \mathbf S_0^\mathsf{T}\mathbf c =\mathbf {GH}_{\mathbf t} \mathbf s + (\mathbf S_0'-\mathbf S_0)^\mathsf{T}\mathbf e_1 + (\mathbf E_0-\mathbf E_0')^\mathsf{T}\mathbf s. $$Then, reconstruct \(\mathbf b=\mathbf H_{\mathbf t}\mathbf s\) from the error \((\mathbf S_0'-\mathbf S_0)^\mathsf{T}\mathbf e_1 + (\mathbf E_0-\mathbf E_0')^\mathsf{T}\mathbf s\) by using the error correction property of \(\mathbf G\), and compute \(\mathbf s = \mathbf H_{\mathbf t}^{-1}\mathbf b\). If it holds that
$$\begin{aligned} |\underbrace{\mathbf c- \mathbf {As}}_{=\mathbf e_1}|\le 2\mu {n} \wedge |\underbrace{\mathbf c_0 - (\mathbf {GH}_{\mathbf t} + \mathbf B_0)\mathbf s}_{=(\mathbf S_0')^\mathsf{T}\mathbf e_1 - (\mathbf E_0')^\mathsf{T}\mathbf s} |\le \gamma {q} \wedge |\underbrace{\mathbf c_1 - (\mathbf {GH}_t + \mathbf {B}_1) \mathbf s}_{=(\mathbf S_1')^\mathsf{T}\mathbf e_1 - (\mathbf E_1')^\mathsf{T}\mathbf s} |\le \gamma {q} \end{aligned}$$then reconstruct \(\mathbf m\) from \(\mathbf c_2- \mathbf {Cs} = \mathbf G_2 \mathbf m+ \mathbf e_2\) by using the error correction property of \(\mathbf G_2\), else let \(\mathbf m=\bot \). Finally, return the decrypted result \(\mathbf m\).
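The matrix representation \(\mathbf H_{\mathbf t}\) of \(\mathbb {F}_{2^n}\) used above can be realized from the companion matrix \(\mathbf C\) of an irreducible polynomial: \(\mathbf H_{\mathbf t}=\sum _i t_i\mathbf C^i\) is then the multiplication-by-\(\mathbf t\) map, which is \(\mathbf 0\) for \(\mathbf t=\mathbf 0\), invertible for \(\mathbf t\ne \mathbf 0\), and additive in \(\mathbf t\). A minimal sketch at the toy size \(n=8\) with the irreducible polynomial \(x^8+x^4+x^3+x+1\) (both choices are illustrative):

```python
n = 8
f = [1, 1, 0, 1, 1, 0, 0, 0]  # low coefficients of x^8 + x^4 + x^3 + x + 1

def mat_mul(A, B):
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in Bt] for row in A]

# companion matrix C of f: the multiplication-by-x map on GF(2)[x]/(f)
C = [[0] * n for _ in range(n)]
for i in range(1, n):
    C[i][i - 1] = 1          # x * x^{i-1} = x^i
for i in range(n):
    C[i][n - 1] = f[i]       # x * x^{n-1} reduces modulo f

def H(t):
    # H_t = sum_i t_i * C^i, with t = (t_0, ..., t_{n-1})
    out = [[0] * n for _ in range(n)]
    P = [[int(i == j) for j in range(n)] for i in range(n)]  # C^0 = I
    for bit in t:
        if bit:
            out = [[(o + p) % 2 for o, p in zip(ro, rp)] for ro, rp in zip(out, P)]
        P = mat_mul(P, C)
    return out

# H_0 = 0 and additivity: H_{t1} + H_{t2} = H_{t1 + t2}
assert H([0] * n) == [[0] * n for _ in range(n)]
t1, t2 = [1, 0, 1, 1, 0, 0, 1, 0], [0, 1, 1, 0, 1, 0, 0, 1]
t12 = [(a + b) % 2 for a, b in zip(t1, t2)]
Hsum = [[(a + b) % 2 for a, b in zip(r1, r2)] for r1, r2 in zip(H(t1), H(t2))]
assert Hsum == H(t12)
```

Invertibility for \(\mathbf t\ne \mathbf 0\) follows because multiplication by a nonzero field element is a bijection; it is not re-checked in the sketch.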
Remark 2
As one can see, the matrix \(\mathbf S_1\) in the secret key \(sk=(\mathbf S_0,\mathbf S_1)\) can also be used to decrypt the ciphertext, i.e., compute \(\tilde{\mathbf c}_1: = \mathbf c_1 - \mathbf S_1^\mathsf{T}\mathbf c =\mathbf {GH}_{\mathbf t} \mathbf s + (\mathbf S_1'-\mathbf S_1)^\mathsf{T}\mathbf e_1 + (\mathbf E_1-\mathbf E_1')^\mathsf{T}\mathbf s\) and recover \(\mathbf s\) from \(\tilde{\mathbf c}_1\) by using the error correction property of \(\mathbf G\). Moreover, the check condition
guarantees that the decryption results are the same whether we use \(\mathbf S_0\) or \(\mathbf S_1\) in the decryption. This fact may seem unnecessary for correctness, but it is crucial for the security proof. Looking ahead, it allows us to switch the “exact decryption key” between \(\mathbf S_0\) and \(\mathbf S_1\).
Correctness and Equivalence of the Secret Keys \(\mathbf S_0,\mathbf S_1\). In the following, we show that for an appropriate choice of parameters, the above scheme \(\mathcal {TBE}\) is correct, and has the property that \(\mathbf S_0\) and \(\mathbf S_1\) are equivalent in terms of decryption.
The correctness of the scheme requires the following:

1. \(|(\mathbf S_0'-\mathbf S_0)^\mathsf{T}\mathbf e_1 + (\mathbf E_0-\mathbf E_0')^\mathsf{T}\mathbf s|\le {\beta {q}}\) (to let \(\mathbf G\) reconstruct \(\mathbf s\) from \(\tilde{\mathbf c}_0\)).
2. \(|\mathbf c-\mathbf {As}|\le 2\mu {n} \wedge |\mathbf c_0 - (\mathbf {GH}_{\mathbf t} + \mathbf B_0)\mathbf s |\le \gamma {q} \wedge |\mathbf c_1 - (\mathbf {GH}_{\mathbf t} + \mathbf B_1)\mathbf s |\le \gamma {q}\).
3. \(|\mathbf e_2|\le 2\mu \ell \) (such that \(\mathbf G_2\) can reconstruct \(\mathbf m\) from \(\mathbf c_2-\mathbf {Cs} = \mathbf G_2\mathbf m + \mathbf e_2\)).

For obtaining CCA security, we also need to show that \(\mathbf S_0\) and \(\mathbf S_1\) have the same decryption ability except with negligible probability, namely,

1. If \(|\mathbf c-\mathbf {As}|\le 2\mu {n} \wedge |\mathbf c_0 - (\mathbf {GH}_{\mathbf t} + \mathbf B_0)\mathbf s |\le \gamma {q}\), then \(\mathbf G\) can reconstruct \(\mathbf s\) from a codeword within bounded error \(|(\mathbf S_0'-\mathbf S_0)^\mathsf{T}\mathbf e_1 + (\mathbf E_0-\mathbf E_0')^\mathsf{T}\mathbf s|\le {\beta }q\).
2. If \(|\mathbf c-\mathbf {As}|\le 2\mu {n} \wedge |\mathbf c_1 - (\mathbf {GH}_{\mathbf t} + \mathbf B_1)\mathbf s |\le \gamma {q}\), then \(\mathbf G\) can reconstruct \(\mathbf s\) from a codeword within bounded error \(|(\mathbf S_1'-\mathbf S_1)^\mathsf{T}\mathbf e_1 + (\mathbf E_1-\mathbf E_1')^\mathsf{T}\mathbf s|\le {\beta }q\).
It suffices to show that each Hamming weight constraint above holds (with overwhelming probability) individually, and thus polynomially many of them hold simultaneously (with overwhelming probability as well) by Lemma 1. First, the Chernoff bound guarantees that \(\Pr [|\mathbf e_1|\le 2\mu {n}]=1-2^{-\varOmega (n)}\) and \(\Pr [|\mathbf e_2|\le 2\mu {\ell }]=1-2^{-\varOmega (\ell )}\). Second, for \(i\in \{0,1\}\) the bound \(|(\mathbf S_i')^\mathsf{T}\mathbf e_1 - (\mathbf E_i')^\mathsf{T}\mathbf s|\le \gamma {q}\) is ensured by Lemma 6, and we further bound \(|(\mathbf S_i'-\mathbf S_i)^\mathsf{T}\mathbf e_1 + (\mathbf E_i-\mathbf E_i')^\mathsf{T}\mathbf s|\le {\beta }q\) with Lemma 9 below (proof similar to Lemma 6 and thus deferred to Appendix B).
Lemma 9
For constants \(\alpha >0\), \(0<\mu {\le }1/10\) and \(\mu _1=\alpha \log {n}/n\), let \(\mathbf S\) and \(\mathbf S'\) be i.i.d. to \((\widetilde{\mathcal {B}}^{n}_{\mu _1})^q\), \(\mathbf E\) and \(\mathbf E'\) be i.i.d. to \(\mathcal {B}^{n{\times }q}_{\mu }\), \(\mathbf s\sim \widetilde{\mathcal {B}}_{\mu _1}^n\) and \(\mathbf e\sim \mathcal {B}_\mu ^n\). Then,
Security of the TBE Scheme. We now show that under the LPN assumption, the above scheme \(\mathcal {TBE}\) is IND-sTag-CCA secure in the standard model.
Theorem 5
Assume that the decisional \(\mathsf {LPN}_{\mu ,n}\) problem is \(2^{\omega (n^{\frac{1}{2}})}\)-hard for any constant \(0<\mu {\le }1/10\), then our TBE scheme \(\mathcal {TBE}\) is IND-sTag-CCA secure.
Proof
Let \(\mathcal {A}\) be any PPT adversary that can attack our TBE scheme \(\mathcal {TBE}\) with advantage \(\epsilon \). We show that \(\epsilon \) must be negligible in n. We continue the proof by using a sequence of games, where the first game is the real IND-sTag-CCA security game, while the last is a random game in which the challenge ciphertext is independent of the choices of the challenge plaintexts. Since any PPT adversary \(\mathcal {A}\)’s advantage in a random game is exactly 0, the security of \(\mathcal {TBE}\) can be established by showing that \(\mathcal {A}\)’s advantages in any two consecutive games are negligibly close.
Game 0. The challenger \(\mathcal {C}\) honestly runs the adversary \(\mathcal {A}\) with the security parameter n, and obtains a target tag \(\mathbf t^*\) from \(\mathcal {A}\). Then, it simulates the IND-sTag-CCA security game for \(\mathcal {A}\) as follows:
-
KeyGen. First sample matrices \(\mathbf A\xleftarrow {\$}\mathcal {D}_{\lambda }^{n{\times }n},\mathbf C\xleftarrow {\$}\mathcal {D}_{\lambda }^{\ell {\times }n}\), \(\mathbf S_0,\mathbf S_1 \xleftarrow {\$}(\widetilde{\mathcal {B}}_{\mu _1}^{n})^q\) and \(\mathbf E_0,\mathbf E_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n\times q}\). Then, compute \(\mathbf B_0 = \mathbf S_0^\mathsf{T}\mathbf A + \mathbf E_0^\mathsf{T}, \mathbf B_1= \mathbf S_1^\mathsf{T}\mathbf A + \mathbf E_1^\mathsf{T}\in \{0, 1\}^{q\times n} \). Finally, \(\mathcal {C}\) sends \(pk=(\mathbf A,\mathbf B_0,\mathbf B_1,\mathbf C)\) to the adversary \(\mathcal {A}\), and keeps \(sk=(\mathbf S_0,\mathbf S_1)\) to itself.
-
Phase 1. After receiving a decryption query \((\mathbf t,(\mathbf c,\mathbf c_0,\mathbf c_1,\mathbf c_2))\) from the adversary \(\mathcal {A}\), the challenger \(\mathcal {C}\) directly returns \(\bot \) to \(\mathcal {A}\) if \(\mathbf t=\mathbf t^*\). Otherwise, it first computes
$$ \tilde{\mathbf c}_0: = \mathbf c_0 - \mathbf S_0^\mathsf{T}\mathbf c = \mathbf {GH}_{\mathbf t}\mathbf s + (\mathbf S_0'-\mathbf S_0)^\mathsf{T}\mathbf e_1 + (\mathbf E_0-\mathbf E_0')^\mathsf{T}\mathbf s. $$Then, it reconstructs \(\mathbf b=\mathbf H_{\mathbf t}\mathbf s\) from the error \((\mathbf S_0'-\mathbf S_0)^\mathsf{T}\mathbf e_1 + (\mathbf E_0-\mathbf E_0')^\mathsf{T}\mathbf s\) by using the error correction property of \(\mathbf G\), and computes \(\mathbf s = \mathbf H_{\mathbf t}^{-1}\mathbf b\). If
$$\begin{aligned} |\mathbf c-\mathbf {As}|\le 2\mu {n} \wedge |\mathbf c_0 - (\mathbf {GH}_{\mathbf t} + \mathbf B_0)\mathbf s |\le \gamma q \wedge |\mathbf c_1 - (\mathbf {GH}_{\mathbf t} + \mathbf B_1)\mathbf s |\le \gamma q \end{aligned}$$is true, it reconstructs \(\mathbf m\) from \(\mathbf c_2-\mathbf {Cs} = \mathbf G_2\mathbf m+ \mathbf e_2\) by using the error correction property of \(\mathbf G_2\); else it lets \(\mathbf m=\bot \). Finally, it returns the decrypted result \(\mathbf m\) to the adversary \(\mathcal {A}\).
-
Challenge. After receiving two equal length plaintexts \(\mathbf m_0, \mathbf m_1\in \mathcal {M}\) from the adversary \(\mathcal {A}\), the challenger \(\mathcal {C}\) first randomly chooses a bit \(b^*\xleftarrow {\$} \{0, 1\}^{} \), and
$$ \mathbf s \xleftarrow {\$}\widetilde{\mathcal {B}}_{\mu _1}^{n}, \mathbf e_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n}, \mathbf e_2 \xleftarrow {\$}\mathcal {B}_\mu ^{\ell }, \mathbf S_0',\mathbf S_1'\xleftarrow {\$}(\widetilde{\mathcal {B}}_{\mu _1}^{n})^q, \mathbf E_0',\mathbf E_1'\xleftarrow {\$}\mathcal {B}_\mu ^{n\times q} $$Then, it defines
$$ \begin{array}{ll} \mathbf c^*: = \mathbf {As} + \mathbf e_1 &{} \in \{0, 1\}^{n} \\ \mathbf c_0^*: = (\mathbf {GH}_{\mathbf t^*} + \mathbf B_0)\mathbf s + (\mathbf S_0')^\mathsf{T}\mathbf e_1 - (\mathbf E_0')^\mathsf{T}\mathbf s &{} \in \{0, 1\}^{q} \\ \mathbf c_1^*:= (\mathbf {GH}_{\mathbf t^*} + \mathbf B_1)\mathbf s + (\mathbf S_1')^\mathsf{T}\mathbf e_1 - (\mathbf E_1')^\mathsf{T}\mathbf s &{} \in \{0, 1\}^{q} \\ \mathbf c_2^*:= \mathbf {Cs} + \mathbf e_2 + \mathbf G_2 \mathbf m_{b^*} &{} \in \{0, 1\}^{\ell } , \end{array} $$and returns the challenge ciphertext \((\mathbf c^*,\mathbf c_0^*,\mathbf c_1^*,\mathbf c_2^*)\) to the adversary \(\mathcal {A}\).
-
Phase 2. The adversary can adaptively make more decryption queries, and the challenger \(\mathcal {C}\) responds as in Phase 1.
-
Guess. Finally, \(\mathcal {A}\) outputs a guess \(b\in \{0, 1\}\). If \(b=b^*\), the challenger \(\mathcal {C}\) outputs 1, else outputs 0.
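For concreteness, the GF(2) operations that the games above rely on (matrix-vector products, XOR addition, Bernoulli noise, and the Phase-1 weight check \(|\mathbf c-\mathbf {As}|\le 2\mu n\)) can be sketched in a few lines of Python. This is a toy sanity check, not the scheme itself: the parameters are illustrative, and \(\mathbf A\) is sampled uniformly here in place of the paper's distribution \(\mathcal D_\lambda\).

```python
import random

def bern_vec(n, mu, rnd):
    """Sample an n-bit Bernoulli(mu) noise vector."""
    return [1 if rnd.random() < mu else 0 for _ in range(n)]

def matvec(A, x):
    """Matrix-vector product over GF(2): each output bit is a parity."""
    return [sum(a & b for a, b in zip(row, x)) & 1 for row in A]

def add(u, v):
    """Addition over GF(2) is bitwise XOR (so subtraction coincides with it)."""
    return [a ^ b for a, b in zip(u, v)]

rnd = random.Random(0)
n, mu = 64, 0.1   # toy parameters, illustrative only
# Assumption: uniform A stands in for the paper's D_lambda matrix distribution.
A = [[rnd.randrange(2) for _ in range(n)] for _ in range(n)]
s = bern_vec(n, mu, rnd)          # low-weight secret, as in the scheme
e1 = bern_vec(n, mu, rnd)
c = add(matvec(A, s), e1)         # first ciphertext component: c* = A s + e1

# Phase-1 style check: |c - A s| equals |e1| exactly, since '+' and '-' are both XOR
weight = sum(add(c, matvec(A, s)))
assert weight == sum(e1)
```

With noise rate \(\mu\), the residual weight concentrates around \(\mu n\), which is why the decryption threshold \(2\mu n\) fails only with negligible probability.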
Event. Let \(F_i\) be the event that \(\mathcal {C}\) outputs 1 in Game i for \(i\in \{0,1,\dots , 6\}\).
Lemma 10
\(|\Pr [F_0] -\frac{1}{2}| = \epsilon \).
Proof
This lemma follows immediately from the fact that \(\mathcal {C}\) honestly simulates the attack environment for \(\mathcal {A}\) and outputs 1 if and only if \(b=b^*\).
Game 1. This game is identical to Game 0 except that the challenger \(\mathcal {C}\) changes the key generation phase as follows:
-
KeyGen. First uniformly choose matrices \(\mathbf A\xleftarrow {\$}\mathcal {D}_{\lambda }^{n{\times }n},\mathbf C\xleftarrow {\$}\mathcal {D}_{\lambda }^{\ell {\times }n}\), \(\mathbf S_0,\mathbf S_1 \xleftarrow {\$}(\widetilde{\mathcal {B}}_{\mu _1}^{n})^q\), \(\mathbf E_0,\mathbf E_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n\times q}\), and \(\mathbf B_1' \xleftarrow {\$} \{0, 1\}^{q\times n} \). Then, compute \(\mathbf B_0 = \mathbf S_0^\mathsf{T}\mathbf A + \mathbf E_0^\mathsf{T}, \mathbf B_1= \mathbf S_1^\mathsf{T}\mathbf A + \mathbf E_1^\mathsf{T}\in \{0, 1\}^{q\times n} \). Finally, \(\mathcal {C}\) sends \(pk=(\mathbf A,\mathbf B_0,\mathbf B_1',\mathbf C)\) to the adversary \(\mathcal {A}\), and keeps \(sk=(\mathbf S_0,\mathbf S_1)\) to itself.
Lemma 11
If the decisional \(\mathsf {LPN}_{\mu ,n}\) problem is \(2^{\omega (n^{\frac{1}{2}})}\)-hard, then we have \(|\Pr [F_1] -\Pr [F_0]| \le \mathsf{negl}(n)\).
Proof
Since the only difference between Game 0 and Game 1 is that \(\mathcal {C}\) replaces \(\mathbf B_1= \mathbf S_1^\mathsf{T}\mathbf A + \mathbf E_1^\mathsf{T}\in \{0, 1\}^{q\times n} \) in Game 0 with a randomly chosen \(\mathbf B_1'\xleftarrow {\$} \{0, 1\}^{q\times n} \) in Game 1, Game 0 and Game 1 are computationally indistinguishable for any PPT adversary \(\mathcal {A}\) by our assumption and Corollary 1. This means that \(|\Pr [F_1] -\Pr [F_0]| \le \mathsf{negl}(n)\) holds.
Game 2. This game is identical to Game 1 except that the challenger \(\mathcal {C}\) changes the key generation phase as follows:
-
KeyGen. First uniformly choose matrices \(\mathbf A\xleftarrow {\$}\mathcal {D}_{\lambda }^{n{\times }n},\mathbf C\xleftarrow {\$}\mathcal {D}_{\lambda }^{\ell {\times }n}\), \(\mathbf S_0,\mathbf S_1 \xleftarrow {\$}(\widetilde{\mathcal {B}}_{\mu _1}^{n})^q\), \(\mathbf E_0,\mathbf E_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n\times q}\), and \(\mathbf B_1''\xleftarrow {\$} \{0, 1\}^{q\times n} \). Then, compute \(\mathbf B_0 = \mathbf S_0^\mathsf{T}\mathbf A + \mathbf E_0^\mathsf{T}, \mathbf B_1= \mathbf S_1^\mathsf{T}\mathbf A + \mathbf E_1^\mathsf{T}\in \{0, 1\}^{q\times n} \) and \(\mathbf B_1' = \mathbf B_1'' - \mathbf {GH}_{\mathbf t^*}\). Finally, \(\mathcal {C}\) sends \(pk=(\mathbf A,\mathbf B_0,\mathbf B_1',\mathbf C)\) to the adversary \(\mathcal {A}\), and keeps \(sk=(\mathbf S_0,\mathbf S_1)\) to itself.
-
Challenge. After receiving two equal-length plaintexts \(\mathbf m_0,\mathbf m_1\in \mathcal {M}\) from the adversary \(\mathcal {A}\), the challenger \(\mathcal {C}\) first randomly chooses a bit \(b^*\xleftarrow {\$} \{0, 1\}^{} \), and
$$ \mathbf s \xleftarrow {\$}\widetilde{\mathcal {B}}_{\mu _1}^{n}, \mathbf e_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n}, \mathbf e_2 \xleftarrow {\$}\mathcal {B}_\mu ^{\ell }, \mathbf S_0',\mathbf S_1'\xleftarrow {\$}(\widetilde{\mathcal {B}}_{\mu _1}^{n})^q, \mathbf E_0',\mathbf E_1'\xleftarrow {\$}\mathcal {B}_\mu ^{n\times q} $$Then, it defines
$$ \begin{array}{ll} \mathbf c^*: = \mathbf {As} + \mathbf e_1 &{} \in \{0, 1\}^{n} \\ \mathbf c_0^*: = (\mathbf {GH}_{\mathbf t^*} + \mathbf B_0)\mathbf s + (\mathbf S_0')^\mathsf{T}\mathbf e_1 - (\mathbf E_0')^\mathsf{T}\mathbf s &{} \in \{0, 1\}^{q} \\ \mathbf c_1^*:= (\mathbf {GH}_{\mathbf t^*} + \mathbf B_1)\mathbf s + (\mathbf S_1)^\mathsf{T}\mathbf e_1 - (\mathbf E_1)^\mathsf{T}\mathbf s &{} \in \{0, 1\}^{q} \\ \mathbf c_2^*:= \mathbf {Cs} + \mathbf e_2 + \mathbf G_2 \mathbf m_{b^*} &{} \in \{0, 1\}^{\ell } , \end{array} $$and returns the challenge ciphertext \((\mathbf c^*,\mathbf c_0^*,\mathbf c_1^*,\mathbf c_2^*)\) to the adversary \(\mathcal {A}\).
Lemma 12
\(\Pr [F_2] = \Pr [F_1]\).
Proof
Because \(\mathbf B_1''\xleftarrow {\$} \{0, 1\}^{q\times n} \), the matrix \(\mathbf B_1' = \mathbf B_1'' - \mathbf {GH}_{\mathbf t^*}\) is also uniformly distributed over \( \{0, 1\}^{q\times n} \). This means that the public key in Game 2 has the same distribution as that in Game 1. In addition, \(\mathbf S_1 \xleftarrow {\$}(\widetilde{\mathcal {B}}_{\mu _1}^{n})^q\) and \(\mathbf E_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n\times q}\) are chosen from the same distributions as \(\mathbf S_1'\) and \(\mathbf E_1'\), respectively, and \(\mathbf B_1= \mathbf S_1^\mathsf{T}\mathbf A + \mathbf E_1^\mathsf{T}\in \{0, 1\}^{q\times n} \) is not included in the public key \(pk=(\mathbf A,\mathbf B_0,\mathbf B_1',\mathbf C)\) (thus \(\mathcal {A}\) has no information about \(\mathbf S_1\) and \(\mathbf E_1\) before the challenge phase), so the challenge ciphertext in Game 2 also has the same distribution as that in Game 1. In all, Game 2 is identical to Game 1 in the adversary's view. Thus, we have \(\Pr [F_2] = \Pr [F_1]\).
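The core of this argument — subtracting any fixed matrix from a uniform one over GF(2) yields a uniform matrix, since XOR with a constant is a bijection — can be checked exhaustively on a toy domain. The 3-bit `mask` below is an arbitrary stand-in for \(\mathbf{GH}_{\mathbf t^*}\):

```python
from itertools import product
from collections import Counter

# Over GF(2), subtraction is XOR. If B'' is uniform, then B' = B'' - M is
# uniform for ANY fixed M, because XOR with a constant permutes the space.
mask = (1, 0, 1)  # arbitrary fixed value; any choice gives the same conclusion

counts = Counter(tuple(b ^ m for b, m in zip(bits, mask))
                 for bits in product((0, 1), repeat=3))

# Every 3-bit outcome appears exactly once, i.e. B' is exactly uniform.
assert len(counts) == 8 and all(c == 1 for c in counts.values())
```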
Game 3. This game is identical to Game 2 except that the challenger \(\mathcal {C}\) changes the key generation phase as follows:
-
KeyGen. First uniformly choose matrices \(\mathbf A\xleftarrow {\$}\mathcal {D}_{\lambda }^{n{\times }n},\mathbf C\xleftarrow {\$}\mathcal {D}_{\lambda }^{\ell {\times }n}\), \(\mathbf S_0,\mathbf S_1 \xleftarrow {\$}(\widetilde{\mathcal {B}}_{\mu _1}^{n})^q\), and \(\mathbf E_0,\mathbf E_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n\times q}\). Then, compute \(\mathbf B_0 = \mathbf S_0^\mathsf{T}\mathbf A + \mathbf E_0^\mathsf{T}, \mathbf B_1= \mathbf S_1^\mathsf{T}\mathbf A + \mathbf E_1^\mathsf{T}\in \{0, 1\}^{q\times n} \) and \(\mathbf B_1' = \mathbf B_1 - \mathbf {GH}_{\mathbf t^*}\). Finally, \(\mathcal {C}\) sends \(pk=(\mathbf A,\mathbf B_0,\mathbf B_1',\mathbf C)\) to the adversary \(\mathcal {A}\), and keeps \(sk=(\mathbf S_0,\mathbf S_1)\) to itself.
Lemma 13
If the decisional \(\mathsf {LPN}_{\mu ,n}\) problem is \(2^{\omega (n^{\frac{1}{2}})}\)-hard, then \(|\Pr [F_3] -\Pr [F_2]| \le \mathsf{negl}(n)\).
Proof
Since the only difference between Game 2 and Game 3 is that \(\mathcal {C}\) replaces the randomly chosen \(\mathbf B_1''\xleftarrow {\$} \{0, 1\}^{q\times n} \) in Game 2 with \(\mathbf B_1= \mathbf S_1^\mathsf{T}\mathbf A + \mathbf E_1^\mathsf{T}\in \{0, 1\}^{q\times n} \) in Game 3, by our assumption and Corollary 1 Game 2 and Game 3 are computationally indistinguishable for any PPT adversary \(\mathcal {A}\), even one that sees \((\mathbf S_1^\mathsf{T}\mathbf e_1,\mathbf E_1^\mathsf{T}\mathbf s)\) in the challenge ciphertext. This means that \(|\Pr [F_3] -\Pr [F_2]| \le \mathsf{negl}(n)\) holds.
Remark 3
Note that for the challenge ciphertext \((\mathbf c^*,\mathbf c_0^*,\mathbf c_1^*,\mathbf c_2^*)\) in Game 3, we have \(\mathbf c_1^*:= (\mathbf {GH}_{\mathbf t^*} + \mathbf B_1')\mathbf s + \mathbf S_1^\mathsf{T}\mathbf e_1 - \mathbf E_1^\mathsf{T}\mathbf s = \mathbf S_1^\mathsf{T}\mathbf c^*\).
Game 4. This game is identical to Game 3 except that the challenger \(\mathcal {C}\) answers the decryption queries by using \(\mathbf S_1\) instead of \(\mathbf S_0\).
Lemma 14
\(|\Pr [F_4] -\Pr [F_3]| \le \mathsf{negl}(n)\).
Proof
This lemma directly follows from the fact that, except with negligible probability, \(\mathbf S_0\) and \(\mathbf S_1\) decrypt any valid ciphertext to the same result.
Game 5. This game is identical to Game 4 except that the challenger \(\mathcal {C}\) changes the key generation phase and the challenge phase as follows:
-
KeyGen. First uniformly choose matrices \(\mathbf A\xleftarrow {\$}\mathcal {D}_{\lambda }^{n{\times }n},\mathbf C\xleftarrow {\$}\mathcal {D}_{\lambda }^{\ell {\times }n}\), \(\mathbf S_0,\mathbf S_1 \xleftarrow {\$}(\widetilde{\mathcal {B}}_{\mu _1}^{n})^q\), and \(\mathbf E_0,\mathbf E_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n\times q}\). Then, compute \(\mathbf B_0 = \mathbf S_0^\mathsf{T}\mathbf A + \mathbf E_0^\mathsf{T}, \mathbf B_1= \mathbf S_1^\mathsf{T}\mathbf A + \mathbf E_1^\mathsf{T}\in \{0, 1\}^{q\times n} \), \(\mathbf B_0' = \mathbf B_0 - \mathbf {GH}_{\mathbf t^*}\) and \(\mathbf B_1' = \mathbf B_1 - \mathbf {GH}_{\mathbf t^*}\). Finally, \(\mathcal {C}\) sends \(pk=(\mathbf A,\mathbf B_0',\mathbf B_1',\mathbf C)\) to the adversary \(\mathcal {A}\), and keeps \(sk=(\mathbf S_0,\mathbf S_1)\) to itself.
-
Challenge. After receiving two equal-length plaintexts \(\mathbf m_0,\mathbf m_1\in \mathcal {M}\) from the adversary \(\mathcal {A}\), the challenger \(\mathcal {C}\) first randomly chooses a bit \(b^*\xleftarrow {\$} \{0, 1\}^{} \), and \(\mathbf s \xleftarrow {\$}\widetilde{\mathcal {B}}_{\mu _1}^{n}, \mathbf e_1 \xleftarrow {\$}\mathcal {B}_\mu ^{n}\) and \(\mathbf e_2 \xleftarrow {\$}\mathcal {B}_\mu ^{\ell }\). Then, it defines
$$ \begin{array}{ll} \mathbf c^*: = \mathbf {As} + \mathbf e_1 &{} \in \{0, 1\}^{n} \\ \mathbf c_0^*: = (\mathbf {GH}_{\mathbf t^*} + \mathbf B_0')\mathbf s + \mathbf S_0^\mathsf{T}\mathbf e_1 - \mathbf E_0^\mathsf{T}\mathbf s = \mathbf S_0^\mathsf{T}\mathbf c^*&{} \in \{0, 1\}^{q} \\ \mathbf c_1^*:= (\mathbf {GH}_{\mathbf t^*} + \mathbf B_1')\mathbf s + \mathbf S_1^\mathsf{T}\mathbf e_1 - \mathbf E_1^\mathsf{T}\mathbf s = \mathbf S_1^\mathsf{T}\mathbf c^*&{} \in \{0, 1\}^{q} \\ \mathbf c_2^*:= \mathbf {Cs} + \mathbf e_2 + \mathbf G_2 \mathbf m_{b^*} &{} \in \{0, 1\}^{\ell } , \end{array} $$and returns the challenge ciphertext \((\mathbf c^*,\mathbf c_0^*,\mathbf c_1^*,\mathbf c_2^*)\) to the adversary \(\mathcal {A}\).
Lemma 15
If the decisional \(\mathsf {LPN}_{\mu ,n}\) problem is \(2^{\omega (n^{\frac{1}{2}})}\)-hard, then we have that \(|\Pr [F_5] -\Pr [F_4]| \le \mathsf{negl}(n)\).
Proof
One can show that this lemma holds by using arguments similar to those in the proofs of Lemmas 10 through 14. We omit the details.
Game 6. This game is identical to Game 5 except that the challenger \(\mathcal {C}\) changes the challenge phase as follows:
-
Challenge. After receiving two equal-length plaintexts \(\mathbf m_0,\mathbf m_1\in \mathcal {M}\) from the adversary \(\mathcal {A}\), the challenger \(\mathcal {C}\) first randomly chooses \(b^*\xleftarrow {\$} \{0, 1\}^{} , \mathbf u\xleftarrow {\$} \{0, 1\}^{n} \) and \(\mathbf v\xleftarrow {\$} \{0, 1\}^{\ell } \). Then, it defines
$$ \begin{array}{ll} \mathbf c^*: = \mathbf u&{} \in \{0, 1\}^{n} \\ \mathbf c_0^*:=\mathbf S_0^\mathsf{T}\mathbf c^*&{} \in \{0, 1\}^{q} \\ \mathbf c_1^*:= \mathbf S_1^\mathsf{T}\mathbf c^*&{} \in \{0, 1\}^{q} \\ \mathbf c_2^*:= \mathbf v + \mathbf G_2 \mathbf m_{b^*} &{} \in \{0, 1\}^{\ell } , \end{array} $$and returns the challenge ciphertext \((\mathbf c^*,\mathbf c_0^*,\mathbf c_1^*,\mathbf c_2^*)\) to the adversary \(\mathcal {A}\).
Lemma 16
If the decisional \(\mathsf {LPN}_{\mu ,n}\) problem is \(2^{\omega (n^{\frac{1}{2}})}\)-hard, then we have that \(|\Pr [F_6] -\Pr [F_5]| \le \mathsf{negl}(n)\).
Proof
Since the only difference between Game 5 and Game 6 is that \(\mathcal {C}\) replaces \(\mathbf c^*= \mathbf {As} + \mathbf e_1\) and \(\mathbf c_2^* = \mathbf {Cs} + \mathbf e_2 + \mathbf G_2 \mathbf m_{b^*}\) in Game 5 with \(\mathbf c^*: = \mathbf u\) and \(\mathbf c_2^*:= \mathbf v + \mathbf G_2 \mathbf m_{b^*}\) in Game 6, where \(\mathbf u\xleftarrow {\$} \{0, 1\}^{n} \) and \(\mathbf v\xleftarrow {\$} \{0, 1\}^{\ell } \), by our assumption and Corollary 1 Game 5 and Game 6 are computationally indistinguishable for any PPT adversary \(\mathcal {A}\). Hence, \(|\Pr [F_6] -\Pr [F_5]| \le \mathsf{negl}(n)\) holds.
Lemma 17
\(\Pr [F_6] = \frac{1}{2}\).
Proof
This claim follows from the fact that the challenge ciphertext \((\mathbf c^*,\mathbf c_0^*,\mathbf c_1^*,\mathbf c_2^*)\) in Game 6 perfectly hides the information of \(\mathbf m_{b^*}\).
In all, by Lemmas 10 through 17, we have that \(\epsilon = |\Pr [F_0] - \frac{1}{2}| \le \mathsf{negl}(n) \). This completes the proof of Theorem 5.
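The bookkeeping behind this final step is the standard hybrid argument: the triangle inequality telescopes the per-game gaps, so the CCA advantage is bounded by the sum of the (negligible) differences between adjacent games plus \(|\Pr[F_6]-\frac12|=0\). A numeric sketch with purely hypothetical probabilities:

```python
# Hypothetical values of Pr[F_0], ..., Pr[F_6]; the last equals 1/2 (Lemma 17).
p = [0.507, 0.5065, 0.5065, 0.506, 0.5055, 0.5005, 0.5]

# One term per game hop, |Pr[F_{i+1}] - Pr[F_i]|, each negligible by Lemmas 11-16.
hops = [abs(p[i + 1] - p[i]) for i in range(6)]

# Triangle inequality: the overall advantage telescopes through the hops.
adv = abs(p[0] - 0.5)
assert adv <= sum(hops) + abs(p[6] - 0.5) + 1e-12
```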
Notes
- 1.
We are not aware of any non-trivial time-query tradeoff results to break low-noise LPN in time \(2^{o(n^{1-c})}\) even with super-polynomial number of queries.
- 2.
More precisely, Alekhnovich's PKE is based on a variant called Exact LPN, whose noise vector is sampled from \(\chi _{\mu {q}}^q\) (i.e., the uniform distribution over q-bit strings of Hamming weight \(\mu {q}\)) for \(\mu =\frac{1}{\sqrt{n}}\); this variant is implied by LPN with noise rate \(\mu \).
- 3.
Typically (in the context of one-way functions), the Goldreich-Levin Theorem [25] assumes a uniformly random secret \(\mathbf s\), which is however not necessary. A Markov argument shows that \(\mathbf s\) can follow any polynomial-time sampleable distribution, as long as f is hard to invert on \(\mathbf s\).
- 4.
- 5.
We assume \(\mathbf x\sim {U_n}\) to be in line with [17], but actually our results hold for any efficiently sampleable \(\mathbf x\) as long as \(\mathbf x\) given \(f(\mathbf x)\) is \(2^{-2\lambda }\)-hard-to-invert.
- 6.
Informally, we say that a cryptographic scheme/problem \(\varPi \) is T-secure/hard if every probabilistic adversary of time (and query, if applicable) complexity T achieves advantage no more than \(1/T\) in breaking/solving \(\varPi \).
- 7.
Recall that for \(m{\ll }n\) we have by Stirling’s approximation that \({n \atopwithdelims ()m}\approx {n^{m}}/m!\) and thus \(\chi _{\log {n}}^n\) (uniform distribution over n-bit strings of Hamming weight \(\log {n}\)) is of min-entropy roughly \(\log ^2{n}-\log {n}\log \log {n}\ge \log ^2{n}/2\).
- 8.
Please do not confuse \(q'\) with q, where \(q'\) is the number of samples in LPN (see Theorem 2) and is set to n (for a square matrix), and q is the number of parallel repetitions of LPN on independent secrets and noise vectors.
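Two of the notes above are easy to check numerically: the exact-weight noise distribution \(\chi_{\mu q}^q\) of Note 2, and Note 7's min-entropy estimate \(\log_2\binom{n}{\log n}\ge \log^2 n/2\). The parameters below are toy choices for illustration only:

```python
import math
import random

def sample_exact_noise(q, w, rnd):
    """Uniformly sample a q-bit vector of Hamming weight exactly w, i.e. the
    distribution chi_w^q (Note 2 uses w = mu*q with mu = 1/sqrt(n))."""
    e = [0] * q
    for i in rnd.sample(range(q), w):
        e[i] = 1
    return e

e = sample_exact_noise(100, 10, random.Random(1))
assert sum(e) == 10   # the weight is exact, unlike Bernoulli noise

# Note 7: chi_{log n}^n has min-entropy log2 C(n, log n) >= (log^2 n)/2.
n = 2 ** 16
m = round(math.log2(n))                 # log n = 16
h_min = math.log2(math.comb(n, m))      # exact min-entropy in bits
assert h_min >= (math.log2(n) ** 2) / 2
```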
References
Related work on LPN-based authentication schemes. http://www.ecrypt.eu.org/lightweight/index.php/HB
Alekhnovich, M.: More on average case vs approximation complexity. In: 44th Annual Symposium on Foundations of Computer Science, pp. 298–307. IEEE, Cambridge, Massachusetts, October 2003
Applebaum, B., Cash, D., Peikert, C., Sahai, A.: Fast cryptographic primitives and circular-secure encryption based on hard learning problems. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 595–618. Springer, Heidelberg (2009)
Applebaum, B., Ishai, Y., Kushilevitz, E.: Cryptography with constant input locality. In: Menezes, A. (ed.) CRYPTO 2007. LNCS, vol. 4622, pp. 92–110. Springer, Heidelberg (2007)
Becker, A., Joux, A., May, A., Meurer, A.: Decoding random binary linear codes in \(2^{n/20}\): how 1 + 1 = 0 improves information set decoding. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 520–536. Springer, Heidelberg (2012)
Berlekamp, E., McEliece, R.J., van Tilborg, H.: On the inherent intractability of certain coding problems. IEEE Trans. Inf. Theor. 24(3), 384–386 (1978)
Bernstein, D.J., Lange, T., Peters, C.: Smaller decoding exponents: ball-collision decoding. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 743–760. Springer, Heidelberg (2011)
Blum, A., Furst, M.L., Kearns, M., Lipton, R.J.: Cryptographic primitives based on hard learning problems. In: Stinson, D.R. (ed.) CRYPTO 1993. LNCS, vol. 773, pp. 278–291. Springer, Heidelberg (1994)
Blum, A., Kalai, A., Wasserman, H.: Noise-tolerant learning, the parity problem, and the statistical query model. J. ACM 50(4), 506–519 (2003)
Boneh, D., Canetti, R., Halevi, S., Katz, J.: Chosen-ciphertext security from identity-based encryption. SIAM J. Comput. 36(5), 1301–1328 (2006). http://dx.doi.org/10.1137/S009753970544713X
Canetti, R., Halevi, S., Katz, J.: Chosen-ciphertext security from identity-based encryption. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 207–222. Springer, Heidelberg (2004)
Canteaut, A., Chabaud, F.: A new algorithm for finding minimum-weight words in a linear code: application to McEliece's cryptosystem and to narrow-sense BCH codes of length 511. IEEE Trans. Inf. Theor. 44(1), 367–378 (1998)
Cash, D., Kiltz, E., Tessaro, S.: Two-round man-in-the-middle security from LPN. In: Kushilevitz, E., et al. (eds.) TCC 2016-A. LNCS, vol. 9562, pp. 225–248. Springer, Heidelberg (2016). doi:10.1007/978-3-662-49096-9_10
Cramer, R., Damgård, I.: On the amortized complexity of zero-knowledge protocols. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 177–191. Springer, Heidelberg (2009)
Damgård, I., Park, S.: How practical is public-key encryption based on LPN and Ring-LPN? Cryptology ePrint Archive, Report 2012/699 (2012). http://eprint.iacr.org/2012/699
David, B., Dowsley, R., Nascimento, A.C.A.: Universally composable oblivious transfer based on a variant of LPN. In: Gritzalis, D., Kiayias, A., Askoxylakis, I. (eds.) CANS 2014. LNCS, vol. 8813, pp. 143–158. Springer, Heidelberg (2014)
Dodis, Y., Kalai, Y.T., Lovett, S.: On cryptography with auxiliary input. In: Mitzenmacher, M. (ed.) STOC. pp. 621–630. ACM (2009)
Dodis, Y., Kiltz, E., Pietrzak, K., Wichs, D.: Message authentication, revisited. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 355–374. Springer, Heidelberg (2012)
Döttling, N.: Low noise LPN: KDM secure public key encryption and sample amplification. In: Katz, J. (ed.) PKC 2015. LNCS, vol. 9020, pp. 604–626. Springer, Heidelberg (2015)
Döttling, N., Müller-Quade, J., Nascimento, A.C.A.: IND-CCA secure cryptography based on a variant of the LPN problem. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 485–503. Springer, Heidelberg (2012)
Feldman, V., Gopalan, P., Khot, S., Ponnuswami, A.K.: New results for learning noisy parities and halfspaces. In: 47th Symposium on Foundations of Computer Science, pp. 563–574. IEEE, Berkeley, CA, USA, 21–24 October 2006
Forney, D.: Concatenated Codes. MIT Press, Cambridge (1966)
Gentry, C., Peikert, C., Vaikuntanathan, V.: Trapdoors for hard lattices and new cryptographic constructions. In: Ladner, R.E., Dwork, C. (eds.) Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pp. 197–206. ACM, Victoria, BC, Canada, 17–20 May 2008
Gertner, Y., Kannan, S., Malkin, T., Reingold, O., Viswanathan, M.: The relationship between public key encryption and oblivious transfer. In: Proceedings of 41st Annual Symposium on Foundations of Computer Science, 2000, pp. 325–335 (2000)
Goldreich, O., Levin, L.A.: A hard-core predicate for all one-way functions. In: Johnson [31], pp. 25–32
Goldwasser, S., Kalai, Y., Peikert, C., Vaikuntanathan, V.: Robustness of the learning with errors assumption. In: Innovations in Theoretical Computer Science, ITCS 2010, pp. 230–240. Tsinghua University Press (2010)
Graham, R.L., Knuth, D.E., Patashnik, O.: Concrete Mathematics: A Foundation for Computer Science, 2nd edn. Addison-Wesley Longman Publishing Co., Inc, Boston (1994)
Håstad, J., Impagliazzo, R., Levin, L., Luby, M.: Construction of pseudorandom generator from any one-way function. SIAM J. Comput. 28(4), 1364–1396 (1999)
Hopper, N.J., Blum, M.: Secure human identification protocols. In: Boyd, C. (ed.) ASIACRYPT 2001. LNCS, vol. 2248, pp. 52–66. Springer, Heidelberg (2001)
Jain, A., Krenn, S., Pietrzak, K., Tentes, A.: Commitments and efficient zero-knowledge proofs from learning parity with noise. In: Wang, X., Sako, K. (eds.) ASIACRYPT 2012. LNCS, vol. 7658, pp. 663–680. Springer, Heidelberg (2012)
Johnson, D.S. (ed.): Proceedings of the Twenty First Annual ACM Symposium on Theory of Computing. Seattle, Washington, 15–17 May 1989
Juels, A., Weis, S.A.: Authenticating pervasive devices with human protocols. In: Shoup, V. (ed.) CRYPTO 2005. LNCS, vol. 3621, pp. 293–308. Springer, Heidelberg (2005)
Justesen, J.: A class of constructive asymptotically good algebraic codes. IEEE Trans. Inf. Theor. 18(5), 652–656 (1972)
Katz, J., Shin, J.S.: Parallel and concurrent security of the HB and HB\(^{+}\) protocols. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 73–87. Springer, Heidelberg (2006)
Kiltz, E.: Chosen-ciphertext security from tag-based encryption. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 581–600. Springer, Heidelberg (2006)
Kiltz, E., Masny, D., Pietrzak, K.: Simple chosen-ciphertext security from low-noise LPN. In: Krawczyk, H. (ed.) PKC 2014. LNCS, vol. 8383, pp. 1–18. Springer, Heidelberg (2014)
Kiltz, E., Pietrzak, K., Cash, D., Jain, A., Venturi, D.: Efficient authentication from hard learning problems. In: Paterson, K.G. (ed.) EUROCRYPT 2011. LNCS, vol. 6632, pp. 7–26. Springer, Heidelberg (2011)
Kirchner, P.: Improved generalized birthday attack. Cryptology ePrint Archive, Report 2011/377 (2011). http://eprint.iacr.org/2011/377
Levieil, É., Fouque, P.-A.: An improved LPN algorithm. In: De Prisco, R., Yung, M. (eds.) SCN 2006. LNCS, vol. 4116, pp. 348–359. Springer, Heidelberg (2006)
Lyubashevsky, V.: The parity problem in the presence of noise, decoding random linear codes, and the subset sum problem. In: Chekuri, C., Jansen, K., Rolim, J.D.P., Trevisan, L. (eds.) APPROX 2005 and RANDOM 2005. LNCS, vol. 3624, pp. 378–389. Springer, Heidelberg (2005)
Lyubashevsky, V., Masny, D.: Man-in-the-Middle secure authentication schemes from LPN and weak PRFs. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 308–325. Springer, Heidelberg (2013)
May, A., Meurer, A., Thomae, E.: Decoding Random Linear Codes in \(\tilde{\cal O}(2^{0.054n})\). In: Lee, D.H., Wang, X. (eds.) ASIACRYPT 2011. LNCS, vol. 7073, pp. 107–124. Springer, Heidelberg (2011)
Micciancio, D., Mol, P.: Pseudorandom knapsacks and the sample complexity of LWE search-to-decision reductions. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 465–484. Springer, Heidelberg (2011)
Micciancio, D., Peikert, C.: Trapdoors for lattices: simpler, tighter, faster, smaller. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 700–718. Springer, Heidelberg (2012)
Pietrzak, K.: Cryptography from learning parity with noise. In: Bieliková, M., Friedrich, G., Gottlob, G., Katzenbeisser, S., Turán, G. (eds.) SOFSEM 2012. LNCS, vol. 7147, pp. 99–114. Springer, Heidelberg (2012)
Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. In: Gabow, H.N., Fagin, R. (eds.) STOC, pp. 84–93. ACM (2005)
Rosen, A., Segev, G.: Chosen-ciphertext security via correlated products. In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 419–436. Springer, Heidelberg (2009)
Stern, J.: A method for finding codewords of small weight. In: 3rd International Colloquium on Coding Theory and Applications, pp. 106–113 (1988)
Yu, Y.: The LPN problem with auxiliary input (withdrawn; see historical versions at http://eprint.iacr.org/2009/467)
Acknowledgments
Yu Yu was supported by the National Basic Research Program of China Grant No. 2013CB338004, the National Natural Science Foundation of China Grant (Nos. 61472249, 61572192, 61572149 and U1536103), Shanghai excellent academic leader funds (No. 16XD1400200) and International Science & Technology Cooperation & Exchange Projects of Shaanxi Province (2016KW-038). Jiang Zhang is supported by the National Basic Research Program of China under Grant No. 2013CB338003 and the National Natural Science Foundation of China under Grant Nos. U1536205, 61472250 and 61402286.
Appendices
A Definitions and Security Notions
1.1 A.1 Symmetric-Key Encryption Schemes with Auxiliary Input
Definition 6
(Symmetric-Key Encryption Schemes). A symmetric-key encryption scheme \(\varPi \) is a tuple (\(\mathsf {KeyGen}\),\(\mathsf {Enc}\),\(\mathsf {Dec}\)) with message space \(\mathcal {M}\), such that
-
\(\mathsf {KeyGen}(1^n)\) is a PPT algorithm that takes a security-parameter \(1^n\) and outputs a symmetric key k.
-
\(\mathsf {Enc}_{k}(m)\) is a PPT algorithm that encrypts a message \(m\in \mathcal {M}\) under key k and outputs a ciphertext c.
-
\(\mathsf {Dec}_{k}(c)\) is a deterministic polynomial-time algorithm that decrypts a ciphertext c using key k and outputs a plaintext m.
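The syntax in Definition 6 can be instantiated, purely as an interface illustration, by a one-time pad over \(\{0,1\}^n\); this toy scheme says nothing about the auxiliary-input security notions defined below, and the class and method names are our own:

```python
import random

class OneTimePadSKE:
    """Toy instantiation of the (KeyGen, Enc, Dec) syntax of Definition 6:
    a one-time pad over {0,1}^n, shown only to illustrate the interface."""
    def __init__(self, n):
        self.n = n
    def keygen(self):
        # KeyGen(1^n): output a symmetric key k
        return [random.randrange(2) for _ in range(self.n)]
    def enc(self, k, m):
        # Enc_k(m): here simply m XOR k
        return [a ^ b for a, b in zip(k, m)]
    def dec(self, k, c):
        # Dec_k(c): XOR is its own inverse over GF(2)
        return [a ^ b for a, b in zip(k, c)]

ske = OneTimePadSKE(8)
k = ske.keygen()
m = [1, 0, 1, 1, 0, 0, 1, 0]
assert ske.dec(k, ske.enc(k, m)) == m   # exactly the correctness condition of Definition 7
```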
Definition 7
(Correctness). We say that a symmetric-key encryption scheme \(\varPi \) = (\(\mathsf {KeyGen}\), \(\mathsf {Enc}\), \(\mathsf {Dec}\)) is correct, if it holds for every plaintext \(m\in \mathcal {M}\) that
Definition 8
(IND-CPA/IND-CCA SKE w.r.t. Auxiliary Input). For \(X\in \){CPA,CCA}, a symmetric-key encryption scheme \(\varPi \) = (\(\mathsf {KeyGen}\),\(\mathsf {Enc}\),\(\mathsf {Dec}\)) is IND-X secure w.r.t. sub-exponentially hard-to-invert auxiliary input if there exists a constant \(0<\alpha <1\) such that for any PPT adversary \({\mathcal {A}}\) and any \(2^{-\varOmega (n^{\alpha })}\)-hard-to-invert function f
where \(\mathsf {SKE}^{cpa}_{\varPi ,f,{\mathcal {A}}}\)(\(1^n\),\(\alpha \)) is the IND-CPA indistinguishability experiment defined as below:
-
1.
On \(k\leftarrow \mathsf {KeyGen}(1^n)\), the adversary takes as input \(1^n\), f(k), and is given oracle access to \(\mathsf {Enc}_k\). Then, he outputs a pair of messages \(m_0\) and \(m_1\) of the same length.
-
2.
A random bit \(b\xleftarrow {\$} \{0, 1\}^{} \) is sampled, and then a challenge ciphertext \(c\leftarrow \mathsf {Enc}_{k}(m_b)\) is computed and given to \({\mathcal {A}}\).
-
3.
\({\mathcal {A}}\) continues to have oracle access to \(\mathsf {Enc}_k\) and finally outputs \(b'\in \{0, 1\}^{} \).
-
4.
The experiment outputs 1 if \(b'=b\), and 0 otherwise.
and \(\mathsf {SKE}^{cca}_{\varPi ,f,{\mathcal {A}}}\)(\(1^n\),\(\alpha \)) is the IND-CCA indistinguishability experiment defined as below:
-
1.
On \(k\leftarrow \mathsf {KeyGen}(1^n)\), the adversary takes as input \(1^n\), f(k), and is given oracle access to \(\mathsf {Enc}_k\) and \(\mathsf {Dec}_k\). Then, he outputs a pair of messages \(m_0\) and \(m_1\) of the same length.
-
2.
A random bit \(b\xleftarrow {\$} \{0, 1\}^{} \) is sampled, and then a challenge ciphertext \(c\leftarrow \mathsf {Enc}_{k}(m_b)\) is computed and given to \({\mathcal {A}}\).
-
3.
\({\mathcal {A}}\) continues to have oracle access to \(\mathsf {Enc}_k\) and \(\mathsf {Dec}_k\) (with the exception that decryption for challenge ciphertext is not allowed) and finally outputs \(b'\in \{0, 1\}^{} \).
-
4.
The experiment outputs 1 if \(b'=b\), and 0 otherwise.
1.2 A.2 Public-Key Encryption Schemes
Definition 9
(Public-key Encryption Schemes). A public key encryption scheme \(\varPi \) is a tuple (\(\mathsf {KeyGen}\),\(\mathsf {Enc}\),\(\mathsf {Dec}\)) with message space \(\mathcal {M}\), such that
-
\(\mathsf {KeyGen}(1^n)\) is a PPT algorithm that takes a security-parameter \(1^n\) and outputs a pair of public and private keys (pk,sk).
-
\(\mathsf {Enc}_{pk}(m)\) is a PPT algorithm that encrypts message \(m\in \mathcal {M}\) under public key pk and outputs a ciphertext c.
-
\(\mathsf {Dec}_{sk}(c)\) is a deterministic polynomial-time algorithm that decrypts a ciphertext c using secret key sk and outputs a plaintext m (or \(\bot \)).
Definition 10
(Correctness). We say that a public-key encryption scheme \(\varPi \) = (\(\mathsf {KeyGen}\),\(\mathsf {Enc}\),\(\mathsf {Dec}\)) is correct, if it holds for every plaintext \(m\in \mathcal {M}\) that
Definition 11
(IND-CPA/IND-CCA PKE). For \(X\in \){CPA,CCA}, a public-key encryption scheme \(\varPi \) = (\(\mathsf {KeyGen}\),\(\mathsf {Enc}\),\(\mathsf {Dec}\)) is IND-X secure if for any PPT adversary \({\mathcal {A}}\)
where \(\mathsf {PKE}^{cpa}_{\varPi ,{\mathcal {A}}}\)(\(1^n\)) is the IND-CPA indistinguishability experiment defined as below:
-
1.
On \((pk,sk)\leftarrow \mathsf {KeyGen}(1^n)\), the adversary takes as input \(1^n\) and pk. Then, he outputs a pair of messages \(m_0\) and \(m_1\) of the same length.
-
2.
A random bit \(b\xleftarrow {\$} \{0, 1\}^{} \) is sampled, and then a challenge ciphertext \(c\leftarrow \mathsf {Enc}_{pk}(m_b)\) is computed and given to \({\mathcal {A}}\).
-
3.
\({\mathcal {A}}\) continues his computation and finally outputs \(b'\in \{0, 1\}^{} \).
-
4.
The experiment outputs 1 if \(b'=b\), and 0 otherwise.
and \(\mathsf {PKE}^{cca}_{\varPi ,{\mathcal {A}}}\)(\(1^n\)) is the IND-CCA indistinguishability experiment defined as below:
-
1.
On \((pk,sk)\leftarrow \mathsf {KeyGen}(1^n)\), the adversary takes as input \(1^n\) and pk, and is given oracle access to \(\mathsf {Dec}_{sk}\). Then, he outputs a pair of messages \(m_0\) and \(m_1\) of the same length.
-
2.
A random bit \(b\xleftarrow {\$} \{0, 1\}^{} \) is sampled, and then a challenge ciphertext \(c\leftarrow \mathsf {Enc}_{pk}(m_b)\) is computed and given to \({\mathcal {A}}\).
-
3.
\({\mathcal {A}}\) continues to have oracle access to \(\mathsf {Dec}_{sk}\) (with the exception that decryption for challenge ciphertext is not allowed) and finally outputs \(b'\in \{0, 1\}^{} \).
-
4.
The experiment outputs 1 if \(b'=b\), and 0 otherwise.
B Facts, Lemmas, Inequalities and Proofs Omitted
Lemma 18
(Sample-Preserving Reduction). For the same assumptions and notations as in the proof of Theorem 2, we have
Proof
Assume for contradiction that there exists a polynomial \(p(\cdot )\) and a PPT distinguisher \({\mathcal {D}}\) such that
for infinitely many n’s and we recall that \(\mathbf y,\mathbf r\sim U_\lambda \). Given input (\(z_1\),z,\(\mathbf A\),\({\mathbf A}{\cdot }\mathbf y+\mathbf e\),\({\mathbf r^\mathsf{T}}\)), we use an efficient \({\mathcal {D}}'\) (which invokes \({\mathcal {D}}\)) to predict the Goldreich-Levin hardcore bit \({\mathbf r^\mathsf{T}}{\cdot }\mathbf y\) with non-negligible probability (and thus a contradiction to the assumption). \({\mathcal {D}}'\) chooses a random \({\mathbf u}\xleftarrow {\$} \{0, 1\}^{q'} \),computes a new \(q'{\times }n\) Boolean matrix \({\tilde{\mathbf A}}={\mathbf A}-{\mathbf u}\cdot {\mathbf r^\mathsf{T}}\), applies \({\mathcal {D}}\) on \((z_1,z,\tilde{\mathbf A},{\mathbf A}{\mathbf y}+{\mathbf e})\) and outputs his answer. Note that \(\tilde{\mathbf A}\sim {U_{q'\times {n}}}\) and \({\mathbf A}{\mathbf y}+{\mathbf e}=\tilde{\mathbf A}{\mathbf y}+{\mathbf e}+{\mathbf u}\cdot {\mathbf r^\mathsf{T}}\mathbf y\). Therefore, when \({\mathbf r^\mathsf{T}}{\mathbf y}=0\) we have \((z_1,z,{\tilde{\mathbf A}},{{\mathbf A}{\mathbf y}+{\mathbf e}})\) follows \((f(\mathbf x,\mathbf e;Z),Z,{\tilde{\mathbf A}},{\tilde{\mathbf A}}{\cdot }{\mathbf y}{+}{\mathbf e})\) and for \({\mathbf r^\mathsf{T}}{\mathbf y}=1\) it is distributed according to \((f(\mathbf x,\mathbf e;Z),Z,{\tilde{\mathbf A}},U_{q'})\).
which completes the proof.
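The matrix re-randomization trick used by \({\mathcal D}'\), \(\tilde{\mathbf A}={\mathbf A}-{\mathbf u}\cdot {\mathbf r}^\mathsf{T}\), rests on the GF(2) identity \({\mathbf A}{\mathbf y}+{\mathbf e}=\tilde{\mathbf A}{\mathbf y}+{\mathbf e}+{\mathbf u}\cdot ({\mathbf r^\mathsf{T}}{\mathbf y})\), which holds for every choice of the variables and can be confirmed on a random toy instance:

```python
import random

def matvec(A, x):
    """Matrix-vector product over GF(2)."""
    return [sum(a & b for a, b in zip(row, x)) & 1 for row in A]

rnd = random.Random(2)
q, n = 8, 6   # toy dimensions
A = [[rnd.randrange(2) for _ in range(n)] for _ in range(q)]
y = [rnd.randrange(2) for _ in range(n)]
e = [rnd.randrange(2) for _ in range(q)]
r = [rnd.randrange(2) for _ in range(n)]
u = [rnd.randrange(2) for _ in range(q)]

# A_tilde = A - u * r^T over GF(2); A_tilde is uniform whenever A is
A_t = [[A[i][j] ^ (u[i] & r[j]) for j in range(n)] for i in range(q)]

ry = sum(a & b for a, b in zip(r, y)) & 1                      # hardcore bit r^T y
lhs = [a ^ b for a, b in zip(matvec(A, y), e)]                 # A y + e
rhs = [a ^ b ^ (ui & ry)                                       # A_tilde y + e + u*(r^T y)
       for a, b, ui in zip(matvec(A_t, y), e, u)]
assert lhs == rhs   # the identity exploited by D'
```

When \({\mathbf r^\mathsf{T}}{\mathbf y}=0\) the correction term vanishes, which is exactly why the two cases in the proof correspond to a real LPN sample versus a uniform one.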
Proof of
Lemma 9 Consider \(|(\mathbf S'_0-\mathbf S_0)^{\mathsf{T}}\mathbf e+(\mathbf E_0-\mathbf E'_0)^{\mathsf{T}}\mathbf s|\) conditioned on any \(|{\mathbf e}|\le 1.01{\mu }n\) (except for a \(2^{-\varOmega (n)}\)-fraction) and \(|{\mathbf s}|\le 2\mu {n}\). We have by Lemma 7 and Lemma 8 that \(\mathbf S_0^\mathsf{T}{\mathbf e}\), \({\mathbf S'_0}^\mathsf{T}{\mathbf e}\) are i.i.d. according to \(\mathcal {B}_{\delta _1}^q\), and \({\mathbf E_0}^\mathsf{T}{\mathbf s}\), \({\mathbf E'_0}^\mathsf{T}{\mathbf s}\) are i.i.d. according to \(\mathcal {B}_{\delta _2}^q\), where \(\delta _1{\le }1/2-n^{-\alpha /2}\) and \(\delta _2{\le }1/2-n^{-\alpha }/2\). Thus, \(\big (({\mathbf S'_0}-{\mathbf S_0})^\mathsf{T}{\mathbf e}+({\mathbf E_0}-{\mathbf E'_0})^{\mathsf{T}}\mathbf s\big )\) follows \(\mathcal {B}_\delta ^q\) for \(\delta {\le }1/2-2n^{-3\alpha }\) by the Piling-up lemma, and we complete the proof with Lemma 2.
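The Piling-up lemma invoked here states that the XOR of k independent \(\mathcal B_{\delta_i}\) bits is distributed as \(\mathcal B_{\delta}\) with \(\delta = 1/2 - 2^{k-1}\prod_{i}(1/2-\delta_i)\); for two bits this agrees with the direct computation \(\delta_1(1-\delta_2)+\delta_2(1-\delta_1)\):

```python
def xor_noise_rate(deltas):
    """Piling-up lemma: the XOR of independent Ber(delta_i) bits is
    Ber(1/2 - 2^{k-1} * prod_i (1/2 - delta_i))."""
    bias = 0.5
    for d in deltas:
        bias *= 2 * (0.5 - d)
    return 0.5 - bias

# Exact cross-check for two bits: Pr[e1 XOR e2 = 1] by direct enumeration.
d1, d2 = 0.4, 0.45
direct = d1 * (1 - d2) + d2 * (1 - d1)
assert abs(xor_noise_rate([d1, d2]) - direct) < 1e-12
```

Note how the combined noise rate drifts toward 1/2 as terms pile up, which is exactly why the proof must track how far \(\delta\) stays below 1/2.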
Lemma 19
(Flattening Shannon Entropy). For any n \(\in \) \(\mathbb {N}\), \(0<\mu <1/2\) and any constant \(0<\varDelta <1\), there exists some random variable \(W\in \{0, 1\}^{n} \) such that \({{\mathbf {H}}_{\infty }}(W)\ge (1-\varDelta ){n{{\mathbf {H}}}(\mu )}\) and \(\mathsf {SD}(\mathcal {B}_\mu ^n,W){\le }2^{-\varOmega (\mu {n})}\).
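The gap this lemma addresses is visible numerically: the raw min-entropy of \(\mathcal B_\mu^n\) is determined by its most likely outcome (the all-zero string) and falls well short of the Shannon entropy \(n{\mathbf H}(\mu)\) that the flattening argument nearly recovers. A quick check with illustrative parameters:

```python
import math

def H(mu):
    """Binary Shannon entropy (in bits) of Ber(mu)."""
    return -mu * math.log2(mu) - (1 - mu) * math.log2(1 - mu)

n, mu = 1000, 0.1
shannon = n * H(mu)                    # total Shannon entropy of B_mu^n
min_ent = -n * math.log2(1 - mu)       # min-entropy of B_mu^n: -log2 Pr[all-zero string]
assert min_ent < shannon               # raw min-entropy falls short, motivating the lemma
```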
Copyright information
© 2016 International Association for Cryptologic Research
About this paper
Cite this paper
Yu, Y., Zhang, J. (2016). Cryptography with Auxiliary Input and Trapdoor from Constant-Noise LPN. In: Robshaw, M., Katz, J. (eds) Advances in Cryptology – CRYPTO 2016. CRYPTO 2016. Lecture Notes in Computer Science(), vol 9814. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-53018-4_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-662-53017-7
Online ISBN: 978-3-662-53018-4