Papers updated in the last 183 days (1599 results)

Last updated:  2024-09-24
SQIsign2D-East: A New Signature Scheme Using 2-dimensional Isogenies
Kohei Nakagawa and Hiroshi Onuki
Isogeny-based cryptography comprises cryptographic schemes whose security is based on the hardness of a mathematical problem called the isogeny problem, and it is attracting attention as one of the candidates for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization competition. SQIsign has attracted much attention because of its very short signature and key sizes among the candidates for the NIST PQC standardization. Recently, many new schemes that use high-dimensional isogenies have been proposed. Among them, the signature scheme SQIsignHD has an even shorter signature size than SQIsign. However, it requires 4-dimensional isogeny computations for signature verification. In this paper, we propose a new signature scheme, SQIsign2D-East, which requires only 2-dimensional isogeny computations for verification, thus reducing the computational cost of verification. First, we generalize an algorithm called RandIsogImg, which computes a random isogeny of non-smooth degree. Then, using this generalized RandIsogImg, we construct the new signature scheme SQIsign2D-East.
Last updated:  2024-09-24
Compact Proofs of Partial Knowledge for Overlapping CNF Formulae
Gennaro Avitabile, Vincenzo Botta, Daniele Friolo, Daniele Venturi, and Ivan Visconti
At CRYPTO '94, Cramer, Damgård, and Schoenmakers introduced a general technique for constructing honest-verifier zero-knowledge proofs of partial knowledge (PPK), where a prover Alice wants to prove to a verifier Bob that she knows $\tau$ witnesses for $\tau$ claims out of $k$ claims, without revealing the indices of those $\tau$ claims. Their solution starts from a base honest-verifier zero-knowledge proof of knowledge $\Sigma$ and requires running $k$ executions of the base protocol in parallel, giving a complexity of $O(k\gamma(\Sigma))$, where $\gamma(\Sigma)$ is the communication complexity of the base protocol. However, modern practical scenarios require communication-efficient zero-knowledge proofs tailored to handle partial knowledge in specific application-dependent formats. In this paper we propose a technique to compose a large class of $\Sigma$-protocols for atomic statements into $\Sigma$-protocols for PPK over formulae in conjunctive normal form (CNF) that overlap, in the sense that there is a common subset of literals among all clauses of the formula. In such formulae, the statement is expressed as a conjunction of $m$ clauses, each of which consists of a disjunction of $k$ literals (i.e., each literal is an atomic statement), and $k$ literals are shared among clauses. The prover, for a threshold parameter $\tau \le k$, proves knowledge of at least $\tau$ witnesses for $\tau$ distinct literals in each clause. At the core of our protocol is a new technique to compose $\Sigma$-protocols for regular CNF relations (i.e., when $\tau=1$) that exploits the overlap among clauses, which we then generalize to formulae where $\tau>1$, providing improvements over state-of-the-art constructions.
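As background for readers, the classical CDS-style OR-composition the paper builds on can be illustrated with a minimal 1-out-of-2 Schnorr proof: the prover answers one branch honestly and simulates the other, splitting the Fiat-Shamir challenge so the verifier cannot tell which is which. The following Python sketch uses a deliberately tiny, insecure toy group; all concrete parameters are illustrative assumptions, not part of the paper.

# Toy 1-out-of-2 OR-proof (CDS composition of Schnorr): the prover knows the
# discrete log of y[b] and simulates the other branch. Tiny, insecure
# parameters chosen purely for illustration.
import hashlib, secrets

p, q, g = 23, 11, 4   # toy group: g generates the order-q subgroup mod p

def H(*parts):        # Fiat-Shamir challenge in Z_q
    h = hashlib.sha256()
    for v in parts:
        h.update(str(v).encode())
    return int.from_bytes(h.digest(), 'big') % q

def or_prove(y, b, w):                        # prover knows w with y[b] = g^w
    r = secrets.randbelow(q)
    c_sim, z_sim = secrets.randbelow(q), secrets.randbelow(q)
    a = [0, 0]
    a[b] = pow(g, r, p)                                         # honest commitment
    a[1 - b] = pow(g, z_sim, p) * pow(y[1 - b], -c_sim, p) % p  # simulated branch
    c = H(y[0], y[1], a[0], a[1])
    c_real = (c - c_sim) % q                  # the two challenges must sum to c
    z_real = (r + c_real * w) % q
    cs, zs = [0, 0], [0, 0]
    cs[b], zs[b], cs[1 - b], zs[1 - b] = c_real, z_real, c_sim, z_sim
    return a, cs, zs

def or_verify(y, a, cs, zs):
    if (cs[0] + cs[1]) % q != H(y[0], y[1], a[0], a[1]):
        return False
    return all(pow(g, zs[i], p) == a[i] * pow(y[i], cs[i], p) % p for i in (0, 1))

w = secrets.randbelow(q)
y = [pow(g, secrets.randbelow(q), p), pow(g, w, p)]  # prover knows dlog of y[1]
assert or_verify(y, *or_prove(y, 1, w))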
Last updated:  2024-09-24
The transition to post-quantum cryptography, metaphorically
Stefan-Lukas Gazdag and Sophia Grundner-Culemann
Are we there yet? Are we there yet? No, kids, the road to quantum-safety is long and sturdy. But let me tell you a story: Once upon a time, science discovered a great threat to Cryptography World: The scalable quantum computer! Nobody had ever seen one, but everyone understood it would break the mechanisms used to secure Internet communication since times of yore (or the late 20th century, anyway). The greatest minds from all corners of the land were gathered to invent, implement, and test newer, stronger tools. They worked day and night, but alas, when smaller quantum computers already started to emerge, no end to their research was in sight. How could that be? This paper provides a collection of carefully wrought, more or less creative and more or less consistent metaphors to explain to audiences at all expertise levels the manifold challenges researchers and practitioners face in the ongoing quest for post-quantum migration.
Last updated:  2024-09-24
Kronos: A Secure and Generic Sharding Blockchain Consensus with Optimized Overhead
Yizhong Liu, Andi Liu, Yuan Lu, Zhuocheng Pan, Yinuo Li, Jianwei Liu, Song Bian, and Mauro Conti
Sharding enhances blockchain scalability by dividing the network into shards, each managing specific unspent transaction outputs or accounts. Cross-shard transactions, a newly introduced transaction type, pose a critical challenge to the security and efficiency of sharding blockchains. Currently, there is no generic sharding blockchain consensus pattern that achieves both security and low overhead. In this paper, we present Kronos, a secure sharding blockchain consensus achieving optimized overhead. In particular, we propose a new secure sharding blockchain consensus pattern, based on a buffer managed jointly by shard members. Valid transactions are transferred to the payee via the buffer, while invalid ones are rejected through happy or unhappy paths. Kronos is proved to achieve security with atomicity under malicious clients while maintaining optimal intra-shard overhead. Rejection is efficient: happy paths require no Byzantine fault tolerance (BFT) protocol execution, and the cost of unhappy paths is still no higher than a two-phase commit. Besides, we propose secure cross-shard certification methods. Handling $b$ transactions, Kronos is proved to achieve cross-shard communication with low overhead $O(nb\lambda)$ ($n$ for the shard size and $\lambda$ for the security parameter). Notably, Kronos imposes no restrictions on the BFT protocol and does not rely on timing assumptions, offering optional constructions in various modules. Kronos could serve as a universal framework for enhancing the performance and scalability of existing BFT protocols. Kronos supports generic models, including asynchronous networks, and can increase throughput by several orders of magnitude. We implement Kronos using two prominent BFT protocols: asynchronous Speeding Dumbo (NDSS '22) and partially synchronous HotStuff (PODC '19). Extensive experiments (over up to 1000 AWS EC2 nodes across 4 AWS regions) demonstrate that Kronos scales the consensus nodes to thousands, achieving a substantial throughput of 320 ktx/sec with 2.0 sec latency. Kronos outperforms past solutions, achieving up to a 12$\times$ improvement in throughput and a 50% reduction in latency when cross-shard transactions dominate the workload.
Last updated:  2024-09-24
On the Anonymity of One Authentication and Key Agreement Scheme for Peer-to-Peer Cloud
Zhengjun Cao and Lihua Liu
Peer-to-peer communication systems can provide many functions, including anonymized routing of network traffic, massive parallel computing environments, and distributed storage. Anonymity refers to the state of being completely nameless, with no attached identifiers. Pseudonymity involves the use of a fictitious name that can be consistently linked to a particular user, though not necessarily to the real identity. Both provide a layer of privacy, shielding the user's true identity from public view, but we find that the two notions are often conflated. In this note, we clarify the differences between anonymity and pseudonymity. We also find that Zhong et al.'s key agreement scheme [IEEE TCC, 2022, 10(3), 1592-1603] fails to provide anonymity, contrary to what is claimed.
Last updated:  2024-09-23
The solving degrees for computing Gröbner bases of affine semi-regular polynomial sequences
Momonari Kudo and Kazuhiro Yokoyama
Determining the complexity of computing Gr\"{o}bner bases is an important problem both in theory and in practice, and the solving degree plays a key role there. In this paper, we study the solving degrees of affine semi-regular sequences and their homogenized sequences. Some of our results give mathematically rigorous proofs of the correctness of methods for computing Gr\"{o}bner bases of the ideal generated by an affine semi-regular sequence. This paper is a sequel to the authors' previous work and gives additional results on the solving degrees and important behaviors of Gr\"obner basis computation. We also define the generalized degree of regularity for a sequence of homogeneous polynomials. For the ideal generated by the homogenization of an affine semi-regular sequence, we relate its generalized degree of regularity to its maximal Gr\"{o}bner basis degree (i.e., the solving degree of the homogenized sequence). We also give the definition of a generalized (cryptographic) semi-regular sequence, which yields a new cryptographic assumption for estimating the security of cryptosystems. From our experimental observations, we raise a conjecture and some questions related to this generalized semi-regularity. These definitions and our results provide a theoretical formulation of the (somewhat heuristic) discussions carried out so far in the cryptographic community.
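For orientation, the classical degree of regularity of a semi-regular sequence (in the sense of Bardet, Faugère, and Salvy) is the index of the first non-positive coefficient of the series $\prod_i (1 - z^{d_i}) / (1 - z)^n$. The Python sketch below computes it by truncated power-series arithmetic; the parameters in the example are illustrative only.

# Minimal sketch of the classical semi-regularity bound: for m polynomials of
# degrees d_1..d_m in n variables, return the index of the first non-positive
# coefficient of prod_i (1 - z^{d_i}) / (1 - z)^n, truncated at N terms.

def series_mul(a, b, N):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def degree_of_regularity(n, degrees, N=200):
    num = [1] + [0] * (N - 1)          # numerator: prod (1 - z^d)
    for d in degrees:
        f = [0] * N
        f[0] = 1
        if d < N:
            f[d] = -1
        num = series_mul(num, f, N)
    # dividing by (1-z)^n equals taking n cumulative sums of the coefficients
    for _ in range(n):
        for k in range(1, N):
            num[k] += num[k - 1]
    for k, c in enumerate(num):
        if c <= 0:
            return k
    return None                        # increase N if no sign change was found

# Example: 10 quadrics in 8 variables; the series gives degree 5.
print(degree_of_regularity(8, [2] * 10))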
Last updated:  2024-09-23
Founding Quantum Cryptography on Quantum Advantage, or, Towards Cryptography from $\#\mathsf{P}$-Hardness
Dakshita Khurana and Kabir Tomer
Recent oracle separations [Kretschmer, TQC'21, Kretschmer et al., STOC'23] have raised the tantalizing possibility of building quantum cryptography from sources of hardness that persist even if the polynomial hierarchy collapses. We realize this possibility by building quantum bit commitments and secure computation from unrelativized, well-studied mathematical problems that are conjectured to be hard for $\mathsf{P}^{\#\mathsf{P}}$ -- such as approximating the permanents of complex Gaussian matrices, or approximating the output probabilities of random quantum circuits. Indeed, we show that as long as any one of the conjectures underlying sampling-based quantum advantage (e.g., BosonSampling, Random Circuit Sampling, IQP, etc.) is true, quantum cryptography can be based on the extremely mild assumption that $\mathsf{P}^{\#\mathsf{P}} \not\subseteq \mathsf{(io)BQP}/\mathsf{qpoly}$. Our techniques uncover strong connections between the hardness of approximating the probabilities of outcomes of quantum processes, the existence of ``one-way'' state synthesis problems, and the existence of useful cryptographic primitives such as one-way puzzles and quantum bit commitments. Specifically, we prove that the following hardness assumptions are equivalent under $\mathsf{BQP}$ reductions. 1. The hardness of approximating the probabilities of outcomes of certain efficiently sampleable distributions. That is, there exist quantumly efficiently sampleable distributions for which it is hard to approximate the probability assigned to a randomly chosen string in the support of the distribution (up to inverse polynomial multiplicative error). 2. The existence of one-way puzzles, where a quantum sampler outputs a pair of classical strings -- a puzzle and its key -- and where the hardness lies in finding the key corresponding to a random puzzle. These are known to imply quantum bit commitments [Khurana and Tomer, STOC'24]. 3. The existence of state puzzles, or one-way state synthesis, where it is hard to synthesize a secret quantum state given a public classical identifier. These capture the hardness of search problems with quantum inputs (secrets) and classical outputs (challenges). These are the first constructions of quantum cryptographic primitives (one-way puzzles, quantum bit commitments, state puzzles) from concrete, well-founded mathematical assumptions that do not imply the existence of classical cryptography. Along the way, we also show that distributions that admit efficient quantum samplers but cannot be pseudo-deterministically efficiently sampled imply quantum commitments.
Last updated:  2024-09-23
Adaptive Security, Erasures, and Network Assumptions in Communication-Local MPC
Nishanth Chandran, Juan Garay, Ankit Kumar Misra, Rafail Ostrovsky, and Vassilis Zikas
The problem of reliable/secure all-to-all communication over low-degree networks has been essential for communication-local (CL) n-party MPC (i.e., MPC protocols where every party directly communicates only with a few, typically polylogarithmic in n, parties) and more recently for communication over ad hoc networks, which are used in blockchain protocols. However, a limited number of adaptively secure solutions exist, and they all make relatively strong assumptions on the ability of parties to act in some specific manner before the adversary can corrupt them. Two such assumptions were made in the work of Chandran et al. [ITCS ’15]---parties can (a) multisend messages to several receivers simultaneously; and (b) securely erase the message and the identities of the receivers, before the adversary gets a chance to corrupt the sender (even if a receiver is corrupted). A natural question to ask is: Are these assumptions necessary for adaptively secure CL MPC? In this paper, we characterize the feasibility landscape for all-to-all reliable message transmission (RMT) under these two assumptions, and use this characterization to obtain (asymptotically) tight feasibility results for CL MPC. – First, we prove a strong impossibility result for a broad class of RMT protocols, termed here store-and-forward protocols, which includes all known communication protocols for CL MPC from standard cryptographic assumptions. Concretely, we show that no such protocol with a certain expansion rate can tolerate a constant fraction of parties being corrupted. – Next, under the assumption of only a PKI, we show that assuming secure erasures, we can obtain an RMT protocol between all pairs of parties with polylogarithmic locality (even without assuming multisend) for the honest majority setting. We complement this result by showing a negative result for the setting of dishonest majority. – Finally, and somewhat surprisingly, under stronger assumptions (i.e., trapdoor permutations with a reverse domain sampler, and compact and malicious circuit-private FHE), we construct a polylogarithmic-locality all-to-one RMT protocol, which is adaptively secure and tolerates any constant fraction of corruptions, without assuming either secure erasures or multisend. This last result uses a novel combination of adaptively secure (e.g., non-committing) encryption and (static) FHE to bypass the impossibility of compact adaptively secure FHE by Katz et al. [PKC’13], which we believe may be of independent interest. Intriguingly, even such assumptions do not allow reducing all-to-all RMT to all-to-one RMT (a reduction which is trivial in the non-CL setting). Still, we can implement what we call sublinear output-set RMT (SOS-RMT for short). We show how SOS-RMT can be used for SOS-MPC under the known bounds for feasibility of MPC in the standard (i.e., non-CL) setting assuming, in addition to SOS-RMT, an anonymous PKI.
Last updated:  2024-09-23
Mind the Bad Norms: Revisiting Compressed Oracle-based Quantum Indistinguishability Proofs
Ritam Bhaumik, Benoît Cogliati, Jordan Ethan, and Ashwin Jha
In this work, we revisit the Hosoyamada-Iwata (HI) proof for the quantum CPA security of the 4-round Luby-Rackoff construction and identify a gap that appears to undermine the security proof. We emphasize that this is not an attack, and the construction may still achieve the claimed security level. However, this gap raises concerns about the feasibility of establishing a formal security proof for the 4-round Luby-Rackoff construction. In fact, the issue persists even if the number of rounds is increased arbitrarily. On a positive note, we restore the security of the 4-round Luby-Rackoff construction in the non-adaptive setting, achieving security up to $2^{n/6}$ superposition queries. Furthermore, we establish the quantum CPA security of the 4-round MistyR and 5-round MistyL constructions, up to $2^{n/5}$ and $2^{n/7}$ superposition queries, respectively, where $n$ denotes the size of the underlying permutation.
Last updated:  2024-09-23
Adaptively Secure Attribute-Based Encryption from Witness Encryption
Brent Waters and Daniel Wichs
Attribute-based encryption (ABE) enables fine-grained control over which ciphertexts various users can decrypt. A master authority can create secret keys $sk_f$ with different functions (circuits) $f$ for different users. Anybody can encrypt a message under some attribute $x$ so that only recipients with a key $sk_f$ for a function such that $f(x)=1$ will be able to decrypt. There are a number of different approaches toward achieving selectively secure ABE, where the adversary has to decide on the challenge attribute $x$ ahead of time before seeing any keys, including constructions via bilinear maps (for NC1 circuits), learning with errors, or witness encryption. However, when it comes to adaptively secure ABE, the problem seems to be much more challenging, and we only know of two potential approaches: via the ``dual systems'' methodology from bilinear maps, or via indistinguishability obfuscation. In this work, we give a new approach that constructs adaptively secure ABE from witness encryption (along with statistically sound NIZKs and one-way functions). While witness encryption is a strong assumption, it appears to be fundamentally weaker than indistinguishability obfuscation. Moreover, we have candidate constructions of witness encryption from some assumptions (e.g., evasive LWE) from which we do not know how to construct indistinguishability obfuscation, giving us adaptive ABE from these assumptions as a corollary of our work.
Last updated:  2024-09-23
Worst-Case to Average-Case Hardness of LWE: An Alternative Perspective
Divesh Aggarwal, Leong Jin Ming, and Alexandra Veliche
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Langlois, Peikert, Regev, and Stehlé (STOC 2013) give worst-case to average-case reductions from lattice problems to LWE, specifically from the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best success probability achievable by any PPT algorithm for decision-LWE and that for search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing its practical security implications.
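As a reminder of the object under study, an LWE sample is a pair $(a, \langle a, s\rangle + e \bmod q)$ with $a$ uniform and $e$ small; search-LWE asks to recover $s$, while decision-LWE asks to distinguish such pairs from uniform ones. A toy Python sampler, with deliberately insecure, illustrative parameters:

# Toy LWE sampler (illustrative, insecure parameters): search-LWE asks to
# recover s from samples (a, <a,s> + e mod q); decision-LWE asks to
# distinguish such samples from uniform pairs.
import random

n, q, B = 8, 97, 2                                 # dimension, modulus, error bound
s = [random.randrange(q) for _ in range(n)]        # the secret

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-B, B)                      # small error term
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def uniform_sample():
    return [random.randrange(q) for _ in range(n)], random.randrange(q)

a, b = lwe_sample()
print(a, b)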
Last updated:  2024-09-23
Post-Quantum Asynchronous Remote Key Generation for FIDO2 Account Recovery
Jacqueline Brendel, Sebastian Clermont, and Marc Fischlin
The Fast IDentity Online (FIDO) Alliance has developed the widely adopted FIDO2 protocol suite that allows for passwordless online authentication. Cryptographic keys stored on a user's device (e.g., their smartphone) are used as credentials to authenticate to services by performing a challenge-response protocol. Yet, this approach leaves users unable to access their accounts in case their authenticator is lost. The device manufacturer Yubico thus proposed a FIDO2-compliant mechanism that allows users to easily create backup authenticators. Frymann et al. (CCS 2020) were the first to analyze the cryptographic core of this proposal, introducing the new primitive of Asynchronous Remote Key Generation (ARKG) and accompanying security definitions. Later works instantiated ARKG both from classical and post-quantum assumptions (ACNS 2023, EuroS&P 2023). As we point out in this paper, the security definitions put forward and used in these papers do not adequately capture the desired security requirements in FIDO2-based authentication and recovery. This issue was also identified in independent and concurrent work by Stebila and Wilson (AsiaCCS 2024), who proposed a new framework for the analysis of account recovery mechanisms, along with a secure post-quantum instantiation from KEMs and key-blinding signature schemes. In this work, we propose alternative security definitions for the primitive ARKG when used inside an account recovery mechanism in FIDO2. We give a secure instantiation from KEMs and standard signature schemes, which may in particular provide post-quantum security. Our solution strikes a middle ground between the compact, but (for this particular use case) inadequate, security notions put forward by Frymann et al., and the secure, but more involved and highly tailored, model introduced by Stebila and Wilson.
Last updated:  2024-09-23
Elliptic Curve Cryptography for the masses: Simple and fast finite field arithmetic
Michael Scott
Shaped prime moduli are often considered for use in elliptic curve and isogeny-based cryptography to allow for faster modular reduction. Here we focus on the most common choices of shaped primes that have been suggested, that is, pseudo-Mersenne, generalized Mersenne, and Montgomery-friendly primes. We consider how best to exploit these shapes for maximum efficiency, and provide an open-source tool to automatically generate, test, and time working high-level-language finite-field code. Next we consider the advantage to be gained from implementations that are written in assembly language and which exploit special instructions, SIMD hardware if present, and the particularities of the algorithm being implemented.
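As an example of the savings such shapes allow, reduction modulo a pseudo-Mersenne prime such as $p = 2^{255}-19$ replaces general division by a shift-multiply-add fold, since $2^{255} \equiv 19 \pmod p$. A minimal Python sketch of the idea (illustrative only; real implementations work at the machine-word level):

# Pseudo-Mersenne reduction for p = 2^255 - 19: since 2^255 = 19 (mod p),
# the high bits of x are folded into the low bits by multiplying by 19,
# so no general division is ever needed.
import random

P = 2**255 - 19

def reduce_pm(x):
    while x >> 255:                     # a couple of folding rounds suffice
        x = (x & (2**255 - 1)) + 19 * (x >> 255)
    return x - P if x >= P else x

x = random.getrandbits(510)             # e.g. a product of two field elements
assert reduce_pm(x) == x % P
print(reduce_pm(x))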
Last updated:  2024-09-23
Key-Homomorphic and Aggregate Verifiable Random Functions
Giulio Malavolta
A verifiable random function (VRF) allows one to compute a random-looking image, while at the same time providing a unique proof that the function was evaluated correctly. VRFs are a cornerstone of modern cryptography and, among other applications, are at the heart of recently proposed proof-of-stake consensus protocols. In this work we initiate the formal study of aggregate VRFs, i.e., VRFs that allow for the aggregation of proofs/images into a small digest whose size is independent of the number of input proofs/images, yet still enables sound verification. We formalize this notion along with its security properties, and we propose two constructions: The first scheme is conceptually simple, concretely efficient, and uses (asymmetric) bilinear groups of prime order. Pseudorandomness holds in the random oracle model and aggregate pseudorandomness is proven in the algebraic group model. The second scheme is in the standard model and its security is proven under the learning with errors (LWE) assumption. As a cryptographic building block of independent interest, we introduce the notion of key-homomorphic VRFs, where the verification keys and the proofs are endowed with a group structure. We conclude by discussing several applications of key-homomorphic and aggregate VRFs, such as distributed VRFs and aggregate proof-of-stake protocols.
Last updated:  2024-09-23
LARMix$\mathbf{++}$: Latency-Aware Routing in Mix Networks with Free Routes Topology
Mahdi Rahimi
Mix networks (mixnets) enhance anonymity by routing client messages through multiple hops, intentionally delaying or reordering these messages to ensure unlinkability. However, this process increases end-to-end latency, potentially degrading the client experience. To address this issue, LARMix (NDSS, 2024) proposed a low-latency routing methodology specifically designed for stratified mixnet architectures. Our paper extends this concept to Free Routes mixnet designs, where, unlike stratified topologies, there are no restrictions on node connections. We adapt several state-of-the-art low-latency routing strategies from both mix and Tor networks to optimize the Free Routes topology. Despite the benefits, low-latency routing can cause certain mixnodes to receive disproportionate amounts of traffic. To overcome this challenge, we introduce a novel load-balancing algorithm that evenly distributes traffic among nodes without significantly compromising low-latency characteristics. Our analytical and simulation experiments demonstrate a considerable reduction in latency compared to uniform routing methods, with negligible loss in message anonymity, defined as the confusion an adversary experiences when correlating messages exiting the mixnet to an initially targeted input message. Additionally, we provide an analysis of adversarial strategies, revealing a balanced trade-off between low latency and adversary advantages.
Last updated:  2024-09-23
Quarantined-TreeKEM: a Continuous Group Key Agreement for MLS, Secure in Presence of Inactive Users
Céline Chevalier, Guirec Lebrun, Ange Martinelli, and Abdul Rahman Taleb
The recently standardized secure group messaging protocol Messaging Layer Security (MLS) is designed to ensure asynchronous communication within large groups, with an almost-optimal communication cost and the same security level as point-to-point secure messaging protocols such as Signal. In particular, the core sub-protocol of MLS, a Continuous Group Key Agreement (CGKA) called TreeKEM, must generate a common group key that respects the fundamental security properties of post-compromise security and forward secrecy, which mitigate the effects of user corruption over time. Most research on CGKAs has focused on how to improve these two security properties. However, post-compromise security and forward secrecy require the active participation of, respectively, all compromised users and all users within the group. Inactive users – who remain offline for long periods – no longer update their encryption keys and therefore represent a vulnerability for the entire group. This issue has already been identified in the MLS standard, but no solution, other than expelling these inactive users after some disconnection time, has been found. We propose here a CGKA protocol based on TreeKEM and fully compatible with the MLS standard, which implements a quarantine mechanism for inactive users in order to mitigate the risk induced by these users during their inactivity period and before they are removed from the group. The mechanism updates the inactive users' encryption keys on their behalf and secures these keys with a secret sharing scheme. If some of the inactive users eventually reconnect, their quarantine stops and they are able to recover all the messages that were exchanged during their offline period. Our Quarantined-TreeKEM protocol thus increases the security of the original TreeKEM, with a very limited – and sometimes negative – communication overhead.
Last updated:  2024-09-23
Quadratic-like balanced functions and permutations
Claude Carlet and Irene Villa
We study those $(n,n)$-permutations, and more generally those balanced $(n,m)$-functions, whose component functions all admit a derivative equal to the constant function 1 (this property itself implies balancedness). We call these functions quadratic-like permutations (resp. quadratic-like balanced functions), since all quadratic balanced functions have this property. We show that all Feistel permutations, all crooked permutations and (more generally) all balanced strongly plateaued functions have this same property, and we observe that the notion is affine invariant. We also study, in each of these classes and in the class of quadratic-like APN permutations, the "reversed" property that every derivative in a nonzero direction has a component function equal to the constant function 1, and we show that this property can be satisfied only if $m\ge n$. We also show that all quadratic-like power permutations $F(x)=x^d$, $x\in \mathbb F_{2^n}$, must be quadratic, which generalizes a well-known similar result on power crooked functions. We give several constructions of quadratic-like permutations and balanced functions outside the three classes of quadratic balanced functions, permutations affine equivalent to Feistel permutations, and crooked permutations. Finally, we characterize the property by the Walsh transform.
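The defining property can be checked directly on small examples. The following Python brute force (our own illustration, not from the paper) verifies, for the quadratic Gold permutation $F(x)=x^3$ over $\mathbb F_{2^5}$, that every nonzero component function admits a derivative equal to the constant function 1:

N = 5
MOD = 0b100101                     # x^5 + x^2 + 1, a primitive polynomial

def gf_mul(a, b):                  # carry-less multiplication modulo MOD
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

F = [gf_mul(x, gf_mul(x, x)) for x in range(1 << N)]   # x -> x^3

def ip(v, x):                      # inner product on F_2^n
    return bin(v & x).count('1') & 1

# every nonzero component function has some derivative constantly equal to 1
assert all(
    any(all(ip(v, F[x ^ a] ^ F[x]) == 1 for x in range(1 << N))
        for a in range(1, 1 << N))
    for v in range(1, 1 << N))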
Last updated:  2024-09-23
A Formal Analysis of Apple’s iMessage PQ3 Protocol
Felix Linker, Ralf Sasse, and David Basin
We present the formal verification of Apple’s iMessage PQ3, a highly performant, device-to-device messaging protocol offering strong security guarantees even against an adversary with quantum computing capabilities. PQ3 leverages Apple’s identity services together with a custom, post-quantum secure initialization phase and afterwards it employs a double ratchet construction in the style of Signal, extended to provide post-quantum, post-compromise security. We present a detailed formal model of PQ3, a precise specification of its fine-grained security properties, and machine-checked security proofs using the TAMARIN prover. Particularly novel is the integration of post-quantum secure key encapsulation into the relevant protocol phases and the detailed security claims along with their complete formal analysis. Our analysis covers both key ratchets, including unbounded loops, which was believed by some to be out of scope of symbolic provers like TAMARIN (it is not!).
Last updated:  2024-09-23
Making Searchable Symmetric Encryption Schemes Smaller and Faster
Debrup Chakraborty, Avishek Majumder, and Subhabrata Samajder
Searchable Symmetric Encryption (SSE) has emerged as a promising tool for facilitating efficient query processing over encrypted data stored on untrusted cloud servers. Several techniques have been adopted to enhance the efficiency and security of SSE schemes. The query processing, storage, and communication costs of any SSE scheme are directly related to the size of the encrypted index that is stored on the server. To our knowledge, there is no prior work directed towards minimizing the index size. In this paper we introduce a novel technique to directly reduce the index size of any SSE. Our proposed technique generically transforms any secure single-keyword SSE into an equivalently functional and secure version with reduced storage requirements, resulting in faster search and reduced communication overhead. Our technique involves arranging the set of document identifiers $\mathsf{db}(w)$ related to a keyword $w$ in the leaf nodes of a complete binary tree and eventually obtaining a succinct representation of the set $\mathsf{db}(w)$. This small representation of $\mathsf{db}(w)$ leads to smaller index sizes. We give an extensive theoretical analysis of our scheme and prove its correctness. In addition, our comprehensive experimental analysis validates the effectiveness of our scheme on real and simulated data and shows that it can be deployed in practical situations.
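One natural way to realize such a succinct representation (sketched here under our own assumptions; the paper's encoding may differ in its details) is to cover the leaves indexed by $\mathsf{db}(w)$ with maximal full subtrees, so that dense identifier ranges collapse to a few internal nodes:

def tree_cover(ids, height):
    # cover the leaf set `ids` of a complete binary tree with 2^height leaves
    # by the minimal set of full subtrees; each node is a (level, index) pair
    cur, cover, level = set(ids), [], 0
    while level < height:
        parents = {i >> 1 for i in cur}
        nxt = {i for i in parents if 2 * i in cur and 2 * i + 1 in cur}
        merged = {2 * i for i in nxt} | {2 * i + 1 for i in nxt}
        cover += [(level, i) for i in cur - merged]   # unmergeable nodes stay
        cur, level = nxt, level + 1
    cover += [(level, i) for i in cur]
    return sorted(cover)            # (level, index) covers 2^level leaf ids

# leaves 0..6 and 9 compress to the subtrees {0-3}, {4,5}, {6}, {9}
print(tree_cover({0, 1, 2, 3, 4, 5, 6, 9}, height=4))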
Last updated:  2024-09-23
Post-quantum Asynchronous Deniable Key Exchange and the Signal Handshake
Jacqueline Brendel, Rune Fiedler, Felix Günther, Christian Janson, and Douglas Stebila
The key exchange protocol that establishes initial shared secrets in the handshake of the Signal end-to-end encrypted messaging protocol has several important characteristics: (1) it runs asynchronously (without both parties needing to be simultaneously online), (2) it provides implicit mutual authentication while retaining deniability (transcripts cannot be used to prove either party participated in the protocol), and (3) it retains security even if some keys are compromised (forward secrecy and beyond). All of these properties emerge from clever use of the highly flexible Diffie--Hellman protocol. While quantum-resistant key encapsulation mechanisms (KEMs) can replace Diffie--Hellman key exchange in some settings, there is no KEM-based replacement for the Signal handshake that achieves all three aforementioned properties, in part due to the inherent asymmetry of KEM operations. In this paper, we show how to construct asynchronous deniable key exchange by combining KEMs and designated verifier signature (DVS) schemes. There are several candidates for post-quantum DVS schemes, either direct constructions or via ring signatures. This yields a template for an efficient post-quantum realization of the Signal handshake with the same asynchronicity and security properties as the original Signal protocol.
Last updated:  2024-09-23
LOL: A Highly Flexible Framework for Designing Stream Ciphers
Dengguo Feng, Lin Jiao, Yonglin Hao, Qunxiong Zheng, Wenling Wu, Wenfeng Qi, Lei Zhang, Liting Zhang, Siwei Sun, and Tian Tian
In this paper, we propose LOL, a general framework for designing blockwise stream ciphers, aimed at ultrafast software implementations for the ubiquitous virtual networks in 5G/6G environments and a high security level suitable for post-quantum cryptography. The LOL framework is structurally strong, and all its components, as well as the LOL framework itself, enjoy high flexibility with various extensions. Following the LOL framework, we propose new stream cipher designs named LOL-MINI and LOL-DOUBLE with the support of the AES-NI and SIMD instructions: the former applies the basic LOL single mode while the latter uses the extended parallel-dual mode. Both LOL-MINI and LOL-DOUBLE support a 256-bit key length and, according to our thorough evaluations, have 256-bit security margins against all existing cryptanalysis methods, including differential, linear, integral, etc. The software performances of LOL-MINI and LOL-DOUBLE can reach 89 Gbps and 135 Gbps, respectively. In addition to pure encryption, the LOL-MINI and LOL-DOUBLE stream ciphers can also be applied in a stream-cipher-then-MAC strategy to yield an AEAD scheme.
Last updated:  2024-09-23
The Power of NAPs: Compressing OR-Proofs via Collision-Resistant Hashing
Katharina Boudgoust and Mark Simkin
Proofs of partial knowledge, first considered by Cramer, Damgård and Schoenmakers (CRYPTO'94) and De Santis et al. (FOCS'94), allow for proving the validity of $k$ out of $n$ different statements without revealing which ones those are. In this work, we present a new approach for transforming certain proof systems into new ones that allow for proving partial knowledge. The communication complexity of the resulting proof system depends only logarithmically on the total number of statements $n$, and its security relies only on the existence of collision-resistant hash functions. As an example, we show that our transformation is applicable to the proof systems of Goldreich, Micali, and Wigderson (FOCS'86) for the graph isomorphism and the graph 3-coloring problems. Our main technical tool, which we believe to be of independent interest, is a new cryptographic primitive called non-adaptively programmable functions (NAPs). These functions can be seen as pseudorandom functions which allow for re-programming the output at an input point that must be fixed during key generation. Even when given the re-programmed key, it remains infeasible to find out where the re-programming happened. Finally, as an additional technical tool, we also build explainable samplers for any distribution that can be sampled efficiently via rejection sampling, and use them to construct NAPs for various output distributions.
Last updated:  2024-09-23
Tighter Adaptive IBEs and VRFs: Revisiting Waters' Artificial Abort
Goichiro Hanaoka, Shuichi Katsumata, Kei Kimura, Kaoru Takemure, and Shota Yamada
One of the most popular techniques to prove adaptive security of identity-based encryption (IBE) and verifiable random functions (VRF) is the partitioning technique. Currently, there are only two methods to relate the adversary's advantage and runtime $(\epsilon, {\sf T})$ to those of the reduction $(\epsilon_{\sf proof}, {\sf T}_{\sf proof})$ using this technique: One originates with Waters (Eurocrypt 2005), who introduced the famous artificial abort step to prove his IBE, achieving $(\epsilon_{\sf proof}, {\sf T}_{\sf proof}) = (O(\epsilon/Q), {\sf T} + O(Q^2/\epsilon^2))$, where $Q$ is the number of key queries. Bellare and Ristenpart (Eurocrypt 2009) provide an alternative analysis for the same scheme, removing the artificial abort step and resulting in $(\epsilon_{\sf proof}, {\sf T}_{\sf proof}) = (O(\epsilon^2/Q), {\sf T} + O(Q))$. Importantly, the current reductions all lose quadratically in $\epsilon$. In this paper, we revisit this two-decade-old problem and analyze proofs based on the partitioning technique through a new lens. For instance, the Waters IBE can now be proven secure with $(\epsilon_{\sf proof}, {\sf T}_{\sf proof}) = (O(\epsilon^{3/2}/Q), {\sf T} + O(Q))$, breaking the quadratic dependence on $\epsilon$. At the core of our improvement is a finer estimation of the failure probability of the reduction in Waters' original proof relying on artificial abort. We use Bonferroni's inequality, a tunable inequality obtained by cutting off higher-order terms from the equality derived by the inclusion-exclusion principle. Our analysis not only improves the reduction of known constructions but also opens the door to new constructions. While a similar improvement to the Waters IBE is possible for the lattice-based IBE by Agrawal, Boneh, and Boyen (Eurocrypt 2010), we can slightly tweak the so-called partitioning function in their construction, achieving $(\epsilon_{\sf proof}, {\sf T}_{\sf proof}) = (O(\epsilon/Q), {\sf T} + O(Q))$. This is a much better reduction than the previously known $(O(\epsilon^3/Q^2), {\sf T} + O(Q))$. We also propose the first VRF with proof and verification key sizes sublinear in the security parameter under the standard $d$-LIN assumption, while simultaneously improving the reduction cost compared to all prior constructions.
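For reference (a standard fact, stated here only for context), Bonferroni's inequalities truncate the inclusion-exclusion expansion of a union of events $A_1, \dots, A_Q$, with odd truncation points overestimating and even ones underestimating:

$$\sum_{j=1}^{2t}(-1)^{j-1}S_j \;\le\; \Pr\Big[\bigcup_{i=1}^{Q}A_i\Big] \;\le\; \sum_{j=1}^{2t+1}(-1)^{j-1}S_j, \qquad S_j=\sum_{1\le i_1<\cdots<i_j\le Q}\Pr[A_{i_1}\cap\cdots\cap A_{i_j}].$$

Cutting at $j=1$ recovers the union bound; later cutoffs trade tightness against the cost of estimating the higher-order terms.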
Last updated:  2024-09-22
Knot-based Key Exchange protocol
Silvia Sconza and Arno Wildi
We propose a new key exchange protocol based on the Generalised Diffie-Hellman Key Exchange. In the latter, instead of using a group action, we consider a semigroup action. In our proposal, the semigroup is the set of oriented knots in $\mathbb{S}^3$ with the operation of connected sum. As the semigroup action, we choose the action of the semigroup on itself through the connected sum. For the protocol to work, we need to use knot invariants, which allow us to create the shared secret key starting from the same knot represented in two different ways. In particular, we use finite type invariants. The security of the protocol is guaranteed by the hardness of decomposing knots in the semigroup.
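Since connected sum makes oriented knots a commutative monoid, the protocol flow can be mimicked in a toy model (our own illustration, not the paper's machinery) where a knot is just the multiset of its prime summands and the "invariant" is a canonical encoding. In the real scheme the parties hold genuinely different representations of the same knot, which is exactly why finite type invariants are needed:

# Toy model of the message flow only: "knots" are multisets of prime
# summands, "connected sum" is multiset union, and the "invariant" is a
# canonical sorted encoding. This free-monoid model is trivially insecure;
# it only illustrates why both parties derive the same key.
from collections import Counter
import secrets, hashlib

PRIME_KNOTS = ['3_1', '4_1', '5_1', '5_2', '6_1', '6_2', '6_3']

def random_knot(k=5):
    return Counter(secrets.choice(PRIME_KNOTS) for _ in range(k))

def connected_sum(a, b):
    return a + b                          # multiset union

def invariant(knot):                      # canonical form -> shared key
    enc = ','.join(sorted(knot.elements())).encode()
    return hashlib.sha256(enc).hexdigest()

K = random_knot()                         # public knot
A, B = random_knot(), random_knot()       # secrets of Alice and Bob
tA = connected_sum(A, K)                  # Alice -> Bob
tB = connected_sum(B, K)                  # Bob -> Alice
assert invariant(connected_sum(A, tB)) == invariant(connected_sum(B, tA))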
Last updated:  2024-09-22
Revisiting the Slot-to-Coefficient Transformation for BGV and BFV
Robin Geelen
Numerous applications in homomorphic encryption require an operation that moves the slots of a ciphertext to the coefficients of a different ciphertext. For the BGV and BFV schemes, the only efficient algorithms to implement this slot-to-coefficient transformation were proposed in the setting of non-power-of-two cyclotomic rings. In this paper, we devise an FFT-like method to decompose the slot-to-coefficient transformation (and its inverse) for power-of-two cyclotomic rings. The proposed method can handle both fully and sparsely packed slots. Our algorithm brings down the computational complexity of the slot-to-coefficient transformation from a linear to a logarithmic number of FHE operations, which is shown via a detailed complexity analysis. The new procedures are implemented in Microsoft SEAL for BFV. The experiments report a speedup of up to $44\times$ when packing $2^{12}$ elements from $\operatorname{GF}(8191^8)$. We also study a fully packed bootstrapping operation that refreshes $2^{15}$ elements from $\operatorname{GF}(65537)$ and obtain an amortized speedup of $12\times$.
Last updated:  2024-09-22
Distributing Keys and Random Secrets with Constant Complexity
Benny Applebaum and Benny Pinkas
In the *Distributed Secret Sharing Generation* (DSG) problem $n$ parties wish to obliviously sample a secret-sharing of a random value $s$ taken from some finite field, without letting any of the parties learn $s$. *Distributed Key Generation* (DKG) is a closely related variant of the problem in which, in addition to their private shares, the parties also generate a public ``commitment'' $g^s$ to the secret. Both DSG and DKG are central primitives in the domain of secure multiparty computation and threshold cryptography. In this paper, we study the communication complexity of DSG and DKG. Motivated by large-scale cryptocurrency and blockchain applications, we ask whether it is possible to obtain protocols in which the communication per party is a constant that does not grow with the number of parties. We answer this question in the affirmative in a model where broadcast communication is implemented via a public bulletin board (e.g., a ledger). Specifically, we present a constant-round DSG/DKG protocol in which the number of bits that each party sends/receives from the public bulletin board is a constant that depends only on the security parameter and the field size but does not grow with the number of parties $n$. In contrast, in all existing solutions at least some of the parties send $\Omega(n)$ bits. Our protocol works in the near-threshold setting. Given arbitrary privacy/correctness parameters $0<\tau_p<\tau_c<1$, the protocol tolerates up to $\tau_p n$ actively corrupted parties and delivers shares of a random secret according to some $\tau_p n$-private $\tau_c n$-correct secret sharing scheme, such that the adversary cannot bias the secret or learn anything about it. The protocol is based on non-interactive zero-knowledge proofs, non-interactive commitments, and a novel secret-sharing scheme with special robustness properties that is based on Low-Density Parity-Check codes. As a secondary contribution, we extend the formal MPC-based treatment of DKG/DSG and study new aspects of affine secret sharing schemes.
Last updated:  2024-09-22
Efficient Fuzzy Private Set Intersection from Fuzzy Mapping
Ying Gao, Lin Qi, Xiang Liu, Yuanchao Luo, and Longxin Wang
Private set intersection (PSI) allows Sender, holding a set \(X\), and Receiver, holding a set \(Y\), to compute only the intersection \(X\cap Y\) for Receiver. We focus on a variant of PSI, called fuzzy PSI (FPSI), where Receiver only gets the points in \(X\) that are at distance not greater than a threshold from some point in \(Y\). Most current FPSI approaches first pick out pairs of points that are potentially close and then determine whether the distance of each selected pair is indeed small enough to yield the FPSI result. Their complexity bottlenecks stem from the excessive number of point pairs selected by the first picking process. Regarding this process, we consider a more general notion, called fuzzy mapping (Fmap), which maps each point of the two parties to a set of identifiers such that closely located points share an identifier; these shared identifiers determine the selected point pairs. We initiate the formal study of Fmap and show novel Fmap instances for the Hamming and \(L_\infty\) distances that reduce the number of selected pairs. We demonstrate the powerful capability of Fmap, with some superior properties, in constructing FPSI variants, and provide a generic construction from Fmap to FPSI. Our new Fmap instances lead to the fastest semi-honest secure FPSI protocols in high-dimensional space to date, for both Hamming and general \(L_{\mathsf p\in [1, \infty]}\) distances. For the Hamming distance, our protocol is the first to achieve strictly linear complexity in the input sizes. For the \(L_{\mathsf p\in [1, \infty]}\) distance, our protocol is the first to achieve complexity linear in the input sizes, dimension, and threshold.
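To make the Fmap idea concrete, here is a toy grid-based instance for the \(L_\infty\) distance (our illustrative assumption; the paper's instances are more refined): each point is bucketed into cells of side \(2\delta\) under all \(2^d\) axis shifts in \(\{0,\delta\}^d\), and any two points within distance \(\delta\) share at least one (shift, cell) identifier.

# Toy grid-based fuzzy mapping for L_infinity: per coordinate, an interval of
# length delta fits inside a cell of width 2*delta for at least one of the
# two shifts {0, delta}, so some shift vector works for all coordinates.
from itertools import product

def fmap(point, delta):
    ids = set()
    for shifts in product((0, delta), repeat=len(point)):
        cell = tuple((c + s) // (2 * delta) for c, s in zip(point, shifts))
        ids.add((shifts, cell))
    return ids

p, q, delta = (10, 20), (12, 18), 3
assert fmap(p, delta) & fmap(q, delta)   # close points share an identifier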
Last updated:  2024-09-22
Quantum Pseudorandom Scramblers
Chuhan Lu, Minglong Qin, Fang Song, Penghui Yao, and Mingnan Zhao
Quantum pseudorandom state generators (PRSGs) have stimulated exciting developments in recent years. A PRSG, on a fixed initial (e.g., all-zero) state, produces an output state that is computationally indistinguishable from a Haar random state. However, pseudorandomness of the output state is not guaranteed on other initial states. In fact, known PRSG constructions provably fail on some initial states. In this work, we propose and construct quantum Pseudorandom State Scramblers (PRSSs), which can produce a pseudorandom state on an arbitrary initial state. In the information-theoretic setting, we obtain a scrambler which maps an arbitrary initial state to a distribution of quantum states that is close to Haar random in total variation distance. As a result, our scrambler exhibits a dispersing property: loosely, it can span an $\epsilon$-net of the state space. This significantly strengthens what standard PRSGs can induce, since they are only required to approximate a Haar random state on average and may thus concentrate on a small region of the state space. Our PRSS construction develops a parallel extension of the famous Kac's walk, and we show that it mixes exponentially faster than the standard Kac's walk. This constitutes the core of our proof. We also describe a few applications of PRSSs. While our PRSS construction assumes a post-quantum one-way function, PRSSs are potentially a weaker primitive and can be separated from one-way functions in a relativized world, similar to standard PRSGs.
Last updated:  2024-09-22
Password-Protected Threshold Signatures
Stefan Dziembowski, Stanislaw Jarecki, Paweł Kędzior, Hugo Krawczyk, Chan Nam Ngo, and Jiayu Xu
We witness an increase in applications like cryptocurrency wallets, which involve users issuing signatures using private keys. To protect these keys from loss or compromise, users commonly outsource them to a custodial server. This creates a new point of failure, because compromise of such a server leaks the user’s key, and if user authentication is implemented with a password then this password becomes open to an offline dictionary attack (ODA). A better solution is to secret-share the key among a set of servers, possibly including user’s own device(s), and implement password authentication and signature computation using threshold cryptography. We propose a notion of augmented password protected threshold signature scheme (aptSIG) which captures the best possible security level for this setting. Using standard threshold cryptography techniques, i.e. threshold password authentication and threshold signatures, one can guarantee that compromising up to t out of n servers reveals no information on either the key or the password. However, we extend this with a novel property, namely that compromising even all n servers also does not leak any information, except via an unavoidable ODA attack, which reveals the key (and the password) only if the attacker guesses the password. We define aptSIG in the Universally Composable (UC) framework and show that it can be constructed very efficiently, using a black-box composition of any UC threshold signature and a UC augmented Password-Protected Secret Sharing (aPPSS), which we define as an extension of prior notion of PPSS. As concrete instantiations we obtain secure aptSIG schemes for ECDSA and BLS signatures with very small overhead over the respective threshold signature. Finally, we note that both the notion and our generic solution for augmented password-protected threshold signatures can be generalized to password-protecting MPC for any keyed functions.
Last updated:  2024-09-21
The Concrete Security of Two-Party Computation: Simple Definitions, and Tight Proofs for PSI and OPRFs
Mihir Bellare, Rishabh Ranjan, Doreen Riepel, and Ali Aldakheel
This paper initiates a concrete-security treatment of two-party secure computation. The first step is to propose, as target, a simple, indistinguishability-based definition that we call InI. This could be considered a poor choice if it were weaker than standard simulation-based definitions, but it is not; we show that for functionalities satisfying a condition called invertibility, which we define and show is met by functionalities of practical interest like PSI and its variants, the two definitions are equivalent. Based on this, we move forward to study the concrete security of a canonical OPRF-based construction of PSI, giving a tight proof of InI security of the constructed PSI protocol based on the security of the OPRF. This leads us to the concrete security of OPRFs, where we show how different DH-style assumptions on the underlying group yield proofs of different degrees of tightness, including some that are tight, for the well-known and efficient 2H-DH OPRF, and thus for the corresponding DH-PSI protocol. We then give a new PSI protocol, called salted-DH PSI, that is as efficient as DH-PSI, yet enjoys tighter proofs.
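For context, the 2H-DH OPRF evaluates $F(k,x) = H_2(x, H_1(x)^k)$ obliviously: the client blinds $H_1(x)$ with a random exponent, the server raises the result to its key, and the client unblinds. A minimal Python sketch over a toy safe-prime group (all parameters are illustrative and far too small for real use, and the hash-to-group is a toy stand-in):

# Sketch of 2H-DH (2HashDH): F(k, x) = H2(x, H1(x)^k), evaluated obliviously.
import hashlib, secrets

p = 2 * 1019 + 1          # toy safe prime 2039; subgroup order q = 1019
q = (p - 1) // 2

def H1(x: bytes) -> int:  # toy hash into the order-q subgroup of squares mod p
    h = int.from_bytes(hashlib.sha256(x).digest(), 'big') % p
    return pow(h, 2, p)

def H2(x: bytes, y: int) -> bytes:
    return hashlib.sha256(x + y.to_bytes(2, 'big')).digest()

k = secrets.randbelow(q - 1) + 1          # server's OPRF key

x = b"input"
r = secrets.randbelow(q - 1) + 1          # client: blind
a = pow(H1(x), r, p)                      # client -> server
b = pow(a, k, p)                          # server -> client
r_inv = pow(r, -1, q)                     # exponents live mod the group order
out = H2(x, pow(b, r_inv, p))             # client: unblind and hash

assert out == H2(x, pow(H1(x), k, p))     # equals direct evaluation F(k, x)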
Last updated:  2024-09-21
On Schubert cells of Projective Geometry and quadratic public keys of Multivariate Cryptography
Vasyl Ustimenko
Jordan-Gauss graphs are bipartite graphs given by special quadratic equations over a commutative ring K with unity, with partition sets K^n and K^m, n ≥ m, such that the neighbourhood of each vertex is defined by a system of linear equations given in row-echelon form. We use families of these graphs for the construction of new quadratic and cubic surjective multivariate maps F from K^n onto K^m (or K^n onto K^n) with trapdoor accelerators T, i.e., pieces of information that allow one to compute a preimage of a given value of F in polynomial time. The technique allows us to use the information on a quadratic map F from K^s to K^r, s ≥ r, with trapdoor accelerator T for the construction of another map G from K^{s+rs} onto K^{r+rs} with a trapdoor accelerator. In the case of finite fields it can be used for the construction of new cryptosystems from known pairs (F, T). Thus we can introduce an enveloping trapdoor accelerator for the Matsumoto-Imai cryptosystem over finite fields of characteristic 2, for Oil and Vinegar public keys over F_q (TUOV in particular), and for quadratic multivariate public keys defined over Jordan-Gauss graphs D(n, K), where K is an arbitrary finite commutative ring with a nontrivial multiplicative group.
Last updated:  2024-09-21
Honest Majority GOD MPC with $O(\mathsf{depth}(C))$ Rounds and Low Online Communication
Amit Agarwal, Alexander Bienstock, Ivan Damgård, and Daniel Escudero
In the context of secure multiparty computation (MPC) protocols with guaranteed output delivery (GOD) for the honest majority setting, the state of the art in terms of communication is the work of (Goyal et al., CRYPTO'20), which communicates O(n|C|) field elements, where |C| is the size of the circuit being computed and n is the number of parties. Their round complexity, as usual in secret-sharing-based MPC, is proportional to O(depth(C)), but only in the optimistic case where there is no cheating. Under attack, the number of rounds can increase to \Omega(n^2) before honest parties receive output, which is undesirable for shallow circuits with depth(C) << n^2. In contrast, other protocols that require only O(depth(C)) rounds even in the worst case exist, but the state of the art from (Choudhury and Patra, Transactions on Information Theory, 2017) still requires \Omega(n^4|C|) communication in the offline phase, and \Omega(n^3|C|) in the online phase (for both point-to-point and broadcast channels). We thus see a tension between efficient communication and number of rounds. For reference, the recent work of (Abraham et al., EUROCRYPT'23) shows that for perfect security and t<n/3, protocols with both linear communication and O(depth(C)) rounds exist. We address this state of affairs by presenting a novel honest majority GOD protocol that maintains O(depth(C)) rounds, even under attack, while improving over the communication of the most efficient protocol in this setting by Choudhury and Patra. More precisely, our protocol has point-to-point (P2P) online communication of O(n|C|), accompanied by O(n|C|) broadcasted (BC) elements, while the offline phase has O(n^3|C|) P2P communication with O(n^3|C|) BC. This improves over the previous best result and reduces the tension between communication and round complexity. Our protocol is achieved via a careful use of packed secret-sharing in order to improve the communication of existing verifiable secret-sharing approaches, although at the expense of weakening their robustness guarantees: reconstruction of shared values may fail, but only if the adversary gives away the identities of many corrupt parties. We show that this less powerful notion is still useful for MPC, and we use this as a core building block in our construction. Using this weaker VSS, we adapt the recent secure-with-abort TurboPack protocol (Escudero et al., CCS'22) to the GOD setting without significantly sacrificing efficiency.
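For intuition on the core building block, packed Shamir secret sharing hides ell secrets in a single degree-(t + ell - 1) polynomial, so one sharing amortizes over ell values. A minimal Python sketch, with toy parameters of our own choosing:

# Packed Shamir sharing: one polynomial of degree t + ell - 1 holds ell
# secrets at the points P-1, P-2, ..., P-ell; party i receives f(i).
# Any t shares reveal nothing; any t + ell shares reconstruct all secrets.
import random

P = 2**61 - 1  # a prime field (illustrative choice)

def lagrange_eval(points, x):
    # evaluate, at x, the unique polynomial through the given (xi, yi) pairs
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def share(vals, t, n):
    ell = len(vals)
    pts = [(-(j + 1) % P, v) for j, v in enumerate(vals)]          # secrets
    pts += [(j, random.randrange(P)) for j in range(1, t + 1)]     # randomness
    return [(i, lagrange_eval(pts, i)) for i in range(1, n + 1)]

def reconstruct(shares, ell):
    return [lagrange_eval(shares, -(j + 1) % P) for j in range(ell)]

shares = share([11, 22, 33], t=2, n=7)       # ell = 3 secrets, degree 4
print(reconstruct(shares[:5], 3))            # any t + ell = 5 shares suffice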
Last updated:  2024-09-21
On Security Proofs of Existing Equivalence Class Signature Schemes
Balthazar Bauer, Georg Fuchsbauer, and Fabian Regen
Equivalence class signatures (EQS; Asiacrypt '14) sign vectors of elements from a bilinear group. Anyone can transform a signature on a vector into a signature on any multiple of that vector; signatures thus authenticate equivalence classes. A transformed signature/message pair is indistinguishable from a random signature on a random message. EQS have been used to efficiently instantiate (delegatable) anonymous credentials, (round-optimal) blind signatures, ring and group signatures, anonymous tokens, and contact-tracing schemes, to name a few. The original EQS construction (J. Crypto '19) is proven secure in the generic group model, and the first scheme from standard assumptions (PKC '18) satisfies a weaker model insufficient for most applications. Two works (Asiacrypt '19, PKC '22) propose applicable schemes that assume trusted parameters. Their unforgeability is argued via a security proof from standard (or non-interactive) assumptions. We show that their security proofs are flawed and explain the subtle issue. While the schemes might be provable in the algebraic group model (AGM), we instead show that the original construction, which is more efficient and has found applications in many works, is secure in the AGM under a parametrized non-interactive hardness assumption.
Last updated:  2024-09-21
Two generalizations of almost perfect nonlinearity
Claude Carlet
Almost perfect nonlinear (in brief, APN) functions are vectorial functions $F:\mathbb F_2^n\rightarrow \mathbb F_2^n$ playing roles in several domains of information protection, at the intersection of computer science and mathematics. Their definition comes from cryptography and is also related to coding theory. When they are used as substitution boxes (S-boxes, which are the only nonlinear components in block ciphers), APN functions contribute optimally to the resistance against differential attacks. This of course provides strong cryptographic motivation for their study, which has been very active since the 90's and has posed interesting and difficult mathematical questions, some of which are still unanswered. Since the introduction of differential attacks, more recent types of cryptanalyses have been designed, such as integral attacks. No notion about S-boxes has been identified which would play a similar role with respect to integral attacks. In this paper, we study two generalizations of APNness that are natural from a mathematical point of view, since they directly extend classical characterizations of APN functions. We call these two notions strong non-normality and sum-freedom. The former already existed for Boolean functions and the latter is new. We study how they relate to cryptanalyses (the relation is stronger for sum-freedom). The two notions behave differently from each other, while they have similar definitions. They also behave differently from differential uniformity, which is a well-known generalization of APNness. We study the different ways to define them and, taking Kasami functions as an example, how difficult they are to handle. We prove their satisfiability, their monotonicity, and their invariance under classical equivalence relations, and we characterize them by the Walsh transform. We begin a study of the multiplicative inverse function (used as a substitution box in the Advanced Encryption Standard and other block ciphers) from the viewpoint of these two notions. In particular, we find a simple expression for the sum of the values taken by this function over affine subspaces of $\mathbb F_{2^n}$ that are not vector subspaces. This formula shows that, in such a case, the sum never vanishes (which is a remarkable property of the inverse function). We also give a formula for the case of a vector space defined by one of its bases.
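The classical characterization that these generalizations extend is easy to test exhaustively in small dimension: $F$ is APN iff every derivative $x \mapsto F(x+a)+F(x)$ in a nonzero direction $a$ is 2-to-1. The Python sketch below (our own illustration) confirms the classical fact that the multiplicative inverse function is APN over $\mathbb F_{2^5}$ (odd $n$):

N = 5
MOD = 0b100101                     # x^5 + x^2 + 1

def gf_mul(a, b):                  # carry-less multiplication modulo MOD
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

F = [gf_pow(x, 2**N - 2) for x in range(1 << N)]  # inverse map, with 0 -> 0

def is_apn(F):
    for a in range(1, len(F)):
        counts = {}
        for x in range(len(F)):
            d = F[x ^ a] ^ F[x]
            counts[d] = counts.get(d, 0) + 1
        if max(counts.values()) > 2:              # some derivative not 2-to-1
            return False
    return True

print(is_apn(F))  # True for odd n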
Last updated:  2024-09-21
Signature-based Witness Encryption with Compact Ciphertext
Gennaro Avitabile, Nico Döttling, Bernardo Magri, Christos Sakkas, and Stella Wohnig
Signature-based witness encryption (SWE) is a recently proposed notion that allows one to encrypt a message with respect to a tag $T$ and a set of signature verification keys. The resulting ciphertext can only be decrypted by a party who holds at least $k$ valid signatures w.r.t. $T$ under $k$ different verification keys out of the $n$ keys specified at encryption time. Natural applications of this primitive involve distributed settings (e.g., blockchains), where multiple parties sign predictable messages, such as polling or randomness beacons. However, known SWE schemes without trusted setup have ciphertexts that scale linearly in the number of verification keys. This quickly becomes a major bottleneck as the system gets more distributed and the number of parties increases. Towards showing the feasibility of SWE with ciphertext size sub-linear in the number of keys, we give a construction based on indistinguishability obfuscation (iO) for Turing machines and strongly puncturable signatures (SPS).
Last updated:  2024-09-20
Batch Arguments to NIZKs from One-Way Functions
Eli Bradley, Brent Waters, and David J. Wu
Succinctness and zero-knowledge are two fundamental properties in the study of cryptographic proof systems. Several recent works have formalized the connections between these two notions by showing how to realize non-interactive zero-knowledge (NIZK) arguments from succinct non-interactive arguments. Specifically, Champion and Wu (CRYPTO 2023) as well as Bitansky, Kamath, Paneth, Rothblum, and Vasudevan (ePrint 2023) recently showed how to construct a NIZK argument for NP from a (somewhere-sound) non-interactive batch argument (BARG) and a dual-mode commitment scheme (and in the case of the Champion-Wu construction, a local pseudorandom generator). The main open question is whether a BARG suffices for a NIZK (just assuming one-way functions). In this work, we first show that an adaptively-sound BARG for NP together with a one-way function implies a computational NIZK argument for NP. We then show that existing BARGs from standard algebraic assumptions, which only achieve the weaker notion of somewhere soundness, are also adaptively sound if we assume sub-exponential security. This transformation may also be of independent interest. Taken together, we obtain a NIZK argument for NP from one-way functions and a sub-exponentially-secure somewhere-sound BARG for NP. If we instead assume plain public-key encryption, we show that a standard polynomially-secure somewhere-sound batch argument for NP suffices for the same implication. As a corollary, this means a somewhere-sound BARG can be used to generically upgrade any semantically-secure public-key encryption scheme into one secure against chosen-ciphertext attacks. More broadly, our results demonstrate that constructing non-interactive batch arguments for NP is essentially no easier than constructing NIZK arguments for NP.
Last updated:  2024-09-20
Symmetric and Dual PRFs from Standard Assumptions: A Generic Validation of a Prevailing Assumption
Mihir Bellare and Anna Lysyanskaya
A two-input function is a dual PRF if it is a PRF when keyed by either of its inputs. Dual PRFs are assumed in the design and analysis of numerous primitives and protocols including HMAC, AMAC, TLS 1.3 and MLS. But, not only do we not know whether particular functions on which the assumption is made really are dual PRFs; we do not know if dual PRFs even exist. What if the goal is impossible? This paper addresses this with a foundational treatment of dual PRFs, giving constructions based on standard assumptions. This provides what we call a generic validation of the dual PRF assumption. Our approach is to introduce and construct symmetric PRFs, which imply dual PRFs and may be of independent interest. We give a general construction of a symmetric PRF based on a function having a weak form of collision resistance coupled with a leakage hardcore function, a strengthening of the usual notion of hardcore functions we introduce. We instantiate this general construction in two ways to obtain two specific symmetric and dual PRFs, the first assuming any collision-resistant hash function, and the second assuming any one-way permutation. A construction based on any one-way function evades us and is left as an intriguing open problem.
Last updated:  2024-09-20
GoAT: File Geolocation via Anchor Timestamping
Deepak Maram, Iddo Bentov, Mahimna Kelkar, and Ari Juels
Decentralized storage systems are a crucial component of the rapidly growing blockchain ecosystem. They aim to achieve robustness by proving that they store multiple replicas of every file. They have a serious limitation, though: They cannot prove that file replicas are spread across distinct systems, e.g., different hard drives. Consequently, files are vulnerable to loss in a single, locally catastrophic event. We introduce a new primitive, Proof of Geo-Retrievability or PoGeoRet, that proves that a file is located within a strict geographic boundary. Using PoGeoRet, one can, for example, prove that a file is spread across several distinct geographic regions---and by extension across multiple systems, e.g., hard drives. We define what it means for a PoGeoRet scheme to be complete and sound, extending prior formalism in key ways. We also propose GoAT, a practical PoGeoRet scheme to prove file geolocation. Unlike previous geolocation systems that only offer nominal geolocation guarantees and require dedicated anchors, GoAT geolocates provers using any timestamping server on the internet with a fixed, known location as a geolocation anchor. GoAT's geolocation guarantees directly depend on the physical constraints of the internet, making them very reliable. GoAT internally uses a communication-efficient Proof-of-Retrievability (PoRet) scheme in a novel way to achieve a constant-size PoRet component in its proofs. We validate GoAT's practicality by conducting an initial measurement study to find usable anchors and perform a real-world experiment. The results show that a significant fraction of the internet can be used as anchors and that GoAT achieves geolocation radii as low as 500 km.
Last updated:  2024-09-20
Revisiting Key Decomposition Techniques for FHE: Simpler, Faster and More Generic
Mariya Georgieva Belorgey, Sergiu Carpov, Nicolas Gama, Sandra Guasch, and Dimitar Jetchev
Ring-LWE based homomorphic encryption computations in large depth use a combination of two techniques: 1) decomposition of big numbers into small limbs/digits, and 2) efficient cyclotomic multiplications modulo $X^N + 1$. It was long believed that the two mechanisms had to be strongly related, as in the full-RNS setting that uses a CRT decomposition of big numbers over an NTT-friendly family of prime numbers, and NTT over the same primes for multiplications. However, in this setting, the NTT was the bottleneck of all large-depth FHE computations. A breakthrough result of Kim et al. (Crypto'2023) managed to overcome this limitation by introducing a second gadget decomposition and showing that it indeed shifts the bottleneck and renders the cost of NTT computations negligible compared to the rest of the computation. In this paper, we extend this result (far) beyond the full-RNS setting and show that we can completely decouple the big-number decomposition from the cyclotomic arithmetic aspects. As a result, we get modulus switching/rescaling for free. We verify both in theory and in practice that the performance of key switching, external and internal products, and automorphisms using our representation is faster than that achieved by Kim et al., and we discuss the high impact of these results for low-level or hardware optimizations as well as the benefits of the new parametrizations for FHE compilers. We even manage to lower the running time of the gate bootstrapping of TFHE by eliminating one eighth of the FFTs and one sixth of the linear operations, which lowers the running time below 5.5 ms on recent CPUs.
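The first technique is easy to exhibit in isolation. Below is a minimal Python sketch (our illustration, independent of the paper and of any FHE library) of signed base-$B$ digit decomposition, the kind of gadget decomposition that keeps noise growth small in key switching and external products.

```python
# Signed base-B (gadget) decomposition of an integer a mod q:
# a == sum(d_i * B^i) (mod q), with each digit d_i in [-B/2, B/2).
# Small digits are what bound the noise added by RLWE key switching.
def gadget_decompose(a, B, levels, q):
    a %= q
    digits = []
    for _ in range(levels):
        d = a % B
        if d >= B // 2:          # centre the digit so that |d| <= B/2
            d -= B
        digits.append(d)
        a = (a - d) // B
    return digits

def gadget_recompose(digits, B, q):
    return sum(d * B**i for i, d in enumerate(digits)) % q

q, B, levels = 2**32, 2**8, 4    # B^levels == q, so recomposition is exact
a = 0xDEADBEEF
d = gadget_decompose(a, B, levels, q)
assert gadget_recompose(d, B, q) == a
```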
Last updated:  2024-09-20
Updatable Private Set Intersection Revisited: Extended Functionalities, Deletion, and Worst-Case Complexity
Saikrishna Badrinarayanan, Peihan Miao, Xinyi Shi, Max Tromanhauser, and Ruida Zeng
Private set intersection (PSI) allows two mutually distrusting parties each holding a private set of elements, to learn the intersection of their sets without revealing anything beyond the intersection. Recent work (Badrinarayanan et al., PoPETS'22) initiates the study of updatable PSI (UPSI), which allows the two parties to compute PSI on a regular basis with sets that constantly get updated, where both the computation and communication complexity only grow with the size of the small updates and not the large entire sets. However, there are several limitations of their presented protocols. First, they can only be used to compute the plain PSI functionality and do not support extended functionalities such as PSI-Cardinality and PSI-Sum. Second, they only allow parties to add new elements to their existing set and do not support arbitrary deletion of elements. Finally, their addition-only protocols either require both parties to learn the output or only achieve low complexity in an amortized sense and incur linear worst-case complexity. In this work, we address all the above limitations. In particular, we study UPSI with semi-honest security in both the addition-only and addition-deletion settings. We present new protocols for both settings that support plain PSI as well as extended functionalities including PSI-Cardinality and PSI-Sum, achieving one-sided output (which implies two-sided output). In the addition-only setting, we also present a protocol for a more general functionality Circuit-PSI that outputs secret shares of the intersection. All of our protocols have worst-case computation and communication complexity that only grow with the set updates instead of the entire sets (except for a polylogarithmic factor). We implement our new UPSI protocols and compare with the state-of-the-art protocols for PSI and extended functionalities. Our protocols compare favorably when the total set sizes are sufficiently large, the new updates are sufficiently small, or in networks with low bandwidth.
Last updated:  2024-09-20
Revisiting Pairing-friendly Curves with Embedding Degrees 10 and 14
Yu Dai, Debiao He, Cong Peng, Zhijian Yang, and Chang-an Zhao
Since 2015, there has been a significant decrease in the asymptotic complexity of computing discrete logarithms in finite fields. As a result, the key sizes of many mainstream pairing-friendly curves have to be updated to maintain the desired security level. In PKC'20, Guillevic conducted a comprehensive assessment of the security of a series of pairing-friendly curves with embedding degrees ranging from $9$ to $17$. In this paper, we focus on pairing-friendly curves with embedding degrees of 10 and 14. First, we extend the optimized formula of the optimal pairing on BW13-310, a 128-bit secure curve with a 310-bit prime $p$ and embedding degree $13$, to our target curves. This generalization allows us to compute the optimal pairing in approximately $\log r/(2\varphi(k))$ Miller iterations, where $r$ and $k$ are the order of the pairing groups and the embedding degree, respectively. Second, we develop optimized algorithms for cofactor multiplication for $\mathbb{G}_1$ and $\mathbb{G}_2$, as well as subgroup membership testing for $\mathbb{G}_2$, on these curves. Based on these theoretical results, a new 128-bit secure curve emerges: BW14-351. Finally, we provide detailed performance comparisons between BW14-351 and other popular curves on a 64-bit platform in terms of pairing computation, hashing to $\mathbb{G}_1$ and $\mathbb{G}_2$, group exponentiations, and subgroup membership testing. Our results demonstrate that BW14-351 is a strong candidate for building pairing-based cryptographic protocols.
Last updated:  2024-09-20
Proofs of Space with Maximal Hardness
Leonid Reyzin
In a proof of space, a prover performs a complex computation with a large output. A verifier periodically checks that the prover still holds the output. The security goal for a proof of space construction is to ensure that a prover who erases even a portion of the output has to redo a large portion of the complex computation in order to satisfy the verifier. In existing constructions of proofs of space, the computation that a cheating prover is forced to redo is a small fraction (vanishing or small constant) of the original complex computation. The only exception is a construction of Pietrzak (ITCS 2019) that requires extremely depth-robust graphs, which result in impractically high complexity of the initialization process. We present the first proof of space of reasonable complexity that ensures that the prover has to redo almost the entire computation (fraction arbitrarily close to 1) when trying to save even an arbitrarily small constant fraction of the space. Our construction is a generalization of an existing construction called SDR (Fisch, Eurocrypt 2019) deployed on the Filecoin blockchain. Our improvements, while general, also demonstrate that the already deployed construction has considerably better security than previously shown. Technically, our construction can be viewed as amplifying predecessor-robust graphs. These are directed acyclic graphs in which every subgraph of sufficient relative size $\pi$ contains a large single-sink connected component of relative size $\alpha_\pi$. We take a predecessor-robust graph with constant parameters $(\pi, \alpha_\pi)$, and build a bigger predecessor-robust graph with a near-optimal set of parameters and additional guarantees on sink placement, while increasing the degree only by a small additive constant.
Last updated:  2024-09-20
C'est très CHIC: A compact password-authenticated key exchange from lattice-based KEM
Afonso Arriaga, Manuel Barbosa, Stanislaw Jarecki, and Marjan Skrobot
Driven by NIST's post-quantum standardization efforts and the selection of Kyber as a lattice-based Key-Encapsulation Mechanism (KEM), several Password Authenticated Key Exchange (PAKE) protocols have recently been proposed that leverage a KEM to create an efficient, easy-to-implement and secure PAKE. In two recent works, Beguinet et al. (ACNS 2023) and Pan and Zeng (ASIACRYPT 2023) proposed generic compilers that transform a KEM into a PAKE, relying on an Ideal Cipher (IC) defined over a group. However, although an IC on a group is often used in cryptographic protocols, special care must be taken to instantiate such objects in practice, especially when a low-entropy key is used. To address this concern, Dos Santos et al. (EUROCRYPT 2023) proposed a relaxation of the IC model under the Universal Composability (UC) framework called Half-Ideal Cipher (HIC). They demonstrate how to construct a UC-secure PAKE protocol, EKE-KEM, from a KEM and a modified 2-round Feistel construction called m2F. Remarkably, the m2F sidesteps the use of an IC over a group, and instead employs an IC defined over a fixed-length bitstring domain, which is easier to instantiate. In this paper, we introduce a novel PAKE protocol called CHIC that improves the communication and computation efficiency of EKE-KEM by avoiding the HIC abstraction. Instead, we split the KEM public key in two parts and use the m2F directly, without further randomization. We provide a detailed proof of the security of CHIC and establish precise security requirements for the underlying KEM, including one-wayness and anonymity of ciphertexts, and uniformity of public keys. Our findings extend to general KEM-based EKE-style protocols and show that a passively secure KEM is not sufficient. In this respect, our results align with those of Pan and Zeng (ASIACRYPT 2023), but contradict the analyses of KEM-to-PAKE compilers by Beguinet et al. (ACNS 2023) and Dos Santos et al. (EUROCRYPT 2023). Finally, we provide an implementation of CHIC, highlighting its minimal overhead compared to the underlying KEM -- Kyber. An interesting aspect of the implementation is that we reuse the rejection sampling procedure in Kyber reference code to address the challenge of hashing onto the public key space. As of now, to the best of our knowledge, CHIC stands as the most efficient PAKE protocol from black-box KEM that offers rigorously proven UC security.
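For intuition, here is a generic two-round Feistel over fixed-length bitstrings, the kind of object that m2F refines. This is a simplified stand-in, not the m2F of the paper: the round functions are hypothetical hash-based PRFs, and m2F's specific randomization and modeling choices are omitted.

```python
import hashlib

HALF = 16  # bytes per half of the block

def rf(key: bytes, i: int, x: bytes) -> bytes:
    # Hypothetical round function: a hash truncated to one half-block.
    return hashlib.sha256(key + bytes([i]) + x).digest()[:HALF]

def feistel2_enc(key: bytes, block: bytes) -> bytes:
    l, r = block[:HALF], block[HALF:]
    l = bytes(a ^ b for a, b in zip(l, rf(key, 0, r)))   # round 1
    r = bytes(a ^ b for a, b in zip(r, rf(key, 1, l)))   # round 2
    return l + r

def feistel2_dec(key: bytes, block: bytes) -> bytes:
    l, r = block[:HALF], block[HALF:]
    r = bytes(a ^ b for a, b in zip(r, rf(key, 1, l)))   # undo round 2
    l = bytes(a ^ b for a, b in zip(l, rf(key, 0, r)))   # undo round 1
    return l + r

key, msg = b"pw-derived-key", bytes(range(32))
assert feistel2_dec(key, feistel2_enc(key, msg)) == msg
```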
Last updated:  2024-09-20
On the Spinor Genus and the Distinguishing Lattice Isomorphism Problem
Cong Ling, Jingbo Liu, and Andrew Mendelsohn
This paper addresses the spinor genus, a previously unrecognized classification of quadratic forms in the context of cryptography, related to the lattice isomorphism problem (LIP). The spinor genus lies between the genus and equivalence class, thus refining the concept of genus. We present algorithms to determine whether two quadratic forms belong to the same spinor genus. If they do not, it provides a negative answer to the distinguishing variant of LIP. However, these algorithms have very high complexity, and we show that the proportion of genera splitting into multiple spinor genera is vanishing (assuming rank $n \geq 3$). For the special case of anisotropic integral binary forms ($n = 2$) over number fields with class number 1, we offer an efficient quantum algorithm to test if two forms lie in the same spinor genus. Our algorithm does not apply to the HAWK protocol, which uses integral binary Hermitian forms over number fields with class number greater than 1.
Last updated:  2024-09-20
On the Semidirect Discrete Logarithm Problem in Finite Groups
Christopher Battarbee, Giacomo Borin, Julian Brough, Ryann Cartor, Tobias Hemmert, Nadia Heninger, David Jao, Delaram Kahrobaei, Laura Maddison, Edoardo Persichetti, Angela Robinson, Daniel Smith-Tone, and Rainer Steinwandt
We present an efficient quantum algorithm for solving the semidirect discrete logarithm problem (SDLP) in any finite group. The believed hardness of the semidirect discrete logarithm problem underlies more than a decade of works constructing candidate post-quantum cryptographic algorithms from nonabelian groups. We use a series of reduction results to show that it suffices to consider SDLP in finite simple groups. We then apply the celebrated Classification of Finite Simple Groups to consider each family. The infinite families of finite simple groups admit, in a fairly general setting, linear algebraic attacks providing a reduction to the classical discrete logarithm problem. For the sporadic simple groups, we show that their inherent properties render them unsuitable for cryptographically hard SDLP instances, which we illustrate via a Baby-Step Giant-Step style attack against SDLP in the Monster Group. Our quantum SDLP algorithm is fully constructive, up to the computation of maximal normal subgroups, for all but three remaining cases that appear to be gaps in the literature on constructive recognition of groups; for these cases SDLP is no harder than finding a linear representation. We conclude that SDLP is not a suitable post-quantum hardness assumption for any choice of finite group.
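The Baby-Step Giant-Step strategy mentioned above is the classical square-root time-space trade-off for discrete logarithms. For reference, a textbook BSGS in a prime-field group (our illustration only; the paper's attack adapts the idea to semidirect products in the Monster group):

```python
from math import isqrt

def bsgs(g, h, p):
    """Find x with g^x = h (mod p), or None; O(sqrt(p)) time and space."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j
    gm_inv = pow(g, -m, p)                       # g^{-m} (Python >= 3.8)
    gamma = h
    for i in range(m):                           # giant steps: h * g^{-im}
        if gamma in baby:
            return i * m + baby[gamma]           # x = i*m + j
        gamma = gamma * gm_inv % p
    return None

assert bsgs(5, pow(5, 1234, 10007), 10007) == 1234
```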
Last updated:  2024-09-20
Mystrium: Wide Block Encryption Efficient on Entry-Level Processors
Parisa Amiri Eliasi, Koustabh Ghosh, and Joan Daemen
We present a tweakable wide block cipher called Mystrium and show that it is the fastest such primitive on low-end processors that lack dedicated AES or other cryptographic instructions, such as ARM Cortex-A7. Mystrium is based on the provably secure double-decker mode, which requires a doubly extendable cryptographic keyed (deck) function and a universal hash function. We build a new deck function called Xymmer that for its compression part uses Multimixer-128, the fastest universal hash for such processors, and for its expansion part uses a newly designed permutation, $\mathcal{G}_{512}$. Deck functions can also be used in modes to build encryption, authenticated encryption, and authentication schemes; hence, Xymmer is of independent interest. The current state-of-the-art wide tweakable block cipher Adiantum-XChaCha12-AES encrypts 4096-byte messages at 11.5 cycles per byte on ARM Cortex-A7, whereas Mystrium achieves 6.8 cycles per byte while having a higher claimed security.
Last updated:  2024-09-20
A Note on Low-Communication Secure Multiparty Computation via Circuit Depth-Reduction
Pierre Charbit, Geoffroy Couteau, Pierre Meyer, and Reza Naserasr
We consider the graph-theoretic problem of removing (few) nodes from a directed acyclic graph in order to reduce its depth. While this problem is intractable in the general case, we provide a variety of algorithms in the case where the graph is that of a circuit of fan-in (at most) two, and explore applications of these algorithms to secure multiparty computation with low communication. Over the past few years, a paradigm for low-communication secure multiparty computation has found success based on decomposing a circuit into low-depth ``chunks''. This approach was however previously limited to circuits with a ``layered'' structure. Our graph-theoretic approach extends this paradigm to all circuits. In particular, we obtain the following contributions (a toy sketch of the layered baseline follows this list):
1) Fractionally linear-communication MPC in the correlated randomness model: We provide an $N$-party protocol for computing an $n$-input, $m$-output $\mathsf{F}$-arithmetic circuit with $s$ internal gates (over any basis of binary gates) with communication complexity $(\frac{2}{3}s + n + m)\cdot N\cdot\log |\mathsf{F}|$, which can be improved to $((1+\epsilon)\cdot\frac{2}{5}s+n+m)\cdot N\cdot\log |\mathsf{F}|$ (at the cost of increasing the computational overhead from a small constant factor to a large one). Previously, comparable protocols either used more than $s\cdot N\cdot \log |\mathsf{F}|$ bits of communication, required super-polynomial computation, were restricted to layered circuits, or tolerated a sub-optimal corruption threshold.
2) Sublinear-communication MPC: Assuming the existence of $N$-party Homomorphic Secret Sharing for logarithmic depth circuits (respectively doubly logarithmic depth circuits), we show there exists sublinear-communication secure $N$-party computation for \emph{all} $\log^{1+o(1)}$-depth (resp. $(\log\log)^{1+o(1)}$-depth) circuits. Previously, this result was limited to $\mathcal{O}(\log)$-depth (resp. $\mathcal{O}(\log\log)$-depth) circuits, or to circuits with a specific structure (e.g. layered).
3) The 1-out-of-M-OT complexity of MPC: We introduce the ``1-out-of-M-OT complexity of MPC'' of a function $f$, denoted $C_M(f)$, as the number of oracle calls required to securely compute $f$ in the 1-out-of-M-OT hybrid model. We establish the following upper bound: for every $M\geq 2$, $C_M(f) \leq (1+g(M))\cdot \frac{2 |f|}{5}$, where $g(M)$ is an explicit vanishing function.
We also obtain additional contributions to reducing the amount of bootstrapping for fully homomorphic encryption, and to other types of sublinear-communication MPC protocols such as those based on correlated symmetric private information retrieval.
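The layered baseline that the paper generalizes is simple to state: in a layered circuit, deleting every $k$-th layer leaves chunks of depth less than $k$ at the cost of roughly $|V|/k$ removed nodes. A toy Python sketch of that baseline (our illustration; the paper's algorithms are precisely about fan-in-2 DAGs where no such layering exists):

```python
# Toy depth-reduction for a *layered* DAG: removing every k-th layer
# cuts the graph into chunks of depth < k.
def depth(nodes, preds):
    d = {}
    for v in nodes:                              # nodes in topological order
        d[v] = 1 + max((d[u] for u in preds[v]), default=0)
    return d

def cut_every_kth_layer(nodes, preds, k):
    d = depth(nodes, preds)
    return [v for v in nodes if d[v] % k == 0]   # nodes to remove

# A path a -> b -> c -> d -> e has depth 5; k = 2 removes every other
# node, leaving chunks of depth 1.
nodes = ["a", "b", "c", "d", "e"]
preds = {"a": [], "b": ["a"], "c": ["b"], "d": ["c"], "e": ["d"]}
print(cut_every_kth_layer(nodes, preds, 2))      # ['b', 'd']
```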
Last updated:  2024-09-20
Isogeny-Based Secure Voting Systems for Large-Scale Elections
Mohammed El Baraka and Siham Ezzouak
This article presents an in-depth study of isogeny-based cryptographic methods for the development of secure and scalable electronic voting systems. We address critical challenges such as voter privacy, vote integrity, and resistance to quantum attacks. Our work introduces novel cryptographic protocols leveraging isogenies, establishing a robust framework for post-quantum secure electronic voting. We provide detailed mathematical foundations, protocol designs, and security proofs, demonstrating the efficacy and scalability of our proposed system in large-scale elections.
Last updated:  2024-09-20
Bandersnatch: a fast elliptic curve built over the BLS12-381 scalar field
Simon Masson, Antonio Sanso, and Zhenfei Zhang
In this short note, we introduce Bandersnatch, a new elliptic curve built over the BLS12-381 scalar field. The curve is equipped with an efficient endomorphism, allowing a fast scalar multiplication algorithm. Our benchmark shows that scalar multiplication is 42% faster compared to Jubjub, another curve with similar properties. Nonetheless, Bandersnatch does not provide any performance improvement for either rank-1 constraint systems (R1CS) or multi-scalar multiplications, compared to the Jubjub curve.
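The endomorphism speed-up follows the standard GLV pattern: decompose $k$ as $k_0 + k_1\lambda \bmod r$ with $k_0, k_1$ roughly $\sqrt r$ (via lattice reduction, omitted here) and evaluate $k_0 P + k_1\phi(P)$ with one shared double-and-add chain, halving the number of doublings. A curve-agnostic Python sketch (the group, the eigenvalue, and all constants below are toy placeholders, not Bandersnatch parameters):

```python
# Shamir's trick: compute k0*P + k1*Q with a single chain of doublings.
# With a GLV decomposition k = k0 + k1*lambda (mod r), Q = phi(P), and
# k0, k1 ~ sqrt(r), this roughly halves the doublings of a plain ladder.
def double_scalar_mul(k0, P, k1, Q, add, dbl, identity):
    R = identity
    for i in range(max(k0.bit_length(), k1.bit_length()) - 1, -1, -1):
        R = dbl(R)
        if (k0 >> i) & 1:
            R = add(R, P)
        if (k1 >> i) & 1:
            R = add(R, Q)
    return R

# Sanity check in the additive group Z/1009, a stand-in for curve points.
r = 1009
add = lambda a, b: (a + b) % r
dbl = lambda a: (2 * a) % r
P = 5
lam = 171                     # hypothetical eigenvalue: phi(P) = lam*P
Q = lam * P % r
k0, k1 = 23, 29               # a short decomposition of k = k0 + k1*lam
k = (k0 + k1 * lam) % r
assert double_scalar_mul(k0, P, k1, Q, add, dbl, 0) == k * P % r
```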
Last updated:  2024-09-20
SQIPrime: A dimension 2 variant of SQISignHD with non-smooth challenge isogenies
Max Duparc and Tako Boris Fouotsa
We introduce SQIPrime, a post-quantum digital signature scheme based on the Deuring correspondence and Kani's Lemma. Compared to its predecessors SQISign and especially SQISignHD, SQIPrime further expands the use of high-dimensional isogenies, already in use in the verification of SQISignHD, to all its subroutines. In doing so, it no longer relies on smooth-degree isogenies (of dimension 1). Intriguingly, this includes the challenge isogeny, which is also of non-smooth degree but has an accessible kernel. The fact that the isogenies do not have rational kernels makes it possible to fit more rational power-of-2 torsion points, which are necessary when computing and representing the response isogeny. SQIPrime operates with prime numbers of the form $p = 2^\alpha f-1$. We describe two variants of SQIPrime. SQIPrime4D incorporates the novelties described above and uses dimension 4 isogenies to represent the response isogeny. The runtime of higher-dimensional isogeny computation is exponential in the dimension; hence the smaller the dimension, the better for efficiency. The second variant, SQIPrime2D, solely uses dimension 2 isogenies. This is achieved by setting the degree of the secret isogeny to be equal to that of the challenge isogeny and further exploiting Kani's Lemma. SQIPrime2D is more efficient than SQIPrime4D and SQISignHD, at the cost of being comparatively less compact, but still very compact compared to non-isogeny-based post-quantum signatures.
Last updated:  2024-09-20
Quantum Algorithms for Fast Correlation Attacks on LFSR-Based Stream Ciphers
Akinori Hosoyamada
This paper presents quantum algorithms for fast correlation attacks, one of the most powerful techniques for cryptanalysis of LFSR-based stream ciphers in the classical setting. Typical fast correlation attacks recover a value related to the initial state of the underlying LFSR by solving a decoding problem on a binary linear code with the Fast Walsh-Hadamard Transform (FWHT). Applying the FWHT to a function in the classical setting is mathematically equivalent to applying the Hadamard transform to the corresponding state in quantum computation. While the classical FWHT on a function with $\ell$-bit inputs requires $O(\ell 2^\ell)$ operations, the Hadamard transform on $\ell$-qubit states requires only a parallel application of $O(\ell)$ basic gates. This difference leads to the exponential speed-up of some quantum algorithms, including Simon's period finding algorithm. Given these facts, the question naturally arises of whether a quantum speed-up can also be achieved for fast correlation attacks by replacing the classical FWHT with the quantum Hadamard transform. We show quantum algorithms achieving a speed-up in this way, introducing a new attack model in the Q2 setting. The new model endows adversaries with quite strong power, but we demonstrate its feasibility by showing that certain members of the ChaCha and Salsa20 families will likely be secure in the new model. Our attack exploits the link between LFSR state updates and multiplication in a finite field to apply Shor's algorithm for the discrete logarithm problem. We apply our attacks to SNOW 2.0, SNOW 3G, and Sosemanuk, observing a large speed-up over classical attacks.
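The classical FWHT that the quantum Hadamard transform replaces is a standard in-place butterfly; a reference implementation for concreteness (textbook code, independent of the paper):

```python
# In-place Fast Walsh-Hadamard Transform on a table of 2^l values.
# Costs O(l * 2^l) additions: the classical cost that the paper contrasts
# with the O(l)-gate quantum Hadamard transform.
def fwht(a):
    n = len(a)            # must be a power of two
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# Walsh spectrum of f(x) = x0 AND x1 (the two low bits of x):
# entries of fwht([(-1)^f(x)]) / 2^l are the correlations of f with
# the linear functions, exactly what a fast correlation attack decodes.
l = 3
table = [(-1) ** ((x & 1) & ((x >> 1) & 1)) for x in range(2 ** l)]
print(fwht(table))
```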
Last updated:  2024-09-20
Interactive Threshold Mercurial Signatures and Applications
Masayuki Abe, Masaya Nanri, Octavio Perez Kempner, and Mehdi Tibouchi
Mercurial signatures are an extension of equivalence class signatures that allow malleability for the public keys, messages, and signatures within the respective classes. Unfortunately, the most efficient construction to date suffers from a weak public key class-hiding property, where the original signer, who holds the signing key, can link the public keys in the same class. This is a severe limitation in their applications, where the signer is often not trusted to preserve privacy. This paper presents two-party and multi-party interactive threshold mercurial signatures that overcome the above limitation by eliminating the single entity who knows the signing key. For the general case, we propose two constructions. The first follows the same interactive structure as the two-party case, avoiding complex distributed computations such as randomness generation, inversion, and multiplication, and even eliminates the need for private communication between parties. The second is based on a blueprint for general multi-party computation using verifiable secret sharing, but adopts optimizations. We show applications in anonymous credential systems that individually fit the two-party and multi-party constructions. In particular, in the two-party case, our approach provides stronger privacy by completely removing the trust in the authorities. We also discuss more applications, from blind signatures to multi-signatures and threshold ring signatures. Finally, to showcase the practicality of our approach, we implement our interactive constructions and compare them against related alternatives.
Last updated:  2024-09-20
Blind-Folded: Simple Power Analysis Attacks using Data with a Single Trace and no Training
Xunyue Hu, Quentin L. Meunier, and Emmanuelle Encrenaz
Side-Channel Attacks target the recovery of key material in cryptographic implementations by measuring physical quantities such as power consumption during the execution of a program. Simple Power Attacks consist of deducing secret information from a trace using a single or a few samples, as opposed to differential attacks, which require many traces. Software cryptographic implementations now all contain a data-independent execution path, but often do not consider variations in power consumption associated with data. In this work, we show that a technique commonly used to select a value from different possible values in a control-independent way leads to significant power differences depending on the value selected. This difference is actually so important that a single sample can be considered for attacking one condition, and no training on other traces is required. We exploit this finding to propose the first single-trace attack without any knowledge gained on previous executions, using trace folding. We target the two modular exponentiation implementations in Libgcrypt, getting respectively 100% and 99.98% of correct bits on average over 30 executions using 2,048-bit exponents. We also use this technique to attack the scalar multiplication in ECDSA, successfully recovering all secret nonces in 1,000 executions. Finally, the insights we gained from this work allow us to show that a countermeasure proposed in the literature for the safe loading of precomputed operands in the context of windowed implementations can be attacked as well.
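The selection technique in question is the classic mask-based constant-time select, shown below in a short Python sketch (an illustration of the general pattern, not Libgcrypt's code). Its control flow is data-independent, yet the mask's Hamming weight differs sharply between the two branches, which is precisely the kind of data-dependent power difference the paper exploits.

```python
# Constant-time selection between two values: the control flow does not
# depend on the secret bit, which defeats address-based SPA and timing.
# But the mask is either all-zeros or all-ones, so its Hamming weight
# (0 vs 64) still leaks through the power side channel.
W = 2**64 - 1  # 64-bit word

def ct_select(bit, a, b):
    mask = (-bit) & W            # 0x000...0 if bit == 0, 0xFFF...F if bit == 1
    return (a & mask) | (b & ~mask & W)

assert ct_select(1, 0xAAAA, 0xBBBB) == 0xAAAA
assert ct_select(0, 0xAAAA, 0xBBBB) == 0xBBBB
```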
Last updated:  2024-09-20
Early Stopping for Any Number of Corruptions
Julian Loss and Jesper Buus Nielsen
Minimizing the round complexity of byzantine broadcast is a fundamental question in distributed computing and cryptography. In this work, we present the first early stopping byzantine broadcast protocol that tolerates up to $t=n-1$ malicious corruptions and terminates in $O(\min\{f^2,t+1\})$ rounds for any execution with $f\leq t$ actual corruptions. Our protocol is deterministic, adaptively secure, and works assuming a plain public key infrastructure. Prior early-stopping protocols all either require honest majority or tolerate only up to $t=(1-\epsilon)n$ malicious corruptions while requiring either trusted setup or strong number theoretic hardness assumptions. As our key contribution, we show a novel tool called a polariser that allows us to transfer certificate-based strategies from the honest majority setting to settings with a dishonest majority.
Last updated:  2024-09-20
SQISignHD: New Dimensions in Cryptography
Pierrick Dartois, Antonin Leroux, Damien Robert, and Benjamin Wesolowski
We introduce SQIsignHD, a new post-quantum digital signature scheme inspired by SQIsign. SQIsignHD exploits the recent algorithmic breakthrough underlying the attack on SIDH, which makes it possible to efficiently represent isogenies of arbitrary degrees as components of a higher-dimensional isogeny. SQIsignHD overcomes the main drawbacks of SQIsign. First, it scales well to high security levels, since the public parameters for SQIsignHD are easy to generate: the characteristic of the underlying field need only be of the form $2^{f}3^{f'}-1$. Second, the signing procedure is simpler and more efficient. Our signing procedure implemented in C runs in 28 ms, which is a significant improvement over SQIsign. Third, the scheme is easier to analyse, allowing for a much more compelling security reduction. Finally, the signature sizes are even more compact than (the already record-breaking) SQIsign, with compressed signatures as small as 109 bytes for the post-quantum NIST-1 level of security. These advantages may come at the expense of the verification, which now requires the computation of an isogeny in dimension $4$, a task whose optimised cost is still uncertain, as it has been the focus of very little attention. Our experimental SageMath implementation of the verification runs in around 600 ms, indicating the potential cryptographic interest of dimension $4$ isogenies after optimisations and low-level implementation.
Last updated:  2024-09-20
Communication Efficient Secure and Private Multi-Party Deep Learning
Sankha Das, Sayak Ray Chowdhury, Nishanth Chandran, Divya Gupta, Satya Lokam, and Rahul Sharma
Distributed training that enables multiple parties to jointly train a model on their respective datasets is a promising approach to address the challenges of large volumes of diverse data for training modern machine learning models. However, this approach immediately raises security and privacy concerns; both about each party wishing to protect its data from other parties during training and preventing leakage of private information from the model after training through various inference attacks. In this paper, we address both these concerns simultaneously by designing efficient Differentially Private, secure Multiparty Computation (DP-MPC) protocols for jointly training a model on data distributed among multiple parties. Our DP-MPC protocol in the two-party setting is 56-794× more communication-efficient and 16-182× faster than previous such protocols. Conceptually, our work simplifies and improves on previous attempts to combine techniques from secure multiparty computation and differential privacy, especially in the context of ML training.
Last updated:  2024-09-20
Dishonest Majority Multiparty Computation over Matrix Rings
Hongqing Liu, Chaoping Xing, Chen Yuan, and Taoxu Zou
Privacy-preserving machine learning (PPML) has gained growing importance over the last few years. One of the biggest challenges is to improve the efficiency of PPML so that its communication and computation costs are affordable for large machine learning models such as deep learning. Linear algebra, and matrix multiplication in particular, occupies a significant part of the computation in deep learning, e.g., in deep convolutional neural networks (CNNs). Thus, it is desirable to design MPC protocols specialized for matrix operations. In this work, we propose a dishonest-majority MPC protocol over matrix rings which supports matrix multiplication and addition. Our MPC protocol can be seen as a variant of the SPDZ protocol: the MAC and global key of our protocol are vectors of length $m$, and the secret of our protocol is an $m\times m$ matrix. Compared to the classic SPDZ protocol, our MPC protocol reduces the communication complexity of securely computing a matrix multiplication by a factor of at least $m$. We also show that the communication complexity of our MPC protocol is asymptotically as good as that of [16], which also presented a dishonest-majority MPC protocol specialized for matrix operations; i.e., the communication complexity of securely computing a multiplication gate is $O(m^2n^2\log q)$ in the preprocessing phase and $O(m^2n\log q)$ in the online phase. The share size and the number of multiplications of our protocol are reduced to around $50\%$ and $40\%$ of those of [16], respectively. However, we take a completely different approach. The protocol in [16] uses a variant of the BFV scheme to embed a whole matrix into a single ciphertext and then treats the matrix operation as an entry-wise operation on the ciphertext, while our approach resorts to a variant of vector oblivious linear evaluation (VOLE) called subfield VOLE [33], which can securely compute the additive sharing of $v {\bf x}$ for $v\in \mathbb{F}_{q^b}, {\bf x}\in \mathbb{F}_q^a$ with sublinear communication complexity. Finally, we note that our MPC protocol can be easily extended to small fields.
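The saving for a multiplication gate can be previewed with additively shared matrices and a matrix Beaver triple: the parties open only two masked $m\times m$ matrices per product, rather than one opening per scalar multiplication. A two-party toy sketch in Python/NumPy (plain additive shares with no MACs, so it shows only the communication pattern, not the SPDZ-style authentication of the paper):

```python
import numpy as np

q, m = 65537, 3
rng = np.random.default_rng(0)
rand = lambda: rng.integers(0, q, size=(m, m), dtype=np.int64)

def share(X):                       # additive 2-party sharing mod q
    r = rand()
    return r, (X - r) % q

# Dealer: a matrix Beaver triple C = A @ B (mod q), handed out as shares.
A, B = rand(), rand()
A0, A1 = share(A); B0, B1 = share(B); C0, C1 = share(A @ B % q)

# The parties hold shares of X and Y and want shares of X @ Y.
X, Y = rand(), rand()
X0, X1 = share(X); Y0, Y1 = share(Y)

# Each party broadcasts its shares of D = X - A and E = Y - B: the only
# communication is two m x m matrices, regardless of the m^3 scalar
# multiplications inside the product.
D = (X0 - A0 + X1 - A1) % q
E = (Y0 - B0 + Y1 - B1) % q

# Local computation of output shares, using X @ Y = C + D@B + A@E + D@E.
Z0 = (C0 + D @ B0 + A0 @ E + D @ E) % q
Z1 = (C1 + D @ B1 + A1 @ E) % q
assert np.array_equal((Z0 + Z1) % q, X @ Y % q)
```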
Last updated:  2024-09-20
Aether: Approaching the Holy Grail in Asynchronous BFT
Xiaohai Dai, Chaozheng Ding, Hai Jin, Julian Loss, and Ling Ren
State-of-the-art asynchronous Byzantine Fault Tolerance (BFT) protocols integrate a partially-synchronous optimistic path. The holy grail in this paradigm is to match the performance of a partially-synchronous protocol in favorable situations and match the performance of a purely asynchronous protocol in unfavorable situations. Several prior works have made progress toward this goal by matching the efficiency of a partially-synchronous protocol in favorable conditions. However, their performance compared to purely asynchronous protocols is reduced when network conditions are unfavorable. To address these shortcomings, a recent work, Abraxas (CCS'23), presents the first optimistic asynchronous BFT protocol that retains stable throughput in all situations. However, Abraxas still incurs very high worst-case latency in unfavorable situations because it is slow at detecting the failure of its optimistic path. Another recent work, ParBFT (CCS'23) guarantees good latency in all situations, but suffers from reduced throughput in unfavorable situations due to its use of extra Asynchronous Binary Agreement (ABA) instances. To approach our holy grail, we propose Aether, which delivers performance comparable to partially-synchronous protocols in favorable situations, and attains performance on par with purely asynchronous protocols in unfavorable situations—in both throughput and latency. Aether also runs the two paths simultaneously. It adopts two-chain HotStuff as the optimistic path, thus achieving high performance in favorable situations. As for the pessimistic path, we introduce a new primitive Dual-functional Byzantine Agreement (DBA), which packs the functionalities of biased ABA and Validated Asynchronous Byzantine Agreement (VABA). Aether runs DBA instances continuously as the pessimistic path. DBA’s ABA functionality quickly detects the optimistic path’s failure, ensuring Aether’s low latency in unfavorable situations. Meanwhile, the VABA functionality continuously produces blocks, maintaining Aether’s high throughput. Additionally, the biased property ensures that blocks committed via the optimistic path are respected by DBA instances, guaranteeing consistency across two paths. We conduct extensive experiments to demonstrate that Aether achieves high throughput and low latency in all situations.
Last updated:  2024-09-20
Finding Complete Impossible Differential Attacks on AndRX Ciphers and Efficient Distinguishers for ARX Designs
Debasmita Chakraborty, Hosein Hadipour, Phuong Hoa Nguyen, and Maria Eichlseder
The impossible differential (ID) attack is one of the most important cryptanalytic techniques for block ciphers. There are two phases to finding an ID attack: searching for the distinguisher and building a key recovery upon it. Previous works only focused on automated distinguisher discovery, leaving key recovery as a manual post-processing task, which may lead to a suboptimal final complexity. At EUROCRYPT 2023, Hadipour et al. introduced a unified constraint programming (CP) approach based on satisfiability for finding optimal complete ID attacks in strongly aligned ciphers. While this approach was extended to weakly-aligned designs like PRESENT at ToSC 2024, its application to ARX and AndRX ciphers remained as future work. Moreover, this method only exploited ID distinguishers with direct contradictions at the junction of two deterministic transitions. In contrast, some ID distinguishers, particularly for ARX and AndRX designs, may not be detectable by checking only the existence of direct contradictions. This paper fills these gaps by extending Hadipour et al.'s method to handle indirect contradictions and adapting it for ARX and AndRX designs. We also present a similar method for identifying zero-correlation (ZC) distinguishers. Moreover, we extend our new model for finding ID distinguishers to a unified optimization problem that includes both the distinguisher and the key recovery for AndRX designs. Our method improves ID attacks and introduces new distinguishers for several ciphers, such as SIMON, SPECK, Simeck, ChaCha, Chaskey, LEA, and SipHash. For example, we achieve a one-round improvement in the ID attacks against SIMON-64-96, SIMON-64-128, SIMON-128-128, SIMON-128-256 and a two-round improvement in the ID attacks against SIMON-128-192. These results significantly contribute to our understanding of the effectiveness of automated tools in the cryptanalysis of different design paradigms.
Last updated:  2024-09-19
Limits on Adaptive Security for Attribute-Based Encryption
Zvika Brakerski and Stav Medina
This work addresses the long quest for proving full (adaptive) security for attribute-based encryption (ABE). We show that in order to prove full security in a black-box manner, the scheme must be ``irregular'' in the sense that it is impossible to ``validate'' secret keys to ascertain consistent decryption of ciphertexts. This extends a result of Lewko and Waters (Eurocrypt 2014) that was only applicable to straight-line proofs (without rewinding). Our work, therefore, establishes that it is impossible to circumvent the irregularity property using creative proof techniques, so long as the adversary is used in a black-box manner. As a consequence, our work provides an explanation as to why some lattice-based ABE schemes cannot be proven fully secure, even though no known adaptive attacks exist.
Last updated:  2024-09-19
Pulsar: Secure Steganography for Diffusion Models
Tushar M. Jois, Gabrielle Beck, and Gabriel Kaptchuk
Widespread efforts to subvert access to strong cryptography have renewed interest in steganography, the practice of embedding sensitive messages in mundane cover messages. Recent efforts at provably secure steganography have focused on text-based generative models and cannot support other types of models, such as diffusion models, which are used for high-quality image synthesis. In this work, we study securely embedding steganographic messages into the output of image diffusion models. We identify that the use of variance noise during image generation provides a suitable steganographic channel. We develop our construction, Pulsar, by building optimizations to make this channel practical for communication. Our implementation of Pulsar is capable of embedding $\approx 320$--$613$ bytes (on average) into a single image without altering the distribution of the generated image, all in $< 3$ seconds of online time on a laptop. In addition, we discuss how the results of Pulsar can inform future research into diffusion models. Pulsar shows that diffusion models are a promising medium for steganography and censorship resistance.
Last updated:  2024-09-19
Dense and smooth lattices in any genus
Wessel van Woerden
The Lattice Isomorphism Problem (LIP) was recently introduced as a new hardness assumption for post-quantum cryptography. The strongest known efficiently computable invariant for LIP is the genus of a lattice. To instantiate LIP-based schemes one often requires the existence of a lattice that (1) lies in some fixed genus, and (2) has good geometric properties such as a high packing density or a small smoothness parameter. In this work we show that such lattices exist. In particular, building upon classical results by Siegel (1935), we show that essentially any genus contains a lattice with a close to optimal packing density, smoothing parameter and covering radius. We present both a way to efficiently compute concrete existence bounds for any genus, and asymptotically tight bounds under weak conditions on the genus.
Last updated:  2024-09-19
P2C2T: Preserving the Privacy of Cross-Chain Transfer
Panpan Han, Zheng Yan, Laurence T. Yang, and Elisa Bertino
Blockchain-enabled digital currency systems have typically operated in isolation, lacking necessary mechanisms for seamless interconnection. Consequently, transferring assets across distinct currency systems remains a complex challenge, with existing schemes often falling short in ensuring security, privacy, and practicality. This paper proposes P2C2T -- a privacy-preserving cross-chain transfer scheme. It is the first scheme to address atomicity, unlinkability, indistinguishability, non-collateralization, and required functionalities across diverse currency systems. P2C2T is based on \textit{threshold anonymous atomic locks} (TA$^2$L), also proposed by us, serving as the cornerstone for guaranteeing atomic cross-chain transfer while obscuring the payment relationships between users. By combining TA$^2$L with \textit{verifiable timed discrete logarithm} schemes, P2C2T renders cross-chain transactions indistinguishable from regular intra-chain ones. Notably, P2C2T eliminates the collateralization of senders and imposes minimal requirements on underlying blockchains, specifically on the ability to verify signatures. We substantiate the security of TA$^2$L based on a proposed cryptographic notion called \textit{threshold blind conditional signatures} and demonstrate the security of P2C2T through necessary proofs. Additionally, we compare the performance of P2C2T with an existing scheme that has properties closest to P2C2T. The comparison reveals that P2C2T reduces overhead by at least $85.488\%$ in terms of running time, communication cost, and storage cost when completing a cross-chain transfer. We further conduct cross-chain transfers and intra-chain payments using the Bitcoin testnet and Litecoin testnet to illustrate the privacy and practicality of P2C2T.
Last updated:  2024-09-19
Monomial Isomorphism for Tensors and Applications to Code Equivalence Problems
Giuseppe D'Alconzo
Starting from the problem of $d$-Tensor Isomorphism ($d$-TI), we study the relation between various Code Equivalence problems in different metrics. In particular, we show a reduction from the sum-rank metric (CE${}_{sr}$) to the rank metric (CE${}_{rk}$). To obtain this result, we investigate reductions between tensor problems. We define the Monomial Isomorphism problem for $d$-tensors ($d$-TI${}^*$), where, given two $d$-tensors, we ask if there are $d-1$ invertible matrices and a monomial matrix sending one tensor into the other. We link this problem to the well-studied $d$-TI, and the TI-completeness of $d$-TI${}^*$ is shown. Using this result, we obtain a reduction from CE${}_{sr}$ to CE${}_{rk}$. A similar result was known in the literature, but it required an additional assumption on the automorphisms of matrix codes. Since many constructions based on the hardness of Code Equivalence problems are emerging in cryptography, we analyze how such reductions can be taken into account in the design of cryptosystems based on CE${}_{sr}$.
Last updated:  2024-09-19
Dishonest Majority Constant-Round MPC with Linear Communication from DDH
Vipul Goyal, Junru Li, Ankit Kumar Misra, Rafail Ostrovsky, Yifan Song, and Chenkai Weng
In this work, we study constant-round multiparty computation (MPC) for Boolean circuits against a fully malicious adversary who may control up to $n-1$ out of $n$ parties. Without relying on fully homomorphic encryption (FHE), the best-known results in this setting are achieved by Wang et al. (CCS 2017) and Hazay et al. (ASIACRYPT 2017) based on garbled circuits, which require communication quadratic in the number of parties, $O(|C|\cdot n^2)$. In contrast, for non-constant-round MPC, the recent result by Rachuri and Scholl (CRYPTO 2022) has achieved linear communication $O(|C|\cdot n)$. We present the first concretely efficient constant-round MPC protocol in this setting with communication linear in the number of parties, $O(|C|\cdot n)$. Our construction can be based on any public-key encryption scheme that is linearly homomorphic for public keys. Our work gives a concrete instantiation from a variant of the ElGamal encryption scheme under the DDH assumption. The analysis shows that when the computational security parameter $\lambda=128$ and the statistical security parameter $\kappa=80$, our protocol achieves smaller communication than Wang et al. (CCS 2017) when there are $16$ parties for the AES circuit and $8$ parties for general Boolean circuits (where we assume that the numbers of AND gates and XOR gates are the same). When compared with the recent work by Beck et al. (CCS 2023) that achieves constant communication complexity $O(|C|)$ in the strong honest majority setting ($t<(1/2-\epsilon)n$ where $\epsilon$ is a constant), our protocol is better as long as $n<3500$ (when $t=n/4$ for their work).
Last updated:  2024-09-19
Practical Two-party Computational Differential Privacy with Active Security
Fredrik Meisingseth, Christian Rechberger, and Fabian Schmid
In this work we revisit the problem of using general-purpose MPC schemes to emulate the trusted dataholder in differential privacy (DP), achieving the same accuracy but without the need to trust one single dataholder. In particular, we consider the two-party model where two computational parties (or dataholders), each with their own dataset, wish to compute a canonical DP mechanism on their combined data and to do so with active security. We start by remarking that available definitions of computational DP (CDP) for protocols are somewhat ill-suited for such a use case, because they either poorly capture some strong security guarantees commonly given by general-purpose MPC protocols, or impose requirements so strict that significant adjustment is needed for them to be satisfiable using common DP and MPC techniques. With this in mind, we propose a new version of simulation-based CDP, called SIM$^*$-CDP, and prove it to be stronger than IND-CDP and SIM-CDP and incomparable to SIM$^+$-CDP. We demonstrate the usability of the SIM$^*$-CDP definition by showing how to satisfy it using an available distributed protocol for sampling truncated geometric noise. Further, we use the protocol to compute two-party inner products with CDP and active security, and with accuracy equal to that of the central model, being the first to do so. Finally, we provide an open-source implementation and benchmark its practical performance. Our implementation generates a truncated geometric sample in between about 0.035 and 3.5 seconds (amortized), depending on network and parameter settings, comparing favourably to existing implementations.
Last updated:  2024-09-19
Linear approximations of the Flystel construction
Tim Beyne and Clémence Bouvier
Using a purity theorem for exponential sums due to Rojas-Léon, we upper bound the absolute correlations of linear approximations of the Flystel construction used in Anemoi. This resolves open problem 7.1 in [Bouvier, 2023].
Last updated:  2024-09-19
SoK: Descriptive Statistics Under Local Differential Privacy
René Raab, Pascal Berrang, Paul Gerhart, and Dominique Schröder
Local Differential Privacy (LDP) provides a formal guarantee of privacy that enables the collection and analysis of sensitive data without revealing any individual's data. While LDP methods have been extensively studied, there is a lack of a systematic and empirical comparison of LDP methods for descriptive statistics. In this paper, we first provide a systematization of LDP methods for descriptive statistics, comparing their properties and requirements. We demonstrate that several mean estimation methods based on sampling from a Bernoulli distribution are equivalent in the one-dimensional case and introduce methods for variance estimation. We then empirically compare methods for mean, variance, and frequency estimation. Finally, we provide recommendations for the use of LDP methods for descriptive statistics and discuss their limitations and open questions.
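The canonical LDP building block behind several of the surveyed frequency estimators is randomized response; for concreteness, a minimal sketch of the binary $\varepsilon$-LDP version with its unbiased frequency estimator (textbook material, not any specific method from the paper):

```python
import math
import random

def randomized_response(bit, eps):
    """Report the true bit w.p. e^eps / (e^eps + 1); satisfies eps-LDP."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < p else 1 - bit

def estimate_frequency(reports, eps):
    """Unbiased estimate of the true fraction of 1s from noisy reports."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    mean = sum(reports) / len(reports)
    return (mean - (1 - p)) / (2 * p - 1)   # invert E[report] = (1-p) + f(2p-1)

random.seed(1)
data = [1] * 3000 + [0] * 7000              # true frequency 0.3
noisy = [randomized_response(b, eps=1.0) for b in data]
print(estimate_frequency(noisy, eps=1.0))   # approximately 0.3
```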
Last updated:  2024-09-19
Asynchronous Verifiable Secret Sharing with Elastic Thresholds and Distributed Key Generation
Junming Li, Zhi Lu, Renfei Shen, Yuanqing Feng, and Songfeng Lu
Distributed Key Generation (DKG) is a technique that enables the generation of threshold cryptography keys among a set of mutually untrusting nodes. DKG generates keys for a range of decentralized applications such as threshold signatures, multiparty computation, and Byzantine consensus. Over the past five years, research on DKG has focused on optimizing network communication protocols to improve overall system efficiency by reducing communication complexity. However, state-of-the-art asynchronous distributed key generation (ADKG) schemes (e.g., Kokoris-Kogias ADKG, CCS 2020, Das ADKG, S&P 2022, and others) only support recovery thresholds of either $f$ or $2f$, where $f$ is the maximum number of malicious nodes. This paper proposes an asynchronous verifiable secret sharing protocol featuring an elastic threshold, where $t \in [f,n-f-1]$ and $n \ge 3f+1$ is the total number of parties. Our protocol enables a dealer to share up to $t+1$ secrets with a total communication cost of $O(\lambda n^3)$, where $\lambda$ is the security parameter, and the protocol relies on the hardness of the $q$-SDH problem. We further modify the Schnorr protocol to enable simultaneous commitments to multiple secrets, which we refer to as $m$-Schnorr.
Last updated:  2024-09-19
Provable Security of Linux-DRBG in the Seedless Robustness Model
Woohyuk Chung, Hwigyeom Kim, Jooyoung Lee, and Yeongmin Lee
This paper studies the provable security of the deterministic random bit generator (DRBG) utilized in Linux 6.4.8, marking the first analysis of Linux-DRBG from a provable security perspective since its substantial structural changes in Linux 4 and Linux 5.17. Specifically, we prove its security up to $O(\min\{2^{\frac{n}{2}},2^{\frac{\lambda}{2}}\})$ queries in the seedless robustness model, where $n$ is the output size of the internal primitives and $\lambda$ is the min-entropy of the entropy source. Our result implies $128$-bit security given $n=256$ and $\lambda=256$ for Linux-DRBG. We also present two distinguishing attacks using $O(2^{\frac{n}{2}})$ and $O(2^{\frac{\lambda}{2}})$ queries, respectively, proving the tightness of our security bound.
Last updated:  2024-09-19
FRAST: TFHE-friendly Cipher Based on Random S-boxes
Mingyu Cho, Woohyuk Chung, Jincheol Ha, Jooyoung Lee, Eun-Gyeol Oh, and Mincheol Son
A transciphering framework, also known as hybrid homomorphic encryption, is a practical method of combining a homomorphic encryption (HE) scheme with a symmetric cipher in the client-server model to reduce computational and communication overhead on the client side. As a server homomorphically evaluates a symmetric cipher in this framework, new design rationales are required for ``HE-friendly'' ciphers that take into account the specific properties of the HE schemes. In this paper, we propose a new TFHE-friendly cipher, dubbed $\mathsf{FRAST}$, with a TFHE-friendly round function based on a random S-box to minimize the number of rounds. The round function of $\mathsf{FRAST}$ can be efficiently evaluated in TFHE by a new optimization technique, dubbed double blind rotation. Combined with our new WoP-PBS method, the double blind rotation allows computing multiple S-box calls in the round function of $\mathsf{FRAST}$ at the cost of a single S-box call. In this way, $\mathsf{FRAST}$ enjoys $2.768$ (resp. $10.57$) times higher throughput compared to $\mathsf{Kreyvium}$ (resp. $\mathsf{Elisabeth}$) for TFHE keystream evaluation in the offline phase of the transciphering framework, at the cost of slightly larger communication overhead.
Last updated:  2024-09-18
Detecting and Correcting Computationally Bounded Errors: A Simple Construction Under Minimal Assumptions
Jad Silbak and Daniel Wichs
We study error detection and error correction in a computationally bounded world, where errors are introduced by an arbitrary polynomial time adversarial channel. We consider codes where the encoding procedure uses random coins and define two distinct variants: (1) in randomized codes, fresh randomness is chosen during each encoding operation and is unknown a priori, while (2) in self-seeded codes, the randomness of the encoding procedure is fixed once upfront and is known to the adversary. In both cases, the randomness need not be known to the decoding procedure, and there is no trusted common setup between the encoder and decoder. The encoding and decoding algorithms are efficient and run in some fixed polynomial time, independent of the run time of the adversary. The parameters of standard codes for worst-case (inefficient) errors are limited by the Singleton bound: for rate $R$ it is not possible to detect more than a $1-R$ fraction of errors, or uniquely correct more than a $(1-R)/2$ fraction of errors, and efficient codes matching this bound exist for sufficiently large alphabets. In the computationally bounded setting, we show that going beyond the Singleton bound implies one-way functions in the case of randomized codes and collision-resistant hash functions in the case of self-seeded codes. We construct randomized and self-seeded codes under these respective minimal assumptions with essentially optimal parameters over a constant-sized alphabet: - Detection: the codes have a rate $R \approx 1$ while detecting a $\rho \approx 1$ fraction of errors. - Correction: for any $\rho < 1/2$, the codes uniquely correct a $\rho$ fraction of errors with rate $R \approx 1-\rho$. Codes for computationally bounded errors were studied in several prior works starting with Lipton (STACS '94), but all such works either: (a) need some trusted common setup (e.g., public-key infrastructure, common reference string) between the encoder and decoder, or (b) only handle channels whose complexity is a priori bounded below that of the code.
Last updated:  2024-09-18
Lower Bound on Number of Compression Calls of a Collision-Resistance Preserving Hash
Debasmita Chakraborty and Mridul Nandi
The collision-resistant hash function is an early cryptographic primitive that finds extensive use in various applications. Remarkably, the Merkle-Damgård and Merkle tree hash structures possess the collision-resistance preserving property, meaning the hash function remains collision-resistant when the underlying compression function is collision-resistant. This raises the intriguing question of whether reducing the number of underlying compression function calls with the collision-resistance preserving property is possible. In pursuit of addressing these inquiries, we prove that for an $\ell n$-to-$sn$-bit collision-resistance preserving hash function designed using $r$ calls to $tn$-to-$n$-bit compression functions, we must have $r \geq \lceil (\ell-s)/(t-1) \rceil$. Throughout the paper, all operations other than the compression function are assumed to be linear (which we call linear hash mode).
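To make the bound concrete, here is a small worked instance (our example): $$ r \;\ge\; \left\lceil \frac{\ell - s}{t-1} \right\rceil. $$ For $\ell = 4$, $s = 1$, $t = 2$, i.e., hashing $4n$ bits down to $n$ bits using $2n$-to-$n$-bit compression calls, the bound gives $r \ge \lceil (4-1)/(2-1) \rceil = 3$, and a binary Merkle tree over four $n$-bit blocks matches it with exactly three compression calls (two at the leaves, one at the root).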
Last updated:  2024-09-18
Compute, but Verify: Efficient Multiparty Computation over Authenticated Inputs
Moumita Dutta, Chaya Ganesh, Sikhar Patranabis, and Nitin Singh
Traditional notions of secure multiparty computation (MPC) allow mutually distrusting parties to jointly compute a function over their private inputs, but typically do not specify how these inputs are chosen. Motivated by real-world applications where corrupt inputs could adversely impact privacy and operational legitimacy, we consider a notion of authenticated MPC where the inputs are authenticated, e.g., signed using a digital signature by some certification authority. We propose a generic and efficient compiler that transforms any linear secret sharing based honest-majority MPC protocol into one with input authentication. Our compiler incurs significantly lower computational costs and competitive communication overheads when compared to the best existing solutions, while entirely avoiding the (potentially expensive) protocol-specific techniques and pre-processing requirements that are inherent to these solutions. For $n$-party honest majority MPC protocols with abort security where each party has $\ell$ inputs, our compiler incurs $O(n\log \ell)$ communication overall and a computational overhead of $O(\ell)$ group exponentiations per party (the corresponding overheads for the most efficient existing solution are $O(n^2)$ and $O(\ell n)$). Finally, for a corruption threshold $t<n/3$, our compiler preserves the stronger identifiable abort security of the underlying MPC protocol. No existing solution for authenticated MPC achieves this regardless of the corruption threshold. Along the way, we make several technical contributions that are of independent interest. This includes the notion of distributed proofs of knowledge and concrete realizations of the same for several relations of interest, such as proving knowledge of many popularly used digital signature schemes, and proving knowledge of opening of a Pedersen commitment.
Last updated:  2024-09-18
Threshold PAKE with Security against Compromise of all Servers
Yanqi Gu, Stanislaw Jarecki, Pawel Kedzior, Phillip Nazarian, and Jiayu Xu
We revisit the notion of threshold Password-Authenticated Key Exchange (tPAKE), and we extend it to augmented tPAKE (atPAKE), which protects password information even in the case all servers are compromised, except for allowing an (inevitable) offline dictionary attack. Compared to prior notions of tPAKE this is analogous to replacing symmetric PAKE, where the server stores the user's password, with an augmented (or asymmetric) PAKE, like OPAQUE [JKX18], where the server stores a password hash, which can be used only as a target in an offline dictionary search for the password. An atPAKE scheme also strictly improves on the security of an aPAKE, by secret-sharing the password hash among a set of servers. Indeed, our atPAKE protocol is a natural realization of threshold OPAQUE. We formalize atPAKE in the framework of Universal Composability (UC), and show practical ways to realize it. All our schemes are generic compositions which interface to any aPAKE used as a sub-protocol, making them easier to adopt. Our main scheme relies on threshold Oblivious Pseudorandom Function (tOPRF), and our independent contribution fixes a flaw in the UC tOPRF notion of [JKKX17] and upgrades the tOPRF scheme therein to achieve the fixed definition while preserving its minimal cost and round complexity. The technique we use enforces implicit agreement on arbitrary context information within threshold computation, and it is of general interest.
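For intuition, here is the well-known 2HashDH-style blueprint that underlies such tOPRF constructions, in our notation (the paper's fixed definition and upgraded scheme refine this): $$ F_k(x) = H_2\big(x,\ H_1(x)^k\big). $$ The client blinds its input as $a = H_1(x)^r$ for a random $r$; each server $i$, holding a Shamir share $k_i$ of the key $k$, replies with $b_i = a^{k_i}$; the client combines $b = \prod_i b_i^{\lambda_i}$ using Lagrange coefficients $\lambda_i$ in the exponent and unblinds $b^{1/r} = H_1(x)^k$ to obtain $F_k(x)$.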
Last updated:  2024-09-18
Succinct Verification of Compressed Sigma Protocols in the Updatable SRS setting
Moumita Dutta, Chaya Ganesh, and Neha Jawalkar
We propose protocols in the Compressed Sigma Protocol framework that achieve a succinct verifier. Towards this, we construct a new inner product argument and cast it in the Compressed Sigma Protocol (CSP) framework as a protocol for opening a committed linear form, achieving logarithmic verification. We then use our succinct-verifier CSP to construct a zero-knowledge argument for circuit satisfiability (under the discrete logarithm assumption in bilinear groups) in the updatable Structured Reference String (SRS) setting that achieves $O(\log n)$ proof size and $O(\log n)$ verification complexity. Our circuit zero-knowledge protocol has concretely better proof/prover/verifier complexity compared to the state-of-the-art protocol in the updatable setting under the same assumption. Our technique for achieving verifier succinctness in the compression framework is of independent interest. We then show a commitment scheme for committing to group elements using a structured commitment key. We construct protocols to open a committed homomorphism on a committed vector with verifier succinctness in the designated verifier setting. This has applications in making the verifier succinct in compressed sigma protocols for bilinear group arithmetic circuits.
Last updated:  2024-09-18
PPSA: Polynomial Private Stream Aggregation for Time-Series Data Analysis
Antonia Januszewicz, Daniela Medrano Gutierrez, Nirajan Koirala, Jiachen Zhao, Jonathan Takeshita, Jaewoo Lee, and Taeho Jung
Modern data analytics requires computing functions on streams of data points from many users that are challenging to calculate, due to both the high scale and nontrivial nature of the computation at hand. The need for data privacy complicates this matter further, as general-purpose privacy-enhancing technologies face limitations in at least scalability or utility. Existing work has attempted to improve this by designing purpose-built protocols for the use case of Private Stream Aggregation; however, prior work lacks the ability to compute more general aggregative functions without the assumption of trusted parties or hardware. In this work, we present PPSA, a protocol that performs Private Polynomial Stream Aggregation, allowing the private computation of any polynomial function on user data streams even in the presence of an untrusted aggregator. Unlike previous state-of-the-art approaches, PPSA enables secure aggregation beyond simple summations without relying on trusted hardware; it utilizes only tools from cryptography and differential privacy. Our experiments show that PPSA has low latency during the encryption and aggregation processes with an encryption latency of 10.5 ms and aggregation latency of 21.6 ms for 1000 users, which are up to 138$\times$ faster than the state-of-the-art prior work.
Last updated:  2024-09-18
Attacking ECDSA with Nonce Leakage by Lattice Sieving: Bridging the Gap with Fourier Analysis-based Attacks
Yiming Gao, Jinghui Wang, Honggang Hu, and Binang He
The Hidden Number Problem (HNP) has found extensive applications in side-channel attacks against cryptographic schemes, such as ECDSA and Diffie-Hellman. There are two primary algorithmic approaches to solving the HNP: lattice-based attacks and Fourier analysis-based attacks. Lattice-based attacks exhibit better efficiency and require fewer samples when sufficiently long substrings of the nonces are known. However, they face significant challenges when only a small fraction of the nonce is leaked, such as 1-bit leakage, and their performance degrades in the presence of errors. In this paper, we address an open question by introducing an algorithmic tradeoff that significantly bridges the gap between these two approaches. By introducing a parameter $x$ to modify Albrecht and Heninger's lattice, the lattice dimension is reduced by approximately $(\log_2 x)/l$, where $l$ represents the number of leaked bits. We present a series of new methods, including the interval reduction algorithm, several predicates, and the pre-screening technique. Furthermore, we extend our algorithms to solve the HNP with erroneous input. Our attack outperforms existing state-of-the-art lattice-based attacks against ECDSA. We break several records, including attacks exploiting 1-bit and less-than-1-bit leakage on a 160-bit curve, while the best previous lattice-based attack for 1-bit leakage was conducted only on a 112-bit curve.
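For orientation, here is the classical lattice embedding such attacks build on, for HNP samples $a_i + t_i\alpha \equiv k_i \pmod{q}$ with $|k_i| < B$ (standard construction in our notation; the paper's modification of the Albrecht-Heninger lattice is more refined): $$ M = \begin{pmatrix} qI_m & 0 & 0 \\ t_1\,\cdots\,t_m & B/q & 0 \\ a_1\,\cdots\,a_m & 0 & B \end{pmatrix}, $$ whose row span contains the short vector $(k_1,\dots,k_m,\ B\alpha/q,\ B)$; lattice reduction recovers it, and hence the hidden number $\alpha$, once enough samples are available.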
Last updated:  2024-09-18
Real-World Deniability in Messaging
Daniel Collins, Simone Colombo, and Loïs Huguenin-Dumittan
This work explores real-world deniability in messaging. We propose a formal model that considers the entire messaging system to analyze deniability in practice. Applying this model to the Signal application and DKIM-protected email, we demonstrate that these systems do not offer practical deniability guarantees. Additionally, we analyze 140 court cases in Switzerland that use conversations on messaging applications as evidence and find that none consider deniability, providing evidence that this property does not have an impact in the legal setting. Based on these technical and legal findings, we assess whether deniability is a desirable property and the challenges and shortcomings of designing a system that is deniable in practice. We posit that systems should either offer real-world deniability or refrain from claiming to achieve it. We discuss how to choose an appropriate threat model for deniability in a given context and how to design communication systems that are deniable in practice. For Signal, we propose and discuss a simple yet effective solution: the application should enable direct modification of locally stored messages in the user interface. This position paper raises several unanswered questions, aiming to further stimulate discussion and research on real-world deniability in messaging.
Last updated:  2024-09-18
Curl: Private LLMs through Wavelet-Encoded Look-Up Tables
Manuel B. Santos, Dimitris Mouris, Mehmet Ugurbil, Stanislaw Jarecki, José Reis, Shubho Sengupta, and Miguel de Vega
Recent advancements in transformers have revolutionized machine learning, forming the core of large language models (LLMs). However, integrating these systems into everyday applications raises privacy concerns as client queries are exposed to model owners. Secure multiparty computation (MPC) allows parties to evaluate machine learning applications while keeping sensitive user inputs and proprietary models private. Due to inherent MPC costs, recent works introduce model-specific optimizations that hinder widespread adoption by machine learning researchers. CrypTen (NeurIPS'21) aimed to solve this problem by exposing MPC primitives via common machine learning abstractions such as tensors and modular neural networks. Unfortunately, CrypTen and many other MPC frameworks rely on polynomial approximations of the non-linear functions, resulting in high errors and communication complexity. This paper introduces Curl, an easy-to-use MPC framework that evaluates non-linear functions as lookup tables, resulting in better approximations and significant round and communication reduction. Curl exposes a similar programming model as CrypTen and is highly parallelizable through tensors. At its core, Curl relies on discrete wavelet transformations to reduce the lookup table size without sacrificing accuracy, which results in up to $19\times$ round and communication reduction compared to CrypTen for non-linear functions such as logarithms and reciprocals. We evaluate Curl on a diverse set of LLMs, including BERT, GPT-2, and GPT Neo, and compare against state-of-the-art related works such as Iron (NeurIPS'22) and Bolt (S&P'24) achieving at least $1.9\times$ less communication and latency. Finally, we resolve a long-standing debate regarding the security of widely used probabilistic truncation protocols by proving their security in the stand-alone model. This is of independent interest as many related works rely on this truncation style.
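As a plaintext illustration of the underlying idea (lookup-table compression via a discrete wavelet transform; this sketch has none of Curl's MPC machinery), assuming numpy:

    import numpy as np

    # Lookup table for log2 over a fixed-point domain.
    xs = np.linspace(1.0, 2.0, 256, endpoint=False)
    lut = np.log2(xs)

    # One-level Haar-style split: averages carry the coarse shape,
    # differences carry local corrections.
    avg  = (lut[0::2] + lut[1::2]) / 2.0
    diff = (lut[0::2] - lut[1::2]) / 2.0

    # Dropping the differences halves the table: 128 entries approximate 256.
    approx = np.repeat(avg, 2)
    print("max error:", np.max(np.abs(approx - lut)))  # small: log2 is smooth here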
Last updated:  2024-09-18
Actively Secure Polynomial Evaluation from Shared Polynomial Encodings
Pascal Reisert, Marc Rivinius, Toomas Krips, Sebastian Hasler, and Ralf Küsters
Many of the currently best actively secure Multi-Party Computation (MPC) protocols like SPDZ (Damgård et al., CRYPTO 2012) and improvements thereof use correlated randomness to speed up the time-critical online phase. Although many of these protocols still rely on classical Beaver triples, recent results show that more complex correlations like matrix or convolution triples lead to more efficient evaluations of the corresponding operations, i.e. matrix multiplications or tensor convolutions. In this paper, we address the evaluation of multivariate polynomials with a new form of randomness: polytuples. We use the polytuples to construct a new family of randomized encodings which then allow us to evaluate the given multivariate polynomial. Our approach can be fine-tuned in various ways to the constraints of applications at hand, in terms of round complexity, bandwidth, and tuple size. We show that for many real-world setups, a polytuples-based online phase outperforms state-of-the-art protocols based on Beaver triples.
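For reference, here is the classical Beaver-triple multiplication gadget that such correlated randomness generalizes, as a toy two-party Python sketch over a prime field (illustrative only; polytuples handle whole multivariate polynomials):

    import secrets

    P = 2**61 - 1                       # prime field modulus (toy choice)

    def share(v):
        r = secrets.randbelow(P)
        return r, (v - r) % P           # additive shares of v

    # Dealer-provided triple with c = a * b.
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(a * b % P)

    x, y = 12345, 67890                 # inputs to multiply
    x0, x1 = share(x); y0, y1 = share(y)

    # Parties open the masked values d = x - a and e = y - b
    # (in a real protocol each party broadcasts its masked share).
    d = (x0 - a0 + x1 - a1) % P
    e = (y0 - b0 + y1 - b1) % P

    # Local computation; party 0 adds the public d * e term.
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    assert (z0 + z1) % P == x * y % P   # x*y = c + d*b + e*a + d*e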
Last updated:  2024-09-18
Depth-Optimized Implementation of ASCON Quantum Circuit
Yujin Oh, Kyungbae Jang, Anubhab Baksi, and Hwajeong Seo
The development of quantum computers, which employ a different paradigm of computation, is posing a threat to the security of cryptography. Narrowing the scope to symmetric-key cryptography, Grover's search algorithm is probably the most influential in terms of its impact on security. Recently, there have been efforts to estimate the complexity of Grover's key search for symmetric-key ciphers and evaluate their post-quantum security. In this paper, we present a depth-optimized implementation of a quantum circuit for ASCON, a symmetric-key cipher that has recently been standardized in the NIST (National Institute of Standards and Technology) Lightweight Cryptography standardization. As far as we know, this is the first implementation of a quantum circuit for the ASCON AEAD (Authenticated Encryption with Associated Data) scheme. To our understanding, reducing the depth of the quantum circuit for the target cipher is the most effective approach for Grover's key search. We demonstrate the optimal Grover's key search cost for ASCON, along with a proposed depth-optimized quantum circuit. Further, based on the estimated cost, we evaluate the post-quantum security strength of ASCON according to relevant evaluation criteria and state-of-the-art research.
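As a back-of-the-envelope reference, here is the Grover iteration count from which such cost estimates start (standard formula; the ASCON-specific circuit depths are the paper's contribution and are not reproduced here):

    import math

    def grover_iterations(key_bits):
        # Optimal number of Grover iterations for a space of 2^key_bits keys.
        return math.floor((math.pi / 4) * math.sqrt(2 ** key_bits))

    print(grover_iterations(128))    # about 1.45e19 iterations for 128-bit keys

The total attack depth is roughly this count times the depth of one keyed-cipher oracle, which is why depth-optimizing the cipher circuit matters.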
Last updated:  2024-09-18
Verifiable Oblivious Pseudorandom Functions from Lattices: Practical-ish and Thresholdisable
Martin R. Albrecht and Kamil Doruk Gur
We revisit the lattice-based verifiable oblivious PRF construction from PKC'21 and remove or mitigate its central three sources of inefficiency. First, applying Rényi divergence arguments, we eliminate one superpolynomial factor from the ciphertext modulus $q$, allowing us to reduce the overall bandwidth consumed by RLWE samples by about a factor of four. This necessitates introducing intermediate unpredictability notions to argue PRF security of the final output in the Random Oracle model. Second, we remove the reliance on the $\mathsf{1D\text{-}SIS}$ assumption, which reduces another superpolynomial factor, albeit to a factor that is still superpolynomial. Third, by applying the state-of-the-art in zero-knowledge proofs for lattice statements, we achieve a reduction in bandwidth of several orders of magnitude for this material. Finally, we give a $t$-out-of-$n$ threshold variant of the VOPRF for constant $t$ and with trusted setup, based on an $n$-out-of-$n$ distributed variant of the VOPRF (and without trusted setup).
Last updated:  2024-09-18
CAPYBARA and TSUBAKI: Verifiable Random Functions from Group Actions and Isogenies
Yi-Fu Lai
In this work, we introduce two post-quantum Verifiable Random Function (VRF) constructions based on abelian group actions and isogeny group actions with a twist. The former relies on the standard group action Decisional Diffie-Hellman (GA-DDH) assumption. VRFs serve as cryptographic tools allowing users to generate pseudorandom outputs along with publicly verifiable proofs. Moreover, the residual pseudorandomness of VRFs ensures the pseudorandomness of unrevealed inputs, even when multiple outputs and proofs are disclosed. Our work aims at addressing the growing demand for post-quantum VRFs, as existing constructions based on elliptic curve cryptography (ECC) or classical DDH-type assumptions are vulnerable to quantum threats. In our contributions, our two VRF constructions, rooted in number-theoretic pseudorandom functions, are both simple and secure in the random oracle model. We introduce a new proof system for the factorization of group actions and set elements, serving as the proofs for our VRFs. The first proposal is based on the standard GA-DDH problem, and for its security proof, we introduce the (group action) master Decisional Diffie-Hellman problem over group actions, proving its equivalence to the standard GA-DDH problem. In the second construction, we leverage quadratic twists to enhance efficiency, reducing the key size and the proof sizes while expanding the input size. The scheme is based on the square GA-DDH problem. Moreover, we employ advanced techniques from the isogeny literature to optimize the proof size to 39 KB and 34 KB using CSIDH-512 without compromising VRF notions. The schemes feature fast evaluations but exhibit slower proof generation. To the best of our knowledge, these constructions represent the first two provably secure VRFs based on isogenies.
Last updated:  2024-09-18
Providing Integrity for Authenticated Encryption in the Presence of Joint Faults and Leakage
Francesco Berti and Itamar Levi
Passive (leakage exploitation) and active (fault injection) physical attacks pose a significant threat to cryptographic schemes. Although leakage-resistant cryptography is well studied, there is little work on mode-level security in the presence of adversaries exploiting joint faults and leakage. In this paper, we focus on integrity for authenticated encryption (AE). First, we point out that there is an inherent attack in the fault-resilience model presented at ToSC 2023. This shows how fragile the freshness condition of a forgery is when faults are injected into either the tag-generation or the encryption algorithm. Therefore, we provide new integrity definitions for AE in the presence of leakage and faults, following the atomic model, in which the scheme is divided into atoms (or components, e.g. a call to a block cipher) and the adversary may inject a fault only into the inputs of an atom. We envision this model as a first step for leveled implementations in the fault context; the granularity of atoms can be made finer or coarser (for example, instead of considering a call to a block cipher, we can consider atoms to be rounds of the block cipher). We hold the underlying belief that it would be easier to protect smaller blocks than a full scheme. The proposed model is very flexible and allows us to understand where to apply fault countermeasures (in some very interesting cases this model can reduce faults inside atoms to faults on their outputs, as we discuss). We then show that an AE scheme using a single call to a highly leakage-protected (and thus very expensive) component, CONCRETE (presented at Africacrypt 2019), maintains integrity in the presence of leakage in both encryption and decryption, and faults only in decryption. On the other hand, a single fault in encryption is enough to forge. Therefore, we first introduce a weaker definition (which restricts the meaning of freshness), weak integrity, which CONCRETE achieves even if the adversary can introduce faults in the encryption queries (with some restrictions on the number and type of faults). Finally, we provide a variant, CONCRETE2, which is slightly more computationally expensive, but still uses a single call to a strongly protected component, and provides integrity in the presence of leakage and faults.
Last updated:  2024-09-18
A Combined Design of 4-PLL-TRNG and 64-bit CDC-7-XPUF on a Zynq-7020 SoC
Oğuz Yayla and Yunus Emre Yılmaz
True Random Number Generators (TRNGs) and Physically Unclonable Functions (PUFs) are critical hardware primitives for cryptographic systems, providing randomness and device-specific security. TRNGs require complete randomness, while PUFs rely on consistent, device-unique responses. In this work, both primitives are implemented on a System-on-Chip Field-Programmable Gate Array (SoC FPGA), leveraging the integrated Phase-Locked Loops (PLLs) for robust entropy generation in PLL-based TRNGs. A novel backtracking parameter selection algorithm for the TRNG implementation is employed, alongside a methodology to boost data generation rates without compromising entropy. The design is rigorously evaluated using the German BSI AIS-20/31 standards. For the PUF implementation, the Arbiter PUF, known for its lightweight design and key generation, is enhanced to resist machine learning attacks by implementing a 32-bit and a 64-bit component-differentially challenged XOR Arbiter PUF (CDC-XPUF). These designs are tested using standard PUF metrics, including uniformity, correctness, and uniqueness. Finally, a combined 4-PLL-TRNG and 64-bit CDC-XPUF design is introduced and evaluated for its suitability in Internet-of-Things (IoT) systems, demonstrating strong performance in both TRNG and PUF tests. The tests are conducted on the Xilinx Zynq 7020 SoC using a ZC702 evaluation board, confirming the effectiveness of this integrated approach for secure, low-resource applications like IoT.
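For intuition, here is the standard additive-delay model of an arbiter PUF and its XOR composition, in a short numpy sketch (the textbook model used in the machine-learning-attack literature, not the paper's hardware design):

    import numpy as np

    rng = np.random.default_rng(1)

    def phi(challenge):
        # Parity feature vector of the additive delay model.
        c = 1 - 2 * challenge                       # bits {0,1} -> {+1,-1}
        return np.append(np.cumprod(c[::-1])[::-1], 1.0)

    def arbiter_response(w, challenge):
        return int(np.dot(w, phi(challenge)) > 0)   # sign of delay difference

    n, chains = 64, 7                               # 64-bit challenges, 7 chains
    ws = rng.normal(size=(chains, n + 1))           # per-chain delay parameters

    # CDC-style: each chain gets its own challenge; the response is the XOR.
    challenges = rng.integers(0, 2, size=(chains, n))
    response = np.bitwise_xor.reduce(
        [arbiter_response(w, ch) for w, ch in zip(ws, challenges)])
    print(response)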
Last updated:  2024-09-18
Interactive Line-Point Zero-Knowledge with Sublinear Communication and Linear Computation
Fuchun Lin, Chaoping Xing, and Yizhou Yao
Studies of vector oblivious linear evaluation (VOLE)-based zero-knowledge (ZK) protocols have flourished in recent years. Such ZK protocols feature optimal prover computation and the flexibility to handle arithmetic circuits over arbitrary fields. However, most of them have linear communication, which constitutes a bottleneck for handling large statements in a slow network. The pioneering work AntMan (CCS'22) achieved sublinear communication for the first time within VOLE-based ZK, but lost the advantage of fast proving. In this work, we propose two new VOLE-based ZK constructions that achieve sublinear communication and linear computation simultaneously. Let $\mathcal{C}$ be a circuit with size $S$, input size $n$, and depth $d$. In particular, our first ZK protocol, specialized for layered circuits, has communication $O(n+d\log{S})$, while our second ZK protocol can be used to prove general circuits and has communication $O(n+d\log{S}+d^2)$. Our results are obtained by introducing the powerful sum-check techniques from the mature line of works on interactive proofs into the context of VOLE-based ZK for the first time. Reminiscent of the non-interactive line-point zero-knowledge proof system (ITC'21), we introduce an interactive line-point zero-knowledge (ILPZK) proof system, which closely connects with VOLE-based ZK protocols. In addition, our work enriches the study of ZK based on interactive proofs, achieving interesting new features (e.g., information-theoretic UC-security and native support for any field).
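Since the construction imports sum-check, here is a toy Python round loop proving $\sum_{x \in \{0,1\}^n} g(x) = H$ for a multilinear $g$ (textbook protocol in our rendering; the paper's contribution is embedding it into VOLE-based ZK, which this sketch does not capture):

    import secrets

    P = 2**61 - 1                       # prime field (toy choice)

    def g(x):                           # example multilinear polynomial, n = 3
        x1, x2, x3 = x
        return (2*x1*x2 + 3*x3 + x1*x3 + 5) % P

    n = 3
    H = sum(g(((b >> 2) & 1, (b >> 1) & 1, b & 1)) for b in range(8)) % P

    claim, rs = H, []
    for j in range(n):
        def pj(v):                      # prover: restrict variable j to v,
            total, free = 0, n - j - 1  # sum over the remaining boolean cube
            for b in range(2 ** free):
                tail = [(b >> i) & 1 for i in reversed(range(free))]
                total += g(tuple(rs + [v] + tail))
            return total % P
        p0, p1 = pj(0), pj(1)           # degree <= 1, so two points suffice
        assert (p0 + p1) % P == claim   # verifier's round check
        r = secrets.randbelow(P)        # verifier's random challenge
        claim = (p0 + r * (p1 - p0)) % P
        rs.append(r)

    assert g(tuple(rs)) == claim        # final oracle check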
Last updated:  2024-09-18
Attestation Proof of Association – provability that attestation keys are bound to the same hardware and person
Eric Verheul
We propose a wallet provider issued attestation called Wallet Trust Evidence (WTE) and three related specific instructions for the European Digital Identity (EUDI) Wallet cryptographic hardware, most notably the generation of a Proof of Association (PoA). These allow the EUDI Wallet to provide verifiable assurance to third parties (issuers, relying parties) that attestation private keys are not only bound to conformant cryptographic hardware but also that they are bound to the same such hardware. This allows the EUDI Wallet to meet eIDAS Level of Assurance ``high'' as well as to operate in a privacy friendly manner. The instructions specified in this document cater for convenient implementation in all envisioned EUDI Wallet architectures, including those based on a GlobalPlatform based Secure Element such as an eID-card or an embedded SIM (eSIM). By their simplicity, the three instructions also allow for convenient Common Criteria certification. This document is a further refinement and cryptographic concretization of the WTE/PoA logic specified in the wallet Architecture and Reference Framework (ARF), which is based on the EPIC-09 result developed in a cooperation between the NI-Scy consortium and the eIDAS expert group. However, the present draft document is meant for discussion only and has not been approved by the NI-Scy consortium, the eIDAS expert group, or the Dutch government.
Last updated:  2024-09-18
SnarkFold: Efficient Proof Aggregation from Incrementally Verifiable Computation and Applications
Xun Liu, Shang Gao, Tianyu Zheng, Yu Guo, and Bin Xiao
The succinct non-interactive argument of knowledge (SNARK) technique has been extensively utilized in blockchain systems to replace the costly on-chain computation with the verification of a succinct proof. However, most existing applications verify each proof independently, resulting in a heavy load on nodes and high transaction fees for users. Currently, the mainstream proof aggregation schemes are based on a generalized inner product argument, which has a logarithmic proof size and verification cost. To improve the efficiency of verifying multiple proofs, we introduce SnarkFold, a novel SNARK-proof aggregation scheme with constant verification time and proof size. SnarkFold is derived from incrementally verifiable computation (IVC) and is optimized further through the folding scheme. By folding multiple instance-proof pairs, SnarkFold defers the expensive SNARK verification (e.g., elliptic curve pairing) to the final step. Additionally, we propose a generic technique to enhance the verifier's efficiency by delegating instance aggregation tasks to the prover. The verifier only needs a simple preprocessing to check the validity of the delegation. We further introduce folding schemes for Groth16 and Plonk proofs. Experimental results demonstrate that SnarkFold offers significant advantages, with an aggregated Plonk proof size of just $0.5$ KB and a verification time of only $4.5$ ms for aggregating 4096 Plonk proofs.
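For intuition, here is the folding step at the heart of IVC-style aggregation, written in the Nova style for relaxed R1CS (generic blueprint in our notation; SnarkFold's folding of Groth16 and Plonk instances differs in the details): $$ W' = W_1 + r\,W_2, \quad x' = x_1 + r\,x_2, \quad u' = u_1 + r\,u_2, \quad E' = E_1 + r\,T + r^2 E_2, $$ where $r$ is a random challenge and $T$ collects the cross terms. Two instances fold into one of the same size, so only the final folded instance ever undergoes the expensive check.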
Last updated:  2024-09-18
Bit-Security Preserving Hardness Amplification
Shun Watanabe and Kenji Yasunaga
Hardness amplification is one of the important reduction techniques in cryptography, and it has been extensively studied in the literature. The standard XOR lemma known in the literature evaluates hardness in terms of the probability of correct prediction; the hardness is amplified from mildly hard (prediction probability close to $1$) to very hard (prediction probability $1/2 + \varepsilon$) at the cost of an $\varepsilon^2$ multiplicative decrease in circuit size. Translating such a statement into the bit-security framework introduced by Micciancio-Walter (EUROCRYPT 2018) and Watanabe-Yasunaga (ASIACRYPT 2021), it may cause a bit-security loss of $\log(1/\varepsilon)$. To resolve this issue, we derive a new variant of the XOR lemma in terms of the R\'enyi advantage, which directly characterizes bit security. In the course of proving this result, we prove a new variant of the hardcore lemma in terms of the conditional squared advantage; our proof uses a boosting algorithm that may output the $\bot$ symbol in addition to $0$ and $1$, which may be of independent interest.
Last updated:  2024-09-18
Direct Range Proofs for Paillier Cryptosystem and Their Applications
Zhikang Xie, Mengling Liu, Haiyang Xue, Man Ho Au, Robert H. Deng, and Siu-Ming Yiu
The Paillier cryptosystem is renowned for its applications in electronic voting, threshold ECDSA, multi-party computation, and more, largely due to its additive homomorphism. In these applications, range proofs for the Paillier cryptosystem are crucial for maintaining security, because of the mismatch between the message space in the Paillier system and the operation space in application scenarios. In this paper, we present novel range proofs for the Paillier cryptosystem, specifically aimed at optimizing those for both Paillier plaintexts and affine operations. We interpret encryptions and affine operations as commitments over the integers, as opposed to solely over $\mathbb{Z}_{N}$. Consequently, we propose direct range proofs under this interpretation, thereby eliminating the need for the auxiliary integer commitments required by the current state-of-the-art. Our work yields significant improvements: In the range proof for Paillier plaintext, our approach reduces communication overheads by approximately $60\%$, and computational overheads by $30\%$ and $10\%$ for the prover and verifier, respectively. In the range proof for Paillier affine operation, our method reduces the bandwidth by $70\%$, and computational overheads by $50\%$ and $30\%$ for the prover and verifier, respectively. Furthermore, we demonstrate that our techniques can be utilized to improve the performance of threshold ECDSA and the DCR-based instantiation of the Naor-Yung CCA2 paradigm.
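A compact Python sketch of textbook Paillier and the additive homomorphism that makes such range proofs necessary in the first place (toy parameters, unrelated to the paper's optimized proofs):

    import math, secrets

    # Toy primes (insecure; real deployments use primes of 1536+ bits).
    p_, q_ = 1789, 1867
    n = p_ * q_
    n2 = n * n
    g = n + 1
    lam = math.lcm(p_ - 1, q_ - 1)

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)

    def enc(m):
        r = secrets.randbelow(n - 1) + 1        # assume gcd(r, n) = 1
        return pow(g, m, n2) * pow(r, n, n2) % n2

    def dec(c):
        return L(pow(c, lam, n2)) * mu % n

    c1, c2 = enc(41), enc(1)
    assert dec(c1 * c2 % n2) == 42              # Enc(a)*Enc(b) opens to a + b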
Last updated:  2024-09-18
Marian: An Open Source RISC-V Processor with Zvk Vector Cryptography Extensions
Thomas Szymkowiak, Endrit Isufi, and Markku-Juhani Saarinen
The RISC-V Vector Cryptography Extensions (Zvk) were ratified in 2023 and integrated into the main ISA manuals in 2024. These extensions support high-speed symmetric cryptography (AES, SHA2, SM3, SM4) operating on the vector register file and offer significant performance improvements over scalar cryptography extensions (Zk) due to data parallelism. As a ratified extension, Zvk is supported by compiler toolchains and is already being integrated into popular cryptographic middleware such as OpenSSL. We report on Marian, the first open-source hardware implementation of a vector processor with the Zvk extensions. The design is based on the PULP ``Ara'' vector unit, which itself is an extension of the popular CVA6 processor. The implementation is in SystemVerilog and has been tested using Virtex Ultrascale+ FPGA prototyping, with a planned tapeout targeting a 22nm process node. We offer an analysis of the architectural requirements that vector cryptography imposes on a processor, as well as the initial estimates of performance and area for our implementation.
Last updated:  2024-09-18
Crooked Indifferentiability of the Feistel Construction
Alexander Russell, Qiang Tang, and Jiadong Zhu
The Feistel construction is a fundamental technique for building pseudorandom permutations and block ciphers. This paper shows that a simple adaptation of the construction is resistant, even to algorithm substitution attacks---that is, adversarial subversion---of the component round functions. Specifically, we establish that a Feistel-based construction with more than $337n/\log(1/\epsilon)$ rounds can transform a subverted random function---which disagrees with the original one at a small fraction (denoted by $\epsilon$) of inputs---into an object that is \emph{crooked-indifferentiable} from a random permutation, even if the adversary is aware of all the randomness used in the transformation. Here, $n$ denotes the length of both the input and output of the round functions that underlie the Feistel cipher. We also provide a lower bound showing that the construction cannot use fewer than $2n/\log(1/\epsilon)$ rounds to achieve crooked-indifferentiable security.
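Here is the textbook Feistel iteration that the paper hardens against subversion, in a few lines of Python (standard construction only; the crooked-indifferentiable variant requires many more rounds and specific randomization beyond this sketch):

    import hashlib

    N = 16                                   # branch size in bytes

    def F(key, x):                           # toy round function
        digest = hashlib.sha256(key + x.to_bytes(N, "big")).digest()
        return int.from_bytes(digest[:N], "big")

    def encrypt(keys, left, right):
        for k in keys:                       # (L, R) -> (R, L xor F(R))
            left, right = right, left ^ F(k, right)
        return left, right

    def decrypt(keys, left, right):
        for k in reversed(keys):             # undo rounds in reverse order
            left, right = right ^ F(k, left), left
        return left, right

    keys = [bytes([i]) * 8 for i in range(8)]
    ct = encrypt(keys, 123, 456)
    assert decrypt(keys, *ct) == (123, 456)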
Last updated:  2024-09-17
Interval Key-Encapsulation Mechanism
Alexander Bienstock, Yevgeniy Dodis, Paul Rösler, and Daniel Wichs
Forward-Secure Key-Encapsulation Mechanism (FS-KEM; Canetti et al. Eurocrypt 2003) allows Alice to encapsulate a key $k$ to Bob for some time $t$ such that Bob can decapsulate it at any time $t'\leq t$. Crucially, a corruption of Bob's secret key after time $t$ does not reveal $k$. In this work, we generalize and extend this idea by also taking Post-Compromise Security (PCS) into account and call it Interval Key-Encapsulation Mechanism (IKEM). Thus, we do not only protect confidentiality of previous keys against future corruptions but also confidentiality of future keys against past corruptions. For this, Bob can regularly renew his secret key and inform others about the corresponding public key. IKEM enables Bob to decapsulate keys sent to him over an interval of time extending into the past, in case senders have not obtained his latest public key; forward security only needs to hold with respect to keys encapsulated before this interval. This basic IKEM variant can be instantiated based on standard KEM, which we prove to be optimal in terms of assumptions as well as ciphertext and key sizes. We also extend this notion of IKEM for settings in which Bob decapsulates (much) later than Alice encapsulates (e.g., in high-latency or segmented networks): if a third user Charlie forwards Alice's ciphertext to Bob and, additionally, knows a recently renewed public key of Bob's, Charlie could re-encrypt the ciphertext for better PCS. We call this extended notion IKEMR. Our first IKEMR construction based on trapdoor permutations has (almost) constant sized ciphertexts in the number of re-encryptions; and our second IKEMR construction based on FS-PKE has constant sized public keys in the interval size. Finally, to bypass our lower bound on the IKEM(R) secret key size, which must be linear in the interval size, we develop a new Interval RAM primitive with which Bob only stores a constant sized part of his secret key locally, while outsourcing the rest to a (possibly adversarial) server. For all our constructions, we achieve security against active adversaries. For this, we obtain new insights on Replayable CCA security for KEM-type primitives, which might be of independent interest.
Last updated:  2024-09-17
Breaking and Repairing SQIsign2D-East
Wouter Castryck, Mingjie Chen, Riccardo Invernizzi, Gioella Lorenzon, and Frederik Vercauteren
We present a key recovery attack on SQIsign2D-East that reduces its security level from $\lambda$ to $\lambda/2$. We exploit the fact that each signature leaks a Legendre symbol modulo the secret degree of the private key isogeny. About $\lambda/2$ signatures are enough for these Legendre symbols to fully determine the secret degree, which can then be recovered by exhaustive search over a set of size $O(2^{\lambda/2})$. Once the degree is known, the private key isogeny itself can be found, again by exhaustive search, in time $\tilde{O}(2^{\lambda/2})$. We also present a new version of the protocol which does not leak any such information about the private key and show that our modified protocol is more efficient than the original one. Finally, we give a security analysis as well as a new proof of security.
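A toy Python illustration of the leakage mechanism: if each signature reveals the Legendre symbol of a known value modulo a secret (here a secret prime), a handful of symbols pins the secret down by exhaustive matching (schematic only; the real attack targets the secret isogeny degree over a search space of size $O(2^{\lambda/2})$):

    import secrets

    def is_prime(m):
        if m < 2:
            return False
        i = 2
        while i * i <= m:
            if m % i == 0:
                return False
            i += 1
        return True

    def legendre(a, p):
        s = pow(a, (p - 1) // 2, p)
        return -1 if s == p - 1 else s

    candidates = [m for m in range(1000, 2000) if is_prime(m)]
    secret = secrets.choice(candidates)

    # Each "signature" leaks one Legendre symbol modulo the secret.
    leaks = [(a, legendre(a, secret)) for a in range(2, 22)]

    survivors = [c for c in candidates
                 if all(legendre(a, c) == s for a, s in leaks)]
    print(survivors)   # with ~20 symbols, typically only the secret survives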
Last updated:  2024-09-17
On the Complexity of Cryptographic Groups and Generic Group Models
Cong Zhang, Keyu Ji, Taiyu Wang, Bingsheng Zhang, Hong-Sheng Zhou, Xin Wang, and Kui Ren
Ever since the seminal work of Diffie and Hellman, cryptographic (cyclic) groups have served as a fundamental building block for constructing cryptographic schemes and protocols. The security of these constructions can often be based on the hardness of (cyclic) group-based computational assumptions. Then, the generic group model (GGM) has been studied as an idealized model (Shoup, EuroCrypt 1997), which justifies the hardness of many (cyclic) group-based assumptions and shows the limits of some group-based cryptosystems. We stress that the importance of the length of the group encoding, whether in a concrete group-based construction or assumption or in the GGM, has not been studied. In this work, we initiate a systematic study on the complexity of cryptographic groups and generic group models, varying the length of group encodings, and demonstrate evidence that ``the length matters''. More concretely, we have the following results: -- We show that there is no black-box/relativizing reduction from the CDH-secure groups (i.e., over such groups, the computational Diffie-Hellman assumption holds) with shorter encodings, to the CDH-secure groups with longer encodings, within the same security parameter. More specifically, given any arbitrary longer CDH-secure group, it is impossible to generically shorten the group encoding and obtain a shorter CDH-secure group within the same group order. -- We show that there is a strict hierarchy of the GGMs with different lengths of encodings. That is, in the framework of indifferentiability, the shorter GGM is strictly stronger than the longer ones, even in the presence of computationally bounded adversaries.
Last updated:  2024-09-17
Traffic-aware Merkle Trees for Shortening Blockchain Transaction Proofs
Avi Mizrahi, Noam Koren, Ori Rottenstreich, and Yuval Cassuto
Merkle trees play a crucial role in blockchain networks in organizing network state. They allow proving a particular value of an entry in the state to a node that maintains only the root of the Merkle tree, a hash value computed over the data in a hierarchical manner. Verification of particular state entries is crucial for reaching consensus on the execution of a block where state information is required in the processing of its transactions. For instance, a payment transaction should be based on the balance of the two involved accounts. The proof length affects the network communication and is typically logarithmic in the state size. In this paper, we take advantage of typical transaction characteristics to better organize Merkle trees and improve blockchain network performance. We focus on the common transaction processing where Merkle proofs are jointly provided for multiple accounts. We first provide lower bounds on the communication cost based on the distribution of accounts involved in the transactions. We then describe algorithms that consider traffic patterns to significantly reduce it. The algorithms are inspired by various coding methods such as Huffman coding, partitioning, and weight balancing. We also generalize our approach towards the encoding of smart contract transactions that involve an arbitrary number of accounts. Finally, we rely on real blockchain data to show the savings allowed by our approach. The experimental evaluation is based on transactions from the Ethereum network and demonstrates cost reduction for both payment transactions and smart contract transactions.
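For reference, a minimal Python Merkle tree with single-leaf proof generation and verification, the baseline whose multi-account proofs the paper optimizes (our sketch, not the paper's traffic-aware layout):

    import hashlib

    def h(data):
        return hashlib.sha256(data).digest()

    def build_tree(leaves):                   # power-of-two leaf count assumed
        levels = [[h(x) for x in leaves]]
        while len(levels[-1]) > 1:
            prev = levels[-1]
            levels.append([h(prev[i] + prev[i + 1])
                           for i in range(0, len(prev), 2)])
        return levels                         # levels[-1][0] is the root

    def prove(levels, index):
        proof = []
        for level in levels[:-1]:
            proof.append(level[index ^ 1])    # sibling on each level
            index //= 2
        return proof

    def verify(root, leaf, index, proof):
        node = h(leaf)
        for sibling in proof:
            pair = node + sibling if index % 2 == 0 else sibling + node
            node, index = h(pair), index // 2
        return node == root

    leaves = [b"acct-%d" % i for i in range(8)]
    levels = build_tree(leaves)
    assert verify(levels[-1][0], leaves[5], 5, prove(levels, 5))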
Last updated:  2024-09-17
TentLogiX: 5-bit Chaos-Driven S-Boxes for Lightweight Cryptographic Systems
Maha Allouzi and Arefeh Rahaei
Cryptography is a crucial method for ensuring the security of communication and data transfers across networks. While it excels on devices with abundant resources, such as PCs, servers, and smartphones, it may encounter challenges when applied to resource-constrained Internet of Things (IoT) devices like Radio Frequency Identification (RFID) tags and sensors. To address this issue, a demand arises for a lightweight variant of cryptography known as lightweight cryptography (LWC). In developing any cryptographic algorithm, the substitution box (S-box) is a fundamental component, providing nonlinear functionality between inputs and outputs. Researchers have proposed diverse S-box designs tailored for various applications, but only a few manage to balance the trade-offs among cost, performance, and security, particularly in the context of resource-constrained IoT devices. This paper delves into the realm of S-boxes employed in popular LWC algorithms, categorizing them by their input–output bit sizes and elucidating their strengths and limitations. The focus then shifts to a novel 5-bit S-box design, utilizing chaotic mapping theory to introduce randomized behavior. Subsequently, the paper proposes TentLogiX, a 5-bit S-box constructed from a compound chaotic (tent-logistic) system, which has better chaotic performance and wider chaotic sequences, and explores its security robustness through various cryptanalysis techniques, including bijectivity analysis, nonlinearity assessment, linearity evaluation, and differential cryptanalysis. The paper concludes by presenting a thorough comparison that underscores the superiority of the TentLogiX 5-bit S-box over its 5-bit counterparts.
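To illustrate the general chaos-driven recipe (not the paper's exact compound map or parameters), here is a short Python sketch that iterates tent and logistic maps and ranks the trajectory into a bijective 5-bit S-box:

    def tent(x, mu=1.99):
        return mu * x if x < 0.5 else mu * (1.0 - x)

    def logistic(x, r=3.99):
        return r * x * (1.0 - x)

    x, traj = 0.376, []
    for _ in range(200):                    # discard transient iterations
        x = logistic(tent(x))
    for _ in range(32):
        x = logistic(tent(x))
        traj.append(x)

    # Ranking 32 distinct reals yields a permutation of {0, ..., 31}.
    order = sorted(range(32), key=lambda i: traj[i])
    sbox = [0] * 32
    for rank, i in enumerate(order):
        sbox[i] = rank
    assert sorted(sbox) == list(range(32))  # bijective 5-bit S-box
    print(sbox)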
Last updated:  2024-09-17
Falsifiability, Composability, and Comparability of Game-based Security Models for Key Exchange Protocols
Chris Brzuska, Cas Cremers, Håkon Jacobsen, Douglas Stebila, and Bogdan Warinschi
A security proof for a key exchange protocol requires writing down a security definition. Authors typically have a clear idea of the level of security they aim to achieve, e.g., forward secrecy. Defining the model formally additionally requires making choices on game-based vs. simulation-based models, on partnering, on having one or more Test queries, and on adopting a style of avoiding trivial attacks: exclusion, penalizing, or filtering. We elucidate the consequences, advantages, and disadvantages of the different possible model choices. Concretely, we show that a model with multiple Test queries composes tightly with symmetric-key protocols, while models with a single Test query require a hybrid argument that loses a factor in the number of sessions. To illustrate the usefulness of models with multiple Test queries, we prove the Naxos protocol secure in said model and obtain a tighter bound than adding a hybrid argument on top of a proof in a single Test query model. Our composition model exposes partnering information to the adversary, circumventing a previous result by Brzuska, Fischlin, Warinschi, and Williams (CCS 2011) showing that the protocol needs to provide public partnering. Moreover, our baseline theorem of key exchange partnering shows that partnering by key equality provides a joint baseline for most known partnering mechanisms, countering previous criticism by Li and Schäge (CCS 2017) that security in models with existential quantification over session identifiers is non-falsifiable.
Last updated:  2024-09-17
Randomness in Private Sequential Stateless Protocols
Hari Krishnan P. Anilkumar, Varun Narayanan, Manoj Prabhakaran, and Vinod M. Prabhakaran
A significant body of work in information-theoretic cryptography has been devoted to the fundamental problem of understanding the power of randomness in private computation. This has included both in-depth study of the randomness complexity of specific functions (e.g., Couteau and Rosén, ASIACRYPT 2022, give an upper bound of 6 for $n$-party $\mathsf{AND}$), and results for broad classes of functions (e.g., Kushilevitz et al., STOC 1996, give an $O(1)$ upper bound for all functions with linear-sized circuits). In this work, we make further progress on both fronts by studying randomness complexity in a new simple model of secure computation called the Private Sequential Stateless (PSS) model. We show that functions with $O(1)$ randomness complexity in the PSS model are exactly those with constant-width branching programs, restricting to “speak-constant-times” protocols and to “read-constant-times” branching programs. Towards this, our main construction is a novel PSS protocol for “strongly regular branching programs” (SRBPs). As we show, any constant-width branching program can be converted to a constant-width SRBP, yielding one side of our characterization. The converse direction uses ideas from Kushilevitz et al. to translate randomness to communication. Our protocols are concretely efficient, have a simple structure, and cover the broad class of functions with small-width, read-once (or read-a-few-times) branching programs, and hence may be of practical interest when 1-privacy is considered adequate. Also, as a consequence of our general result for SRBPs, we obtain an improvement over the protocol of Couteau and Rosén for $\mathsf{AND}$ in certain cases: not in terms of the number of bits of randomness, but in terms of a simpler protocol structure (sequential, stateless).