Unofficial IACR eprint updates. Only posts about new papers, follow @eprintrevision for updates about revisions.
Website: https://eprint.iacr.org
#eprint An analysis of a weakened version of PRISM by Jolijn Cottaar, Steven D. Galbraith, Luciano Maino, Monika Trimoska (https://ia.cr/2026/906)

PRISM (PKC25) is a hash-and-sign signature scheme whose security relies on the hardness of computing large-prime-degree isogenies originating from a curve of unknown endomorphism ring. In PRISM, the degree of such isogenies is obtained by hashing messages onto a set of large odd integers that pass a primality test. In this work, we investigate the impact of the choice of primality test on the security of PRISM. We first show that when a weak primality test is used, the assumption underlying the security proof in the standard model does not hold. We then extend our analysis to the assumption used in the security proof in the (quantum) random oracle model. In this setting, we argue that the Miller-Rabin test suffices and estimate the minimal number of iterations required for PRISM to achieve the desired security level, thus minimising signing costs.
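
The trade-off the abstract describes, between Miller-Rabin iteration count and error probability, can be made concrete with a minimal sketch of the test (a standard textbook version, not PRISM's actual implementation):

```python
import random

def miller_rabin(n, iterations):
    """Standard Miller-Rabin primality test (textbook version).
    Each iteration with a uniformly random base detects a composite
    with probability at least 3/4, so k iterations leave an error
    probability of at most 4^(-k); picking the smallest adequate k
    is what minimises signing cost in a scheme like PRISM."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                 # write n - 1 = 2^s * d with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(iterations):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness: n is composite
    return True                     # probably prime
```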

IACR Cryptology ePrint Archive
#eprint Zero-Knowledge Proofs for Gradient Boosted Decision Trees by Jiacheng Gao, Wenjie Qu, Yuan Zhang, Sheng Zhong, Jiaheng Zhang (https://ia.cr/2026/907)

Gradient boosted decision trees (GBDTs) are among the most effective models for tabular data and are widely used in domains such as finance, healthcare, and risk assessment. As these models are increasingly trained and served by external providers, clients need a way to check that a prediction or a model update was produced by the claimed training pipeline. At the same time, the provider may need to keep the training data and model parameters private. This makes zero-knowledge proofs for GBDT training and inference a natural tool for accountable machine learning. Existing constructions for proving GBDT training typically rely on generic ZKP compilers: they build a certification circuit that checks the forest against the training data and then prove the circuit's execution, which leads to high prover cost. On the other hand, a more direct approach that decomposes the proof of training into algebraic constraints inevitably introduces many auxiliary witnesses. Proving these constraints separately would result in a large number of independent auxiliary commitments, whose commitment and opening costs can dominate both proof size and prover time. Batching these constraints is also difficult because they come from different stages of training and have potentially different witness shapes and sizes, which are committed over different domains. We present \textsc{Terrae}, a zero-knowledge proof system for quantized GBDT training and inference based on KZG polynomial commitments. \textsc{Terrae} avoids both the dependency on proving circuit computation and the cost of proving each constraint separately, by leveraging the structure of GBDT training and batching the constraints in a novel way. We introduce two batching techniques: domain-lifting batching for linear constraints and interleaving batching for non-linear constraints. Both techniques work over differently-sized domains and reduce many constraints to a single claim without introducing extra polynomial commitments.
We also design a histogram proof that proves the correctness of converting sample-wise data into its frequency representation, which may be of independent interest. Our evaluation shows that, compared with prior approaches, \textsc{Terrae} significantly reduces proof-generation time while adding only a small proof-size overhead.
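
The histogram relation mentioned in the last sentence is easy to state in the clear. Below is a hypothetical non-ZK reference check of that statement (illustrative only; the paper's contribution is proving this relation without revealing the samples):

```python
from collections import Counter

def check_frequency_representation(samples, frequencies):
    """Reference (non-ZK) statement of the histogram relation:
    `frequencies` must map each quantized value to its exact
    multiplicity in `samples` (zero-count entries are allowed).
    A histogram proof certifies this same statement in zero
    knowledge, without revealing the samples themselves."""
    return dict(Counter(samples)) == {v: c for v, c in frequencies.items() if c}
```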

#eprint Titan: Efficient Polynomial Commitments from IOPs over Groups by Chethan Kamath, Ravi Prakash, Samipa Samanta, Sruthi Sekar, Nitin Singh (https://ia.cr/2026/908)

In this paper, we propose Titan, an efficient polynomial commitment scheme (PCS) with transparent setup. It achieves commitment time of $O(n)$ and evaluation time of $O(\sqrt{n})$, while the proof size and verification time scale as $O(\sqrt[4]{n})$. Titan features proof sizes an order of magnitude smaller than hash-based PCSs, together with a significantly more efficient prover and verifier than state-of-the-art group-based schemes like Dory and Hyrax. To achieve this balance, Titan borrows two-tiered commitments from Dory, but realizes the outer commitment using interactive oracle proofs of proximity (IOPPs) over groups, specifically WHIR, instead of expensive bilinear pairings. This allows Titan to be instantiated over general curves with discrete-log hardness, such as the Pasta curves, instead of requiring pairing-friendly curves. We compile a variant of the Spartan protocol for R1CS with the Titan PCS to obtain a new SNARK, which we call TitanSnark. TitanSnark preserves the prover efficiency of the existing Spartan protocol, while improving proof size and verification quadratically, from $O(\sqrt{n})$ to $O(\sqrt[4]{n})$. Concretely, for circuits of size $\geq 2^{22}$, this yields around $3\times$ smaller proof size and verification cost. Our blueprint of combining IOPPs over groups with Pedersen-style inner commitments is of independent interest, as are several optimizations for efficiently realizing the WHIR IOPP over prime-order groups.
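
The $O(\sqrt{n})$ evaluation cost of two-tiered schemes typically comes from arranging the $n$ coefficients in a $\sqrt{n}\times\sqrt{n}$ matrix, so that evaluation is a vector-matrix-vector product. A toy sketch of that layout, under the assumption that Titan's two-tier structure follows this common pattern (plain integers stand in for committed group elements):

```python
import math

def eval_two_tier(coeffs, x):
    """Evaluate p(x) = sum c_i x^i via the two-tier layout: the n
    coefficients form an m x m matrix M with m = sqrt(n), so that
    p(x) = row(x) . M . col(x), where col(x) = (1, x, ..., x^(m-1))
    and row(x) = (1, x^m, ..., x^(m(m-1))). Each tier handles only
    m = sqrt(n) values, which is the source of the sqrt(n) cost."""
    n = len(coeffs)
    m = math.isqrt(n)
    assert m * m == n, "pad coefficients to a square length"
    col = [x**j for j in range(m)]        # inner powers
    row = [x**(m * i) for i in range(m)]  # outer powers
    # M[i][j] = c_{i*m + j}, so x^(m*i) * x^j matches coefficient index i*m + j
    return sum(row[i] * coeffs[i * m + j] * col[j]
               for i in range(m) for j in range(m))
```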

#eprint On Succinct Non-Interactive Secure Computation with Malicious Security by Maya Farber Brodsky, Arka Rai Choudhuri, Abhishek Jain, Omer Paneth (https://ia.cr/2026/909)

A non-interactive secure computation (NISC) protocol allows a client with input $x$ and a server with input $y$ to compute $f(x,y)$ using a single message from the client and a single response from the server. The protocol is called succinct if the size of the server’s message depends only on the output length and is independent of the size of $y$ and the complexity of $f$. In the semi-honest setting, succinct NISC is known from fully homomorphic encryption (FHE). In contrast, malicious security is currently known only from non-standard assumptions, such as SNARKs for NP. In this work, we construct maliciously secure succinct NISC protocols for natural and widely studied functionalities from standard assumptions, namely, FHE and batch arguments (BARGs). Our first result is a protocol for private set membership (PSM): the client holds an element $x$, the server holds a large set $S$, and the function outputs $1$ if and only if $x \in S$. We then give several generalizations:

- Dictionary lookup: The server holds a dictionary $D$ of key–value pairs, the client’s input is a key $k$, and the output is $D[k]$.
- Verifiable dictionary lookup: The server’s dictionary must additionally satisfy a predicate $P$, computable by a read-once machine with small state.
- UP search: The client input is an instance $x$, and the output is $D[w]$, where $w$ is the unique witness for $x$ under some UP relation.

Our protocols achieve split-simulation security against a malicious server and standard security against a malicious client. Split-simulation is a relaxation of the standard real-ideal paradigm, where correctness of the client’s output and indistinguishability of the server’s view are guaranteed separately. At the heart of our results lies a new simulation technique in which the server’s large input is extracted piece by piece and reconstructed into a coherent input. This reconstruction is enabled by a new monotone coupling argument based on Strassen’s theorem.

#eprint UnifOMR: Oblivious Message Retrieval with Near-optimal Concrete Efficiency by Ben Fisch, Zeyu Liu, Eran Tromer, Yunhao Wang (https://ia.cr/2026/910)

End-to-end encryption guarantees message confidentiality but does not hide metadata such as communication patterns among senders and recipients, or their identities. Oblivious Message Retrieval (OMR) is a cryptographic protocol that enables servers to assist recipients in retrieving their messages from a database without learning the mapping between messages and recipients, thereby protecting such metadata. This paper investigates two central questions about OMR: (1) What is the precise relationship between OMR and the better-studied primitive of Private Information Retrieval (PIR)? (2) Can OMR schemes achieve concrete efficiency comparable to state-of-the-art PIR protocols? We show that OMR with a property we call strong detection-key-unlinkability is at least as hard as PIR, and that existing OMR constructions already satisfy this property. This PIR-to-OMR reduction has low overhead, suggesting that OMR cannot be made substantially more efficient than PIR. We then present $\mathsf{UnifOMR}$, which achieves $20\times$ to $1080\times$ faster server runtime over the state-of-the-art $\mathsf{SophOMR}$ under practical parameter settings. For $2^{19}$ messages of 612 bytes each, $\mathsf{UnifOMR}$ completes in only ${\sim}25$ seconds with 4 MB of communication, compared to $>1250$ seconds and 260 KB for $\mathsf{SophOMR}$. These gains come with two trade-offs: an asymptotically linear digest size (albeit with small constants), and two rounds of interaction between the detector and the client. Crucially, $\mathsf{UnifOMR}$ uses batch PIR as a black-box component, which in our experiments accounts for $50$--$92\%$ of the server runtime. Thus, $\mathsf{UnifOMR}$ nearly matches the aforementioned lower bound concretely (for databases of $2^{16}$ to $2^{23}$ messages, each with $612$ to $3060$ bytes), given the status quo of batch PIR.

#eprint UC4Free! Existing Threshold Signatures are UC Secure by Jan Bobolz, Elizabeth Crites, Markulf Kohlweiss, Akira Takahashi (https://ia.cr/2026/911)

Threshold signatures have received considerable attention in recent years due to ongoing standardization efforts and deployment in real-world systems. In this work, we prove the universal composability of a wide range of threshold signature schemes, including state-of-the-art protocols compatible with standard signatures used in practice, such as BLS and Schnorr signatures, as well as emerging post-quantum solutions. Importantly, we show UC security without any modifications to the existing protocols. To this end, we design natural game-based definitions to capture different combinations of the main threshold signature scheme properties, such as different levels of unforgeability, adaptive corruption, robustness, and different degrees of preprocessing. These definitions generalize prior definitional work, such as Bellare et al. (CRYPTO'22), and cover a wide range of existing schemes. Moreover, we identify and resolve gaps in prior work. We then express these properties in terms of a UC ideal functionality $\mathcal{F}\text{-}\mathtt{TS3}$. We prove that a threshold signature scheme UC-realizes $\mathcal{F}\text{-}\mathtt{TS3}$ if and only if it satisfies our game-based definitions. This opens up the use of (existing) threshold signature schemes in a UC setting, enabling scheme designers to formulate their protocols relative to an ideal threshold signature functionality and use the UC composition theorem to argue security given any concrete instantiation. To further support scheme designers and give guidance on UC modeling for threshold signatures, we provide additional ideal threshold signature functionalities $\mathcal{F}\text{-}\mathtt{TS2}$, $\mathcal{F}\text{-}\mathtt{TS1}$, $\mathcal{F}\text{-}\mathtt{TSSync2}$, and $\mathcal{F}\text{-}\mathtt{TSSync1}$, which capture fewer properties than $\mathcal{F}\text{-}\mathtt{TS3}$ but are more convenient to use.
$\mathcal{F}\text{-}\mathtt{TS2}$, $\mathcal{F}\text{-}\mathtt{TS1}$, $\mathcal{F}\text{-}\mathtt{TSSync2}$, $\mathcal{F}\text{-}\mathtt{TSSync1}$ can also be UC-realized by schemes proven secure according to our game-based definitions. Through this work, we show that composable security does not require sacrificing performance, but it does require rigor when setting up game-based definitions and ideal functionalities.

#eprint Improved TensorPIR: Single-Server PIR with Lower Communication Cost by Yingchu Lv, Yanbin Pan, Huaxiong Wang (https://ia.cr/2026/912)

Private Information Retrieval (PIR) is of growing importance in privacy-preserving data access, as it enables users to retrieve information from databases without revealing their query content, thereby aligning with modern data protection and regulatory standards. State-of-the-art schemes, such as HintlessPIR and TensorPIR proposed by Li et al. at CRYPTO 2024, leverage lattice-based cryptography for efficient and privacy-preserving data retrieval. HintlessPIR achieves a communication complexity of $O(N^{1/2})$, which remains suboptimal for large databases. To further reduce communication overhead, the same work introduces TensorPIR, lowering the asymptotic complexity to $O(N^{1/3})$. However, this improvement requires larger parameters and more CRT moduli, leading to a practical communication cost that is not significantly smaller than that of HintlessPIR. In this work, we propose a new framework that rethinks the encryption strategy for the index, reducing both communication and computation costs through fewer CRT moduli. In experiments on 16 GB, 32 GB, 64 GB, and 128 GB databases, our total communication cost drops to as low as 45.5% of TensorPIR's. Theoretically, as $N$ grows, our query and answer sizes are reduced to 36.9% and 22.2% of TensorPIR's, respectively. Compared with HintlessPIR, our scheme achieves lower theoretical communication complexity, leading to substantially smaller practical communication for large $N$. Moreover, our total online time is reduced to 28.9% to 56.1% of HintlessPIR's.
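
The jump from $O(N^{1/2})$ to $O(N^{1/3})$ communication is commonly obtained by viewing the database as a cube rather than a square, so the query sends three length-$N^{1/3}$ selection vectors instead of two length-$N^{1/2}$ ones. A toy, non-private sketch of just that index arithmetic (real schemes such as TensorPIR encrypt the selectors under lattice assumptions; this omits all encryption):

```python
def pir_cube_select(db, index):
    """Toy cube layout behind O(N^(1/3)) PIR communication: view
    the N-entry database as an m x m x m cube with m = N^(1/3) and
    select one entry with three length-m one-hot vectors. In a real
    scheme the selectors are encrypted, so the server learns nothing
    about `index`; here we show only the layout and arithmetic."""
    m = round(len(db) ** (1 / 3))
    assert m ** 3 == len(db), "pad database to a cube"
    a, rest = divmod(index, m * m)
    b, c = divmod(rest, m)
    one_hot = lambda k: [int(i == k) for i in range(m)]
    sa, sb, sc = one_hot(a), one_hot(b), one_hot(c)  # 3m values sent, not N
    return sum(sa[i] * sb[j] * sc[k] * db[(i * m + j) * m + k]
               for i in range(m) for j in range(m) for k in range(m))
```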

#eprint Algorithmic Toolkit for Linearization of S-boxes by Alex Biryukov, Philip Tureček, Aleksei Udovenko (https://ia.cr/2026/913)

Linearization is a cryptanalysis technique in which a nonlinear function (an S-box) is represented by an affine mapping on a certain subset of inputs. Variants of it have been applied to analyze Keccak, LowMC, RAIN, and AIM. In these primitives, the S-boxes are either very small (up to 5 bits) or very specific monomial functions over a binary field. Linearization of arbitrary S-boxes has never been explored in practice, due to a lack of theoretical, algorithmic, and cryptanalytic understanding. For the first time, we develop an algorithmic toolkit which allows one to compute strong linearizations of S-boxes, when they exist. For up to $n=8$ bits, our algorithms can provably find the best possible approximations, while for larger S-boxes it is feasible to obtain good approximations together with meaningful upper bounds. We apply our algorithms to a variety of S-boxes from existing primitives, to monomial functions, to so-called APN functions, and to 16-bit Super-Sboxes. Our results raise many new open questions, open up new research directions, and lay a foundation for developing cryptanalytic attacks. To advance the cryptanalytic utility of linearization, we study and solve the problem of covering an S-box with multiple approximations. As an application, we derive a generic linearization approach for the constrained-input constrained-output (CICO) problem over substitution-permutation network (SPN) permutations with general linear layers. This is the first such general cryptanalysis based on the existence of a strong linearization of the S-box.
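
The definition in the first sentence is easy to make concrete: given a candidate affine map over $\mathbb{F}_2$, one can brute-force the subset of inputs on which an S-box agrees with it. A hypothetical helper illustrating that definition (not the paper's algorithm, which also has to search for a good affine map, the hard part):

```python
def affine_agreement_set(sbox, A, b, n):
    """For an n-bit S-box given as a lookup table and an affine map
    x -> A*x xor b over GF(2), return the set of inputs on which
    they agree, i.e. the domain of this linearization. A[j] is the
    image of the j-th basis vector, packed as an n-bit integer."""
    def apply_affine(x):
        y = b
        for j in range(n):
            if (x >> j) & 1:
                y ^= A[j]  # xor in the column of A for bit j
        return y
    return {x for x in range(1 << n) if sbox[x] == apply_affine(x)}
```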

#eprint Post-Quantum Authenticated Key Exchange via Signcryption with Ephemeral Key Masking by Mostefa Kara, Konstantinos Karampidis, Muath AlShaikh (https://ia.cr/2026/914)

We present PQES-AKE, a novel two-party authenticated key exchange (AKE) protocol built upon the Post-Quantum Encryption and Signcryption Scheme (PQES) introduced by Kara et al. The protocol achieves mutual authentication, session key secrecy, and forward secrecy in a post-quantum adversarial model. The central design principle of PQES-AKE is the concealment of ephemeral Diffie-Hellman (DH) keys within affine masks derived from randomness generated internally by the PQES signcryption algorithm. The security of PQES-AKE rests on three hardness assumptions: Learning With Errors (LWE) for ciphertext confidentiality, a Secret-Base Discrete Exponentiation (SBDE) assumption for signature unforgeability, and the Computational Diffie-Hellman (CDH) assumption for session key secrecy. The protocol completes in two network rounds, with a total communication cost of approximately 627 bytes, and requires 17 modular exponentiations. A Python prototype evaluated on a commodity Intel Core i7 laptop achieves an average execution time of 6.2 ms. These results confirm that PQES-AKE provides a competitive, single-primitive alternative to composite KEM-then-authenticate constructions in post-quantum secure channel establishment.

#eprint Execution-time and microarchitectural profiling of RustCrypto and PQClean ML-KEM/ML-DSA implementations under Linux cgroup resource constraints by Akram Bensebaa (https://ia.cr/2026/915)

Existing benchmarks of post-quantum cryptographic (PQC) algorithms focus on raw throughput or latency on unconstrained hardware, leaving open the question of how implementation choices affect microarchitectural behavior under varying resource pressure. This paper addresses that gap by profiling RustCrypto and PQClean implementations of ML-KEM-768 and ML-DSA-65 across three cgroup-enforced configurations (0.5 CPUs, 1.0 CPUs, and unconstrained). The C baseline uses PQClean reference code; the Rust implementation uses the RustCrypto project's ml-kem crate v0.3.0 and ml-dsa crate v0.0.4. We use robust non-parametric statistics (median and interquartile range) over the full execution distributions to reduce sensitivity to scheduler-induced timing variance and avoid arbitrary outlier filtering. Alongside hardware performance counters, we present flame graphs to analyze sampled execution hotspots. Results show that the evaluated Rust configurations exhibit a latency overhead over clang-compiled C for both ML-KEM (+42.4%) and ML-DSA (+47.8%) under severe 0.5-CPU throttling. Massif traces attribute all additional Rust heap usage entirely to runtime initialization, with no observable dynamic allocation within the measured cryptographic core execution paths.
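
The median/IQR methodology can be sketched with Python's standard library; a summary like this barely moves when a scheduler hiccup inflates a few samples, which is why no ad-hoc outlier filtering is needed (illustrative sketch, not the paper's analysis pipeline):

```python
import statistics

def robust_summary(samples):
    """Median and interquartile range (IQR): robust location and
    spread statistics computed over the full distribution. A few
    scheduler-inflated samples barely move them, unlike the mean
    and standard deviation."""
    q1, med, q3 = statistics.quantiles(samples, n=4, method="inclusive")
    return med, q3 - q1
```

For example, `robust_summary([1, 2, 3, 4, 5])` gives a median of 3.0 and an IQR of 2.0, and appending a single pathological sample of 1000 only shifts the median to 3.5.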
