Message-ID: <3d650cc9ff07462e5c55cc3d9c0da72a3f2c5df2.camel@redhat.com>
Date: Mon, 16 Jun 2025 13:27:24 -0400
From: Simo Sorce <simo@...hat.com>
To: James Bottomley <James.Bottomley@...senPartnership.com>, Ignat Korchagin
<ignat@...udflare.com>, David Howells <dhowells@...hat.com>
Cc: Herbert Xu <herbert@...dor.apana.org.au>, Stephan Mueller
<smueller@...onox.de>, torvalds@...ux-foundation.org, Paul Moore
<paul@...l-moore.com>, Lukas Wunner <lukas@...ner.de>, Clemens Lang
<cllang@...hat.com>, David Bohannon <dbohanno@...hat.com>, Roberto Sassu
<roberto.sassu@...wei.com>, keyrings@...r.kernel.org,
linux-crypto@...r.kernel.org, linux-security-module@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: Module signing and post-quantum crypto public key algorithms
On Mon, 2025-06-16 at 11:14 -0400, James Bottomley wrote:
> The main worry everyone has is that while it is believed that there's
> not a quantum short cut over classical for lattice algorithms, they
> haven't been studied long enough to believe there's no classical short
> cut to breaking the encryption. The only real algorithms we're sure
> about are the hash based ones, so perhaps we should start with XMSS/LMS
> before leaping to ML-. Particularly for kernel uses like modules, the
> finite signatures problem shouldn't be that limiting.
The only cases where you can use LMS/XMSS in software are if you
perform verification exclusively, or if you perform a small number of
signatures and then immediately destroy the private key.
LMS and XMSS absolutely cannot be used as software algorithms to
generate signatures while keeping a key around, because ensuring the
state is never reused is fundamentally impossible in software. And a
single state reuse in LMS/XMSS means complete breakdown of the
cryptosystem.
Due to the above, implementing LMS/XMSS signature generation in
software is in general a *very bad idea*(TM), because people do not
understand how it can be used safely, and I would seriously
discourage it.
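To make the hazard concrete, here is a minimal, self-contained Python
sketch of the discipline a stateful signer must follow. The HMAC is
just a stand-in for the one-time signature and every name here is
illustrative, not a real LMS API; the point is purely the ordering:

    # Persist the advanced state durably *before* releasing a
    # signature.  Sign-then-save would reuse a leaf after a crash,
    # and a reused leaf leaks enough to forge signatures.
    import hmac, hashlib, os

    SECRET = b"demo-key"  # stand-in for the LMS private key

    def sign_statefully(state_path, message):
        # 1. Read the index of the next unused one-time key.
        try:
            with open(state_path, "rb") as f:
                index = int(f.read() or b"0")
        except FileNotFoundError:
            index = 0
        # 2. Durably record that this leaf is consumed.  A crash
        #    right after this merely wastes a leaf, never reuses one.
        with open(state_path, "wb") as f:
            f.write(str(index + 1).encode())
            f.flush()
            os.fsync(f.fileno())
        # 3. Only now produce the signature bound to that index.
        sig = hmac.new(SECRET, str(index).encode() + message,
                       hashlib.sha256).digest()
        return index, sig

And note that even this ordering is defeated by anything that can roll
the state file back behind the program's back: a VM snapshot restore,
a filesystem backup, or a second process or host sharing the key.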
The next option in this line of thought is SLH-DSA (which I would favor
if not for the following).
The problems with SLH-DSA are that it has rather large signatures,
that it is the slowest of all the algorithms, and that CNSA 2.0 does
not list it as approved :-(
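For concreteness, the FIPS 204/205 signature sizes at the top security
level (category 5) illustrate the gap:

    ML-DSA-87          signature  4,627 bytes
    SLH-DSA-SHA2-256s  signature 29,792 bytes (small but slow variant)
    SLH-DSA-SHA2-256f  signature 49,856 bytes (faster, larger still)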
> > > Current estimates say Shor's algorithm in "reasonable[1]" time
> > > requires around a million qubits to break RSA2048, so we're still
> > > several orders of magnitude off that.
> >
> > Note that you are citing sources that count the physical qubits
> > needed for error correction, but what IBM publishes is a roadmap
> > for *error-corrected* logical qubits. If they can pull that off,
> > that computer will already be uncomfortably close (you need 2n+3
> > error-corrected logical qubits to break RSA).
>
> The roadmap is based on a presumption of linear physical-to-logical
> qubit scaling. Since quantum error effects are usually exponential in
> nature that seems optimistic ... but, hey, we should know in a couple
> of years.
To be honest it does not really matter: either we'll have a workable
quantum computer or we won't, and if we do, the scaling will be rapid
enough that the difference in required qubits won't really matter. I
find it very unlikely that we'll find ourselves in a situation where
we have a QC that can efficiently run Grover's algorithm with enough
qubits, yet implementing Shor's is too hard and will take a decade or
more to reach.
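(For scale, the 2n+3 figure quoted above is concrete enough to work
out: breaking RSA-2048 needs 2 * 2048 + 3 = 4099 error-corrected
logical qubits.)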
> > so it is not really a concern; even with the smallest key sizes
> > the search space is still 2^64 ... so it makes little sense to
> > spend a lot of engineering time finding all the places where
> > doubling the key size breaks things and then do a micro-migration
> > to that. It is better to focus the scarce resources on the long
> > term.
>
> Well, the CNSA 2.0 doc you cite above hedges and does a 1.5x
> security-bit increase, so even following it we can't do P-256, 25519
> or RSA2048; we have to move to at least P-384 and X448 (even though
> it allows RSA3072, I don't think we should be supporting that). So if
> we're going to have to increase key size anyway, we may as well up it
> to 256 bits of security.
>
> So even if you believe quantum is slightly more imminent than the
> Kazakh Gerbil invasion, we should still begin with the key size
> increase.
What I believe is that we should not worry about Grover, because if we
get a workable Grover implementation we'll get Shor's too, which
breaks classical algorithms entirely. Therefore we had better move to
PQ algorithms and not spend time on a "small transition".
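For reference, Grover only gives a quadratic speedup: a brute-force
search over 2^n keys drops to on the order of 2^(n/2) quantum
operations, so even a 128-bit key still costs roughly 2^64 largely
serial quantum steps, while Shor breaks RSA and ECC outright in
polynomial time.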
Of course we can decide to hedge *all bets* and move to a composite
signature (both a classical and a PQ one), in which case I would
suggest looking into signatures that use ML-DSA-87 + Ed448 or
ML-DSA-87 + P-521, ideally disjoint, with a kernel policy that can
decide which (or both) needs to be valid/checked, so that the policy
can be changed quickly via configuration if either signature scheme is
broken.
This would allow fears about lattices not being vetted enough to be
managed, while also increasing the strength of the classical option
and keeping key and signature sizes manageable.
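As a rough illustration of the policy idea (in Python; the names are
hypothetical, not an existing kernel interface, and the verifier
callables stand in for real ML-DSA-87 and Ed448 implementations):

    from enum import Enum

    class Policy(Enum):
        BOTH = "both"           # default: both signatures must verify
        PQ_ONLY = "pq"          # flip here if the classical scheme falls
        CLASSICAL_ONLY = "ec"   # flip here if the lattice scheme falls

    def verify_composite(policy, module, sig_pq, sig_ec,
                         verify_mldsa87, verify_ed448):
        # Verify both halves of the disjoint composite signature, then
        # let the policy decide which results actually gate loading.
        ok_pq = verify_mldsa87(module, sig_pq)
        ok_ec = verify_ed448(module, sig_ec)
        if policy is Policy.BOTH:
            return ok_pq and ok_ec
        if policy is Policy.PQ_ONLY:
            return ok_pq
        return ok_ec

The point of carrying both signatures is exactly that flipping the
policy is a configuration change, not a re-signing of every module.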
--
Simo Sorce
Distinguished Engineer
RHEL Crypto Team
Red Hat, Inc