Message-ID: <de070353cc7ef2cd6ad68f899f3244917030c39b.camel@redhat.com>
Date: Fri, 13 Jun 2025 13:33:54 -0400
From: Simo Sorce <simo@...hat.com>
To: Ignat Korchagin <ignat@...udflare.com>, David Howells
<dhowells@...hat.com>
Cc: Herbert Xu <herbert@...dor.apana.org.au>, Stephan Mueller
<smueller@...onox.de>, torvalds@...ux-foundation.org, Paul Moore
<paul@...l-moore.com>, Lukas Wunner <lukas@...ner.de>, Clemens Lang
<cllang@...hat.com>, David Bohannon <dbohanno@...hat.com>, Roberto Sassu
<roberto.sassu@...wei.com>, keyrings@...r.kernel.org,
linux-crypto@...r.kernel.org, linux-security-module@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: Module signing and post-quantum crypto public key algorithms
Premise: this problem can't be ignored. Even if you think Quantum
Computers are BS, various government regulations are pushing all
commercial entities to require PQ signatures, so we have to deal with
it.
On Fri, 2025-06-13 at 16:21 +0100, Ignat Korchagin wrote:
> Hi David,
>
> On Fri, Jun 13, 2025 at 3:54 PM David Howells <dhowells@...hat.com> wrote:
> >
> > Hi,
> >
> > So we need to do something about the impending quantum-related obsolescence of
> > the RSA signatures that we use for module signing, kexec, BPF signing, IMA and
> > a bunch of other things.
>
> Is it that impending? At least for now it seems people are more
> concerned about quantum-safe TLS, so their communications cannot be
> decrypted later. But breaking signatures of open source modules
> probably only makes sense when there is an actual capability to break
> RSA (or ECDSA)
We do not know when Q-day (or Y2Q if you prefer) will strike; "never"
is still a possibility.
But, as a data point, IBM just announced a roadmap for a contraption
with 200 error-corrected logical qubits.
That is substantial progress, so we cannot assume it will never happen;
the risk is too high (it is not me saying this, it is the cryptography
community's consensus).
In terms of impending, what is pressing businesses at this time is the
CNSA 2.0 requirements, which want software and firmware signatures to
transition to PQ algorithms in 2025 (yes, this year), with a complete
phase-out of classic signatures by 2030 (it is an aggressive timeline,
yes).
This is because a lot of the keys are embedded in HW (think Secure
Boot), so you can't wait until *after* a machine that can generate
forged signatures exists to protect your software update process.
A quantum computer capable of breaking RSA == you can load any code
into a kernel that uses RSA/ECC-signed modules.
> We need to consider cases, for example, when a python script calls
> some binaries via system(3) or similar in a tight loop. Yes, with IMA
> we would verify only once, but still there are cases, when software
> updates happen frequently or config management "templates" the
> binaries, so they change all the time.
In general, if you care about performance, what you want to do is
limit the number of signatures you have to check to the bare minimum.
That is why I proposed to David the use of hashes, where you can have a
whole bundle of them covered by a single signature. This is paramount
for something like IMA if you want to make it usable wrt performance.
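
To make the hash-bundle idea concrete, here is a minimal userspace
sketch in Python. The manifest format, the file names and the use of
Ed25519 from the 'cryptography' package are just illustrative
stand-ins for whatever PQ algorithm and on-disk format get picked; the
point is the structure: one signature verification covers the
manifest, and every per-file check afterwards is only a hash
comparison.

import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def build_manifest(blobs):
    """Serialize a "<sha256 hex>  <name>" manifest, one line per entry."""
    lines = [
        f"{hashlib.sha256(data).hexdigest()}  {name}"
        for name, data in sorted(blobs.items())
    ]
    return ("\n".join(lines) + "\n").encode()


# Signer side: hash every module once, sign the manifest once.
modules = {"foo.ko": b"foo module image", "bar.ko": b"bar module image"}
signing_key = Ed25519PrivateKey.generate()
manifest = build_manifest(modules)
manifest_sig = signing_key.sign(manifest)

# Verifier side: a single (expensive, once PQ) signature verification
# covers the whole bundle; raises InvalidSignature on mismatch.
public_key = signing_key.public_key()
public_key.verify(manifest_sig, manifest)

# After that, each module load is only a hash lookup, no per-module
# signature needed.
trusted = {}
for line in manifest.decode().splitlines():
    digest, name = line.split("  ", 1)
    trusted[name] = digest

candidate = b"foo module image"
assert trusted["foo.ko"] == hashlib.sha256(candidate).hexdigest()
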
> > I don't think we can dispense with signature checking entirely, though: we
> > need it for third party module loading, quick single-module driver updates and
> > all the non-module checking stuff. If it were to be done in userspace, this
> > might entail an upcall for each signature we want to check - either that, or
> > the kernel has to run a server process that it can delegate checking to.
>
> Agreed - we should have an in-kernel option
>
> > It's also been suggested that PQ algorithms are really slow. For kernel
> > modules that might not matter too much as we may well not load more than 200
> > or so during boot - but there are other users that may get used more
> > frequently (IMA, for example).
>
> Yep, mentioned above.
Note that PQ algorithms are not all slow, but their signatures are
mostly large, much larger than hashes, which is another reason to move
to storing hashes in the kernel rather than signatures.
Where classic signatures (ECC) are around 64 bytes, the smallest
produced by ML-DSA are 2420 bytes (with a public key of 1312 bytes).
For SLH-DSA the smallest signature is 7856 bytes (but hey! a 32-byte
public key).
The proposed FN-DSA standard has smaller signature sizes (the smallest
in the drafts is ~666 bytes with an 897-byte public key; the numbers
are still subject to change), yet it requires an IEEE-754 FPU to
implement, is a bit crazy, and is generally not recommended for
software signatures.
Some algorithms are also slow (e.g. SLH-DSA with strong parameters),
but ML-DSA is comparable to ECC in performance.
In any case, reducing the number of verification operations is a net
positive for kernel boot/runtime performance, so moving to hash-based
checks instead of full signature verification wherever possible just
makes general engineering sense, even if it can make loading of hash
lists slightly more complicated.
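
As a rough back-of-the-envelope, using the sizes quoted above, the
~200 boot-time modules David mentioned, and SHA-256 purely as an
example hash (all numbers are ballpark):

# Per-module PQ signatures vs. one signed hash list, storage and
# verification cost, for ~200 modules loaded at boot.
N_MODULES = 200
MLDSA44_SIG = 2420   # bytes, smallest ML-DSA signature (ML-DSA-44)
SHA256_HASH = 32     # bytes per SHA-256 digest

per_module = N_MODULES * MLDSA44_SIG                # 200 signature verifications
hash_list = N_MODULES * SHA256_HASH + MLDSA44_SIG   # 1 verification, 200 hash checks

print(f"per-module signatures: {per_module} bytes, {N_MODULES} verify ops")
print(f"signed hash list:      {hash_list} bytes, 1 verify op")
# -> per-module signatures: 484000 bytes, 200 verify ops
# -> signed hash list:      8820 bytes, 1 verify op
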
HTH,
Simo.
--
Simo Sorce
Distinguished Engineer
RHEL Crypto Team
Red Hat, Inc