Message-ID: <01f2f3171dd0f1cd9dcb496ac66bc6903767a2d2.camel@HansenPartnership.com>
Date: Fri, 13 Jun 2025 12:13:43 -0400
From: James Bottomley <James.Bottomley@...senPartnership.com>
To: David Howells <dhowells@...hat.com>, Herbert Xu
<herbert@...dor.apana.org.au>, Stephan Mueller <smueller@...onox.de>, Simo
Sorce <simo@...hat.com>, torvalds@...ux-foundation.org, Paul Moore
<paul@...l-moore.com>
Cc: Lukas Wunner <lukas@...ner.de>, Ignat Korchagin <ignat@...udflare.com>,
Clemens Lang <cllang@...hat.com>, David Bohannon <dbohanno@...hat.com>,
Roberto Sassu <roberto.sassu@...wei.com>, keyrings@...r.kernel.org,
linux-crypto@...r.kernel.org, linux-security-module@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: Module signing and post-quantum crypto public key algorithms
On Fri, 2025-06-13 at 15:54 +0100, David Howells wrote:
> Hi,
>
> So we need to do something about the impending quantum-related
> obsolescence of the RSA signatures that we use for module signing,
> kexec, BPF signing, IMA and a bunch of other things.
Wait, that's not necessarily the whole threat. There are two possible
ways quantum could compromise us. One is a computer that has enough
qubits to run Shor's algorithm and break non-quantum crypto. The
other is a computer that comes along with enough qubits to speed up
brute-force attacks using Grover's algorithm. NIST still believes the
latter will happen way before the former, so our first step should be
doubling the number of security bits in existing algorithms, which
means ECC of at least 512 bits (so curve25519 needs replacing with at
least curve448) and, for all practical purposes, deprecating RSA
(unless someone wants to play with huge keys).
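To illustrate the arithmetic behind that "doubling" advice (my own sketch,
not kernel code): Grover's algorithm gives a quadratic speedup on
brute-force search, so n bits of classical security shrink to roughly n/2
against a Grover-capable attacker, which is why you double the curve size
to stay where you were.

```python
def post_grover_bits(classical_bits: int) -> int:
    """Effective security bits against a Grover-capable brute-force attack:
    the quadratic speedup halves the exponent."""
    return classical_bits // 2

# curve25519 targets ~128 bits of classical security; curve448 ~224.
for name, bits in [("curve25519", 128), ("curve448", 224)]:
    print(f"{name}: {bits} classical -> ~{post_grover_bits(bits)} post-Grover")
```

So curve25519 drops to ~64 effective bits, below any acceptable margin,
while curve448 retains ~112, comparable to classical 2048-bit RSA.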
> From my point of view, the simplest way would be to implement key
> verification in the kernel for one (or more) of the available post-
> quantum algorithms (of which there are at least three), driving this
> with appropriate changes to the X.509 certificate to indicate that's
> what we want to use.
Can you at least enumerate them? There's still a dispute going on
about whether we should use pure post-quantum or hybrid. I myself
tend to think hybrid is best for durable things like digital
signatures, but, given the NIST advice, we should be using >512-bit
curves for that.
> The good news is that Stephan Mueller has an implementation that
> includes kernel bits that we can use, or, at least, adapt:
>
> https://github.com/smuellerDD/leancrypto/
So the only hybrid scheme in there is dilithium+25519, which doesn't
quite fit the bill (although I'm assuming dilithium+448 could easily
be implemented).
>
> Note that we only need the signature verification bits. One
> question, though: he's done it as a standalone "leancrypto" module,
> not integrated into crypto/, but should it be integrated into crypto/
> or is the standalone fine?
>
> The not so good news, as I understand it, though, is that the X.509
> bits are not yet standardised.
>
>
> However! Not everyone agrees with this. An alternative proposal
> would rather get the signature verification code out of the kernel
> entirely. Simo Sorce's proposal, for example, AIUI, is to compile
> all the hashes we need into the kernel at build time, possibly with a
> hashed hash list to be loaded later to reduce the amount of
> uncompressible code in the kernel. If signatures are needed at all,
> then this should be offloaded to a userspace program (which would
> also have to be hashed and marked unptraceable and I think
> unswappable) to do the checking.
>
> I don't think we can dispense with signature checking entirely,
> though: we need it for third party module loading, quick single-
> module driver updates and all the non-module checking stuff. If it
> were to be done in userspace, this might entail an upcall for each
> signature we want to check - either that, or the kernel has to run a
> server process that it can delegate checking to.
I agree we can't predict everything at build time, so we need a runtime
scheme (like signatures) as well. However, I'm not convinced it should
be run outside the kernel. The expansion of the TCB, plus the amount of
checking the kernel has to do to make sure the upcall is secure, adds
more attack surface than an in-kernel implementation, where everything
just works.
> It's also been suggested that PQ algorithms are really slow. For
> kernel modules that might not matter too much as we may well not load
> more than 200 or so during boot - but there are other users that may
> get used more frequently (IMA, for example).
If we go with a hybrid signature scheme, we can start off with only
verifying the pre-quantum signature and have a switch to verify both.
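A minimal sketch of that switchable hybrid check (names and the stub
verifiers are purely illustrative, not any real kernel or library API):
always check the classical signature, and check the post-quantum one as
well only once the policy switch is flipped.

```python
import hashlib
from collections import namedtuple

# A hybrid signature carries both a classical and a post-quantum component.
HybridSig = namedtuple("HybridSig", ["classical", "pq"])

def verify_hybrid(verify_classical, verify_pq, data, sig, *, require_pq=False):
    """verify_classical/verify_pq are callables returning bool; in a real
    implementation they would be e.g. an ECDSA/curve448 verifier and an
    ML-DSA (Dilithium) verifier."""
    if not verify_classical(data, sig.classical):
        return False
    if require_pq and not verify_pq(data, sig.pq):
        return False
    return True

# Stub "verifiers" for demonstration only: they just recompute a tagged hash.
stub_classical = lambda data, s: s == hashlib.sha256(b"c" + data).digest()
stub_pq = lambda data, s: s == hashlib.sha256(b"q" + data).digest()

data = b"module image"
sig = HybridSig(hashlib.sha256(b"c" + data).digest(),
                hashlib.sha256(b"q" + data).digest())
print(verify_hybrid(stub_classical, stub_pq, data, sig))                  # True
print(verify_hybrid(stub_classical, stub_pq, data, sig, require_pq=True))  # True
```

The point of the switch is that a corrupted PQ component is ignored until
require_pq is enabled, so the slow path costs nothing during the
transition period.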
> Now, there's also a possible hybrid approach, if I understand Roberto
> Sassu's proposal correctly, whereby it caches bundles of hashes
> obtained from, say, the hashes included in an RPM. These bundles of
> hashes can be checked by signature generated by the package signing
> process. This would reduce the PQ overhead to checking a bundle and
> would also make IMA's measuring easier as the hashes can be added in
> the right order, rather than being dependent on the order that the
> binaries are used.
I think you're referring to the IMA digest list extension proposal:
https://github.com/initlove/linux/wiki/IMA-Digest-Lists-Extension
I'm not sure it's been progressed much.
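For reference, the amortisation that makes the bundle idea attractive can
be sketched like this (function names are illustrative and the expensive
PQ signature check is stubbed out): one signature verification covers the
whole hash list, after which each binary costs only a hash and a set
lookup.

```python
import hashlib

def load_bundle(bundle_bytes, verify_sig):
    """Verify one signature over the whole bundle (e.g. the hashes shipped
    in an RPM), then index its hashes for O(1) lookup."""
    if not verify_sig(bundle_bytes):
        raise ValueError("bad bundle signature")
    return set(bundle_bytes.split())

def appraise(file_bytes, trusted_hashes):
    """Per-file check: hash the contents and test membership."""
    return hashlib.sha256(file_bytes).hexdigest().encode() in trusted_hashes

files = [b"modA", b"modB"]
bundle = b"\n".join(hashlib.sha256(f).hexdigest().encode() for f in files)
trusted = load_bundle(bundle, verify_sig=lambda b: True)  # stub for the PQ check
print(all(appraise(f, trusted) for f in files))  # True
print(appraise(b"rogue", trusted))               # False
```

So the (possibly slow) post-quantum verification happens once per package
rather than once per binary, which is the overhead reduction described
above.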
Regards,
James