Message-ID: <87sewmzvn1.fsf@oldenburg.str.redhat.com>
Date: Sat, 06 Jul 2024 12:01:54 +0200
From: Florian Weimer <fweimer@...hat.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>, "Jason A. Donenfeld"
<Jason@...c4.com>, jolsa@...nel.org, mhiramat@...nel.org,
cgzones@...glemail.com, brauner@...nel.org,
linux-kernel@...r.kernel.org, arnd@...db.de, Adhemerval Zanella Netto
<adhemerval.zanella@...aro.org>, Zack Weinberg <zack@...folio.org>,
Cristian Rodríguez <cristian@...riguez.im>, Wilco Dijkstra
<Wilco.Dijkstra@....com>
Subject: Re: deconflicting new syscall numbers for 6.11

* Mathieu Desnoyers:

> From an absolutely-not-security-expert perspective, here is how I see
> the desiderata breakdown:
>
> - There appears to be a need to make sure the random seed is not exposed
> across fork, core dump and other similar scenarios. This can be
> achieved by simply letting userspace use the appropriate madvise(2)
> advices on a memory mapping created through mmap(2). I don't see why
> there would be any need to create any RNG-centric ABI for this. If
> new madvise(2) advices are needed, they can simply be added there.
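
For concreteness, I read "the appropriate madvise(2) advices" as
MADV_WIPEONFORK plus MADV_DONTDUMP. A minimal sketch of such a mapping,
purely as an illustration and with error handling reduced to abort,
could look like this:

    #include <stdlib.h>
    #include <sys/mman.h>

    /* Illustrative helper: map anonymous memory for RNG state that is
       zeroed in the child after fork(2) and excluded from core dumps.  */
    static void *
    map_rng_state (size_t size)
    {
      void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED)
        abort ();
      if (madvise (p, size, MADV_WIPEONFORK) != 0)  /* wiped across fork */
        abort ();
      if (madvise (p, size, MADV_DONTDUMP) != 0)    /* kept out of core dumps */
        abort ();
      return p;
    }

That covers the fork and core dump cases; it does nothing for VM-level
forks.
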
I don't think there's consensus about protecting coredumps and VM-level
forks (migration where multiple clones continue executing).
Personally, I'm not convinced either that it's sufficient to protect
just the RNG from VM-level forks if nonce-reliant ciphers are involved.
How these ciphers are used needs careful consideration, and I'm not
sure that VM-level fork protection for the RNG itself is even a critical
part of that. (The ciphers are still deterministic, and the forks will
compute the same result if the operations are ordered correctly,
resulting in no information leak. Anyway, I don't understand why
cryptographers prefer algorithms where nonces are so critical to avoid
long-term key leaks.)

> - There appears to be interest in having a RNG faster than a system call
> for various reasons I'm not familiar with. A vDSO appears to be one
> way to do this. Another way would be to let userspace implement it
> all, which raises the following question: what is the minimal state
> known only by the kernel currently unknown from userspace ? This
> brings the following point.

The history here is that we had a reasonably fast userspace
implementation that could deal with the process fork case (which is
considerably easier to handle within glibc). It could not deal with
VM-level forks. The goal was to provide something that is unpredictable
in practice and about as fast as random() (or even rand()), so that
programmers could just use arc4random() whenever they do not need a
reproducible sequence, without having to worry about performance. We
removed this implementation from glibc and replaced it with something
that makes a system call on every arc4random call. The promise at the
time was that we'd soon get a vDSO call to accelerate this, without the
need for some sort of stream cipher in glibc. That hasn't happened so
far.
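
To make the cost concrete, what we ship today amounts to roughly the
following (a simplified sketch, not the actual glibc code, and
my_arc4random is just a placeholder name):

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/random.h>

    /* Roughly the shape of "a system call on every arc4random call".  */
    static uint32_t
    my_arc4random (void)
    {
      uint32_t r;
      /* A 4-byte getrandom request should not return short once the
         pool is initialized; treat any failure as fatal in this sketch.  */
      if (getrandom (&r, sizeof r, 0) != sizeof r)
        abort ();
      return r;
    }

Every call pays a full system call round trip.
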
Meanwhile, it's been reported that if chrony uses arc4random from glibc,
NTP server performance drops by 25%:

  Bug 29437 - arc4random is too slow
  <https://sourceware.org/bugzilla/show_bug.cgi?id=29437>

Obviously, we need to fix this eventually.

The arc4random implementation in glibc was never intended to displace
randomness generation for cryptographic purposes. And it doesn't have
to: none of the major cryptographic libraries will give up their RNG in
favor of glibc's, so if you are doing cryptography, you already have an
RNG recommended by the cryptographers that is ready to use. The
arc4random implementation had a different use case, replacing random()
and rand() calls, but it was somehow repurposed.
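
To illustrate that use case: the intended call sites are things like the
usual rand() % n idiom, where arc4random_uniform(n) is a seedless
drop-in that also avoids the modulo bias. A trivial, made-up example:

    #include <stdio.h>
    #include <stdlib.h>

    int
    main (void)
    {
      /* Previously: srand (time (NULL)); int d = rand () % 6 + 1;  */
      unsigned int d = arc4random_uniform (6) + 1;  /* fair die roll */
      printf ("rolled %u\n", d);
      return 0;
    }
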
Thanks,
Florian