[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20140203013922.GB6264@thunk.org>
Date: Sun, 2 Feb 2014 20:39:22 -0500
From: Theodore Ts'o <tytso@....edu>
To: Stephan Mueller <smueller@...onox.de>
Cc: Jörn Engel <joern@...fs.org>,
"H. Peter Anvin" <hpa@...or.com>,
Linux Kernel Developers List <linux-kernel@...r.kernel.org>,
macro@...ux-mips.org, ralf@...ux-mips.org, dave.taht@...il.com,
blogic@...nwrt.org, andrewmcgr@...il.com, geert@...ux-m68k.org,
tg@...bsd.de
Subject: Re: [PATCH,RFC] random: collect cpu randomness
On Sun, Feb 02, 2014 at 10:25:31PM +0100, Stephan Mueller wrote:
> Second, when I offered my initial patch which independently collects some
> entropy on the CPU execution timing, I got shot down with one concern raised
> by Ted, and that was about whether a user can influence the entropy collection
> process.
Um, that wasn't my concern. After all, when we sample keyboard timing
while trying to generate a GPG key, of course the user can and does
influence the entropy collection process.
The question is whether an attacker who has deep knowledge of the how
the CPU works internally, perhaps made worse with quantization effects
(i.e., it doesn't matter if analog-generated settling time is measured
in microseconds if the output is being clocked out in milliseconds),
such that it is predictable.
I really like Jörn's tests doing repeated boot testing and observing
on a SMP system, the slab allocation pattern is quite deterministic.
So even though the numbers might *look* random, an attacker with deep
knowledge of how the kernel was compiled and what memory allocations
get done during the boot sequence would be able to quite successfuly
measure it.
I'm guessing that indeed, on a 4-CPU KVM system, what you're measuring
is the when the host OS happens to be scheduling the KVM threads, with
some variability caused by external networking interrupts, etc. It
would definitely be a good idea to retry that experiment on a real
4-CPU system to see what sort of results you might get. It might very
well be that the attacker who knows the relative ordering of the
slab/thread activations but for which it's not entirely clear whether
one cpu will be ahead of another, that there is *some* entropy, but
perhaps only a handful bits. It's the fact that we can't be sure how
much uncertainty there might be with an attacker with very deep
knowledge the CPU which is why Jörn's conservatism of not crediting
the entropy counter is quite understandable.
Of course, this doesn't help someone who is trying to speed up the
time it takes GPG to generate a new key pair. But in terms of
improving /dev/urandom as it is used by many crypto applications, it
certainly can't hurt.
The real question is how much overhead does it add, and is it worth
it. Jörn, I take it that was the reason for creating an even faster,
but weaker mixing function? Was the existing "fast mix" causing a
measurable overhead, or was this your just being really paranoid about
not adding anything to the various kernel fastpaths?
- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists