Message-ID: <cover.1387067223.git.price@mit.edu>
Date: Sat, 14 Dec 2013 21:00:43 -0500
From: Greg Price <price@....EDU>
To: "Theodore Ts'o" <tytso@....edu>
Cc: linux-kernel@...r.kernel.org, "H. Peter Anvin" <hpa@...or.com>
Subject: [PATCH 00/14] random: rework reseeding
Hi Ted, hi all,
This series reworks the way we handle reseeding the nonblocking pool,
which supplies /dev/urandom and the kernel's internal randomness
needs. The most important change is to make sure that the input
entropy always comes in large chunks, what we've called a
"catastrophic reseed", rather than a few bits at a time with the
possibility of producing output after every few bits. In the latter
case, an attacker who can see the output (e.g. by watching us use it,
or by constantly reading /dev/urandom) can brute-force the few bits of
entropy added before each output in turn.
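To put rough numbers on that risk, here's a back-of-the-envelope
comparison (not kernel code; the 8-bit chunk size is just an
illustrative assumption, not a number from the driver):

/*
 * Work for an attacker who already knows the pool state to track it
 * across 128 bits of fresh entropy, in the two regimes.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
	double total_bits = 128.0;
	double chunk_bits = 8.0;  /* entropy mixed in between visible outputs */

	/* Trickle reseeds: brute-force each small chunk in turn. */
	double trickle = (total_bits / chunk_bits) * pow(2.0, chunk_bits);

	printf("trickle reseeds: ~%.0f guesses total\n", trickle); /* 4096 */
	printf("catastrophic:    ~2^%.0f guesses\n", total_bits);  /* 2^128 */
	return 0;
}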
Patches 1-9 prepare us to do this while keeping the benefit of 3.13's
advances in getting entropy into the nonblocking pool quickly at boot,
by making several changes to the workings of xfer_secondary_pool() and
account(). Then patch 10 accomplishes the goal by sending all routine
input through the input pool, so that our normal mechanisms for
catastrophic reseed always apply.
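As a simplified sketch of the shape this gives us (the names here are
made up for illustration; the real logic lives in
xfer_secondary_pool() and account()):

#define MIN_RESEED_BITS	128	/* default minimum catastrophic reseed */

struct pool {
	int entropy_bits;
	/* pool contents elided */
};

/*
 * Sketch only: the nonblocking pool pulls a full reseed's worth from
 * the input pool at once, or nothing at all; routine input is never
 * trickled across a few bits at a time.
 */
static void maybe_catastrophic_reseed(struct pool *nonblocking,
				      struct pool *input)
{
	if (input->entropy_bits < MIN_RESEED_BITS)
		return;	/* hold out until a full reseed is available */

	/* extraction and mixing elided; only the accounting is shown */
	input->entropy_bits -= MIN_RESEED_BITS;
	nonblocking->entropy_bits += MIN_RESEED_BITS;
}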
Patches 11-13 change the accounting for the 'initialized' flag to
match, so that it gives credit only for a single large reseed (of
128 bits, by default), rather than many reseeds adding up to 129 bits.
This is the flag that governs when we stop warning about insufficient
entropy, start allowing /dev/random to consume entropy, and so on.
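In sketch form (again with made-up names), the accounting moves from a
cumulative sum to a per-transfer test:

#include <stdbool.h>

#define MIN_RESEED_BITS	128

static bool initialized;

/*
 * Sketch only.  Previously the flag was set once credits summed past
 * the threshold, however small each one was; after patches 11-13 a
 * single transfer must carry the whole minimum reseed by itself,
 * since only that guarantees a catastrophic reseed happened.
 */
static void credit_initialization(int transfer_bits)
{
	if (transfer_bits >= MIN_RESEED_BITS)
		initialized = true;
}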
Patch 14 adds an extra stage after setting 'initialized', where we go
for still larger reseeds, of up to 512 bits of estimated entropy by
default. This isn't integral to achieving catastrophic reseeds, but
it serves as a hedge against situations where our entropy estimates
are too high.
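For instance, if our estimates were optimistic by a factor of four, a
nominal 512-bit reseed would still carry roughly 128 bits of real
entropy, a full catastrophic reseed.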
After the whole series, our behavior at boot is to seed with whatever
we have when first asked for random bytes, then hold out for seeds of
doubling size until we reach the target (by default 512 bits
estimated). Until we first reach the minimum reseed size (128 bits by
default), all
input collected is exclusively for the nonblocking pool and
/dev/random readers must wait.
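In (simplified, hypothetical) code, the resulting schedule looks
something like:

#define MIN_RESEED_BITS		128	/* default */
#define TARGET_RESEED_BITS	512	/* default */

static int nonblocking_seeded;	/* nonzero after the first reseed */

/*
 * Sketch only: how many bits to hold out for before the next reseed
 * of the nonblocking pool.  Until the first MIN_RESEED_BITS reseed,
 * all collected input is reserved for the nonblocking pool and
 * /dev/random readers get nothing.
 */
static int next_reseed_target(int last_reseed_bits)
{
	if (!nonblocking_seeded)
		return 0;			/* first ask: take whatever we have */
	if (last_reseed_bits < MIN_RESEED_BITS)
		return MIN_RESEED_BITS;
	if (2 * last_reseed_bits < TARGET_RESEED_BITS)
		return 2 * last_reseed_bits;	/* seeds of doubling size */
	return TARGET_RESEED_BITS;
}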
Cheers,
Greg

Greg Price (14):
  random: fix signedness bug
  random: fix a (harmless) overflow
  random: reserve for /dev/random only once /dev/urandom seeded
  random: accept small seeds early on
  random: move transfer accounting into account() helper
  random: separate quantity of bytes extracted and entropy to credit
  random: exploit any extra entropy too when reseeding
  random: rate-limit reseeding only after properly seeded
  random: reserve entropy for nonblocking pool early on
  random: direct all routine input via input pool
  random: separate entropy since auto-push from entropy_total
  random: separate minimum reseed size from minimum /dev/random read
  random: count only catastrophic reseeds for initialization
  random: target giant reseeds, to be conservative
 drivers/char/random.c         | 198 ++++++++++++++++++++++++++++--------------
 include/trace/events/random.h |  27 +++---
 2 files changed, 150 insertions(+), 75 deletions(-)
--
1.8.3.2