Date:   Thu, 31 Mar 2022 14:21:38 -0400
From:   "Theodore Ts'o" <tytso@....edu>
To:     Michael Brooks <m@...etwater.ai>
Cc:     "Jason A. Donenfeld" <Jason@...c4.com>,
        linux-kernel@...r.kernel.org, linux-crypto@...r.kernel.org,
        Dominik Brodowski <linux@...inikbrodowski.net>
Subject: Re: [PATCH v2] random: mix build-time latent entropy into pool at
 init

On Thu, Mar 31, 2022 at 09:02:27AM -0700, Michael Brooks wrote:
> mix_pool_bytes() has numerous problems, as discussed in prior emails.
> Do we still want to be putting so much effort into a development dead
> end?

Michael, with respect, there were a number of things in your analysis
which simply didn't make any sense.  Discussing it on an e-mail thread
relating to stable backports wasn't the right place, so I didn't extend
the discussion there.

You believe that mix_pool_bytes() has numerous problems.  That's not
the same thing as it having problems.

And making incremental changes, with code review, is a much better
approach than just doing a rip-and-replace with something else
--- which might have different, even more exciting problems.

Something for you to consider, since your comments seem to indicate
that you are not familiar with the full random driver design.  There
are two halves to how the random driver works.  The first half is the
collection of entropy, and the primary way this is accomplished is by
taking timestamps of various events that an external attacker
hopefully won't have access to.  For example, keystrokes from the
user, mouse motion events, network and disk interrupts, etc.  Where
possible, we don't just use jiffies, but we also use high precision
counters, such as the CPU cycle counter.  The idea here is that even
if the external interrupt sources can be seen by an attacker, the
exact time at which the interrupt is serviced, as measured by a high
precision cycle counter (for example), is not going to be as easily
guessed.  That being said, we only get a tiny amount of entropy (by
which I mean uncertainty from the attacker's point of view) out of
each event.  This is why it is important to distill it in an input
pool, so that as we add more and more unpredictable inputs into the
pool, it becomes less and less tractable for the attacker to make
educated guesses about what is in the pool.
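
To make the shape of that concrete, here is a toy sketch (emphatically
not the driver code; the mix_in() step below is a placeholder with no
cryptographic value, and credit_event() and its callers are invented
purely for illustration):

#include <stdint.h>
#include <time.h>

#define POOL_WORDS 16

static uint64_t pool[POOL_WORDS];
static unsigned int pool_pos;

/* Stand-in for a proper cryptographic mixing primitive. */
static void mix_in(uint64_t word)
{
        pool[pool_pos] ^= (word << 13) | (word >> 51);   /* rotate-xor */
        pool[pool_pos] += 0x9e3779b97f4a7c15ULL;         /* odd constant */
        pool_pos = (pool_pos + 1) % POOL_WORDS;
}

/* Imaginary event handler hook: fold in a coarse jiffies-like count,
 * a fine-grained timestamp, and a code identifying the event source. */
void credit_event(uint64_t jiffies_now, uint64_t event_code)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);             /* fine-grained */
        mix_in(jiffies_now);
        mix_in((uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec);
        mix_in(event_code);
}

The only point is that each event contributes timestamps at two very
different granularities, and everything lands in one shared pool.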

Then periodically (and doing this periodically is important, because
we want to wait until we have accumulated a large amount of
uncertainty with respect to the attacker in the pool) we extract from
the input pool and use that to reseed the second part of the random
driver, which used to be called the "output pool".
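
Schematically (again only a sketch; entropy_estimate,
extract_from_input() and rekey_output_generator() are made-up names
for bookkeeping that the real driver organizes differently):

#include <stdint.h>
#include <stddef.h>

#define RESEED_THRESHOLD_BITS 256

extern unsigned int entropy_estimate;  /* credited bits, tracked elsewhere */
extern void extract_from_input(uint8_t *seed, size_t len);
extern void rekey_output_generator(const uint8_t *seed, size_t len);

void maybe_reseed(void)
{
        uint8_t seed[32];

        /* Keep accumulating; dribbling out partial reseeds would let
         * an attacker brute-force each small increment separately. */
        if (entropy_estimate < RESEED_THRESHOLD_BITS)
                return;

        extract_from_input(seed, sizeof(seed));    /* one big reseed */
        rekey_output_generator(seed, sizeof(seed));
        entropy_estimate = 0;
}

Waiting for the threshold is what makes the reseed "catastrophic" for
an attacker who had been tracking the old state.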

It used to be that both the input pool and output pool were literally
bit pools that were mixed using an LFSR scheme, and then extracted
using a cryptographic hash.
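
Very roughly, that older arrangement looked like the following
(illustrative only; the tap offsets, the twist, and the crypto_hash()
helper are stand-ins, not the historical code):

#include <stdint.h>
#include <stddef.h>

#define POOL_WORDS 32

static uint32_t pool[POOL_WORDS];
static unsigned int add_ptr;

/* LFSR-style feedback: XOR the new word with a few "tap" words plus a
 * cheap twist, so every input bit spreads across the whole pool. */
void lfsr_mix(uint32_t in)
{
        uint32_t w = in ^ pool[add_ptr]
                        ^ pool[(add_ptr +  7) % POOL_WORDS]
                        ^ pool[(add_ptr + 19) % POOL_WORDS]
                        ^ pool[(add_ptr + 26) % POOL_WORDS];

        pool[add_ptr] = (w >> 3) ^ (w << 29);
        add_ptr = (add_ptr + 1) % POOL_WORDS;
}

/* Assumed helper: any cryptographic hash with a >= 20-byte digest. */
extern void crypto_hash(const void *data, size_t len, uint8_t digest[20]);

/* Readers never touch the pool directly; output goes through the
 * hash.  A real extractor also folds the digest back into the pool. */
void extract_bytes(uint8_t out[16])
{
        uint8_t digest[20];
        size_t i;

        crypto_hash(pool, sizeof(pool), digest);
        for (i = 0; i < 16; i++)
                out[i] = digest[i];
}

The mixing itself was not cryptographic; only the extraction step was.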

The output pool is now a ChaCha-based CRNG, and most recently the
"input pool" is accumulating entropy using a BLAKE2 hash.  So in many
ways, the term "input pool" is a bit of a misnomer now, and perhaps
should be renamed.
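
In today's shape, schematically (hypothetical helper names; the real
primitives are the in-kernel BLAKE2 and ChaCha20 implementations,
wired up rather differently):

#include <stdint.h>
#include <stddef.h>

struct hash_state;                  /* opaque running BLAKE2-like state */
extern void hash_update(struct hash_state *s, const void *in, size_t len);
extern void hash_final(struct hash_state *s, uint8_t out[32]);
extern void chacha_expand(const uint8_t key[32], uint8_t *out, size_t len);

struct rng {
        struct hash_state *input;   /* "input pool": really a hash state */
        uint8_t chacha_key[32];     /* key of the output CRNG */
};

/* Entropy collection: absorb event data into the running hash. */
void rng_add_event(struct rng *r, const void *data, size_t len)
{
        hash_update(r->input, data, len);
}

/* Reseed: squeeze out a fresh 256-bit key for the output generator. */
void rng_reseed(struct rng *r)
{
        hash_final(r->input, r->chacha_key);
}

/* Output: stream-expand the current key on demand. */
void rng_read(struct rng *r, uint8_t *buf, size_t len)
{
        chacha_expand(r->chacha_key, buf, len);
}

The property that matters is unchanged: readers only ever see output
derived from the key, never the accumulating state itself.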

For more information, I direct you to the Yarrow paper[1].  The basic
idea of using two pools coupled with a catastrophic reseed was
shamelessly stolen from Bruce Schneier's work.

[1] https://www.schneier.com/wp-content/uploads/2016/02/paper-yarrow.pdf

Are there reasons why we didn't just implement Yarrow?  Because
/dev/random predates Yarrow, and we made incremental changes to adopt
("steal") good ideas from other sources, which hopefully don't
invalidate previous analysis and reviews of /dev/random.  Please note
that there are a number of academic researchers who have published
peer-reviewed analyses of /dev/random, and that is incredibly useful.

We've made changes over time to improve /dev/random and to address
various theoretical weaknesses noted by these academic reviewers.  So
when you claim that there are "numerous problems" with the input pool,
I'll have to note that /dev/random has undergone reviews by
cryptographers, and they have not identified the problems that you
claim are there.

Regards,

						- Ted
