Message-ID: <YmlMGx6+uigkGiZ0@zx2c4.com>
Date:   Wed, 27 Apr 2022 15:58:51 +0200
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     nadiah@...ucsd.edu, noahsd@...il.com, dodis@...nyu.edu,
        tessaro@...washington.edu, torvalds@...ux-foundation.org,
        djb@...yp.to, tytso@....edu, jeanphilippe.aumasson@...il.com,
        jann@...jh.net, keescook@...omium.org, gregkh@...uxfoundation.org,
        peter@...ptojedi.org
Cc:     linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: is "premature next" a real world rng concern, or just an academic
 exercise?

Hi folks,

The Linux kernel RNG currently pretends to care about the "premature
next" RNG threat model. I'm wondering whether this is sensible and
corresponds to anything real.

"Premature next" is the scenario in which:
- Attacker compromises the current state of a fully initialized RNG with
  a wild 'n crazy kernel infoleak.
- New bits of entropy are added directly to the key used to generate the
  /dev/urandom stream, without any buffering or pooling.
- Attacker then, somehow having read access to /dev/urandom, samples RNG
  output and brute forces the individual new bits that were added.
- Result: the RNG never "recovers" from the initial compromise, a
  so-called violation of what academics term "post-compromise security".

(Note that this is a different scenario from "premature first", which
relates to boot-time concerns; this email isn't about "premature
first".)

There are other variations on this scenario, usually involving some
combination of:
a) Attacker has access to /dev/urandom output continuously or at some
   interesting interval.
b) Attacker controls one or more entropy sources, providing some
   varying-sized subset of those new bits of entropy.

The Linux kernel currently pretends to mitigate this, for scenario (a) at
least, using "entropy estimation". The idea is that it waits until 256
estimated "bits" of new entropy are pooled before mixing them into the
key used to generate the /dev/urandom stream. Never mind the fact that
entropy estimation is an impossible proposition and thus flawed; it
certainly does nothing in the way of (b), since there's just one pool.
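
As a very rough sketch of that idea -- the names, structure, and mixing
below are illustrative toys, not the kernel's actual code -- inputs get
credited with an estimated number of bits, and the output key is only
reseeded from the single pool once the running estimate crosses the
256-bit threshold:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define RESEED_THRESHOLD_BITS 256

struct toy_pool {
	uint64_t state;              /* stand-in for a real hash/pool state */
	unsigned int estimated_bits; /* running entropy estimate */
	uint64_t key;                /* stand-in for the /dev/urandom key */
};

/* Absorb input and credit it with an *estimated* number of entropy bits. */
static void toy_mix_into_pool(struct toy_pool *p, const void *in, size_t len,
			      unsigned int est_bits)
{
	const unsigned char *b = in;

	for (size_t i = 0; i < len; ++i)
		p->state = (p->state ^ b[i]) * 0x100000001b3ULL; /* toy FNV-style mix */
	p->estimated_bits += est_bits;

	/* Only reseed the output key once "enough" entropy is believed pooled. */
	if (p->estimated_bits >= RESEED_THRESHOLD_BITS) {
		p->key ^= p->state;
		p->estimated_bits = 0;
		printf("reseeded output key\n");
	}
}

int main(void)
{
	struct toy_pool p = { .state = 0xcbf29ce484222325ULL };
	unsigned char sample[16] = { 0 };

	/* Eight inputs credited at 32 estimated bits each trip the threshold once. */
	for (int i = 0; i < 8; ++i) {
		sample[0] = (unsigned char)i;
		toy_mix_into_pool(&p, sample, sizeof(sample), 32);
	}
	return 0;
}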

The NT kernel is a bit more robust, by way of its Fortuna RNG, in
which there are several pools, and entropy sources round-robin into
those pools. When it's time to reseed, the first pool is used every
time, the second pool is used every other time, the third pool is used
every fourth time, the fourth pool is used every eighth time, and so
on, with pool i drawn from on every 2^i-th reseed. In theory this
should handle both (a) and (b) without needing entropy estimation, and
this sort of scheduler prompts interesting questions for academics with
regard to different types of scheduling (a random walk instead of
round-robin? sounds like a paper topic.) and trying to characterize
the rate of inputs (continuous? sporadic? modelable?).
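
Here's a minimal sketch of that pool schedule -- my reading of the
Fortuna design, not Windows' actual implementation -- where reseed
number r draws from pool i whenever 2^i divides r, so higher-numbered
pools accumulate input for longer before being consumed. The snippet
just prints which pools feed the first few reseeds:

#include <stdint.h>
#include <stdio.h>

#define NUM_POOLS 32

int main(void)
{
	for (uint64_t reseed = 1; reseed <= 8; ++reseed) {
		printf("reseed %llu uses pools:", (unsigned long long)reseed);
		for (unsigned int i = 0; i < NUM_POOLS; ++i) {
			/* Pool i participates when 2^i divides the reseed count. */
			if (reseed % (1ULL << i) != 0)
				break; /* if 2^i doesn't divide r, no higher power does */
			printf(" %u", i);
		}
		printf("\n");
	}
	return 0;
}

(The round-robin of source events into the pools isn't shown; the part
that matters for (a) and (b) is which pools get drained when.)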

While the above "problem" maps pretty clearly to things academics are
interested in -- post-compromise security for a system with a clear
model and various creative solutions -- I'm wondering whether any of
this matters in the real world. From conversations over the last several
months with various security experts and cryptographers, including those
who work on the "premature next" problem, the impression I get is that
nobody actually thinks this matters back on planet Earth, not even the
people who write papers on it.

So the purpose of this email is to solicit feedback on whether anybody
can think of a plausible scenario in which it does matter. If it does
matter, the next step will be to determine how much it matters exactly,
in order for me to gauge the cost-benefit ratio of mitigating the issue
more robustly in the kernel (e.g. Fortuna requires non-zero code
complexity; does the benefit outweigh the cost of such complexity?). On
the other hand, if nobody can think of any reason why this matters, then
there are some nice improvements that I'm eager to make in a different
direction.

To review, this attack model concerns:
- An attacker who compromises the RNG at one point in time via a kernel
  infoleak.
- After that infoleak, the attacker somehow no longer has access to the
  system, but can prevent the RNG from recovering from the compromise by
  having pretty rapid access to /dev/urandom (and possibly also having
  compromised zero or more entropy sources).

The questions are thus:

1) When does an attacker with a kernel infoleak exercise it just once,
   and then attempt to maintain the leak with some sort of network access
   to lots of /dev/urandom output (from, e.g., large nonces)?

2) Or, if it's a local user attacker, when does that attacker exploit the
   infoleak once, but rather than just running the exploit again, cat
   /dev/urandom continuously?

3) More broadly speaking, what kernel infoleak is actually acceptable to
   the degree that anybody would feel okay in the first place about the
   system continuing to run after it's been compromised?

Looking forward to hearing opinions on this. There's certainly a lot to
split hairs about above -- incomplete/inaccurate description of the
"premature next" model, what Fortuna actually achieves, my entropy
estimation remark, and so forth -- but hopefully this at least throws
enough things at the board to begin the discussion.

Is "premature next" just an academic exercise, rather than a real world
RNG concern?

Regards,
Jason
