Message-ID: <YmG8k1JrVexBGmJL@zx2c4.com>
Date:   Thu, 21 Apr 2022 22:20:35 +0200
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Eric Biggers <ebiggers@...nel.org>
Cc:     Theodore Ts'o <tytso@....edu>, linux-kernel@...r.kernel.org,
        linux-crypto@...r.kernel.org, Andy Polyakov <appro@...ptogams.org>
Subject: Re: [PATCH] random: avoid mis-detecting a slow counter as a cycle
 counter

Hi Eric,

On Thu, Apr 21, 2022 at 9:30 PM Eric Biggers <ebiggers@...nel.org> wrote:
> The method that try_to_generate_entropy() uses to detect a cycle counter
> is to check whether two calls to random_get_entropy() return different
> values.  This is uncomfortably prone to false positives if
> random_get_entropy() is a slow counter, as the two calls could return
> different values if the counter happens to be on the cusp of a change.
> Making things worse, the task can be preempted between the calls.
>
> This is problematic because try_to_generate_entropy() doesn't do any
> real entropy estimation later; it always credits 1 bit per loop
> iteration.  To avoid crediting garbage, it relies entirely on the
> preceding check for whether a cycle counter is present.
>
> Therefore, increase the number of counter comparisons from 1 to 3, to
> greatly reduce the rate of false positive cycle counter detections.

Thanks for the patch. It seems like this is at least no worse than
before. But before I commit it and we forget about the problem for a
while, I was wondering whether we can do much, much better, and actually
make this "work" with slow counters. Right now, the core algorithm is:

    while (!crng_ready()) {
        if (no timer) mod_timer(jiffies + 1);
        mix(sample);
        schedule();    // <---- lets the timer fire, which does credit_entropy_bits(1)
        sample = rdtsc;
    }
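
(For reference, the check your patch strengthens sits just before that
loop; glossing over the actual code and your actual diff, it's
conceptually something like the below, with the change being to require
the counter to move across several back-to-back reads rather than one.
The function name here is just for illustration:)

    /* Conceptual sketch only, not the real driver code or the real diff. */
    static bool looks_like_cycle_counter(void)
    {
        unsigned long prev = random_get_entropy();
        int i;

        /* Require a change on each of several reads, not just one,
         * so a slow counter caught mid-transition doesn't fool us. */
        for (i = 0; i < 3; ++i) {
            unsigned long cur = random_get_entropy();

            if (cur == prev)
                return false;
            prev = cur;
        }
        return true;
    }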

So we credit 1 bit every time that timer fires. What if the timer
instead did this:

    static void entropy_timer(struct timer_list *t)
    {
        struct timer_state *s = container_of(...t...);
        if (++s->samples == s->samples_per_bit) {
            credit_entropy_bits(1);
            s->samples = 0;
        }
    }
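
For concreteness, the state that would hang off of, and its setup, might
look something like this (every name below is a placeholder, nothing that
exists today):

    /* Placeholder state for the sketch above. */
    struct timer_state {
        struct timer_list timer;
        unsigned int samples;
        unsigned int samples_per_bit;   /* 1 on fast counters, >1 on slow ones */
    };

    static void entropy_timer_init(struct timer_state *s)
    {
        s->samples = 0;
        /* However we end up estimating this; one possible shape is
         * sketched further down. */
        s->samples_per_bit = estimate_samples_per_bit();
        timer_setup(&s->timer, entropy_timer, 0);
    }

with the container_of() in entropy_timer() resolving to
container_of(t, struct timer_state, timer), and estimate_samples_per_bit()
being exactly the part we'd need to invent.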

Currently, samples_per_bit is 1. What if we make it >1 on systems with
slow cycle counters? The question then becomes: how do we turn what we
can observe about the cycle counter into a samples_per_bit estimate? The
jitter entropy code in crypto/ does something along these lines. Andy
(CC'd) mentioned to me last week that some time ago he experimented with
computing FFTs on the fly for this sort of estimation. And maybe there
are other ideas still. I wonder if we can find something appropriate for
the kernel here.
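
To make that concrete, one naive direction -- purely a sketch, with the
burst length, thresholds, and mapping all made up -- would be to count how
often random_get_entropy() changes across a burst of back-to-back reads
and scale samples_per_bit by how slow the counter looks:

    /* Purely illustrative; the constants and the mapping are invented. */
    static unsigned int estimate_samples_per_bit(void)
    {
        unsigned long prev = random_get_entropy();
        unsigned int changes = 0;
        int i;

        /* Take a burst of back-to-back reads and count transitions. */
        for (i = 0; i < 1024; ++i) {
            unsigned long cur = random_get_entropy();

            if (cur != prev)
                changes++;
            prev = cur;
        }

        if (changes >= 512)
            return 1;       /* fast cycle counter: behave as today */
        if (changes == 0)
            return 64;      /* counter barely moves: demand many samples */

        /* In between: require more samples the slower the counter looks. */
        return min(1024U / changes, 64U);
    }

Whether that kind of crude transition counting justifies any particular
credit is of course the open question, which is why something with real
statistics behind it (like the FFT idea) is appealing.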

Any thoughts on that direction?

Jason
