Message-ID: <20190916045331.GC23719@1wt.eu>
Date:   Mon, 16 Sep 2019 06:53:31 +0200
From:   Willy Tarreau <w@....eu>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Herbert Xu <herbert@...dor.apana.org.au>,
        "Theodore Y. Ts'o" <tytso@....edu>,
        "Ahmed S. Darwish" <darwish.07@...il.com>,
        Andreas Dilger <adilger.kernel@...ger.ca>,
        Jan Kara <jack@...e.cz>, Ray Strode <rstrode@...hat.com>,
        William Jon McCann <mccann@....edu>,
        zhangjs <zachary@...shancloud.com>, linux-ext4@...r.kernel.org,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>
Subject: Re: Linux 5.3-rc8

On Sun, Sep 15, 2019 at 09:21:06PM -0700, Linus Torvalds wrote:
> The timer interrupt could be somewhat interesting if you are also
> CPU-bound on a non-trivial load, because then "what program counter
> got interrupted" ends up being possibly unpredictable - even with a
> very stable timer interrupt source - and effectively stand in for a
> cycle counter even on hardware that doesn't have a native TSC. Lots of
> possible low-level jitter there to use for entropy. But especially if
> you're just idly _waiting_ for entropy, you won't be "CPU-bound on an
> interesting load" - you'll just hit the CPU idle loop all the time so
> even that wouldn't work.

In the old DOS era, I used to produce random numbers by measuring the
time it took for some devices to reset themselves (typically 8250
UARTs, which could take on the order of milliseconds). Reading their
status registers during the reset phase would show varying sequences
of flags at slightly varying timings.

I suspect this method is still usable, even with SoCs full of
peripherals, in part because not all clocks are synchronous, so we can
retrieve a little bit of entropy by measuring edge transitions. I don't
know how to assess the number of bits such a method provides (probably
log2(card(discrete values))), but maybe this is something we should
progressively encourage driver authors to do in their device probing
functions once we figure out the best way to do it.
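
To make that log2(card(discrete values)) estimate concrete: if a
register is observed to stop at, say, 12 distinct timeout values
across many probes, a sample of it is worth at most floor(log2(12))
= 3 bits. A small illustration (entropy_bits() is a made-up name
here, not an existing kernel function):

     /* floor(log2(n)), i.e. the most bits n distinct values can carry */
     static unsigned int entropy_bits(unsigned int distinct_values)
     {
          unsigned int bits = 0;

          while (distinct_values > 1) {
               distinct_values >>= 1;
               bits++;
          }
          return bits;
     }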

The idea is roughly this. Instead of:

     probe(dev)
     {
          (...)
          /* busy-wait until the device reports ready */
          while (timeout && !(status_reg & STATUS_RDY))
               timeout--;
          (...)
     }

We could do something like this (assuming 1 bit of randomness here):

     probe(dev)
     {
          (...)
          prev_timeout = timeout;
          prev_reg     = status_reg;
          while (timeout && !(status_reg & STATUS_RDY)) {
               if (status_reg != prev_reg) {
                    /* a flag changed: credit the delay since the
                     * previous edge with 1 bit of randomness */
                    add_device_randomness_bits(timeout - prev_timeout, 1);
                    prev_timeout = timeout;
                    prev_reg     = status_reg;
               }
               timeout--;
          }
          (...)
     }
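
Note that only add_device_randomness_bits() above is hypothetical; the
kernel already has add_device_randomness() in <linux/random.h>, which
mixes arbitrary bytes into the input pool without crediting any
entropy, so the same delta could at least be fed in today without a
new API. A minimal sketch of that variant:

     probe(dev)
     {
          u32 delta;
          (...)
          while (timeout && !(status_reg & STATUS_RDY)) {
               if (status_reg != prev_reg) {
                    delta = prev_timeout - timeout;
                    /* mixed into the pool, credited as 0 bits */
                    add_device_randomness(&delta, sizeof(delta));
                    prev_timeout = timeout;
                    prev_reg     = status_reg;
               }
               timeout--;
          }
          (...)
     }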

It's also interesting to note that many motherboards still carry
multiple crystal oscillators (typically one per Ethernet port), and
such independent, free-running clocks do present unpredictable edges
relative to the CPU's clock, so when they affect a device's setup
time, this helps quite a bit.

Willy
