Date:   Wed, 15 Feb 2017 18:55:20 +0100
From:   Denys Vlasenko <vda.linux@...glemail.com>
To:     "Theodore Ts'o" <tytso@....edu>,
        Denys Vlasenko <dvlasenk@...hat.com>,
        "H. Peter Anvin" <hpa@...ux.intel.com>,
        Denys Vlasenko <vda.linux@...glemail.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: random: /dev/random often returns short reads

On Wed, Jan 18, 2017 at 7:07 PM, Theodore Ts'o <tytso@....edu> wrote:
> On Wed, Jan 18, 2017 at 04:44:54PM +0100, Denys Vlasenko wrote:
>> In this case, what the code does is return fewer bytes,
>> even though *it has enough random bytes to return the full request*.
>>
>> This happens because the patch which added the more conservative
>> accounting, while technically correct in its accounting per se,
>> forgot to take into account another part of the code
>> which was relying on the previous, simplistic logic of
>> "if we add N random bytes to a pool, now it has +N random bytes".
>>
>> In my opinion, it should have changed that part of the code at the
>> same time it introduced the more conservative accounting.
>
> In the ideal world, yes.  I've acknowledged this is a bug, in the "be
> conservative in what you send, liberal in what you receive" sense.
> But no one complained for three years, and userspace needs to be able
> to retry short reads instead of immediately erroring out.
>
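
Side note: agreed that userspace has to cope with this; the retry loop a
careful reader needs is short. A minimal sketch (plain POSIX; the helper
name is mine):

/*
 * Read exactly len bytes from fd, retrying short reads and EINTR.
 * Illustrative only; error handling trimmed to the essentials.
 */
#include <errno.h>
#include <unistd.h>

static ssize_t read_full(int fd, void *buf, size_t len)
{
        size_t done = 0;

        while (done < len) {
                ssize_t n = read(fd, (char *)buf + done, len - done);

                if (n < 0) {
                        if (errno == EINTR)
                                continue;       /* interrupted: retry */
                        return -1;              /* real error */
                }
                if (n == 0)
                        break;                  /* EOF: not expected on /dev/random */
                done += n;                      /* short read: read the rest */
        }
        return done;
}
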
> The problem is that changing that code to figure out exactly how many bytes
> you need to get in order to have N random bytes is non-trivial.  So
> our choices are:
>
> 1) Transfer more bytes than might be needed to the secondary pool
...
> 2) Transfer bytes without using the conservative entropy "overwrite"
> calculation if the blocking pool is mostly empty.  This means we will
> be over-estimating the entropy in that pool, which is unfortunate.
> One could argue that in all of the cases where people have been whining
> about this, they are using HAVEGED, which is providing pretend entropy
> based on the presumed unpredictability of Intel cache timing, so
> careful entropy calculations are kind of silly anyway.  However, there
> might be some people who really are trying to do careful entropy
> measurement, so compromising this isn't really a great idea.
>
> I'm leaning a bit towards 1 if we have to do something (which is my
> proposed, untested patch).
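
To make "non-trivial" above concrete: in the idealized continuous model
behind the conservative credit, a pool of P bits currently credited with
E bits that receives N fresh bits ends up credited with
E + (P - E) * (1 - exp(-N/P)); solving that for N gives the exact
transfer size. A sketch (my simplification, not the kernel's fixed-point
code):

#include <math.h>
#include <stdio.h>

/*
 * How many fresh bits must be transferred so that a pool of P bits,
 * currently credited with E bits, ends up credited with at least T?
 * Inverting T = E + (P - E) * (1 - exp(-N / P)) for N:
 */
static double bits_to_transfer(double P, double E, double T)
{
        return -P * log(1.0 - (T - E) / (P - E));
}

int main(void)
{
        /* a 64-byte read from an empty 1024-bit blocking pool */
        printf("%.0f bits\n", bits_to_transfer(1024, 0, 512)); /* ~710 */
        return 0;
}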

I spent quite some time looking both at your patch which implements #1
and at commit 30e37ec516ae5a6957596de7661673c615c82ea4, which introduced
the "conservative accounting" code, and the same thought keeps returning
to me: this complexity and these problems are self-inflicted by
commit 30e37ec516ae5a6957596de7661673c615c82ea4.
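
To illustrate the failure mode it introduced, using the same idealized
model as above (not the kernel's exact arithmetic): a transfer sized
exactly for the request is credited with less than the request, so the
read comes back short.

#include <math.h>
#include <stdio.h>

int main(void)
{
        const double pool = 1024;  /* blocking pool: 128 bytes */
        double entropy = 0;        /* pool starts empty */
        double xfer = 512;         /* transfer sized for a 64-byte read */

        /* Pre-30e37ec516ae logic: credit == xfer, request satisfied.
         * The conservative credit approaches xfer only asymptotically: */
        double credit = (pool - entropy) * (1 - exp(-xfer / pool));

        printf("moved %.0f bits, credited %.0f -> a 64-byte read returns ~%.0f bytes\n",
               xfer, credit, floor(credit / 8)); /* ~403 bits -> 50 bytes */
        return 0;
}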

The code is already conservative elsewhere. Adding still more
conservative code - and more complex code, especially considering that
it would have to grow even more complex than it is now, since it also
needs a further fixup at the "xfer_secondary_pool(r, nbytes)" call
site - does not look like a worthwhile improvement to me.

I propose to simply revert 30e37ec516ae5a6957596de7661673c615c82ea4.

If you worry about this accounting more than I do,
how about dealing with it in a simpler way, a la:

-       xfer_secondary_pool(r, nbytes);
+       xfer_secondary_pool(r, nbytes * 5/4); /* be a bit paranoid */
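
(If I read the 3/4 approximation in that commit right, a transfer is
never credited with more than 3/4 of its size, so even 5/4 can still
leave a - smaller - short read when the blocking pool starts empty;
the factor narrows the gap cheaply rather than closing it exactly,
which seems fine for a "be a bit paranoid" heuristic.)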
