Date:   Mon, 16 Jan 2017 19:50:55 +0100
From:   Denys Vlasenko <vda.linux@...glemail.com>
To:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "Theodore Ts'o" <tytso@....edu>,
        "H. Peter Anvin" <hpa@...ux.intel.com>,
        Denys Vlasenko <dvlasenk@...hat.com>
Subject: random: /dev/random often returns short reads

Hi,

/dev/random can legitimately return short reads
when there is not enough entropy for the full request.
However, it now does so far too often,
and that appears to be a bug:

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
int main(int argc, char **argv)
{
        int fd, ret, len;
        char buf[16 * 1024];

        /* read N bytes (default 32) from /dev/random in one read() */
        len = argv[1] ? atoi(argv[1]) : 32;
        fd = open("/dev/random", O_RDONLY);
        if (fd < 0)
                return 1;
        ret = read(fd, buf, len);
        printf("read of %d returns %d\n", len, ret);
        if (ret != len)
                return 1;
        return 0;
}

# gcc -Os -Wall eat_dev_random.c -o eat_dev_random

# while ./eat_dev_random; do ./eat_dev_random; done; ./eat_dev_random
read of 32 returns 32
read of 32 returns 32
read of 32 returns 28
read of 32 returns 24

Just the first two requests worked, and then ouch...

I think this is what happens here:
we transfer 32 bytes of entropy to the /dev/random pool:

_xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
        int bytes = nbytes;
        /* pull at least as much as a wakeup */
        bytes = max_t(int, bytes, random_read_wakeup_bits / 8);
        /* but never more than the buffer size */
        bytes = min_t(int, bytes, sizeof(tmp));
        bytes = extract_entropy(r->pull, tmp, bytes,
                                random_read_wakeup_bits / 8, rsvd_bytes);
        mix_pool_bytes(r, tmp, bytes);
        credit_entropy_bits(r, bytes*8);
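
For this 32-byte request the clamping changes nothing: bytes stays 32,
so 256 bits are extracted, mixed in, and submitted for credit. A minimal
userspace model of that arithmetic (random_read_wakeup_bits = 64 is the
kernel default; the 512-byte tmp size is my assumption standing in for
sizeof(tmp)):

#include <stdio.h>

/* model of the byte-count clamping in _xfer_secondary_pool() */
int main(void)
{
        int random_read_wakeup_bits = 64;  /* kernel default */
        int nbytes = 32;                   /* the read request */
        int tmp_size = 512;                /* assumed sizeof(tmp) */

        int bytes = nbytes;
        if (bytes < random_read_wakeup_bits / 8)  /* at least a wakeup */
                bytes = random_read_wakeup_bits / 8;
        if (bytes > tmp_size)                     /* at most the buffer */
                bytes = tmp_size;

        printf("pull %d bytes, credit %d bits\n", bytes, bytes * 8);
        return 0;
}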


But when we enter credit_entropy_bits(), there is defensive code
which slightly underestimates the amount of entropy!
It was added by this commit:

commit 30e37ec516ae5a6957596de7661673c615c82ea4
Author: H. Peter Anvin <hpa@...ux.intel.com>
Date:   Tue Sep 10 23:16:17 2013 -0400

    random: account for entropy loss due to overwrites

    When we write entropy into a non-empty pool, we currently don't
    account at all for the fact that we will probabilistically overwrite
    some of the entropy in that pool.  This means that unless the pool is
    fully empty, we are currently *guaranteed* to overestimate the amount
    of entropy in the pool!


The code looks like it effectively credits the pool with only ~3/4
of the amount, i.e. 24 bytes, not 32.
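
A minimal userspace model of that crediting (the 1024-bit blocking
pool size is my assumption; the 3/4 factor is the approximation from
the commit above):

#include <stdio.h>

/* entropy += (pool_size - entropy) * 3/4 * nbits / pool_size;
 * with an empty pool the factor is simply 3/4 */
int main(void)
{
        int pool_size = 1024;   /* bits, assumed blocking pool size */
        int entropy = 0;        /* pool starts (nearly) empty */
        int nbits = 32 * 8;     /* we mixed in 32 bytes */

        int credit = (pool_size - entropy) / 4 * 3 * nbits / pool_size;
        entropy += credit;

        printf("mixed in %d bits, credited %d bits (%d bytes)\n",
               nbits, credit, credit / 8);
        return 0;
}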

If the /dev/random pool was empty or nearly so, this then results
in a short read.

This is wrong because _xfer_secondary_pool() could well have had
lots and lots of entropy to supply; it just did not transfer enough.
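
Userspace can defend itself by retrying until the request is
satisfied, though that only papers over the accounting bug; a minimal
sketch of such a loop:

#include <unistd.h>
#include <errno.h>

/* keep reading until len bytes arrive; each read() from
 * /dev/random may legitimately return less than asked for */
static ssize_t read_full(int fd, void *buf, size_t len)
{
        size_t done = 0;

        while (done < len) {
                ssize_t n = read(fd, (char *)buf + done, len - done);
                if (n < 0) {
                        if (errno == EINTR)
                                continue;
                        return -1;
                }
                if (n == 0)
                        break;  /* EOF: should not happen here */
                done += n;
        }
        return done;
}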
