Message-ID: <20140723151459.GA6673@thunk.org>
Date: Wed, 23 Jul 2014 11:14:59 -0400
From: Theodore Ts'o <tytso@....edu>
To: Andrey Utkin <andrey.krieger.utkin@...il.com>
Cc: hannes@...essinduktion.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: Reading large amounts from /dev/urandom broken
On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
> Dear developers, please check bugzilla ticket
> https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> issue, but starting with comment #3).
>
> Reading from /dev/urandom gives EOF after 33554431 bytes. I believe
> it is introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> with the chunk
>
> nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
>
> which is described in commit message as "additional paranoia check to
> prevent overly large count values to be passed into urandom_read()".
>
> I don't know why people pull such large amounts of data from urandom,
> but given that today there are two bug reports regarding problems doing
> that, I consider that this is done in practice.
I've inquired on the bugzilla why the reporter is abusing urandom in
this way. The other commenter on the bug replicated the problem, but
that's not a "second bug report" in my book.
At the very least, this will probably cause me to insert a warning
printk: "insane user of /dev/urandom: [current->comm] requested %d
bytes" whenever someone tries to request more than 4k.
- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/