Date:	Thu, 06 Dec 2007 14:32:05 -0500
From:	Bill Davidsen <davidsen@....com>
To:	Adrian Bunk <bunk@...nel.org>
CC:	Marc Haber <mh+linux-kernel@...schlus.de>,
	linux-kernel@...r.kernel.org
Subject: Re: Why does reading from /dev/urandom deplete entropy so much?

Adrian Bunk wrote:
> On Tue, Dec 04, 2007 at 12:41:25PM +0100, Marc Haber wrote:
> 
>> While debugging Exim4's GnuTLS interface, I recently found out that
>> reading from /dev/urandom depletes entropy as much as reading from
>> /dev/random would. This has somehow surprised me since I have always
>> believed that /dev/urandom has lower quality entropy than /dev/random,
>> but lots of it.
> 
> man 4 random
> 
>> This also means that I can "sabotage" applications reading from
>> /dev/random just by continuously reading from /dev/urandom, even not
>> meaning to do any harm.
>>
>> Before I file a bug on bugzilla,
>> ...
> 
> The bug would be closed as invalid.
> 
> No matter what you consider as being better, changing a 12 years old and 
> widely used userspace interface like /dev/urandom is simply not an 
> option.

I don't see that he is proposing to change the interface, just how it 
gets the data it provides. Any program which depends on the actual data 
values it gets from urandom is pretty broken anyway. I think that 
getting some entropy from the network is a good thing, even if it's used 
only in urandom, and I would like a rational discussion of checking how 
much entropy is in the pool when urandom is about to draw random data, 
and perhaps of having a lower and upper bound for the pool size.

That is, if there were more than Nmax bits of entropy available, urandom 
would take some; if there were less than Nmin it wouldn't; and between 
the two it would still take data, but less often. This would improve 
urandom quality in the best case, and protect against depleting the 
/dev/random entropy on low-entropy systems. Where's the downside?
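
Just to make the idea concrete, here is a rough user-space sketch of the 
policy I have in mind. The names Nmin and Nmax and the linear probability 
ramp between them are purely illustrative, not a proposed patch to 
drivers/char/random.c:

	#include <stdio.h>
	#include <stdlib.h>

	/* Illustrative thresholds, in bits of estimated input-pool entropy. */
	#define NMIN  128	/* below this, urandom takes nothing from the pool   */
	#define NMAX 3072	/* above this, urandom always takes what it asks for */

	/*
	 * Decide whether a urandom reseed may draw from the input pool right
	 * now.  Between NMIN and NMAX the chance of drawing rises linearly
	 * with the amount of entropy available, so a nearly empty pool is
	 * rarely touched.
	 */
	static int may_draw_from_pool(int entropy_avail)
	{
		if (entropy_avail <= NMIN)
			return 0;
		if (entropy_avail >= NMAX)
			return 1;
		/* probability = (avail - NMIN) / (NMAX - NMIN) */
		return (rand() % (NMAX - NMIN)) < (entropy_avail - NMIN);
	}

	int main(void)
	{
		int avail;

		srand(42);
		for (avail = 0; avail <= 4096; avail += 512)
			printf("entropy_avail=%4d -> draw? %s\n",
			       avail, may_draw_from_pool(avail) ? "maybe" : "no");
		return 0;
	}

The exact shape of the ramp doesn't matter much; the point is only that a 
starved pool stops being drained by urandom readers.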

There has also been a lot of discussion over the years about improving 
the quality of urandom data. I don't personally think that raising the 
quality constitutes "changing a 12 years old and widely used userspace 
interface like /dev/urandom" either.

Sounds like a local DoS attack point to me...
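
For what it's worth, the effect is easy to see from user space on a 
kernel of this vintage; a throwaway test like the one below (standard 
/proc path, arbitrary read size) drains the estimated input-pool entropy 
just by reading /dev/urandom:

	#include <stdio.h>
	#include <stdlib.h>

	/* Read the kernel's current estimate of input-pool entropy, in bits. */
	static int entropy_avail(void)
	{
		FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
		int bits = -1;

		if (f) {
			if (fscanf(f, "%d", &bits) != 1)
				bits = -1;
			fclose(f);
		}
		return bits;
	}

	int main(void)
	{
		FILE *ur = fopen("/dev/urandom", "r");
		char buf[4096];
		int i;

		if (!ur) {
			perror("/dev/urandom");
			return 1;
		}

		printf("before: %d bits\n", entropy_avail());
		/* Pull a few megabytes from urandom and watch the pool drop. */
		for (i = 0; i < 1024; i++)
			if (fread(buf, 1, sizeof(buf), ur) != sizeof(buf))
				break;
		printf("after:  %d bits\n", entropy_avail());

		fclose(ur);
		return 0;
	}

Run that while something else is blocked on /dev/random and you have your 
unprivileged "sabotage".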

-- 
Bill Davidsen <davidsen@....com>
   "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot
