Date:	Tue, 26 Apr 2016 20:55:28 +0200
From:	Stephan Mueller <smueller@...onox.de>
To:	Pavel Machek <pavel@....cz>
Cc:	Theodore Ts'o <tytso@....edu>,
	Sandy Harris <sandyinchina@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	linux-crypto@...r.kernel.org, Jason Cooper <jason@...edaemon.net>,
	John Denker <jsd@...n.com>, "H. Peter Anvin" <hpa@...or.com>,
	Andi Kleen <andi@...stfloor.org>
Subject: Re: random(4) changes

On Tuesday, 26 April 2016 at 20:44:39, Pavel Machek wrote:

Hi Pavel,

> Hi!
> 
> > > When dropping the add_disk_randomness function in the legacy
> > > /dev/random, I
> > > would assume that without changes to add_input_randomness and
> > > add_interrupt_randomness, we become even more entropy-starved.
> > 
> > Sure, but your system isn't doing anything magical here.  The main
> > difference is that you assume you can get almost a full bit of entropy
> > out of each interrupt timing, where I'm much more conservative and
> > assume we can only get 1/64th of a bit out of each interrupt timing.
> 
> Maybe 1/64th of a bit is a bit too conservative? I guess we really
> have more than one bit of entropy on any system with a timestamp
> counter....
> 
> Making it 1/2 of a bit (or something) should be a very easy way to
> improve entropy early during boot...

I can easily settle on 1/2 bit here. The LRNG currently uses 0.9 bits, which 
is based on measurements plus a safety margin. But I see no issue with 
lowering it even further to, say, 1/2.
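
To put numbers on the difference, here is a quick userspace sketch (not 
kernel code; the per-event figures are simply the ones discussed above) that 
shows how many timing events each estimate needs before 128 bits of entropy 
are credited:

/*
 * Back-of-the-envelope comparison of per-event entropy estimates.
 * Userspace illustration only; the values are the ones from this thread.
 */
#include <stdio.h>

int main(void)
{
    const double target_bits = 128.0;
    const double per_event[] = { 1.0 / 64.0, 0.5, 0.9 };
    const char *label[]      = { "1/64 bit", "1/2 bit", "0.9 bit" };

    for (int i = 0; i < 3; i++)
        printf("%-8s per event -> %.0f events needed\n",
               label[i], target_bits / per_event[i]);

    return 0;
}

With 1/64 bit per event that is 8192 events, with 1/2 bit it is 256, and with 
0.9 bits roughly 142, which is why the estimate matters so much early during 
boot.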

But simply raising the per-interrupt entropy estimate in the heuristic of the 
legacy /dev/random is a challenge IMHO. The key issue is the following:

When the legacy /dev/random receives one [block|HID] event, the following 
happens (see the sketch below the two points):

- add_[disk|input]_randomness records a time stamp (which contains the 
majority of the entropy), the jiffies value and the event value, and mixes 
that triplet into the input pool

- for the very same event, add_interrupt_randomness is also triggered and 
records the time stamp (plus jiffies, the instruction pointer and one 
register). Again, the majority of the entropy comes from the time stamp.
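
A minimal userspace mock of that sequence, just to illustrate the double 
accounting; the function names mirror the kernel ones, but the bodies and the 
credit values are simplified stand-ins, not the real implementation:

/*
 * Mock of the two paths that fire for a single [block|HID] event.
 * Both derive their input mainly from (almost) the same time stamp.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static double pool_entropy_bits;

static uint64_t read_timestamp(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* stand-in for add_[disk|input]_randomness() */
static void add_input_randomness(unsigned int event_value)
{
    uint64_t stamp = read_timestamp();  /* bulk of the entropy */
    /* in the kernel: (stamp, jiffies, event_value) are mixed into the
     * input pool and credited via the timing heuristic */
    pool_entropy_bits += 1.0;           /* illustrative credit only */
    (void)stamp; (void)event_value;
}

/* stand-in for add_interrupt_randomness() */
static void add_interrupt_randomness(int irq)
{
    uint64_t stamp = read_timestamp();  /* nearly the same stamp again */
    /* in the kernel: (stamp, jiffies, instruction pointer, register)
     * are collected and eventually credited */
    pool_entropy_bits += 1.0 / 64.0;    /* illustrative credit only */
    (void)stamp; (void)irq;
}

int main(void)
{
    /* one HID event triggers both paths with correlated time stamps */
    add_interrupt_randomness(1);
    add_input_randomness(42);
    printf("credited for one event: %.3f bits\n", pool_entropy_bits);
    return 0;
}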

Both invocations are applied to the same event, and in each of them the 
majority of the entropy is derived from a time stamp. The two invocations are 
therefore highly correlated, and so are the time stamps they obtain. Thus, 
the time stamp of either one must not be credited with high entropy content.

If the credited entropy for an interrupt is raised, the credited entropy for 
add_[disk|input]_randomness must be decreased accordingly. That is the core 
issue, and the reason why I came up with a separate way of recording these 
events.
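
Expressed as a budget: the two credits for one event have to share the 
entropy of a single time stamp. The figures below are purely illustrative and 
not what either implementation actually credits:

#include <stdio.h>

/* assumed total entropy carried by one time stamp (illustrative) */
#define TIMESTAMP_ENTROPY 0.9

int main(void)
{
    double irq_credit   = 0.5;                          /* add_interrupt_randomness */
    double input_budget = TIMESTAMP_ENTROPY - irq_credit; /* what remains for
                                                            add_[disk|input]_randomness */

    printf("crediting %.2f bits per interrupt leaves at most %.2f bits\n"
           "for add_[disk|input]_randomness on the same event\n",
           irq_credit, input_budget);
    return 0;
}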

Ciao
Stephan
