Date:	Thu, 17 Jul 2014 08:52:07 -0400
From:	Theodore Ts'o <tytso@....edu>
To:	Hannes Frederic Sowa <hannes@...essinduktion.org>
Cc:	linux-kernel@...r.kernel.org, linux-abi@...r.kernel.org,
	linux-crypto@...r.kernel.org, beck@...nbsd.org
Subject: Re: [PATCH, RFC] random: introduce getrandom(2) system call

On Thu, Jul 17, 2014 at 12:57:07PM +0200, Hannes Frederic Sowa wrote:
> 
> Btw. couldn't libressl etc. fall back to binary_sysctl
> kernel.random.uuid and seed with that as a last resort? We have it
> available for few more years.

Yes, they could.  But trying to avoid more uses of binary_sysctl seems
to be a good thing, I think.  The other thing this interface provides
is the ability to block until the entropy pool is initialized, which
isn't a big deal for x86 systems, but might be useful as a gentle
forcing function to get ARM systems to figure out good ways of making
sure the entropy pools are initialized (i.e., by actually providing a
!@#!@ cycle counter) without breaking userspace compatibility --- since
this is a new interface.
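
(As a sketch of what the userspace side might look like, assuming the
syscall number ends up exported as SYS_getrandom and that a flags value
of 0 gives the block-until-initialized behavior described above, with a
/dev/urandom fallback for older kernels; the helper name is just for
illustration:)

#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

static int get_entropy(void *buf, size_t len)
{
#ifdef SYS_getrandom
	/* Blocks until the pool is initialized, then fills buf. */
	long r = syscall(SYS_getrandom, buf, len, 0);
	if (r == (long) len)
		return 0;
	if (r < 0 && errno != ENOSYS)
		return -1;
#endif
	/* Old kernel without the syscall: fall back to /dev/urandom. */
	int fd = open("/dev/urandom", O_RDONLY);
	if (fd < 0)
		return -1;
	ssize_t n = read(fd, buf, len);
	close(fd);
	return n == (ssize_t) len ? 0 : -1;
}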

> > +	if (count > 256)
> > +		return -EINVAL;
> > +
> 
> Why this "arbitrary" limitation? Couldn't we just check for > SSIZE_MAX
> or to be more conservative to INT_MAX?

I'm not wedded to this limitation.  OpenBSD's getentropy(2) has an
architected, arbitrary limit of 128 bytes.  I haven't made a final
decision whether the right answer is to hard code some value, make
this limit configurable, or remove the limit entirely (which in
practice would be SSIZE_MAX or INT_MAX).

The main argument I can see for putting in a limit is to encourage the
"proper" use of the interface.  In practice, any request larger than
128 bytes probably means the interface is getting misused, either due
to a bug or some other kind of oversight.
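
(And if a caller really does need more than that, the right thing is to
loop in small chunks anyway; a sketch, with the same caveat that
SYS_getrandom is assumed to exist and that the per-call cap is 256
bytes:)

#include <errno.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Fill an arbitrarily large buffer, at most 256 bytes per syscall. */
static int fill_random(unsigned char *buf, size_t len)
{
	while (len > 0) {
		size_t chunk = len > 256 ? 256 : len;
		long r = syscall(SYS_getrandom, buf, chunk, 0);
		if (r < 0) {
			if (errno == EINTR)
				continue;
			return -1;
		}
		buf += r;
		len -= r;
	}
	return 0;
}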

For example, when I started instrumenting /dev/urandom, I caught
Google Chrome pulling 4k out of /dev/urandom --- twice --- at startup
time.  It turns out it was the fault of the NSS library, which was
using fopen() to access /dev/urandom.  (Sigh.)
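
(The mechanism is stdio buffering: a fully buffered FILE * reads a
whole stdio buffer, typically 4096 bytes on Linux since it is sized
from st_blksize, even when the caller only asks for a handful of
bytes.  Running something like the following under strace shows the
difference; this is just an illustration, not the NSS code:)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char seed[16];

	/* stdio path: strace shows read(..., 4096) for a 16-byte fread(). */
	FILE *f = fopen("/dev/urandom", "r");
	if (f) {
		fread(seed, 1, sizeof(seed), f);
		fclose(f);
	}

	/* raw path: read(..., 16) pulls only what was asked for. */
	int fd = open("/dev/urandom", O_RDONLY);
	if (fd >= 0) {
		read(fd, seed, sizeof(seed));
		close(fd);
	}
	return 0;
}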

						- Ted
