lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20090513060850.GZ31071@waste.org>
Date:	Wed, 13 May 2009 01:08:50 -0500
From:	Matt Mackall <mpm@...enic.com>
To:	Chris Peterson <cpeterso@...terso.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] [resend] drivers/net: remove network drivers' last few uses of IRQF_SAMPLE_RANDOM

On Wed, May 13, 2009 at 01:34:47AM -0400, Chris Peterson wrote:
> 
> I know a new "pragmatic entropy accounting model" is in the works, but 
> until then, this patch removes the network drivers' last few uses of 
> theoretically-exploitable network entropy. Only 11 net drivers are 
> affected. Headless servers should use a more secure source of entropy, 
> such as the userspace daemons.

Actually, I'd rather not do this.

I've instead become convinced that what /dev/random's entropy
accounting model is trying to achieve is not actually possible.
It requires:

a) a strict underestimate of entropy
b) from completely unobservable, uncontrollable sources
c) with no correlation to observable sources

If and only if we meet all three of those requirements for all entropy
sources can we actually reach the theoretical point where /dev/random
is actually distinct from /dev/urandom. 

Practically, we're nowhere close on any of those points. We have no
good model for estimating (a) for most sources, and almost all sources
are directly or indirectly observable or controllable to some degree.

Once we acknowledge that, it's easy to see that the right way forward
is not to aim for perfect, but instead to aim for really good. And
that means:

1) significantly more sampling sources with lower overhead
2) more defense in depth
3) working well on headless machines and with hardware RNG sources
4) simpler, more auditable code
5) never starving users

So while your current patch is 'correct' in the current theoretical
model (and one I've personally tried to push in the past), I think the
theoretical model itself needs to change and this is thus a step in
the wrong direction. The future model will continue to sample network
devices on theory that they -might- be less than 100% observable and
that can only increase our total (unmeasurable) amount of entropy.

-- 
Mathematics is the supreme nostalgia of our time.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ