Message-ID: <9a8748490805151557g5031ad0dr22b75980161f31b6@mail.gmail.com>
Date:	Fri, 16 May 2008 00:57:40 +0200
From:	"Jesper Juhl" <jesper.juhl@...il.com>
To:	"Theodore Tso" <tytso@....edu>,
	"Jesper Juhl" <jesper.juhl@...il.com>,
	"Adrian Bunk" <bunk@...nel.org>,
	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
	"Alan Cox" <alan@...rguk.ukuu.org.uk>,
	"Chris Peterson" <cpeterso@...terso.com>, jeff@...zik.org,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	mpm@...enic.com
Subject: Re: [PATCH] drivers/net: remove network drivers' last few uses of IRQF_SAMPLE_RANDOM

2008/5/16 Theodore Tso <tytso@....edu>:
> On Fri, May 16, 2008 at 12:13:39AM +0200, Jesper Juhl wrote:
>> My point is that the rate of (and timing between) syscalls depends
>> on very many factors: the kernel version (and configuration), the
>> software installed, the software currently executing, the state of the
>> software currently executing, the number of apps executing, the amount
>> of network traffic, the accuracy of the hardware clock, the speed of
>> (various) IO sources (network, disk, USB, etc), the speed (and type)
>> of the CPU, the speed of memory. And various other things.
>
> It Depends.
>
> For certain workloads, a lot of these issues might just boil out, or
> not result in as much entropy as you think.  Think about a certificate
> server which doesn't get much traffic, but when it is contacted, it is
> expected to create new high-security RSA keys and the public key
> certificates to go with them.  If the attacker knows the machine type,
> distribution OS loaded, etc., it might not be that hard to brute-force
> guess many of the factors you have listed above.
>

Hmm, I would like to know how you'd do that.
Even if you a) know the exact distro installed (and its configuration),
b) know the exact hardware in the machine, c) know the exact time it
was booted and know that there have been no ssh logins or similar that
might have generated syscalls, and d) know exactly how many requests
(and of what type) have been made to the server and the exact times
they were made - how would you go about brute-force guessing the
contents of the entropy pool?

Suppose the server does, for example, this: every second it samples the
number of syscalls made during that second and uses that number as the
basis for adding one or two bits of entropy, and every time a syscall
is made it uses the "time since last syscall in microseconds" to add
one bit of entropy to the pool. I'd say that even if that server sees
very little (and even predictable) traffic, we may have the details of
the filesystem layout on disk, a timer interrupt happening a few
microseconds early due to a flaky chip, a background process initiating
some action a millisecond early/late for scheduling reasons, the switch
the machine is connected to causing a network packet to arrive a tiny
bit later than normal, and various other factors like that, causing the
generated entropy to be off by a bit or two compared to your guess -
and by the time you realize you are off, another spurious event has
probably happened, so you'll never end up in sync with the entropy
pool...  Or is there some "obvious entropy pool guessing method" that
I'm just not aware of?
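
Just to make the bit-extraction part concrete, here's a rough
user-space toy (purely illustrative: the dummy event loop, the 80
samples and the hex output are my stand-ins; in the proposal the
events would be syscalls and the mixing would happen inside the
kernel - and in a tight loop like this the deltas are nearly constant,
so this only shows the extraction, not a real entropy source):

/* Toy sketch: keep only the least significant bit of the microsecond
 * delta between successive "events" and pack those bits into bytes. */
#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>

static uint64_t now_us(void)
{
        struct timeval tv;

        gettimeofday(&tv, NULL);
        return (uint64_t)tv.tv_sec * 1000000ULL + tv.tv_usec;
}

int main(void)
{
        uint64_t last = now_us();
        unsigned char byte = 0;
        int i, bits = 0;

        for (i = 0; i < 80; i++) {      /* 80 "events" -> 10 bytes */
                uint64_t now = now_us();

                /* one bit per event: the low bit of the delta */
                byte = (unsigned char)((byte << 1) | ((now - last) & 1));
                last = now;
                if (++bits == 8) {
                        printf("%02x", byte);
                        bits = 0;
                        byte = 0;
                }
        }
        printf("\n");
        return 0;
}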


> Basically the question has always been one of the overhead to collect
> and boil down any input data (which after all, any user space process
> can send arbitrary data into the entropy pool via "cat my_secret_data
>> /dev/random") which will never hurt and might help.  The tricky bit
> is estimating how much "entropy" should be ascribed to data which is
> sent into the entropy pool, and this is where you have to be very
> careful.
>
Yes, I'm aware of that, and I'm not suggesting using syscall rates as
a generator of high amounts of high-quality entropy. I'm merely
suggesting that sampling syscall rates and the time between syscalls,
and using those numbers as the source of very small amounts of
low-quality entropy, might be worthwhile. It wouldn't hurt on machines
that have other, higher-quality entropy sources. On machines that have
no other entropy sources it would ensure that we always have a steady
(although slow) trickle of new entropy available...
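
And the "can't hurt" plumbing already exists from user space: any
process can mix bytes into the pool simply by writing them to
/dev/random, and a plain write() credits no entropy at all, so bad
input can't make the pool worse. A minimal sketch (the sample[]
contents are just placeholders for whatever low-quality samples one
collects):

/* Sketch: mix locally sampled bytes into the pool via a plain write()
 * to /dev/random.  The data is mixed in but no entropy is credited. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        unsigned char sample[16] = { 0 };       /* placeholder samples */
        int fd = open("/dev/random", O_WRONLY);

        if (fd < 0) {
                perror("open /dev/random");
                return 1;
        }
        if (write(fd, sample, sizeof(sample)) != (ssize_t)sizeof(sample))
                perror("write");
        close(fd);
        return 0;
}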

> If you screw up the entropy credit information then the security of
> /dev/random will be impacted.  /dev/urandom won't be impacted since it
> doesn't care about the entropy estimation.  That's why only root is
> allowed to use the ioctl which atomically sends in some "known to be
> random" data and the entropy credit ascribed to that data.
>
I'm only talking about providing some data for /dev/random here.
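
(For reference, the root-only path you describe is the RNDADDENTROPY
ioctl on /dev/random. A minimal sketch - the struct layout below just
mirrors struct rand_pool_info from <linux/random.h>, and the zeroed
buffer plus the claim of 64 bits of entropy are purely illustrative:)

/* Sketch: RNDADDENTROPY atomically mixes a buffer into the pool AND
 * credits it with a caller-supplied entropy estimate.  Needs
 * CAP_SYS_ADMIN. */
#include <fcntl.h>
#include <linux/random.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
        struct {
                int entropy_count;              /* bits of entropy claimed */
                int buf_size;                   /* bytes of data that follow */
                unsigned char buf[32];          /* "known to be random" data */
        } req;
        int fd = open("/dev/random", O_WRONLY);

        if (fd < 0) {
                perror("open /dev/random");
                return 1;
        }

        memset(req.buf, 0, sizeof(req.buf));    /* placeholder data */
        req.entropy_count = 64;
        req.buf_size = sizeof(req.buf);

        if (ioctl(fd, RNDADDENTROPY, &req) < 0)
                perror("ioctl(RNDADDENTROPY)");

        close(fd);
        return 0;
}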

-- 
Jesper Juhl <jesper.juhl@...il.com>
Don't top-post http://www.catb.org/~esr/jargon/html/T/top-post.html
Plain text mails only, please http://www.expita.com/nomime.html
