Message-ID: <MDEHLPKNGKAHNMBLJOLKAEMHEOAB.davids@webmaster.com>
Date: Mon, 4 Jul 2005 16:56:33 -0700
From: "David Schwartz" <davids@...master.com>
To: "Chiaki" <ishikawa@...rim.or.jp>
Cc: <bugtraq@...urityfocus.com>,
"Charles M. Hannum" <mycroft@...bsd.org>
Subject: RE: /dev/random is probably not
> It's been a while since I looked at the /dev/random design on Linux
> (probably the early 2.4 days), however one thing that was quite clear
> was that they did not use any network I/O as entropy sources because
> an attacker, particularly one that already had control of other
> machines on the same LAN segment, could have a high degree of control
> over that source. I would be most interested if that has changed since
> the last time I looked at it.
If you're talking about a modern x86 system, you don't need to worry. Even
an attacker who had full view and control over the local LAN could not
predict the timing of network packets as seen by the CPU. There's entropy in
the offset between the network card's oscillator and the frequency
multiplier that produces the CPU core clock. The TSC at the time the packet
is noticed by the CPU still contains unpredictable entropy.
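The idea above can be sketched in a few lines. This is a toy illustration, not the real Linux pool construction: `time.perf_counter_ns()` stands in for reading the x86 TSC, and the `ToyPool` class and its SHA-256 mixing are assumptions for demonstration only. The point is that only the low-order bits of a high-resolution counter, sampled at event arrival, are kept and mixed, since those bits depend on drift between independent oscillators and are hardest for an observer to predict.

```python
import hashlib
import time

def low_bits(ts, nbits=8):
    # Keep only the low-order bits of the timestamp; these depend on
    # the drift between independent clocks (NIC oscillator vs. CPU
    # core clock) and are the hardest part for an attacker to predict.
    return ts & ((1 << nbits) - 1)

class ToyPool:
    """A toy entropy pool: hash each new sample into a running state.
    Illustrative only -- not the actual kernel pool design."""
    def __init__(self):
        self.state = b"\x00" * 32

    def mix(self, sample):
        self.state = hashlib.sha256(
            self.state + sample.to_bytes(2, "little")
        ).digest()

def sample_events(n=64):
    pool = ToyPool()
    seen = set()
    for _ in range(n):
        ts = time.perf_counter_ns()  # stand-in for reading the TSC
        bits = low_bits(ts)
        seen.add(bits)
        pool.mix(bits)
    return pool.state, seen
```

Running `sample_events()` even in a tight loop with no external events shows the low-order counter bits taking many distinct values, which is the unpredictability the paragraph above is describing.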
For every unforeseen thing that makes the entropy not as good as we expect,
there's an unforeseen thing that makes the entropy better than expected.
Realistically, there is nothing to worry about. (However, from a theoretical
standpoint, there's plenty of room for improvements and more provable
guarantees rather than "there's no known (or foreseeable) way to break it".)
DS