Date: Fri, 8 Aug 2008 16:51:10 -0400 (EDT)
From: "Leichter, Jerry" <leichter_jerrold@....com>
To: Eric Rescorla <ekr@...workresonance.com>
Cc: Nicolas Williams <Nicolas.Williams@....com>,
	Dan Kaminsky <dan@...para.com>, Dave Korn <dave.korn@...imi.com>,
	"'Ben Laurie'" <benl@...gle.com>, bugtraq@...urityfocus.com,
	security@...nid.net, "'OpenID List'" <general@...nid.net>,
	cryptography@...zdowd.com, full-disclosure@...ts.grok.org.uk
Subject: Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

| > You can get by with a lot less than 64 bits.  People see problems
| > like this and immediately think "birthday paradox", but there is no
| > "birthday paradox" here:  You aren't look for pairs in an
| > ever-growing set, you're looking for matches against a fixed set.
| > If you use 30-bit hashes - giving you about a 120KB table - the
| > chance that any given key happens to hash to something in the table
| > is one in a billion, now and forever.  (Of course, if you use a
| > given key repeatedly, and it happens to be that 1 in a billion, it
| > will hit every time.  So an additional table of "known good keys
| > that happen to collide" is worth maintaining.  Even if you somehow
| > built and maintained that table for all the keys across all the
| > systems in the world - how big would it get, if only 1 in a billion
| > keys world-wide got entered?)
| I don't believe your math is correct here. Or rather, it would
| be correct if there was only one bad key.
| 
| Remember, there are N bad keys and you're using a b-bit hash, which
| has 2^b distinct values. If you put N' entries in the hash table, the
| probability that a new key will have the same digest as one of them is
| N'/(2^b). If b is sufficiently large to make collisions rare, then
| N'=~N and we get N/(2^b).
| 
| To be concrete, we have 2^15 distinct keys, so the probability of a
| false positive becomes (2^15)/(2^b) = 2^(15-b).  To get that
| probability below 1 in a billion (about 2^-30), you need b - 15 >= 30,
| so about 45 bits.  I chose 64 because it seemed to me that a false
| positive probability of 2^{-48} or so was better.
You're right, of course - I considered 32,000 to be "vanishingly small"
compared to the number of hash values, but it isn't.  The perils of
looking at one number in decimal and the other in exponential form....
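
For concreteness, here's a minimal Python sketch of that arithmetic,
assuming the ~2^15 distinct weak keys discussed above:

    # False-positive probability of checking a fresh key against a
    # table of N weak-key fingerprints, each truncated to b bits.
    N = 2 ** 15                      # distinct weak keys, as above
    for b in (30, 45, 64):
        p = N / 2 ** b               # chance a fresh key matches the table
        print("b = %2d bits: false-positive probability ~= %.3g" % (b, p))
    # b=30 gives ~3e-05 (about 1 in 32768), b=45 gives ~9e-10
    # (about 1 in a billion), b=64 gives ~2e-15.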

In any case, I think it's clear that even for extremely conservative
"false hit" ratios, the table size is quite reasonable.  You wouldn't
want the table on your smart card or RFID chip, perhaps, but even a
low-end "smartphone" would have no problem holding it.

							-- Jerry
