Date: Thu, 9 Dec 2004 02:47:22 +0100 (MET)
From: Pavel Kankovsky <peak@...o.troja.mff.cuni.cz>
To: BugTraq <bugtraq@...urityfocus.com>
Subject: Re: MD5 To Be Considered Harmful Someday


On Tue, 7 Dec 2004, Joel Maslak wrote:

> The short-term fix seems to be something I've been recommending for a
> while:
> 
> Compute hashes with both SHA-1 and MD5.

Incidentally, this is the method used by SSL/TLS. But they don't
concatenate hash values; they xor them together.
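
For illustration, this is roughly the TLS 1.0 PRF construction (a sketch
in Python from my memory of RFC 2246, so treat the details as
approximate):

import hashlib, hmac

def p_hash(hash_name, secret, seed, out_len):
    # RFC 2246 P_hash: A(0) = seed, A(i) = HMAC_hash(secret, A(i-1)),
    # output = HMAC_hash(secret, A(1)+seed) + HMAC_hash(secret, A(2)+seed) + ...
    out, a = b"", seed
    while len(out) < out_len:
        a = hmac.new(secret, a, getattr(hashlib, hash_name)).digest()
        out += hmac.new(secret, a + seed, getattr(hashlib, hash_name)).digest()
    return out[:out_len]

def tls10_prf(secret, label, seed, out_len):
    # Split the secret into two (possibly overlapping) halves, expand one
    # half with P_MD5 and the other with P_SHA1, and xor the two streams.
    half = (len(secret) + 1) // 2
    s1, s2 = secret[:half], secret[-half:]
    md5_stream  = p_hash("md5",  s1, label + seed, out_len)
    sha1_stream = p_hash("sha1", s2, label + seed, out_len)
    return bytes(a ^ b for a, b in zip(md5_stream, sha1_stream))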


On Wed, 8 Dec 2004, Jack Lloyd wrote:

> The most obvious example of that is that by using one of the known MD5
> collision pairs, you can cause 5/9 of the hash output to change while
> keeping the rest of the hash constant. While this is not a problem
> when the hash is merely a hash, it does mean you can't realistically
> model it as a PRF.

If you mix the results with xor (or another function with similar
properties), then the result is a PRF as long as at least one of the
constituent functions is a PRF.

Of course, I assume the constituent functions return the same number of
bits. When you take MD5 (128 bits) and SHA-1 (160 bits), you have to
either expand the result of MD5 or truncate the result of SHA-1 to make
them match.
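
A minimal sketch of that length matching (truncate SHA-1 to 128 bits and
xor; an illustration only, not a construction I am endorsing):

import hashlib

def md5_xor_sha1(data):
    # Truncate SHA-1 (160 bits) to 128 bits so it can be xored with MD5.
    md5  = hashlib.md5(data).digest()          # 16 bytes
    sha1 = hashlib.sha1(data).digest()[:16]    # 20 bytes, keep the first 16
    return bytes(a ^ b for a, b in zip(md5, sha1))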


On Wed, 8 Dec 2004, Jack Lloyd wrote:

> It actually looks like it's better to generate 2^80 MD5 collisions
> instead of 2^64 SHA-1 collisions, since the initial collisions would
> be trivial to generate, and even if MD5 wasn't broken 80*2^64 + 2^80
> << 64*2^80 + 2^64. So in fact you can generate a MD5||SHA collision
> with only a tiny bit more work than generating a SHA collision.

This means xor is no worse than concatenation. :)
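
Writing the quoted estimate out, counting one compression-function call
as the unit of work and using the rough constants from the message:

cost_md5_first  = 80 * 2**64 + 2**80  # 2^80-way MD5 multicollision, then birthday-search SHA-1
cost_sha1_first = 64 * 2**80 + 2**64  # 2^64-way SHA-1 multicollision, then birthday-search MD5
print(cost_md5_first < cost_sha1_first)    # True
print(cost_sha1_first // cost_md5_first)   # ~64: starting from MD5 is roughly 64x cheaper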


> So this really doesn't buy you anything.

This buys you some level of resistance against specific weaknesses in
hash algorithms, such as those exploited by the recent Chinese attacks
against MD5 et al.

When you take two (or more) functions built in different ways
(unfortunately, this is not true for most commonly used hash functions)
and combine their results properly, it is likely there will be a strong
"impedance mismatch" between their weaknesses, and it will be very
difficult to break both of them simultaneously.


On Sun, 5 Dec 2004, Ruth A. Kramer wrote:

> 3. At one point I read the thesis of the guy that wrote rsync (his name
> isn't on the tip of my tongue at the moment, that's embarrassing) -- I
> was fairly disappointed (and surprised) to find that the reliability of
> rsync relied on a similar "probabilistic" approach (can't find the right
> words).  As I recall, this was not mentioned on the first page of his
> thesis (I could be wrong), the README for rsync, nor as a user message
> when the program is invoked.  IIRC, when the subject was brought up in
> his thesis, it was effectively dismissed as "it'll never happen".  (The
> words invoked, if not actually used, phrases like "the heat death of the
> universe".)

Rsync trades the risk of hitting a collision for the reduction of data it
has to transfer over the network.

The only way to avoid the risk of hash collision is to transfer
everything--but the more data you transfer, the higher the risk that it
will be corrupted en route (download a handful of terabytes over FTP,
i.e. relying on TCP checksums only, and you'll see...).

It's always the "probabilistic approach".
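
Some toy numbers to put the two risks side by side (my own rough
figures, crudely treating the 16-bit TCP checksum as missing a random
corruption with probability 2^-16):

p_tcp_miss   = 2.0 ** -16    # corrupted segment slips past the 16-bit TCP checksum
p_hash_match = 2.0 ** -128   # two different blocks share a 128-bit strong checksum

blocks = (5 * 2**40) // (8 * 1024)   # ~5 TB split into 8 KB blocks (arbitrary sizes)
print(blocks)                        # ~671 million blocks
print(blocks * p_hash_match)         # expected spurious matches: ~2e-30
print(p_tcp_miss)                    # vs. ~1.5e-5 per corrupted segment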


On Wed, 8 Dec 2004, Dan Kaminsky wrote:

> There are, however, others that abuse extra rounds to great effect.  
> For instance, SHA-0 is an 80 round algorithm.  Biham's paper
> (http://eprint.iacr.org/2004/146/) showed that an 82 round variant is
> actually much weaker.

Yes...but the algorithm for MD5 password hashes does not add extra
rounds to the hash.

It iterates the whole hash function in a way similar to its ordinary use
on multi-block input (i.e. iterated invocation of its compression function
with a predefined initial value on one end, and Merkle-Damgaard padding on
the other end).

> Of course, as I've said elsewhere passwords really aren't at all 
> vulnerable to the MD5 attack.  But, if they were, extra iterations 
> wouldn't be helpful.  Once the first round collided, all future rounds 
> would continue to collide.

The iterations are not uniform. There are 2*3*7 = 42 different ways in
which additional input (password plaintext, salt) is fed into the hash.
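
For the curious, this is roughly the stretching loop of md5crypt ($1$)
as I remember it; the setup that produces the initial intermediate
digest is left out:

import hashlib

def md5crypt_rounds(password, salt, intermediate):
    # 1000 rounds; the i%2, i%3, i%7 tests give 2*3*7 = 42 distinct
    # patterns in which the password and salt are mixed back in.
    for i in range(1000):
        ctx = hashlib.md5()
        ctx.update(password if i % 2 else intermediate)
        if i % 3:
            ctx.update(salt)
        if i % 7:
            ctx.update(password)
        ctx.update(intermediate if i % 2 else password)
        intermediate = ctx.digest()
    return intermediate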


--Pavel Kankovsky aka Peak  [ Boycott Microsoft--http://www.vcnet.com/bms ]
"Resistance is futile. Open your source code and prepare for assimilation."


