Date: Wed, 16 Feb 2005 17:25:06 -0800
From: Robert Sussland <robert@...wood.org>
To: Gadi Evron <gadi@...ila.gov.il>
Cc: bugtraq@...urityfocus.com
Subject: Re: SHA-1 broken



On Feb 16, 2005, at 4:56 AM, Gadi Evron wrote:

> Now, we've all seen this coming for a while.
> http://www.schneier.com/blog/archives/2005/02/sha1_broken.html
>
> Where do we go from here?
>
We abandon the requirement of collision resistance. This is a strange 
requirement, and is not supported by experience. Collision resistance 
is not a "hard" problem in the sense that factoring large numbers or 
computing discrete logs is hard. Collision resistance in deterministic 
hash functions smells too much like generating entropy without secrets. 
I see no reason to believe that careful analysis of *any* publicly 
known deterministic many-to-one function would fail to yield a 
collision, assuming I control all inputs to the function.
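
To make the point concrete, here is a minimal Python sketch (mine, not
anything specific to SHA-1) of a generic birthday-style collision
search against a deliberately truncated hash: an n-bit output falls to
roughly 2^(n/2) attacker-chosen inputs, with no structural weakness
needed. The 4-byte truncation is only so the search finishes quickly.

# Generic birthday collision search against a truncated hash.
# Illustrative sketch only; the truncation exists so it runs in seconds.
import hashlib
import os

def truncated_sha1(data: bytes, nbytes: int = 4) -> bytes:
    """SHA-1 truncated to nbytes of output (32 bits by default)."""
    return hashlib.sha1(data).digest()[:nbytes]

def find_collision(nbytes: int = 4):
    seen = {}  # digest -> message already observed
    while True:
        msg = os.urandom(16)               # attacker-chosen input
        digest = truncated_sha1(msg, nbytes)
        if digest in seen and seen[digest] != msg:
            return seen[digest], msg       # two distinct inputs, same digest
        seen[digest] = msg

if __name__ == "__main__":
    m1, m2 = find_collision()
    print("collision:", m1.hex(), "and", m2.hex())
    print("digest   :", truncated_sha1(m1).hex())

After roughly 2^16 random inputs the search above almost surely stops;
the attacker never needed to know anything about the function's
internals, only to control the inputs.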

From my point of view, the issue is which weaker assumption we replace 
collision resistance with. How about target collision resistance, with 
the "strength" of resistance equal to the average advantage an attacker 
gains in matching a fixed target, where the target is averaged over all 
possible inputs in a measure space? Then producing "rare" messages 
which could be targeted would not weaken the hash, since the 
probability of such messages occurring would be low.
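
As a rough sketch of that game, in the spirit of UOWHFs: the attacker
commits to a target message first, a random salt is drawn only
afterwards, and the attacker then has to produce a distinct message
colliding under the salted hash. The salted construction and the toy
adversary below are purely illustrative assumptions of mine.

# Target-collision-resistance game, sketched with an ad-hoc salted hash.
import hashlib
import os

def salted_hash(salt: bytes, msg: bytes) -> bytes:
    # Hypothetical keyed construction, for illustration only.
    return hashlib.sha256(salt + msg).digest()

def tcr_game(adversary_commit, adversary_collide) -> bool:
    """Return True if the adversary wins one round of the TCR game."""
    target = adversary_commit()               # 1. attacker fixes a target
    salt = os.urandom(16)                      # 2. challenger draws the salt
    second = adversary_collide(target, salt)   # 3. attacker answers
    return (second != target and
            salted_hash(salt, target) == salted_hash(salt, second))

if __name__ == "__main__":
    # A trivial guessing adversary essentially never wins, which is the
    # point: a colliding pair found offline is useless once the salt moves.
    wins = sum(
        tcr_game(lambda: b"target message",
                 lambda t, s: os.urandom(16))
        for _ in range(1000)
    )
    print("adversary wins:", wins, "out of 1000")

The advantage in this game is measured on average over the salt and
target, which is exactly the kind of averaged notion I have in mind
above.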


