Date: Thu, 12 Feb 2015 18:24:21 +0530
From: Donghoon Chang <>
To: "" <>
Subject: Re: [PHC] PHC status report

Dear Samuel,

Let me try to explain how NIST chose 14 candidates out of 51 in a scientific
way. (At that time, I was not involved with NIST.) But it is clear that the
14 were chosen based on scientific facts, not on any notion of elegance.

Among the 51, the 34 candidates other than MD6, SWIFFTX, SANDstorm, and the
14 second-round candidates had security issues which were clearly described
in the SHA-3 Zoo. Here, a security issue means that a non-randomness
property, a collision, etc. was found in the underlying primitive(s) (such
as a permutation, a block cipher, or a function) of each of the 34
candidates. If there is any non-randomness property of the underlying
primitives, it is hard to justify the entire hash construction.
For example, SHA-256 is based on the Merkle-Damgard (MD) domain extension.
The MD construction guarantees that if its compression function is
collision resistant, then the SHA-256 hash function is collision resistant.
However, if the underlying compression function is not collision resistant,
it would be difficult to guarantee the collision resistance of the hash
function. Can you imagine that we would have used SHA-2 without any worry
if the compression function of SHA-2 were not collision resistant? So, for
the structural soundness of the SHA-256 hash function, it is necessary to
show that its compression function appears to be collision resistant.
Likewise, since a hash function may be used in many applications, such as
a pseudorandom number generator, the desirable requirement on the
underlying primitive is that it has no non-randomness property. In that
sense, the 34 candidates failed to justify the structural soundness of
their algorithms, since non-randomness was found in their underlying
primitives, as shown in the SHA-3 Zoo.
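For readers less familiar with the MD domain extension, a toy sketch of the
idea: pad the message with length strengthening, then fold fixed-size blocks
through a compression function, chaining the state. The compression function
below (truncated SHA-256 over state||block) is only a placeholder, not
SHA-256's actual compression function, and the block/state sizes are
arbitrary illustrative choices.

```python
import hashlib

BLOCK = 16      # toy block size in bytes (illustrative choice)
IV = bytes(16)  # toy initial chaining value

def compress(state: bytes, block: bytes) -> bytes:
    # Placeholder compression function: truncated SHA-256 of state||block.
    return hashlib.sha256(state + block).digest()[:16]

def md_hash(msg: bytes) -> bytes:
    # Merkle-Damgard strengthening: append 0x80, zero-pad, then encode
    # the original bit length in the last 8 bytes of the final block.
    bitlen = (8 * len(msg)).to_bytes(8, "big")
    padded = msg + b"\x80"
    padded += bytes(-(len(padded) + 8) % BLOCK)
    padded += bitlen
    # Iterate the compression function over the fixed-size blocks.
    state = IV
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state
```

The point of the sketch is the reduction direction: a collision in md_hash
implies a collision in compress somewhere along the chain, which is why the
soundness of the whole construction rests on the compression function.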

In the case of MD6, its designers withdrew it before the 14 second-round
candidates were chosen. (This was publicly known.)

In the case of SWIFFTX, its performance was extremely slow compared to the
other candidates, so no one would use it.

In the case of SANDstorm, its design is so complicated that it is very hard
to evaluate its security. In fact, there was not a single paper analyzing
it.

In the case of the remaining 14 candidates, there was no security issue
found in their underlying primitives, and performance-wise none of them was
extremely slow.

It is reasonable to say that the 14 candidates were chosen based on
scientifically measurable evidence regarding security and performance,
which was provided in the SHA-3 Zoo, eBASH, academic papers, etc.

Therefore, I don't think that the PHC panel's 9-candidate selection process
is similar to the 14 second-round candidate selection procedure of the
SHA-3 competition.
- Donghoon

2015-02-12 8:05 GMT+05:30 Samuel Neves <>:

> On 11-02-2015 21:24, Donghoon Chang wrote:
> > In other words, NIST's report is like this.
> >
> > "We could not choose X algorithm though X has many good features with
> > plentiful paragraphs."
> >
> > But, I found that this kind of effort in the PHC is missing. Instead of
> > saying encouraging words or describing good points of each algorithm, the
> > report says like this,
> >
> > "We could not choose X algorithm because X has some negative aspects with
> > very few words."
> That is indeed how the later-stage SHA-3 reports are done. However, I went
> back and looked at the SHA-3 Round 1 report
> [1], which would be the rough analogue of the phase we are in right now.
> There is no comment on the 37 rejected
> candidates beyond some initial generalities about criteria (which, as you
> pointed out, were mainly security and
> performance). I am quite confident each of these rejections is fully
> justified, but the report is definitely not where
> one will find them. Similarly, Phase 1 of the eSTREAM competition yielded
> no formal report at all (that I can find; I
> only found [2]), simply a selection.
> [1]
> [2]
> Best regards,
> Samuel Neves

