Date: Thu, 12 Feb 2015 00:52:46 +0530
From: Donghoon Chang <pointchang@...il.com>
To: "discussions@...sword-hashing.net" <discussions@...sword-hashing.net>
Subject: Re: [PHC] PHC status report

I would like to add more comments on the following new criteria.

1. Elegance of design
2. Originality and innovation

Unlike criteria in art or literature, the criteria for standards or
algorithms related to the security of systems should be measurable in
scientific ways.

Now a question arises: "given an algorithm, how can we measure the
elegance of design and the originality and innovation of that algorithm
in a scientific way?"

As far as I know, there were (and are) no such criteria in the AES,
SHA-3, or CAESAR competitions, because such criteria are not
scientifically measurable! I believe they can only be measured through
biased favoritism!

In the link (https://password-hashing.net/index.html), it is written, "To
identify new password hashing schemes suitable for widespread adoption, the
PHC follows the model of focused cryptographic competitions such as AES,
eSTREAM, or SHA-3 (see the Cryptographic competitions
<http://competitions.cr.yp.to/> website)." However, I wonder whether the
PHC really does follow the model of focused cryptographic competitions.

- Donghoon Chang





2015-02-11 21:16 GMT+05:30 Donghoon Chang <pointchang@...il.com>:

> At the beginning of the competition, the criteria were clearly stated
> at the following link.
>
> https://password-hashing.net/call.html
>
> However, as mentioned at the following link, when the 9 candidates were
> recently selected, new criteria, which are totally unrelated to the
> original ones, were added.
>
> https://password-hashing.net/report1.html
>
> The two new criteria are as follows:
>
> 1. Elegance of design
> 2. Originality and innovation
>
> I wonder who added these new criteria, which were never mentioned
> before, without requesting permission for the change either internally
> or publicly.
>
> Due to the new criteria above, some of the candidates were not selected.
> Directly speaking, the new criteria were secretly created, against the
> rules of the competition, in order to kick out those candidates, which is
> unfair and even a crime.
>
> Another main issue with the PHC is that the 9 candidates were chosen
> without providing comparison results based on proper metrics, which runs
> counter to all other competitions such as AES, SHA-3, CAESAR, etc. The
> competition should be scientifically based on proper metrics and clear
> comparisons (not on voting driven by favoritism), according to the
> criteria that were given at the beginning.
>
> Since I was one of the internal evaluators of the SHA-3 candidates, as a
> guest researcher at NIST for three years, the procedure of the PHC looks
> immature to me. The PHC panel's recent selection procedure undermines the
> dignity of the PHC, the PHC panel, the submitters, and even the crypto
> community. Please consider this seriously.
>
> I hope that the PHC panel will acknowledge and correct their mistake and
> make every effort to restore the dignity of everyone participating in
> the PHC.
>
> - Donghoon Chang
>
>
>
>
>
> 2015-02-11 16:37 GMT+05:30 Krisztián Pintér <pinterkr@...il.com>:
>
>> On Tue, Feb 10, 2015 at 4:00 PM, Solar Designer <solar@...nwall.com>
>> wrote:
>> > I think publishing the upvote and downvote counts
>>
>> counts? why not the votes themselves? why is it a secret? normally
>> secret voting protects the voters from retaliation. i don't think that
>> applies in this case. on the contrary, keeping the votes secret (as
>> well as unexplained) casts a shadow of doubt on their rationality.
>>
>>
>> > we used the voting as a tool to focus
>> > further discussion rather than to definitively choose the finalists.
>>
>> i was under the impression that the voting was the selection method.
>> if it wasn't, then it indeed does not matter much. what matters is the
>> actual rationale, what the selection was based on.
>>
>>
>> > I guess you'd prefer some scoring system?  It wouldn't work.  Opinions
>> > varied on what properties would be best to have vs. to avoid, so we
>> > couldn't possibly have arrived at a common scoring system.
>>
>> the very reason to have a detailed rationale is that people don't have
>> to agree with the result; they can rely on the facts. if the panel
>> declares a winner using one set of preferences, and that does not
>> coincide with my own preferences, i can still use the results to find
>> my own winner for my own situation.
>>
>>
>> >> this discussion should also be summarized, anonymized, and made public.
>> > That's significant effort that's unlikely to address your concerns.
>>
>> that is the only task the panel had. you are talking about skipping
>> the only reason this competition exists. when you signed up to this,
>> what exactly did you offer to do? also, how is it that it does not
>> address my concerns? my concern is that the selection process is not
>> transparent, and basically no information is published. publishing the
>> actual decision process solves exactly that problem.
>>
>
>
