Date: Wed, 11 Feb 2015 23:45:12 +0200
From: Somitra Sanadhya <somitra@...td.ac.in>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] PHC status report

Hi Jean,

I trust you, and I already made my opinion clear in the detailed comments
in my last mail. The last comment was only to counter the "unpaid
volunteer" remark: once we agree to serve in some capacity, we do the work
voluntarily. A detailed report and a reasonable comparison and evaluation
were all we expected. Had there been even a single report containing the
details of the process followed and a factual analysis, none of these
questions would be floating around today.

Regards,
Somitra

On Wed, Feb 11, 2015 at 11:38 PM, Jean-Philippe Aumasson
<jeanphilippe.aumasson@...il.com> wrote:

> On Wed, Feb 11, 2015 at 10:31 PM, Somitra Sanadhya <somitra@...td.ac.in>
> wrote:
> > In all academic paper submissions to conferences, researchers submit
> > their papers and "unpaid volunteers" review the papers.
>
> Reviewing papers is part of an academic's job. And most
> conferences/journals do at least single-blind review (that is,
> reviewers are almost always anonymous). There have been some rigorous
> studies showing that it's better than non-blind review, btw.
>
>
>
> >
> > On Wed, Feb 11, 2015 at 11:24 PM, Jean-Philippe Aumasson
> > <jeanphilippe.aumasson@...il.com> wrote:
> >>
> >> Succinct response (not much time to elaborate):
> >>
> >> * You will find "secretive selection" and non-public panel
> >> discussions in all crypto competitions: AES, NESSIE, eSTREAM,
> >> SHA-3, CAESAR, and PHC.
> >>
> >> * A non-negligible difference between PHC and other projects: we're
> >> all unpaid volunteers working on this project in our free time,
> >> whereas NIST had a dedicated team for AES and SHA-3, a budget to
> >> organize workshops, etc.
> >>
> >> * We wish we had had the time and resources to prepare a lengthy and
> >> detailed report, but that wasn't the case. We felt it was better
> >> to publish such a summary than nothing at all.
> >>
> >>
> >> On Wed, Feb 11, 2015 at 10:14 PM, Somitra Sanadhya
> >> <somitra@...td.ac.in> wrote:
> >> > Dear all,
> >> >
> >> > I think the discussion is not heading in a meaningful and factual
> >> > direction. Let me humbly try to steer it towards concrete measures
> >> > and stop this blame game. There are some TO-DO items in the comments
> >> > which can help restore the credibility of the competition (at least
> >> > among some of us who are not happy about it).
> >> >
> >> > 1. The panel should make the voting and the discussion on the
> >> > selection criteria public.
> >> >
> >> > I refer to the archived AES competition selection criteria here:
> >> > http://csrc.nist.gov/archive/aes/pre-round1/aes_9709.htm
> >> > Note the language from section 4: "In order to provide a basis for
> >> > the analysis and evaluation of encryption algorithms submitted to be
> >> > considered for incorporation into the FIPS for AES, evaluation
> >> > criteria will be used to review candidate algorithms. All of NIST’s
> >> > analysis results will be made publicly available."
> >> >
> >> > It is in the spirit of an open competition to keep the process
> >> > transparent. All discussion leading to the selection should already
> >> > have been public.
> >> >
> >> > 2. I pointed out earlier that the comment on our design Rig is as
> >> > follows: "Similar to Catena, but received less attention (cf. bugs
> >> > found in the specification and code)".
> >> >
> >> > There are two parts to this comment: (a) similar to Catena; (b) bugs
> >> > found.
> >> >
> >> > I would have liked the panel to clarify each of the two points with
> >> > respect to the revised submission Rig v2, which is the one the PHC
> >> > website has listed since October 2014. Clearly, (b) is false for
> >> > this version. Further, is (a) measurable by any means? Aren't there
> >> > novel ideas in the design of Rig, apart from the ideas taken from
> >> > Catena? Given the publicly available eprint report which I referred
> >> > to in my previous mail, is it fair to dismiss the design in this
> >> > single sentence? If Rig v2 was not being considered, shouldn't the
> >> > designers have been informed at the time of the revised submission
> >> > that this version would no longer be evaluated? (Related: weren't
> >> > other designs also allowed to be modified around the same time? The
> >> > changes did not overhaul the design; they were minor modifications
> >> > to handle the issues which Bill Cox found. I don't think there was
> >> > any comment on the design after that. If the panel was not looking
> >> > at the revised submission, then we could just as well have saved our
> >> > time for other things, rather than investing it in something which
> >> > no one was interested in looking at.)
> >> >
> >> > 3. It is not just our design. Most designs have one-line comments
> >> > on them in the document shared earlier. To say that the panel could
> >> > not prepare a detailed document is mocking the competition. As
> >> > pointed out by Krisztian earlier, many of these one-liners are
> >> > actually not factual but based on opinions. The report should have
> >> > had a meaningful comparison of the candidates, not just one-liners
> >> > on the entries. Dismissing entries with such one-liners devalues the
> >> > effort put in by so many designers in the competition.
> >> >
> >> > If you want some specific metrics, here is a non-exhaustive list
> >> > off the top of my head: performance on various platforms,
> >> > cryptographic strength, memory hardness, ... (add whatever else you
> >> > like, establish a baseline, and compare all entries on some rational
> >> > basis).
> >> >
> >> > In my humble opinion, the bitterness we are witnessing on the
> >> > mailing list is due to the secretive selection and the inadequate
> >> > selection rationale in the document. Had these been public and based
> >> > on detailed discussions, I don't think anyone would have complained.
> >> > IMHO, the panel members should have already realized that there is a
> >> > lot to blame themselves for, rather than blaming the people
> >> > questioning their decision now. To blame the questions on the
> >> > "frustration of non-finalists" does not show the maturity expected
> >> > from a panel that contains many good people whom many of us trusted
> >> > (if that were not the case, you wouldn't have received so many
> >> > submissions in the first place). Honestly, please discuss with some
> >> > researchers at universities around you the way the selection has
> >> > happened so far, showing them the "selection rationale document" and
> >> > "the process followed" (the secret voting, and not even following
> >> > that voting perfectly; claiming that this was in addition to the
> >> > private discussion, etc.). I am quite certain that none of them will
> >> > approve of the process as followed. All of this could easily have
> >> > been avoided by keeping the process in the public domain and having
> >> > a well-thought-out selection document.
> >> >
> >> > To quote Bruce Schneier on the AES competition (himself a designer
> >> > of a losing finalist in that competition): "I have nothing but good
> >> > things to say about NIST and the AES process".
> >> >
> >> > We were expecting a competition in the spirit of AES, but
> >> > unfortunately things haven't gone that way.
> >> >
> >> > 4. Let me digress slightly to the "flexibility" and "simplicity"
> >> > ideas of the AES competition. The AES competition had a criterion
> >> > termed "simplicity", but it created a lot of discussion and
> >> > confusion at the time. To quote a few lines from the Twofish team
> >> > (https://www.schneier.com/paper-twofish-final.pdf):
> >> > "“Simplicity” is the NIST criterion that’s hardest to describe.
> >> > Lines of pseudocode, number of mathematical equations, density of
> >> > lines in a block diagram: these are all potential measures of
> >> > simplicity. Our worry about simplicity as a measure is that the
> >> > simplest algorithms are often the easiest to break. ..."
> >> >
> >> > If things are hazy, then people will question them.
> >> >
> >> > Regards,
> >> > Somitra
> >> >
> >
> >
>
