Message-ID: <20150212005050.GA10286@openwall.com>
Date: Thu, 12 Feb 2015 03:50:50 +0300
From: Solar Designer <solar@...nwall.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] PHC status report

On Wed, Feb 11, 2015 at 11:14:04PM +0200, Somitra Sanadhya wrote:
> 2. I had pointed out earlier that the comment on our design Rig is as
> follows: "Similar to Catena, but received less attention (cf. bugs found in
> the specification and code)".
> 
> There are two parts to this comment. (a) Similar to Catena. (b) bugs found.

I think the main part is "received less attention", whereas "Similar to
Catena" is assigning a category (which spot it'd compete for as a
potential winner if advanced to finalist status) and "bugs found" is
merely an example (an outdated one as you say).  Having no bugs in v2 is
great, and it's bad if this was overlooked by some panelists in the
post-voting discussion, but it's unlikely to have made enough of a
difference on its own.

The panel decided not to select candidates that currently ranked so much
worse than their competitors that they would have almost no chance as
potential winners even if advanced to finalist status now.  Selecting them
would have been OK if we had an intermediate pre-final round, and we briefly
discussed introducing one (with 12 or so round 2 candidates), but decided
against it: it wasn't on the previously published timeline, we didn't want
to extend the duration of the competition, and most importantly we felt it
would ultimately be unlikely to result in us making a better selection of
winners.

> Given the publicly available eprint report
> which I referred to in my previous mail,

Per https://eprint.iacr.org/2014/881, Rig is on par with Catena in terms
of feature set and main properties (except for performance, which is not
studied there).  Right?  I think this makes it "similar to Catena" at
least competition-wise (even if not as much internals-wise).

Is your point that Rig did receive attention?  Yes, it did receive some.
That ePrint report arrived rather late in the process of deciding on the
finalists ("Date: received 24 Oct 2014"), though.

> is it fair to dismiss the design in this single sentence ?

I'd say that (almost?) none of the comments on the submissions, finalists
and non-finalists alike, do them justice.  These are just summaries of the
primary reasons for selection and non-selection, for these two groups
respectively.  The panel did not mean to "dismiss" the non-finalists - the
panel merely did not select them as finalists.

For example, I could complain about the wording the report uses on
yescrypt, but if that's why the panel selected yescrypt, then the report
is correct even if I'd select yescrypt for different reasons.  I think
the panel mostly misses what might be yescrypt's main advantage: yescrypt
is the most scalable entry in this competition, in multiple ways (memory,
optional parallelism at multiple levels), all while providing decent
attack resistance across this large "surface" (vs. specialized competitors
for some lines or spots on that surface).  Maybe I didn't emphasize this
enough.  Maybe I should, now that the report is public and I can use the
info (this apparent lack of understanding by the panel) without abusing my
inside knowledge.

Of course, in cases where a possible misunderstanding of a candidate's
strong points actually resulted in non-selection, it's more important, and
it certainly can feel bad.

(I contributed to editing the report, but not to its entry on yescrypt.
Nor to the entry on Rig, for that matter, as that summary does reflect the
panel's reasoning.)

I am also unhappy that we didn't find the resources to prepare a more
complete yet balanced report (still with a similar amount of detail for
each candidate).  In fact, ideally we should have done so before
finalizing the selection of finalists, as formulating the reasoning in
greater detail can potentially affect the selection.  I doubt it would
have in this case, but in general that would be a cleaner procedure to
follow.  Maybe we should do it that way for the winners, if time permits.
It'd feel bad to delay finalizing and announcing the winners for the sake
of report writing, though.  And the non-finalist submitters would feel
even more unhappy about not having received attention similar to that
given to the finalist non-winners.  Hmm.  Just thinking out loud.

> If Rig v2 was not being considered, shouldn't the
> designers have been informed at the time of the revised submission itself
> that this version would not be evaluated any more? (Related: weren't other
> designs also allowed to be modified around the same time? The changes were
> not overhauling the design; these were minor modifications to handle the
> issues which Bill Cox found. I don't think there was any comment on the
> design after that. If the panel was not looking at the revised submission
> then we could as well have saved our time to do other things, rather than
> investing it in something which no one was interested in looking at.)

There was no formal decision to consider or not consider revisions to
submissions made during selection of the finalists.  In practice,
revisions made early had a greater chance of being fully considered by
more panel members.  Maybe the website should have mentioned that
post-submission revisions "might not" be considered by some panel
members (those who had already spent time forming an opinion based on a
previous revision).  I guess that if earlier versions of Rig had felt
closer to being competitive, more panel members would likely have spent
their time reviewing Rig v2 when you submitted it.

> 3. It is not just our design. Most designs have one-line comments on them
> in the document shared earlier. To say that the panel could not prepare a
> detailed document is mocking the competition. As pointed out by Krisztian
> earlier, many of these one-liners are actually not factual but based on
> opinions. The report should have had a meaningful comparison of the
> candidates, not just one-liners on the entries. Dismissing entries with
> such one-liners devalues the effort put in by so many designers in the
> competition.

Opinions, yes.  Perhaps unfortunately, opinions matter for the adoption of
already selected schemes as well.  I do feel that we should usually give
"hard" metrics priority over opinions, though.

> If you want some specific metrics, then here is a quickly thought-up list
> which is not exhaustive: performance on various platforms, cryptographic
> strength, memory hardness, .... (add whatever else you like, make a
> baseline and compare all entries on some rational basis).

At least one panel member tried applying his own scoring system to all
of the candidates.  (In my opinion, the resulting selection was weird.)

Anyway, the "table approach" was also suggested by another panel member
when discussing the finalist selection report:

| | I think it would be better, if we include a summary, pros/cons for each
| | candidate. Or maybe a table that shows the features of each candidate.

to which I replied:

| I meant to create such a table (but never found the time), however not
| in the context of justifying finalist selection.  It would just have
| been helpful to have, to keep track of all the 22 non-withdrawn
| candidates.  Now it still makes sense to have a properties and features
| comparison table, at least for the finalists.
| 
| [...] created an xls table to choose the finalists, but I think it would
| have been wrong for us to use solely a numeric comparison of candidates'
| scores to choose the finalists as this ignores the diversity aspect.
| (This was OK for one person's votes.  Just not for our entire process.)
| Also, I'd list many other criteria - e.g., [...] table does not list TMTO.
| 
| For the report on finalists selection, we could refer to a table like
| that to illustrate that we've tried to choose diverse candidates - but
| probably not to fully explain the selection made.

So, yes, this makes sense - and we still have a use for a table like
that, for the finalists.  But it's not a silver bullet, and at least
initially we wouldn't have data to put into some of the fields.  With
9 finalists, this should be easier to do, though.
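
To make that "table approach" concrete, here is a minimal sketch of how
such a comparison could be scored programmatically.  The criteria names
are taken from this thread (performance, cryptographic strength, memory
hardness, TMTO resistance); the candidate names, per-criterion scores,
and weights below are purely hypothetical placeholders, not the panel's
actual data or method.

# Hypothetical criteria and weights (placeholders, not the panel's).
CRITERIA = {
    "performance": 1.0,
    "cryptographic_strength": 1.5,
    "memory_hardness": 1.5,
    "tmto_resistance": 1.0,
}

# Hypothetical per-candidate scores on a 0..5 scale.
candidates = {
    "candidate_A": {"performance": 4, "cryptographic_strength": 3,
                    "memory_hardness": 5, "tmto_resistance": 4},
    "candidate_B": {"performance": 5, "cryptographic_strength": 4,
                    "memory_hardness": 2, "tmto_resistance": 3},
}

def weighted_score(scores):
    # Sum of weight * score over all criteria.
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

# Print a simple comparison table, highest weighted score first.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    row = "  ".join("%s=%d" % (c, scores[c]) for c in CRITERIA)
    print("%-12s %s  -> weighted %.1f" % (name, row,
                                          weighted_score(scores)))

As noted above, a purely numeric ranking like this ignores the diversity
aspect, so at best such a table illustrates the selection rather than
fully explains it.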

We also already have summaries of pros and cons of the candidates on the
wiki, which the submitters can edit too.  And on the wiki the summaries
don't have to be balanced.  It's much harder to do the same in a report,
yet keep it balanced.

> In my humble opinion, the bitterness which we are witnessing on the mailing
> list is due to the secretive selection and the improper rationale for
> selection in the document. If these were public and based on detailed
> discussions, I don't think anyone would have complained. IMHO, the panel
> members should have already realized that there is a lot to blame
> themselves for, rather than blaming the people questioning their decision now.

The report could definitely have been better.  I actually expected the
sort of criticism we're seeing now.

> To blame
> the questions on the "frustration of non-finalists" does not show the
> maturity expected from a panel,

I don't speak for others, but my only comment about the "non-finalist
frustration" playing a role was in the context of the specific strong
wording used, not the fact that you have questions, or which ones.  The
questions are reasonable anyhow.

> which contains many good people whom many
> of us trusted (if that was not the case then you wouldn't have received so
> many submissions in the first place). Honestly, please discuss with some
> researchers in universities around you the way the selection has
> happened so far, showing them the "selection rationale document" and "the
> process followed" (the secret voting, and not even following that voting
> perfectly; claiming that this was in addition to the private discussion,
> etc.). I am quite certain that none of them will favor the process as
> followed. All of it could easily have been avoided by keeping the process
> in the public domain and having a well-thought-out selection document.

Crucial detail: before the voting started, we had decided that the voting
would not directly determine the selection of finalists.

I recall I specifically raised the concern that lesser-known candidates
would not receive as many votes for/against as the better-known ones, so
that was one of the aspects we were planning on dealing with in the
post-voting discussion.

> To quote a
> few lines from the Twofish team
> (https://www.schneier.com/paper-twofish-final.pdf): "'Simplicity' is the
> NIST criterion that's hardest to describe. Lines of pseudocode, number of
> mathematical equations, density of lines in a block diagram: these are all
> potential measures of simplicity. Our worry about simplicity as a measure
> is that the simplest algorithms are often the easiest to break. ...".

I think the tradeoff between simplicity and other desirable properties
is well understood by the PHC panel.  Otherwise e.g. yescrypt would have
no chance of being selected.

Alexander
