Date: Thu, 02 Apr 2015 13:45:10 -0300
From: Marcos Simplicio <mjunior@...c.usp.br>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] OMG we have benchmarks

On 02-Apr-15 11:48, Bill Cox wrote:
> On Wed, Apr 1, 2015 at 10:55 PM, Milan Broz <gmazyland@...il.com> wrote:
> 
>> Hi Bill,
>>
>> On 04/02/2015 06:35 AM, Bill Cox wrote:
>>> These charts look like Lyra2 is still running with 2 threads, and
>>> Yescrypt is still running with 6 rounds rather than a more comparable
>>> 2 rounds.
>>
>> The code under #if (nPARALLEL > 1) for Lyra2 is not compiled in.
>> This is the patch
>>
>> https://github.com/mbroz/PHCtest/commit/2e0d07b4a3f7d2dd69a1729c6770c0e39938fdc4
> 
> 
> Your patch looks good.  It compiles on my machine and runs with 1 thread.
> I did have to move the -lcrypto to the end of the gcc command line to get
> Pufferfish to compile.  Do you know what causes this?
> 
> 
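
(Answering inline, though the question was for Milan: my guess, without
having looked at the Pufferfish makefile, is the linker's left-to-right
symbol resolution -- a library listed before the objects that use it can
be skipped, especially on distros that pass --as-needed by default. The
file names below are made up; only the ordering matters.)

    # may fail: libcrypto is scanned before the object that needs it
    gcc -lcrypto pufferfish.o main.o -o pf-test

    # works: the library comes after the objects that reference it
    gcc pufferfish.o main.o -o pf-test -lcrypto
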
>> Seriously, I do not want to tweak algorithms myself.
>> I can add another version, but point me to a git with the
>> modification once the author makes these changes. But yescrypt was
>> submitted in some form and I think it should be tested this way.
>>
> 
> I think running with default parameters is fine for one test, but this
> shows the entry with the least computational hardness as being better than
> the rest.  I think we also need to run benchmarks where we compensate for
> weaker default parameters to compare them more evenly.

That goes without saying: it is useless to have a winner today only to
discover tomorrow that we missed some aspect.

That is why I believe normalizations are needed to compare the schemes
while isolating one parameter at a time, the computational hardness of
pwxform vs. Blake2b vs. BlaMka possibly being one of them. As I
understand it, discussing (and experimenting with) the other possible
aspects is exactly the purpose of this list. :)

On that matter, we are finishing some tests in which the number of
passes through memory and the (theoretical?) computational hardness of
yescrypt and Lyra2 are similar, namely: yescrypt with T=2 and everything
else as usual vs. Lyra2 with T=1 and 1.5 rounds of BlaMka, which amounts
to 12 MULs + 12 ADDs + 12 XORs, as in pwxform. From what my students
showed me, it seems to be a draw, but I prefer not to speculate before
the actual experiments support that impression.
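
For context, the only change from Blake2b's G to BlaMka's is the
word-wise addition, which BlaMka replaces with a multiplication-hardened
version. A rough sketch from memory, not copied from the reference code:

    #include <stdint.h>

    /* Blake2b's G combines two 64-bit words with a plain addition. */
    static inline uint64_t g_add(uint64_t a, uint64_t b)
    {
        return a + b;
    }

    /* BlaMka uses a + b + 2 * lsw(a) * lsw(b) instead, where lsw()
     * keeps the least significant 32 bits; the 32x32->64-bit
     * multiplication is what contributes the extra MUL per word. */
    static inline uint64_t g_blamka(uint64_t a, uint64_t b)
    {
        const uint64_t m = 0xFFFFFFFFULL;
        return a + b + 2 * ((a & m) * (b & m));
    }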


> The  results still don't line up with mine, which is probably my fault
> since I make a lot of mistakes.  I'll track it down.  However, this is a
> case of one benchmarker doing work to validate another benchmarker.  This
> is a bit nuts, so I'll post what I think should happen.  Basically, I think
> we should require the authors to validate the numbers we post...
> 

Confirmation from the authors is indeed good (even though I may still
miss something). I did not complain, though, because the results sounded
right to me: at least the charts were similar to the results we reported
in our Reference Guide. Then I confirmed with Ewerton that setting
"nPARALLEL = 1" would do the trick.

Sorry for letting the confusion linger: I only found the e-mails
discussing the possible confusion now that you have (apparently) solved
it :)
