Message-ID: <CAOLP8p54eHJ5v-jX+RkVYCROvUPMUv00LPFDboN4MPJzT1osRg@mail.gmail.com>
Date: Thu, 2 Apr 2015 07:48:52 -0700
From: Bill Cox <waywardgeek@...il.com>
To: "discussions@...sword-hashing.net" <discussions@...sword-hashing.net>
Subject: Re: [PHC] OMG we have benchmarks
On Wed, Apr 1, 2015 at 10:55 PM, Milan Broz <gmazyland@...il.com> wrote:
> Hi Bill,
>
> On 04/02/2015 06:35 AM, Bill Cox wrote:
> > These charts look like Lyra2 is still running with 2 threads, and
> > Yescrypt is still running with 6 rounds rather than a more comparable
> > 2 rounds.
>
> The code for #if (nPARALLEL > 1) for Lyra2 is not compiled in.
> This is the patch
>
> https://github.com/mbroz/PHCtest/commit/2e0d07b4a3f7d2dd69a1729c6770c0e39938fdc4
Your patch looks good. It compiles on my machine and runs with 1 thread.
I did have to move the -lcrypto to the end of the gcc command line to get
Pufferfish to compile. Do you know what causes this?
>
> For yescrypt - I did not want to modify the code (except for clear segfaults
> which prevented the test from running).
>
> > These are pretty pictures, but the benchmarks I ran with a bit more
> > care showed a very different picture. I'd hate to see a winner
> > selected based on nice coloring and one algorithm running 2 threads
> vs all the rest with 1, and one algorithm running 6 rounds rather
> > than 2.
>
> I can convert it to black&white if it helps ;-)
>
Actually, I'm pretty massively colorblind - basically no working red
cones. These charts are very hard for me to read. So... black and white
would be great :-)
> Seriously, I do not want to tweak algorithms myself.
> I can add another version, but point me to a git repo with the modification
> once the author makes these changes. But yescrypt was submitted in some form
> and I think it should be tested that way.
>
I think running with default parameters is fine for one test, but it
makes the entry with the least computational hardness look better than
the rest. I think we also need to run benchmarks where we compensate for
weaker default parameters so the comparison is more even.
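One way to compensate would be to scale each timing to a common work
setting before plotting, e.g. scale a 6-round yescrypt measurement down
to a 2-round equivalent, assuming time grows roughly linearly with the
round count. A sketch with purely illustrative numbers, not real PHC
results:

```shell
# normalize TIME ROUNDS -> estimated time at a common 2-round setting,
# assuming cost is roughly linear in the number of rounds.
normalize() {
    awk -v t="$1" -v r="$2" 'BEGIN { printf "%.2f\n", t * 2 / r }'
}

normalize 0.60 6   # hypothetical entry measured at 6 rounds -> 0.20
normalize 0.25 2   # hypothetical entry measured at 2 rounds -> 0.25
```

This only corrects for the round count; memory-fill behavior and
parallelism would still need to be matched separately.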
> I added Steve's test because it was a nice idea, if there is a problem,
> let's fix it or if it is unfixable (without code modification) abandon
> the test completely.
>
> For Lyra2 it was a clear bug with the compile parameters, but I think it is fixed.
> If there is another mistake, please let me know.
>
> Thanks,
> Milan
This is really good work, and thanks again for doing it.
The results still don't line up with mine, which is probably my fault
since I make a lot of mistakes. I'll track it down. However, this is a
case of one benchmarker doing work to validate another benchmarker. This
is a bit nuts, so I'll post what I think should happen. Basically, I think
we should require the authors to validate the numbers we post...
Bill