Date: Wed, 01 Apr 2015 11:01:36 +0200
From: Milan Broz <>
Subject: Re: [PHC] OMG we have benchmarks

On 04/01/2015 10:45 AM, Solar Designer wrote:
> On Wed, Apr 01, 2015 at 03:02:07AM -0500, Steve Thomas wrote:
>> Note: I believe there might be a problem with some of it: battcrypt at 5x and
>> POMELO at 3x and 5x. Those algorithms don't have t_cost settings for those
>> rounds, so I think they are run at lower settings.
>> But that aside, these are the best benchmarks I've seen, since they're
>> normalized for rounds across memory, and time vs. memory (instead of having
>> t_cost or m_cost as an axis).
> Cool!  Why are these for t_cost from 2 to 5, though?  Where's t_cost 0
> and 1?  I think only behavior with the lowest supported t_cost matters
> for selection of a scheme, whereas exactly how higher t_cost affects the
> behavior is merely additional information to be used for fine-tuning.
> Also, are the Lyra2 results included here for 1 or 2 threads?
> I assume the rest are for 1 thread?

Well, it was just a test run, but since it is already here:

- I fixed the point colors, so they should now match across all 4 versions.

- Yescrypt has two more points (2 GiB) in the 3x round - just a typo in the script.
(I'll run all algorithms up to 8 GiB, but that will take a long time -> later.)

- Lyra2 should be for 1 thread (-DnPARALLEL=1).

- Parameters are according to Steve's table
(also in )

- The low-memory settings are "unstable" because of the RUSAGE measurement:
  real memory use is the simple difference of getrusage(RUSAGE_SELF, ...)
  before and after the run (well, here the maximum of three runs).

  So for small memory allocations it can even be zero (because the process
  already has some memory pre-allocated). My intention was to show that it
  really has the expected peak there.
  (So it will not match your calculation exactly, but it must be very close.)

The graph for t_min is here (but it looks somewhat strange):

