Date: Thu, 26 Mar 2015 21:24:59 +0300
From: Solar Designer <solar@...nwall.com>
To: discussions@...sword-hashing.net
Cc: Paulo Santos <pcarlos@....usp.br>
Subject: Re: [PHC] Another PHC candidates "mechanical" tests (ROUND2)

On Thu, Mar 26, 2015 at 02:29:29PM -0300, Marcos Simplicio wrote:
> On 26-Mar-15 13:27, Solar Designer wrote:
> > On Thu, Mar 26, 2015 at 12:27:04PM -0300, Marcos Simplicio wrote:
> >>> Lyra2 is less suitable for low sizes like this.
> >>
> >> Just for the sake of clarity: why exactly?
> > 
> > Because at too low sizes it's likely weaker than bcrypt at least
> > against GPU attacks.  What exactly is "too low" is to be determined, and
> > will vary by likely attacker's hardware.
> 
> Hum... That makes me think we need to include bcrypt in our GPU
> benchmarks and see what happens.

Yes, and in your CPU benchmarks too, so you'd be comparing GPU attacks
on Lyra2 vs. bcrypt at the same defensive running time for them (on CPU).

> I'm not a GPU specialist, but we do
> have a person working with the GPU implementations, and the results shown
> in our report (Sec. 7.3, Figure 20) are that, in the best conditions
> from an attacker's perspective and for a memory usage of 2.3 MB, the
> GPU-based implementation was 4.5 times slower (in terms of throughput)
> than the CPU-based one.

OK, this may suggest a threshold at ~0.5 MB (2.3 MB / 4.5), which is
quite close to my guess of 1 MB.
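As a back-of-the-envelope sketch of that estimate (the 2.3 MB and 4.5x
figures are from Marcos' report; the linear scaling of the GPU/CPU
throughput ratio with memory usage is my assumption, not a measured
result):

```python
# Rough threshold estimate: at 2.3 MB memory usage, the GPU
# implementation was 4.5x slower than the CPU one (report Sec. 7.3,
# Figure 20).
measured_mem_mb = 2.3   # memory usage in the reported benchmark
gpu_slowdown = 4.5      # GPU throughput deficit vs. CPU at that size

# Assuming the GPU's disadvantage shrinks roughly linearly as memory
# usage drops, GPU and CPU throughput would break even near:
threshold_mb = measured_mem_mb / gpu_slowdown
print(round(threshold_mb, 2))  # ~0.51 MB
```

Below that point the GPU attacker would come out ahead, which is why
the exact crossover matters for choosing minimum memory settings.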

(For scrypt it's trickier, since its TMTO factor would also need
adjusting, but for Lyra2 I expect the scaling to be almost linear.)

> Obviously, there may be some optimization missing, or maybe we need to
> run tests with an even lower memory usage, but so far I cannot say I
> agree with that impression (which does not mean you are wrong, of course).
[...]
> We will try going down from 2.3 in steps of ~1/2 and see what happens.
> I'm actually very curious to know :)

Yes, please.  I am also very curious.

Thanks!

Alexander
