Message-ID: <20140828172319.GB2477@openwall.com>
Date: Thu, 28 Aug 2014 21:23:19 +0400
From: Solar Designer <solar@...nwall.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] Memory performance and ASIC attacks

On Thu, Aug 28, 2014 at 12:07:24PM -0400, Bill Cox wrote:
> TwoCats and Yescrypt are the most ASIC attack resistant algorithms in
> the competition for hash sizes of 32MiB and up.

If so, why not for lower sizes as well?  Do you mention this as the
lower boundary just in case, since Pufferfish (and bcrypt, but it's not
in PHC) might win at some really low sizes (perhaps way below 1 MiB)?

> Lyra2 is a close
> second, off by about 2X in my tests, only because Lyra2 does not have
> a multi-threading option.

Only 2x worse while completely lacking computation latency hardening?
Are you sure it's safe to rely solely on memory latency and bandwidth?
Previously, you were not so sure.
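
By computation latency hardening I mean chaining a high-latency
operation such as integer multiplication so that each memory read
depends on the previous multiply's result, leaving an attacker's
circuit little room to hide either latency behind the other.  A toy
sketch of the idea in C (not any PHC entry's actual round function,
just an illustration):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint64_t latency_chain(const uint64_t *mem, size_t nwords,
    uint64_t x, unsigned long rounds)
{
	unsigned long i;

	for (i = 0; i < rounds; i++) {
		x *= (x >> 32) | 1;   /* wide multiply, data-dependent */
		x += mem[x % nwords]; /* next read depends on its result */
	}
	return x;
}

int main(void)
{
	size_t nwords = 1UL << 20; /* 8 MiB region, just for the demo */
	uint64_t *mem = calloc(nwords, sizeof(*mem));

	if (!mem)
		return 1;
	printf("%016llx\n", (unsigned long long)
	    latency_chain(mem, nwords, 0x0123456789abcdefULL, 1000000));
	free(mem);
	return 0;
}

The point is that the chain is strictly sequential, so extra die area
does not speed it up; whether it is safe to give that margin up and
rely on memory alone is the question above.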

> Here's two sets of cache tests.

Thanks!  Per these results, for 4096-byte blocks all three levels of
cache are equally fast, and RAM is less than 2x slower.  This may be
so for one thread on an otherwise idle system, but it is important to
remember that with multiple threads running things may be very
different: the caches closer to the CPU cores may scale much better
than the outer levels (and than RAM), since they and/or the buses to
them are more numerous.

(With multi-threaded hashing, at some working set sizes close to the
total L3 cache size, L3 turns out to be moderately slower than RAM.
This is despite L3 being several times faster than RAM for purely
sequential reads, as opposed to reads of large random blocks, also
from multiple threads on the same system.  Luckily, this phenomenon,
or rather its occurring to an extent sufficient for L3 to appear
slower than RAM, is not very common.)
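
For reference, here is roughly the kind of toy microbenchmark I have
in mind when talking about these figures (the file name and defaults
below are made up; careful measurements would also want thread
pinning, huge pages, and attention to NUMA):

/*
 * Toy aggregate-bandwidth test: each thread reads 4096-byte blocks at
 * pseudo-random offsets within its own buffer.  Compile with e.g.
 * "gcc -O2 -pthread membench.c" (the file name is made up).
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

#define BLOCK 4096

struct task {
	char *buf;
	size_t nblocks;
	unsigned long nreads;
	uint64_t sink;
};

static void *worker(void *arg)
{
	struct task *t = arg;
	uint64_t x = 88172645463325252ULL, sum = 0;
	unsigned long i;

	for (i = 0; i < t->nreads; i++) {
		const uint64_t *p;
		size_t j;

		/* xorshift64 picks the next block pseudo-randomly */
		x ^= x << 13; x ^= x >> 7; x ^= x << 17;
		p = (const uint64_t *)(t->buf + (x % t->nblocks) * BLOCK);
		for (j = 0; j < BLOCK / sizeof(uint64_t); j++)
			sum += p[j];
	}

	t->sink = sum;
	return NULL;
}

int main(int argc, char **argv)
{
	/* arguments: thread count, per-thread buffer in KiB, reads per thread */
	unsigned long nthreads = argc > 1 ? strtoul(argv[1], NULL, 0) : 1;
	size_t kib = argc > 2 ? strtoul(argv[2], NULL, 0) : 32768;
	unsigned long nreads = argc > 3 ? strtoul(argv[3], NULL, 0) : 100000;
	struct task *tasks = calloc(nthreads, sizeof(*tasks));
	pthread_t *tids = calloc(nthreads, sizeof(*tids));
	struct timeval tv0, tv1;
	uint64_t check = 0;
	double seconds;
	unsigned long i;

	if (!tasks || !tids || !nthreads || kib * 1024 < BLOCK)
		return 1;

	for (i = 0; i < nthreads; i++) {
		tasks[i].buf = malloc(kib * 1024);
		if (!tasks[i].buf)
			return 1;
		memset(tasks[i].buf, (int)i + 1, kib * 1024); /* fault pages in */
		tasks[i].nblocks = kib * 1024 / BLOCK;
		tasks[i].nreads = nreads;
	}

	gettimeofday(&tv0, NULL);
	for (i = 0; i < nthreads; i++)
		pthread_create(&tids[i], NULL, worker, &tasks[i]);
	for (i = 0; i < nthreads; i++)
		pthread_join(tids[i], NULL);
	gettimeofday(&tv1, NULL);

	for (i = 0; i < nthreads; i++)
		check ^= tasks[i].sink;

	seconds = (tv1.tv_sec - tv0.tv_sec) + (tv1.tv_usec - tv0.tv_usec) / 1e6;
	printf("%lu thread(s), %zu KiB each: %.2f GiB/s (check %016llx)\n",
	    nthreads, kib,
	    (double)nthreads * nreads * BLOCK / (1024.0 * 1024 * 1024) / seconds,
	    (unsigned long long)check);

	return 0;
}

Sweeping the thread count and the per-thread buffer size, so that the
total working set crosses L2 and L3 capacity and then spills into RAM,
should show both the scaling difference and, on some machines, the
near-L3-size dip mentioned above.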

Alexander
