Message-ID: <CAOLP8p6cRhmzwQJo1ceEcCDeyD8VAgzbAn6eD5EVedR_Wwp+og@mail.gmail.com>
Date: Tue, 11 Feb 2014 17:47:47 -0500
From: Bill Cox <waywardgeek@...il.com>
To: discussions@...sword-hashing.net
Subject: Is bandwidth all that counts?

Having submitted my NoelKDF with its multiplication compute-time hardening, I am now wondering whether the compute time we force an attacker to spend matters at all. An attacker will simply add password-hashing cores, which are close to free, to his FPGA or ASIC until his memory bandwidth is saturated. If I force him to spend a full second to write and then read 4GiB once (which I do), he'll just run 5 of my hashing cores in parallel on an FPGA and fill its 40GiB/sec memory bandwidth, doing 5 guesses per second, so who cares that I forced him to spend as long as I do computing the hash?

The reverse is not true: if we spend time on a complex hash function instead of filling memory rapidly, an attacker will be more efficient, maxing out his memory bandwidth while we don't, and that ratio is pure win for the attacker.

It seems to me that the important thing is to fill memory as rapidly as possible, wasting as little time as feasible doing computation rather than reading/writing memory. NoelKDF is pretty respectable in this regard, filling memory on my development machine at about 5GiB/sec with 1 thread, and 10GiB/sec with 2 threads. However, the machine is probably capable of over 20GiB/sec.

Bill
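The attacker arithmetic above can be sketched as follows. This is a minimal illustration, not part of NoelKDF itself: the 4GiB-per-hash and 40GiB/sec figures are the ones quoted in the message, the helper function is hypothetical, and each guess is assumed to move the full memory over the bus twice (one write pass to fill it, one read pass back).

```python
# Guesses-per-second estimate for a bandwidth-bound attacker.
# All figures are taken from the message; the helper is illustrative only.

GIB = 1 << 30  # bytes in one GiB

def guesses_per_second(mem_per_hash, bandwidth, passes=2):
    """Each guess moves mem_per_hash bytes over the memory bus `passes`
    times (here: one write pass to fill memory, one read pass back).
    A bandwidth-bound attacker's guess rate is bandwidth / total traffic."""
    traffic_per_guess = mem_per_hash * passes
    return bandwidth / traffic_per_guess

# 4GiB written and read once per hash, against 40GiB/sec of FPGA bandwidth:
print(guesses_per_second(4 * GIB, 40 * GIB))  # 5.0 guesses/sec

# The defender's disadvantage is just the bandwidth ratio: a defender
# filling memory at 5GiB/sec concedes 40/5 = 8x to this attacker,
# while one filling at 20GiB/sec concedes only 2x.
print(40 * GIB / (5 * GIB))   # 8.0
print(40 * GIB / (20 * GIB))  # 2.0
```

Note that compute-time hardening drops out of this estimate entirely: as long as the attacker can add cheap cores until the bus is saturated, only bytes moved per guess and total bandwidth matter, which is the point of the message.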