Date: Tue, 21 Jan 2014 11:36:35 -0500
From: Bill Cox <waywardgeek@...il.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] Modified pseudo-random distribution in NoelKDF

On Mon, Jan 20, 2014 at 9:19 PM, Solar Designer <solar@...nwall.com> wrote:
> How do you implement thread-level parallelism, or is this scheme with
> the second loop at 1% only suitable for p=1?

With N threads, memory is broken into N equal parts, and each thread gets
its own. Threads are merged before the final loop, and data is read based
on the password across all memory. I suppose I could have N threads reading
randomly from all memory at the end and then combine their hashes, but that
quickly becomes bandwidth limited.

My "cheat killer" pass was a good idea when I thought "smoke" could be used
to hide the memory access pattern, but smoke didn't provide enough
protection from code that can see the individual cache lines I read from.
Now I'm thinking that the final pass may need to be independent of the
password to avoid a timing attack.

>> The average recalculation for 10,000,000 nodes covered by evenly
>> spaced pebbles [...]
>
> What if they are not evenly spaced? You're making the distribution of
> lookup indices highly non-uniform, so perhaps an attacker with limited
> memory can adjust the spacing accordingly and achieve a lower TMTO
> penalty?

I've tried many combinations, including random placement, weighted random
(more pebbles based on incoming edge density), weighted even spacing,
covering nodes pointed to by edges below a certain length, and covering
nodes with multiple edges pointing at them. I've also tried combinations of
even spacing with these other schemes. Nothing seems to improve on even
spacing so far.

This makes some sense. The difference between 1 pebble every 4 nodes and 1
every 8 is enormous.
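The thread-level split described above can be sketched roughly like this
(a toy model of my own, not NoelKDF's actual code -- the hash chain, slice
sizes, and the final password-dependent pass are all illustrative):

```python
import hashlib
import threading

def fill_slice(mem, start, length, seed):
    """Fill mem[start:start+length] with a per-thread hash chain."""
    h = seed
    for i in range(start, start + length):
        h = hashlib.sha256(h).digest()
        mem[i] = h

def hash_memory(password, n_threads=4, slice_len=1024):
    """Each thread fills its own slice; slices are merged before a
    final loop that reads across all memory at password-derived
    indices (the ~1% second loop)."""
    mem = [b""] * (n_threads * slice_len)
    threads = []
    for t in range(n_threads):
        seed = hashlib.sha256(password + t.to_bytes(4, "little")).digest()
        th = threading.Thread(target=fill_slice,
                              args=(mem, t * slice_len, slice_len, seed))
        th.start()
        threads.append(th)
    for th in threads:
        th.join()
    # Merge the slices, then do the final pass across all memory.
    state = hashlib.sha256(b"".join(m[:8] for m in mem)).digest()
    for _ in range(slice_len // 100):
        idx = int.from_bytes(state[:4], "little") % len(mem)
        state = hashlib.sha256(state + mem[idx]).digest()
    return state
```

Since each thread writes only its own disjoint slice and all threads are
joined before the merge, the result is deterministic for a given password.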
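To illustrate how steeply the penalty grows with spacing, here is a toy
pebbling model (my own construction, not NoelKDF's actual DAG): each node
depends on its predecessor and one random earlier node, and the
recomputation cost of a node is the number of unpebbled nodes in its
recomputation sub-DAG.

```python
import random

def build_dag(n, seed=1):
    """deps[i-1] holds the two predecessors of node i:
    node i-1 and a random earlier node."""
    rng = random.Random(seed)
    return [(i - 1, rng.randrange(i)) for i in range(1, n)]

def recompute_set(node, deps, pebbled):
    """All unpebbled nodes that must be recomputed to produce `node`.
    Node 0 is always available."""
    needed, stack = set(), [node]
    while stack:
        v = stack.pop()
        if v == 0 or v in pebbled or v in needed:
            continue
        needed.add(v)
        stack.extend(deps[v - 1])
    return needed

def avg_penalty(n, spacing, seed=1):
    """Average recomputation sub-DAG size with pebbles placed
    evenly at every `spacing`-th node."""
    deps = build_dag(n, seed)
    pebbled = set(range(0, n, spacing))
    return sum(len(recompute_set(i, deps, pebbled)) for i in range(n)) / n
```

In this toy model, every pebble kept at spacing 4 is also kept at spacing 2,
so widening the spacing can only enlarge each recomputation set -- which is
why the jump from every 4 to every 8 is so large.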
If I use pebbles for anything other than minimizing the average spacing
between pebbles, the alternative has to offer a massive benefit, because
the recalculation sub-DAG size seems to grow exponentially with the
spacing. Even going from every 4 to every 5 makes a huge difference. When
an attacker has no knowledge of what the next address will be, I seriously
doubt he can achieve the same penalty as even spacing at every 4th node
while using only 1/8th the number of pebbles relative to memory size.

Bill