Message-ID: <CAOLP8p7bdUzEGac8xoxmcGdihugnbR1_c80+ji=P6z5msCYbyw@mail.gmail.com>
Date: Tue, 14 Jan 2014 21:20:16 -0500
From: Bill Cox <waywardgeek@...il.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] A must read...

On Tue, Jan 14, 2014 at 5:31 PM, Andy Lutomirski <luto@...capital.net> wrote:
> Out of curiosity, have you tried MultHash(hash, v[addr], prevV,
> v[addr - C]) where C is something like L2 cache size?  It might help
> to have even more taps.

That's funny, because my previous hash function was exactly that,
though C was 1.  It did help, and that was the version that "passed"
the Dieharder tests.  I'd be curious to hear your intuition for why
it works better; I discovered it through trial and error.
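For concreteness, here is roughly the loop shape we're talking about.
This is only a sketch: the mixing arithmetic, constants, and the value
of C are illustrative assumptions, not my actual code.

#include <stdint.h>
#include <stddef.h>

#define C (1u << 16)  /* assumed extra-tap distance in 32-bit words */

/* The multiply keeps the dependency chain long; the add/XOR fold in
 * the memory taps.  Illustrative only. */
static inline uint32_t MultHash(uint32_t hash, uint32_t from,
                                uint32_t prevV, uint32_t tap)
{
    return hash * (from | 1) + (prevV ^ tap);
}

static void fillMemory(uint32_t *v, size_t len)
{
    uint32_t hash = 1, prevV = 0;
    v[0] = hash;
    for (size_t i = 1; i < len; i++) {
        size_t addr = hash % i;               /* data-dependent read */
        uint32_t tap = i > C ? v[i - C] : 0;  /* extra tap, ~L2 back */
        hash = MultHash(hash, v[addr], prevV, tap);
        prevV = v[i - 1];
        v[i] = hash;
    }
}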

I was able to drop the additional tap and still "weakly" pass the
Dieharder tests (no failures, but three weak passes indicate some
non-randomness).  I need to understand better why it matters that the
memory look highly random when all I'm doing is XORing some of its
locations into an already-randomized output.  It seems to me that it
should be fine without the extra tap, but adding it back doesn't slow
my machine down much; as you suggest, it's already cached.  I was
concerned about hardware without parallel instruction issue, so I
simplified the loop.
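To be concrete about that last step, the fold I mean looks something
like this (a sketch; the sampling stride and output width are made up
for illustration):

#include <stdint.h>
#include <stddef.h>

/* XOR a sample of the memory locations into an output that is
 * already pseudorandom. */
static void foldMemory(uint32_t out[8], const uint32_t *v, size_t len)
{
    for (size_t i = 0; i < len; i += 64)  /* some of the locations */
        out[(i / 64) & 7] ^= v[i];        /* fold into 256-bit output */
}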

> What does your code do if you want to use a lot more time than
> (memory/bandwidth)?  Do you just do the same thing in a loop?

Yes, exactly that.  I have two parameters for this.  One, motivated
by the Catena paper, is called "garlic".  It doubles both memory and
runtime, and can be used to increase the hashing strength of an
already-hashed password.  The other is just repeat_count, which
simply continues the inner hashing loop for a number of iterations
without increasing memory.  It seems to me that users would prefer to
increase their time cost, their memory cost, or both independently,
rather than use my garlic parameter, which doubles both at once, but
I haven't figured out how to make that work.
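Roughly, the outer structure is this (a sketch with assumed names and
a toy mixing step, not the real code):

#include <stdint.h>
#include <stdlib.h>

static uint32_t mix(uint32_t h, uint32_t w)  /* toy mixing step */
{
    return h * 0x9E3779B9u + w;
}

/* Each unit of garlic doubles memory and runtime; repeat_count
 * re-runs the inner loop over the same memory, adding time only. */
uint32_t stretch(uint32_t seed, uint32_t startGarlic,
                 uint32_t stopGarlic, uint32_t repeatCount)
{
    uint32_t hash = seed;
    for (uint32_t g = startGarlic; g <= stopGarlic; g++) {
        size_t len = (size_t)1 << g;       /* memory doubles per level */
        uint32_t *v = calloc(len, sizeof *v);
        if (!v) abort();
        for (uint32_t r = 0; r < repeatCount; r++)
            for (size_t i = 0; i < len; i++) {
                hash = mix(hash, v[i]);    /* read prior pass's data */
                v[i] = hash;               /* rewrite in place */
            }
        free(v);
    }
    return hash;
}

Bumping stopGarlic later, with the stored hash fed back in as the
seed, is what lets an already-hashed password be strengthened.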

Bill
