Date: Wed, 15 Jan 2014 12:23:07 -0500
From: Bill Cox <waywardgeek@...il.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] A must read...

On Wed, Jan 15, 2014 at 9:54 AM, Solar Designer <solar@...nwall.com> wrote:
> On Wed, Jan 15, 2014 at 06:42:40AM -0500, Bill Cox wrote:
>> What should we call this sort
>> of time-hardening using multipliers?  I'm thinking "sequential
>> compute-hardened", "sequential time-hardened", "sequential
>> multiply-hardened" or something like that.
> ... so maybe a better name for the new concept would be "compute latency
> bound"?  We could then say that our KDF is "almost memory bandwidth
> bound, memory latency bound, and compute latency bound".  The "almost"
> here is because in practice we'd tend to bump into one of these three
> things before we bump into the other two, with the other two staying
> e.g. ~2x higher (providing good guarantee against much attack speedup).
> So I'm not sure.  Should we use the word "hardened" instead of "bound"
> for all three things, except for those that we actually deliberately
> bump into (rather than merely get close)?

Bound for coming close, and hardened for things we bump into sounds
good.  I like compute latency bound/hardened.
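To make the idea concrete, here is a hypothetical sketch (mine, not from the
thread) of what a multiply-hardened loop might look like: each multiply takes
the previous result as input, so the total running time is bounded below by
the chain of sequential multiply latencies no matter how much parallel
hardware an attacker has. The function name and constants are illustrative
only.

```python
def multiply_hardened(seed: int, rounds: int) -> int:
    """Toy sequential multiply chain (illustrative, not a real KDF step).

    Each iteration's multiply depends on the previous iteration's output,
    so the rounds cannot be computed in parallel; an attacker's speed is
    limited by multiply latency, not throughput.
    """
    MASK = (1 << 64) - 1          # emulate 64-bit wraparound arithmetic
    x = (seed & MASK) | 1         # start from an odd value
    for _ in range(rounds):
        x = (x * (x | 1)) & MASK  # data-dependent multiply; |1 keeps it odd
        x ^= x >> 32              # fold high bits back into the low half
    return x
```

A real design would mix this into memory-filling passes; the point of the
sketch is only the serial data dependence between multiplies.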

> Besides integer multiply, I was considering floating-point, along the
> lines of DJB's hash127 - or even taking hash127 as the crypto

I thought about that too, but there are two issues:

- Even though most Android phones have floating-point units, many
apps assume they don't and avoid floating point entirely.  I think
the resulting binaries compute floating point in software even when
an FPU is present, but I could be wrong.  In practice, floating point
is effectively still missing on Android.
- There are still hardware bugs in some floating point implementations.

The die area is still so small that I don't know if it makes sense to
worry about it.  RAM is where we use lots of silicon.  In comparison,
even floating-point math won't use much.  Maybe it would add up if we
ran on many processors in parallel on a GPU, which is a mode I want to
support.

Bill
