Date: Thu, 3 Apr 2014 07:49:50 -0400
From: Bill Cox <waywardgeek@...il.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] Tortuga issues

On Thu, Apr 3, 2014 at 7:03 AM, Jeremi Gosney <epixoip@...dshell.nl> wrote:
> On 4/2/2014 9:26 PM, Bill Cox wrote:
>> Tortuga fails on both Windows and Linux for > 1 MiB m_cost, due to
>> allocating hashing memory on the stack.
>
>
> Just a heads-up, the optimized implementation of Pufferfish has this
> `issue' as well, as it calls alloca() to dynamically allocate the sbox
> buffers on the stack. The reference implementation allocates memory on
> the heap with calloc() so this is not a problem there, but you'll blow
> out the stack on the optimized implementation if using an m_cost > 10
> (it doesn't "go to 11.")
>
> And yes, this was done intentionally. Since it is unlikely that anyone
> will be using an m_cost > 10, it's a mostly-safe optimization
> (especially for attackers, which is largely the audience the optimized
> implementation was written for: it rewrites the algorithm from an
> attacker's perspective.)
>
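
[Interjecting with a toy sketch of the failure mode for anyone skimming.
This is not Pufferfish's actual code and the sizing is made up; it just
shows why alloca() falls over once the buffer outgrows the usual ~8 MiB
default stack, while a heap allocation fails cleanly instead:]

#include <stdint.h>
#include <stdlib.h>
#include <alloca.h>

/* Hypothetical sizing: pretend the sbox buffer doubles per m_cost step. */
static size_t sbox_bytes(unsigned int m_cost)
{
    return (size_t)4096 << m_cost;   /* m_cost 10 -> 4 MiB, 11 -> 8 MiB */
}

void hash_on_stack(unsigned int m_cost)
{
    /* Fast (no malloc), but past m_cost ~10 this exceeds the typical
     * 8 MiB stack limit and the process crashes before any check runs. */
    uint8_t *sbox = alloca(sbox_bytes(m_cost));
    (void)sbox;
    /* ... fill and use the sboxes ... */
}

int hash_on_heap(unsigned int m_cost)
{
    /* Slower to obtain, but limited only by available memory, and a
     * failure comes back as NULL instead of a stack overflow. */
    uint8_t *sbox = calloc(sbox_bytes(m_cost), 1);
    if (sbox == NULL)
        return -1;
    /* ... fill and use the sboxes ... */
    free(sbox);
    return 0;
}
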
> For optimized defender code, where one might just be crazy enough to use
> an m_cost of 11, there might be some benefit in writing a custom malloc
> implementation that can quickly allocate heap memory without the
> unnecessary overhead, not unlike JTR's mem_calloc_tiny(). But I think
> this is an implementation-specific detail that is outside the scope of the
> PHC. Ideally implementers should be coding to the reference
> implementation and making their own optimizations, using the optimized
> code only as, erm, a reference.
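
For the record, the kind of allocator I read that as is essentially a
bump pointer over one big zeroed region, so each request costs a pointer
adjustment instead of a malloc() call.  A rough sketch in the spirit of
mem_calloc_tiny(), not JtR's actual code:

#include <stdint.h>
#include <stdlib.h>

static uint8_t *pool;                 /* one big region, allocated once */
static size_t   pool_size, pool_used;

int pool_init(size_t bytes)
{
    pool = calloc(bytes, 1);          /* zeroed up front */
    if (pool == NULL)
        return -1;
    pool_size = bytes;
    pool_used = 0;
    return 0;
}

/* Hand out a zeroed, aligned slice; align must be a power of two. */
void *pool_calloc_tiny(size_t bytes, size_t align)
{
    size_t start = (pool_used + align - 1) & ~(align - 1);
    if (start > pool_size || bytes > pool_size - start)
        return NULL;                  /* pool exhausted */
    pool_used = start + bytes;
    return pool + start;
}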

Fair enough.  I consider an unintended crash a minor, easily fixable
bug, so no biggie anyway.  If the output hashes don't pass automated
randomness tests, though, that's a show-stopper for me.  I'll see if I
can get every entry running the same way, through the common PHS()
call; that way, any mistake is less likely to be in my own test harness.
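
Roughly the harness I have in mind (a sketch only; the output length and
the t_cost/m_cost values are arbitrary, and each entry would link in its
own PHS() definition):

#include <stdio.h>
#include <string.h>

/* The common PHC prototype; each entry supplies its own definition. */
int PHS(void *out, size_t outlen, const void *in, size_t inlen,
        const void *salt, size_t saltlen,
        unsigned int t_cost, unsigned int m_cost);

int main(void)
{
    unsigned char out[32];
    const char *salt = "saltsaltsaltsalt";

    /* Hash a sequence of related inputs and dump raw output for the
     * randomness tests (ent, dieharder, etc.) to chew on. */
    for (unsigned int i = 0; i < 100000; i++) {
        char in[32];
        int n = snprintf(in, sizeof in, "password%u", i);
        if (PHS(out, sizeof out, in, (size_t)n,
                salt, strlen(salt), 2, 2) != 0)
            return 1;
        fwrite(out, 1, sizeof out, stdout);
    }
    return 0;
}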

I was also thinking of running valgrind on all the entries.  I might
find a few bugs that the authors can fix that way.
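
Something along these lines per entry, against whatever its test binary
happens to be called (illustrative invocation only; the binary name here
is made up):

  valgrind --leak-check=full --track-origins=yes --error-exitcode=1 ./phs-test

Uninitialized reads, out-of-bounds accesses, and leaks all come back with
a stack trace the authors can act on.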

Bill
