Date: Thu, 13 Mar 2014 13:05:48 -0400
From: Bill Cox <waywardgeek@...il.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] "Why I Don't Recommend Scrypt"

On Thu, Mar 13, 2014 at 12:49 PM, Tony Arcieri <bascule@...il.com> wrote:
> On Thu, Mar 13, 2014 at 9:43 AM, CodesInChaos <codesinchaos@...il.com>
> wrote:
>>
>> Servers can't afford hashing for a second and using a GB of RAM in the
>> process
>
>
> This presupposes either a high rate of password hash verifications or
> extremely RAM-limited servers.
>
> --
> Tony Arcieri

I keep hearing about password verification servers, where the server
does 100s or 1000s of verifications per second.  This may be very
appealing to large data center hosted applications.  I think there's a
question of just how secure you can be hashing 1MiB in 1ms, but Solar
Designer's shared ROM concept is pretty interesting.  With a custom
256 GiB SSD, you can hash a MiB of RAM and maybe ~1 MiB of SSD ROM in
~1 ms (to within about 10X... I haven't looked at the numbers).  An
attacker has to have a copy of that SSD to break the password, and
it's easier to be confident that an attacker hasn't downloaded 256GiB
over a slow connection than it is to be confident he hasn't copied a
master key of some sort.

Then even if an attacker does get a copy of the whole SSD, he still
has to run that 1 ms memory-hard algorithm, and the way Solar
Designer's building it, you'll probably have to do that on regular
CPUs rather than GPUs.
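
To make the shape of the idea concrete, here's a rough Python toy --
not Solar Designer's actual construction, and the numbers are
placeholders -- just RAM filling plus password-dependent reads into a
shared ROM (the real thing would use a custom ~256 GiB SSD image and
a real memory-hard function):

import hashlib
import os

def rom_mixed_hash(password, salt, rom, ram_bytes=1 << 20, rounds=64):
    # Toy sketch only: fill ~1 MiB of RAM from the password, then mix in
    # 64-byte blocks read from the shared ROM at password-dependent offsets.
    block = hashlib.blake2b(password + salt).digest()   # 64-byte block
    ram = bytearray()
    while len(ram) < ram_bytes:                         # RAM-filling phase
        block = hashlib.blake2b(block).digest()
        ram += block
    state = block
    for _ in range(rounds):                             # ROM-mixing phase
        rom_off = int.from_bytes(state[:8], "little") % (len(rom) - 64)
        ram_off = int.from_bytes(state[8:16], "little") % (ram_bytes - 64)
        state = hashlib.blake2b(state
                                + rom[rom_off:rom_off + 64]
                                + bytes(ram[ram_off:ram_off + 64])).digest()
    return state

# Stand-in "ROM" of 4 MiB for the example; the real proposal is a ~256 GiB SSD.
rom = os.urandom(1 << 22)
print(rom_mixed_hash(b"hunter2", b"per-user salt", rom).hex())

The point is just that every guess forces both the RAM fill and reads
scattered across the ROM, so an attacker without the SSD is stuck.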

My preferred solution would be to have browsers run native
implementations of the password hashing winner in server relief mode
over a secure connection.  In the shorter term, that could be
implemented in well-optimized JavaScript, which runs plenty fast
enough on the most popular browsers nowadays.  You could still argue
that a large SSD-backed hashing server is more secure, and you don't
have to wait for browsers or anything.  Part of the issue is: whom do
you trust?  I do not trust large data center admins.  They've proven
many times now that they are not, on average, very good at protecting
passwords.  On the other hand, those admins surely don't trust the
users to take any responsibility for password protection.  I prefer to
have my compute-intensive hashing done locally, because I can verify
it, while I have no idea what those admins do.
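
Roughly what I mean by server relief, as a Python sketch (scrypt is
standing in for the eventual winner, and the function names are mine,
not anyone's API): the client burns the memory and time, and the
server only does one cheap hash per login.

import hashlib
import hmac
import os

def client_stretch(password, salt):
    # Client side: the expensive memory-hard step (scrypt as a stand-in),
    # run in the browser or a native client, not on the server.
    return hashlib.scrypt(password, salt=salt, n=1 << 14, r=8, p=1, dklen=32)

def server_store(stretched):
    # Server side at registration: store only a cheap hash of the client's
    # stretched value, so a leaked database still needs the memory-hard step.
    return hashlib.sha256(stretched).digest()

def server_verify(stretched, stored):
    # Server side at login: one SHA-256 plus a constant-time compare, so the
    # server can keep up with hundreds or thousands of logins per second.
    return hmac.compare_digest(hashlib.sha256(stretched).digest(), stored)

salt = os.urandom(16)
stored = server_store(client_stretch(b"correct horse", salt))
# The client sends the stretched value over a secure connection at login.
print(server_verify(client_stretch(b"correct horse", salt), stored))  # True
print(server_verify(client_stretch(b"wrong guess", salt), stored))    # False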

Bill
