Message-ID: <op.xcqjtnjvyldrnw@laptop-air>
Date: Fri, 14 Mar 2014 16:35:37 -0700
From: "Jeremy Spilman" <jeremy@...link.co>
To: discussions@...sword-hashing.net, "Solar Designer" <solar@...nwall.com>
Subject: Re: [PHC] "Why I Don't Recommend Scrypt"

>> E.g. with bcrypt you cannot increase
>> the iteration count in an "offline" way (you have to start again from
>> the source password),

> True, although this can be done in a higher-level algorithm running on
> top of bcrypt.  Of course, it's no longer bcrypt proper, then - and yes,
> it'd be nice to have this built-in.

I was thinking about this during the thorough flogging we've been giving  
PBKDF2. There are a surprising number of features in the scope of  
"password storage and verification" which could be bundled or not within  
the underlying hashing function.

I like the idea of a hash-agnostic API which can properly handle salting,  
peppering, stretching, lengthening, serializing/deserializing, offloading,  
and upgrading. Let the hash function provide the tunable [compute-hard,  
memory-hard, ROM-port-hard] primitive, and let everything else live one  
layer up?
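As a rough illustration of that layering, here is a minimal sketch of such a hash-agnostic API; all names, the serialization format, and the choice of PBKDF2 as the pluggable hard function are my illustrative assumptions, not an existing library:

```python
# Sketch: the tunable hard function is pluggable; salting, parameter
# handling, serialization, and verification live one layer up.
import hashlib
import hmac
import os

def pbkdf2_hard(password: bytes, salt: bytes, params: dict) -> bytes:
    """Example pluggable hard function: PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, params["iterations"])

def store(password: str, hard_fn=pbkdf2_hard, params=None) -> str:
    params = params or {"iterations": 100_000}
    salt = os.urandom(16)
    digest = hard_fn(password.encode(), salt, params)
    # Serialization is handled at this layer, not inside the hard function.
    return f"{params['iterations']}${salt.hex()}${digest.hex()}"

def verify(password: str, stored: str, hard_fn=pbkdf2_hard) -> bool:
    iters, salt_hex, digest_hex = stored.split("$")
    digest = hard_fn(password.encode(), bytes.fromhex(salt_hex),
                     {"iterations": int(iters)})
    return hmac.compare_digest(digest, bytes.fromhex(digest_hex))
```

Swapping in a different hard function (memory-hard, ROM-port-hard) would not change the storage or verification layer at all, which is the point of the separation.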

>> Or even more advanced features such as the ability to
>> delegate computation to an external untrusted system.

> This is an interesting topic, which somehow we haven't touched on this
> list yet, although at least Jeremy Spilman and I thought of it before.

The approach I've been developing involves a Site handing just the hash of  
a salted-stretched-password to an external service. The Service HMAC's  
this hash with a Site-specific shared key and a Site-specific private key  
to generate pseudo-random indices into a very large ROM-on-SSD. (I'm  
currently running 16TB ROMs in multiple data centers; if anyone is  
interested in playing with this, please let me know.)

Data is retrieved from the calculated indices of the ROM and HMAC'd with  
the hash from the Site as well as the Site-specific shared and private  
keys again, and the result is returned to the Site. The Site then uses  
this 'Salt2' to further stretch its hash as desired, and finally saves  
the result to its database along with the original salt.
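The round-trip above could be sketched as follows; the key names, lookup count, PBKDF2 parameters, and the tiny in-memory list standing in for the 16TB ROM are all illustrative assumptions, not the actual implementation:

```python
import hashlib
import hmac

# Toy stand-in for the ROM-on-SSD: 1024 blocks of 4096 bytes.
BLOCK = 4096
ROM = [hashlib.sha256(i.to_bytes(8, "big")).digest() * (BLOCK // 32)
       for i in range(1024)]

SHARED_KEY = b"site-specific shared key"         # known to Site and Service
PRIVATE_KEY = b"service-side site-specific key"  # known only to the Service

def service(site_hash: bytes, lookups: int = 64) -> bytes:
    """Service side: derive pseudo-random ROM indices from the Site's
    hash and both keys, mix the retrieved blocks, and HMAC the result."""
    seed = hmac.new(PRIVATE_KEY,
                    hmac.new(SHARED_KEY, site_hash, hashlib.sha256).digest(),
                    hashlib.sha256).digest()
    acc = hmac.new(SHARED_KEY, site_hash, hashlib.sha256)
    for i in range(lookups):
        idx_bytes = hmac.new(seed, i.to_bytes(4, "big"),
                             hashlib.sha256).digest()
        idx = int.from_bytes(idx_bytes[:8], "big") % len(ROM)
        acc.update(ROM[idx])  # mix in the data at the calculated index
    return hmac.new(PRIVATE_KEY, acc.digest(), hashlib.sha256).digest()

def site_store(password: bytes, salt: bytes) -> bytes:
    """Site side: salt-and-stretch, delegate to get 'Salt2', stretch again."""
    h = hashlib.pbkdf2_hmac("sha256", password, salt, 10_000)
    salt2 = service(h)
    return hashlib.pbkdf2_hmac("sha256", h, salt2, 10_000)
```

Note that the Service only ever sees the already-stretched hash, never the password or salt, and the Site's stored result is unusable without re-running the ROM lookups.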

The net effect is that the salt and hash in the Site's database are not  
sufficient to brute-force any passwords without also having access to the  
external ROM.

The ROM is designed to be shared across many Sites (hence all the HMAC'ing  
with site-specific keys, to prevent siphoning of the ROM from one site for  
use with another).

By running many pseudo-random, uniformly distributed lookups into the ROM  
for each request, the scheme forces an attacker to exfiltrate  
substantially all of the ROM to run an offline attack. For example, if  
each request requires 64 lookups into the ROM, then an attacker  
possessing the salts & hashes from the Site along with 90% of the ROM  
still could not verify 99.9% of their guesses.
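The 90% / 99.9% figures follow from a simple calculation: with k independent, uniformly distributed lookups, a guess is verifiable only if every lookup lands in the fraction p of the ROM the attacker holds, i.e. with probability p**k:

```python
# Probability an offline guess is verifiable with partial ROM possession.
p = 0.90   # fraction of the ROM the attacker has exfiltrated
k = 64     # lookups per request
verifiable = p ** k
print(f"{verifiable:.4%} of guesses verifiable")   # roughly 0.12%
print(f"{1 - verifiable:.1%} cannot be verified")  # 99.9%
```

Doubling the lookup count squares the attacker's disadvantage, so the cost of forcing near-total exfiltration grows quickly.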

I was thinking this kind of external service was out-of-scope for this  
PHC, but if people are interested I'd be happy to discuss the details.


> Moreover, there are probably ways to
> implement this feature as a built-in better than it'd be implemented
> externally, by introducing blinding with some per-hash-computation
> randomness rather than merely by relying on fixed secrets (the
> difference is in how much metadata is leaked to the external party).

I believe the lower-bound here is the external party will know that you  
made a request (either a user tried to login, or the Site is generating  
fake logins), and obviously the given input for each request, but the  
external service would not know if that input corresponds to a successful  
or unsuccessful login attempt.

Assuming the external party is hostile and saves every input you ever  
send it, the Service can presume that a request with a duplicate input  
means a user is trying the same password twice, but it still learns  
nothing about the password, which user it is, or even necessarily whether  
it's a valid password.

I think the key requirement is to treat the external party as untrusted,  
i.e. they can't recover any plains, and there's no hostile output they can  
provide back to you that would ever weaken your stored value or lead to  
collisions. If the external service is compromised, the attacker gets the  
metadata, but they don't get any passwords unless they compromise the Site  
as well.

If there's a way to reduce metadata leakage any further, I would be very  
interested in this.
