Date: Thu, 9 Jan 2014 03:38:11 +0000
From: Peter Maxwell <>
Subject: Re: [PHC] Lyra, Password Key Derivation Based On The Sponge Construction

On 9 January 2014 02:09, Bill Cox <> wrote:

> On Wed, Jan 8, 2014 at 7:48 PM, Peter Maxwell <>wrote:
>>  On 8 January 2014 23:46, Bill Cox <> wrote:
>>> RC4 is the perfect example to use, and I disagree about the entropy
>>> ratio.  If you have 20 bits of entropy in a user's password, and 256 bits
>>> of entropy from salt, using RC4 initialized from this 276 bits to generate
>>> 4GB of hash data is a *good* idea.
>> Only if you're talking about the tiny minority of systems that can afford
>> to use 4Gb of memory that you can guarantee is protected and not ever paged
>> to disk, *per-login*.  If your server needs to handle even 10 logins per
>> second, that's a wee bit of a problem... it wasn't that long ago that I was
>> admining production webservers with 1Gb memory or less.
> Catena's framework offers a good solution for this.  Servers that can't
> support the load of compute and memory intensive KDFs have the option of
> offloading to the client.  I am a big fan of "Server Relief Support".  At
> the same time, that does not mean that the KDF has to be compute or memory
> intensive in applications that can't support it.  That's what m_cost and
> t_cost are for.  However, the value of a memory-hard KDF goes up in
> proportion to the memory used.

"Server Relief" or pass-the-hash schemes rely on altering the protocol used
between the client and server; they aren't a drop-in replacement for
standard password logins.  If we can expect people to replace entire
authentication mechanisms then surely we can do better than a pure password
system to begin with?

Similarly, even if we accept we can offload the work onto the client, that
could be a mobile device which arguably has even less memory to spare; or
the client could be a javascript context in a browser which may be rather
limited indeed in terms of usable memory.

That aside, we're still back at the original problem of protecting the
entirety of a large memory region when you've filled it with an RC4
keystream derived from what is essentially a 20-bit key (yes, I will admit
that 20 bits may be somewhat of a contrived example, but let's run with it
as it was what you specified to start with).
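To make that concrete, here's a rough sketch of the attack (pure-Python
RC4, with a deliberately tiny 10-bit demo space standing in for the 20-bit
one; the salt value and key layout are made up purely for illustration):

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream (standard KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

SALT = b"public-salt-0123456789abcdef"  # public: adds nothing an attacker lacks

def fill_key(pw_index: int) -> bytes:
    # stand-in for "password || salt" keying; pw_index models the password
    return pw_index.to_bytes(3, "big") + SALT

# Attacker's view: a 16-byte fragment leaked from the start of the keystream.
secret_pw = 0x2A3                       # one of 2**10 candidates in this demo
leaked = rc4_keystream(fill_key(secret_pw), 16)

# Brute force: try every candidate; none of the 4GB region is ever needed.
recovered = next(pw for pw in range(2**10)
                 if rc4_keystream(fill_key(pw), 16) == leaked)
```

The point being: the attacker tests candidates against the leaked fragment
alone, so the cost of the search is independent of how much memory the
defender filled.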

>> If you cannot guarantee to protect that entire 4Gb blob leakage then as
>> Poul-Henning pointed out your lack of entropy becomes an issue, i.e. the
>> salt is public and therefore doesn't count towards the entropy so your 20
>> password bits is only 2^20 ~ 1m combinations and an attacker can
>> potentially determine the password from even a small fragment* of that 4Gb
>> keystream with less effort than the memory-hard problem.
> I could use the world's best crypt-strength hash or permutation to fill
> memory, and if an attacker gets access to it, he'll have a way to abort
> incorrect password guesses early.  That's why I think we need to perform a
> significant number of initial hashes on the password to create an
> intermediate derived password before writing data derived from it to a lot
> of memory.  If memory leaks to an attacker, and we've done 2048 rounds of
> SHA-256 before filling memory, the attacker will be no better off than if a
> user had bothered to protect his id_rsa private key the hard-coded maximum
> of 2048 rounds of SHA-256.

Yes, that's correct, but a 20-bit password protected with 2k rounds of
SHA-256... I'll take that bet.
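Rough arithmetic behind that bet, as a sketch (the 10^10 hashes/s GPU
figure is a ballpark assumption, not a measurement):

```python
import hashlib

def pre_hash(password: bytes, salt: bytes, rounds: int = 2048) -> bytes:
    """Iterated SHA-256 pre-hash applied before any memory is filled."""
    h = hashlib.sha256(password + salt).digest()
    for _ in range(rounds - 1):
        h = hashlib.sha256(h).digest()
    return h

# Work to sweep the whole 20-bit password space through the pre-hash:
total_hashes = 2**20 * 2048             # = 2**31 SHA-256 calls
# at roughly 1e10 SHA-256/s on commodity GPU hardware (assumed figure),
# the entire space falls in a fraction of a second
seconds_on_gpu = total_hashes / 1e10
```

So the 2048 rounds multiply the attacker's work by 2^11, but 2^20 x 2^11 =
2^31 hashes is still trivial; the pre-hash only helps when the password
itself carries real entropy.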

Essentially what I'm saying is the larger the memory region you use, the
fewer guarantees you can make about protecting it, which in turn places
more importance on how you've filled that memory region.

>> [*] - in this specific scenario with RC4, it would really need to be near
>> the beginning of the keystream output but the argument generalises better
>> if the filling algorithm is not cryptographically strong.
> What argument?  Are attackers going to invert the hash?

As above.

>>> My point is that it's useless to worry about the entropy of the data we
>>> write to memory.  RC4 is already too strong, and a waste of CPU cycles.  So
>>> long as we don't lose significant entropy along the way, any dumb hash that
>>> an attacker has to actually compute is good enough.
>> As a side issue: RC4 is not generally considered particularly strong
>> these days, there are non-trivial biases in the keystream.
> Which is irrelevant for memory-hard KDFs.  Why should we care?

A mix of above and that I was giving some useful advice, specifically that
RC4 really shouldn't be used in anything new.

>> The main question however concerns the feasibility of protecting memory
>> regions that large, or for that matter anything anywhere near that large,
>> when you're filling with password dependent data using an algorithm that
>> can be easily inverted.
> Easily inverted?  Not that it matters, but who cares if we can invert a
> few steps of a hash when we have billions of operations needing inverting?

That's the point: if a small amount of that memory area is enough to test
passwords against then you don't need to do billions of operations, you can
brute force it directly, e.g. in your RC4 example, compromise of a small
section near the start of the keystream output is sufficient.

>> In other words, if - like you said - you're not losing entropy along the
>> way then it implies a permutation, and if it is easy to invert said
>> permutation then an attacker only needs a word or two of memory to
>> undermine your memory-hardness.
> There can be weaknesses in a hash function that could potentially lead to
> non-memory hardness.  The hash function has to be considered.  However,
> some basic non-linearity and the property that entropy is not lost during
> hashing (permutations or low collision hashes) is all that's required,

Ok, let's consider a hypothetical example where you fill each element of
the memory region m with something like m[i] = hash( password, i, salt ).
I know this is a very trite example but it's to illustrate a point.
Obviously, if a single m[i] were compromised then an attacker need only
iterate hash() over the 20 bits of password to find the correct password.
The hash() function could be 2k iterations of SHA-256, which is
non-invertible itself and has strong security properties, but in this
artificial scenario it still fails because the low-entropy password can be
brute forced.
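Sketching that trite example and the resulting attack (the salt, the
candidate format, and the leaked index are all invented for illustration,
with a tiny demo space in place of the 20-bit one):

```python
import hashlib

SALT = b"public-salt"                   # public, so it adds no secrecy

def fill_element(password: bytes, i: int) -> bytes:
    """Hypothetical fill rule: m[i] = hash(password, i, salt)."""
    return hashlib.sha256(password + i.to_bytes(8, "big") + SALT).digest()

def recover(leaked_value: bytes, leaked_index: int, candidates) -> bytes:
    # A single leaked word acts as a password oracle: no other part of
    # the memory region, and no memory-hard work, is needed.
    for pw in candidates:
        if fill_element(pw, leaked_index) == leaked_value:
            return pw
    return None

# Demo: the attacker sees one m[i] from anywhere inside the region.
secret = b"pw-000777"
m_i = fill_element(secret, 123456)
candidates = (b"pw-%06d" % n for n in range(2**12))  # tiny demo space
found = recover(m_i, 123456, candidates)
```

Note that hash() being non-invertible is irrelevant here: the attacker
never inverts anything, they just recompute forwards over the small
password space.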

I suspect we've deviated quite heavily from where we started and
Poul-Henning's caution is somewhat more subtle than what we've gone over
here, but I need to sign off for the night so I'll leave it there for now.
