Date: Wed, 17 Apr 2013 09:48:45 -0700
From: "Dennis E. Hamilton" <dennis.hamilton@....org>
To: <discussions@...sword-hashing.net>,
	"'Jeffrey Goldberg'" <jeffrey@...dmark.org>
Subject: RE: [PHC] Let's not degenerate when if the PRF is too narrow for desired output

Funny, Marsh Ray and I were discussing the problems with PBKDF2 last month.  Here are my latest thoughts:

There are two problems.  First, asking PBKDF2 for more key material does not increase the work factor if parallel execution is possible.  Second, because the output blocks are independent, parallel execution allows PBKDF2 to be computed selectively, attacking only part of the key, as discussed on this thread.
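
To make the parallelism concrete, here is a minimal sketch of the standard construction (RFC 2898) in Python, with HMAC-SHA-256 standing in for the PRF; note that each output block T[i] is a function of only the password, the salt, and the block index i:

import hashlib, hmac, struct

def prf(password, data):
    return hmac.new(password, data, hashlib.sha256).digest()

def pbkdf2_block(password, salt, iterations, i):
    # T[i] = U[i,1] xor U[i,2] xor ... xor U[i,c]
    u = prf(password, salt + struct.pack(">I", i))   # U[i,1]
    t = u
    for _ in range(iterations - 1):                  # U[i,2] .. U[i,c]
        u = prf(password, u)
        t = bytes(a ^ b for a, b in zip(t, u))
    return t

def pbkdf2(password, salt, iterations, dklen):
    hlen = 32                                        # SHA-256 tag size
    nblocks = -(-dklen // hlen)                      # ceil(dklen / hlen)
    # Each block depends only on (password, salt, i): the calls below are
    # independent, could run on separate cores or GPUs, and an attacker who
    # only needs bits from one block never computes the others.
    dk = b"".join(pbkdf2_block(password, salt, iterations, i)
                  for i in range(1, nblocks + 1))
    return dk[:dklen]

This should match hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen); the point is only to show where the block independence comes from.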

The requirement should be that all of the key must be generated before any of its bits are known.  That usually means either that generation can't be parallelized, or that there is an end-game on the results that effectively makes every delivered output block dependent on having produced all of them.  

 1. The first (and probably best) solution is to use a PBKDF2 PRF whose output block (the tag) size is at least as large as the desired derived key (see the short example after this list).

 2. Another is to wait for Keccak, with its acceptable variable-output block size, to be usable as the PRF in an iterated MAC.

 3. Finally, make a PBKDF2x that satisfies these requirements.  That would appear to be better than complex multiple-primitive alternatives.
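
For comparison, option 1 needs nothing new; a one-line illustration, assuming Python's hashlib and placeholder parameters:

import hashlib

# A 512-bit derived key from HMAC-SHA-512 is a single PBKDF2 block, so
# there is no second block to parallelize or to attack separately.
dk = hashlib.pbkdf2_hmac("sha512", b"password", b"salt", 100_000, dklen=64)
assert len(dk) == 64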

3.1 If serialization is acceptable, each block generated by PBKDF2 can be made dependent on its predecessor: the supplied salt is used only for the first block, and each succeeding block uses the previous block's final round contribution (U[block-1, count]) as its salt.  (The block result is the xor of its U[block, i] values, so the U[block, count] values are not easily extractable, and a block's salt is not present in that block's xor-ing.)  A sketch appears after 3.3 below.

3.2 In this case, every block but the initial block depends on all of its predecessors.  Making each block also dependent on all of its successors is trickier.  Given the chaining in 3.1, this reduces to making every block but the last depend on the final block (which already depends on all the others), but they can't all depend on the final block in a trivial way.  The approach I favor, off the top of my head, is to produce additional HMAC rounds after the last block, generating a series of hashes that are xor-ed back, one each, into the predecessor blocks.  This is tantamount to generating (abbreviated) additional blocks that are then discarded except for their sealing together of the block values that are delivered.  The sketch after 3.3 includes this sealing pass.

3.3 There are variations that still allow parallel generation, but that seems undesirable, considering that the whole point of PBKDF2 is an increased work factor.
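
Putting 3.1 and 3.2 together, a minimal sketch (Python again, HMAC-SHA-256 as a stand-in PRF; the name pbkdf2x_sketch and the details of the sealing pass are just one reading of the above, not a specification):

import hashlib, hmac, struct

HLEN = 32  # SHA-256 tag size

def prf(password, data):
    return hmac.new(password, data, hashlib.sha256).digest()

def block(password, salt, iterations, i):
    """Return (T[i], U[i, count]): the block and its final round contribution."""
    u = prf(password, salt + struct.pack(">I", i))   # U[i,1]
    t = u
    for _ in range(iterations - 1):                  # U[i,2] .. U[i,c]
        u = prf(password, u)
        t = bytes(a ^ b for a, b in zip(t, u))
    return t, u

def pbkdf2x_sketch(password, salt, iterations, dklen):
    nblocks = -(-dklen // HLEN)                      # ceil(dklen / HLEN)
    blocks, chain = [], salt
    # 3.1: forward chaining -- block i's salt is block i-1's final U value,
    # so no block after the first can be started before its predecessor.
    for i in range(1, nblocks + 1):
        t, u_last = block(password, chain, iterations, i)
        blocks.append(bytearray(t))
        chain = u_last
    # 3.2: sealing pass -- continue the HMAC chain past the last block and
    # xor one extra hash back into each earlier block, so every delivered
    # block also depends on the final block having been produced.
    for j in range(nblocks - 1):
        chain = prf(password, chain)
        blocks[j] = bytearray(a ^ b for a, b in zip(blocks[j], chain))
    return b"".join(bytes(b) for b in blocks)[:dklen]

The sealing pass costs only nblocks-1 extra HMAC calls, negligible next to the iteration count, but no delivered block can be finalized until the last block has been produced.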

 - Dennis



-----Original Message-----
From: Simon Josefsson [mailto:simon@...efsson.org] 
Sent: Wednesday, April 17, 2013 02:26
To: Jeffrey Goldberg
Cc: discussions@...sword-hashing.net
Subject: Re: [PHC] Let's not degenerate when if the PRF is too narrow for desired output

Jeffrey Goldberg <jeffrey@...dmark.org> writes:

> This is mostly me whining. I just got bitten by the fact that PBKDF2
> does weird stuff if you ask for more derived data than is natural for
> the PRF you give it.

Perhaps this can be converted into a requirement for a nicer KDF: the
cost of computing one bit of the output should be as high as the cost of
computing the entire output?

/Simon
