Date: Sat, 22 Mar 2014 22:06:57 -0700
From: "Jeremy Spilman" <>
Subject: Re: [PHC] Transforming hash to different cost setting

You can always add additional rounds of hashing without the password by  
feeding the output from the previous round as the input to the next round.
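A minimal sketch of that chaining, using scrypt as a stand-in for the PHF (the function names and parameters here are illustrative, not recommendations):

```python
import hashlib

def initial_hash(password: bytes, salt: bytes, cost: int) -> bytes:
    # One round of a memory-hard PHF; scrypt stands in for the real function.
    return hashlib.scrypt(password, salt=salt, n=1 << cost, r=8, p=1, dklen=32)

def upgrade_round(prev: bytes, salt: bytes, cost: int) -> bytes:
    # The password is not needed: the previous output is the next round's input.
    return hashlib.scrypt(prev, salt=salt, n=1 << cost, r=8, p=1, dklen=32)

salt = b"per-user-salt"
h = initial_hash(b"hunter2", salt, cost=5)
h = upgrade_round(h, salt, cost=6)  # offline upgrade, no password required
```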

You'll need to know how many outer loops/rounds there are, and the cost
parameters for each round. The total number of rounds would be serialized
with the hash, but the cost parameters for each round could be determined
in a variety of ways...

1) Constant cost for each round
2) Fully deterministic from the current round number (baked into the PHF),
like Catena
3) Defined externally but globally for all passwords (e.g. a static array
of cost parameters for each round)
4) Defined explicitly in the hash serialization (possibly unique per hash)

Each trades off some flexibility in controlling the cost for simplicity.
IMO #2 or #3 are good options.
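For illustration, options #2 and #3 might look like this (the schedule and the table values are hypothetical):

```python
# Option #2: cost fully determined by the round number (hypothetical schedule).
def cost_for_round(i: int, base: int = 5) -> int:
    return base + i  # round 0 -> cost 5, round 1 -> cost 6, ...

# Option #3: a static, globally published table of per-round costs.
COST_TABLE = [5, 6, 7, 7, 8]  # illustrative values only

def cost_from_table(i: int) -> int:
    return COST_TABLE[i]
```

Either way, the hash serialization only needs to carry the total round count, not the per-round costs.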

As Bill and Solar point out, the other trade-off to increasing cost by
slapping on additional rounds is that latency increases faster than cost,
by up to 3x. Adding a second round with the same cost parameters increases
cost and latency equally, but adding a second round with "better" cost
settings means the first round is "wasting" latency on the wrong cost
settings. Alex says it better:

> Another/worse drawback is that area-time cost for upgraded hashes is  
> lower than
> it can be for same-duration new hashes.

So this raises the question of whether we should actually have two
different integers, allowing us to change cost in both directions. Just
one example way to do this... assuming approach #2 or #3 above, serialize
two integers:

- An 'r1' parameter indicating the 'starting round'
- An 'r2' parameter indicating the 'ending round'

When you are upgrading all your hashes in your database offline, you  
increase 'r2'. When the user successfully logs in, you can then change  
'r1' and 'r2' arbitrarily, possibly making 'r1' = 'r2'. This means you'll  
only need to use the less efficient method until the next time the user  
successfully logs in after you've done an offline upgrade.
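A minimal sketch of the r1/r2 scheme, again using scrypt as a stand-in and a hypothetical cost schedule (approach #2):

```python
import hashlib

def round_cost(i: int) -> int:
    # Approach #2: cost derived purely from the round number (hypothetical schedule).
    return 5 + i

def hash_rounds(seed: bytes, salt: bytes, r1: int, r2: int) -> bytes:
    # Run rounds r1..r2 inclusive, feeding each output into the next round.
    h = seed
    for i in range(r1, r2 + 1):
        h = hashlib.scrypt(h, salt=salt, n=1 << round_cost(i), r=8, p=1, dklen=32)
    return h

salt = b"per-user-salt"

# Initial hash: a single round, stored as (r1, r2, digest).
stored = (0, 0, hash_rounds(b"hunter2", salt, 0, 0))

# Offline upgrade: extend r2 by continuing from the stored digest -- no password needed.
r1, r2, digest = stored
stored = (r1, 2, hash_rounds(digest, salt, r2 + 1, 2))

# On the user's next successful login the server has the password again and can
# re-hash from scratch, e.g. collapsing to r1 = r2 = 2 (one round at cost 7).
```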

I think the trade-offs are more complexity, and a signal of how recently  
the user has logged in -- both embedded in the hash serialization, as well  
as through timing the login latency. You could hide both the serialized  
indicator as well as the timing attack at the cost of even more complexity  
and some loss of flexibility. To exploit the timing attack would require a  
number of invalid login attempts to a valid username, which you want to  
defend against in any case.

Sometimes the last time the user logged in is public information anyway.  
But if you need to protect this, I think you can at least decrease the  
latency delta through careful choice of cost parameters. If you start at
5/5 (meaning a single round at cost '5') and increase all hashes offline
to 5/7 (consecutive rounds at costs 5, 6, 7), then you want to find the
single round (8/8?) that matches the latency of 5/7 as closely as possible.
This may be difficult or impossible in practice, since 8/8 has completely
different resource contention than 5/7... but that's the whole point.
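Assuming latency scales as 2**cost (as it would with an scrypt-style N = 2**cost), the 5/7-vs-8/8 comparison reduces to quick arithmetic:

```python
# Hypothetical model: one round at cost c takes time proportional to 2**c.
upgraded = sum(2 ** c for c in (5, 6, 7))  # 5/7: rounds at costs 5, 6, 7
fresh = 2 ** 8                             # 8/8: one round at cost 8

print(upgraded, fresh)  # 224 vs. 256 -- within 12.5% of each other
```

So under this simple model the latencies nearly match, though real resource contention would pull the numbers apart.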

Solar discussed a lot of this under the 'cost upgrade' thread. But if we  
all stopped writing just because Solar said it earlier, the list would be  
a whole lot quieter! :-)

