[&lt;prev] [next&gt;] [&lt;thread-prev] [thread-next&gt;] [day] [month] [year] [list]
Date: Thu, 13 Mar 2014 21:59:31 -0700
From: Larry Bugbee <>
Subject: Re: [PHC] "Why I Don't Recommend Scrypt"

On Mar 13, 2014, at 4:20 PM, Thomas Pornin <> wrote:
> Another point is that I don't actually believe in tunable parameters. At least not many. If we take 100 sysadmins, and give them a password hashing function with a single tunable parameter, we can expect that about 80 or 90 of them will do at least token effort at setting that parameter to a right value for their problem at hand. With two parameters, that proportion will drop; since measuring performance is actually hard (or, at least, much too rarely done, for some reason), two or more tunable parameters imply a lot of combinations which won't be explored correctly, or at all.
> In my view, a "good" password hashing function should provide enough parameters to make it adequate for a large variety of situations, but must refrain from complexity. A big part of what makes a cryptographic algorithm "good" is how much it intrinsically protects users from their own mistakes. Mutatis mutandis, this is why I really prefer HMAC over most other MAC algorithms: since HMAC has no IV, it is much harder to get wrong.
> That's my opinion, of course, but it implies that I am not a big fan of functions which can be tuned for both CPU and RAM "usage" (for some notion of usage). Such functions are doomed to be awfully misapplied in practice.

Good point.

Too many options is not good, but...  

I submit the situation here is different.  Defenders differ widely in available resources, in workload (verifications per second), and in perhaps 2 or 3 general levels of protection-vs-performance trade-offs.  If an algorithm is "factory tuned" for a grossly different configuration or workload, the mismatch might be bad enough to drive admins outside the box and ultimately leave them in an even worse place.

A good compromise might be a set of "macro settings" that tune the algorithm for various likely machine configurations, workloads, and levels of protection.  Each macro setting would define the values for all of the detailed tunable settings.

Appropriately designed tables would assist the admin in making the right (or at least a close) choice.  Then, when a macro setting is selected, the detailed tunable values are set accordingly.  There should be enough of these macro settings that the admin can be comfortable he/she is making a reasonable choice.  ...and for those with special situations, and the knowledge to do so, the detailed tunable settings could override the macro preset.
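To make the idea concrete, here is a minimal sketch of what such a preset table and override mechanism might look like.  The profile names, parameter names (loosely modeled on scrypt-style cost parameters), and values are all illustrative assumptions, not anything from an actual PHC candidate:

```python
# Hypothetical "macro settings": each preset names a deployment profile
# and expands to concrete values for the detailed tunable parameters.
# All names and numbers below are illustrative placeholders.
PRESETS = {
    "interactive-server":  {"cpu_cost": 2**14, "mem_cost_kib": 8 * 1024,   "parallelism": 1},
    "backend-batch":       {"cpu_cost": 2**16, "mem_cost_kib": 64 * 1024,  "parallelism": 2},
    "high-security-vault": {"cpu_cost": 2**20, "mem_cost_kib": 512 * 1024, "parallelism": 4},
}

def resolve_params(preset: str, **overrides) -> dict:
    """Look up a macro setting, letting knowledgeable admins override
    individual detailed tunables for special situations."""
    if preset not in PRESETS:
        raise ValueError(f"unknown preset: {preset!r}")
    params = dict(PRESETS[preset])  # copy, so the preset table stays pristine
    for name, value in overrides.items():
        if name not in params:
            raise ValueError(f"unknown tunable: {name!r}")
        params[name] = value
    return params
```

A typical admin would just pick a profile, e.g. `resolve_params("interactive-server")`; an expert with an unusual deployment could do `resolve_params("interactive-server", parallelism=2)`.  The point is that the common path involves one comprehensible choice, while the detailed knobs remain reachable.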

Macro settings or not, table design will be an important consideration in making the right choices.
