Message-ID: <op.xrapwef2yldrnw@laptop-air>
Date: Mon, 22 Dec 2014 16:49:16 -0800
From: "Jeremy Spilman" <jeremy@...link.co>
To: discussions@...sword-hashing.net, epixoip <epixoip@...dshell.nl>
Subject: Re: [PHC] How important is salting really?

On Thu, 11 Dec 2014 17:32:03 -0800, epixoip <epixoip@...dshell.nl> wrote:

> the primary advantage to salting today is the factor of N slowdown in
> cracking. If you have 1M unique salts, then your effective speed is
> 1/1000000 of your total speed. 30 GH/s becomes 30 KH/s. It limits the
> types and variety of attacks we can run.

I saw your post on Ars Technica trying to explain this to the general
public in the wake of their breach.

http://arstechnica.com/staff/2014/12/ars-was-briefly-hacked-yesterday-heres-what-we-know/

It's an interesting case in point for explaining what level of protection
users should expect. In this case, the function imposes a non-negligible
cost to complete in full, but not a particularly high bar per individual
guess. You mentioned a single R9 290X can pull ~12.2 GH/s on raw MD5, and
here we have 2,048 iterations.

For the Ars hack, we presumably have a very large number of salted hashes
leaked along with corresponding PII. In a non-targeted attack, you gain
the most cracked hashes per second, on average, by trying the most
probable passwords in order of expected popularity against each remaining
salt. For each pass, the variable input to the cracking function is the
salt; everything else is fixed data. In this case the per-user hash rate
is the single-guess rate divided by the number of uncracked salts. (A
quick numeric sketch of these rates follows at the end of this message.)
In a targeted attack, of course, the salt is fixed and becomes less
relevant.

If your metric is "% of database cracked over time", then salting greatly
affects that metric if the database is large. As you mentioned, for
Forbes' 1M passwords, the crack rate is down around 20%. The massive size
of the breach--e.g. 1 million accounts--reduces the speed with which you
advance to the next word in the dictionary. In this case, losing a lot of
hashes at once helps users, if the attacker is completely unselective in
their cracking. But can we always make that assumption--that the attack
will not be selective?

If the metric is average latency to crack a single password, that metric
is not at all sensitive to how large the leak is or to the number of
salts. It depends purely on the cost of hashing. When might the target
size be one? Well, for example, in a targeted or on-demand attack, versus
the general case of an attacker trying to re-sell the weakest passwords in
the database in bulk. In this case you might look at the average cost to
crack just one account. You can't rely on hiding among millions of salts
to slow the attack down.

Another way to look at it: what is the minimum amount of entropy a
password would need, under the hashing scheme, to resist an offline attack
given some assumed resource and time constraints? E.g., for 2,048
iterations of MD5, let's say I can target a single account with $10,000 of
hardware and achieve 50 MH/s, or roughly 30 trillion guesses in a week's
time. What percentage of user-chosen passwords is that going to
successfully crack?
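As a back-of-the-envelope sketch of the rates above (the 12.2 GH/s and
2,048-iteration figures are from this thread; the GPU count and salt
counts are just assumptions for illustration):

    # Sketch of per-target guess rates under salting. The 12.2 GH/s and
    # 2,048-iteration figures are from this thread; salt counts below are
    # illustrative assumptions.

    RAW_MD5_RATE = 12.2e9   # hashes/sec, one R9 290X on raw MD5 (from thread)
    ITERATIONS   = 2048     # MD5 iterations per password guess (from thread)

    guess_rate = RAW_MD5_RATE / ITERATIONS
    print(f"Per-guess rate, single GPU: {guess_rate / 1e6:.2f} MH/s")  # ~5.96

    # Non-targeted attack: each candidate password must be re-hashed once
    # per remaining (uncracked) salt, so the per-user rate divides by that.
    for uncracked_salts in (1, 1_000, 1_000_000):
        per_user = guess_rate / uncracked_salts
        print(f"{uncracked_salts:>9,} uncracked salts -> "
              f"{per_user:,.2f} guesses/sec per user")

So a targeted attack (one salt) runs at the full ~5.96 MH/s per user,
while spreading the same hardware across a million uncracked salts leaves
roughly 6 guesses/sec against any one of them.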
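And for the closing question, here is the guess budget and the entropy bar
it implies (again assumed numbers; the actual crack percentage depends on
the real password distribution, which this does not model):

    import math

    # Guess budget for the closing example (assumed attacker: $10,000 of
    # hardware sustaining 50 MH/s against a single salt for one week).
    RATE    = 50e6            # guesses/sec against one account
    SECONDS = 7 * 24 * 3600   # one week

    budget = RATE * SECONDS
    print(f"Guess budget: {budget:.3g} guesses")         # ~3.02e13, ~30 trillion

    # A password whose effective guess-entropy is below log2(budget) bits
    # falls within the budget; on average it falls after half is spent.
    print(f"Entropy bar: {math.log2(budget):.1f} bits")  # ~44.8 bits

What fraction of user-chosen passwords sits below ~45 bits of effective
guess-entropy is an empirical question about the password distribution,
which is exactly the open part of the question above.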