Date: Fri, 3 Jan 2014 11:27:49 -0500
From: Bill Cox <waywardgeek@...il.com>
To: discussions@...sword-hashing.net
Subject: Weakness in keystretch hashing, and a solution (I think)

The lower bound on area*time for an attacker is too low for my current
"keystretch" hash.  I'll have to make modifications.  I don't know if it's
helpful to discuss this sort of thing here, but just in case other PHC
entries have similar weaknesses, there's still a month for authors to fix
them.  I suspect the other authors consider this a rookie mistake :-)

The great thing about memory-hard KDFs is that the longer you run them, the
longer an attacker has to run them too, and the more memory he has to pay
for.  This is the key advantage over other KDFs.  An attacker can always
choose to use less memory than the user, and instead recompute the missing
data on the fly as needed during hashing.  To ensure that an attacker faces
a lower bound on area*time, it is critical for a memory-hard KDF to
increase in runtime by at least a factor of n when an attacker tries to use
1/n of the memory.  Scrypt has this property, though I suspect that an
attacker using maybe 30% of the memory can reduce his cost*time per guess
by something like 2X.  That factor hits a wall quickly, however, and the
minimum cost*time that he must pay is still very effective.
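For intuition, here is a toy model of that trade-off (my own sketch, not
real scrypt): an attacker who stores only every n-th block of the V array
in scrypt's random-access loop must recompute, on average, (n-1)/2 hash
evaluations per access, so his time grows about as fast as his memory
shrinks:

```python
import random

def romix_cost(num_blocks, keep_every):
    """Toy model of scrypt's second (random-access) loop: the attacker
    stores only every `keep_every`-th block of the V array and recomputes
    the rest on demand.  Returns (hash_evaluations, blocks_stored)."""
    rng = random.Random(0)
    hashes = 0
    stored = (num_blocks + keep_every - 1) // keep_every
    for _ in range(num_blocks):          # num_blocks random reads
        j = rng.randrange(num_blocks)    # index of the block needed
        hashes += j % keep_every         # recompute forward from nearest stored block
        hashes += 1                      # the mixing hash itself
    return hashes, stored

for n in (1, 2, 4, 8):
    t, m = romix_cost(1 << 12, n)
    print(f"keep 1/{n}: time={t:6d}  memory={m:5d}  time*memory={t * m}")
```

In this toy model the attacker's time*memory product only drops by roughly
2X before flattening out, consistent with the estimate above.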

For my current keystretch hash, I only have 64 bytes of state to generate
the next memory "page" (just my word for a small block of memory), which I
set to 16KB normally, plus a random previous page.  An attacker could keep
only the 64 bytes of state per 16KB page, a memory reduction of 256X.  To
generate the next page, he must regenerate the previous random page, which
depends on another random previous page, and so on.  There are an average
of log2(numPages) pages that he must recompute, which isn't strong enough
to be considered memory-hard.  A device using minimum memory (1/256th)
would be much more cost-effective for an attacker than what he would have
to build to attack scrypt.  Logarithmic runtimes are good for attackers.
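The logarithmic chain length is easy to check with a toy simulation
(assuming, as above, that each page depends on one uniformly random
earlier page):

```python
import math
import random

def chain_length(num_pages, rng):
    """Recomputation chain length for the last page when the attacker
    stores only the 64-byte per-page states, not the pages themselves.
    Assumes each page depends on one uniformly random earlier page."""
    length = 0
    i = num_pages - 1
    while i > 0:
        i = rng.randrange(i)   # the random predecessor page
        length += 1
    return length

rng = random.Random(0)
for num_pages in (1 << 10, 1 << 14, 1 << 18):
    avg = sum(chain_length(num_pages, rng) for _ in range(2000)) / 2000
    print(f"pages={num_pages:7d}  avg chain={avg:6.2f}  ln(pages)={math.log(num_pages):5.2f}")
```

The measured average grows logarithmically in numPages, which is exactly
the weakness: doubling the memory parameter adds only one expected
recomputation step for the attacker.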

The simplest fix I see is to increase the state used to compute pages to
the page length or more (scrypt uses 2X).  In the example attack above,
where an attacker keeps only the key state, he'd save no memory, but
runtime would still increase by log2(numPages).  If he keeps only 1/4 of
the key state and 1/4 of the page memory, he'll have to regenerate both
missing key state and missing page data 3/4 of the time, which seems to
blow up exponentially in terms of how much of the missing memory and key
state has to be recomputed.  Exponential is bad for attackers.
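A simplified recurrence gives a feel for the blow-up (this is my own rough
model: it treats both of a page's inputs, the previous key state and a
random earlier page, as uniformly random earlier items, each missing with
probability 1 - keep_fraction):

```python
def expected_cost(num_items, keep_fraction):
    """Toy recurrence for the expected cost of regenerating item i when a
    fraction `keep_fraction` of both key states and pages is stored.
    Each of item i's two inputs is modeled as a uniformly random earlier
    item, missing (and needing recursive regeneration) with probability
    q = 1 - keep_fraction."""
    q = 1.0 - keep_fraction
    cost = [1.0]                 # item 0 has no predecessors
    prefix = 1.0                 # running sum of cost[0..i-1]
    for i in range(1, num_items):
        avg_prev = prefix / i    # expected cost of a random earlier item
        cost.append(1.0 + 2 * q * avg_prev)   # one unit of work + two inputs
        prefix += cost[-1]
    return cost

for frac in (1.0, 0.5, 0.25):
    last = expected_cost(1 << 14, frac)[-1]
    print(f"keep {frac:4.2f} of state+pages: cost of last item = {last:10.1f}")
```

In this toy model the per-item cost at keep_fraction = 1/4 grows
polynomially in numPages rather than logarithmically, so the attacker's
total recomputation work is far superlinear and small-memory attacks stop
paying off.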

Does anyone see any other weaknesses in my simple approach of generating
"pages" sequentially from random previous pages, with a very simple hash
and a page of key state that updates with each page generated?

Some thoughts: later pages are accessed less than early pages, but why do I
care?  If the attacker has to keep later pages in RAM, we've won.
Generating pages sequentially like this is bad for multi-threading, so I'll have
to come up with something.  Alexander's solution is good for this.  By the
way, I'll just throw a little fan admiration Alexander's way.  Few people I
have ever run into can optimize code like that.

