Message-ID: <CAOLP8p5ANjavDtHf_RYxAxWC2MFK+cz=CeZV-pxNAV11od0eyQ@mail.gmail.com>
Date: Sat, 18 Jan 2014 12:43:12 -0500
From: Bill Cox <waywardgeek@...il.com>
To: discussions@...sword-hashing.net
Subject: Modified pseudo-random distribution in NoelKDF

The original NoelKDF loop looked a bit like:

v[0] = SHA256(password, salt)
for i = 1 .. memlen-1
    v[i] = H(v[i-1], v[rand() % i])
output v[memlen-1]

This has tons of issues, but it illustrates one theme I've kept so
far in NoelKDF.  Each memory location (or "block" of memory locations)
depends on the previous one as well as a pseudo-random prior memory
location (or block).

My theory was that attackers using much less than 1/4 of the memory
in a TMTO would suffer a massive recomputation penalty.  It turns out
that the recomputation penalty with this uniform distribution of edge
lengths is not high enough.  So, I've modified the distribution to
look like:

v[0] = SHA256(password, salt)
for i = 1 .. memlen-1
    edgeLength = floor( (i - 1) * (pseudo-random value between 0 and 1) ^ 3 )
    v[i] = H(v[i-1], v[i - edgeLength])
output v[memlen-1]

Basically, instead of a uniformly random edge length, I heavily
weight the edge length distribution toward shorter edges by cubing the
random value between 0 and 1 before multiplying it by the memory
position.  This causes the sub-DAGs rooted at a random location to use
more high memory locations, which in turn require more low memory
locations.
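To make the change concrete, here is a minimal C sketch of the fill
loop with the cubed distribution.  The hash H, the PRNG, and the word
size are stand-ins rather than the actual NoelKDF code, and I index the
prior location as v[i - 1 - edgeLength] so the sketch never reads an
entry that hasn't been written yet:

#include <math.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for the real hash H(a, b); not the actual NoelKDF hash. */
static uint64_t H(uint64_t a, uint64_t b) {
    uint64_t x = a ^ (b + 0x9E3779B97F4A7C15ULL);
    x ^= x >> 33;
    x *= 0xFF51AFD7ED558CCDULL;
    x ^= x >> 29;
    return x;
}

/* Fill memory so v[i] depends on v[i-1] and on a prior location whose
 * distance back is heavily weighted toward short edges by cubing a
 * uniform random value in [0, 1) before scaling it by the position. */
static void fillMemory(uint64_t *v, size_t memlen, uint64_t seed) {
    v[0] = seed;  /* stand-in for SHA256(password, salt) */
    for (size_t i = 1; i < memlen; i++) {
        double r = (double)rand() / ((double)RAND_MAX + 1.0);
        size_t edgeLength = (size_t)floor((double)(i - 1) * r * r * r);
        v[i] = H(v[i - 1], v[i - 1 - edgeLength]);
    }
}

With the original uniform distribution, the second argument would
simply be v[rand() % i]; the cubing is the only change.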

When an attacker attempts to use much less than 1/4 of the memory, he
begins to run into a steep recalculation penalty.  I've run
simulations for various pebble distributions in memory, and the best
one so far (for the attacker) that I've found is the simple uniform
distribution.  I suspect better ones exist.  Here's the data I've
collected so far.

After filling memory, I run a "cheat killer" pass that requires the
attacker to prove he knows the values of 1% of password-dependent
pseudo-random memory locations.  The average sizes of the sub-DAGs
that must be recomputed for a uniform pebble distribution are:

    memlen     mem covered     ave sub-DAG size     recomp penalty
       100,000        25%             0.53%               5.3X
     1,000,000        25%             0.31%                31X
    10,000,000        25%             0.13%               130X
       100,000      12.5%            21.7%                217X
     1,000,000      12.5%            21.6%              2,160X
    10,000,000      12.5%            17.8%             17,800X
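
If I read these numbers right, the penalty column is just the average
sub-DAG size multiplied by the number of cheat-killer checks; taking
the first row as an example (this is my reading of the numbers, not a
separate measurement):

    checks per pass   = 1% of 100,000 locations   =   1,000
    avg recomputation = 0.53% of 100,000 nodes    =     530
    attacker work     = 1,000 * 530               = 530,000 hashes
    defender work     = 100,000 hashes
    recomp penalty    = 530,000 / 100,000         =    5.3X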

For a graph with 10 million nodes (40MB of memory), an attacker
keeping every 4th memory location will have to do 130 times more
computations than the defender.  For an attacker keeping every 8th
memory location, he'll have to do 17,800 times more computation than
the defender.
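
Continuing the C sketch above, here is roughly how I picture the
cheat-killer pass that forces these recomputations.  Only the 1% figure
comes from the description above; the way the checked indices are
derived here is an assumption for illustration:

/* Cheat-killer pass: derive ~1% of the indices pseudo-randomly from
 * password-dependent state and fold the values at those locations into
 * the result, so the attacker must actually produce them. */
static uint64_t cheatKiller(const uint64_t *v, size_t memlen, uint64_t state) {
    size_t checks = memlen / 100;     /* 1% of the memory locations */
    uint64_t acc = state;             /* password-dependent starting state */
    for (size_t c = 0; c < checks; c++) {
        acc = H(acc, c);                      /* step the pseudo-random walk */
        size_t idx = (size_t)(acc % memlen);  /* location the attacker must know */
        acc = H(acc, v[idx]);
    }
    return acc;
}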

For the graph with 10 million nodes, the cut size at the middle of
memory is 17.5%, or 1,750,000 edges.  Storing a single such cut takes
over 1/8th of the memory, so such cuts are unlikely to provide
effective attacks.

This scheme seems to begin offering decent protection against TMTO for
attackers using 1/4 the memory, and seems to provide a lot of
protection against anyone using 1/8th or less.

Can anyone find a pebble distribution that is worse (for the defender)
than evenly spaced?  I've tried various combinations of evenly spaced
pebbles, putting pebbles on nodes with multiple edges pointing at
them, and placing the pebbles more densely early in memory.  Nothing
seems to beat evenly spaced so far.  If you suggest a distribution, I
can easily test it.  There is no reason to consider attackers who move
their pebbles during computation, because I am willing to let the
attacker choose an optimal placement at the beginning and "reset" it
for free after each sub-DAG recomputation.  Therefore, I only have to
consider the best possible static pebble placements.
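
For anyone who wants to suggest a placement, the simulation boils down
to something like the following C sketch: given a static pebble set,
count the ancestors of a target node that must be recomputed, stopping
at pebbled nodes.  Here prev[i] holds the pseudo-random parent of node
i; the iterative traversal and data layout are illustrative choices,
not the actual simulator:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Count the nodes that must be recomputed to produce node t under a static
 * pebble placement: walk ancestors via the sequential edge (i-1) and the
 * pseudo-random edge (prev[i]), stopping at pebbled nodes. */
static size_t subDagSize(size_t t, const size_t *prev, const uint8_t *pebbled,
                         uint8_t *visited, size_t memlen) {
    if (pebbled[t])
        return 0;                   /* the attacker already holds this value */
    size_t *stack = malloc(memlen * sizeof *stack);
    size_t top = 0, count = 0;
    memset(visited, 0, memlen);
    visited[t] = 1;
    stack[top++] = t;
    while (top > 0) {
        size_t n = stack[--top];
        count++;                    /* node n must be recomputed */
        if (n == 0)
            continue;
        size_t parents[2] = { n - 1, prev[n] };
        for (int k = 0; k < 2; k++) {
            size_t p = parents[k];
            if (!visited[p] && !pebbled[p]) {
                visited[p] = 1;
                stack[top++] = p;
            }
        }
    }
    free(stack);
    return count;
}

Averaging that count over pseudo-random target locations and dividing
by memlen gives a number comparable to the "ave sub-DAG size" column
above.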

I've also tried a number of DAG architectures other than random.
I've got what I call a "safe dating tree", where adjacent leaves do
not have a common parent until the root, leaves 2 apart have no common
parent until a child of the root, leaves 4 apart have no common parent
until level 3 in the tree, and so on.  This increases the size of the
sub-DAGs that have to be recomputed a bit for evenly spaced pebbles,
but it creates opportunities for clever pebble placement to defeat it.
So far I have not come up with anything better than pseudo-random
graphs for maximizing the average size of the sub-DAG that requires
recomputation while resisting clever pebble placement attacks.

Bill
