Message-ID: <365503029.94040.1391540463045.open-xchange@email.1and1.com>
Date: Tue, 4 Feb 2014 13:01:02 -0600 (CST)
From: Steve Thomas <steve@...tu.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] Re: NoelKDF ready for submission
I probably should have CC'd or just replied to the discussions mailing list, but
here's the analysis I did on NoelKDF. I found some new stuff; it's at the bottom.
*** Start ***
I found two bugs in your reference code:
NoelKDF() Line 102:
+ numblocks <<= startGarlic;
for(i = startGarlic; i <= stopGarlic; i++) {
NoelKDF() Line 107:
- value += mem[2 * p * numblocks * blocklen + blocklen - 1];
+ value += mem[(2 * p + 1) * (uint64) numblocks * blocklen - 1];
I'm assuming this is a bug, since the original is using the last int in the first
block, which is known early on.
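As a rough illustration of why the original index is weak, here is a minimal sketch of the two index computations from the diff above. The memory layout (thread p owning 2 * numblocks blocks of blocklen 32-bit words each) is my guess from the diff, not taken from the reference code:

#include <stdint.h>
#include <stdio.h>

/* Original: last word of thread p's FIRST block, which is computed almost
   immediately, so keeping it around costs an attacker nothing. */
static uint64_t origIndex(uint64_t p, uint64_t numblocks, uint64_t blocklen)
{
    return 2 * p * numblocks * blocklen + blocklen - 1;
}

/* Proposed: last word of the LAST block of thread p's first half, which is
   only known once that whole half has been filled in. */
static uint64_t fixedIndex(uint64_t p, uint64_t numblocks, uint64_t blocklen)
{
    return (2 * p + 1) * numblocks * blocklen - 1;
}

int main(void)
{
    /* purely illustrative parameters */
    uint64_t p = 1, numblocks = 4096, blocklen = 4096;
    printf("orig=%llu fixed=%llu\n",
           (unsigned long long)origIndex(p, numblocks, blocklen),
           (unsigned long long)fixedIndex(p, numblocks, blocklen));
    return 0;
}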
I also did some analysis of the access patterns. To build the first half with
hashWithoutPassword(), you only need 31% to 25% of that half in active memory,
if you don't want to regenerate data. Also, you don't need to store 8.67% to
10.94% of the total memory, because these blocks are not required in the first
half. Since only 35.03% to 35.27% of the lookups from the second half land in
the first half, regeneration of these will be minimal. Also, you could keep four
times as many pebbles in the second half as in the first: obviously you would
start with them all in the first half and then move them over to the second
half. This has an added benefit, because near the start of the second half it is
much more likely (80.16% at 1 MiB to 98.03% at 1024 MiB) that a lookup lands in
the first half; by the end that drops to a 20.63% chance.
I still need to do more but this is just what caught my eye.
The best I got is an 8.67% to 10.94% reduction in memory for, on average, 3.06%
to 3.86% more time.
------
"31% to 25%" was found by keeping track of each block's last access and just
finding the maximum number of blocks that were needed later on at any given
point in time.
"8.67% to 10.94%" was found by observe which blocks never got accessed and not
storing them, but store the block if the previous block was not stored.
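A minimal sketch of how those two measurements could be reproduced, assuming an access trace has already been collected from the reference code. The lastRead array below is hypothetical: lastRead[b] is the step of the last read of block b (assumed less than blocks), or 0 if the block is never read again, with block b written at step b:

#include <stdlib.h>
#include <stdbool.h>

/* "31% to 25%": peak count of blocks that are already written but still have
   a future read, i.e. the fewest pebbles needed to avoid recomputation.
   Divide the result by blocks to get the fraction. */
static size_t peakLiveBlocks(const size_t *lastRead, size_t blocks)
{
    /* deaths[t] = number of blocks whose final read happens at step t */
    size_t *deaths = calloc(blocks, sizeof *deaths);
    size_t live = 0, peak = 0;
    if (!deaths)
        return 0;
    for (size_t b = 0; b < blocks; b++)
        if (lastRead[b] != 0)
            deaths[lastRead[b]]++;
    for (size_t t = 0; t < blocks; t++) {
        if (lastRead[t] != 0)
            live++;            /* block t is written at step t and read later */
        if (live > peak)
            peak = live;
        live -= deaths[t];     /* blocks read for the last time at step t */
    }
    free(deaths);
    return peak;
}

/* "8.67% to 10.94%": blocks that are never read can be dropped, except that a
   block is always kept when its predecessor was dropped. */
static size_t droppableBlocks(const size_t *lastRead, size_t blocks)
{
    size_t dropped = 0;
    bool prevStored = true;
    for (size_t b = 0; b < blocks; b++) {
        bool store = (lastRead[b] != 0) || !prevStored;
        if (!store)
            dropped++;
        prevStored = store;
    }
    return dropped;
}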
These next three are estimates that are accurate to around 7 decimal places.
"35.03% to 35.27%" = (sum of (1 - (floor((2 ^ 96 * i / (i + blocks - 1)) ^
(1/3)) + 1) / 2 ^ 32) for i = 0 to blocks-1) / blocks
This came from the formula for "distance" and distance >= i:
(i + blocks - 1) * value ^ 3 / 2 ^ 96 >= i
value < (2 ^ 96 * i / (i + blocks - 1)) ^ (1/3)
Let i = {0, 1, 2, ..., blocks-1}, blocks = {128, 256, 384, ..., 130944, 131072}
and solve for value
Sum them up and divide by blocks.
"80.16% 1 MiB to 98.03% 1024 MiB" = 1 - (floor((2 ^ 96 / blocks) ^ (1/3)) + 1) /
2 ^ 32
This came from the formula for "distance" and distance >= i:
(i + blocks - 1) * value ^ 3 / 2 ^ 96 >= i
value < (2 ^ 96 * i / (i + blocks - 1)) ^ (1/3)
Let i = 1, blocks = {128, 256, 384, ..., 130944, 131072} and solve for value
* Note for i = 0 it is 100% chance it is from the first half
"20.63%" = 1 - (floor((2 ^ 96 / 2) ^ (1/3)) + 1) / 2 ^ 32
This came from the formula for "distance" and distance >= i:
(i + blocks - 1) * value ^ 3 / 2 ^ 96 >= i
value < (2 ^ 96 * i / (i + blocks - 1)) ^ (1/3)
Let i = blocks - 1 and solve for value
"3.06% to 3.86%" = 8.67% * 35.27% to 10.94% * 35.27%
*** End ***
Now for the new stuff:
xorIntoHash() needs to be replaced, preferably with a cryptographic hash that
uses more than the last hash-size bytes of data. You can reverse a 256-bit
non-parallel hash to recover, on average, 3 bits of the state in "mem" from 7
steps prior through hashBlocks()'s "fromAddr", as well as 6 bits from 6 steps
prior, 9 bits from 5 prior, 12 bits from 4 prior, ..., 21 bits from 1 prior, and
256 bits from the last block.
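To illustrate the underlying concern with a toy model (a hypothetical simplification on my part, not the actual NoelKDF code or the reversal described above): any purely XOR-based combining step is invertible, so knowledge of the output leaks information about its inputs, which a one-way hash over a wider window of "mem" would not.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* toy "state" words standing in for data XORed into the final hash */
    uint32_t a = 0x12345678u, b = 0x9abcdef0u, c = 0x0badf00du;
    uint32_t out = a ^ b ^ c;              /* invertible fold */

    /* knowing the output and two of the inputs reveals the third */
    uint32_t recovered = out ^ a ^ c;
    printf("recovered = %08x (expected %08x)\n", recovered, b);
    return 0;
}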