Date: Fri, 4 Apr 2014 22:58:12 +0200
From: Krisztián Pintér <pinterkr@...il.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] Gambit review

Bill Cox (at Friday, April 4, 2014, 10:37:43 PM):

> No problem.  I'm enjoying reading everyone's code.  Sorry about being
> a jerk sometimes.

i'll get over it one day.

> Did you use bit reversal before reading Catena?

i never published it, so it does not count. i accidentally discovered
the good mixing property of bit reversal as a child, while looking for
an algorithm that calculates points of the mandelbrot set in an order
that gives an overview of the scene as soon as possible.
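
to make that ordering concrete, here is a tiny sketch (nothing
gambit-specific; BITS and the printing are just for illustration):
iterating 0..2^BITS-1 in bit-reversed order makes successive indices
land far apart, so a coarse picture of the whole range appears early.

  #include <stdio.h>

  #define BITS 4  /* 16 indices, small enough to eyeball */

  static unsigned bitrev(unsigned x, unsigned bits)
  {
      unsigned r = 0;
      for (unsigned i = 0; i < bits; i++) {
          r = (r << 1) | (x & 1);
          x >>= 1;
      }
      return r;
  }

  int main(void)
  {
      /* prints 0 8 4 12 2 10 6 14 1 9 5 13 3 11 7 15: each new index
         lands in the middle of the largest untouched gap */
      for (unsigned i = 0; i < (1u << BITS); i++)
          printf("%u ", bitrev(i, BITS));
      printf("\n");
      return 0;
  }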

> If you
> like, I could add your memory pattern to my pebbling algorithm to see
> how it does.

that would certainly be very useful, as it might uncover some
weakness. alas, it cannot provide a proof for real-life sizes. i
tried to come up with a proof, but i'm no math person.

>   I am still trying to figure out what the impact is of
> XORing over memory rather than overwriting it.

that one i can answer. if you overwrite, the slot becomes unused for a
while: once it has been read, and until it gets a new value, it just
sits there idle. at any point in time a certain fraction of the memory
(quite a large fraction, in fact) is in this idle state. by cleverly
reusing such slots, an attacker can run the algorithm with a smaller
memory footprint. i prevent this by never discarding any value.

xoring could be omitted while absorbing (overwrite mode), but that
does not seem to gain much.
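
a toy contrast of the two update rules, if that helps (H(), tau() and
the sizes are made-up stand-ins, not gambit's actual round function or
parameters):

  #include <stdint.h>

  #define N 1024
  static uint64_t mem[N];

  static uint64_t H(uint64_t a, uint64_t b)   /* placeholder mixer */
  {
      uint64_t x = a ^ (b + 0x9e3779b97f4a7c15ULL);
      x ^= x >> 31;  x *= 0xbf58476d1ce4e5b9ULL;  x ^= x >> 29;
      return x;
  }

  static unsigned tau(unsigned i)   /* stand-in for a fixed,
                                       data-independent read order */
  {
      return (i * 257) % N;
  }

  void pass_overwrite(void)
  {
      uint64_t x = mem[0];
      for (unsigned i = 1; i < N; i++) {
          x = H(x, mem[tau(i)]);
          mem[i] = x;    /* old content of slot i is discarded; between
                            the moment a slot was last read and the
                            moment it gets a new value it sits idle,
                            and such slots can be recycled to shrink
                            the footprint */
      }
  }

  void pass_xor(void)
  {
      uint64_t x = mem[0];
      for (unsigned i = 1; i < N; i++) {
          x = H(x, mem[tau(i)]);
          mem[i] ^= x;   /* old content stays folded into the slot:
                            nothing is ever discarded, so no slot
                            goes idle */
      }
  }

the xor rule is just the "never discard any value" property written
out in code.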

> I still think there is maybe a 100X to 1000X potential cache access
> speedup in an ASIC for predictable addressing,

predictable addressing is something i'm not willing to give up; in my
view it is an absolute necessity. i'd rather present a weaker
algorithm than one that can be cracked wide open using side-channel
attacks.
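
the distinction in a few lines of C (names and sizes made up; the
data-dependent style is what scrypt-like designs do):

  #include <stdint.h>

  #define N 1024

  /* data-independent: the address is a function of the public loop
     counter only, so cache-timing observations reveal nothing secret */
  static unsigned addr_predictable(unsigned i)
  {
      return (i * 257) % N;
  }

  /* data-dependent: the address is derived from secret state, so the
     pattern of cache lines touched leaks information about the
     password being hashed */
  static unsigned addr_secret(uint64_t state)
  {
      return (unsigned)(state % N);
  }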

> Now that I've seen a couple entries that use the new AESENC

as soon as keccak-ni arrives in 2019 processors, you will be a fan of
gambit instead.
