Message-ID: <4d890958b0be4ba4a4f0f7750215a767@BY2PR03MB074.namprd03.prod.outlook.com>
Date: Thu, 23 Jan 2014 22:33:59 +0000
From: Marsh Ray <maray@...rosoft.com>
To: "discussions@...sword-hashing.net" <discussions@...sword-hashing.net>
CC: "waywardgeek@...il.com" <waywardgeek@...il.com>
Subject: RE: [PHC] Initial multiply-compute-hardened Catena-3 benchmark
Assume the attacker is able to rent a virtual machine sharing the same CPU as the defender. By carefully observing the latency of his own cache accesses, he can learn which memory locations the defender touches while hashing a specific user's password. This has been demonstrated in practice.
Now say the PHC algorithm imposes memory-latency hardness by using the supplied password to seed a PRNG that picks memory addresses within a large memory block. Going by the general rule that 50% of users choose one of the 10,000 most common passwords, the attacker needs to learn only one or two cache-line addresses to have a better than 50% chance of guessing the user's password on the first try: he can replay the address sequence for each of the 10,000 candidates offline, and with millions of cache lines in the block, even a single observed address rules out essentially every wrong candidate.
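
A minimal sketch of that offline search, assuming a hypothetical KDF that seeds its address-selection PRNG directly from the password. The seed derivation and PRNG below are illustrative stand-ins, not any actual PHC entry:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CACHE_LINES (1u << 24)   /* 1 GiB block / 64-byte cache lines */
#define OBSERVED    2            /* cache-line indices leaked via the side channel */

/* Stand-in for the KDF's password-to-seed step (hypothetical FNV-1a). */
static uint64_t seed_from_password(const char *pw)
{
    uint64_t h = 14695981039346656037ULL;
    for (; *pw; pw++) h = (h ^ (uint8_t)*pw) * 1099511628211ULL;
    return h;
}

/* Stand-in for the KDF's address-selection PRNG (hypothetical xorshift64). */
static uint64_t next_address(uint64_t *s)
{
    *s ^= *s << 13; *s ^= *s >> 7; *s ^= *s << 17;
    return *s % CACHE_LINES;
}

int main(void)
{
    /* The attacker replays the first few address selections for each common
     * password and keeps only the candidates consistent with what he saw. */
    const char *top_passwords[] = { "123456", "password", "qwerty" /* ... top 10,000 */ };
    uint64_t leaked[OBSERVED] = { 0, 0 };   /* cache-line indices observed on the shared CPU */

    for (size_t i = 0; i < sizeof top_passwords / sizeof *top_passwords; i++) {
        uint64_t s = seed_from_password(top_passwords[i]);
        int match = 1;
        for (int k = 0; k < OBSERVED; k++)
            if (next_address(&s) != leaked[k]) { match = 0; break; }
        if (match)
            printf("candidate: %s\n", top_passwords[i]);
    }
    return 0;
}
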
- Marsh
-----Original Message-----
From: Dennis E. Hamilton [mailto:dennis.hamilton@....org]
Sent: Thursday, January 23, 2014 12:56 PM
To: discussions@...sword-hashing.net
Cc: waywardgeek@...il.com
Subject: RE: [PHC] Initial multiply-compute-hardened Catena-3 benchmark
I notice that "timing attack" is used here, yet the usual sense of that term doesn't seem to apply.
Do you really mean that there is a feasible way to mitigate a work factor that a particular measure imposes?
Maybe those should be "speed-up" or "accelerator" attacks?
If you do mean "timing attack," please say more about what you are thinking of.
- Dennis
-----Original Message-----
From: Bill Cox [mailto:waywardgeek@...il.com]
Sent: Thursday, January 23, 2014 11:10
To: discussions@...sword-hashing.net
Subject: [PHC] Initial multiply-compute-hardened Catena-3 benchmark
I've checked in a branch of Catena that replaces the hashing function with a simple multiply, OR, and ADD. I also made it hash blocks of 4096 bytes of memory at once rather than 64. The result runs 13X faster, filling 1GB of memory in 1.37 seconds on a single thread.
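
For illustration only, here is a rough guess at the general shape of such a multiply/OR/ADD block hash; the actual operations and ordering in the Catena branch may well differ:

#include <stdint.h>
#include <stddef.h>

#define BLOCK_WORDS (4096 / sizeof(uint64_t))   /* 4096-byte blocks */

/* Mix a previous block into the current block in place, one 64-bit word at a
 * time, using only multiply, OR, and ADD so the serial latency chain is
 * dominated by the multiplier (the "multiply-compute-hardened" idea). */
static void hash_block(uint64_t *block, const uint64_t *prev, uint64_t *state)
{
    uint64_t v = *state;
    for (size_t i = 0; i < BLOCK_WORDS; i++) {
        v = (v * (prev[i] | 1)) + block[i];   /* |1 keeps the multiplicand odd */
        block[i] = v;
    }
    *state = v;
}
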
In comparison, NoelKDF hashes 1GB in 0.42 seconds on 1 thread, or 3.2X faster. However, there's a TMTO attack against NoelKDF that runs in almost the same time but uses only 0.5GB. There is no such attack against Catena-3, IMO.
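
To make the TMTO idea concrete, here is a toy illustration of the store-half/recompute-on-demand principle for a simple hash chain. NoelKDF's real dependency graph is more involved, so this shows only the general shape of such an attack, not the specific one against NoelKDF:

#include <stdint.h>
#include <stdlib.h>

#define N_BLOCKS (1u << 14)

/* Stand-in block hash (illustrative), with block[i] = H(block[i-1]). */
static uint64_t H(uint64_t x) { return x * 0x9E3779B97F4A7C15ULL + 1; }

static uint64_t *stored;   /* the attacker keeps only even-indexed blocks */

static void fill(void)
{
    uint64_t b = 12345;                       /* block[0] */
    stored = malloc((N_BLOCKS / 2) * sizeof *stored);
    for (uint32_t i = 0; i < N_BLOCKS; i++) {
        if ((i & 1) == 0) stored[i / 2] = b;  /* store half the blocks... */
        b = H(b);                             /* ...while computing them all once */
    }
}

/* Fetch block[i]: free for even i, one extra hash for odd i, so memory is
 * halved for a modest recomputation penalty. */
static uint64_t get_block(uint32_t i)
{
    return (i & 1) == 0 ? stored[i / 2] : H(stored[i / 2]);
}
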
Taking that into account, Catena-3 takes only about 40% longer to hash the same memory as an attacker-optimized version of NoelKDF. I'm leaning towards Catena-3 now for its timing-attack resistance, at least if we have an option for a fast hash in the inner loop.
Bill