Message-ID: <20140501044507.GA25157@openwall.com>
Date: Thu, 1 May 2014 08:45:07 +0400
From: Solar Designer <solar@...nwall.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] on timing attacks

On Mon, Apr 28, 2014 at 04:25:34PM +0200, Krisztián Pintér wrote:
> scenario: we can run a task on the same computer, or otherwise can
> listen in on its memory usage patterns (power analysis, etc). but
> otherwise we have no access to either any memory or on-disk databases.
> we can acquire a memory access fingerprint, that is, some
> characteristics of the pattern in which memory is accessed (number of
> cache misses in a time window, access of certain memory locations or
> blocks, etc). this fingerprint is unique to the password/salt
> combination. therefore we can check a password/salt hypothesis by
> running the algorithm against it, and matching to the access
> fingerprint.
> 
> granted, this attack is largely thwarted by secret salt. however, in
> some situations, for example in case of server relief, the salt might
> be available to the attacker, while the password hash is not.

However, note that in case of "full" server relief there would be no
secret-dependent memory lookups on the server (there would be only a
quick last step, like HMAC), so that case doesn't fit your scenario.
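
To illustrate the split, here's a rough sketch (Python's hashlib/hmac
used purely as placeholders; the scrypt parameters and function names
are made up here, not taken from any particular PHC submission):

import hashlib, hmac

def client_hash(password: bytes, salt: bytes) -> bytes:
    # All of the expensive, cache-timing-sensitive memory-hard work
    # happens on the client, against the salt it was given.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                          dklen=32)

def server_finish(client_result: bytes, salt: bytes) -> bytes:
    # The server's quick last step ("like HMAC"): here HMAC-SHA256
    # keyed with the salt, just as one possible choice of key.  No
    # secret-dependent memory lookups happen on the server side.
    return hmac.new(salt, client_result, hashlib.sha256).digest()

def server_verify(client_result: bytes, salt: bytes,
                  stored: bytes) -> bool:
    return hmac.compare_digest(server_finish(client_result, salt),
                               stored)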

In case of "partial" server relief - where both the client and the
server do a significant amount of processing - yes, your scenario may
apply, and yes a (second) secret(*) salt for the server's portion of
computation would mitigate it.  However, note that in that case,
without a second per-account secret salt, another attack would be
possible: precomputation of hashes of candidate passwords to the known
salts (which are revealed to the client to allow for server relief)
before the attacker gains access to the hashes.  That way, the
attacker, due to
advance planning, may quickly crack some passwords and access the
corresponding accounts right upon gaining (perhaps read-only) access to
the (otherwise too slow to compute) hashes.  Thus, the same mitigation
is desirable _anyway_, even with fully cache-timing resistant hashes, if
partial server relief is used at all.  With full server relief, this
mitigation isn't even possible: it wouldn't actually mitigate the
precomputation attack.
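
For concreteness, the partial-relief variant with a second per-account
secret salt could look roughly like this (again a sketch only; the use
of scrypt for both halves and the parameters are illustrative, not any
specific scheme):

import hashlib

def client_portion(password: bytes, public_salt: bytes) -> bytes:
    # The public salt is revealed to the client so it can do its share
    # of the work - and an attacker can precompute against it.
    return hashlib.scrypt(password, salt=public_salt, n=2**14, r=8,
                          p=1, dklen=32)

def server_portion(client_result: bytes, secret_salt: bytes) -> bytes:
    # The server's significant share of the processing is keyed on a
    # second per-account salt stored alongside the hash (secret only
    # for as long as the hash is).  Without it, precomputation against
    # the public salt alone doesn't yield the stored value, and the
    # server-side memory access pattern no longer depends on just the
    # password and the public salt.
    return hashlib.scrypt(client_result, salt=secret_salt, n=2**14,
                          r=8, p=1, dklen=32)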

Arguably, there are more significant problems with server relief here
than with cache timing leaks, although the corresponding attacks are
different enough that they can't be directly compared as more vs. less
significant outside of specific usage context.

(*) In the above context, the salts only need to be secret as long as
the hashes are, so they're not required to be any more secret than e.g.
Unix password hashes' salts have traditionally been.  Storage along with
the hashes is still OK.

> in general, salts are not considered secret, so accessing them or
> guessing them might be a possibility.

Historically, salts haven't been considered secret, but there's this
precomputation attack and there are timing attacks on comparison of
hashes (of course, the latter can also be dealt with by using a
constant-time comparison function, if one is available or can be
implemented).  Now side-channel attacks on hash functions themselves
are added to these two attack categories, for a total of three (or are
there more?).
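
Such a comparison function is also easy to implement where the platform
doesn't provide one.  A quick sketch (Python; there the ready-made
option is hmac.compare_digest()):

def consttime_eq(a: bytes, b: bytes) -> bool:
    # Accumulate the XOR of all byte pairs so that the running time
    # depends only on the (public) lengths, not on where the first
    # mismatching byte occurs.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0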

I think it's fair game to assume or stipulate that salts, while not
secret for purposes of possible offline attacks, are stored along with
the hashes and are not revealed separately, except if mandated by the
protocol (such as for server relief), in which case some security
tradeoffs are accepted (and need to be understood and possibly mitigated
by other means).

> in short, correlation between secret information and memory access
> patterns not only offers a shortcut, but in fact opens up a new attack
> vector that previously was not present.

I agree.

Alexander
