Message-ID: <20140826003350.GA10700@openwall.com>
Date: Tue, 26 Aug 2014 04:33:50 +0400
From: Solar Designer <solar@...nwall.com>
To: discussions@...sword-hashing.net
Subject: Re: [PHC] Tradeoff cryptanalysis of password hashing schemes
Hi Dmitry,
On Sun, Aug 24, 2014 at 12:25:31PM +0400, Dmitry Khovratovich wrote:
> I do not quite agree with you here.
I am fine with the difference of opinion, but I think there's also some
misunderstanding here, so I'll try to clarify:
> On Sat, Aug 23, 2014 at 11:16 PM, Solar Designer <solar@...nwall.com> wrote:
> > (I think per Colin's definition of sequential memory-hard function it
> > is OK if "reducing the memory does not increase the cost", as long as
> > the product of space and time does not decrease by more than a constant
> > factor. You appear to be using a stricter definition/requirement here.
> > We might need a different name for tradeoff-resistant functions like
> > that, or always mention that property separately.)
My point was that "sequential memory-hard function" is already defined
in the way Colin did it, whether we like it or not, so another term or
an additional clarification should be used whenever we refer to a
function necessarily meeting a stricter requirement. This is not a
terribly important point, and it doesn't significantly affect the
discussion we had, hence I put it in parentheses.
> The definition you mention is about asymptotics. Of course, in practice a
> constant factor does make sense, for instance if it is 1000 or so.
Sure. This was actually one of my reasons to design yescrypt, instead
of simply being happy with scrypt.
For scrypt, the factor is at most 2 or 4, depending on what you count,
although this is for theoretical space and time; implementation
peculiarities could push it a bit(?) further. To me, even a factor of
2 or 4 is something to care about and try to avoid if practical.
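To make the 2-or-4 figure concrete, here's a back-of-the-envelope model
of the classic "store every k-th element of V, recompute the rest"
tradeoff against scrypt's SMix. It's only a sketch under simplifying
assumptions (time counted in BlockMix calls, memory in stored blocks,
no caches or die-area details), not an exact analysis:

    # Naive scrypt TMTO model: keep every k-th V element, recompute the
    # missing ones on demand during the second SMix loop.
    def smix_area_time(N, k):
        time_loop1 = N                      # sequential fill of V
        time_loop2 = N * (1 + (k - 1) / 2)  # ~(k-1)/2 recomputed blocks per lookup
        memory = N / k                      # peak memory with the tradeoff
        return (time_loop1 + time_loop2) * memory

    N = 1 << 20
    full = smix_area_time(N, 1)             # no tradeoff: 2*N*N
    for k in (2, 4, 8, 64, 1024):
        print(k, full / smix_area_time(N, k))  # approaches 4 as k grows

Under this peak-memory accounting the reduction approaches 4 as k
grows; counting the no-tradeoff baseline's memory differently (e.g., as
an average over the whole computation) presumably gives the smaller
figure, hence "depending on what you count".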
> Even if it is not that big, any cost reduction is beneficial for the
> adversary. Furthermore, the product of time and space is quite rough a
> metric. It is relevant
> when your chip consists of homogeneous cores, and the cost is proportional
> to the total number of calls to such cores. Whereas this model seems
> correct for exhaustive key search or Bitcoin mining, its applicability to
> functions that use Gbytes of RAM is doubtful, to say the least.
Yes, perhaps.
> Coupling this with the concept that a good hashing scheme should
> motivate the adversary to crack it on the same hardware and the same
> parameters where it is running, we get that a memory-hard function should
> prohibit as much as possible any cost reduction for the adversary, by
> constant factor or not.
That's my opinion too. It's just not part of the definition.
> > I think PHC is quite different from usual cryptographic competitions in
> > that providing excessive (which probably means any at all) safety
> > margins e.g. against tradeoff attacks actually makes PHC candidates less
> > secure, not merely less attractive in other (non-security) ways (as it
> > would be the case with ciphers and regular crypto hashes). To ensure
> > e.g. yescrypt isn't broken in the sense you mention, I'd need to provide
> > some safety margin in my recommended value for t, or to define yescrypt
> > such that t=0 would correspond to a higher running time (e.g., to what
> > is now requested with t=1 or higher). However, I think doing so would
> > be counter-productive for actual goals of PHC, because there's a
> > security tradeoff here, not just a security vs. something else tradeoff.
> > The value of t that is optimal for security should be barely adequate to
> > withstand tradeoff attacks, with no safety margin.
>
> I am not sure that I understand correctly your notion of "security" in
> general, which for some reason may contradict the tradeoff resilience.
> The only such measurement of security that comes to my mind is the cost
> difference between running the hash scheme on server hardware and on
> adversary's hardware without any tradeoffs.
I primarily meant another aspect: the difference in memory usage by
different (or by differently-tuned, e.g. in terms of Lyra2's T or
yescrypt's t parameter) password hashing schemes at the same cost to the
defender. When we make a password hashing scheme more TMTO resilient,
or choose a scheme that is more TMTO resilient than another one as-is,
it tends to use less memory at the same defensive running time. So it
buys us less pre-TMTO security against offline attacks for the same
defensive cost. With Lyra2's T and yescrypt's t, this is
straightforward to see. I suspect it also holds true for most algorithm
revisions we can come up with. The (relatively few?) algorithm
revisions where this does not hold true are the good ones for us to
actually make.
The next question is then whether post-TMTO security is "less" or "more"
than it was with the less TMTO resilient scheme/settings. This question
isn't yet answered in your analysis (defensive benchmark data would
need to be collected and factored in), except in the case where the least TMTO
resilient scheme is also the slowest at filling memory, which may be the
case for Catena as you say.
> If you double the number of
> iterations and halve the memory (to keep the same running time), this gap
> may indeed increase (maybe by 10% or so, if we talk about Gbytes of RAM).
While I agree it can be a 10% or so difference in efficiency for a 2x
change in memory usage (this is actually consistent with my benchmarks
on defensive use hardware), I think your logic is flawed. Just how
would you "double the number of iterations" yet "keep the same running
time" (even at half the memory, if we don't consider caches for a
moment)? I guess you meant you'd double the number of iterations
relative to the memory cost setting (e.g., use 2*N instead of N
iterations in scrypt's SMix loops). However, this would still
correspond to the same iteration count in absolute terms. So same
iteration count, half the memory, half the pre-TMTO attack cost.
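In case it helps, here's that arithmetic spelled out, using a naive
time*memory product as the cost metric (an assumption for illustration,
not the attacker's real cost function):

    # "Double the iterations relative to N, halve the memory" vs. original.
    # Iteration counts below are absolute; cost is the time*memory product.
    N = 1 << 20                         # original memory cost (blocks)
    original = (1 * N) * N              # N iterations over N blocks
    halved = (2 * (N // 2)) * (N // 2)  # 2*(N/2) iterations over N/2 blocks
    print(halved / original)            # 0.5: same absolute iteration count,
                                        # half the memory, half the
                                        # pre-TMTO attack cost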
> However, if your scheme is not tradeoff-resilient, an adversary will
> certainly use the most beneficial tradeoff. If he can reduce his costs
> threefold because of tradeoffs, then doubling the number of iterations
> actually makes his life worse, both in relative and absolute terms.
OK, you mention a threefold cost reduction from TMTO. In that example I
agree, there's a 1.5x cost reduction for the attacker here, in absolute
terms, compared to a hypothetical design that uses 2x less memory
defensively in the same running time, but is completely TMTO resistant.
So if an attack like this is found, it may be grounds to increase the
recommended iteration count (or make some other change).
However, a 2x or less cost reduction from TMTO wouldn't justify doubling
the iteration count. On the contrary, it would confirm that the
iteration count must not be doubled. Moreover, even a value "lower"
than what was initially analyzed could turn out to be closer to
optimal, so this hypothesis might need to be tested next. (I mean
"lower" relative to the requested memory cost, which could then be
increased for the same running time. Not lower in absolute terms.)
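Spelling out the numbers in this example, again with a naive
time*memory attacker cost (an assumption, not a precise ASIC cost
model):

    # Scheme A: memory N, running time T, but a TMTO cuts attacker cost 3x.
    # Scheme B: hypothetical, memory N/2 in the same time T, fully
    #           TMTO-resistant.
    N, T = 1 << 20, 1.0
    cost_A = (N * T) / 3       # attacker applies the best tradeoff
    cost_B = (N / 2) * T       # no tradeoff available
    print(cost_B / cost_A)     # 1.5: attacking A is 1.5x cheaper

With a 2x or smaller TMTO reduction, cost_A would be no lower than
cost_B, which is the "confirms it must not be doubled" case above.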
> > As a result, I'm
> > afraid we should go the other way around: from smaller t to higher if we
> > have to as shown by attacks, and we should not be too surprised to find
> > such attacks, nor should we throw babies out with the water merely
> > because the initial proposed settings didn't have a safety margin and
> > proved somewhat too weak against some attacks. They shouldn't have had
> > that safety margin, or the scheme would have been suboptimal
> > security-wise in another way!
>
> As shown by crypto competitions, it is quite easy to tweak your design
> against every new attack. If all such designs remain in competition, then
> instead of evolution-like selection of the best and simplest design, we get
> a zoo of over-complicated entries. Cryptanalysts not only have their eyes
> blurred because of looking at the same candidates, but also their attention
> is spread over many more targets than needed.
I agree that these unfortunate aspects do exist and apply.
> Also, given our limited insight into the hardware and cryptanalysis
> development over the next years, having no security margin does not make
> sense to me at all.
You make a valid point about the limited insight. However, do we really
want to sacrifice some tangible security today for what might or might
not buy us any extra security tomorrow, as long as we're not talking
about a potentially fatal future attack here, but merely some rather
limited attack cost reduction? Granted, some TMTO attacks, such as those on
scrypt, are beyond being "rather limited" - they actually enable GPUs to
be used to crack scrypt hashes much more efficiently (currently at
memory sizes of up to ~16 MB per hash, instead of under 1 MB per hash,
which would be the case otherwise). So I fully agree that we want to
have (maybe optional) TMTO resilience in new password hashing schemes.
However, we shouldn't make them TMTO resilient beyond reason, paying too
much for it in other security aspects.
If yescrypt's t were a hard-coded parameter, I'd include some (limited)
safety margin in its value. However, when it's externally supplied I
like to recommend a value for it that is currently believed to offer the
best security tradeoff for the near future, and adjust this
recommendation in the future if/as necessary. (Of course, some systems
may also use higher t simply because they can't afford to use more
memory, but can afford to spend more time. This was/is actually the
primary use for t, with its effect on TMTO resilience being a bonus.)
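For what it's worth, the selection procedure I have in mind is roughly
"max out memory first, then spend any leftover time budget on t". A
sketch, where hash_duration() is a hypothetical benchmarking helper and
not part of the yescrypt API:

    # Pick the largest affordable memory cost, then raise t until the
    # measured hashing time reaches the time budget.
    def choose_params(mem_budget_blocks, time_budget, hash_duration):
        mem = mem_budget_blocks    # use all the memory we're allowed to
        t = 0
        while hash_duration(mem, t + 1) <= time_budget:
            t += 1                 # spend leftover time on extra iterations
        return mem, t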
Cryptocoin use might be an exception, where the value effectively
becomes hard-coded for the lifetime of the coin. Maybe I should
recommend t=1 for such use right away, for coins that specifically want
to be ASIC-unfriendly.
Alexander