Date: Wed, 10 Sep 2014 13:48:55 -0700
From: Alex Elsayed <>
Subject: Re: Second factor (was A review per day - Schvrch)

Andy Lutomirski wrote:

> On Wed, Sep 3, 2014 at 4:15 PM, Alex Elsayed
> <> wrote:
>> Andy Lutomirski wrote:
>>> On Wed, Sep 3, 2014 at 3:45 PM, Alex Elsayed
>>> <> wrote:
>>>> Andy Lutomirski wrote:
>>>>> On Wed, Sep 3, 2014 at 3:29 PM, Alex Elsayed
>>>>> <> wrote:
>>>>>> Besides, why store _anything_ on the user's computer? The user types
>>>>>> in N, and then P, and neither has any related data stored anywhere on
>>>>>> the computer; the less you process P the less time it spends resident
>>>>>> in memory as well. Treat it like a hot potato and hand it straight to
>>>>>> the token.
>>>>> Because I want to trust my token as little as possible.
>>>> My point is it doesn't actually trust your token any less.
>>>> Handing your token F(P) instead of P doesn't matter, because F(P) is
>>>> still sufficient for a malicious token to never ask again if it stores
>>>> it - and a malicious token disclosing P rather than F(P) only matters
>>>> if your password hygiene is really terrible (reuse, etc).
>>>> Besides, the token _does_ rely on you relaying the second exchange for
>>>> it - if it tries to do an exchange when you didn't _initiate_ an
>>>> exchange, that's a 'KILL IT WITH FIRE' indicator; that leaves only that
>>>> it surreptitiously stores what you give it and waits for someone to
>>>> steal it from you.
>>> Let me try to say it more precisely.  I want a fourth security
>>> requirement: even if the token is actively malicious (e.g. records
>>> things it shouldn't, sends maliciously incorrect output, and leaks
>>> things to <insert government agency here>), then the protocol should
>>> still be as secure as either password-authenticated or
>>> encrypted-key-authenticated protocols, depending on whether the user
>>> stores a key file on his/her hard drive.
>>> I think that my protocol achieves this, or at least tries to.  Yours
>>> seems to be completely insecure in this threat model.
>> Mm, I see.
>>> Yours might be fixable for this purpose by having the user do a second
>>> PAKE exchange with the server, protected by the output of the first
>>> one.  This requires more round trips than my approach.
>> Actually, another option is to run step 4 in _parallel_ with a PAKE
>> exchange directly between U and A. Since the exchanges are the same, this
>> results (assuming some bundling) in a doubling of message _size_ (and the
>> server must store two verifiers), but not of the _number_ of messages.
>> At that point, though, you might as well not pass the token anything (or
>> pass it public data like the name of the service you're authenticating
>> to), again obviating the need for storing anything on disk while
>> satisfying the additional constraint unconditionally.
> Hmm, interesting.  I had assumed that a secure protocol involving
> password tokens needed some mechanism for authenticating the user to
> the token to prevent the token from being useful if stolen.  But maybe
> this is entirely unnecessary.

At least with SRP, I think it is unnecessary - the only output of the 
protocol (on both sides) is a key, which differs on each end if the protocol 
failed. If the _total_ derived key is the XOR of the derived keys from the 
two parallel executions, and the individual derived keys are never used for 
anything on their own, then one can only discern whether the _total_ 
authentication succeeded or failed, preserving the security property.
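A minimal sketch of that combination (the PAKE runs themselves are elided; the 32-byte keys and names here are placeholders, not part of any real SRP implementation):

```python
import hmac
import secrets

def combine(k_token: bytes, k_user: bytes) -> bytes:
    """XOR the keys from the two parallel PAKE runs into one total key.

    Neither individual key is ever used on its own, so a verifier can
    only tell whether the *combined* authentication succeeded.
    """
    return bytes(a ^ b for a, b in zip(k_token, k_user))

# Both runs succeed: client and server derive matching total keys.
k1, k2 = secrets.token_bytes(32), secrets.token_bytes(32)
assert hmac.compare_digest(combine(k1, k2), combine(k1, k2))

# One run fails: that run's key differs on the server's end, so the
# totals differ - but nothing reveals *which* run failed.
k2_bad = secrets.token_bytes(32)
assert not hmac.compare_digest(combine(k1, k2), combine(k1, k2_bad))
```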

> On the other hand, it should also be impossible for a server and a
> stolen user's computer to do an offline dictionary attack without
> access to the token.  I think that two parallel augmented PAKE runs
> doesn't have this property.  This might be fixable by storing the
> client computer's key share in some form that can only be decrypted
> with access to the token (e.g. take the secret PAKE key, encrypt it
> against the token's secret, and encrypt *that* against the password).

Mm, with the parallel PAKE we've made P a necessary part of the 
protocol, so _here_ your original idea of passing H(P) to the token, which 
it uses to encrypt its (internal) secret, would not weaken the scheme in the 
case of a malicious token.

Given token T holding secret X, user U holding password P, and server S:

U -> T: H(P)
T -> U: Y = E(k=X, H(P))
T -> S: R_t = PAKE(X)
U -> S: R_u = PAKE(Y)
T -> U: R_t
U: K = R_t ^ R_u

With that, both verifiers on the server depend on the (strong random) X, 
preventing dictionary attacks by a malicious server unless it has the token.
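For illustration only, the dataflow above can be sketched with a PRF standing in for both the token's encryption E() and the PAKE key derivation (the real exchanges are interactive, e.g. SRP with server-side verifiers; every name and constant below is a placeholder):

```python
import hashlib
import hmac

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def E(key: bytes, msg: bytes) -> bytes:
    # Stand-in for the token's keyed encryption of H(P); a real token
    # would use a proper cipher, but any PRF shows the dataflow.
    return hmac.new(key, msg, hashlib.sha256).digest()

def PAKE(secret: bytes) -> bytes:
    # Placeholder: a real run is interactive and checked against the
    # server's verifier; here we only model "key derived from secret".
    return hmac.new(b"pake-run", secret, hashlib.sha256).digest()

X = b"token internal secret"         # held by T
P = b"correct horse battery staple"  # held by U

Y = E(X, H(P))      # T -> U, after U sends H(P)
R_t = PAKE(X)       # T <-> S run, keyed by the token secret
R_u = PAKE(Y)       # U <-> S run, keyed by Y
K = bytes(a ^ b for a, b in zip(R_t, R_u))  # U: total key

# A server guessing passwords offline can compute H(P') for each
# guess, but not Y = E(X, H(P')) without X - so every guess costs
# one query to the token.
```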

And unless the token itself is malicious, even a malicious server that has 
it will still need to query the token once per guess (in order to get the 
differing values of Y, which is why I didn't simply have it return H(X)).

> Hmm.  Maybe I should try to write this stuff down and post it to IACR.
>> Sadly, this loses the property from the original scheme that the server
>> doesn't need to even know that a token is in use - a PAKE exchange
>> between U and A appears the same to A as the token protocol.
>> It would also require care on the part of A to avoid timing leaks - it'd
>> need to check both verifiers even if one failed.
> It looks like the catid tabby PAKE might be compatible with a secret
> sharing approach like in my earlier email.  It has no security proof,
> though.
> --Andy
