Message-ID: <20170531123957.GA10351@leverpostej>
Date: Wed, 31 May 2017 13:39:57 +0100
From: Mark Rutland <mark.rutland@....com>
To: David Howells <dhowells@...hat.com>
Cc: keyrings@...r.kernel.org, linux-security-module@...r.kernel.org,
linux-kernel@...r.kernel.org, David Windsor <dwindsor@...il.com>,
Elena Reshetova <elena.reshetova@...el.com>,
Hans Liljestrand <ishkamiel@...il.com>,
James Morris <james.l.morris@...cle.com>,
Kees Cook <keescook@...omium.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] KEYS: fix refcount_inc() on zero
On Wed, May 31, 2017 at 01:20:44PM +0100, David Howells wrote:
> Mark Rutland <mark.rutland@....com> wrote:
>
> > This patch uses refcount_inc_not_zero() to perform the peek and
> > increment atomically.
>
> I think the helper is unnecessary. Better to adjust the comment if you really
> want to explain it. Anyone editing the code should be aware that this is
> inside a critical section.
>
> > A helper with lockdep annotation is added to document why this is safe.
>
> This doesn't explain why this is safe.
>
> > + /*
> > + * Pretend it doesn't exist if it is awaiting deletion. This races with
> > + * key_put(), but we can peek at the key until we drop key_serial_lock.
> > */
>
> With your change, there is no race with key_put() - so the second sentence is
> unnecessary.
Fair enough, on all counts.
> I've adjusted your patch - see attached.
That looks fine to me, thanks!
Mark.
>
> David
> ---
> commit f66bf831c45306ebbc28aecd407e238983457251
> Author: Mark Rutland <mark.rutland@....com>
> Date: Fri May 26 18:37:34 2017 +0100
>
> KEYS: fix refcount_inc() on zero
>
> If a key's refcount is dropped to zero between key_lookup() peeking at
> the refcount and subsequently attempting to increment it, refcount_inc()
> will see a zero refcount. Here, refcount_inc() will WARN_ONCE(), and
> will *not* increment the refcount, which will remain zero.
>
> Once key_lookup() drops key_serial_lock, it is possible for the key to
> be freed behind our back.
>
> This patch uses refcount_inc_not_zero() to perform the peek and increment
> atomically.
>
> Fixes: fff292914d3a2f1e ("security, keys: convert key.usage from atomic_t to refcount_t")
> Signed-off-by: Mark Rutland <mark.rutland@....com>
> Signed-off-by: David Howells <dhowells@...hat.com>
> Cc: David Windsor <dwindsor@...il.com>
> Cc: Elena Reshetova <elena.reshetova@...el.com>
> Cc: Hans Liljestrand <ishkamiel@...il.com>
> Cc: James Morris <james.l.morris@...cle.com>
> Cc: Kees Cook <keescook@...omium.org>
> Cc: Peter Zijlstra <peterz@...radead.org>
>
> diff --git a/security/keys/key.c b/security/keys/key.c
> index 455c04d80bbb..d84ee2a87da6 100644
> --- a/security/keys/key.c
> +++ b/security/keys/key.c
> @@ -660,14 +660,11 @@ struct key *key_lookup(key_serial_t id)
>  	goto error;
>  
>  found:
> -	/* pretend it doesn't exist if it is awaiting deletion */
> -	if (refcount_read(&key->usage) == 0)
> -		goto not_found;
> -
> -	/* this races with key_put(), but that doesn't matter since key_put()
> -	 * doesn't actually change the key
> +	/* A key is allowed to be looked up only if someone still owns a
> +	 * reference to it - otherwise it's awaiting the gc.
>  	 */
> -	__key_get(key);
> +	if (!refcount_inc_not_zero(&key->usage))
> +		goto not_found;
>  
>  error:
>  	spin_unlock(&key_serial_lock);
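
For reference, the pattern the patch relies on can be shown with a minimal,
self-contained sketch. The obj structure, the_obj, lookup_lock and obj_lookup()
below are made up purely for illustration; only refcount_inc_not_zero() and the
lock-then-lookup shape mirror the real key_lookup():

#include <linux/refcount.h>
#include <linux/spinlock.h>

/* Hypothetical object guarded by a lookup lock and a usage refcount. */
struct obj {
	refcount_t usage;
};

static DEFINE_SPINLOCK(lookup_lock);
static struct obj *the_obj;

/*
 * Broken pattern: peeking with refcount_read() and then calling
 * refcount_inc() is not atomic. If the last reference is dropped in
 * between, refcount_inc() sees zero, WARNs once and leaves the count
 * at zero, yet the caller walks away believing it holds a reference.
 *
 * Fixed pattern: refcount_inc_not_zero() performs the peek and the
 * increment as a single atomic operation, so a dying object can never
 * be resurrected; the caller simply sees "not found".
 */
static struct obj *obj_lookup(void)
{
	struct obj *obj;

	spin_lock(&lookup_lock);
	obj = the_obj;
	if (obj && !refcount_inc_not_zero(&obj->usage))
		obj = NULL;
	spin_unlock(&lookup_lock);

	return obj;
}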