Message-ID: <20170512162259.GF18818@leverpostej>
Date: Fri, 12 May 2017 17:22:59 +0100
From: Mark Rutland <mark.rutland@....com>
To: David Howells <dhowells@...hat.com>
Cc: linux-kernel@...r.kernel.org,
Elena Reshetova <elena.reshetova@...el.com>,
keyrings@...r.kernel.org, Kees Cook <keescook@...omium.org>,
Hans Liljestrand <ishkamiel@...il.com>,
David Windsor <dwindsor@...il.com>,
James Morris <james.l.morris@...cle.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: next-20170510 refcount_inc() on zero / use-after-free in
key_lookup()
On Fri, May 12, 2017 at 04:29:39PM +0100, David Howells wrote:
> Mark Rutland <mark.rutland@....com> wrote:
>
> > From a quick look at key_lookup(), the following looks very suspicious:
> >
> > found:
> > 	/* pretend it doesn't exist if it is awaiting deletion */
> > 	if (refcount_read(&key->usage) == 0)
> > 		goto not_found;
> >
> > 	/* this races with key_put(), but that doesn't matter since key_put()
> > 	 * doesn't actually change the key
> > 	 */
> > 	__key_get(key);
> >
> > ... as if we can race with key_put(), we can see a zero refcount here,
> > and the race *does* matter.
>
> No, it doesn't.
>
> If key_put() reduces a refcount to 0, it doesn't do anything other than poke
> the gc thread:
>
> 	void key_put(struct key *key)
> 	{
> 		if (key) {
> 			key_check(key);
> 
> 			if (refcount_dec_and_test(&key->usage))
> 				schedule_work(&key_gc_work);
> 		}
> 	}
>
> in particular, no indication of the reduced key is passed.
>
> The gc thread scans the entire key serial tree under the key_serial_lock
> looking for keys that are no longer ref'd. No one else is allowed to remove
> keys from the tree. This means that the gc thread can safely leave a cursor
> pointing into the midst of the tree with no locks held whilst it yields to the
> scheduler.
>
> The code you quoted above in key_lookup() is inside the key_serial_lock, so it
> prevents the gc thread from culling a key when it resurrects it.
>
> So the problem isn't the key code, it's the refcount code.
Sure, there's no actual use-after-free here.
Sorry for the misleading title.
> As I've said before, the refcount code needs an increment op that permits
> inc-from-0. In this case, it's perfectly okay.
Given that there's currently an attempt to bail out on a zero refcount courtesy
of the refcount_read(), can't we do something like the below?
Thanks,
Mark.
diff --git a/include/linux/key.h b/include/linux/key.h
index 78e25aa..1e68ae2 100644
--- a/include/linux/key.h
+++ b/include/linux/key.h
@@ -248,6 +248,14 @@ extern struct key *key_alloc(struct key_type *type,
 extern void key_invalidate(struct key *key);
 extern void key_put(struct key *key);
 
+static inline struct key *__key_get_notzero(struct key *key)
+{
+	if (refcount_inc_not_zero(&key->usage))
+		return key;
+
+	return NULL;
+}
+
 static inline struct key *__key_get(struct key *key)
 {
 	refcount_inc(&key->usage);
diff --git a/security/keys/key.c b/security/keys/key.c
index 455c04d..f375cc6 100644
--- a/security/keys/key.c
+++ b/security/keys/key.c
@@ -661,14 +661,9 @@ struct key *key_lookup(key_serial_t id)
 
 found:
 	/* pretend it doesn't exist if it is awaiting deletion */
-	if (refcount_read(&key->usage) == 0)
+	if (!__key_get_notzero(key))
 		goto not_found;
 
-	/* this races with key_put(), but that doesn't matter since key_put()
-	 * doesn't actually change the key
-	 */
-	__key_get(key);
-
 error:
 	spin_unlock(&key_serial_lock);
 	return key;