Message-ID: <20170126153955.GD6515@twins.programming.kicks-ass.net>
Date: Thu, 26 Jan 2017 16:39:55 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>,
syzkaller <syzkaller@...glegroups.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: perf: use-after-free in perf_event_for_each

On Mon, Jan 23, 2017 at 02:30:12PM +0100, Dmitry Vyukov wrote:
> Hello,
>
> The following program triggers use-after-free in perf_event_for_each:
> https://gist.githubusercontent.com/dvyukov/f1c354a8356e42f4d0b3d912e1bec956/raw/31d7ecdf6dc2c7327b80ef8581a39c823bbe405d/gistfile1.txt
>
> BUG: KASAN: use-after-free in perf_event_for_each_child+0x15f/0x180
> kernel/events/core.c:4495 at addr ffff8800680ec248

The below seems to fix things for me.

---
Subject: perf: Fix use-after-free

Dmitry reported a KASAN use-after-free.

XXX: do a changelog

Reported-by: Dmitry Vyukov <dvyukov@...gle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 kernel/events/core.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 896aa53..8c1ad98 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1474,7 +1474,6 @@ ctx_group_list(struct perf_event *event, struct perf_event_context *ctx)
 static void
 list_add_event(struct perf_event *event, struct perf_event_context *ctx)
 {
-
 	lockdep_assert_held(&ctx->lock);
 
 	WARN_ON_ONCE(event->attach_state & PERF_ATTACH_CONTEXT);
@@ -1629,6 +1628,8 @@ static void perf_group_attach(struct perf_event *event)
 {
 	struct perf_event *group_leader = event->group_leader, *pos;
 
+	lockdep_assert_held(&event->ctx->lock);
+
 	/*
 	 * We can have double attach due to group movement in perf_event_open.
 	 */
@@ -1702,6 +1703,8 @@ static void perf_group_detach(struct perf_event *event)
 	struct perf_event *sibling, *tmp;
 	struct list_head *list = NULL;
 
+	lockdep_assert_held(&event->ctx->lock);
+
 	/*
	 * We can have double detach due to exit/hot-unplug + close.
	 */
@@ -1900,9 +1903,29 @@ __perf_remove_from_context(struct perf_event *event,
  */
 static void perf_remove_from_context(struct perf_event *event, unsigned long flags)
 {
-	lockdep_assert_held(&event->ctx->mutex);
+	struct perf_event_context *ctx = event->ctx;
+
+	lockdep_assert_held(&ctx->mutex);
 
 	event_function_call(event, __perf_remove_from_context, (void *)flags);
+
+	/*
+	 * The above event_function_call() can NO-OP when it hits
+	 * TASK_TOMBSTONE. In that case we must already have been detached
+	 * from the context (by perf_event_exit_event()) but the grouping
+	 * might still be intact.
+	 */
+	WARN_ON_ONCE(event->attach_state & PERF_ATTACH_CONTEXT);
+	if ((flags & DETACH_GROUP) &&
+	    (event->attach_state & PERF_ATTACH_GROUP)) {
+		/*
+		 * Since in that case we cannot possibly be scheduled, simply
+		 * detach now.
+		 */
+		raw_spin_lock_irq(&ctx->lock);
+		perf_group_detach(event);
+		raw_spin_unlock_irq(&ctx->lock);
+	}
 }
 
 /*
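
Not part of the patch, purely illustrative: a stand-alone userspace sketch of
the control flow the last hunk adds. TASK_TOMBSTONE, DETACH_GROUP and the
ATTACH_GROUP bit borrow their names from the real perf code; the structs and
helpers below are simplified stand-ins, not the kernel implementation.

/* Sketch only: event_function_call() silently no-ops once the target task
 * is gone (TASK_TOMBSTONE), so the caller must tear down the group linkage
 * itself -- under ctx->lock in the real code. */
#include <stdbool.h>
#include <stdio.h>

#define TASK_TOMBSTONE	((void *)-1L)	/* task has already exited */
#define DETACH_GROUP	0x01		/* caller also wants the grouping undone */
#define ATTACH_GROUP	0x02		/* event is still linked into its group */

struct ctx   { void *task; };
struct event { struct ctx *ctx; unsigned int attach_state; };

/* Models only the property that matters here: nothing runs for a dead task. */
static bool event_function_call(struct event *e)
{
	if (e->ctx->task == TASK_TOMBSTONE)
		return false;			/* callback never executes */
	e->attach_state &= ~ATTACH_GROUP;	/* stand-in for __perf_remove_from_context() */
	return true;
}

static void group_detach(struct event *e)
{
	e->attach_state &= ~ATTACH_GROUP;	/* unlink from the sibling list */
}

static void remove_from_context(struct event *e, unsigned int flags)
{
	event_function_call(e);

	/* The fallback the patch adds: the call above may have been a no-op. */
	if ((flags & DETACH_GROUP) && (e->attach_state & ATTACH_GROUP))
		group_detach(e);
}

int main(void)
{
	struct ctx dead = { .task = TASK_TOMBSTONE };
	struct event ev = { .ctx = &dead, .attach_state = ATTACH_GROUP };

	remove_from_context(&ev, DETACH_GROUP);
	printf("group links cleared: %s\n",
	       (ev.attach_state & ATTACH_GROUP) ? "no" : "yes");
	return 0;
}

Without that fallback, the run above would leave ATTACH_GROUP set even though
the event can never be scheduled again, which is the stale sibling linkage
that perf_event_for_each() then trips over.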