Message-ID: <20171206154957.GB3367@danjae.aot.lge.com>
Date: Thu, 7 Dec 2017 00:49:57 +0900
From: Namhyung Kim <namhyung@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
Fengguang Wu <fengguang.wu@...el.com>,
linux-kernel@...r.kernel.org, Wang Nan <wangnan0@...wei.com>,
Ingo Molnar <mingo@...hat.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Will Deacon <will.deacon@....com>, lkp@...org,
Dmitry Vyukov <dvyukov@...gle.com>, kasan-dev@...glegroups.com,
kernel-team@....com
Subject: Re: BUG: KASAN: slab-out-of-bounds in perf_callchain_user+0x494/0x530
On Wed, Dec 06, 2017 at 04:45:44PM +0100, Peter Zijlstra wrote:
> On Wed, Dec 06, 2017 at 11:31:30PM +0900, Namhyung Kim wrote:
>
> > > There's also a race against put_callchain_buffers() there, consider:
> > >
> > >
> > >   get_callchain_buffers()            put_callchain_buffers()
> > >     mutex_lock();
> > >     inc()
> > >                                        dec_and_test() // false
> > >
> > >     dec() // 0
> > >
> > >
> > > And the buffers leak.
> >
> > Hmm.. did you mean that get_callchain_buffers() returns an error?
>
> Yes, get_callchain_buffers() fails, but while doing so it has a
> temporary increment on the count.
>
> > AFAICS it cannot fail when it sees count > 1 (and
> > callchain_cpus_entries is allocated).
>
> It can with your patch. We only test event_max_stack against the sysctl
> after incrementing.
So, are you ok with this?
Thanks,
Namhyung
diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
index 1b2be63c8528..ee0ba22d3993 100644
--- a/kernel/events/callchain.c
+++ b/kernel/events/callchain.c
@@ -137,8 +137,11 @@ int get_callchain_buffers(int event_max_stack)
 		err = alloc_callchain_buffers();
 exit:
-	if (err)
-		atomic_dec(&nr_callchain_events);
+	if (err) {
+		/* might race with put_callchain_buffers() */
+		if (atomic_dec_and_test(&nr_callchain_events))
+			release_callchain_buffers();
+	}
 	mutex_unlock(&callchain_mutex);