Message-ID: <20160906143608.GF3318@worktop.controleur.wifipass.org>
Date: Tue, 6 Sep 2016 16:36:08 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
"x86@...nel.org" <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: perf: out-of-bounds write in perf_callchain_store
On Tue, Sep 06, 2016 at 03:42:40PM +0200, Dmitry Vyukov wrote:
> Hello,
>
> The following program triggers an out-of-bounds write in
> perf_callchain_store (if run in a parallel loop):
>
> https://gist.githubusercontent.com/dvyukov/c05d883e776a353a1d063b670f50bde6/raw/1c8906b1aacfbd8a0cc0b5cf0cc4d0535345e497/gistfile1.txt
>
>
> BUG: KASAN: slab-out-of-bounds in perf_callchain_user+0xe65/0xfc0 at
> addr ffff88003e162840
> Write of size 8 by task syz-executor/22516
> CPU: 0 PID: 22516 Comm: syz-executor Not tainted 4.8.0-rc5-next-20160905+ #14
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
> ffffffff886b6fe0 ffff88003ec07738 ffffffff82db81a9 ffffffff00000000
> fffffbfff10d6dfc ffff88003e800a00 ffff88003e161740 ffff88003e163740
> 0000000000000001 ffff88003e162840 ffff88003ec07760 ffffffff8180b2ec
> Call Trace:
> [<ffffffff8180b9c7>] __asan_report_store8_noabort+0x17/0x20
> mm/kasan/report.c:332
> [< inline >] perf_callchain_store include/linux/perf_event.h:1146
> [<ffffffff81014925>] perf_callchain_user+0xe65/0xfc0 arch/x86/events/core.c:2441
> [<ffffffff816c5f48>] get_perf_callchain+0x448/0x680 kernel/events/callchain.c:235
> [<ffffffff816c62cd>] perf_callchain+0x14d/0x1a0 kernel/events/callchain.c:191
Urgh, that callchain code is a pain with that context/entries
separation. But I can't see an obvious overrun there.
But WTF is max_contexts a sysctl? That doesn't seem to make any kind of
sense.
Acme, can you untangle that stuff and spot the fail?