Message-ID: <20150612193525.GG6850@kernel.org>
Date: Fri, 12 Jun 2015 16:35:25 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: Sukadev Bhattiprolu <sukadev@...ux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@...hat.com>, Ingo Molnar <mingo@...hat.com>,
Li Zhang <zhlcindy@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf, tools: Fix crash in 'perf trace'
On Thu, Jun 11, 2015 at 11:00:04PM -0700, Sukadev Bhattiprolu wrote:
> From 6669ed960a3ee4f9a02790f60b6a73ffc82fd6de Mon Sep 17 00:00:00 2001
> From: Sukadev Bhattiprolu <sukadev@...ux.vnet.ibm.com>
> Date: Fri, 12 Jun 2015 01:28:36 -0400
> Subject: [PATCH] perf, tools: Fix crash in perf trace
>
> I get the following crash on multiple systems and across several
> releases (at least since v3.18).
Trying it in perf/core I get:
util/evlist.c: In function ‘perf_evlist__mmap_read’:
util/evlist.c:645:6: error: wrong type argument to unary exclamation mark
  if (!md->refcnt)
It builds after changing that to !atomic_read(&md->refcnt), and that
fixes the segfault I reproduced. But this still looks strange: if the
refcount hit zero, the map should not have been in use at this point
anymore... Will look at it again over the weekend. :-\
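
For reference, a minimal standalone C11 sketch of that guard as it reads
with an atomic refcount; the types and names below are simplified
stand-ins for illustration, not perf's actual definitions:

  #include <stdatomic.h>
  #include <stdio.h>

  /* Simplified stand-in for struct perf_mmap; illustrative only. */
  struct perf_mmap {
          void *base;          /* mmap'ed ring buffer, NULL after munmap */
          atomic_int refcnt;   /* drops to 0 once the map is released */
  };

  /* The guard from the patch, with the atomic_read()-style test. */
  static void *mmap_read(struct perf_mmap *md)
  {
          if (!atomic_load(&md->refcnt))
                  return NULL;     /* map was released on POLLHUP/POLLERR */
          return md->base;         /* safe to touch the ring buffer */
  }

  int main(void)
  {
          struct perf_mmap md = { .base = NULL };

          atomic_init(&md.refcnt, 0);        /* simulate a released map */
          printf("%p\n", mmap_read(&md));    /* prints (nil): read skipped */
          return 0;
  }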
- Arnaldo
> Core was generated by `/tmp/perf trace sleep 0.2 '.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0 perf_mmap__read_head (mm=0x3fff9bf30070) at util/evlist.h:195
> 195 u64 head = ACCESS_ONCE(pc->data_head);
> (gdb) bt
> #0 perf_mmap__read_head (mm=0x3fff9bf30070) at util/evlist.h:195
> #1 perf_evlist__mmap_read (evlist=0x10027f11910, idx=<optimized out>)
> at util/evlist.c:637
> #2 0x000000001003ce4c in trace__run (argv=<optimized out>,
> argc=<optimized out>, trace=0x3fffd7b28288) at builtin-trace.c:2259
> #3 cmd_trace (argc=<optimized out>, argv=<optimized out>,
> prefix=<optimized out>) at builtin-trace.c:2799
> #4 0x00000000100657b8 in run_builtin (p=0x10176798 <commands+480>, argc=3,
> argv=0x3fffd7b2b550) at perf.c:370
> #5 0x00000000100063e8 in handle_internal_command (argv=0x3fffd7b2b550, argc=3)
> at perf.c:429
> #6 run_argv (argv=0x3fffd7b2af70, argcp=0x3fffd7b2af7c) at perf.c:473
> #7 main (argc=3, argv=0x3fffd7b2b550) at perf.c:588
>
> The problem seems to be a race condition when the application has just
> exited. Some or all of the fds associated with the perf events
> (tracepoints) go into a POLLHUP/POLLERR state, and the mmap regions
> associated with those events are unmapped (in perf_evlist__filter_pollfd()).
>
> But we then go back and call perf_evlist__mmap_read(), which assumes the
> mmaps are still valid, and we hit the crash.
>
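
To illustrate the ordering, here is a minimal standalone C sketch of the
race described above; the struct and function names are hypothetical
stand-ins, not the actual perf source:

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical model of one mmap'ed ring buffer. */
  struct ring {
          uint64_t *base;   /* first word stands in for data_head */
  };

  /* Step 1: the fd went POLLHUP/POLLERR, so the buffer is torn down. */
  static void filter_pollfd(struct ring *r)
  {
          free(r->base);
          r->base = NULL;
  }

  /* Step 2: the trace loop re-enters the read path anyway. */
  static uint64_t read_head(struct ring *r)
  {
          return r->base[0];   /* NULL dereference: the reported SIGSEGV */
  }

  int main(void)
  {
          struct ring r = { .base = calloc(1, sizeof(uint64_t)) };

          filter_pollfd(&r);   /* the workload exited, fds hung up */

          if (r.base)          /* the guard the patch adds (via refcnt) */
                  printf("head=%llu\n", (unsigned long long)read_head(&r));
          else
                  printf("map released, skipping read\n");
          return 0;
  }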
> If the mapping for an event has been released, its refcnt is 0 (and its
> ->base is NULL), so ensure the refcount is non-zero before accessing the
> map.
>
> Note that perf-record has similar logic, but unlike perf-trace,
> record__mmap_read_all() checks evlist->mmap[i].base before accessing the
> map (see the sketch below).
>
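
A minimal standalone sketch of that base-check pattern; the loop shape
follows the commit message, but the types and helper below are simplified
stand-ins rather than the verbatim perf-record source:

  #include <stdio.h>

  /* Simplified stand-ins for the evlist types; illustrative only. */
  struct mmap_buf { void *base; };
  struct evlist {
          int nr_mmaps;
          struct mmap_buf mmap[2];
  };

  static void mmap_read_one(int idx)
  {
          printf("reading buffer %d\n", idx);
  }

  /* perf-record style: skip any buffer whose base is already NULL. */
  static void mmap_read_all(struct evlist *evlist)
  {
          for (int i = 0; i < evlist->nr_mmaps; i++) {
                  if (evlist->mmap[i].base)   /* released maps are skipped */
                          mmap_read_one(i);
          }
  }

  int main(void)
  {
          struct evlist e = { .nr_mmaps = 2 };

          e.mmap[0].base = &e;     /* pretend buffer 0 is still mapped */
          e.mmap[1].base = NULL;   /* buffer 1 was unmapped on POLLHUP */

          mmap_read_all(&e);       /* only buffer 0 is read */
          return 0;
  }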
> Signed-off-by: Sukadev Bhattiprolu <sukadev@...ux.vnet.ibm.com>
> ---
> tools/perf/util/evlist.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> index 080be93..084535b 100644
> --- a/tools/perf/util/evlist.c
> +++ b/tools/perf/util/evlist.c
> @@ -634,11 +634,18 @@ static struct perf_evsel *perf_evlist__event2evsel(struct perf_evlist *evlist,
> union perf_event *perf_evlist__mmap_read(struct perf_evlist *evlist, int idx)
> {
> struct perf_mmap *md = &evlist->mmap[idx];
> - u64 head = perf_mmap__read_head(md);
> + u64 head;
> u64 old = md->prev;
> unsigned char *data = md->base + page_size;
> union perf_event *event = NULL;
>
> + /*
> + * Check if event was unmapped due to a POLLHUP/POLLERR.
> + */
> + if (!md->refcnt)
> + return NULL;
> +
> + head = perf_mmap__read_head(md);
> if (evlist->overwrite) {
> /*
> * If we're further behind than half the buffer, there's a chance
> --
> 2.1.4