Message-ID: <20190714204432.GA8120@krava>
Date: Sun, 14 Jul 2019 22:44:32 +0200
From: Jiri Olsa <jolsa@...hat.com>
To: Numfor Mbiziwo-Tiapo <nums@...gle.com>
Cc: peterz@...radead.org, mingo@...hat.com, acme@...nel.org,
alexander.shishkin@...ux.intel.com, namhyung@...nel.org,
songliubraving@...com, mbd@...com, linux-kernel@...r.kernel.org,
irogers@...gle.com, eranian@...gle.com
Subject: Re: [PATCH] Fix perf stat repeat segfault
On Wed, Jul 10, 2019 at 01:45:40PM -0700, Numfor Mbiziwo-Tiapo wrote:
> When perf stat is called with event groups and the repeat option,
> a segfault occurs because the cpu ids are stored on every iteration
> of the repeat, when they should only be stored on the first iteration;
> the repeated stores overflow the id buffer.
>
> This can be replicated by running (from the tip directory):
>
> make -C tools/perf
>
> then running:
>
> tools/perf/perf stat -e '{cycles,instructions}' -r 10 ls
>
> Since run_idx keeps track of the current iteration of the repeat,
> only storing the cpu ids on the first iteration (when run_idx < 1)
> fixes this issue.
>
> Signed-off-by: Numfor Mbiziwo-Tiapo <nums@...gle.com>
> ---
> tools/perf/builtin-stat.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index 63a3afc7f32b..92d6694367e4 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -378,9 +378,10 @@ static void workload_exec_failed_signal(int signo __maybe_unused, siginfo_t *inf
> workload_exec_errno = info->si_value.sival_int;
> }
>
> -static bool perf_evsel__should_store_id(struct perf_evsel *counter)
> +static bool perf_evsel__should_store_id(struct perf_evsel *counter, int run_idx)
> {
> - return STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID;
> + return STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID
> + && run_idx < 1;

we create counters for every iteration, so this can't be
based on the iteration

I think that's just a workaround for the memory corruption
that's happening for repeating grouped events stats,
I'll check on this
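
roughly, the pattern I suspect looks like the sketch below (the struct
and function names are made up for illustration, this is not the actual
perf code): the id array is sized for a single run, but the store path
gets called on every -r iteration, so from the second iteration on the
new ids would land past the end of the allocation; the sketch just
reports the out-of-bounds slots instead of writing to them

#include <stdio.h>
#include <stdlib.h>

#define NCPUS 4

struct counter {
	unsigned long long *ids;	/* room for NCPUS ids, allocated once */
	int nr_ids;			/* how many ids have been stored so far */
};

static void alloc_ids(struct counter *c)
{
	c->ids = calloc(NCPUS, sizeof(*c->ids));
	c->nr_ids = 0;
}

/* called once per repeat iteration, appends one id per cpu */
static void store_ids(struct counter *c, int iteration)
{
	int cpu;

	for (cpu = 0; cpu < NCPUS; cpu++) {
		if (c->nr_ids >= NCPUS) {
			/*
			 * this is the overflow: after the first iteration
			 * nr_ids is already NCPUS, so the real code would
			 * write past the end of the allocation here
			 */
			printf("iteration %d: slot %d is out of bounds\n",
			       iteration, c->nr_ids);
			c->nr_ids++;
			continue;
		}
		c->ids[c->nr_ids++] = (unsigned long long)(iteration * 100 + cpu);
	}
}

int main(void)
{
	struct counter c;
	int run_idx, repeat = 10;

	alloc_ids(&c);

	/* ids get stored on every iteration instead of just the first */
	for (run_idx = 0; run_idx < repeat; run_idx++)
		store_ids(&c, run_idx);

	printf("tried to store %d ids into room for %d\n", c.nr_ids, NCPUS);
	free(c.ids);
	return 0;
}

resetting nr_ids between runs (or storing only on the first run, as the
patch does) keeps the toy example in bounds, but whether that's the
right fix in perf itself is the part I still need to check
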
jirka
> }
>
> static bool is_target_alive(struct target *_target,
> @@ -503,7 +504,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
> if (l > stat_config.unit_width)
> stat_config.unit_width = l;
>
> - if (perf_evsel__should_store_id(counter) &&
> + if (perf_evsel__should_store_id(counter, run_idx) &&
> perf_evsel__store_ids(counter, evsel_list))
> return -1;
> }
> --
> 2.22.0.410.gd8fdbe21b5-goog
>