Message-ID: <CAP-5=fVk0JjSFm=tmJA+nqySCvBi9CzrbxrzpFdyzeLXZdHd7Q@mail.gmail.com>
Date: Tue, 23 Aug 2022 09:33:18 -0700
From: Ian Rogers <irogers@...gle.com>
To: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Kan Liang <kan.liang@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH v1 1/2] perf stat: Clear reset_group for each stat run
On Tue, Aug 23, 2022 at 6:34 AM Arnaldo Carvalho de Melo
<acme@...nel.org> wrote:
>
> Em Mon, Aug 22, 2022 at 02:33:51PM -0700, Ian Rogers escreveu:
> > If a weak group is broken then the reset_group flag remains set for
> > the next run. Having reset_group set means the counter isn't created,
> > which ultimately leads to a segfault.
> >
> > A simple reproduction of this is:
> > perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W'
> > which will be added as a test in the next patch.
>
> So doing this in that existing BPF-related loop may solve the problem,
> but for someone looking just at the source code, without any comment,
> it may be cryptic, no?
>
> And then the Fixes tag talks about affinity, adding a bit more
> confusion, albeit being the part that does the weak logic :-\
>
> Can we have a comment just before:
>
> + counter->reset_group = false;
>
> Stating that this is needed only when using -r?
It is possible to add a comment, but thinking about it, it would have
said pretty much what the code was doing, so I skipped it. I'm wary
of comments that capture too much of the implementation as they are
prone to becoming stale. Logically this function just iterates over
the evlist creating counters, but on top of that we have the affinity
optimization. The BPF code didn't need that and so has its own evlist
iteration. We could add another loop just to clear reset_group, but
that didn't seem to make sense; a sketch of what that would look like
is below. It's unfortunate how that relates to the Fixes tag, but I
don't think we should optimize for that case.
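
For illustration, the standalone clearing loop would look roughly like
this (a sketch only, reusing the evlist__for_each_entry macro and the
evsel_list/counter variables already in scope in __run_perf_stat):

	/*
	 * Hypothetical alternative: clear the stale flag in a pass of
	 * its own, rather than piggybacking on the BPF loading loop.
	 */
	evlist__for_each_entry(evsel_list, counter)
		counter->reset_group = false;

It works, but it adds a full extra walk over the evlist just for one
assignment, which is why folding it into the existing loop seemed
preferable.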
Thanks,
Ian
> - Arnaldo
>
> > Fixes: 4804e0111662 ("perf stat: Use affinity for opening events")
> > Signed-off-by: Ian Rogers <irogers@...gle.com>
> > ---
> > tools/perf/builtin-stat.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> > index 7fb81a44672d..54cd29d07ca8 100644
> > --- a/tools/perf/builtin-stat.c
> > +++ b/tools/perf/builtin-stat.c
> > @@ -826,6 +826,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
> >  	}
> >
> >  	evlist__for_each_entry(evsel_list, counter) {
> > +		counter->reset_group = false;
> >  		if (bpf_counter__load(counter, &target))
> >  			return -1;
> >  		if (!evsel__is_bpf(counter))
> > --
> > 2.37.2.609.g9ff673ca1a-goog
>
> --
>
> - Arnaldo