Message-ID: <CAP-5=fVv-f2JpWxOrHUFa73P-6z8JAR-+dcmL8MfYgLhpxe4zA@mail.gmail.com>
Date: Fri, 13 May 2022 09:32:33 -0700
From: Ian Rogers <irogers@...gle.com>
To: kan.liang@...ux.intel.com
Cc: acme@...nel.org, mingo@...hat.com, jolsa@...nel.org,
namhyung@...nel.org, linux-kernel@...r.kernel.org,
linux-perf-users@...r.kernel.org, peterz@...radead.org,
zhengjun.xing@...ux.intel.com, adrian.hunter@...el.com,
ak@...ux.intel.com, eranian@...gle.com
Subject: Re: [PATCH 2/4] perf stat: Always keep perf metrics topdown events in
a group
On Fri, May 13, 2022 at 8:16 AM <kan.liang@...ux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@...ux.intel.com>
>
> If any member of a group has a different cpu mask than the other
> members, the current perf stat disables the group. When the perf
> metrics topdown events are part of such a group, the <not supported>
> error below is triggered.
>
> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
> WARNING: grouped events cpus do not match, disabling group:
> anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>
> Performance counter stats for 'system wide':
>
> 141,465,174 slots
> <not supported> topdown-retiring
> 1,605,330,334 uncore_imc_free_running_0/dclk/
>
> The perf metrics topdown events must always be grouped with a slots
> event as leader.
>
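> For example, the topdown events only count when they are in a group
> led by slots:
>
> $ perf stat -e "{slots,topdown-retiring}" sleep 1
>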
> With the patch, the topdown events are no longer removed from the
> group when it is split.
>
> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
> WARNING: grouped events cpus do not match, disabling group:
> anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>
> Performance counter stats for 'system wide':
>
> 346,110,588 slots
> 124,608,256 topdown-retiring
> 1,606,869,976 uncore_imc_free_running_0/dclk/
>
> 1.003877592 seconds time elapsed
Nice! This is based on:
https://lore.kernel.org/lkml/20220512061308.1152233-2-irogers@google.com/
You may end up with a group whose leader has a member count of 1
(itself). I explicitly zeroed that in the change above, but it may be
unnecessary. Maybe we should move this code into a helper function,
both for sharing and for consistency on what the leader's member
count should be.
Thanks,
Ian
> Fixes: a9a1790247bd ("perf stat: Ensure group is defined on top of the same cpu mask")
> Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
> ---
> tools/perf/builtin-stat.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index a96f106dc93a..af2248868a4f 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -272,8 +272,11 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
> }
>
> for_each_group_evsel(pos, leader) {
> - evsel__set_leader(pos, pos);
> - pos->core.nr_members = 0;
> + if (!evsel__must_be_in_group(pos) && pos != leader) {
> + evsel__set_leader(pos, pos);
> + pos->core.nr_members = 0;
> + leader->core.nr_members--;
> + }
> }
> evsel->core.leader->nr_members = 0;
> }
> --
> 2.35.1
>