Message-Id: <20230719001836.198363-2-irogers@google.com>
Date: Tue, 18 Jul 2023 17:18:34 -0700
From: Ian Rogers <irogers@...gle.com>
To: Andi Kleen <ak@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Zhengjun Xing <zhengjun.xing@...ux.intel.com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH v1 1/3] perf parse-events: Extra care around force grouped events

Perf metric (topdown) events on Intel Icelake+ machines require a
group; however, they may appear next to events that don't require
one. Consider:

  cycles,slots,topdown-fe-bound

The cycles event needn't be grouped, but slots and topdown-fe-bound
must be. Prior to this change, because slots and topdown-fe-bound
need forced grouping and all three events share the same PMU, slots
and topdown-fe-bound would be forced into a group with cycles. This
is a bug on two fronts: cycles wasn't supposed to be grouped, and
cycles can't be a group leader with a perf metric event.

This change adds recognition that cycles isn't force grouped, and so
it shouldn't be forced into a group with slots and topdown-fe-bound.
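
For illustration, the intended result is the grouping that perf's
explicit group syntax would produce (a sketch of the intent, not
verbatim tool output):

  cycles,{slots,topdown-fe-bound}

rather than {cycles,slots,topdown-fe-bound}: cycles stays ungrouped
while slots and topdown-fe-bound still get the group they require.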

Fixes: a90cc5a9eeab ("perf evsel: Don't let evsel__group_pmu_name() traverse unsorted group")
Signed-off-by: Ian Rogers <irogers@...gle.com>
---
tools/perf/util/parse-events.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 5dcfbf316bf6..f10760ac1781 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -2141,7 +2141,7 @@ static int parse_events__sort_events_and_fix_groups(struct list_head *list)
int idx = 0, unsorted_idx = -1;
struct evsel *pos, *cur_leader = NULL;
struct perf_evsel *cur_leaders_grp = NULL;
- bool idx_changed = false;
+ bool idx_changed = false, cur_leader_force_grouped = false;
int orig_num_leaders = 0, num_leaders = 0;
int ret;

@@ -2182,7 +2182,7 @@ static int parse_events__sort_events_and_fix_groups(struct list_head *list)
const struct evsel *pos_leader = evsel__leader(pos);
const char *pos_pmu_name = pos->group_pmu_name;
const char *cur_leader_pmu_name, *pos_leader_pmu_name;
- bool force_grouped = arch_evsel__must_be_in_group(pos);
+ bool pos_force_grouped = arch_evsel__must_be_in_group(pos);

/* Reset index and nr_members. */
if (pos->core.idx != idx)
@@ -2198,7 +2198,8 @@ static int parse_events__sort_events_and_fix_groups(struct list_head *list)
cur_leader = pos;

cur_leader_pmu_name = cur_leader->group_pmu_name;
- if ((cur_leaders_grp != pos->core.leader && !force_grouped) ||
+ if ((cur_leaders_grp != pos->core.leader &&
+ (!pos_force_grouped || !cur_leader_force_grouped)) ||
strcmp(cur_leader_pmu_name, pos_pmu_name)) {
/* Event is for a different group/PMU than last. */
cur_leader = pos;
@@ -2208,9 +2209,14 @@ static int parse_events__sort_events_and_fix_groups(struct list_head *list)
* group.
*/
cur_leaders_grp = pos->core.leader;
+ /*
+ * Avoid forcing events into groups with events that
+ * don't need to be in the group.
+ */
+ cur_leader_force_grouped = pos_force_grouped;
}
pos_leader_pmu_name = pos_leader->group_pmu_name;
- if (strcmp(pos_leader_pmu_name, pos_pmu_name) || force_grouped) {
+ if (strcmp(pos_leader_pmu_name, pos_pmu_name) || pos_force_grouped) {
/*
* Event's PMU differs from its leader's. Groups can't
* span PMUs, so update leader from the group/PMU
--
2.41.0.487.g6d72f3e995-goog