Message-ID: <CO6PR11MB5635CCEE99828F084D0DBAF8EE7B2@CO6PR11MB5635.namprd11.prod.outlook.com>
Date: Wed, 24 Jan 2024 16:59:19 +0000
From: "Wang, Weilin" <weilin.wang@...el.com>
To: Ian Rogers <irogers@...gle.com>
CC: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>, Alexander Shishkin
<alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, "Namhyung
Kim" <namhyung@...nel.org>, "Hunter, Adrian" <adrian.hunter@...el.com>, "Kan
Liang" <kan.liang@...ux.intel.com>, "linux-perf-users@...r.kernel.org"
<linux-perf-users@...r.kernel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "Taylor, Perry" <perry.taylor@...el.com>,
"Alt, Samantha" <samantha.alt@...el.com>, "Biggers, Caleb"
<caleb.biggers@...el.com>, Mark Rutland <mark.rutland@....com>, Yang Jihong
<yangjihong1@...wei.com>
Subject: RE: [RFC PATCH v3 07/18] perf stat: Add functions to set counter
bitmaps for hardware-grouping method
> -----Original Message-----
> From: Ian Rogers <irogers@...gle.com>
> Sent: Tuesday, January 23, 2024 9:18 PM
> To: Wang, Weilin <weilin.wang@...el.com>
> Cc: Peter Zijlstra <peterz@...radead.org>; Ingo Molnar <mingo@...hat.com>;
> Arnaldo Carvalho de Melo <acme@...nel.org>; Alexander Shishkin
> <alexander.shishkin@...ux.intel.com>; Jiri Olsa <jolsa@...nel.org>; Namhyung
> Kim <namhyung@...nel.org>; Hunter, Adrian <adrian.hunter@...el.com>;
> Kan Liang <kan.liang@...ux.intel.com>; linux-perf-users@...r.kernel.org;
> linux-kernel@...r.kernel.org; Taylor, Perry <perry.taylor@...el.com>; Alt,
> Samantha <samantha.alt@...el.com>; Biggers, Caleb
> <caleb.biggers@...el.com>; Mark Rutland <mark.rutland@....com>; Yang
> Jihong <yangjihong1@...wei.com>
> Subject: Re: [RFC PATCH v3 07/18] perf stat: Add functions to set counter
> bitmaps for hardware-grouping method
>
> On Tue, Dec 12, 2023 at 3:03 PM <weilin.wang@...el.com> wrote:
> >
> > From: Weilin Wang <weilin.wang@...el.com>
> >
> > Add metricgroup__event_info data structure to represent an event in the
> > metric grouping context; the list of counters and the PMU name an event
> > should be collected with.
> >
> > Add functions to parse event counter info from pmu-events and generate a
> > list of metricgroup__event_info data to prepare grouping.
> >
> > Signed-off-by: Weilin Wang <weilin.wang@...el.com>
> > ---
> >  tools/perf/util/metricgroup.c | 196 +++++++++++++++++++++++++++++++++-
> > tools/perf/util/metricgroup.h | 27 +++++
> > 2 files changed, 220 insertions(+), 3 deletions(-)
> >
> > diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
> > index 8d1143ee898c..24268882d355 100644
> > --- a/tools/perf/util/metricgroup.c
> > +++ b/tools/perf/util/metricgroup.c
> > @@ -1432,6 +1432,182 @@ static int build_combined_expr_ctx(const struct list_head *metric_list,
> > return ret;
> > }
> >
> > +/**
> > + * set_counter_bitmap - The counter bit mapping: [8-15,0-7], e.g. the GP0 is the
> > + * 8th bit and GP7 is the 1st bit in this 16-bits bitmap. It is helpful for
> > + * assigning GP4-7 before GP0-3 because some events can be collected using GP0-3
> > + * only on some platforms.
> > + */
> > +static int set_counter_bitmap(int pos, unsigned long *bitmap)
> > +{
> > + if (pos >= NR_COUNTERS || pos < 0)
> > + return -EINVAL;
> > + if (pos <= 7)
> > + pos = TRANSFER_FIRST_BYTE(pos);
> > + else
> > + pos = TRANSFER_SEC_BYTE(pos);
> > + *bitmap |= 1ul << pos;
>
> __set_bit may be more canonical here, in case we ever had more than 64
> counters.
>
> > + return 0;
> > +}
> > +
> > +static int parse_fixed_counter(const char *counter,
> > + unsigned long *bitmap,
> > + bool *fixed)
> > +{
> > + int ret = -ENOENT;
> > + //TODO: this pattern is different on some other platforms
> > + const char *pattern = "Fixed counter ";
> > + int pos = 0;
> > +
> > + if (!strncmp(counter, pattern, strlen(pattern))) {
> > + pos = atoi(counter + strlen(pattern));
> > + ret = set_counter_bitmap(pos, bitmap);
> > + if (ret)
> > + return ret;
> > + *fixed = true;
> > + return 0;
> > + }
> > + return ret;
> > +}
>
> It'd be nice if the counter APIs that are specific to a PMU could be
> part of the struct pmu API.
>
> > +
> > +/**
> > + * parse_counter - Parse event counter info from pmu-events and set up bitmap
> > + * accordingly.
> > + *
> > + * @counter: counter info string to be parsed.
> > + * @bitmap: bitmap to set based on counter info parsed.
> > + * @fixed: is set to true if the event uses fixed counter.
> > + */
> > +static int parse_counter(const char *counter,
> > + unsigned long *bitmap,
> > + bool *fixed)
> > +{
> > + int ret = 0;
> > + char *p;
> > + char *tok;
> > + int pos = 0;
> > +
> > + ret = parse_fixed_counter(counter, bitmap, fixed);
> > + // ret==0 means matched with fixed counter
> > + if (ret == 0)
> > + return ret;
> > +
> > + p = strdup(counter);
> > + tok = strtok(p, ",");
> > + if (!tok)
> > + return -ENOENT;
> > +
> > + while (tok) {
> > + pos = atoi(tok);
> > + ret = set_counter_bitmap(pos, bitmap);
> > + if (ret)
> > + return ret;
> > + tok = strtok(NULL, ",");
> > + }
> > + return 0;
> > +}
> > +
> > +static struct metricgroup__event_info *event_info__new(const char *name,
> > + const char *pmu_name,
> > + const char *counter,
> > + bool free_counter)
> > +{
> > + int ret = 0;
> > + char *bit_buf = malloc(NR_COUNTERS);
> > + bool fixed_counter = false;
> > + struct metricgroup__event_info *e;
> > +
> > + e = zalloc(sizeof(*e));
> > + if (!e)
> > + return NULL;
> > + if (!pmu_name)
> > + pmu_name = "core";
> > +
> > + e->name = name;
> > + e->free_counter = free_counter;
> > + e->pmu_name = strdup(pmu_name);
> > + if (free_counter) {
> > + ret = set_counter_bitmap(0, e->counters);
> > + if (ret)
> > + return NULL;
> > + } else {
> > + ret = parse_counter(counter, e->counters, &fixed_counter);
> > + if (ret)
> > + return NULL;
> > + e->fixed_counter = fixed_counter;
> > + }
> > +
> > + bitmap_scnprintf(e->counters, NR_COUNTERS, bit_buf, NR_COUNTERS);
> > + pr_debug("Event %s requires pmu %s counter: %s bitmap %s, [pmu=%s]\n",
> > + e->name, e->pmu_name, counter, bit_buf, pmu_name);
> > +
> > + return e;
> > +}
> > +
> > +struct metricgroup__add_metric_event_data {
> > + struct list_head *list;
> > + /* pure event name, exclude umask and other info*/
> > + const char *event_name;
> > + /* event name and umask if applicable*/
> > + const char *event_id;
> > +};
> > +
> > +static int metricgroup__add_metric_event_callback(const struct pmu_event *pe,
> > + const struct pmu_events_table *table __maybe_unused,
> > + void *data)
> > +{
> > + struct metricgroup__event_info *event;
> > + struct metricgroup__add_metric_event_data *d = data;
> > +
> > + if (!strcasecmp(pe->name, d->event_name)) {
> > + event = event_info__new(d->event_id, pe->pmu, pe->counter, /*free_counter=*/false);
> > + if (!event)
> > + return -ENOMEM;
> > + list_add(&event->nd, d->list);
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +/**
> > + * get_metricgroup_events - Find counter requirement of events from the
> > + * pmu_events table
> > + * @full_id: the full event identifiers.
> > + * @table: pmu_events table that is searched for event data.
> > + * @event_info_list: the list that the new event counter info added to.
> > + */
> > +static int get_metricgroup_events(const char *full_id,
> > + const struct pmu_events_table *table,
> > + struct list_head *event_info_list)
> > +{
> > + LIST_HEAD(list);
> > + int ret = 0;
> > + const char *id;
> > + const char *rsep, *sep = strchr(full_id, '@');
> > +
> > + if (sep) {
> > + rsep = strchr(full_id, ',');
> > + id = strndup(sep + 1, rsep - sep - 1);
> > + if (ret)
> > + goto out;
> > + } else {
> > + id = full_id;
> > + }
> > + {
> > + struct metricgroup__add_metric_event_data data = {
> > + .list = &list,
> > + .event_name = id,
> > + .event_id = full_id,
> > + };
> > + ret = pmu_events_table_for_each_event(table,
> > + metricgroup__add_metric_event_callback, &data);
> > + }
> > +
> > +out:
> > + list_splice(&list, event_info_list);
> > + return ret;
> > +}
> > +
> > /**
> > * hw_aware_build_grouping - Build event groupings by reading counter
> > * requirement of the events and counter available on the system from
> > @@ -1445,9 +1621,25 @@ static int hw_aware_build_grouping(struct expr_parse_ctx *ctx __maybe_unused,
> > const char *modifier __maybe_unused)
> > {
> > int ret = 0;
> > + struct hashmap_entry *cur;
> > + LIST_HEAD(pmu_info_list);
> > + LIST_HEAD(event_info_list);
> > + size_t bkt;
> > + const struct pmu_events_table *etable = perf_pmu__find_events_table(NULL);
> > +
> > +#define RETURN_IF_NON_ZERO(x) do { if (x) return x; } while (0)
> > + hashmap__for_each_entry(ctx->ids, cur, bkt) {
> > + const char *id = cur->pkey;
> > +
> > + pr_debug("found event %s\n", id);
> > +
> > + ret = get_metricgroup_events(id, etable, &event_info_list);
> > + if (ret)
> > + return ret;
> > + }
> >
> > - pr_debug("This is a placeholder\n");
> > return ret;
> > +#undef RETURN_IF_NON_ZERO
> > }
> >
> > static void group_str_free(struct metricgroup__group_strs *g)
> > @@ -1521,8 +1713,6 @@ static int hw_aware_parse_ids(struct perf_pmu *fake_pmu,
> > *out_evlist = parsed_evlist;
> > parsed_evlist = NULL;
> > err_out:
> > - parse_events_error__exit(&parse_error);
> > - evlist__delete(parsed_evlist);
> > metricgroup__free_grouping_strs(&groupings);
> > return ret;
> > }
> > diff --git a/tools/perf/util/metricgroup.h b/tools/perf/util/metricgroup.h
> > index 89809df85644..3704545c9a11 100644
> > --- a/tools/perf/util/metricgroup.h
> > +++ b/tools/perf/util/metricgroup.h
> > @@ -5,6 +5,7 @@
> > #include <linux/list.h>
> > #include <linux/rbtree.h>
> > #include <stdbool.h>
> > +#include <linux/bitmap.h>
> > #include "pmu-events/pmu-events.h"
> > #include "strbuf.h"
> >
> > @@ -67,6 +68,32 @@ struct metric_expr {
> > int runtime;
> > };
> >
>
> If this is just used in metricgroup.c then it'd be better to keep the
> definitions there for the sake of encapsulation.
I will move these definitions to metricgroup.c.
>
> > +/* Maximum number of counters per PMU*/
> > +#define NR_COUNTERS 16
>
> Fixed and/or generic?
>
> > +/*
> > + * Transfer bit position in the bitmap to ensure start assigning counter from
> > + * the last GP counter to the first.
> > + * bit15 <---> bit0
> > + * [GP8-GP15] [GP0-GP7]
> > + */
> > +#define TRANSFER_FIRST_BYTE(pos) (7 - pos)
> > +#define TRANSFER_SEC_BYTE(pos) (23 - pos)
>
> Not sure why this bit flipping is necessary, if fixed counters are
> separated out to their own bitmap does it help?
>
Kan also commented on this part. I've made some updates to the bitmap-related functions
and am going to send them out in the next version.
The goal of this flipping was to ensure we assign the highest-indexed available counter first.
I did not find a find_last_and_bit(), so I used find_first_and_bit() and then this flipping.
> > +
> > +/**
> > + * An event used in a metric. This info is for metric grouping.
> > + */
> > +struct metricgroup__event_info {
> > + struct list_head nd;
> > + /** The name of the event. */
> > + const char *name;
> > + /** The name of the pmu the event be collected on. */
> > + const char *pmu_name;
>
> Could we use the "struct pmu*" here?
>
> > + bool fixed_counter;
> > + bool free_counter;
>
> Missing comment for *_counter.
>
> Thanks,
> Ian
>
> > + /** The counters the event allowed to be collected on. */
> > + DECLARE_BITMAP(counters, NR_COUNTERS);
> > +};
> > +
> > /**
> > * Each group is one node in the group string list.
> > */
> > --
> > 2.39.3
> >