Message-ID: <CAP-5=fV8fUBZihx-7wQf+DgOqoii5VVMs+Y3kFmg+5JdkD0NQA@mail.gmail.com>
Date: Sat, 23 Mar 2024 22:20:05 -0700
From: Ian Rogers <irogers@...gle.com>
To: weilin.wang@...el.com
Cc: Kan Liang <kan.liang@...ux.intel.com>, Namhyung Kim <namhyung@...nel.org>, 
	Arnaldo Carvalho de Melo <acme@...nel.org>, Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>, 
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, 
	Adrian Hunter <adrian.hunter@...el.com>, linux-perf-users@...r.kernel.org, 
	linux-kernel@...r.kernel.org, Perry Taylor <perry.taylor@...el.com>, 
	Samantha Alt <samantha.alt@...el.com>, Caleb Biggers <caleb.biggers@...el.com>, 
	Mark Rutland <mark.rutland@....com>
Subject: Re: [RFC PATCH v4 09/15] perf stat: Add function to handle special
 events in hardware-grouping

On Thu, Feb 8, 2024 at 7:14 PM <weilin.wang@...el.com> wrote:
>
> From: Weilin Wang <weilin.wang@...el.com>
>
> There are some special events like topdown events and TSC that are not
> described in pmu-event JSON files. Add support to handle this type of
> events. This should be considered as a temporary solution because including
> these events in JSON files would be a better solution.
>
> Signed-off-by: Weilin Wang <weilin.wang@...el.com>
> ---
>  tools/perf/util/metricgroup.c | 38 ++++++++++++++++++++++++++++++++++-
>  1 file changed, 37 insertions(+), 1 deletion(-)
>
> diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
> index 660c6b9b5fa7..a0579b0f81e5 100644
> --- a/tools/perf/util/metricgroup.c
> +++ b/tools/perf/util/metricgroup.c
> @@ -160,6 +160,20 @@ struct metric {
>
>  /* Maximum number of counters per PMU*/
>  #define NR_COUNTERS    16
> +/* Special events that are not described in pmu-event JSON files.
> + * topdown-* and TSC use dedicated registers, set as free
> + * counter for grouping purpose

msr/tsc/ is a software event whose value is read via rdtsc.
Unlike tool events such as duration_time, we want msr/tsc/ in the
group with the other hardware events so that its running/enabled time
scaling matches theirs.

To some extent the topdown- events do already exist in the JSON as
"TOPDOWN.*". Looking at
tools/perf/pmu-events/arch/x86/tigerlake/pipeline.json I see just
TOPDOWN.BACKEND_BOUND_SLOTS. Perhaps we can add the rest there rather
than carry a workaround here?
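
(Roughly along these lines, following the shape of the existing
pipeline.json entries. This is a hypothetical sketch: the EventName is
illustrative and EventCode/UMask/SampleAfterValue are placeholders that
would need to come from the actual event tables, not real values:)

```json
{
    "BriefDescription": "Hypothetical: TMA slots where the pipeline was retiring uops",
    "EventCode": "0x00",
    "EventName": "TOPDOWN.RETIRING_SLOTS",
    "SampleAfterValue": "10000003",
    "UMask": "0x00"
}
```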

If the topdown events are in the JSON and msr/tsc/ is treated like a
software event, as we do here:
https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git/tree/tools/perf/util/parse-events.c?h=perf-tools-next#n1920
then perhaps we don't need the special events category at all?

Thanks,
Ian

> + */
> +enum special_events {
> +       TOPDOWN = 0,
> +       TSC     = 1,
> +       SPECIAL_EVENT_MAX,
> +};
> +
> +static const char *const special_event_names[SPECIAL_EVENT_MAX] = {
> +       "topdown-",
> +       "TSC",
> +};
>
>  /**
>   * An event used in a metric. This info is for metric grouping.
> @@ -2102,6 +2116,15 @@ static int create_grouping(struct list_head *pmu_info_list,
>         return ret;
>  };
>
> +static bool is_special_event(const char *id)
> +{
> +       for (int i = 0; i < SPECIAL_EVENT_MAX; i++) {
> +               if (!strncmp(id, special_event_names[i], strlen(special_event_names[i])))
> +                       return true;
> +       }
> +       return false;
> +}
> +
>  /**
>   * hw_aware_build_grouping - Build event groupings by reading counter
>   * requirement of the events and counter available on the system from
> @@ -2126,6 +2149,17 @@ static int hw_aware_build_grouping(struct expr_parse_ctx *ctx __maybe_unused,
>         hashmap__for_each_entry(ctx->ids, cur, bkt) {
>                 const char *id = cur->pkey;
>
> +               if (is_special_event(id)) {
> +                       struct metricgroup__event_info *event;
> +
> +                       event = event_info__new(id, "default_core", "0",
> +                                               /*free_counter=*/true);
> +                       if (!event)
> +                               goto err_out;
> +
> +                       list_add(&event->nd, &event_info_list);
> +                       continue;
> +               }
>                 ret = get_metricgroup_events(id, etable, &event_info_list);
>                 if (ret)
>                         goto err_out;
> @@ -2597,8 +2631,10 @@ int metricgroup__parse_groups(struct evlist *perf_evlist,
>                 ret = hw_aware_parse_groups(perf_evlist, pmu, str,
>                             metric_no_threshold, user_requested_cpu_list, system_wide,
>                             /*fake_pmu=*/NULL, metric_events, table);
> -               if (!ret)
> +               if (!ret) {
> +                       pr_info("Hardware aware grouping completed\n");
>                         return 0;
> +               }
>         }
>
>         return parse_groups(perf_evlist, pmu, str, metric_no_group, metric_no_merge,
> --
> 2.42.0
>