Date: Thu, 30 May 2024 23:40:53 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: weilin.wang@...el.com
Cc: Ian Rogers <irogers@...gle.com>, Arnaldo Carvalho de Melo <acme@...nel.org>, 
	Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>, 
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, 
	Adrian Hunter <adrian.hunter@...el.com>, Kan Liang <kan.liang@...ux.intel.com>, 
	linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org, 
	Perry Taylor <perry.taylor@...el.com>, Samantha Alt <samantha.alt@...el.com>, 
	Caleb Biggers <caleb.biggers@...el.com>
Subject: Re: [RFC PATCH v10 3/8] perf stat: Fork and launch perf record when
 perf stat needs to get retire latency value for a metric.

On Tue, May 28, 2024 at 11:43 PM <weilin.wang@...el.com> wrote:
>
> From: Weilin Wang <weilin.wang@...el.com>
>
> When a retire_latency value is used in a metric formula, evsel forks a perf
> record process with the "-e" and "-W" options. Perf record collects the
> required retire_latency values in parallel while perf stat is collecting
> counting values.
>
> When perf stat stops counting, evsel stops perf record by sending a SIGTERM
> signal to the perf record process. The sampled data is then processed to
> get the retire latency value.
>
> Another thread is required to synchronize between perf stat and perf record
> when data is passed through the pipe.
>
> Signed-off-by: Weilin Wang <weilin.wang@...el.com>
> ---
[SNIP]
> +int tpebs_set_evsel(struct evsel *evsel, int cpu_map_idx, int thread)
> +{
> +       struct perf_counts_values *count;
> +       struct tpebs_retire_lat *t;
> +       bool found = false;
> +       __u64 val;
> +       int ret;
> +
> +       /* A non-retire_latency evsel should never enter this function. */
> +       if (!evsel__is_retire_lat(evsel))
> +               return -1;
> +
> +       ret = tpebs_stop();
> +       if (ret)
> +               return ret;
> +
> +       count = perf_counts(evsel->counts, cpu_map_idx, thread);
> +
> +       list_for_each_entry(t, &tpebs_results, nd) {
> +               if (!strcmp(t->tpebs_name, evsel->name) ||
> +                   !strcmp(t->tpebs_name, evsel->metric_id)) {
> +                       found = true;
> +                       break;
> +               }
> +       }
> +
> +       /* Set ena and run to non-zero */
> +       count->ena = count->run = 1;
> +       count->lost = 0;
> +
> +       if (!found) {
> +               /*
> +                * Set the value to 0 when retire_latency for this event is
> +                * not found in the sampling data (enable_tpebs_recording not
> +                * set or no samples recorded).
> +                */
> +               count->val = 0;
> +               return 0;
> +       }
> +
> +       /*
> +        * Only set the retire_latency value for the first CPU and thread.
> +        */
> +       if (cpu_map_idx == 0 && thread == 0) {
> +               /* Lost precision when casting from double to __u64. Any improvement? */

As I said before, I think you can store t->val * 1000 and then
set evsel->scale to 1e3 or 1e-3.

Thanks,
Namhyung
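
For reference, a minimal standalone sketch of that pre-scaling idea (this is
not perf code; struct fake_counter and store_retire_lat are made-up names
used only for illustration): the sampled value is stored multiplied by 1000
as an integer, and a scale of 1e-3 recovers the fraction when the counter is
read back.

/*
 * Standalone illustration (not perf code): keep milli-precision of a
 * double retire_latency value in a __u64-style counter by storing it
 * pre-multiplied by 1000 and applying an evsel-style scale of 1e-3
 * when the value is read back.
 */
#include <stdio.h>
#include <stdint.h>

struct fake_counter {
        uint64_t val;   /* stored as retire_latency * 1000 */
        double scale;   /* applied on read-out, here 1e-3 */
};

static void store_retire_lat(struct fake_counter *c, double retire_lat)
{
        c->val = (uint64_t)(retire_lat * 1000.0);
        c->scale = 1e-3;
}

int main(void)
{
        struct fake_counter c;

        store_retire_lat(&c, 12.3456);
        printf("raw=%llu scaled=%.3f\n",
               (unsigned long long)c.val, (double)c.val * c.scale);
        return 0;
}

Read out this way, a sampled average of 12.3456 survives as 12.345 instead
of being truncated to 12 by the cast to __u64.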


> +               val = t->val;
> +       } else {
> +               val = 0;
> +       }
> +
> +       count->val = val;
> +       return 0;
> +}
> +
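
For context, here is a rough self-contained sketch of the fork/pipe/SIGTERM
flow the commit message above describes (this is not the patch's evsel/tpebs
code; the raw event string and the "sleep 10" workload are placeholders):

/*
 * Rough standalone sketch of the flow described in the commit message:
 * fork a "perf record -W" child writing sample data to a pipe, drain
 * the pipe on a separate thread while counting is in progress, then
 * stop the child with SIGTERM once counting is done.  The event string
 * and the "sleep 10" workload below are placeholders.
 */
#include <pthread.h>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static int pipe_fd[2];

static void *reader_thread(void *arg)
{
        char buf[4096];

        /* The real patch parses these samples for retire latency;
         * here we simply drain the pipe until perf record exits. */
        while (read(pipe_fd[0], buf, sizeof(buf)) > 0)
                ;
        return arg;
}

int main(void)
{
        pthread_t tid;
        pid_t child;

        if (pipe(pipe_fd))
                return 1;

        child = fork();
        if (child == 0) {
                /* Child: perf record writes its data to the pipe. */
                dup2(pipe_fd[1], STDOUT_FILENO);
                close(pipe_fd[0]);
                close(pipe_fd[1]);
                execlp("perf", "perf", "record", "-W",
                       "-e", "cpu/event=0xc2,umask=0x2/p",  /* placeholder */
                       "-o", "-", "--", "sleep", "10", (char *)NULL);
                _exit(127);
        }

        close(pipe_fd[1]);
        pthread_create(&tid, NULL, reader_thread, NULL);

        sleep(2);                /* stand-in for the counting interval */
        kill(child, SIGTERM);    /* stop perf record when counting stops */

        pthread_join(tid, NULL);
        waitpid(child, NULL, 0);
        return 0;
}

Build with -pthread; the real implementation parses the piped perf.data
stream instead of discarding it.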
