Message-ID: <CAM9d7ciYxZ0Fw8FhCP64qzCNYKMrV-L6npWeNjkMnY64196MQg@mail.gmail.com>
Date: Fri, 19 Jul 2024 17:27:18 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: "Wang, Weilin" <weilin.wang@...el.com>
Cc: Ian Rogers <irogers@...gle.com>, Arnaldo Carvalho de Melo <acme@...nel.org>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
"Hunter, Adrian" <adrian.hunter@...el.com>, Kan Liang <kan.liang@...ux.intel.com>,
"linux-perf-users@...r.kernel.org" <linux-perf-users@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "Taylor, Perry" <perry.taylor@...el.com>,
"Alt, Samantha" <samantha.alt@...el.com>, "Biggers, Caleb" <caleb.biggers@...el.com>
Subject: Re: [RFC PATCH v17 3/8] perf stat: Fork and launch perf record when
perf stat needs to get retire latency value for a metric.
On Thu, Jul 18, 2024 at 4:55 PM Wang, Weilin <weilin.wang@...el.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Wang, Weilin
> > Sent: Wednesday, July 17, 2024 11:28 PM
> > To: Namhyung Kim <namhyung@...nel.org>
> > Cc: Ian Rogers <irogers@...gle.com>; Arnaldo Carvalho de Melo
> > <acme@...nel.org>; Peter Zijlstra <peterz@...radead.org>; Ingo Molnar
> > <mingo@...hat.com>; Alexander Shishkin
> > <alexander.shishkin@...ux.intel.com>; Jiri Olsa <jolsa@...nel.org>; Hunter,
> > Adrian <adrian.hunter@...el.com>; Kan Liang <kan.liang@...ux.intel.com>;
> > linux-perf-users@...r.kernel.org; linux-kernel@...r.kernel.org; Taylor, Perry
> > <perry.taylor@...el.com>; Alt, Samantha <samantha.alt@...el.com>; Biggers,
> > Caleb <caleb.biggers@...el.com>
> > Subject: RE: [RFC PATCH v17 3/8] perf stat: Fork and launch perf record when
> > perf stat needs to get retire latency value for a metric.
> >
> >
> >
> > > -----Original Message-----
> > > From: Namhyung Kim <namhyung@...nel.org>
> > > Sent: Wednesday, July 17, 2024 10:56 PM
> > > To: Wang, Weilin <weilin.wang@...el.com>
> > > Cc: Ian Rogers <irogers@...gle.com>; Arnaldo Carvalho de Melo
> > > <acme@...nel.org>; Peter Zijlstra <peterz@...radead.org>; Ingo Molnar
> > > <mingo@...hat.com>; Alexander Shishkin
> > > <alexander.shishkin@...ux.intel.com>; Jiri Olsa <jolsa@...nel.org>; Hunter,
> > > Adrian <adrian.hunter@...el.com>; Kan Liang <kan.liang@...ux.intel.com>;
> > > linux-perf-users@...r.kernel.org; linux-kernel@...r.kernel.org; Taylor, Perry
> > > <perry.taylor@...el.com>; Alt, Samantha <samantha.alt@...el.com>; Biggers,
> > > Caleb <caleb.biggers@...el.com>
> > > Subject: Re: [RFC PATCH v17 3/8] perf stat: Fork and launch perf record when
> > > perf stat needs to get retire latency value for a metric.
> > >
> > > On Fri, Jul 12, 2024 at 03:09:25PM -0400, weilin.wang@...el.com wrote:
> > > > From: Weilin Wang <weilin.wang@...el.com>
> > > >
> > > > When a retire_latency value is used in a metric formula, evsel forks a perf
> > > > record process with the "-e" and "-W" options. Perf record collects the
> > > > required retire_latency values in parallel while perf stat is collecting
> > > > counting values.
> > > >
> > > > When perf stat stops counting, evsel stops perf record by sending a SIGTERM
> > > > signal to the perf record process. The sampled data is then processed to get
> > > > the retire latency value. Another thread is required to synchronize between
> > > > perf stat and perf record when we pass data through the pipe.
> > > >
> > > > The retire_latency evsel is not opened for perf stat so that no counter is
> > > > wasted on it. This commit includes code suggested by Namhyung to adjust the
> > > > reading size for groups that include retire_latency evsels.
> > > >
> > > > Signed-off-by: Weilin Wang <weilin.wang@...el.com>
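
For readers following along, here is a minimal, self-contained sketch of the
mechanism the commit message describes: fork a "perf record" child with
--control fd:ctl,ack, send "enable", wait for the "ack\n", and later stop it
with SIGTERM. This is not the patch's start_perf_record(); the event name
(cycles:p), output path and "sleep 5" workload are placeholders used only to
illustrate the control-fd handshake:

#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	int ctl[2], ack[2];
	char buf[8] = { 0 }, ctl_opt[64];
	struct pollfd pfd = { .events = POLLIN };
	pid_t pid;

	if (pipe(ctl) < 0 || pipe(ack) < 0)
		return 1;

	pid = fork();
	if (pid == 0) {
		/* Child: the sampling side, started with events disabled (--delay=-1). */
		snprintf(ctl_opt, sizeof(ctl_opt), "fd:%d,%d", ctl[0], ack[1]);
		execlp("perf", "perf", "record", "-W", "-e", "cycles:p",
		       "--control", ctl_opt, "--delay=-1",
		       "-o", "/tmp/tpebs.data", "sleep", "5", (char *)NULL);
		return 127;
	}

	/* Parent: the "perf stat" side enables sampling through the control fd. */
	close(ctl[0]);
	close(ack[1]);
	if (write(ctl[1], "enable", strlen("enable")) < 0)
		return 1;

	/* Wait for the ack; 3000ms is the empirical timeout used in the patch. */
	pfd.fd = ack[0];
	if (poll(&pfd, 1, 3000) <= 0 || !(pfd.revents & POLLIN)) {
		fprintf(stderr, "no ack from perf record\n");
		kill(pid, SIGTERM);
		return 1;
	}
	if (read(ack[0], buf, sizeof(buf) - 1) > 0 && !strncmp(buf, "ack", 3))
		fprintf(stderr, "perf record is sampling\n");

	sleep(1);		/* ... counting would happen here ... */
	kill(pid, SIGTERM);	/* stop sampling when counting stops */
	waitpid(pid, NULL, 0);
	return 0;
}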
[SNIP]
> > > > + /*
> > > > + * Prepare perf record for sampling event retire_latency before fork and
> > > > + * prepare workload
> > > > + */
> > > > + evlist__for_each_entry(evsel_list, evsel) {
> > > > + int i;
> > > > + char *name;
> > > > + struct tpebs_retire_lat *new;
> > > > +
> > > > + if (!evsel->retire_lat)
> > > > + continue;
> > > > +
> > > > + pr_debug("tpebs: Retire_latency of event %s is required\n",
> > > evsel->name);
> > > > + for (i = strlen(evsel->name) - 1; i > 0; i--) {
> > > > + if (evsel->name[i] == 'R')
> > > > + break;
> > > > + }
> > > > + if (i <= 0 || evsel->name[i] != 'R') {
> > > > + ret = -1;
> > > > + goto err;
> > > > + }
> > > > +
> > > > + name = strdup(evsel->name);
> > > > + if (!name) {
> > > > + ret = -ENOMEM;
> > > > + goto err;
> > > > + }
> > > > + name[i] = 'p';
> > > > +
> > > > + new = zalloc(sizeof(*new));
> > > > + if (!new) {
> > > > + ret = -1;
> > > > + zfree(name);
> > > > + goto err;
> > > > + }
> > > > + new->name = name;
> > > > + new->tpebs_name = evsel->name;
> > > > + list_add_tail(&new->nd, &tpebs_results);
> > > > + tpebs_event_size += 1;
> > > > + }
> > > > +
> > > > + if (tpebs_event_size > 0) {
> > > > + struct pollfd pollfd = { .events = POLLIN, };
> > > > + int control_fd[2], ack_fd[2], len;
> > > > + char ack_buf[8];
> > > > +
> > > > + /* Create control and ack fd for --control */
> > > > + if (pipe(control_fd) < 0) {
> > > > + pr_err("tpebs: Failed to create control fifo");
> > > > + ret = -1;
> > > > + goto out;
> > > > + }
> > > > + if (pipe(ack_fd) < 0) {
> > > > + pr_err("tpebs: Failed to create control fifo");
> > > > + ret = -1;
> > > > + goto out;
> > > > + }
> > > > +
> > > > + ret = start_perf_record(control_fd, ack_fd, cpumap_buf);
> > > > + if (ret)
> > > > + goto out;
> > > > + tpebs_pid = tpebs_cmd->pid;
> > > > + if (pthread_create(&tpebs_reader_thread, NULL, __sample_reader, tpebs_cmd)) {
> > > > + kill(tpebs_cmd->pid, SIGTERM);
> > > > + close(tpebs_cmd->out);
> > > > + pr_err("Could not create thread to process sample
> > > data.\n");
> > > > + ret = -1;
> > > > + goto out;
> > > > + }
> > > > + /* Wait for perf record initialization.*/
> > > > + len = strlen("enable");
> > > > + ret = write(control_fd[1], "enable", len);
> > >
> > > Can we use EVLIST_CTL_CMD_ENABLE_TAG instead?
> > >
> > >
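
For reference, a minimal sketch of that suggestion, reusing the tags already
defined in util/evlist.h, would presumably look like:

	len = strlen(EVLIST_CTL_CMD_ENABLE_TAG);
	ret = write(control_fd[1], EVLIST_CTL_CMD_ENABLE_TAG, len);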
> > > > + if (ret != len) {
> > > > + pr_err("perf record control write control message
> > > failed\n");
> > > > + goto out;
> > > > + }
> > > > +
> > > > + /* wait for an ack */
> > > > + pollfd.fd = ack_fd[0];
> > > > +
> > > > + /*
> > > > + * We need this poll to ensure the ack_fd PIPE will not hang
> > > > + * when perf record failed for any reason. The timeout value
> > > > + * 3000ms is an empirical selection.
> > > > + */
> > >
> > > Oh, you changed it to 3 sec. But I think it's ok as we don't wait
> > > that long in the normal cases.
> >
> > Hi Namhyung,
> >
> > I found it's more reliable to use 3 secs because in some of my test cases 2 secs
> > were not enough for perf record to reach the point of sending the ACK back.
>
> Does this 3 sec wait look good to you? Please let me know if you have other suggestions.
I have no idea or preference. It'd be ok if it works for you.
But I'm just curious in what situation 2 seconds was not enough.
Thanks,
Namhyung
> > >
> > > > + if (!poll(&pollfd, 1, 3000)) {
> > > > + pr_err("tpebs failed: perf record ack timeout\n");
> > > > + ret = -1;
> > > > + goto out;
> > > > + }
> > > > +
> > > > + if (!(pollfd.revents & POLLIN)) {
> > > > + pr_err("tpebs failed: did not received an ack\n");
> > > > + ret = -1;
> > > > + goto out;
> > > > + }
> > > > +
> > > > + ret = read(ack_fd[0], ack_buf, sizeof(ack_buf));
> > > > + if (ret > 0)
> > > > + ret = strcmp(ack_buf, "ack\n");
> > >
> > > Same for EVLIST_CTL_CMD_ACK_TAG.
> > >
> > >
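
Presumably simply:

	ret = strcmp(ack_buf, EVLIST_CTL_CMD_ACK_TAG);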
> > > > + else {
> > > > + pr_err("tpebs: perf record control ack failed\n");
> > > > + goto out;
> > > > + }
> > > > +out:
> > > > + close(control_fd[0]);
> > > > + close(control_fd[1]);
> > > > + close(ack_fd[0]);
> > > > + close(ack_fd[1]);
> > > > + }
> > > > +err:
> > > > + if (ret)
> > > > + tpebs_delete();
> > > > + return ret;
> > > > +}
> > > > +
> > > > +
> > > > +int tpebs_set_evsel(struct evsel *evsel, int cpu_map_idx, int thread)
> > > > +{
> > > > + __u64 val;
> > > > + bool found = false;
> > > > + struct tpebs_retire_lat *t;
> > > > + struct perf_counts_values *count;
> > > > +
> > > > + /* Non retire_latency evsel should never enter this function. */
> > > > + if (!evsel__is_retire_lat(evsel))
> > > > + return -1;
> > > > +
> > > > + /*
> > > > + * Need to stop the forked perf record so we can get the sampled data
> > > > + * from the pipe to process and get a non-zero retire_lat value for hybrid.
> > > > + */
> > > > + tpebs_stop();
> > > > + count = perf_counts(evsel->counts, cpu_map_idx, thread);
> > > > +
> > > > + list_for_each_entry(t, &tpebs_results, nd) {
> > > > + if (t->tpebs_name == evsel->name || (evsel->metric_id && !strcmp(t->tpebs_name, evsel->metric_id))) {
> > >
> > > This line is too long, please break.
> > >
> > > Thanks,
> > > Namhyung
> > >
> > >
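
For example, one way to wrap it without changing the comparison could be:

	if (t->tpebs_name == evsel->name ||
	    (evsel->metric_id &&
	     !strcmp(t->tpebs_name, evsel->metric_id))) {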
> > > > + found = true;
> > > > + break;
> > > > + }
> > > > + }
> > > > +
> > > > + /* Set ena and run to non-zero */
> > > > + count->ena = count->run = 1;
> > > > + count->lost = 0;
> > > > +
> > > > + if (!found) {
> > > > + /*
> > > > + * Set the default value of 0 when retire_latency for this event is
> > > > + * not found in the sampling data (record_tpebs not set or 0
> > > > + * samples recorded).
> > > > + */
> > > > + count->val = 0;
> > > > + return 0;
> > > > + }
> > > > +
> > > > + /*
> > > > + * Only set retire_latency value to the first CPU and thread.
> > > > + */
> > > > + if (cpu_map_idx == 0 && thread == 0)
> > > > + val = rint(t->val);
> > > > + else
> > > > + val = 0;
> > > > +
> > > > + count->val = val;
> > > > + return 0;
> > > > +}
> > > > +
> > > > +static void tpebs_retire_lat__delete(struct tpebs_retire_lat *r)
> > > > +{
> > > > + zfree(&r->name);
> > > > + free(r);
> > > > +}
> > > > +
> > > > +
> > > > +/*
> > > > + * tpebs_delete - delete tpebs related data and stop the created thread and
> > > > + * process by calling tpebs_stop().
> > > > + *
> > > > + * This function is called from evlist_delete() and also from builtin-stat
> > > > + * stat_handle_error(). If tpebs_start() is called from places other than perf
> > > > + * stat, we need to ensure tpebs_delete() is also called to safely free memory
> > > > + * and close the data read thread and the forked perf record process.
> > > > + *
> > > > + * This function is also called in evsel__close() to be symmetric with
> > > > + * tpebs_start() being called in evsel__open(). We will update this call site
> > > > + * when we move tpebs_start() to the evlist level.
> > > > + */
> > > > +void tpebs_delete(void)
> > > > +{
> > > > + struct tpebs_retire_lat *r, *rtmp;
> > > > +
> > > > + if (tpebs_pid == -1)
> > > > + return;
> > > > +
> > > > + tpebs_stop();
> > > > +
> > > > + list_for_each_entry_safe(r, rtmp, &tpebs_results, nd) {
> > > > + list_del_init(&r->nd);
> > > > + tpebs_retire_lat__delete(r);
> > > > + }
> > > > +
> > > > + if (tpebs_cmd) {
> > > > + free(tpebs_cmd);
> > > > + tpebs_cmd = NULL;
> > > > + }
> > > > +}
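
For context, a minimal sketch of the symmetric call pattern the comment above
describes; the exact placement in evsel__open()/evsel__close() and builtin-stat's
stat_handle_error() follows that comment, and the snippet is illustrative only:

	/* In evsel__open(): start the forked perf record for retire_latency evsels. */
	if (evsel__is_retire_lat(evsel))
		return tpebs_start(evsel->evlist);

	/* In evsel__close() (and builtin-stat's error handling): tear it down again. */
	if (evsel__is_retire_lat(evsel))
		tpebs_delete();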
> > > > diff --git a/tools/perf/util/intel-tpebs.h b/tools/perf/util/intel-tpebs.h
> > > > new file mode 100644
> > > > index 000000000000..766b3fbd79f1
> > > > --- /dev/null
> > > > +++ b/tools/perf/util/intel-tpebs.h
> > > > @@ -0,0 +1,35 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0-only */
> > > > +/*
> > > > + * intel-tpebs.h: Intel TPEBS support
> > > > + */
> > > > +#ifndef INCLUDE__PERF_INTEL_TPEBS_H__
> > > > +#define INCLUDE__PERF_INTEL_TPEBS_H__
> > > > +
> > > > +#include "stat.h"
> > > > +#include "evsel.h"
> > > > +
> > > > +#ifdef HAVE_ARCH_X86_64_SUPPORT
> > > > +
> > > > +extern bool tpebs_recording;
> > > > +int tpebs_start(struct evlist *evsel_list);
> > > > +void tpebs_delete(void);
> > > > +int tpebs_set_evsel(struct evsel *evsel, int cpu_map_idx, int thread);
> > > > +
> > > > +#else
> > > > +
> > > > +static inline int tpebs_start(struct evlist *evsel_list __maybe_unused)
> > > > +{
> > > > + return 0;
> > > > +}
> > > > +
> > > > +static inline void tpebs_delete(void) {};
> > > > +
> > > > +static inline int tpebs_set_evsel(struct evsel *evsel __maybe_unused,
> > > > + int cpu_map_idx __maybe_unused,
> > > > + int thread __maybe_unused)
> > > > +{
> > > > + return 0;
> > > > +}
> > > > +
> > > > +#endif
> > > > +#endif
> > > > --
> > > > 2.43.0
> > > >