Message-ID: <Y5iPsjF/lEsEldU8@kernel.org>
Date: Tue, 13 Dec 2022 11:44:02 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: James Clark <james.clark@....com>
Cc: Adrián Herrera Arcila <adrian.herrera@....com>,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
leo.yan@...aro.org, songliubraving@...com, peterz@...radead.org,
mingo@...hat.com, mark.rutland@....com,
alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
namhyung@...nel.org
Subject: Re: [PATCH 2/2] perf stat: fix unexpected delay behaviour
On Mon, Aug 01, 2022 at 09:20:37AM +0100, James Clark wrote:
>
>
> On 29/07/2022 17:12, Adrián Herrera Arcila wrote:
> > The described --delay behaviour is to delay the enablement of events, but
> > not the execution of the command, if one is passed; currently, the command
> > is incorrectly delayed as well.
> >
> > This patch decouples the enablement from the delay, and enables events
> > before or after launching the workload depending on the options passed
> > by the user. This code structure is inspired by that in perf-record, and
> > tries to be consistent with it.
> >
> > Link: https://lore.kernel.org/linux-perf-users/7BFD066E-B0A8-49D4-B635-379328F0CF4C@fb.com
> > Fixes: d0a0a511493d ("perf stat: Fix forked applications enablement of counters")
> > Signed-off-by: Adrián Herrera Arcila <adrian.herrera@....com>
> > ---
> > tools/perf/builtin-stat.c | 56 ++++++++++++++++++++++-----------------
> > 1 file changed, 32 insertions(+), 24 deletions(-)
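
To make the intended semantics concrete (the workload name below is just a
placeholder, not something from the patch):

  # workload is exec'ed immediately; counters are only enabled ~1s later
  perf stat --delay 1000 -- <workload>

  # workload runs with counters disabled; no automatic enablement
  perf stat --delay -1 -- <workload>

Before this fix the first invocation also delayed the exec of <workload>
itself, not just the counter enablement.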
>
> Looks good to me. Fixes the counter delay issue and the code is pretty
> similar to perf record now. Although I would wait for Leo's or Song's
> comment as well because they were involved.
I think I didn't notice Leo's ack; it still applies, so I'm applying the
patch now.
- Arnaldo
> Reviewed-by: James Clark <james.clark@....com>
>
> >
> > diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> > index 318ffd119dad..f98c823b16dd 100644
> > --- a/tools/perf/builtin-stat.c
> > +++ b/tools/perf/builtin-stat.c
> > @@ -559,7 +559,7 @@ static bool handle_interval(unsigned int interval, int *times)
> > return false;
> > }
> >
> > -static int enable_counters(void)
> > +static int enable_bpf_counters(void)
> > {
> > struct evsel *evsel;
> > int err;
> > @@ -572,28 +572,6 @@ static int enable_counters(void)
> > if (err)
> > return err;
> > }
> > -
> > - if (stat_config.initial_delay < 0) {
> > - pr_info(EVLIST_DISABLED_MSG);
> > - return 0;
> > - }
> > -
> > - if (stat_config.initial_delay > 0) {
> > - pr_info(EVLIST_DISABLED_MSG);
> > - usleep(stat_config.initial_delay * USEC_PER_MSEC);
> > - }
> > -
> > - /*
> > - * We need to enable counters only if:
> > - * - we don't have tracee (attaching to task or cpu)
> > - * - we have initial delay configured
> > - */
> > - if (!target__none(&target) || stat_config.initial_delay) {
> > - if (!all_counters_use_bpf)
> > - evlist__enable(evsel_list);
> > - if (stat_config.initial_delay > 0)
> > - pr_info(EVLIST_ENABLED_MSG);
> > - }
> > return 0;
> > }
> >
> > @@ -966,10 +944,24 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
> > return err;
> > }
> >
> > - err = enable_counters();
> > + err = enable_bpf_counters();
> > if (err)
> > return -1;
> >
> > + /*
> > + * Enable events manually here if perf-stat is run:
> > + * 1. with a target (any of --all-cpus, --cpu, --pid or --tid)
> > + * 2. without measurement delay (no --delay)
> > + * 3. without all events associated to BPF
> > + *
> > + * This is because if run with a target, events are not enabled
> > + * on exec if a workload is passed, and because there is no delay
> > + * we ensure to enable them before the workload starts
> > + */
> > + if (!target__none(&target) && !stat_config.initial_delay &&
> > + !all_counters_use_bpf)
> > + evlist__enable(evsel_list);
> > +
> > /* Exec the command, if any */
> > if (forks)
> > evlist__start_workload(evsel_list);
> > @@ -977,6 +969,22 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
> > t0 = rdclock();
> > clock_gettime(CLOCK_MONOTONIC, &ref_time);
> >
> > + /*
> > + * If a measurement delay was specified, start it, and if positive,
> > + * enable events manually after. We respect the delay even if all
> > + * events are associated to BPF
> > + */
> > + if (stat_config.initial_delay) {
> > + /* At this point, events are guaranteed disabled */
> > + pr_info(EVLIST_DISABLED_MSG);
> > + if (stat_config.initial_delay > 0) {
> > + usleep(stat_config.initial_delay * USEC_PER_MSEC);
> > + if (!all_counters_use_bpf)
> > + evlist__enable(evsel_list);
> > + pr_info(EVLIST_ENABLED_MSG);
> > + }
> > + }
> > +
> > if (forks) {
> > if (interval || timeout || evlist__ctlfd_initialized(evsel_list))
> > 		status = dispatch_events(forks, timeout, interval, &times);
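
For readers skimming the diff, the resulting ordering can be reduced to a
standalone sketch. All names below (have_target, delay_ms, enable_events,
start_workload) are made up for illustration only; they merely mirror the
flow of __run_perf_stat() after this patch, they are not perf APIs:

  #include <stdbool.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Illustrative stand-ins for the real perf-stat state. */
  static bool have_target;           /* --all-cpus, --cpu, --pid or --tid  */
  static int  delay_ms;              /* --delay value, 0 when not given    */
  static bool all_counters_use_bpf;  /* every event handled by a BPF prog  */

  static void enable_events(void)  { puts("events enabled"); }
  static void start_workload(void) { puts("workload exec'ed"); }

  int main(void)
  {
          /* With a target and no delay, enable before the workload runs,
           * since enable-on-exec does not apply when attaching to a target. */
          if (have_target && !delay_ms && !all_counters_use_bpf)
                  enable_events();

          start_workload();          /* no longer delayed */

          /* The delay only postpones enablement, after the workload started. */
          if (delay_ms) {
                  puts("events disabled");
                  if (delay_ms > 0) {
                          usleep(delay_ms * 1000);
                          if (!all_counters_use_bpf)
                                  enable_events();
                          puts("events enabled");
                  }
          }
          return 0;
  }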
--
- Arnaldo