Message-ID: <CAP-5=fV7hhHYAgDp1kR=gjJb2jdrR7GfgKi1mvob4OavhcyHmg@mail.gmail.com>
Date: Thu, 9 Oct 2025 06:03:53 -0700
From: Ian Rogers <irogers@...gle.com>
To: Tengda Wu <wutengda@...weicloud.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>, 
	Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>, 
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, 
	Adrian Hunter <adrian.hunter@...el.com>, linux-perf-users@...r.kernel.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] perf bpf_counter: Fix opening of "any"(-1) CPU events

On Thu, Oct 9, 2025 at 12:52 AM Tengda Wu <wutengda@...weicloud.com> wrote:
> On 2025/10/9 0:23, Ian Rogers wrote:
> > The bperf BPF counter code doesn't handle "any" (-1) CPU events; it
> > always aggregates a count against a specific CPU, which avoids the
> > need for atomics, so let's not change that. Force evsels used for
> > BPF counters to require a CPU when not in system-wide mode, so that
> > the "any" (-1) value isn't used during map propagation and the
> > evsel's CPU map matches that of the PMU.
> >
> > Fixes: b91917c0c6fa ("perf bpf_counter: Fix handling of cpumap fixing hybrid")
> > Signed-off-by: Ian Rogers <irogers@...gle.com>
> > ---
> >  tools/perf/builtin-stat.c     | 13 +++++++++++++
> >  tools/perf/util/bpf_counter.c |  1 +
> >  2 files changed, 14 insertions(+)
> >
> > diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> > index 7006f848f87a..0fc6884c1bf1 100644
> > --- a/tools/perf/builtin-stat.c
> > +++ b/tools/perf/builtin-stat.c
> > @@ -2540,6 +2540,7 @@ int cmd_stat(int argc, const char **argv)
> >       unsigned int interval, timeout;
> >       const char * const stat_subcommands[] = { "record", "report" };
> >       char errbuf[BUFSIZ];
> > +     struct evsel *counter;
> >
> >       setlocale(LC_ALL, "");
> >
> > @@ -2797,6 +2798,18 @@ int cmd_stat(int argc, const char **argv)
> >
> >       evlist__warn_user_requested_cpus(evsel_list, target.cpu_list);
> >
> > +     evlist__for_each_entry(evsel_list, counter) {
> > +             /*
> > +              * Setup BPF counters to require CPUs as any(-1) isn't
> > +              * supported. evlist__create_maps below will propagate this
> > +              * information to the evsels. Note, evsel__is_bperf isn't yet
> > +              * set up, and this change must happen early, so directly use
> > +              * the bpf_counter variable.
> > +              */
> > +             if (counter->bpf_counter)
> > +                     counter->core.requires_cpu = true;
> > +     }
> > +
> >       if (evlist__create_maps(evsel_list, &target) < 0) {
> >               if (target__has_task(&target)) {
> >                       pr_err("Problems finding threads of monitor\n");
> > diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
> > index ca5d01b9017d..d3e5933b171b 100644
> > --- a/tools/perf/util/bpf_counter.c
> > +++ b/tools/perf/util/bpf_counter.c
> > @@ -495,6 +495,7 @@ static int bperf_reload_leader_program(struct evsel *evsel, int attr_map_fd,
> >        * following evsel__open_per_cpu call
> >        */
> >       evsel->leader_skel = skel;
> > +     assert(!perf_cpu_map__has_any_cpu_or_is_empty(evsel->core.cpus));
> >       evsel__open(evsel, evsel->core.cpus, evsel->core.threads);
> >
> >  out:
>
>
> I must point out that `requires_cpu + evsel__open(evsel, evsel->core.cpus, evsel->core.threads)`
> is not equivalent to the original `evsel__open_per_cpu(evsel, all_cpu_map, -1)`: the former
> specifies a pid, while the latter does not. This leads to an inaccurate final event count.
>
>
> For `evsel__open_per_cpu(evsel, all_cpu_map, -1)`:
>
> $ ./perf stat -vv --bpf-counters -e task-clock ./perf test -w sqrtloop
> sys_perf_event_open: pid -1  cpu 0  group_fd -1  flags 0x8 = 13
> sys_perf_event_open: pid -1  cpu 1  group_fd -1  flags 0x8 = 14
> sys_perf_event_open: pid -1  cpu 2  group_fd -1  flags 0x8 = 15
> [...]
>  Performance counter stats for './perf test -w sqrtloop':
>
>      1,016,156,671      task-clock                       #    1.000 CPUs utilized
>
>        1.016294745 seconds time elapsed
>
>        1.005710000 seconds user
>        0.010637000 seconds sys
>
>
> For `requires_cpu + evsel__open(evsel, evsel->core.cpus, evsel->core.threads)`:
>
> $ ./perf stat -vv --bpf-counters -e task-clock ./perf test -w sqrtloop
> sys_perf_event_open: pid 75099  cpu 0  group_fd -1  flags 0x8 = 13
> sys_perf_event_open: pid 75099  cpu 1  group_fd -1  flags 0x8 = 14
> sys_perf_event_open: pid 75099  cpu 2  group_fd -1  flags 0x8 = 15
> [...]
>  Performance counter stats for './perf test -w sqrtloop':
>
>         16,184,507      task-clock                       #    0.016 CPUs utilized
>
>        1.018540734 seconds time elapsed
>
>        1.009143000 seconds user
>        0.009497000 seconds sys
>
>
> As you can see, once a pid is specified, the task-clock count drops significantly.
> So to correct the counting, we may also need to keep the pid as -1 rather than
> specifying one.
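For reference, the difference Tengda describes maps onto the raw
perf_event_open() arguments as in this minimal sketch (the attr setup,
workload_pid, and fd names are illustrative, not code from the patch;
flags 0x8 in the logs is PERF_FLAG_FD_CLOEXEC):

```c
#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* glibc provides no wrapper for perf_event_open(), so call it directly. */
static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	pid_t workload_pid = 75099;	/* stand-in for the forked workload */

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_TASK_CLOCK;

	/*
	 * evsel__open_per_cpu(evsel, all_cpu_map, -1) case: pid == -1,
	 * cpu == 0 counts every task that runs on CPU 0, matching the
	 * "pid -1" lines in the first log.
	 */
	int fd_per_cpu = perf_event_open(&attr, -1, 0, -1,
					 PERF_FLAG_FD_CLOEXEC);

	/*
	 * requires_cpu + evsel__open(evsel, cpus, threads) case:
	 * pid == workload_pid, cpu == 0 counts that one task, and only
	 * while it runs on CPU 0, matching the "pid 75099" lines in the
	 * second log.
	 */
	int fd_per_task = perf_event_open(&attr, workload_pid, 0, -1,
					  PERF_FLAG_FD_CLOEXEC);

	return (fd_per_cpu < 0 || fd_per_task < 0) ? 1 : 0;
}
```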

Yeah, it looks like the running time is off, and so the count is being scaled:
```
$ perf stat -e task-clock:b,task-clock /tmp/perf/perf test -w noploop

 Performance counter stats for '/tmp/perf/perf test -w noploop':

     3,776,663,297      task-clock:b                     #    3.701 CPUs utilized               (26.96%)
     1,017,400,438      task-clock                       #    0.997 CPUs utilized

       1.020467405 seconds time elapsed

       1.008409000 seconds user
       0.012004000 seconds sys
```
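perf stat scales a raw count by time_enabled/time_running when the
event was not running for the whole time it was enabled, which is what
the "(26.96%)" annotation reports; roughly along the lines of this
sketch (cf. perf_counts_values__scale() in tools/lib/perf; the numbers
below are made up for illustration):

```c
#include <inttypes.h>
#include <stdio.h>

/*
 * Scale a raw count by enabled/running time, as perf stat does when
 * an event was only running for part of its enabled time. If the
 * running time reported by the bperf path is wrong, this scaling
 * distorts the final count.
 */
static uint64_t scale_count(uint64_t raw, uint64_t enabled, uint64_t running)
{
	if (running == 0)
		return 0;
	return (uint64_t)((double)raw * enabled / running);
}

int main(void)
{
	/* Illustrative values: an event running ~26.96% of its enabled
	 * time, as in the annotation above. */
	uint64_t raw = 1018285000, enabled = 1020467405, running = 275118052;

	printf("scaled: %" PRIu64 "\n", scale_count(raw, enabled, running));
	return 0;
}
```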
Will fix in v3.

Thanks,
Ian

> Thanks,
> Tengda
>
