Message-ID: <20210120163031.GU12699@kernel.org>
Date:   Wed, 20 Jan 2021 13:30:31 -0300
From:   Arnaldo Carvalho de Melo <acme@...nel.org>
To:     Song Liu <songliubraving@...com>
Cc:     open list <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Jiri Olsa <jolsa@...hat.com>, Kernel Team <Kernel-team@...com>
Subject: FIX Re: [PATCH v7 3/3] perf-stat: enable counting events for BPF
 programs

On Wed, Jan 20, 2021 at 10:50:13AM -0300, Arnaldo Carvalho de Melo wrote:
> So sizeof(struct bpf_perf_event_value) == 24 and it is a per-cpu array; the
> machine has 24 CPUs, so why does the kernel think it has more and end up
> zeroing entries past the 24th? Some per-cpu map subtlety (or obvious
> thing ;-\) that I'm missing?
> 
> Checking lookups into per-cpu maps in sample code now...
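
An aside on the semantics in play here: a user-space lookup on a per-cpu
map, e.g. a BPF_MAP_TYPE_PERCPU_ARRAY, copies one value per *possible* CPU
into the caller's buffer, so the buffer has to be sized with
libbpf_num_possible_cpus(). A minimal standalone sketch, with a made-up
map fd and function name, not the perf code itself:

#include <bpf/bpf.h>		/* bpf_map_lookup_elem() */
#include <bpf/libbpf.h>		/* libbpf_num_possible_cpus() */
#include <linux/bpf.h>		/* struct bpf_perf_event_value */
#include <stdio.h>

int read_percpu_values(int map_fd)	/* fd of a per-cpu array map */
{
	int cpu, ncpus = libbpf_num_possible_cpus();	/* possible, not online */
	__u32 key = 0;

	if (ncpus < 0)
		return ncpus;

	struct bpf_perf_event_value values[ncpus];

	/* the kernel fills one entry per possible CPU; a smaller,
	 * online-CPU-sized buffer would be overrun */
	if (bpf_map_lookup_elem(map_fd, &key, values))
		return -1;

	for (cpu = 0; cpu < ncpus; cpu++)
		printf("cpu %3d: %llu\n", cpu, values[cpu].counter);
	return 0;
}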
 
(gdb) run stat -b 244 -I 1000 -e cycles
Starting program: /root/bin/perf stat -b 244 -I 1000 -e cycles
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
libbpf: elf: skipping unrecognized data section(9) .eh_frame
libbpf: elf: skipping relo section(15) .rel.eh_frame for section(9) .eh_frame

Breakpoint 1, bpf_program_profiler__read (evsel=0xce02c0) at util/bpf_counter.c:217
217		if (list_empty(&evsel->bpf_counter_list))
(gdb) p num_
num_cpu              num_groups           num_leaps            num_print_iv         num_stmts            num_transitions      num_warnings_issued
num_cpu_bpf          num_ifs              num_print_interval   num_srcfiles         num_to_str           num_types
(gdb) p num_cpu
$1 = 24
(gdb) p num_cpu_bpf
$2 = 32
(gdb)

Humm, why?

But then libbpf and the samples/bpf/ code size these per-cpu buffers with
libbpf_num_possible_cpus(), so:


diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
index 8c977f038f497fc1..7dd3d57aba4f620c 100644
--- a/tools/perf/util/bpf_counter.c
+++ b/tools/perf/util/bpf_counter.c
@@ -207,7 +207,8 @@ static int bpf_program_profiler__enable(struct evsel *evsel)
 static int bpf_program_profiler__read(struct evsel *evsel)
 {
 	int num_cpu = evsel__nr_cpus(evsel);
-	struct bpf_perf_event_value values[num_cpu];
+	int num_cpu_bpf = libbpf_num_possible_cpus();
+	struct bpf_perf_event_value values[num_cpu > num_cpu_bpf ? num_cpu : num_cpu_bpf];
 	struct bpf_counter *counter;
 	int reading_map_fd;
 	__u32 key = 0;

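The idea being: sized to the larger of the two counts, the values[] array
can hold the kernel's one-entry-per-possible-CPU copy-out without
overrunning the stack, while the aggregation loop in the same function can
keep iterating over evsel__nr_cpus(evsel).
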
-------------------------------------------------------------

[root@...e ~]# cat /sys/devices/system/cpu/possible
0-31
[root@...e ~]#
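
That 32 is where num_cpu_bpf above comes from: libbpf_num_possible_cpus()
parses exactly this file and caches the result. A simplified sketch of
that parsing; the real libbpf code also copes with comma-separated lists
like "0-3,8-11":

#include <stdio.h>

static int num_possible_cpus_sketch(void)
{
	FILE *f = fopen("/sys/devices/system/cpu/possible", "r");
	int start, end, ret;

	if (!f)
		return -1;
	ret = fscanf(f, "%d-%d", &start, &end);
	fclose(f);
	if (ret == 2)		/* e.g. "0-31" -> 32 CPUs */
		return end - start + 1;
	if (ret == 1)		/* a single CPU, e.g. "0" */
		return 1;
	return -1;
}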

I bet that on your test systems evsel__nr_cpus(evsel) matches
/sys/devices/system/cpu/possible, and thus you don't see the problem.

evsel__nr_cpus(evsel) uses what is in:

[acme@...e perf]$ cat /sys/devices/system/cpu/online
0-23
[acme@...e perf]$

So that is the reason for the problem, and the fix is to use
libbpf_num_possible_cpus(). I'll fold that into the patch that
introduced that code.

- Arnaldo
