Message-ID: <aJPc2NvJqLOGaIKl@google.com>
Date: Wed, 6 Aug 2025 15:53:12 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Ilya Leoshkevich <iii@...ux.ibm.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
	Daniel Borkmann <daniel@...earbox.net>,
	Andrii Nakryiko <andrii@...nel.org>,
	Ian Rogers <irogers@...gle.com>,
	Arnaldo Carvalho de Melo <acme@...nel.org>, bpf@...r.kernel.org,
	linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-s390@...r.kernel.org, Thomas Richter <tmricht@...ux.ibm.com>,
	Jiri Olsa <jolsa@...nel.org>, Heiko Carstens <hca@...ux.ibm.com>,
	Vasily Gorbik <gor@...ux.ibm.com>,
	Alexander Gordeev <agordeev@...ux.ibm.com>
Subject: Re: [PATCH v4 2/2] perf bpf-filter: Enable events manually

Hello,

On Wed, Aug 06, 2025 at 01:40:35PM +0200, Ilya Leoshkevich wrote:
> On s390 and, in general, on all platforms where the respective event
> supports auxiliary data gathering, the command:
> 
>    # ./perf record -u 0 -aB --synth=no -- ./perf test -w thloop
>    [ perf record: Woken up 1 times to write data ]
>    [ perf record: Captured and wrote 0.011 MB perf.data ]
>    # ./perf report --stats | grep SAMPLE
>    #
> 
> does not generate samples in the perf.data file. On x86 the command:
> 
>   # sudo perf record -e intel_pt// -u 0 ls
> 
> is broken too.
> 
> Looking at the sequence of calls in 'perf record' reveals this
> behavior:
> 
> 1. The event 'cycles' is created and enabled:
> 
>    record__open()
>    +-> evlist__apply_filters()
>        +-> perf_bpf_filter__prepare()
> 	   +-> bpf_program.attach_perf_event()
> 	       +-> bpf_program.attach_perf_event_opts()
> 	           +-> __GI___ioctl(..., PERF_EVENT_IOC_ENABLE, ...)
> 
>    The event 'cycles' is enabled and active now. However, the event's
>    ring buffer, which stores the samples generated by the hardware, is
>    not allocated yet.
> 
> 2. The event's fd is mmap()ed to create the ring buffer:
> 
>    record__open()
>    +-> record__mmap()
>        +-> record__mmap_evlist()
> 	   +-> evlist__mmap_ex()
> 	       +-> perf_evlist__mmap_ops()
> 	           +-> mmap_per_cpu()
> 	               +-> mmap_per_evsel()
> 	                   +-> mmap__mmap()
> 	                       +-> perf_mmap__mmap()
> 	                           +-> mmap()
> 
>    This allocates the ring buffer for the event 'cycles'. Inside the
>    kernel, the mmap() call is handled by:
> 
>    perf_mmap(): kernel function to create the event's ring
>    |            buffer to save the sampled data.
>    |
>    +-> ring_buffer_attach(): Allocates memory for ring buffer.
>        |        The PMU has an auxiliary data setup function, so the
>        |        has_aux(event) condition is true and the PMU's
>        |        stop() callback is called to stop sampling. It is
>        |        not restarted:
>        |
>        |        if (has_aux(event))
>        |                perf_event_stop(event, 0);
>        |
>        +-> cpumsf_pmu_stop():
> 
>    Hardware sampling is stopped. No samples are generated and saved
>    anymore.
> 
> 3. After the event 'cycles' has been mapped, the event is enabled a
>    second time in:
> 
>    __cmd_record()
>    +-> evlist__enable()
>        +-> __evlist__enable()
> 	   +-> evsel__enable_cpu()
> 	       +-> perf_evsel__enable_cpu()
> 	           +-> perf_evsel__run_ioctl()
> 	               +-> perf_evsel__ioctl()
> 	                   +-> __GI___ioctl(., PERF_EVENT_IOC_ENABLE, .)
> 
>    The second
> 
>       ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
> 
>    is just a NOP in this case. The first invocation in (1.) sets the
>    event::state to PERF_EVENT_STATE_ACTIVE. The kernel functions
> 
>    perf_ioctl()
>    +-> _perf_ioctl()
>        +-> _perf_event_enable()
>            +-> __perf_event_enable()
> 
>    return immediately because event::state is already set to
>    PERF_EVENT_STATE_ACTIVE.
> 
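> In plain syscall terms, the ordering produced by steps (1.)-(3.) is
> roughly the following (simplified sketch; libbpf may attach through a
> BPF link instead of the legacy PERF_EVENT_IOC_SET_BPF path, and fd,
> attr and len are placeholders):
> 
>    fd = perf_event_open(&attr, ...);           /* event created, disabled  */
>    ioctl(fd, PERF_EVENT_IOC_SET_BPF, prog_fd); /* (1.) attach the filter   */
>    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);        /* (1.) event goes ACTIVE   */
>    mmap(NULL, len, PROT_READ | PROT_WRITE,
>         MAP_SHARED, fd, 0);                    /* (2.) has_aux() -> stop() */
>    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);        /* (3.) NOP, already ACTIVE */
> 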
> This happens on s390 because the event 'cycles' supports saving
> auxiliary data: the PMU callbacks setup_aux() and free_aux() are
> defined. Without both callback functions, cpumsf_pmu_stop() is not
> invoked and sampling continues.
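> 
> For reference, has_aux() reduces to a check along these lines
> (paraphrased from the kernel's internal helper, not quoted verbatim):
> 
>    /* true when the PMU provides the AUX area setup callback */
>    static inline bool has_aux(struct perf_event *event)
>    {
>            return event->pmu->setup_aux;
>    }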
> 
> To remedy this, remove the first invocation of
> 
>    ioctl(..., PERF_EVENT_IOC_ENABLE, ...)
> 
> in step (1): create the event in step (1) and enable it in step (3),
> after the ring buffer has been mapped.
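> 
> With the new dont_enable option the ordering becomes, again in
> simplified form (fd, attr and len are placeholders as above):
> 
>    fd = perf_event_open(&attr, ...);           /* event created, disabled */
>    ioctl(fd, PERF_EVENT_IOC_SET_BPF, prog_fd); /* (1.) attach only        */
>    mmap(NULL, len, PROT_READ | PROT_WRITE,
>         MAP_SHARED, fd, 0);                    /* (2.) ring buffer ready  */
>    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);        /* (3.) start sampling     */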
> 
> Output after:
> 
>  # ./perf record -aB --synth=no -u 0 -- ./perf test -w thloop 2
>  [ perf record: Woken up 3 times to write data ]
>  [ perf record: Captured and wrote 0.876 MB perf.data ]
>  # ./perf  report --stats | grep SAMPLE
>               SAMPLE events:      16200  (99.5%)
>               SAMPLE events:      16200
>  #
> 
> The software event succeeded both before and after the patch:
> 
>  # ./perf record -e cpu-clock -aB --synth=no -u 0 -- \
> 					  ./perf test -w thloop 2
>  [ perf record: Woken up 7 times to write data ]
>  [ perf record: Captured and wrote 2.870 MB perf.data ]
>  # ./perf  report --stats | grep SAMPLE
>               SAMPLE events:      53506  (99.8%)
>               SAMPLE events:      53506
>  #
> 
> Fixes: b4c658d4d63d61 ("perf target: Remove uid from target")
> Suggested-by: Jiri Olsa <jolsa@...nel.org>
> Tested-by: Thomas Richter <tmricht@...ux.ibm.com>
> Co-developed-by: Thomas Richter <tmricht@...ux.ibm.com>
> Signed-off-by: Thomas Richter <tmricht@...ux.ibm.com>
> Signed-off-by: Ilya Leoshkevich <iii@...ux.ibm.com>

Acked-by: Namhyung Kim <namhyung@...nel.org>

Thanks,
Namhyung

> ---
>  tools/perf/util/bpf-filter.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/perf/util/bpf-filter.c b/tools/perf/util/bpf-filter.c
> index d0e013eeb0f7..a0b11f35395f 100644
> --- a/tools/perf/util/bpf-filter.c
> +++ b/tools/perf/util/bpf-filter.c
> @@ -451,6 +451,8 @@ int perf_bpf_filter__prepare(struct evsel *evsel, struct target *target)
>  	struct bpf_link *link;
>  	struct perf_bpf_filter_entry *entry;
>  	bool needs_idx_hash = !target__has_cpu(target);
> +	DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts,
> +			    .dont_enable = true);
>  
>  	entry = calloc(MAX_FILTERS, sizeof(*entry));
>  	if (entry == NULL)
> @@ -522,7 +524,8 @@ int perf_bpf_filter__prepare(struct evsel *evsel, struct target *target)
>  	prog = skel->progs.perf_sample_filter;
>  	for (x = 0; x < xyarray__max_x(evsel->core.fd); x++) {
>  		for (y = 0; y < xyarray__max_y(evsel->core.fd); y++) {
> -			link = bpf_program__attach_perf_event(prog, FD(evsel, x, y));
> +			link = bpf_program__attach_perf_event_opts(prog, FD(evsel, x, y),
> +								   &pe_opts);
>  			if (IS_ERR(link)) {
>  				pr_err("Failed to attach perf sample-filter program\n");
>  				ret = PTR_ERR(link);
> -- 
> 2.50.1
> 
