Message-Id: <20220902181934.1082647-1-namhyung@kernel.org>
Date:   Fri,  2 Sep 2022 11:19:34 -0700
From:   Namhyung Kim <namhyung@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Jiri Olsa <jolsa@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Stephane Eranian <eranian@...gle.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Ian Rogers <irogers@...gle.com>
Subject: [PATCH] perf/core: Increase lost_samples count only for samples

The event->lost_samples count is intended to count lost sample records,
but it is currently also incremented when non-sample records such as
PERF_RECORD_MMAP are dropped.  This can be a problem when a sampling
event also tracks those side-band events.
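
For illustration (not part of this patch), here is a minimal user-space
sketch of the affected setup: a sampling event that also requests
side-band MMAP records and reads the lost count via the PERF_FORMAT_LOST
read format introduced by the commit referenced below.  Without this fix,
dropped PERF_RECORD_MMAP records inflate that value even though no sample
was lost.  Field values and error handling are illustrative only:

  #include <linux/perf_event.h>   /* needs uapi headers that define PERF_FORMAT_LOST */
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          struct perf_event_attr attr = {
                  .type = PERF_TYPE_HARDWARE,
                  .config = PERF_COUNT_HW_CPU_CYCLES,
                  .size = sizeof(attr),
                  .sample_period = 100000,
                  .sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TID,
                  .read_format = PERF_FORMAT_LOST,
                  .mmap = 1,        /* also emit side-band PERF_RECORD_MMAP records */
                  .disabled = 1,
          };
          uint64_t buf[2];          /* { value, lost } with PERF_FORMAT_LOST alone */
          int fd;

          fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
          if (fd < 0)
                  return 1;

          /* ... mmap() a ring buffer, enable the event, run a workload ... */

          if (read(fd, buf, sizeof(buf)) == sizeof(buf))
                  printf("count=%llu lost_samples=%llu\n",
                         (unsigned long long)buf[0],
                         (unsigned long long)buf[1]);
          close(fd);
          return 0;
  }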

Since the overflow handler for user events only calls perf_output_begin_
{for,back}ward() before writing to the ring buffer, we can pass an
additional flag to __perf_output_begin() to indicate whether it is
writing a sample record, and bump lost_samples only in that case.
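
Note that the in-buffer PERF_RECORD_LOST notification, which is driven by
rb->lost and still incremented unconditionally in the hunks below, is not
affected; only the event->lost_samples counter exposed through
PERF_FORMAT_LOST becomes sample-only.  A rough consumer-side sketch (not
from this patch; wrap-around of records at the end of the data area is
ignored for brevity) that tallies the in-buffer notifications, for
contrast:

  #include <linux/perf_event.h>
  #include <stdint.h>

  /* Walk [tail, head) of an mmap'ed perf ring buffer and sum up the
   * counts carried by PERF_RECORD_LOST records.  'mask' is data_size - 1. */
  static uint64_t tally_lost_records(const void *base, uint64_t head,
                                     uint64_t tail, uint64_t mask)
  {
          uint64_t lost = 0;

          while (tail < head) {
                  const struct perf_event_header *hdr =
                          (const void *)((const char *)base + (tail & mask));

                  if (hdr->type == PERF_RECORD_LOST) {
                          /* record body: { u64 id; u64 lost; } */
                          const uint64_t *body = (const uint64_t *)(hdr + 1);

                          lost += body[1];
                  }
                  tail += hdr->size;
          }
          return lost;
  }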

Fixes: 119a784c8127 ("perf/core: Add a new read format to get a number of lost samples")
Signed-off-by: Namhyung Kim <namhyung@...nel.org>
---
 kernel/events/ring_buffer.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 726132039c38..5f38ee4edbdb 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -149,7 +149,7 @@ static __always_inline int
 __perf_output_begin(struct perf_output_handle *handle,
 		    struct perf_sample_data *data,
 		    struct perf_event *event, unsigned int size,
-		    bool backward)
+		    bool backward, bool sample)
 {
 	struct perf_buffer *rb;
 	unsigned long tail, offset, head;
@@ -174,7 +174,8 @@ __perf_output_begin(struct perf_output_handle *handle,
 	if (unlikely(rb->paused)) {
 		if (rb->nr_pages) {
 			local_inc(&rb->lost);
-			atomic64_inc(&event->lost_samples);
+			if (sample)
+				atomic64_inc(&event->lost_samples);
 		}
 		goto out;
 	}
@@ -256,7 +257,8 @@ __perf_output_begin(struct perf_output_handle *handle,
 
 fail:
 	local_inc(&rb->lost);
-	atomic64_inc(&event->lost_samples);
+	if (sample)
+		atomic64_inc(&event->lost_samples);
 	perf_output_put_handle(handle);
 out:
 	rcu_read_unlock();
@@ -268,14 +270,14 @@ int perf_output_begin_forward(struct perf_output_handle *handle,
 			      struct perf_sample_data *data,
 			      struct perf_event *event, unsigned int size)
 {
-	return __perf_output_begin(handle, data, event, size, false);
+	return __perf_output_begin(handle, data, event, size, false, true);
 }
 
 int perf_output_begin_backward(struct perf_output_handle *handle,
 			       struct perf_sample_data *data,
 			       struct perf_event *event, unsigned int size)
 {
-	return __perf_output_begin(handle, data, event, size, true);
+	return __perf_output_begin(handle, data, event, size, true, true);
 }
 
 int perf_output_begin(struct perf_output_handle *handle,
@@ -284,7 +286,7 @@ int perf_output_begin(struct perf_output_handle *handle,
 {
 
 	return __perf_output_begin(handle, data, event, size,
-				   unlikely(is_write_backward(event)));
+				   unlikely(is_write_backward(event)), false);
 }
 
 unsigned int perf_output_copy(struct perf_output_handle *handle,
-- 
2.37.2.789.g6183377224-goog
