Message-ID: <CAM9d7ciLL0Hd1qB1jmfJzWms4d4soo9CXu89uXxm=jF7gUWPEw@mail.gmail.com>
Date: Thu, 15 Sep 2022 09:41:18 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Jiri Olsa <jolsa@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>,
Stephane Eranian <eranian@...gle.com>,
Andi Kleen <ak@...ux.intel.com>,
Ian Rogers <irogers@...gle.com>
Subject: Re: [PATCH] perf/core: Increase lost_samples count only for samples

Hi Peter,

On Fri, Sep 2, 2022 at 11:19 AM Namhyung Kim <namhyung@...nel.org> wrote:
>
> The event->lost_samples count is intended to count lost sample records,
> but it is also incremented for non-sample records like PERF_RECORD_MMAP,
> etc.  This can be a problem when a sampling event tracks those side-band
> events together.
>
> As the overflow handler for user events only calls perf_output_begin_
> {for,back}ward() before writing to the ring buffer, we can pass an
> additional flag to indicate that it's writing a sample record.

Could you please take a look?

Thanks,
Namhyung

>
> Fixes: 119a784c8127 ("perf/core: Add a new read format to get a number of lost samples")
> Signed-off-by: Namhyung Kim <namhyung@...nel.org>
> ---
> kernel/events/ring_buffer.c | 14 ++++++++------
> 1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
> index 726132039c38..5f38ee4edbdb 100644
> --- a/kernel/events/ring_buffer.c
> +++ b/kernel/events/ring_buffer.c
> @@ -149,7 +149,7 @@ static __always_inline int
>  __perf_output_begin(struct perf_output_handle *handle,
>  		    struct perf_sample_data *data,
>  		    struct perf_event *event, unsigned int size,
> -		    bool backward)
> +		    bool backward, bool sample)
>  {
>  	struct perf_buffer *rb;
>  	unsigned long tail, offset, head;
> @@ -174,7 +174,8 @@ __perf_output_begin(struct perf_output_handle *handle,
>  	if (unlikely(rb->paused)) {
>  		if (rb->nr_pages) {
>  			local_inc(&rb->lost);
> -			atomic64_inc(&event->lost_samples);
> +			if (sample)
> +				atomic64_inc(&event->lost_samples);
>  		}
>  		goto out;
>  	}
> @@ -256,7 +257,8 @@ __perf_output_begin(struct perf_output_handle *handle,
>
>  fail:
>  	local_inc(&rb->lost);
> -	atomic64_inc(&event->lost_samples);
> +	if (sample)
> +		atomic64_inc(&event->lost_samples);
>  	perf_output_put_handle(handle);
>  out:
>  	rcu_read_unlock();
> @@ -268,14 +270,14 @@ int perf_output_begin_forward(struct perf_output_handle *handle,
>  			      struct perf_sample_data *data,
>  			      struct perf_event *event, unsigned int size)
>  {
> -	return __perf_output_begin(handle, data, event, size, false);
> +	return __perf_output_begin(handle, data, event, size, false, true);
>  }
>
>  int perf_output_begin_backward(struct perf_output_handle *handle,
>  			       struct perf_sample_data *data,
>  			       struct perf_event *event, unsigned int size)
>  {
> -	return __perf_output_begin(handle, data, event, size, true);
> +	return __perf_output_begin(handle, data, event, size, true, true);
>  }
>
>  int perf_output_begin(struct perf_output_handle *handle,
> @@ -284,7 +286,7 @@ int perf_output_begin(struct perf_output_handle *handle,
>  {
>
>  	return __perf_output_begin(handle, data, event, size,
> -				   unlikely(is_write_backward(event)));
> +				   unlikely(is_write_backward(event)), false);
>  }
>
>  unsigned int perf_output_copy(struct perf_output_handle *handle,
> --
> 2.37.2.789.g6183377224-goog
>
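
For reference, below is a minimal, untested userspace sketch of the scenario
the commit message describes: a sampling event that also enables mmap/comm/task
side-band records and reads the lost count through the PERF_FORMAT_LOST read
format added by the Fixes: commit.  The event type, sample period and ring
buffer size are arbitrary illustration choices, and PERF_FORMAT_LOST is defined
locally in case older UAPI headers lack it.

/*
 * Sketch only, not part of the patch: open a cycles sampling event that
 * also tracks side-band records, then read back the lost count.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

#ifndef PERF_FORMAT_LOST		/* pre-v6.0 UAPI headers */
#define PERF_FORMAT_LOST (1U << 4)
#endif

int main(void)
{
	struct perf_event_attr attr;
	struct { uint64_t value, lost; } rf;	/* layout with PERF_FORMAT_LOST only */
	long page = sysconf(_SC_PAGESIZE);
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TID;
	attr.read_format = PERF_FORMAT_LOST;
	attr.disabled = 1;
	attr.exclude_kernel = 1;
	/* side-band records, written through perf_output_begin() */
	attr.mmap = 1;
	attr.comm = 1;
	attr.task = 1;

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	/* small ring buffer (1 meta page + 2 data pages) so drops are easy to hit */
	if (mmap(NULL, 3 * page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0) == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	/* ... run the workload of interest here ... */
	sleep(1);
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	if (read(fd, &rf, sizeof(rf)) == sizeof(rf))
		printf("count=%llu lost=%llu\n",
		       (unsigned long long)rf.value,
		       (unsigned long long)rf.lost);

	close(fd);
	return 0;
}

With only PERF_FORMAT_LOST set, the read() layout for a non-group event is
{ value, lost }, so the lost field reflects event->lost_samples; with this
patch applied it should count only dropped sample records, not dropped
side-band records as well.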