Message-ID: <CANgQc9jF_wimvUsC0ck7o5w3oEwJ5g7DS0DRQqNm57X-WdGz=Q@mail.gmail.com>
Date: Thu, 16 Mar 2023 17:32:53 +0100
From: Timo Beckers <timo@...line.eu>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
"open list:PERFORMANCE EVENTS SUBSYSTEM"
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] perf core: apply calculated padding to PERF_SAMPLE_RAW output
On Tue, 14 Mar 2023 at 20:27, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Tue, May 19, 2020 at 03:26:16PM +0200, Timo Beckers wrote:
> > Zero the amount of padding bytes determined in perf_prepare_sample().
> > This prevents garbage being read from the ring buffer after it has wrapped
> > the page boundary at least once.
>
> But it's user garbage, right?
Hey Peter, correct. Not a security issue, but rather a usability one, IMO.
It would be nice if the receiver could verify that the trailing bytes
are all zeroes after interpreting the payload. (I deal with Go interop;
C<->Go struct alignment behaviour differs subtly, so this helps debugging.)
I know the ship has sailed and it's been like this for a long time, but getting
it fixed in a non-invasive way would be neat if the performance penalty is
not too steep. I think Jiri was playing around with some benchmarks.
> And they should be unconsumed anyway.
Well, not quite. perf_event_sample.size contains the size of .data including
padding, so the reader always needs to copy out the full event, which
potentially includes garbage. .data is completely opaque from a generic
perf reader POV, so it can't automatically trim it or choose not to read it.
Haven't looked at the kernel side in a while, but maybe setting .size to
the length of the input on the BPF side would be a better solution? Then no
zeroing would be needed. I assume there's no strong need to increase
.size in 8-byte steps, as I currently see values like 4, 12, 20, 28, etc.
Please correct me if I'm wrong.
Thanks,
Timo
>
> > Signed-off-by: Timo Beckers <timo@...line.eu>
> > ---
> > kernel/events/core.c | 12 ++++++++++--
> > 1 file changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index 80cf996a7f19..d4e0b003ece0 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -6807,8 +6807,16 @@ void perf_output_sample(struct perf_output_handle *handle,
> > break;
> > frag = frag->next;
> > } while (1);
> > - if (frag->pad)
> > - __output_skip(handle, NULL, frag->pad);
> > + /*
> > + * The padding value is determined in
> > + * perf_prepare_sample() and is not
> > + * expected to exceed u64.
> > + */
> > + if (frag->pad) {
> > + u64 zero = 0;
> > +
> > + __output_copy(handle, &zero, frag->pad);
> > + }
> > } else {
> > struct {
> > u32 size;
> > --
> > 2.26.2
> >