Message-ID: <CADDJ8CVjOe4E-2uAP=vXdSFrn3+8cj4s6yDO-qqS2S0E55kjJw@mail.gmail.com>
Date: Thu, 31 Mar 2022 22:47:24 -0700
From: Denis Nikitin <denik@...omium.org>
To: James Clark <james.clark@....com>
Cc: acme@...nel.org, linux-perf-users@...r.kernel.org,
jolsa@...nel.org, alexey.budankov@...ux.intel.com,
alexander.shishkin@...ux.intel.com, namhyung@...nel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf session: Remap buf if there is no space for event

Hi James,

Thanks for your review.

On Thu, Mar 31, 2022 at 7:20 AM James Clark <james.clark@....com> wrote:
>
>
>
> On 30/03/2022 04:11, Denis Nikitin wrote:
> > If a perf event doesn't fit into the remaining buffer space, return NULL
> > so that the buffer is remapped and the event is fetched again.
> > Keep the logic to error out on inadequate input from fuzzing.
> >
> > This fixes perf failing on ChromeOS (with 32b userspace):
> >
> > $ perf report -v -i perf.data
> > ...
> > prefetch_event: head=0x1fffff8 event->header_size=0x30, mmap_size=0x2000000: fuzzed or compressed perf.data?
> > Error:
> > failed to process sample
> >
> > Fixes: 57fc032ad643 ("perf session: Avoid infinite loop when seeing invalid header.size")
> > Signed-off-by: Denis Nikitin <denik@...omium.org>
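
(For context, the shape of the change is roughly the following -- a simplified
sketch of the tail of prefetch_event(), not the exact hunk, with names written
from memory:)

        event_size = event->header.size;
        if (head + event_size <= mmap_size)
                return event;   /* fits in the current map */

        /* We're not consuming the event here, so undo the byte swap. */
        if (needs_swap)
                perf_event_header__bswap(&event->header);

        /*
         * reader__mmap() remaps starting at the page-aligned offset below
         * 'head', so only head % page_size of the current offset carries
         * over into the new map.  If the event fits there, return NULL so
         * the caller remaps the buffer and fetches the event again.
         */
        if (event_size <= mmap_size - head % page_size)
                return NULL;

        /* Otherwise the size can't be trusted: fuzzed or compressed data. */
        pr_debug("...fuzzed or compressed perf.data?\n");
        return error;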
>
> Hi Denis,
>
> I tested this and it does fix the issue with a 32-bit build. One concern is that the calculation to
> see whether the event will fit in the next map depends on the implementation of reader__mmap(). I think it
> would be possible for that to change slightly and then you could still get an infinite loop.
>
> But I can't really see a better way to do it, and it's unlikely that reader__mmap() would be modified
> to map data in a way that wastes part of the buffer, so it's probably fine.
>
> Maybe you could extract a function to calculate where the new offset would be in the buffer and share
> it between here and reader__mmap(). That would also make it more obvious what the 'head % page_size'
> bit is for.
Good point. I will send a separate patch to handle this.
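Something like the following is roughly what I have in mind (a rough sketch
only; the helper name and exact placement are illustrative and nothing is
settled yet):

        /*
         * Offset of 'head' within the remapped buffer: reader__mmap()
         * remaps the file at the page-aligned offset below 'head', so
         * only the sub-page remainder carries over into the new map.
         */
        static u64 remapped_head(u64 head, size_t page_size)
        {
                return head % page_size;
        }

reader__mmap() would use it to compute the new head after remapping, and
prefetch_event() would treat "event_size <= mmap_size - remapped_head(head,
page_size)" as "remap the buffer and fetch the event again". That should also
document what the 'head % page_size' bit means.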
Thanks,
Denis
>
> Either way:
>
> Reviewed-by: James Clark <james.clark@....com>
>