Message-ID: <CAM9d7cgEpzGUyWrA0vuTKMfj6ojaeDEFCzCDzB3CjRn-TtTtuA@mail.gmail.com>
Date: Wed, 29 Nov 2023 17:25:47 -0800
From: Namhyung Kim <namhyung@...nel.org>
To: Ian Rogers <irogers@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Nick Terrell <terrelln@...com>,
Kan Liang <kan.liang@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
Kajol Jain <kjain@...ux.ibm.com>,
Athira Rajeev <atrajeev@...ux.vnet.ibm.com>,
Huacai Chen <chenhuacai@...nel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Vincent Whitchurch <vincent.whitchurch@...s.com>,
"Steinar H. Gunderson" <sesse@...gle.com>,
Liam Howlett <liam.howlett@...cle.com>,
Miguel Ojeda <ojeda@...nel.org>,
Colin Ian King <colin.i.king@...il.com>,
Dmitrii Dolgov <9erthalion6@...il.com>,
Yang Jihong <yangjihong1@...wei.com>,
Ming Wang <wangming01@...ngson.cn>,
James Clark <james.clark@....com>,
K Prateek Nayak <kprateek.nayak@....com>,
Sean Christopherson <seanjc@...gle.com>,
Leo Yan <leo.yan@...aro.org>,
Ravi Bangoria <ravi.bangoria@....com>,
German Gomez <german.gomez@....com>,
Changbin Du <changbin.du@...wei.com>,
Paolo Bonzini <pbonzini@...hat.com>, Li Dong <lidong@...o.com>,
Sandipan Das <sandipan.das@....com>,
liuwenyu <liuwenyu7@...wei.com>, linux-kernel@...r.kernel.org,
linux-perf-users@...r.kernel.org,
Guilherme Amadio <amadio@...too.org>
Subject: Re: [PATCH v5 02/50] libperf: Lazily allocate/size mmap event copy
On Mon, Nov 27, 2023 at 2:09 PM Ian Rogers <irogers@...gle.com> wrote:
>
> The event copy in the mmap provides storage for reading an
> event. Not all mmap users read events; perf record, for
> example, does not. The buffer was also statically sized to
> PERF_SAMPLE_MAX_SIZE rather than to the size given in the
> event header. Switch to a model where event_copy is
> reallocated when it is too small for the event. This makes it
> possible for the event to move, so a stored copy of the event
> pointer could become stale. All current users follow the
> pattern:
>
> while ((event = perf_mmap__read_event())) { ... }
>
> and would already be broken by the event being overwritten had
> they stored the pointer. Manual inspection and address
> sanitizer testing also show that the event pointer is not
> stored.
>
> Signed-off-by: Ian Rogers <irogers@...gle.com>
Acked-by: Namhyung Kim <namhyung@...nel.org>
Thanks,
Namhyung
> ---
> tools/lib/perf/include/internal/mmap.h | 3 ++-
> tools/lib/perf/mmap.c | 21 ++++++++++++++++++---
> 2 files changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/tools/lib/perf/include/internal/mmap.h b/tools/lib/perf/include/internal/mmap.h
> index 5a062af8e9d8..5f08cab61ece 100644
> --- a/tools/lib/perf/include/internal/mmap.h
> +++ b/tools/lib/perf/include/internal/mmap.h
> @@ -33,7 +33,8 @@ struct perf_mmap {
> bool overwrite;
> u64 flush;
> libperf_unmap_cb_t unmap_cb;
> - char event_copy[PERF_SAMPLE_MAX_SIZE] __aligned(8);
> + void *event_copy;
> + size_t event_copy_sz;
> struct perf_mmap *next;
> };
>
> diff --git a/tools/lib/perf/mmap.c b/tools/lib/perf/mmap.c
> index 2184814b37dd..c829db7bf1fa 100644
> --- a/tools/lib/perf/mmap.c
> +++ b/tools/lib/perf/mmap.c
> @@ -19,6 +19,7 @@
> void perf_mmap__init(struct perf_mmap *map, struct perf_mmap *prev,
> bool overwrite, libperf_unmap_cb_t unmap_cb)
> {
> + /* Assume fields were zero initialized. */
> map->fd = -1;
> map->overwrite = overwrite;
> map->unmap_cb = unmap_cb;
> @@ -51,13 +52,19 @@ int perf_mmap__mmap(struct perf_mmap *map, struct perf_mmap_param *mp,
>
> void perf_mmap__munmap(struct perf_mmap *map)
> {
> - if (map && map->base != NULL) {
> + if (!map)
> + return;
> +
> + free(map->event_copy);
> + map->event_copy = NULL;
> + map->event_copy_sz = 0;
> + if (map->base) {
> munmap(map->base, perf_mmap__mmap_len(map));
> map->base = NULL;
> map->fd = -1;
> refcount_set(&map->refcnt, 0);
> }
> - if (map && map->unmap_cb)
> + if (map->unmap_cb)
> map->unmap_cb(map);
> }
>
> @@ -223,9 +230,17 @@ static union perf_event *perf_mmap__read(struct perf_mmap *map,
> */
> if ((*startp & map->mask) + size != ((*startp + size) & map->mask)) {
> unsigned int offset = *startp;
> - unsigned int len = min(sizeof(*event), size), cpy;
> + unsigned int len = size, cpy;
> void *dst = map->event_copy;
>
> + if (size > map->event_copy_sz) {
> + dst = realloc(map->event_copy, size);
> + if (!dst)
> + return NULL;
> + map->event_copy = dst;
> + map->event_copy_sz = size;
> + }
> +
> do {
> cpy = min(map->mask + 1 - (offset & map->mask), len);
> memcpy(dst, &data[offset & map->mask], cpy);
> --
> 2.43.0.rc1.413.gea7ed67945-goog
>