Message-ID: <618a8e5d-e5e9-4565-beb8-96194fe0f9c9@arm.com>
Date: Thu, 8 May 2025 16:16:34 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Yabin Cui <yabinc@...gle.com>, Suzuki K Poulose <suzuki.poulose@....com>,
Mike Leach <mike.leach@...aro.org>, James Clark <james.clark@...aro.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Namhyung Kim <namhyung@...nel.org>, Mark Rutland <mark.rutland@....com>,
Jiri Olsa <jolsa@...nel.org>, Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Liang Kan <kan.liang@...ux.intel.com>, Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>
Cc: coresight@...ts.linaro.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org
Subject: Re: [PATCH v4] perf: Allocate non-contiguous AUX pages by default
On 5/7/25 23:43, Yabin Cui wrote:
> perf always allocates contiguous AUX pages based on aux_watermark.
> However, this contiguous allocation doesn't benefit all PMUs. For
> instance, ARM SPE and TRBE operate with virtual pages, and Coresight
> ETR allocates a separate buffer. For these PMUs, allocating contiguous
> AUX pages unnecessarily exacerbates memory fragmentation. On
> long-running devices, that fragmentation can make the required
> high-order allocations fail, preventing these PMUs from being used at
> all.
>
> This patch makes the perf AUX allocator memory-friendly by default,
> by allocating non-contiguous pages. For PMUs requiring contiguous
> pages (Intel BTS and some Intel PT), the existing
> PERF_PMU_CAP_AUX_NO_SG capability can be used. For PMUs that don't
> require but can benefit from contiguous pages (some Intel PT), a new
> capability, PERF_PMU_CAP_AUX_PREFER_LARGE, is added to maintain their
> existing behavior.
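
The capability split reads well. For anyone mapping this onto their own
PMU, my understanding of the intended driver-side usage is roughly the
sketch below (foo_pmu is hypothetical, not from this patch):

	/* Works fine with scattered order-0 AUX pages: set neither flag */
	foo_pmu.capabilities = PERF_PMU_CAP_ITRACE;

	/* Benefits from, but does not require, contiguous chunks */
	foo_pmu.capabilities |= PERF_PMU_CAP_AUX_PREFER_LARGE;

	/* Hardware cannot scatter-gather at all (implies PREFER_LARGE) */
	foo_pmu.capabilities |= PERF_PMU_CAP_AUX_NO_SG;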
>
> Signed-off-by: Yabin Cui <yabinc@...gle.com>
> ---
> Changes since v3:
> Add comments and a local variable to explain max_order value
> changes in rb_alloc_aux().
>
> Changes since v2:
> Let NO_SG imply PREFER_LARGE. So PMUs don't need to set both flags.
> Then the only place needing PREFER_LARGE is intel/pt.c.
>
> Changes since v1:
> In v1, the default was to prefer contiguous pages, with a flag to
> request non-contiguous allocation. In v2, the default is to allocate
> non-contiguous pages, with a flag to prefer contiguous pages.
>
> v1 patchset:
> perf,coresight: Reduce fragmentation with non-contiguous AUX pages for
> cs_etm
>
> arch/x86/events/intel/pt.c | 2 ++
> include/linux/perf_event.h | 1 +
> kernel/events/ring_buffer.c | 33 ++++++++++++++++++++++++---------
> 3 files changed, 27 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
> index fa37565f6418..25ead919fc48 100644
> --- a/arch/x86/events/intel/pt.c
> +++ b/arch/x86/events/intel/pt.c
> @@ -1863,6 +1863,8 @@ static __init int pt_init(void)
>
>  	if (!intel_pt_validate_hw_cap(PT_CAP_topa_multiple_entries))
>  		pt_pmu.pmu.capabilities = PERF_PMU_CAP_AUX_NO_SG;
> +	else
> +		pt_pmu.pmu.capabilities = PERF_PMU_CAP_AUX_PREFER_LARGE;
>
>  	pt_pmu.pmu.capabilities |= PERF_PMU_CAP_EXCLUSIVE |
>  				   PERF_PMU_CAP_ITRACE |
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 0069ba6866a4..56d77348c511 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -301,6 +301,7 @@ struct perf_event_pmu_context;
>  #define PERF_PMU_CAP_AUX_OUTPUT		0x0080
>  #define PERF_PMU_CAP_EXTENDED_HW_TYPE		0x0100
>  #define PERF_PMU_CAP_AUX_PAUSE			0x0200
> +#define PERF_PMU_CAP_AUX_PREFER_LARGE		0x0400
>
> /**
> * pmu::scope
> diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
> index 5130b119d0ae..69c90ea1b79a 100644
> --- a/kernel/events/ring_buffer.c
> +++ b/kernel/events/ring_buffer.c
> @@ -679,7 +679,19 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
>  {
>  	bool overwrite = !(flags & RING_BUFFER_WRITABLE);
>  	int node = (event->cpu == -1) ? -1 : cpu_to_node(event->cpu);
> -	int ret = -ENOMEM, max_order;
> +	/*
> +	 * True if the PMU needs a contiguous AUX buffer (CAP_AUX_NO_SG) or
> +	 * prefers large contiguous pages (CAP_AUX_PREFER_LARGE).
> +	 */
> +	bool use_contiguous_pages = event->pmu->capabilities & (
> +		PERF_PMU_CAP_AUX_NO_SG | PERF_PMU_CAP_AUX_PREFER_LARGE);
> +	/*
> +	 * Initialize max_order to 0 for page allocation. This allocates single
> +	 * pages to minimize memory fragmentation. This is overriden if
Small nit typo in the comment above -- s/overriden/overridden
> +	 * use_contiguous_pages is true.
> +	 */
> +	int max_order = 0;
> +	int ret = -ENOMEM;
>
>  	if (!has_aux(event))
>  		return -EOPNOTSUPP;
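
To spell out the default path: with max_order left at 0, the chunk
order picked further down in this function (roughly
order = min(max_order, ilog2(nr_pages - rb->aux_nr_pages))) is always
0, so a PMU with neither capability gets a fully non-contiguous AUX
buffer, e.g.:

	/* default PMU, 512-page (2 MiB with 4 KiB pages) AUX buffer */
	max_order = 0;	/* -> 512 separate order-0 allocations */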
> @@ -689,8 +701,8 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
>
>  	if (!overwrite) {
>  		/*
> -		 * Watermark defaults to half the buffer, and so does the
> -		 * max_order, to aid PMU drivers in double buffering.
> +		 * Watermark defaults to half the buffer, to aid PMU drivers
> +		 * in double buffering.
>  		 */
>  		if (!watermark)
>  			watermark = min_t(unsigned long,
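
A worked example of the default watermark, in case it helps: assuming
PAGE_SHIFT == 12, a 512-page (2 MiB) buffer gives

	watermark = nr_pages << (PAGE_SHIFT - 1)
	          = 512 << 11 = 1 MiB	/* half of the 2 MiB buffer */

which matches the double-buffering comment above.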
> @@ -698,16 +710,19 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
>  				  (unsigned long)nr_pages << (PAGE_SHIFT - 1));
>
>  		/*
> -		 * Use aux_watermark as the basis for chunking to
> -		 * help PMU drivers honor the watermark.
> +		 * If using contiguous pages, use aux_watermark as the basis
> +		 * for chunking to help PMU drivers honor the watermark.
>  		 */
> -		max_order = get_order(watermark);
> +		if (use_contiguous_pages)
> +			max_order = get_order(watermark);
>  	} else {
>  		/*
> -		 * We need to start with the max_order that fits in nr_pages,
> -		 * not the other way around, hence ilog2() and not get_order.
> +		 * If using contiguous pages, we need to start with the
> +		 * max_order that fits in nr_pages, not the other way around,
> +		 * hence ilog2() and not get_order.
>  		 */
> -		max_order = ilog2(nr_pages);
> +		if (use_contiguous_pages)
> +			max_order = ilog2(nr_pages);
>  		watermark = 0;
>  	}
>
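To make the two branches concrete (assuming 4 KiB pages; my numbers,
not from the patch):

	/* !overwrite, PREFER_LARGE, watermark = 1 MiB */
	max_order = get_order(SZ_1M);	/* = 8: try 256-page chunks first */

	/* overwrite, PREFER_LARGE, nr_pages = 512 */
	max_order = ilog2(512);		/* = 9: try the whole buffer first */

PMUs with neither capability keep max_order = 0 in both modes, which is
exactly the fragmentation-friendly default the changelog describes.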
Reviewed-by: Anshuman Khandual <anshuman.khandual@....com>