Message-ID: <161858531260.29796.6094672207320806626.tip-bot2@tip-bot2>
Date: Fri, 16 Apr 2021 15:01:52 -0000
From: "tip-bot2 for Alexander Shishkin" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: perf/core] perf: Cap allocation order at aux_watermark
The following commit has been merged into the perf/core branch of tip:
Commit-ID: d68e6799a5c87f415d3bfa0dea49caee28ab00d1
Gitweb: https://git.kernel.org/tip/d68e6799a5c87f415d3bfa0dea49caee28ab00d1
Author: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
AuthorDate: Wed, 14 Apr 2021 18:49:54 +03:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Fri, 16 Apr 2021 16:32:39 +02:00
perf: Cap allocation order at aux_watermark
Currently, AUX page allocation starts with chunks half the size of the total
requested AUX buffer, ignoring the attr.aux_watermark setting. This, in turn,
makes the intel_pt driver disregard the watermark as well, since it uses the
page order for its SG (ToPA) configuration.

Now, this could be fixed in the intel_pt PMU driver, but seeing as it is the
only one currently making use of high-order allocations, there is no reason
not to fix the allocator instead. This way, any other driver wishing to add
this support will not have to worry about it.
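
To make the effect concrete, here is a minimal userspace sketch; ilog2() and
get_order() are reimplemented locally for illustration, PAGE_SHIFT is assumed
to be 12 (4K pages), and the buffer and watermark sizes are made-up example
values. It only shows how the starting allocation order changes once
aux_watermark is honored:

/* sketch_order.c -- hypothetical example, not part of the kernel tree */
#include <stdio.h>

#define PAGE_SHIFT 12				/* assume 4K pages */

static int ilog2(unsigned long n)		/* floor(log2(n)), like the kernel helper */
{
	int order = -1;

	while (n) {
		n >>= 1;
		order++;
	}
	return order;
}

static int get_order(unsigned long size)	/* pages needed for 'size', as an order */
{
	return size <= (1UL << PAGE_SHIFT) ? 0 : ilog2(size - 1) - PAGE_SHIFT + 1;
}

int main(void)
{
	unsigned long nr_pages  = 1024;		/* 4M AUX buffer */
	unsigned long watermark = 64 * 1024;	/* attr.aux_watermark = 64K */

	/* before the patch: start at half the buffer, watermark ignored */
	printf("old max_order: %d\n", ilog2(nr_pages) - 1);	/* 9 -> 2M chunks */

	/* after the patch: cap the order at the watermark */
	printf("new max_order: %d\n", get_order(watermark));	/* 4 -> 64K chunks */
	return 0;
}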
Signed-off-by: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/20210414154955.49603-2-alexander.shishkin@linux.intel.com
---
kernel/events/ring_buffer.c | 34 ++++++++++++++++++----------------
1 file changed, 18 insertions(+), 16 deletions(-)
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index bd55ccc..5286871 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -674,21 +674,26 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
 	if (!has_aux(event))
 		return -EOPNOTSUPP;
 
-	/*
-	 * We need to start with the max_order that fits in nr_pages,
-	 * not the other way around, hence ilog2() and not get_order.
-	 */
-	max_order = ilog2(nr_pages);
-
-	/*
-	 * PMU requests more than one contiguous chunks of memory
-	 * for SW double buffering
-	 */
 	if (!overwrite) {
-		if (!max_order)
-			return -EINVAL;
+		/*
+		 * Watermark defaults to half the buffer, and so does the
+		 * max_order, to aid PMU drivers in double buffering.
+		 */
+		if (!watermark)
+			watermark = nr_pages << (PAGE_SHIFT - 1);
 
-		max_order--;
+		/*
+		 * Use aux_watermark as the basis for chunking to
+		 * help PMU drivers honor the watermark.
+		 */
+		max_order = get_order(watermark);
+	} else {
+		/*
+		 * We need to start with the max_order that fits in nr_pages,
+		 * not the other way around, hence ilog2() and not get_order.
+		 */
+		max_order = ilog2(nr_pages);
+		watermark = 0;
 	}
 
 	rb->aux_pages = kcalloc_node(nr_pages, sizeof(void *), GFP_KERNEL,
@@ -743,9 +748,6 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
 	rb->aux_overwrite = overwrite;
 	rb->aux_watermark = watermark;
 
-	if (!rb->aux_watermark && !rb->aux_overwrite)
-		rb->aux_watermark = nr_pages << (PAGE_SHIFT - 1);
-
 out:
 	if (!ret)
 		rb->aux_pgoff = pgoff;
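
As a rough sketch of the resulting chunking (again a userspace approximation,
not kernel code): each chunk below is taken as the largest power-of-two block
that fits both the remaining pages and max_order, which is approximately what
the allocation loop further down in rb_alloc_aux() does. Page size is assumed
to be 4K and the values are made up:

/* sketch_chunks.c -- hypothetical example, not part of the kernel tree */
#include <stdio.h>

static int ilog2(unsigned long n)		/* floor(log2(n)) */
{
	int order = -1;

	while (n) {
		n >>= 1;
		order++;
	}
	return order;
}

static void carve(unsigned long nr_pages, int max_order)
{
	unsigned long done = 0, chunks = 0, largest = 0;

	while (done < nr_pages) {
		int order = ilog2(nr_pages - done);

		if (order > max_order)
			order = max_order;
		if ((1UL << order) > largest)
			largest = 1UL << order;
		done += 1UL << order;
		chunks++;
	}
	printf("max_order %2d: %4lu chunks, largest %4lu pages\n",
	       max_order, chunks, largest);
}

int main(void)
{
	unsigned long nr_pages = 1024;		/* 4M AUX buffer, 4K pages */

	carve(nr_pages, ilog2(nr_pages) - 1);	/* old: two 2M halves */
	carve(nr_pages, 4);			/* new, 64K watermark: 64K chunks */
	return 0;
}

With the old behaviour this prints two 512-page (2M) chunks for the 4M buffer;
with max_order capped at a 64K watermark it prints 64 chunks of 16 pages,
small enough for a driver such as intel_pt to honor the watermark in its ToPA
configuration.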