Message-ID: <tip-45c815f06b80031659c63d7b93e580015d6024dd@git.kernel.org>
Date: Thu, 21 Jan 2016 10:54:25 -0800
From: tip-bot for Alexander Shishkin <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: acme@...radead.org, dsahern@...il.com,
alexander.shishkin@...ux.intel.com, hpa@...or.com,
peterz@...radead.org, jolsa@...hat.com, tglx@...utronix.de,
torvalds@...ux-foundation.org, mingo@...nel.org, acme@...hat.com,
linux-kernel@...r.kernel.org, markus.t.metzger@...el.com,
eranian@...gle.com, namhyung@...nel.org, vincent.weaver@...ne.edu
Subject: [tip:perf/urgent] perf: Synchronously free aux pages in case of allocation failure
Commit-ID: 45c815f06b80031659c63d7b93e580015d6024dd
Gitweb: http://git.kernel.org/tip/45c815f06b80031659c63d7b93e580015d6024dd
Author: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
AuthorDate: Tue, 19 Jan 2016 17:14:29 +0200
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Thu, 21 Jan 2016 18:54:27 +0100
perf: Synchronously free aux pages in case of allocation failure
We are currently using asynchronous deallocation in the error path of
the AUX mmap code, which is unnecessary and also presents a problem
for users that wish to probe for the biggest possible buffer size they
can get: they'll get -EINVAL on all subsequent attempts to allocate a
smaller buffer before the asynchronous deallocation callback frees up
the pages from the previous unsuccessful attempt.
Currently, gdb does that for allocating AUX buffers for Intel PT traces.
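A minimal userspace sketch of that probing pattern (illustrative only:
the fd setup, the starting size and the helper name are assumptions,
not gdb's actual code):

/*
 * Probe for the largest AUX buffer the kernel will give us, halving
 * the size on failure.  'fd' is assumed to come from perf_event_open()
 * for an AUX-capable PMU, and 'up' to point at the already-mmap()ed
 * base ring buffer of 'rb_bytes' bytes.
 */
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/perf_event.h>

static void *probe_aux_buffer(int fd, struct perf_event_mmap_page *up,
			      size_t rb_bytes, size_t *aux_bytes)
{
	size_t size;

	for (size = 64UL << 20; size >= (size_t)getpagesize(); size /= 2) {
		void *aux;

		/* Describe the AUX area in the user page, then map it. */
		up->aux_offset = rb_bytes;
		up->aux_size   = size;

		aux = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, rb_bytes);
		if (aux != MAP_FAILED) {
			*aux_bytes = size;
			return aux;
		}
		/*
		 * Without this patch, the failed attempt's pages may still
		 * be queued for asynchronous freeing here, so the next,
		 * smaller attempt can spuriously fail with -EINVAL.
		 */
	}

	return NULL;
}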
More specifically, the overwrite mode of AUX PMUs that don't support
hardware scatter-gather (some implementations of Intel PT, for
instance) is limited to a single contiguous high-order allocation for
its buffer, and there is no way of knowing its size without trying.
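To make the "no way of knowing without trying" point concrete, here is
a minimal kernel-side sketch (alloc_contig_aux() is a made-up helper
for illustration, not the actual rb_alloc_aux() path):

/*
 * Without hardware scatter-gather the buffer must be physically
 * contiguous, i.e. one high-order allocation that either succeeds
 * wholesale or fails -- there is no falling back to smaller pieces.
 */
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *alloc_contig_aux(int nr_pages)
{
	int order = get_order((unsigned long)nr_pages * PAGE_SIZE);

	if (order >= MAX_ORDER)
		return NULL;	/* bigger than the page allocator can give */

	/* All or nothing: the only way to know the limit is to try. */
	return alloc_pages(GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY, order);
}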
This patch changes the error path freeing to be synchronous, as there
won't be any contenders for the AUX pages at that point.
Reported-by: Markus Metzger <markus.t.metzger@...el.com>
Signed-off-by: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Arnaldo Carvalho de Melo <acme@...radead.org>
Cc: Arnaldo Carvalho de Melo <acme@...hat.com>
Cc: David Ahern <dsahern@...il.com>
Cc: Jiri Olsa <jolsa@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Namhyung Kim <namhyung@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Stephane Eranian <eranian@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Vince Weaver <vincent.weaver@...ne.edu>
Cc: vince@...ter.net
Link: http://lkml.kernel.org/r/1453216469-9509-1-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/events/ring_buffer.c | 40 ++++++++++++++++++++--------------------
1 file changed, 20 insertions(+), 20 deletions(-)
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index adfdc05..1faad2c 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -459,6 +459,25 @@ static void rb_free_aux_page(struct ring_buffer *rb, int idx)
 	__free_page(page);
 }
 
+static void __rb_free_aux(struct ring_buffer *rb)
+{
+	int pg;
+
+	if (rb->aux_priv) {
+		rb->free_aux(rb->aux_priv);
+		rb->free_aux = NULL;
+		rb->aux_priv = NULL;
+	}
+
+	if (rb->aux_nr_pages) {
+		for (pg = 0; pg < rb->aux_nr_pages; pg++)
+			rb_free_aux_page(rb, pg);
+
+		kfree(rb->aux_pages);
+		rb->aux_nr_pages = 0;
+	}
+}
+
 int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
 		 pgoff_t pgoff, int nr_pages, long watermark, int flags)
 {
@@ -547,30 +566,11 @@ out:
 	if (!ret)
 		rb->aux_pgoff = pgoff;
 	else
-		rb_free_aux(rb);
+		__rb_free_aux(rb);
 
 	return ret;
 }
 
-static void __rb_free_aux(struct ring_buffer *rb)
-{
-	int pg;
-
-	if (rb->aux_priv) {
-		rb->free_aux(rb->aux_priv);
-		rb->free_aux = NULL;
-		rb->aux_priv = NULL;
-	}
-
-	if (rb->aux_nr_pages) {
-		for (pg = 0; pg < rb->aux_nr_pages; pg++)
-			rb_free_aux_page(rb, pg);
-
-		kfree(rb->aux_pages);
-		rb->aux_nr_pages = 0;
-	}
-}
-
 void rb_free_aux(struct ring_buffer *rb)
 {
 	if (atomic_dec_and_test(&rb->aux_refcount))
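For reference, the hunk above ends inside rb_free_aux(); the complete
function after this patch (reconstructed from the kernel source, not
shown in full by the diff) is simply:

void rb_free_aux(struct ring_buffer *rb)
{
	if (atomic_dec_and_test(&rb->aux_refcount))
		__rb_free_aux(rb);
}

That is, the normal path still defers freeing until the last
aux_refcount holder is gone, while the rb_alloc_aux() error path now
calls __rb_free_aux() directly: the mmap() failed before any other
reference holder could exist, so nothing can contend with the
synchronous free.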