Message-ID: <157140288949.29376.10061367480857136332.tip-bot2@tip-bot2>
Date: Fri, 18 Oct 2019 12:48:09 -0000
From: "tip-bot2 for Yunfeng Ye" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Yunfeng Ye <yeyunfeng@...wei.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, <jolsa@...hat.co>,
<acme@...nel.org>, <mingo@...hat.com>, <mark.rutland@....com>,
<namhyung@...nel.org>, <alexander.shishkin@...ux.intel.com>,
Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...en8.de>,
linux-kernel@...r.kernel.org
Subject: [tip: perf/core] perf/ring_buffer: Matching the memory allocate and
free, in rb_alloc()

The following commit has been merged into the perf/core branch of tip:

Commit-ID:     d7e78706e43107fa269fe34b1a69e653f5ec9f2c
Gitweb:        https://git.kernel.org/tip/d7e78706e43107fa269fe34b1a69e653f5ec9f2c
Author:        Yunfeng Ye <yeyunfeng@...wei.com>
AuthorDate:    Mon, 14 Oct 2019 16:15:57 +08:00
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Thu, 17 Oct 2019 21:31:55 +02:00

perf/ring_buffer: Matching the memory allocate and free, in rb_alloc()

Currently perf_mmap_alloc_page() is used to allocate memory in
rb_alloc(), but free_page() is used to free that memory on the failure
path. It's better to use the matching perf_mmap_free_page() instead.
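
The fix boils down to pairing each allocation helper with its dedicated
free helper on every exit path. A small stand-alone C sketch of that shape,
with invented mybuf_* names and plain malloc()/free() standing in for kernel
pages (so this is not the actual rb_alloc() code, just the same goto-unwind
pattern):

#include <stdio.h>
#include <stdlib.h>

#define MYBUF_PAGE_SIZE 4096

/* Invented stand-in for struct ring_buffer: one user page plus data pages. */
struct mybuf {
	void *user_page;
	int   nr_pages;
	void *data_pages[];		/* flexible array, like the kernel struct */
};

/* Allocation helper: returns a zeroed "page" or NULL on failure. */
static void *mybuf_alloc_page(void)
{
	return calloc(1, MYBUF_PAGE_SIZE);
}

/* Matching free helper: the one place that knows how to undo the above. */
static void mybuf_free_page(void *addr)
{
	free(addr);
}

/* Roughly the shape of rb_alloc(): goto labels unwind partial allocations. */
static struct mybuf *mybuf_alloc(int nr_pages)
{
	struct mybuf *buf;
	int i;

	buf = malloc(sizeof(*buf) + nr_pages * sizeof(void *));
	if (!buf)
		goto fail;

	buf->nr_pages = nr_pages;

	buf->user_page = mybuf_alloc_page();
	if (!buf->user_page)
		goto fail_user_page;

	for (i = 0; i < nr_pages; i++) {
		buf->data_pages[i] = mybuf_alloc_page();
		if (!buf->data_pages[i])
			goto fail_data_pages;
	}

	return buf;

fail_data_pages:
	/* Release with the matching helper, in reverse allocation order. */
	for (i--; i >= 0; i--)
		mybuf_free_page(buf->data_pages[i]);

	mybuf_free_page(buf->user_page);

fail_user_page:
	free(buf);

fail:
	return NULL;
}

int main(void)
{
	struct mybuf *buf = mybuf_alloc(4);

	if (!buf)
		return 1;
	printf("allocated 1 user page and %d data pages\n", buf->nr_pages);

	/* Teardown is omitted for brevity; it would mirror the error path. */
	return 0;
}

Because the error path only ever calls mybuf_free_page(), any bookkeeping
managed by the helper pair stays in one place; that is what the patch
restores for perf_mmap_alloc_page()/perf_mmap_free_page().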

Signed-off-by: Yunfeng Ye <yeyunfeng@...wei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: <jolsa@...hat.co>
Cc: <acme@...nel.org>
Cc: <mingo@...hat.com>
Cc: <mark.rutland@....com>
Cc: <namhyung@...nel.org>
Cc: <alexander.shishkin@...ux.intel.com>
Link: https://lkml.kernel.org/r/575c7e8c-90c7-4e3a-b41d-f894d8cdbd7f@huawei.com
---
 kernel/events/ring_buffer.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index abc145c..246c83a 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -754,6 +754,14 @@ static void *perf_mmap_alloc_page(int cpu)
 	return page_address(page);
 }
 
+static void perf_mmap_free_page(void *addr)
+{
+	struct page *page = virt_to_page(addr);
+
+	page->mapping = NULL;
+	__free_page(page);
+}
+
 struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
 {
 	struct ring_buffer *rb;
@@ -788,9 +796,9 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
 
 fail_data_pages:
 	for (i--; i >= 0; i--)
-		free_page((unsigned long)rb->data_pages[i]);
+		perf_mmap_free_page(rb->data_pages[i]);
 
-	free_page((unsigned long)rb->user_page);
+	perf_mmap_free_page(rb->user_page);
 
 fail_user_page:
 	kfree(rb);
@@ -799,14 +807,6 @@ fail:
 	return NULL;
 }
 
-static void perf_mmap_free_page(void *addr)
-{
-	struct page *page = virt_to_page(addr);
-
-	page->mapping = NULL;
-	__free_page(page);
-}
-
 void rb_free(struct ring_buffer *rb)
 {
 	int i;
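
Two details of the hunks are worth spelling out. As the first hunk shows,
perf_mmap_free_page() clears page->mapping before handing the page back with
__free_page(), which a bare free_page() would not do; and its definition is
moved above rb_alloc() because the helper must be declared or defined before
the new call site for the call to compile cleanly. The stand-alone sketch
below (invented fake_page/fake_free_page names, a static slot pool instead of
real pages) models that extra bookkeeping as an owner field the matching free
helper resets:

#include <assert.h>
#include <stddef.h>

#define NR_SLOTS 4

/* A fake "page": the owner field stands in for page->mapping. */
struct fake_page {
	int   in_use;
	void *owner;
	char  data[64];
};

static struct fake_page pool[NR_SLOTS];

static struct fake_page *fake_alloc_page(void)
{
	for (int i = 0; i < NR_SLOTS; i++) {
		if (!pool[i].in_use) {
			pool[i].in_use = 1;
			return &pool[i];
		}
	}
	return NULL;
}

/*
 * Matching free helper, defined before its callers just as the patch moves
 * perf_mmap_free_page() above rb_alloc(): it resets the attached bookkeeping
 * (the analogue of page->mapping = NULL) before releasing the slot, which a
 * bare release of the slot would forget.
 */
static void fake_free_page(struct fake_page *p)
{
	p->owner = NULL;
	p->in_use = 0;
}

int main(void)
{
	int token;
	struct fake_page *p = fake_alloc_page();

	assert(p != NULL);
	p->owner = &token;	/* later code attaches state, like the mmap path */

	fake_free_page(p);	/* the matching helper leaves the slot clean */
	assert(pool[0].owner == NULL && pool[0].in_use == 0);

	return 0;
}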