Message-ID: <20240110012234.3793639-1-kaleshsingh@google.com>
Date: Tue, 9 Jan 2024 17:22:33 -0800
From: Kalesh Singh <kaleshsingh@...gle.com>
To: minchan@...nel.org, akpm@...ux-foundation.org, lmark@...eaurora.org
Cc: surenb@...gle.com, android-mm@...gle.com, kernel-team@...roid.com,
Kalesh Singh <kaleshsingh@...gle.com>, Georgi Djakov <djakov@...nel.org>,
Liam Mark <quic_lmark@...cinc.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [PATCH] mm/cma: Fix placement of trace_cma_alloc_start/finish

The current placement of trace_cma_alloc_start/finish misses the early
failure cases: !cma || !cma->count || !cma->bitmap.
trace_cma_alloc_finish is also not emitted for the failure case where
bitmap_count > bitmap_maxno.

Fix these missed cases by moving the start event before the failure
checks and moving the finish event to the out label.
Fixes: 7bc1aec5e287 ("mm: cma: add trace events for CMA alloc perf testing")
Cc: Minchan Kim <minchan@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Liam Mark <lmark@...eaurora.org>
Signed-off-by: Kalesh Singh <kaleshsingh@...gle.com>
---
mm/cma.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/cma.c b/mm/cma.c
index 2b2494fd6b59..8341f1217a85 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -436,6 +436,9 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	unsigned long i;
 	struct page *page = NULL;
 	int ret = -ENOMEM;
+	const char *name = cma ? cma->name : NULL;
+
+	trace_cma_alloc_start(name, count, align);
 
 	if (!cma || !cma->count || !cma->bitmap)
 		goto out;
@@ -446,8 +449,6 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	if (!count)
 		goto out;
 
-	trace_cma_alloc_start(cma->name, count, align);
-
 	mask = cma_bitmap_aligned_mask(cma, align);
 	offset = cma_bitmap_aligned_offset(cma, align);
 	bitmap_maxno = cma_bitmap_maxno(cma);
@@ -496,8 +497,6 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 		start = bitmap_no + mask + 1;
 	}
 
-	trace_cma_alloc_finish(cma->name, pfn, page, count, align, ret);
-
 	/*
 	 * CMA can allocate multiple page blocks, which results in different
 	 * blocks being marked with different tags. Reset the tags to ignore
@@ -516,6 +515,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 
 	pr_debug("%s(): returned %p\n", __func__, page);
 out:
+	trace_cma_alloc_finish(name, pfn, page, count, align, ret);
 	if (page) {
 		count_vm_event(CMA_ALLOC_SUCCESS);
 		cma_sysfs_account_success_pages(cma, count);

base-commit: 0dd3ee31125508cd67f7e7172247f05b7fd1753a
--
2.43.0.472.g3155946c3a-goog