Message-Id: <20230419083851.2555096-1-sergii.piatakov@globallogic.com>
Date: Wed, 19 Apr 2023 11:38:51 +0300
From: Sergii Piatakov <sergii.piatakov@...ballogic.com>
To: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Steffen Zachaeus <steffen.zachaeus@...next.com>,
Gotthard Voellmeke <gotthard.voellmeke@...esas.com>,
Yaroslav Parkhomenko <yaroslav.parkhomenko@...ballogic.com>,
Sergii Piatakov <sergii.piatakov@...ballogic.com>
Subject: [PATCH mm/cma] mm/cma: retry allocation of dedicated area on EBUSY

Sometimes a contiguous page range can't be allocated because some pages
in the range fail the isolation test. In this case, the CMA allocator
gets an EBUSY error and retries the allocation in a slightly shifted
range. During this procedure, a user may see messages like:

  alloc_contig_range: [70000, 80000) PFNs busy

In most cases everything works out, because an isolation test failure
is a recoverable issue and the CMA allocator takes care of it by
retrying the allocation.

This approach works well when a small piece of memory is allocated from
a big CMA region, but there are cases when the caller needs to allocate
the entire CMA region at once.

For example, a module may require a lot of CMA memory, and a region of
the requested size is bound to the module in the DTS file (see the
illustrative sketch below). When the module tries to allocate its
entire region at once and the isolation test fails, the situation
differs from the usual one in two ways:
- it is not possible to allocate pages in another range of the CMA
  region, because the module needs the whole range from beginning to
  end;
- from the client's point of view, the module doesn't expect its
  request to be rejected, because it has its own dedicated CMA region
  declared in the DTS.
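
For illustration only (the probe helper, its name, and the region size
are hypothetical, not part of this patch), such a client typically
attaches to its dedicated "shared-dma-pool" region and then requests
the whole region in one call, which ends up in cma_alloc() through the
DMA layer:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/of_reserved_mem.h>

/* Hypothetical driver sketch, for illustration only. */
static int example_probe_buffer(struct device *dev, size_t region_size)
{
	dma_addr_t dma_handle;
	void *buf;
	int ret;

	/* Attach the device to its dedicated reserved-memory (CMA) region. */
	ret = of_reserved_mem_device_init(dev);
	if (ret)
		return ret;

	/* Ask for the whole region at once; there is no other range to try. */
	buf = dma_alloc_coherent(dev, region_size, &dma_handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	return 0;
}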

This issue should be handled at the CMA allocator layer, as this is the
lowest layer where the reason for the failure can still be
distinguished: the allocator doesn't return an error code, it just
returns a pointer to a page structure, so when the caller gets NULL it
can't tell what kind of problem happened inside (EBUSY, ENOMEM, or
something else).
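
A simplified sketch of the caller's view (the helper and the way the
region size is obtained are illustrative, not part of this patch):

#include <linux/cma.h>
#include <linux/mm.h>

/*
 * cma_alloc() reports failure only through a NULL return, so from here
 * a transient -EBUSY and a genuine -ENOMEM look exactly the same.
 */
static int example_alloc_whole_region(struct cma *cma, unsigned int align)
{
	unsigned long count = cma_get_size(cma) >> PAGE_SHIFT;
	struct page *page;

	page = cma_alloc(cma, count, align, false);
	if (!page)
		return -ENOMEM;	/* the real reason inside CMA is lost */

	return 0;
}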

To avoid cases where the CMA region has enough room for the requested
pages but NULL is returned because of a failed isolation test, it is
proposed to:
- add a separate branch to handle the case when the entire region is
  requested;
- as an initial solution, retry the allocation several times with the
  same parameters (this is enough in the setup where the issue was
  observed).

Signed-off-by: Sergii Piatakov <sergii.piatakov@...ballogic.com>
---
 mm/cma.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index a7263aa02c92..37e2bc34391b 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -431,6 +431,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	unsigned long i;
 	struct page *page = NULL;
 	int ret = -ENOMEM;
+	int retry = 0;
 
 	if (!cma || !cma->count || !cma->bitmap)
 		goto out;
@@ -487,8 +488,26 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 
 		trace_cma_alloc_busy_retry(cma->name, pfn, pfn_to_page(pfn),
 					   count, align);
-		/* try again with a bit different memory target */
-		start = bitmap_no + mask + 1;
+
+		/*
+		 * The region has enough free space, but it can't be provided right now
+		 * because the underlying layer is busy and can't perform allocation.
+		 * Here we have different options depending on each particular case.
+		 */
+
+		if (!start && !offset && bitmap_maxno == bitmap_count) {
+			/*
+			 * If the whole region is requested it means that:
+			 * - there is no room to retry the allocation in another range;
+			 * - most likely somebody tries to allocate a dedicated CMA region.
+			 * So in this case we can just retry allocation several times with the
+			 * same parameters.
+			 */
+			if (retry++ >= 5 /* max retries */)
+				break;
+		} else
+			/* In other cases try again with a bit different memory target */
+			start = bitmap_no + mask + 1;
 	}
 
 	trace_cma_alloc_finish(cma->name, pfn, page, count, align, ret);
--
2.25.1