Message-ID: <20200306150102.3e77354b@imladris.surriel.com>
Date: Fri, 6 Mar 2020 15:01:02 -0500
From: Rik van Riel <riel@...riel.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-team@...com, Roman Gushchin <guro@...com>,
Qian Cai <cai@....pw>, Vlastimil Babka <vbabka@...e.cz>,
Mel Gorman <mgorman@...hsingularity.net>,
Anshuman Khandual <anshuman.khandual@....com>
Subject: [PATCH] mm,page_alloc,cma: conditionally prefer cma pageblocks for
movable allocations
Posting this one for Roman so I can deal with any upstream feedback and
create a v2 if needed, while scratching my head over the next piece of
this puzzle :)
---8<---
From: Roman Gushchin <guro@...com>
Currently a CMA area is barely used by the page allocator: it is used
only as a fallback for movable allocations, and kswapd tries hard to
make sure that the fallback path isn't used.
This results in the system evicting memory and pushing data out to swap
while lots of CMA memory is still available. This happens despite the
fact that alloc_contig_range() is perfectly capable of moving any movable
pages out of the way of a contiguous allocation.
To use the CMA area more effectively, let's alter the rules: if the zone
has more free CMA pages than half of the total free pages in the zone,
use CMA pageblocks first and fall back to movable blocks in case of
failure.
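
As a quick illustration of the rule (a simplified, standalone sketch with
made-up numbers, not part of the patch): in a zone with 1000 free pages
of which 600 sit in CMA pageblocks, the check fires (600 > 1000 / 2) and
a movable allocation is tried from the CMA free lists first; with only
400 free CMA pages it does not fire and behaviour is unchanged.

#include <stdbool.h>

/* Standalone sketch of the heuristic; not kernel code. */
static bool prefer_cma(unsigned long free_cma_pages, unsigned long free_pages)
{
	/* Prefer CMA pageblocks when they hold more than half of
	 * the zone's free memory. */
	return free_cma_pages > free_pages / 2;
}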
Signed-off-by: Roman Gushchin <guro@...com>
Co-developed-by: Rik van Riel <riel@...riel.com>
Signed-off-by: Rik van Riel <riel@...riel.com>
---
mm/page_alloc.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3c4eb750a199..0fb3c1719625 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2711,6 +2711,18 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 {
 	struct page *page;
 
+	/*
+	 * Balance movable allocations between regular and CMA areas by
+	 * allocating from CMA when over half of the zone's free memory
+	 * is in the CMA area.
+	 */
+	if (migratetype == MIGRATE_MOVABLE &&
+	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
+	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		page = __rmqueue_cma_fallback(zone, order);
+		if (page)
+			return page;
+	}
 retry:
 	page = __rmqueue_smallest(zone, order, migratetype);
 	if (unlikely(!page)) {
--
2.24.1