Message-ID: <e83100ae-d687-3b4e-8256-aee242191ada@suse.cz>
Date: Tue, 25 Aug 2020 11:43:37 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: js1304@...il.com, Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Michal Hocko <mhocko@...nel.org>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
kernel-team@....com, Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH for v5.9] mm/page_alloc: handle a missing case for
memalloc_nocma_{save/restore} APIs
On 8/25/20 6:59 AM, js1304@...il.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@....com>
>
> The memalloc_nocma_{save/restore} APIs can be used to skip page allocation
> from the CMA area, but there is a missing case: a page from the CMA area
> can still be allocated even when the APIs are used. This patch handles that
> case to fix the potential issue.
>
> The missing case is an allocation from the pcplist. The MIGRATE_MOVABLE
> pcplist can hold pages from the CMA area, so it needs to be skipped when
> ALLOC_CMA isn't specified.
>
> This patch implements this behaviour by checking the page allocated from
> the pcplist rather than skipping allocation from the pcplist entirely.
> Skipping the pcplist entirely would result in a mismatch between the
> watermark check and the actual page allocation.
Are you sure? I think a mismatch exists already. Pages can be on the pcplist but
they are not considered free in the watermark check. So passing the watermark
check means there should also be pages on the free lists. So skipping pcplists
would be safe, no?
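The refill path is what makes that true: when rmqueue_bulk() moves pages from
the buddy lists onto a pcplist it also subtracts them from NR_FREE_PAGES,
which is the counter the watermark check looks at. Roughly (from memory of
mm/page_alloc.c of that era, not an exact quote):

	/*
	 * In rmqueue_bulk(): the i pages pulled off the buddy lists to
	 * refill the pcplist are subtracted from NR_FREE_PAGES, so pages
	 * sitting on a pcplist are not seen by zone_watermark_ok().
	 */
	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));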
> It would also require breaking the current code layering in which an
> order-0 page is always handled by the pcplist. I'd prefer to avoid that, so
> this patch uses a different way to skip CMA page allocation from the
> pcplist.
Well, it would be much simpler and wouldn't affect most allocations. Better than
flushing the pcplists IMHO.
Something like this?
----8<----
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e2bab486fea..15787ffb1708 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3361,7 +3361,10 @@ struct page *rmqueue(struct zone *preferred_zone,
 	unsigned long flags;
 	struct page *page;
 
-	if (likely(order == 0)) {
+	if (likely(order == 0) &&
+			(!IS_ENABLED(CONFIG_CMA) ||
+			 migratetype != MIGRATE_MOVABLE ||
+			 alloc_flags & ALLOC_CMA)) {
 		page = rmqueue_pcplist(preferred_zone, zone, gfp_flags,
 					migratetype, alloc_flags);
 		goto out;
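
For contrast, the "check the page that comes off the pcplist" approach from the
quoted changelog might look something like the fragment below, placed in
__rmqueue_pcplist()'s take-a-page loop. This is only an illustration of the
idea, not the patch as posted; the helpers used (get_pcppage_migratetype(),
is_migrate_cma()) do exist in mm/page_alloc.c, but the placement and the
bail-out policy here are guesses:

	page = list_first_entry(list, struct page, lru);
	list_del(&page->lru);
	pcp->count--;

	/*
	 * Hypothetical extra check: if the caller did not pass ALLOC_CMA
	 * (e.g. under memalloc_nocma_save()), refuse a CMA page, put it
	 * back on the list and let the caller fall back to the buddy path.
	 */
	if (!(alloc_flags & ALLOC_CMA) &&
	    is_migrate_cma(get_pcppage_migratetype(page))) {
		list_add_tail(&page->lru, list);
		pcp->count++;
		page = NULL;
	}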