Message-ID: <20230421125453.np6b5hirktkj6ji5@techsingularity.net>
Date: Fri, 21 Apr 2023 13:54:53 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Johannes Weiner <hannes@...xchg.org>
Cc: linux-mm@...ck.org, Kaiyang Zhao <kaiyang2@...cmu.edu>,
Vlastimil Babka <vbabka@...e.cz>,
David Rientjes <rientjes@...gle.com>,
linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [RFC PATCH 06/26] mm: page_alloc: consolidate free page
accounting
On Tue, Apr 18, 2023 at 03:12:53PM -0400, Johannes Weiner wrote:
> Free page accounting currently happens a bit too high up the call
> stack, where it has to deal with guard pages, compaction capturing,
> block stealing and even page isolation. This is subtle and fragile,
> and makes it difficult to hack on the code.
>
> Push the accounting down to where pages enter and leave the physical
> freelists, where all these higher-level exceptions are of no concern.
>
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
I didn't look too closely at this one as I'm scanning through to see how
the overall series works, and this is mostly a mechanical patch.
However, it definitely breaks the build:
> @@ -843,7 +843,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
> early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
>
> static inline bool set_page_guard(struct zone *zone, struct page *page,
> - unsigned int order, int migratetype)
> + unsigned int order
> {
> if (!debug_guardpage_enabled())
> return false;
Here: the closing parenthesis of the parameter list was dropped along
with the migratetype parameter.
--
Mel Gorman
SUSE Labs