Message-ID: <20230421150808.GC320347@cmpxchg.org>
Date: Fri, 21 Apr 2023 11:08:08 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: linux-mm@...ck.org, Kaiyang Zhao <kaiyang2@...cmu.edu>,
Vlastimil Babka <vbabka@...e.cz>,
David Rientjes <rientjes@...gle.com>,
linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [RFC PATCH 06/26] mm: page_alloc: consolidate free page
accounting
On Fri, Apr 21, 2023 at 01:54:53PM +0100, Mel Gorman wrote:
> On Tue, Apr 18, 2023 at 03:12:53PM -0400, Johannes Weiner wrote:
> > Free page accounting currently happens a bit too high up the call
> > stack, where it has to deal with guard pages, compaction capturing,
> > block stealing and even page isolation. This is subtle and fragile,
> > and makes it difficult to hack on the code.
> >
> > Push the accounting down to where pages enter and leave the physical
> > freelists, where all these higher-level exceptions are of no concern.
> >
> > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
>
> I didn't look too closely at this one as I'm scanning through to see how
> the overall series works and this is mostly a mechanical patch.
> However, it definitely breaks build
>
> > @@ -843,7 +843,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
> > early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
> >
> > static inline bool set_page_guard(struct zone *zone, struct page *page,
> > - unsigned int order, int migratetype)
> > + unsigned int order
> > {
> > if (!debug_guardpage_enabled())
> > return false;

Oops, this is under a config I didn't test. Will fix. Thanks.