Message-ID: <tencent_BAE640F74B4B0434E930276D1AD12E67FB08@qq.com>
Date: Fri, 30 Jan 2026 08:29:05 +0800
From: "shengminghu512" <shengminghu512@...com>
To: "Andrew Morton" <akpm@...ux-foundation.org>
Cc: "vbabka" <vbabka@...e.cz>, "surenb" <surenb@...gle.com>, "mhocko" <mhocko@...e.com>, "jackmanb" <jackmanb@...gle.com>, "hannes" <hannes@...xchg.org>, "ziy" <ziy@...dia.com>, "linux-mm" <linux-mm@...ck.org>, "linux-kernel" <linux-kernel@...r.kernel.org>, "hu.shengming" <hu.shengming@....com.cn>, "zhang.run" <zhang.run@....com.cn>
Subject: Re: [PATCH linux-next] mm/page_alloc: avoid overcounting bulk alloc in watermark check
> On Thu, 29 Jan 2026 22:38:14 +0800 "shengminghu512" <shengminghu512@...com> wrote:
>
> > From: Shengming Hu <hu.shengming@....com.cn>
> >
> > alloc_pages_bulk_noprof() only fills NULL slots and already tracks how many
> > entries are pre-populated via nr_populated.
> >
> > The fast watermark check was adding nr_pages unconditionally, which can
> > overestimate the demand. Use (nr_pages - nr_populated) instead, as an
> > upper bound on the remaining pages this call can still allocate without
> > scanning the whole array.
>
> Thanks.
>
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -5130,7 +5130,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
> >
> > cond_accept_memory(zone, 0, alloc_flags);
> > retry_this_zone:
> > - mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages;
> > + mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages - nr_populated;
> > if (zone_watermark_fast(zone, 0, mark,
> > zonelist_zone_idx(ac.preferred_zoneref),
> > alloc_flags, gfp)) {
>
> So that little optimization hasn't been working for four years?

Yeah, looks like it’s been conservative for a long time :)
It didn’t break correctness, but it likely made the fast watermark check
less effective: counting the already-populated entries again overestimates
demand, so zone_watermark_fast() fails sooner than it needs to and we fall
back to the slow path more often.
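
For anyone skimming the archive, here is a rough userspace sketch of the
effect (demo_mark(), the struct-free void * array, and the numbers are all
made up for illustration; this is not the kernel code):

#include <stdio.h>

static unsigned long demo_mark(unsigned long wmark, void **array,
			       unsigned long nr_pages)
{
	unsigned long nr_populated = 0;

	/* Count the leading slots the caller already filled. */
	while (nr_populated < nr_pages && array[nr_populated])
		nr_populated++;

	/* Only the remaining NULL slots can be allocated by this call. */
	return wmark + (nr_pages - nr_populated);
}

int main(void)
{
	void *array[8] = { (void *)1, (void *)1, (void *)1 };	/* 3 of 8 slots populated */
	unsigned long wmark = 1000;

	printf("old mark: %lu\n", wmark + 8);			/* 1008: counts all 8 */
	printf("new mark: %lu\n", demo_mark(wmark, array, 8));	/* 1005: only 5 still needed */
	return 0;
}

With three of eight slots already populated, the old check asks the zone for
eight pages of headroom even though the call can allocate at most five, which
is exactly the overcounting the patch removes.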
--
With Best Regards,
Shengming