Message-ID: <20251014131527.2682236-1-joshua.hahnjy@gmail.com>
Date: Tue, 14 Oct 2025 06:15:27 -0700
From: Joshua Hahn <joshua.hahnjy@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Chris Mason <clm@...com>,
Kiryl Shutsemau <kirill@...temov.name>,
Brendan Jackman <jackmanb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Zi Yan <ziy@...dia.com>,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
kernel-team@...a.com
Subject: Re: [PATCH v4 3/3] mm/page_alloc: Batch page freeing in free_frozen_page_commit
On Tue, 14 Oct 2025 11:38:00 +0200 Vlastimil Babka <vbabka@...e.cz> wrote:
> On 10/13/25 21:08, Joshua Hahn wrote:
> > Before returning, free_frozen_page_commit calls free_pcppages_bulk using
> > nr_pcp_free to determine how many pages can appropriately be freed,
> > based on the tunable parameters stored in pcp. While this number is an
> > accurate representation of how many pages should be freed in total, it
> > is not an appropriate number of pages to free at once using
> > free_pcppages_bulk, since we have seen the value consistently go above
> > 2000 in the Meta fleet on larger machines.
> >
> > As such, perform batched page freeing in free_pcppages_bulk by using
> > pcp->batch member. In order to ensure that other processes are not
> > starved of the zone lock, release both the zone lock and pcp lock to yield to
> > other threads.
> >
> > Note that because free_frozen_page_commit now takes a spinlock inside the
> > function (and the trylock can fail), the function may now return with the
> > pcp lock released. To handle this, return true if the pcp is locked on exit
> > and false otherwise.
> >
> > In addition, since free_frozen_page_commit must now be aware of what UP
> > flags were stored at the time of the spin lock, and because we must be
> > able to report new UP flags to the callers, add a new unsigned long*
> > parameter UP_flags to keep track of this.
[...snip...]
> > @@ -2861,15 +2871,47 @@ static void free_frozen_page_commit(struct zone *zone,
> > * Do not attempt to take a zone lock. Let pcp->count get
> > * over high mark temporarily.
> > */
> > - return;
> > + return true;
> > }
> >
> > high = nr_pcp_high(pcp, zone, batch, free_high);
> > if (pcp->count < high)
> > - return;
> > + return true;
> > +
> > + to_free = nr_pcp_free(pcp, batch, high, free_high);
> > + if (to_free == 0)
> > + return true;
Hello Vlastimil, thank you for your patience and review on this iteration!
> I think this is an unnecessary shortcut. The while() condition covers this
> and it's likely rare enough that we don't gain anything (if the goal was to
> skip the ZONE_BELOW_HIGH check below).
Agreed.
> > +
> > + while (to_free > 0 && pcp->count >= high) {
>
> The "&& pcp->count >= high" is AFAICS still changing how much we free
> compared to before the patch. I.e. we might terminate as soon as freeing
> "to_free_batched" in some iteration gets us below "high", while previously
> we would free the whole "to_free" and get way further below the "high".
This is true, and I also see now what you had meant in your feedback on the
previous iteration.
> It should be changed to "&& pcp->count > 0" intended only to prevent useless
> iterations that decrement to_free by to_free_batched while
> free_pcppages_bulk() does nothing.
This makes sense. Sorry, I think I missed your point in the previous version,
but now I see what you meant about the count. Previously, when we were
re-calculating high every iteration, I thought it made sense to repeat the
check, since we might want to terminate early. But I agree that this doesn't
really make sense; we want to preserve the behavior of the original code.
I do have one comment below as well:
> > + to_free_batched = min(to_free, batch);
> > + free_pcppages_bulk(zone, to_free_batched, pcp, pindex);
> > + to_free -= to_free_batched;
> > + if (pcp->count >= high) {
Here, I think I should change this in the next version to also just check
for the same condition as the while loop (i.e. to_free > 0 && pcp->count > 0).
The idea is that if there will be another iteration, we re-lock; otherwise,
we can skip the body of the if statement entirely. If it is left as a check
for pcp->count >= high, there will be a weird case when 0 < pcp->count < high,
where we continue to call free_pcppages_bulk but do not re-lock.
So unfortunately, I will have to check for the same condition as the
while loop in the if statement :-( I'll send a new version with the changes;
I don't expect there to be a drastic performance change, since I think the
early termination case would have only applied if there was a race condition
that freed the pcp remotely.
Thank you as always, Vlastimil. I hope you have a great day!
Joshua