Message-ID: <20250926154834.2327823-1-joshua.hahnjy@gmail.com>
Date: Fri, 26 Sep 2025 08:48:33 -0700
From: Joshua Hahn <joshua.hahnjy@...il.com>
To: Brendan Jackman <jackmanb@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Chris Mason <clm@...com>,
Kiryl Shutsemau <kirill@...temov.name>,
Michal Hocko <mhocko@...e.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Zi Yan <ziy@...dia.com>,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
kernel-team@...a.com
Subject: Re: [PATCH v2 2/4] mm/page_alloc: Perform appropriate batching in drain_pages_zone
On Fri, 26 Sep 2025 14:01:43 +0000 Brendan Jackman <jackmanb@...gle.com> wrote:
> On Wed Sep 24, 2025 at 8:44 PM UTC, Joshua Hahn wrote:
> > drain_pages_zone completely drains a zone of its pcp free pages by
> > repeatedly calling free_pcppages_bulk until pcp->count reaches 0.
> > In this loop, it already performs batched calls to ensure that
> > free_pcppages_bulk isn't called to free too many pages at once, and
> > relinquishes & reacquires the lock between each call to prevent
> > lock starvation from other processes.
> >
> > However, the current batching does not prevent lock starvation. The
> > current implementation creates batches of
> > pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX, which has been seen in
> > Meta workloads to be up to 64 << 5 == 2048 pages.
> >
> > While it is true that CONFIG_PCP_BATCH_SCALE_MAX is a config and
> > indeed can be adjusted by the system admin to be any number from
> > 0 to 6, its default value of 5 is still too high to be reasonable for
> > any system.
> >
> > Instead, let's create batches of pcp->batch pages, which gives a more
> > reasonable 64 pages per call to free_pcppages_bulk. This gives other
> > processes a chance to grab the lock and prevents starvation. Each
> > individual call to drain_pages_zone may take longer, but we avoid the
> > worst case scenario of completely starving out other system-critical
> > threads from acquiring the pcp lock while 2048 pages are freed
> > one-by-one.
Hello Brendan, thank you for your review!
> Hey Joshua, do you know why pcp->batch is a factor here at all? Until
> now I never really noticed it. I thought that this field was a kinda
> dynamic auto-tuning where we try to make the pcplists a more aggressive
> cache when they're being used a lot and then shrink them down when the
> allocator is under less load. But I don't have a good intuition for why
> that's relevant to drain_pages_zone(). Something to do with the amount
> of lock contention we expect?
From my understanding, pcp->batch is a value that can be used to batch
both allocation and freeing operations. For instance, drain_zone_pages
uses pcp->batch to ensure that we don't free too many pages at once,
which would lead to things like lock contention (I will address the
similarity between drain_zone_pages and drain_pages_zone at the end).
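For reference, drain_zone_pages already does exactly one capped round of
freeing per call, roughly like this (sketching from memory, so worth
double-checking against mm/page_alloc.c):

static void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
{
	int to_drain, batch;

	batch = READ_ONCE(pcp->batch);
	to_drain = min(pcp->count, batch);	/* at most one batch per call */
	if (to_drain > 0) {
		spin_lock(&pcp->lock);
		free_pcppages_bulk(zone, to_drain, pcp, 0);
		spin_unlock(&pcp->lock);
	}
}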
As for the purpose of batch and how its value is determined, I got my
understanding from this comment in zone_batchsize:
* ... The batch
* size is striking a balance between allocation latency
* and zone lock contention.
And based on this comment, I think a symmetric argument can be made for
freeing by just s/allocation latency/freeing latency/ above. My understanding
was that if we batch allocations more aggressively, we should also batch
freeing more aggressively to clean up those allocations quickly, and it seems
like this is reflected in decay_pcp_high, where a larger batch lets pcp->high
drop further per call, so more pages can be freed.
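To be concrete about the decay_pcp_high part, the piece I had in mind looks
roughly like this (paraphrased from memory rather than quoted verbatim, so
please double-check against mm/page_alloc.c):

	/*
	 * pcp->high decays by about 1/8 per call, but the decrement is
	 * clamped so a single call never exposes more than
	 * batch << CONFIG_PCP_BATCH_SCALE_MAX pages for draining, and
	 * pcp->high never drops below high_min.
	 */
	pcp->high = max3(pcp->count - (batch << CONFIG_PCP_BATCH_SCALE_MAX),
			 pcp->high - (pcp->high >> 3), high_min);

	/* Whatever now sits above the new high is freed in one bulk call. */
	to_drain = pcp->count - pcp->high;
	if (to_drain > 0) {
		spin_lock(&pcp->lock);
		free_pcppages_bulk(zone, to_drain, pcp, 0);
		spin_unlock(&pcp->lock);
	}

So a larger batch loosens that clamp, which is why I read it as the freeing
side scaling along with the allocation side.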
Please let me know if my understanding of this area is incorrect here!
> Unless I'm just being stupid here, maybe a chance to add commentary.
I can definitely add some more context to this patch in the next version.
Actually, you are right -- reading back through my patch description, I've
motivated why we want batching, but not why pcp->batch is a good candidate
for this value. I'll definitely go back and clean it up!
> >
> > Signed-off-by: Joshua Hahn <joshua.hahnjy@...il.com>
> > ---
> > mm/page_alloc.c | 3 +--
> > 1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 77e7d9a5f149..b861b647f184 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2623,8 +2623,7 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
> > spin_lock(&pcp->lock);
> > count = pcp->count;
> > if (count) {
> > - int to_drain = min(count,
> > - pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);
> > + int to_drain = min(count, pcp->batch);
>
> We actually don't need the min() here as free_pcppages_bulk() does that
> anyway. Not really related to the commit but maybe worth tidying that
> up.
Please correct me if I am missing something, but I think we still need the
min() here, since it takes the min of count and pcp->batch, while the
min in free_pcppages_bulk takes the min of the above result and pcp->count.
From what I can understand, the goal of the min() in free_pcppages_bulk
is to ensure that we don't try to free more pages than exist in the pcp
(hence the min with count), while the goal of my min() is to not free
too many pages at once.
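To put it concretely, after this patch the two bounds compose roughly like
this (a sketch, not verbatim kernel code):

	/* In drain_pages_zone(): cap one round of freeing at pcp->batch. */
	int to_drain = min(count, pcp->batch);

	free_pcppages_bulk(zone, to_drain, pcp, 0);

	/*
	 * Inside free_pcppages_bulk(): clamp again to what the pcp actually
	 * holds, so we never go looking for pages that aren't there.
	 */
	count = min(pcp->count, count);

so dropping the outer min() would reintroduce the unbounded per-lock-hold
work this patch is trying to avoid.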
> Also, it seems if we drop the BATCH_SCALE_MAX logic the inside of the
> loop is now very similar to drain_zone_pages(), maybe time to have them
> share some code and avoid the confusing name overlap? drain_zone_pages()
> reads pcp->count without the lock or READ_ONCE() though, I assume that's
> coming from an assumption that pcp is owned by the current CPU and
> that's the only one that modifies it? Even if that's accurate it seems
> like an unnecessary optimisation to me.
This makes a lot of sense to me. To be honest, I was also confused about
why these two functions existed separately, so combining them into one
definitely sounds like a great change.
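Just to confirm the direction, I am imagining something along these lines
(the helper name, the flag, and the exact shape are purely illustrative):

	/* Hypothetical shared helper -- not an existing kernel function. */
	static void __drain_pcp(struct zone *zone, struct per_cpu_pages *pcp,
				bool drain_all)
	{
		int count;

		do {
			spin_lock(&pcp->lock);
			count = pcp->count;
			if (count) {
				int to_drain = min(count, READ_ONCE(pcp->batch));

				free_pcppages_bulk(zone, to_drain, pcp, 0);
				count -= to_drain;
			}
			spin_unlock(&pcp->lock);
		} while (drain_all && count);
	}

with drain_zone_pages passing false and drain_pages_zone passing true;
reading pcp->count under the lock would also take care of the unlocked
read you pointed out.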
I'll make these revisions in the next version. Thank you for your valuable
feedback; this was very helpful! I hope you have a great day :-)
Joshua