Message-ID: <DD2SCJF0CLN5.1824PA58HFFZF@google.com>
Date: Fri, 26 Sep 2025 14:01:43 +0000
From: Brendan Jackman <jackmanb@...gle.com>
To: Joshua Hahn <joshua.hahnjy@...il.com>, Andrew Morton <akpm@...ux-foundation.org>, 
	Johannes Weiner <hannes@...xchg.org>
Cc: Chris Mason <clm@...com>, Kiryl Shutsemau <kirill@...temov.name>, Michal Hocko <mhocko@...e.com>, 
	Suren Baghdasaryan <surenb@...gle.com>, Vlastimil Babka <vbabka@...e.cz>, Zi Yan <ziy@...dia.com>, 
	<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>, <kernel-team@...a.com>
Subject: Re: [PATCH v2 2/4] mm/page_alloc: Perform appropriate batching in drain_pages_zone

On Wed Sep 24, 2025 at 8:44 PM UTC, Joshua Hahn wrote:
> drain_pages_zone completely drains a zone of its pcp free pages by
> repeatedly calling free_pcppages_bulk until pcp->count reaches 0.
> In this loop, it already performs batched calls to ensure that
> free_pcppages_bulk isn't called to free too many pages at once, and
> relinquishes & reacquires the lock between each call to prevent
> lock starvation from other processes.
>
> However, the current batching does not prevent lock starvation. The
> current implementation creates batches of
> pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX, which has been seen in
> Meta workloads to be up to 64 << 5 == 2048 pages.
>
> While it is true that CONFIG_PCP_BATCH_SCALE_MAX is a config option
> and can indeed be adjusted by the system admin to any value from 0 to
> 6, its default value of 5 is still too high to be reasonable for any
> system.
>
> Instead, let's create batches of pcp->batch pages, which gives a more
> reasonable 64 pages per call to free_pcppages_bulk. This gives other
> processes a chance to grab the lock and prevents starvation. Each
> individual call to drain_pages_zone may take longer, but we avoid the
> worst-case scenario of completely starving other system-critical
> threads of the chance to acquire the pcp lock while 2048 pages are
> freed one by one.

Hey Joshua, do you know why pcp->batch is a factor here at all? Until
now I had never really noticed it. I thought this field was a kind of
dynamic auto-tuning, where we try to make the pcplists a more
aggressive cache when they're being used a lot and then shrink them
down when the allocator is under less load. But I don't have a good
intuition for why that's relevant to drain_pages_zone(). Something to
do with the amount of lock contention we expect?
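
For context, here is how I read the whole loop with this patch applied
(reconstructed from the hunk below plus memory, so the lines outside
the hunk may not match your tree exactly):

static void drain_pages_zone(unsigned int cpu, struct zone *zone)
{
	struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
	int count;

	do {
		spin_lock(&pcp->lock);
		count = pcp->count;
		if (count) {
			/* Post-patch: one pcp->batch worth per lock hold. */
			int to_drain = min(count, pcp->batch);

			free_pcppages_bulk(zone, to_drain, pcp, 0);
			count -= to_drain;
		}
		spin_unlock(&pcp->lock);
	} while (count);
}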

Unless I'm just being stupid here, maybe this is a chance to add some
commentary.

>
> Signed-off-by: Joshua Hahn <joshua.hahnjy@...il.com>
> ---
>  mm/page_alloc.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 77e7d9a5f149..b861b647f184 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2623,8 +2623,7 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
>  		spin_lock(&pcp->lock);
>  		count = pcp->count;
>  		if (count) {
> -			int to_drain = min(count,
> -				pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);
> +			int to_drain = min(count, pcp->batch);

We actually don't need the min() here as free_pcppages_bulk() does that
anyway. Not really related to the commit but maybe worth tidying that
up.
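
For reference, the clamp I mean is near the top of free_pcppages_bulk()
(quoting loosely from memory, so worth double-checking against your
tree):

	/* In free_pcppages_bulk(): never free more than is on the list. */
	count = min(pcp->count, count);

So passing an over-large count is already harmless, and the min() in
the caller is redundant.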

Also, it seems that if we drop the BATCH_SCALE_MAX logic, the inside
of the loop becomes very similar to drain_zone_pages(), so maybe it's
time to have them share some code and avoid the confusing name
overlap? drain_zone_pages() reads pcp->count without the lock or
READ_ONCE(), though; I assume that rests on the assumption that pcp is
owned by the current CPU and that it's the only one that modifies it?
Even if that's accurate, it seems like an unnecessary optimisation to
me.
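
Just to make the code-sharing idea concrete, I'm imagining something
along these lines (completely untested sketch, the helper name is made
up):

/* Free one locked batch; returns how many pages were freed. */
static int drain_pcp_batch(struct zone *zone, struct per_cpu_pages *pcp)
{
	int to_drain;

	spin_lock(&pcp->lock);
	to_drain = min(pcp->count, pcp->batch);
	if (to_drain)
		free_pcppages_bulk(zone, to_drain, pcp, 0);
	spin_unlock(&pcp->lock);

	return to_drain;
}

Then drain_pages_zone() is just a while loop around that, and
drain_zone_pages() is a single call, which would also replace the
unlocked pcp->count read with one taken under the lock.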

Cheers,
Brendan
