Message-ID: <aJ5IP1gKV1bkayj4@quatroqueijos.cascardo.eti.br>
Date: Thu, 14 Aug 2025 17:34:07 -0300
From: Thadeu Lima de Souza Cascardo <cascardo@...lia.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Vlastimil Babka <vbabka@...e.cz>,
	Suren Baghdasaryan <surenb@...gle.com>,
	Michal Hocko <mhocko@...e.com>,
	Brendan Jackman <jackmanb@...gle.com>, Zi Yan <ziy@...dia.com>,
	Mel Gorman <mgorman@...hsingularity.net>, kernel-dev@...lia.com,
	Helen Koike <koike@...lia.com>,
	Matthew Wilcox <willy@...radead.org>, NeilBrown <neilb@...e.de>,
	Thierry Reding <thierry.reding@...il.com>
Subject: Re: [PATCH] mm/page_alloc: only set ALLOC_HIGHATOMIC for __GFP_HIGH
 allocations

On Thu, Aug 14, 2025 at 04:12:11PM -0400, Johannes Weiner wrote:
> Hello Thadeu,
> 
> On Thu, Aug 14, 2025 at 02:22:45PM -0300, Thadeu Lima de Souza Cascardo wrote:
> > Commit 524c48072e56 ("mm/page_alloc: rename ALLOC_HIGH to
> > ALLOC_MIN_RESERVE") is the start of a series that explains how __GFP_HIGH,
> > which implies ALLOC_MIN_RESERVE, is going to be used instead of
> > __GFP_ATOMIC for high atomic reserves.
> > 
> > Commit eb2e2b425c69 ("mm/page_alloc: explicitly record high-order atomic
> > allocations in alloc_flags") introduced ALLOC_HIGHATOMIC for such
> > allocations of order higher than 0. It still used __GFP_ATOMIC, though.
> > 
> > Then, commit 1ebbb21811b7 ("mm/page_alloc: explicitly define how __GFP_HIGH
> > non-blocking allocations accesses reserves") turned that into a check for
> > !__GFP_DIRECT_RECLAIM, ignoring that high atomic reserves were expected to
> > test for __GFP_HIGH.
> 
> It indeed looks accidental. From the cover letter,
> 
>     High-order atomic allocations are explicitly handled with the caveat that
>     no __GFP_ATOMIC flag means that any high-order allocation that specifies
>     GFP_HIGH and cannot enter direct reclaim will be treated as if it was
>     GFP_ATOMIC.
> 
> it sounds like the intent was what your patch does, and not to extend
> those privileges to anybody who is !gfp_direct_reclaim.
> 
> > This leads to high atomic reserves being added for high-order GFP_NOWAIT
> > allocations and others that clear __GFP_DIRECT_RECLAIM, which is
> > unexpected. Later, those reserves lead to 0-order allocations going to the
> > slow path and starting reclaim.
> 
> Can you please provide more background on the workload and the
> environment in which you observed this?
> 
> Which GFP_NOWAIT requests you saw participating in the reserves etc.
> 
> I would feel better with Mel or Vlastimil chiming in as well, but your
> fix looks correct to me.

Thanks for the review, Johannes.

This was observed in a browser/desktop environment test, where we noticed a
memory pressure regression. This change alone does not make the regression go
away entirely, but it does improve it.

I noticed some unix socket skb allocations going on and found this in
net/core/skbuff.c:alloc_skb_with_frags():

			page = alloc_pages((gfp_mask & ~__GFP_DIRECT_RECLAIM) |
					   __GFP_COMP |
					   __GFP_NOWARN,
					   order);
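
Since that call clears __GFP_DIRECT_RECLAIM, any order > 0 request from there
now gets ALLOC_HIGHATOMIC. As I read mm/page_alloc.c after commit 1ebbb21811b7
(a rough sketch, not a verbatim quote):

	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
		...
		/* Any non-blocking high-order allocation gets recorded as
		 * high-atomic, whether or not it passed __GFP_HIGH. */
		if (order > 0)
			alloc_flags |= ALLOC_HIGHATOMIC;
		...
	}

So those skb allocations grow the high-atomic reserve even though they never
asked for __GFP_HIGH.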

But I also tested this on a simple VM with the simplest of workloads (no swap,
writing to tmpfs), and it triggered via xarrays (see the WARN_ON_ONCE I added
below). At lib/xarray.c:xas_alloc():

		gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN;

		if (xas->xa->xa_flags & XA_FLAGS_ACCOUNT)
			gfp |= __GFP_ACCOUNT;

		node = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp);

On that VM, radix_tree_node_cachep uses a 4-page slab, so refilling it means an
order-2 page allocation with a mask that still lacks both __GFP_DIRECT_RECLAIM
and __GFP_HIGH.
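
Spelling out the flag arithmetic for that path (a self-contained mock with
made-up flag values; the only property it relies on is that GFP_NOWAIT carries
neither __GFP_DIRECT_RECLAIM nor __GFP_HIGH, and the "patched" condition is
just my reading of the subject line, not the actual hunk):

	#include <stdio.h>

	/* Illustrative flag bits only; they do not match the kernel's values. */
	enum {
		MOCK_HIGH		= 0x1,	/* __GFP_HIGH */
		MOCK_DIRECT_RECLAIM	= 0x2,	/* __GFP_DIRECT_RECLAIM */
		MOCK_KSWAPD_RECLAIM	= 0x4,	/* __GFP_KSWAPD_RECLAIM */
		MOCK_NOWARN		= 0x8,	/* __GFP_NOWARN */
		MOCK_GFP_NOWAIT		= MOCK_KSWAPD_RECLAIM,
	};

	int main(void)
	{
		unsigned int gfp = MOCK_GFP_NOWAIT | MOCK_NOWARN; /* as in xas_alloc() */
		unsigned int order = 2;                           /* 4-page slab refill */

		/* Current check: any non-blocking high-order allocation qualifies. */
		int current_highatomic = !(gfp & MOCK_DIRECT_RECLAIM) && order > 0;

		/* Additionally gated on __GFP_HIGH, per the patch subject. */
		int patched_highatomic = current_highatomic && !!(gfp & MOCK_HIGH);

		printf("current: %d, patched: %d\n",
		       current_highatomic, patched_highatomic);
		return 0;
	}

This prints "current: 1, patched: 0": the xarray node refill would stop being
treated as a high-atomic allocation.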

I tested with something like:

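			/* Warn when a high-order non-blocking allocation is
			 * about to be marked ALLOC_HIGHATOMIC without having
			 * requested __GFP_HIGH (ALLOC_MIN_RESERVE). */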
			if (order > 0) {
				WARN_ON_ONCE(!(alloc_flags & ALLOC_MIN_RESERVE));
				alloc_flags |= ALLOC_HIGHATOMIC;
			}

Thanks.
Cascardo.
