Message-ID: <20251024142137.739555-1-clm@meta.com>
Date: Fri, 24 Oct 2025 07:21:35 -0700
From: Chris Mason <clm@...a.com>
To: Vlastimil Babka <vbabka@...e.cz>
CC: Chris Mason <clm@...a.com>, Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...two.org>,
        David Rientjes <rientjes@...gle.com>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Harry Yoo <harry.yoo@...cle.com>, Uladzislau Rezki <urezki@...il.com>,
        "Liam R. Howlett" <Liam.Howlett@...cle.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        "Sebastian Andrzej Siewior" <bigeasy@...utronix.de>,
        Alexei Starovoitov <ast@...nel.org>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>, <linux-rt-devel@...ts.linux.dev>,
        <bpf@...r.kernel.org>, <kasan-dev@...glegroups.com>
Subject: Re: [PATCH RFC 02/19] slab: handle pfmemalloc slabs properly with sheaves

On Thu, 23 Oct 2025 15:52:24 +0200 Vlastimil Babka <vbabka@...e.cz> wrote:

> When a pfmemalloc allocation actually dips into reserves, the slab is
> marked accordingly and non-pfmemalloc allocations should not be allowed
> to allocate from it. The sheaves percpu caching currently doesn't follow
> this rule, so implement it before we expand sheaves usage to all caches.
> 
> Make sure objects from pfmemalloc slabs don't end up in percpu sheaves.
> When freeing, skip sheaves when freeing an object from pfmemalloc slab.
> When refilling sheaves, use __GFP_NOMEMALLOC to override any pfmemalloc
> context - the allocation will fallback to regular slab allocations when
> sheaves are depleted and can't be refilled because of the override.
> 
> For kfree_rcu(), detect pfmemalloc slabs after processing the rcu_sheaf
> after the grace period in __rcu_free_sheaf_prepare() and simply flush
> it if any object is from pfmemalloc slabs.
> 
> For prefilled sheaves, try to refill them first with __GFP_NOMEMALLOC
> and if it fails, retry without __GFP_NOMEMALLOC but then mark the sheaf
> pfmemalloc, which makes it flushed back to slabs when returned.
>

Hi Vlastimil,

We're trying out the AI patch review automation on the BPF commits and it had
some questions about a few of these.  Since the goal is to actually test the
automation, I'm including its comments unedited, but I did try to make sure
they were valid before sending.

For this one, I'm not sure if the behavior the AI is flagging is intentional
or not, so it feels like a reasonable question to me at least:

> diff --git a/mm/slub.c b/mm/slub.c
> index 4731b9e461c2..ab03f29dc3bf 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c

[ ... ]

> @@ -5331,6 +5340,26 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t gfpflags, int nod
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_node_noprof);
>
> +static int __prefill_sheaf_pfmemalloc(struct kmem_cache *s,
> +				      struct slab_sheaf *sheaf, gfp_t gfp)
> +{
> +	int ret = 0;
> +
> +	ret = refill_sheaf(s, sheaf, gfp | __GFP_NOMEMALLOC);
> +
> +	if (likely(!ret || !gfp_pfmemalloc_allowed(gfp)))
> +		return ret;
> +
> +	/*
> +	 * if we are allowed to, refill sheaf with pfmemalloc but then remember
> +	 * it for when it's returned
> +	 */
> +	ret = refill_sheaf(s, sheaf, gfp);
> +	sheaf->pfmemalloc = true;
> +
> +	return ret;
> +}

[ ... ]

> @@ -5497,7 +5528,7 @@ int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
>
>  	if (likely(sheaf->capacity >= size)) {
>  		if (likely(sheaf->capacity == s->sheaf_capacity))
> -			return refill_sheaf(s, sheaf, gfp);
> +			return __prefill_sheaf_pfmemalloc(s, sheaf, gfp);
>
>  		if (!__kmem_cache_alloc_bulk(s, gfp, sheaf->capacity - sheaf->size,
>  					     &sheaf->objects[sheaf->size])) {
                                             ^^^

In kmem_cache_refill_sheaf(), does the oversize sheaf path (when
sheaf->capacity != s->sheaf_capacity) need __GFP_NOMEMALLOC too?

The commit message says "When refilling sheaves, use __GFP_NOMEMALLOC
to override any pfmemalloc context", and the normal capacity path now
calls __prefill_sheaf_pfmemalloc() which adds __GFP_NOMEMALLOC.

But this oversize path still calls __kmem_cache_alloc_bulk() with the
raw gfp flags. If the calling context is pfmemalloc-enabled (e.g.,
during swap or network operations), could pfmemalloc objects be
allocated into the sheaf? Those objects would then be returned via
kmem_cache_alloc_from_sheaf() to potentially non-pfmemalloc callers.

Similar issue exists in kmem_cache_prefill_sheaf() for the oversize
path at the function's start, though that code wasn't modified by this
patch.
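To make the concern concrete, here's a tiny userspace model of the two-step
fallback that __prefill_sheaf_pfmemalloc() performs and that the oversize path
currently skips.  This is not kernel code: the gfp bit values, the
gfp_pfmemalloc_allowed() check, and the refill_sheaf() stub are all simplified
stand-ins, and prefill_fallback() just mirrors the control flow quoted above.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel gfp machinery. */
#define __GFP_NOMEMALLOC 0x1u
#define __GFP_MEMALLOC   0x2u
typedef unsigned int gfp_t;

struct slab_sheaf {
	bool pfmemalloc;
};

/* Simplified model: reserves may be used only if __GFP_MEMALLOC is set
 * and __GFP_NOMEMALLOC does not override it. */
static bool gfp_pfmemalloc_allowed(gfp_t gfp)
{
	return (gfp & __GFP_MEMALLOC) && !(gfp & __GFP_NOMEMALLOC);
}

/* Stub allocator: when only the reserves have memory (reserves_only),
 * the refill fails unless the flags permit dipping into them. */
static int refill_sheaf(struct slab_sheaf *sheaf, gfp_t gfp,
			bool reserves_only)
{
	if (reserves_only && !gfp_pfmemalloc_allowed(gfp))
		return -1;	/* -ENOMEM */
	return 0;
}

/* Mirrors __prefill_sheaf_pfmemalloc(): try with __GFP_NOMEMALLOC first;
 * only if that fails and the context allows reserves, retry without the
 * override and mark the sheaf so it is flushed back to slabs on return. */
static int prefill_fallback(struct slab_sheaf *sheaf, gfp_t gfp,
			    bool reserves_only)
{
	int ret = refill_sheaf(sheaf, gfp | __GFP_NOMEMALLOC, reserves_only);

	if (!ret || !gfp_pfmemalloc_allowed(gfp))
		return ret;

	ret = refill_sheaf(sheaf, gfp, reserves_only);
	sheaf->pfmemalloc = true;
	return ret;
}
```

The point of the question is that the oversize path calls the bulk allocator
with the raw gfp directly, i.e. neither the __GFP_NOMEMALLOC first attempt nor
the pfmemalloc marking on fallback happens there.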

>  			return -ENOMEM;
>  		}
>  		sheaf->size = sheaf->capacity;

[ ... ]


