Message-ID: <142aeeb5-d6d8-84ca-e7a2-ba564185c565@gentwo.org>
Date: Fri, 25 Apr 2025 10:31:45 -0700 (PDT)
From: "Christoph Lameter (Ampere)" <cl@...two.org>
To: Vlastimil Babka <vbabka@...e.cz>
cc: Suren Baghdasaryan <surenb@...gle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Harry Yoo <harry.yoo@...cle.com>, Uladzislau Rezki <urezki@...il.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
maple-tree@...ts.infradead.org
Subject: Re: [PATCH v4 1/9] slab: add opt-in caching layer of percpu
 sheaves

On Fri, 25 Apr 2025, Vlastimil Babka wrote:

> @@ -4195,7 +4793,11 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
>  	if (unlikely(object))
>  		goto out;
>
> -	object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
> +	if (s->cpu_sheaves && node == NUMA_NO_NODE)
> +		object = alloc_from_pcs(s, gfpflags);

The node to use is only determined in __slab_alloc_node(), based on the
memory policy etc. NUMA_NO_NODE allocations can be redirected by memory
policies, and this check bypasses that redirection.
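
To illustrate, a rough sketch (not a suggestion for the final code) that
gates the fast path on the policy-selected node, using the existing
mempolicy_slab_node() helper that SLUB already uses for partial slab
selection. A !CONFIG_NUMA build would need the usual guard, and the extra
call may well be too expensive for the fast path:

	/*
	 * Sketch only: ask the mempolicy for the target node first and
	 * take the sheaf fast path only when the allocation stays on the
	 * local node, so MPOL_BIND/MPOL_INTERLEAVE are still honored.
	 */
	if (s->cpu_sheaves && node == NUMA_NO_NODE &&
	    mempolicy_slab_node() == numa_mem_id())
		object = alloc_from_pcs(s, gfpflags);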
> @@ -4653,7 +5483,10 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
>  	memcg_slab_free_hook(s, slab, &object, 1);
>  	alloc_tagging_slab_free_hook(s, slab, &object, 1);
>
> -	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
> +	if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false)))
> +		return;
> +
> +	if (!s->cpu_sheaves || !free_to_pcs(s, object))
>  		do_slab_free(s, slab, object, object, 1, addr);
>  }

We free to pcs even if the object is remote?
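
For illustration only, one way to keep remote objects out of the sheaves
would be to check the slab's node via the existing slab_nid() helper and
let remote frees fall through to do_slab_free() as before. Whether such a
check is worth it on the free fast path is a separate question:

	/*
	 * Sketch, not the patch's code: put an object into the percpu
	 * sheaf only when its backing slab sits on the local node;
	 * remote frees go straight to do_slab_free().
	 */
	if (!s->cpu_sheaves || slab_nid(slab) != numa_mem_id() ||
	    !free_to_pcs(s, object))
		do_slab_free(s, slab, object, object, 1, addr);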