Message-ID: <17a999f3-7e6b-17d4-2caf-4912221894ec@gentwo.org>
Date: Fri, 30 May 2025 12:05:20 -0700 (PDT)
From: "Christoph Lameter (Ampere)" <cl@...two.org>
To: Vlastimil Babka <vbabka@...e.cz>
cc: David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Harry Yoo <harry.yoo@...cle.com>, Matthew Wilcox <willy@...radead.org>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm, slab: support NUMA policy for large kmalloc
On Thu, 29 May 2025, Vlastimil Babka wrote:
> On 5/29/25 16:57, Christoph Lameter (Ampere) wrote:
> > On Thu, 29 May 2025, Vlastimil Babka wrote:
> >
> >> The slab allocator observes the task's numa policy in various places
> >> such as allocating slab pages. Large kmalloc allocations currently do
> >> not, which seems to be an unintended omission. It is simple to correct
> >> that, so make ___kmalloc_large_node() behave the same way as
> >> alloc_slab_page().
> >
> > Large kmalloc allocation lead to the use of the page allocator which
> > implements the NUMA policies for the allocations.
> >
> > This patch is not necessary.
>
> I'm confused, as that's only true depending on which page allocator entry
> point you use. AFAICS before this series, it's using
> alloc_pages_node_noprof() which only does
>
>
> 	if (nid == NUMA_NO_NODE)
> 		nid = numa_mem_id();
>
> and no mempolicies.
That is a bug.
> I see this patch as analogical to your commit 1941b31482a6 ("Reenable NUMA
> policy support in the slab allocator")
>
> Am I missing something?
The page allocator has its own NUMA support.
The patch to reenable NUMA support dealt with an issue within the
allocator where the memory policies were ignored.
It seems that the error was repeated for large kmalloc allocations.
Instead of respecting memory allocation policies, the allocation is forced
to be local to the node.
Forcing the allocation to a specific node is possible with __GFP_THISNODE.
The default needs to be following memory policies.