Message-ID: <20241026072119.GH9767@noisy.programming.kicks-ass.net>
Date: Sat, 26 Oct 2024 09:21:19 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Christoph Lameter (Ampere)" <cl@...two.org>
Cc: tglx@...utronix.de, axboe@...nel.dk, linux-kernel@...r.kernel.org,
mingo@...hat.com, dvhart@...radead.org, dave@...olabs.net,
andrealmeid@...lia.com, Andrew Morton <akpm@...ux-foundation.org>,
urezki@...il.com, hch@...radead.org, lstoakes@...il.com,
Arnd Bergmann <arnd@...db.de>, linux-api@...r.kernel.org,
linux-mm@...ck.org, linux-arch@...r.kernel.org,
malteskarupke@....de
Subject: Re: [PATCH v1 11/14] futex: Implement FUTEX2_NUMA
On Fri, Oct 25, 2024 at 12:36:28PM -0700, Christoph Lameter (Ampere) wrote:
>
> Sorry, saw this after the other email.
>
> On Fri, 25 Oct 2024, Peter Zijlstra wrote:
>
> > > Could we follow NUMA policies like with other metadata allocations during
> > > system call processing?
> >
> > I had a quick look at this, and since the mempolicy stuff is per vma,
> > and we don't have the vma, this is going to be terribly expensive --
> > mmap_lock and all that.
>
> There is a memory policy for the task as a whole, in current->mempolicy,
> that is used for slab allocations and other allocations that are not vma
> bound. Use that.
>
> You can get a node number that follows the current task mempolicy by
> calling mempolicy_slab_node(), and then keep using that node.
I'll look into the per-task thing, which I'm hoping means per-process;
we need something that is consistent mm-wide.
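
Something like the below is roughly what I have in mind -- a completely
untested sketch, where futex_mm_node() and the mm->futex_node field are
made-up names, and mm->futex_node is assumed to start out as
NUMA_NO_NODE:

	/*
	 * Sketch only: pick a NUMA node once per mm, following the task
	 * mempolicy (mm->futex_node is a hypothetical field, assumed to
	 * be initialised to NUMA_NO_NODE at mm creation).
	 */
	static int futex_mm_node(struct mm_struct *mm)
	{
		int node = READ_ONCE(mm->futex_node);

		if (node == NUMA_NO_NODE) {
			int newnode = mempolicy_slab_node();

			/* first store wins; every thread sees the same node */
			node = cmpxchg(&mm->futex_node, NUMA_NO_NODE, newnode);
			if (node == NUMA_NO_NODE)
				node = newnode;
		}

		return node;
	}

That would at least hand every thread of the process the same node,
which is the consistency we need.
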
But since futexes play in the address space, I was really rather
thinking we ought to use the vma policy.