Message-ID: <20250311152045.AqadBOG9@linutronix.de>
Date: Tue, 11 Mar 2025 16:20:45 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org,
André Almeida <andrealmeid@...lia.com>,
Darren Hart <dvhart@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Valentin Schneider <vschneid@...hat.com>,
Waiman Long <longman@...hat.com>
Subject: Re: [PATCH v9 00/11] futex: Add support task local hash maps.
On 2025-03-03 11:54:16 [+0100], Peter Zijlstra wrote:
> After that I rebased my FUTEX2_NUMA patch on top of all this and added
> a new FUTEX2_MPOL, which is something Christoph Lameter asked for a
> while back, and something we can now actually do sanely, since we have
> lockless vma lookups working.
I'm going to keep the changes mostly as-is (except for the few compile
fallouts). One thing I wanted to mention, in case someone has a simple
idea: we have this now:
|struct {
| unsigned long hashmask;
| unsigned int hashshift;
| struct futex_hash_bucket *queues[MAX_NUMNODES];
| } __futex_data __read_mostly __aligned(2*sizeof(long));
This MAX_NUMNODES will be set to 1 << 10 due to MAXSMP, for instance on
Debian. This in turn leads to an 8KiB queues array which is largely
unused on a simple machine that has no NUMA / just one node. I don't
have access to a machine with more than 4 nodes, so I _assumed_ that is
the limit.
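
For reference, the only simple-ish direction I can think of (untested
sketch, the allocation site and error handling are made up by me, not
taken from the series) would be to make queues a pointer sized from
nr_node_ids at boot instead of a MAX_NUMNODES array:

static struct {
	unsigned long hashmask;
	unsigned int hashshift;
	/* nr_node_ids entries instead of MAX_NUMNODES, allocated at boot */
	struct futex_hash_bucket **queues;
} __futex_data __read_mostly __aligned(2*sizeof(long));

static int __init futex_init(void)
{
	/* nr_node_ids is final long before the initcalls run */
	__futex_data.queues = kcalloc(nr_node_ids,
				      sizeof(*__futex_data.queues),
				      GFP_KERNEL);
	if (!__futex_data.queues)
		panic("futex: unable to allocate per-node queue pointers");

	/* per-node hash bucket setup would continue here as before */
	return 0;
}

That would keep the single-node case at one pointer instead of 8KiB, at
the price of one more indirection on the lookup path.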
Anyway. I'm also not sure about the corner cases, say we have that many
nodes (1024) but just two CPUs. That would lead to roundup_pow_of_two(0)
in futex_init().
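
To make that concrete, here is a small userspace-only reproduction. The
cpus / nodes division is just my guess at how a per-node size might end
up as 0; the helpers only mirror what include/linux/log2.h does for the
non-constant case:

/*
 * With a MAXSMP-style config, MAX_NUMNODES (1024) can exceed the number
 * of CPUs, so a hypothetical per-node sizing of cpus / nodes becomes 0.
 * The kernel's __roundup_pow_of_two(n) is "1UL << fls_long(n - 1)", so
 * n == 0 turns into a shift by BITS_PER_LONG, i.e. undefined behaviour.
 */
#include <stdio.h>

/* rough userspace stand-in for the kernel's fls_long() */
static unsigned long fls_long(unsigned long x)
{
	return x ? 8 * sizeof(x) - __builtin_clzl(x) : 0;
}

/* mirrors __roundup_pow_of_two(); only well-defined for n >= 1 */
static unsigned long roundup_pow_of_two(unsigned long n)
{
	return 1UL << fls_long(n - 1);
}

int main(void)
{
	unsigned long cpus = 2, nodes = 1024;		/* the scenario above */
	unsigned long per_node = cpus / nodes;		/* 0 -- hypothetical sizing */
	unsigned long shift = fls_long(per_node - 1);	/* 64 on 64-bit */

	printf("per_node = %lu, shift = %lu -> 1UL << %lu is undefined\n",
	       per_node, shift, shift);

	/* clamping to at least one bucket per node avoids the problem */
	printf("clamped: roundup_pow_of_two(max(per_node, 1)) = %lu\n",
	       roundup_pow_of_two(per_node ? per_node : 1));
	return 0;
}

The clamped variant evaluates to 1, i.e. at least one bucket per node,
which is probably what such a box should get anyway.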
> WDYT?
Sebastian