Message-ID: <ac50181c-8a9d-43b8-9597-4d6d01f31f81@kernel.org>
Date: Tue, 30 Dec 2025 22:16:30 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Aaron Tomlin <atomlin@...mlin.com>, oleg@...hat.com,
akpm@...ux-foundation.org, gregkh@...uxfoundation.org, brauner@...nel.org,
mingo@...nel.org
Cc: sean@...e.io, linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [v2 PATCH 1/1] fs/proc: Expose mm_cpumask in /proc/[pid]/status
On 12/26/25 22:14, Aaron Tomlin wrote:
> This patch introduces two new fields to /proc/[pid]/status,
> "Cpus_active_mm" and "Cpus_active_mm_list", which display the set of
> CPUs on which the process's memory context is active, in mask and list
> format respectively. The mm_cpumask is primarily used for TLB and
> cache synchronisation.
>
> Exposing this information allows userspace to easily describe the
> relationship between CPUs where a memory descriptor is "active" and the
> CPUs where the thread is allowed to execute. The primary intent is to
> provide visibility into the "memory footprint" across CPUs, which is
> invaluable for debugging performance issues related to IPI storms and
> TLB shootdowns in large-scale NUMA systems. The CPU affinity sets the
> boundary (where the task may run); the mm_cpumask records where the mm
> has actually been active; the two complement each other.
>
> Frequent mm_cpumask changes may indicate instability in placement
> policies or excessive task migration overhead.
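
For context, a trivial (untested) userspace consumer of the proposed
fields could look like the sketch below; it only assumes the field names
quoted above:

/*
 * Untested sketch: dump the proposed Cpus_active_mm* lines from
 * /proc/self/status alongside the existing Cpus_allowed* ones.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/self/status", "r");
	char line[4096];

	if (!f) {
		perror("fopen");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "Cpus_allowed", 12) ||
		    !strncmp(line, "Cpus_active_mm", 14))
			fputs(line, stdout);
	}

	fclose(f);
	return 0;
}
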
Just a note: I have a faint recollection that there are some
arch-specific oddities around mm_cpumask().

In particular, some architectures never clear CPUs from the mask, while
others (e.g., x86) clear them once the TLB for them is clean.

I'd assume that all architectures at least set a CPU in the mask once
the MM has ever run on it. But are we sure about that?

$ git grep mm_cpumask | grep m68k
gives me no results, and I don't see any common code that would ever
set a CPU in the mm_cpumask.
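
To illustrate what I mean (simplified sketch only, kernel context, not
the actual per-arch code), the x86-style lifecycle is roughly:

/*
 * Two halves of the pattern; the real code lives in the per-arch
 * context-switch and TLB-flush paths.
 */
static inline void sketch_switch_mm(struct mm_struct *next, unsigned int cpu)
{
	/* this CPU may now hold TLB entries for next */
	cpumask_set_cpu(cpu, mm_cpumask(next));
}

static inline void sketch_tlb_clean(struct mm_struct *mm, unsigned int cpu)
{
	/* x86-style: drop the CPU again once its TLB is known clean */
	cpumask_clear_cpu(cpu, mm_cpumask(mm));
}

On architectures that never do the second step, the proposed
Cpus_active_mm field would only ever accumulate bits.
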
--
Cheers
David