Message-ID: <6531da5d-aa50-4119-b42e-3c22dc410671@intel.com>
Date: Thu, 15 Jan 2026 13:39:27 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>,
Aaron Tomlin <atomlin@...mlin.com>, oleg@...hat.com,
akpm@...ux-foundation.org, gregkh@...uxfoundation.org, brauner@...nel.org,
mingo@...nel.org
Cc: neelx@...e.com, sean@...e.io, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
"x86@...nel.org" <x86@...nel.org>
Subject: Re: [v3 PATCH 1/1] fs/proc: Expose mm_cpumask in /proc/[pid]/status

On 1/15/26 13:19, David Hildenbrand (Red Hat) wrote:
> On 1/15/26 21:54, Aaron Tomlin wrote:
>> This patch introduces two new fields to /proc/[pid]/status to display the
>> set of CPUs, representing the CPU affinity of the process's active
>> memory context, in both mask and list format: "Cpus_active_mm" and
>> "Cpus_active_mm_list". The mm_cpumask is primarily used for TLB and
>> cache synchronisation.
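
For reference, this presumably boils down to printing mm_cpumask() from
fs/proc/array.c the same way the existing Cpus_allowed /
Cpus_allowed_list lines are printed today. A rough, untested sketch of
that shape (the helper name is invented here, not taken from the patch):

#include <linux/cpumask.h>
#include <linux/mm.h>
#include <linux/seq_file.h>

/* Hypothetical helper, modeled on task_cpus_allowed(): */
static void task_mm_cpumask(struct seq_file *m, struct mm_struct *mm)
{
	seq_printf(m, "Cpus_active_mm:\t%*pb\n",
		   cpumask_pr_args(mm_cpumask(mm)));
	seq_printf(m, "Cpus_active_mm_list:\t%*pbl\n",
		   cpumask_pr_args(mm_cpumask(mm)));
}
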
I don't think this is the kind of thing we want to expose as ABI. It's
too deep of an implementation detail. Any meaning derived from it could
also change on a whim.

For instance, we've changed the rules about when CPUs are put in or
taken out of mm_cpumask() over time. I think the rules might have even
depended on the idle driver that your system was using at one time. I
think Rik also just changed some rules around it in his INVLPGB patches.

I'm not denying how valuable this kind of information might be. I just
don't think it's generally useful enough to justify an ABI that we need
to maintain forever. Tracing seems like a much more appropriate way to
get the data you are after than new ABI.

Can you get the info that you're after with kprobes? Or new tracepoints?
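
For illustration only, here is a rough, untested sketch of the kprobe
route: a tiny module that dumps mm_cpumask() for a chosen pid whenever
that task enters flush_tlb_mm_range(). The probed symbol is x86-specific
and just a guess (it may not be probe-able on every config), and a real
tracepoint or a bpftrace one-liner would probably be nicer in practice:

#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/cpumask.h>

static int target_pid;
module_param(target_pid, int, 0644);
MODULE_PARM_DESC(target_pid, "pid whose mm_cpumask gets dumped");

/* Runs in the probed task's context; just print the mask and move on. */
static int dump_mm_cpumask(struct kprobe *p, struct pt_regs *regs)
{
	if (current->pid == target_pid && current->mm)
		pr_info("pid %d mm_cpumask: %*pbl\n", current->pid,
			cpumask_pr_args(mm_cpumask(current->mm)));
	return 0;
}

static struct kprobe kp = {
	.symbol_name	= "flush_tlb_mm_range",	/* x86; pick your own hook */
	.pre_handler	= dump_mm_cpumask,
};

static int __init mm_cpumask_probe_init(void)
{
	return register_kprobe(&kp);
}

static void __exit mm_cpumask_probe_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(mm_cpumask_probe_init);
module_exit(mm_cpumask_probe_exit);
MODULE_LICENSE("GPL");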