Message-ID: <20250923175447.116782-2-longman@redhat.com>
Date: Tue, 23 Sep 2025 13:54:47 -0400
From: Waiman Long <longman@...hat.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
Jonathan Corbet <corbet@....net>
Cc: linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org,
linux-doc@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Catalin Marinas <catalin.marinas@....com>,
Nico Pache <npache@...hat.com>,
Phil Auld <pauld@...hat.com>,
John Coleman <jocolema@...hat.com>,
Waiman Long <longman@...hat.com>
Subject: [PATCH 2/2] fs/proc: Show the content of task->user_cpus_ptr in /proc/<pid>/status
The task->user_cpus_ptr field was introduced by commit b90ca8badbd1
("sched: Introduce task_struct::user_cpus_ptr to track requested
affinity") to keep track of the user-requested CPU affinity. Since
commit da019032819a ("sched: Enforce user requested affinity"),
user_cpus_ptr persistently affects how cpus_allowed is set. It
therefore makes sense to let users see whether a user_cpus_ptr has
previously been set, so they can act on it instead of being surprised
by its effect.

Add new "Cpus_user" and "Cpus_user_list" fields to the
/proc/<pid>/status output via task_cpus_allowed(), since the presence
of user_cpus_ptr affects the cpus_allowed cpumask.
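With the patch applied, the new fields can be inspected directly from
the shell; a minimal sketch (the actual mask values depend on the
machine, and Cpus_user/Cpus_user_list are empty unless
sched_setaffinity(2) was called on the task earlier):

```shell
# Print all CPU-affinity fields for the current shell. On a patched
# kernel, Cpus_user and Cpus_user_list follow Cpus_allowed_list.
grep '^Cpus_' /proc/self/status
```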
Signed-off-by: Waiman Long <longman@...hat.com>
---
 Documentation/filesystems/proc.rst | 2 ++
 fs/proc/array.c                    | 9 +++++++++
 2 files changed, 11 insertions(+)
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 2971551b7235..fb9e7753010c 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -311,6 +311,8 @@ It's slow but very precise.
  SpeculationIndirectBranch   indirect branch speculation mode
  Cpus_allowed                mask of CPUs on which this process may run
  Cpus_allowed_list           Same as previous, but in "list format"
+ Cpus_user                   mask of user requested CPUs from sched_setaffinity(2)
+ Cpus_user_list              Same as previous, but in "list format"
  Mems_allowed                mask of memory nodes allowed to this process
  Mems_allowed_list           Same as previous, but in "list format"
  voluntary_ctxt_switches     number of voluntary context switches
diff --git a/fs/proc/array.c b/fs/proc/array.c
index d6a0369caa93..30ceab935e13 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -405,10 +405,19 @@ static inline void task_context_switch_counts(struct seq_file *m,
 
 static void task_cpus_allowed(struct seq_file *m, struct task_struct *task)
 {
+	cpumask_t *user_cpus = task->user_cpus_ptr;
+
 	seq_printf(m, "Cpus_allowed:\t%*pb\n",
 		   cpumask_pr_args(&task->cpus_mask));
 	seq_printf(m, "Cpus_allowed_list:\t%*pbl\n",
 		   cpumask_pr_args(&task->cpus_mask));
+
+	if (user_cpus) {
+		seq_printf(m, "Cpus_user:\t%*pb\n", cpumask_pr_args(user_cpus));
+		seq_printf(m, "Cpus_user_list:\t%*pbl\n", cpumask_pr_args(user_cpus));
+	} else {
+		seq_puts(m, "Cpus_user:\nCpus_user_list:\n");
+	}
 }
 
 static inline void task_core_dumping(struct seq_file *m, struct task_struct *task)
--
2.51.0