Message-ID: <20231004083648.GI27267@noisy.programming.kicks-ass.net>
Date: Wed, 4 Oct 2023 10:36:48 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <longman@...hat.com>
Cc: Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org, Phil Auld <pauld@...hat.com>,
Brent Rowsell <browsell@...hat.com>,
Peter Hunt <pehunt@...hat.com>,
Florian Weimer <fweimer@...hat.com>
Subject: Re: [PATCH v4] sched/core: Use zero length to reset cpumasks in
sched_setaffinity()
On Tue, Oct 03, 2023 at 04:57:35PM -0400, Waiman Long wrote:
> Since commit 8f9ea86fdf99 ("sched: Always preserve the user requested
> cpumask"), user provided CPU affinity via sched_setaffinity(2) is
> preserved even if the task is being moved to a different cpuset. However,
> that affinity is also being inherited by any subsequently created child
> processes which may not want or be aware of that affinity.
>
> One way to solve this problem is to provide a way to back off from that
> user provided CPU affinity. This patch implements such a scheme by
> using an input cpumask length of 0 to signal a reset of the cpumasks
> to the default as allowed by the current cpuset. A non-NULL cpumask
> should still be provided to avoid problems with older kernels.
>
> If sched_setaffinity(2) has been called previously to set a user
> supplied cpumask, a value of 0 will be returned to indicate success.
> Otherwise, an error value of -EINVAL will be returned.
>
> We may have to update the sched_setaffinity(2) manpage to document
> this new side effect of passing in an input length of 0.
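For illustration (not part of the quoted patch), the proposed zero-length
reset could be invoked from userspace roughly as below; the raw syscall is
used so the sketch does not depend on how a particular glibc wrapper treats
the size argument:

/* Illustrative only: reset affinity via the proposed len == 0 interface.
 * A non-NULL mask is still passed, per the quoted changelog. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t mask;

	memset(&mask, 0, sizeof(mask));	/* non-NULL mask, contents ignored */

	/* len == 0 requests a reset to the cpuset default; per the quoted
	 * changelog this succeeds only if a user mask was previously set,
	 * otherwise it fails with EINVAL. */
	if (syscall(SYS_sched_setaffinity, 0, 0, &mask) != 0)
		perror("sched_setaffinity(len=0)");

	return 0;
}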
Bah.. while this is less horrible than some of the previous hacks, I
still think an all-set mask is the sanest option.
Adding FreeBSD's CPU_FILL() to glibc isn't the hardest thing ever, but
even without that, it's a single memset() away.
Would not the below two patches, one kernel, one glibc, be all it takes?
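For illustration, a minimal sketch of that memset()-based reset (assumes the
kernel patch below is applied; the raw syscall is used so the sketch does not
depend on a particular glibc wrapper):

/* Illustrative sketch of the "all-set mask" reset: a single memset()
 * builds the mask, no new libc macro strictly required. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t set;

	memset(&set, 0xff, sizeof(set));	/* all bits set */

	/* With the kernel patch below, an all-set mask drops the explicit
	 * user mask and falls back to the cpuset-allowed CPUs. */
	if (syscall(SYS_sched_setaffinity, 0, sizeof(set), &set) != 0)
		perror("sched_setaffinity(all-set)");

	return 0;
}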
---
Subject: sched: Allow sched_setaffinity() to re-set the usermask
When userspace provides an all-set cpumask, take that to mean 'no
explicit affinity' and drop the usermask.
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
kernel/sched/core.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 779cdc7969c8..18124bbbb17c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8368,7 +8368,15 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	 */
 	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
 	if (user_mask) {
-		cpumask_copy(user_mask, in_mask);
+		/*
+		 * All-set user cpumask resets affinity and drops the explicit
+		 * user mask.
+		 */
+		cpumask_and(user_mask, in_mask, cpu_possible_mask);
+		if (cpumask_equal(user_mask, cpu_possible_mask)) {
+			kfree(user_mask);
+			user_mask = NULL;
+		}
 	} else if (IS_ENABLED(CONFIG_SMP)) {
 		return -ENOMEM;
 	}
---
Subject: sched: Add CPU_FILL()
Add the CPU_FILL() macros to easily create an all-set cpumask.
FreeBSD also provides this macro with the same semantics.
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
posix/bits/cpu-set.h | 10 ++++++++++
posix/sched.h | 2 ++
2 files changed, 12 insertions(+)
diff --git a/posix/bits/cpu-set.h b/posix/bits/cpu-set.h
index 16037eae30..c65332461f 100644
--- a/posix/bits/cpu-set.h
+++ b/posix/bits/cpu-set.h
@@ -45,6 +45,8 @@ typedef struct
 #if __GNUC_PREREQ (2, 91)
 # define __CPU_ZERO_S(setsize, cpusetp) \
   do __builtin_memset (cpusetp, '\0', setsize); while (0)
+# define __CPU_FILL_S(setsize, cpusetp) \
+  do __builtin_memset (cpusetp, 0xFF, setsize); while (0)
 #else
 # define __CPU_ZERO_S(setsize, cpusetp) \
   do { \
@@ -54,6 +56,14 @@ typedef struct
     for (__i = 0; __i < __imax; ++__i) \
       __bits[__i] = 0; \
   } while (0)
+# define __CPU_FILL_S(setsize, cpusetp) \
+  do { \
+    size_t __i; \
+    size_t __imax = (setsize) / sizeof (__cpu_mask); \
+    __cpu_mask *__bits = (cpusetp)->__bits; \
+    for (__i = 0; __i < __imax; ++__i) \
+      __bits[__i] = ~0UL; \
+  } while (0)
 #endif
 #define __CPU_SET_S(cpu, setsize, cpusetp) \
   (__extension__ \
diff --git a/posix/sched.h b/posix/sched.h
index 9b254ae840..a7f6638353 100644
--- a/posix/sched.h
+++ b/posix/sched.h
@@ -94,6 +94,7 @@ extern int __REDIRECT_NTH (sched_rr_get_interval,
 # define CPU_ISSET(cpu, cpusetp) __CPU_ISSET_S (cpu, sizeof (cpu_set_t), \
                                                 cpusetp)
 # define CPU_ZERO(cpusetp) __CPU_ZERO_S (sizeof (cpu_set_t), cpusetp)
+# define CPU_FILL(cpusetp) __CPU_FILL_S (sizeof (cpu_set_t), cpusetp)
 # define CPU_COUNT(cpusetp) __CPU_COUNT_S (sizeof (cpu_set_t), cpusetp)
 
 # define CPU_SET_S(cpu, setsize, cpusetp) __CPU_SET_S (cpu, setsize, cpusetp)
@@ -101,6 +102,7 @@ extern int __REDIRECT_NTH (sched_rr_get_interval,
 # define CPU_ISSET_S(cpu, setsize, cpusetp) __CPU_ISSET_S (cpu, setsize, \
                                                            cpusetp)
 # define CPU_ZERO_S(setsize, cpusetp) __CPU_ZERO_S (setsize, cpusetp)
+# define CPU_FILL_S(setsize, cpusetp) __CPU_FILL_S (setsize, cpusetp)
 # define CPU_COUNT_S(setsize, cpusetp) __CPU_COUNT_S (setsize, cpusetp)
 
 # define CPU_EQUAL(cpusetp1, cpusetp2) \
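Hypothetical usage, assuming both patches above were applied (CPU_FILL()
does not exist in current glibc, so this is only a sketch of the proposed
interface):

/* Hypothetical usage of the proposed CPU_FILL(): reset back to
 * "no explicit affinity" with an all-set mask. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_FILL(&set);		/* proposed macro: set every bit in the set */
	if (sched_setaffinity(0, sizeof(set), &set) != 0)
		perror("sched_setaffinity");

	return 0;
}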