Message-ID: <Z3nCuQwP4qA66bcd@pavilion.home>
Date: Sun, 5 Jan 2025 00:22:33 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Will Deacon <will@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>,
Oliver Upton <oliver.upton@...ux.dev>,
Ard Biesheuvel <ardb@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>,
"Paul E. McKenney" <paulmck@...nel.org>,
Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Boqun Feng <boqun.feng@...il.com>,
Zqiang <qiang.zhang1211@...il.com>,
Uladzislau Rezki <urezki@...il.com>, rcu@...r.kernel.org,
Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 10/19] sched,arm64: Handle CPU isolation on last resort fallback rq selection

On Fri, Jan 03, 2025 at 03:27:03PM +0000, Will Deacon wrote:
> On Wed, Dec 11, 2024 at 04:40:23PM +0100, Frederic Weisbecker wrote:
> > +const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
> > +{
> > +	if (!static_branch_unlikely(&arm64_mismatched_32bit_el0))
> > +		return housekeeping_cpumask(HK_TYPE_TICK);
> > +
> > +	if (!is_compat_thread(task_thread_info(p)))
> > +		return housekeeping_cpumask(HK_TYPE_TICK);
> > +
> > +	return system_32bit_el0_cpumask();
> > +}
>
> I think this is correct, but damn, what we really want to ask for is the
> intersection of task_cpu_possible_mask(p) and
> housekeeping_cpumask(HK_TYPE_TICK). It's a shame to duplicate the logic
> of task_cpu_possible_mask() here just because we don't want to allocate
> a temporary mask.
Yeah I know :-/
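
(Editorial sketch, not from the original thread: the "intersection" Will
mentions would need a scratch cpumask, roughly as below. The helper name
and its caller-supplied 'scratch' argument are hypothetical; cpumask_and(),
task_cpu_possible_mask() and housekeeping_cpumask() are the real kernel
APIs. As the comment quoted in the patch below notes, select_fallback_rq()
can be reached while only holding p->pi_lock, so it cannot allocate a
cpumask_var_t here, which is why the patch avoids materializing the
intersection at all.)

	#include <linux/cpumask.h>
	#include <linux/mmu_context.h>
	#include <linux/sched/isolation.h>

	/*
	 * Hypothetical sketch: compute the intersection of the CPUs the
	 * task may ever run on and the tick-housekeeping CPUs. Returns
	 * false if the intersection is empty. 'scratch' must be provided
	 * by the caller because this path must not allocate.
	 */
	static bool task_fallback_intersection(struct task_struct *p,
					       struct cpumask *scratch)
	{
		return cpumask_and(scratch, task_cpu_possible_mask(p),
				   housekeeping_cpumask(HK_TYPE_TICK));
	}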
>
> Maybe we could have a helper to consolidate things a little?
>
> static inline const struct cpumask *
> __task_cpu_possible_mask(struct task_struct *p, const struct cpumask *mask)
> {
> 	if (!static_branch_unlikely(&arm64_mismatched_32bit_el0))
> 		return mask;
>
> 	if (!is_compat_thread(task_thread_info(p)))
> 		return mask;
>
> 	return system_32bit_el0_cpumask();
> }
>
> Then we could call that from both task_cpu_possible_mask() and
> task_cpu_fallback_mask(), passing 'cpu_possible_mask' and
> housekeeping_cpumask(HK_TYPE_TICK) respectively for the 'mask' argument?
Good point! How does the following updated version look?
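
(Editorial example, assuming a hypothetical 8-CPU machine: booted with

	nohz_full=1-7

housekeeping_cpumask(HK_TYPE_TICK) contains only CPU 0, so the fallback
below reaffines a task whose affinity mask has become fully offline or
disallowed to CPU 0 instead of spreading it across the isolated CPUs.)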
---
From: Frederic Weisbecker <frederic@...nel.org>
Date: Fri, 27 Sep 2024 00:48:59 +0200
Subject: [PATCH 2/2] sched,arm64: Handle CPU isolation on last resort fallback rq selection

When a kthread or any other task has an affinity mask that is fully
offline or disallowed, the scheduler reaffines the task to all possible
CPUs as a last resort.

This default decision doesn't interact well with nohz_full CPUs that
are part of the possible cpumask but don't want to be disturbed by
unbound kthreads or even detached pinned user tasks.

Make the fallback affinity setting aware of nohz_full.

Suggested-by: Michal Hocko <mhocko@...e.com>
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
---
 arch/arm64/include/asm/cpufeature.h  |  1 +
 arch/arm64/include/asm/mmu_context.h | 14 +++++++++++---
 arch/arm64/kernel/cpufeature.c       |  5 +++++
 include/linux/mmu_context.h          |  1 +
 kernel/sched/core.c                  |  2 +-
 5 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8b4e5a3cd24c..cac5efc836c0 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -671,6 +671,7 @@ static inline bool supports_clearbhb(int scope)
 }
 
 const struct cpumask *system_32bit_el0_cpumask(void);
+const struct cpumask *fallback_32bit_el0_cpumask(void);
 
 DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
 static inline bool system_supports_32bit_el0(void)
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 48b3d9553b67..0dbe3b29049b 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -271,18 +271,26 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 }
 
 static inline const struct cpumask *
-task_cpu_possible_mask(struct task_struct *p)
+__task_cpu_possible_mask(struct task_struct *p, const struct cpumask *mask)
 {
 	if (!static_branch_unlikely(&arm64_mismatched_32bit_el0))
-		return cpu_possible_mask;
+		return mask;
 
 	if (!is_compat_thread(task_thread_info(p)))
-		return cpu_possible_mask;
+		return mask;
 
 	return system_32bit_el0_cpumask();
 }
+
+static inline const struct cpumask *
+task_cpu_possible_mask(struct task_struct *p)
+{
+	return __task_cpu_possible_mask(p, cpu_possible_mask);
+}
 
 #define task_cpu_possible_mask task_cpu_possible_mask
 
+const struct cpumask *task_cpu_fallback_mask(struct task_struct *p);
+
 void verify_cpu_asid_bits(void);
 void post_ttbr_update_workaround(void);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3c87659c14db..a983e8660987 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1642,6 +1642,11 @@ const struct cpumask *system_32bit_el0_cpumask(void)
 	return cpu_possible_mask;
 }
 
+const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
+{
+	return __task_cpu_possible_mask(p, housekeeping_cpumask(HK_TYPE_TICK));
+}
+
 static int __init parse_32bit_el0_param(char *str)
 {
 	allow_mismatched_32bit_el0 = true;
diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h
index bbaec80c78c5..ac01dc4eb2ce 100644
--- a/include/linux/mmu_context.h
+++ b/include/linux/mmu_context.h
@@ -24,6 +24,7 @@ static inline void leave_mm(void) { }
 #ifndef task_cpu_possible_mask
 # define task_cpu_possible_mask(p)	cpu_possible_mask
 # define task_cpu_possible(cpu, p)	true
+# define task_cpu_fallback_mask(p)	housekeeping_cpumask(HK_TYPE_TICK)
 #else
 # define task_cpu_possible(cpu, p)	cpumask_test_cpu((cpu), task_cpu_possible_mask(p))
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 95e40895a519..233b50b0e123 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3534,7 +3534,7 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
 			 *
 			 * More yuck to audit.
 			 */
-			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
+			do_set_cpus_allowed(p, task_cpu_fallback_mask(p));
 			state = fail;
 			break;
 		case fail:
--
2.46.0
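
(Editorial note: on architectures that don't provide their own
task_cpu_possible_mask(), the new generic task_cpu_fallback_mask()
define keeps today's behaviour whenever no CPU isolation is configured,
because housekeeping_cpumask() then falls back to cpu_possible_mask.
A simplified sketch of that helper, modeled on kernel/sched/isolation.c;
details may differ from the exact source:)

	const struct cpumask *housekeeping_cpumask(enum hk_type type)
	{
		/*
		 * The static key is flipped once nohz_full=/isolcpus=
		 * set up a housekeeping mask; otherwise every possible
		 * CPU counts as a housekeeper.
		 */
		if (static_branch_unlikely(&housekeeping_overridden))
			if (housekeeping.flags & BIT(type))
				return housekeeping.cpumasks[type];
		return cpu_possible_mask;
	}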