Message-ID: <20250625191108.1646208-6-sshegde@linux.ibm.com>
Date: Thu, 26 Jun 2025 00:41:04 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, tglx@...utronix.de, yury.norov@...il.com,
maddy@...ux.ibm.com
Cc: sshegde@...ux.ibm.com, vschneid@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, kprateek.nayak@....com, huschle@...ux.ibm.com,
srikar@...ux.ibm.com, linux-kernel@...r.kernel.org,
christophe.leroy@...roup.eu, linuxppc-dev@...ts.ozlabs.org,
gregkh@...uxfoundation.org
Subject: [RFC v2 5/9] sched/rt: Don't select CPU marked as avoid for wakeup and push/pull rt task
- During wakeup, don't select a CPU if it is marked as avoid.
- Don't pull a task onto a CPU that is marked as avoid.
- Don't push a task to a CPU that is marked as avoid.

A standalone sketch of this selection rule is included below.
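A minimal standalone sketch of the rule described above, for illustration only: cpu_avoid() and avoid_mask here are local stand-ins for the helper and per-CPU avoid state introduced earlier in this series, and the loop is a simplified model of candidate selection rather than the kernel's find_lowest_rq() path.

/*
 * Userspace sketch (not kernel code) of the selection rule this patch
 * applies in the RT wakeup and push paths: a candidate CPU is rejected
 * whenever it is marked as avoid.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

static unsigned long avoid_mask;	/* bit N set => CPU N is marked avoid */

static bool cpu_avoid(int cpu)
{
	return avoid_mask & (1UL << cpu);
}

/*
 * Pick the first candidate CPU that is neither the current CPU nor
 * marked avoid; return -1 if no such CPU exists (mirroring the
 * "cpu == -1 || cpu == rq->cpu || cpu_avoid(cpu)" bail-outs in the patch).
 */
static int pick_target_cpu(int this_cpu)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (cpu == this_cpu || cpu_avoid(cpu))
			continue;
		return cpu;
	}
	return -1;
}

int main(void)
{
	avoid_mask = (1UL << 0) | (1UL << 2);	/* mark CPU0 and CPU2 as avoid */

	printf("target from CPU1: %d\n", pick_target_cpu(1));	/* prints 3 */
	printf("target from CPU3: %d\n", pick_target_cpu(3));	/* prints 1 */
	return 0;
}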
Signed-off-by: Shrikanth Hegde <sshegde@...ux.ibm.com>
---
kernel/sched/rt.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 15d5855c542c..fd9df6f46135 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1549,6 +1549,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 	if (!test && target != -1 && !rt_task_fits_capacity(p, target))
 		goto out_unlock;
+	if (cpu_avoid(target))
+		goto out_unlock;
 
 	/*
 	 * Don't bother moving it if the destination CPU is
 	 * not running a lower priority task.
@@ -1871,7 +1873,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 	for (tries = 0; tries < RT_MAX_TRIES; tries++) {
 		cpu = find_lowest_rq(task);
 
-		if ((cpu == -1) || (cpu == rq->cpu))
+		if ((cpu == -1) || (cpu == rq->cpu) || cpu_avoid(cpu))
 			break;
 
 		lowest_rq = cpu_rq(cpu);
@@ -1969,7 +1971,7 @@ static int push_rt_task(struct rq *rq, bool pull)
 			return 0;
 
 		cpu = find_lowest_rq(rq->curr);
-		if (cpu == -1 || cpu == rq->cpu)
+		if (cpu == -1 || cpu == rq->cpu || cpu_avoid(cpu))
 			return 0;
 
 		/*
@@ -2232,6 +2234,9 @@ static void pull_rt_task(struct rq *this_rq)
 	if (likely(!rt_overload_count))
 		return;
 
+	if (cpu_avoid(this_rq->cpu))
+		return;
+
 	/*
 	 * Match the barrier from rt_set_overloaded; this guarantees that if we
 	 * see overloaded we must also see the rto_mask bit.
--
2.43.0