Message-ID: <20251204175405.1511340-9-srikar@linux.ibm.com>
Date: Thu, 4 Dec 2025 23:23:56 +0530
From: Srikar Dronamraju <srikar@...ux.ibm.com>
To: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
Peter Zijlstra <peterz@...radead.org>
Cc: Ben Segall <bsegall@...gle.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ingo Molnar <mingo@...nel.org>, Juri Lelli <juri.lelli@...hat.com>,
K Prateek Nayak <kprateek.nayak@....com>,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
Mel Gorman <mgorman@...e.de>, Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Shrikanth Hegde <sshegde@...ux.ibm.com>,
Srikar Dronamraju <srikar@...ux.ibm.com>,
Steven Rostedt <rostedt@...dmis.org>,
Swapnil Sapkal <swapnil.sapkal@....com>,
Thomas Huth <thuth@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
virtualization@...ts.linux.dev, Yicong Yang <yangyicong@...ilicon.com>,
Ilya Leoshkevich <iii@...ux.ibm.com>
Subject: [PATCH 08/17] sched/core: Implement CPU soft offline/online

The scheduler already supports CPU online/offline. However, when the
scheduler has to take a CPU out of service only temporarily, the cost
of a full online/offline cycle is too high. Hence this is an attempt
to come up with a soft-offline that looks almost like a regular
offline without actually doing the full offline. Since the CPUs are
unused only temporarily, for a short duration, they continue to be
part of the CPU topology.

During soft-offline, the CPU is marked inactive, i.e. removed from
cpu_active_mask, the CPU's capacity is reduced, and non-pinned tasks
are migrated off the CPU's runqueue.

Similarly, when soft-onlined, the CPU is marked active again, i.e.
added back to cpu_active_mask, and the CPU's capacity is restored.
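
For illustration, a hypothetical caller (park_cpu()/unpark_cpu() and
the driver context are made up for this example; only
set_cpu_softoffline() is provided by this patch) could park and
unpark a CPU like:

	/* Hypothetical user, e.g. a driver acting on hypervisor hints. */
	static void park_cpu(int cpu)
	{
		/* Mark @cpu inactive, reduce its capacity, push tasks away. */
		set_cpu_softoffline(cpu, true);
	}

	static void unpark_cpu(int cpu)
	{
		/* Mark @cpu active again and restore its capacity. */
		set_cpu_softoffline(cpu, false);
	}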

Soft-offline is almost the same as the first step of a regular
offline, except that the sched-domains are not rebuilt. Since the
remaining steps, including the sched-domain rebuild, are skipped, the
overhead of soft-offline is lower than that of a regular offline. A
new cpumask indicates that a soft-offline is in progress, so the
sched-domain rebuild can be skipped.
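
Condensed from the sched_cpu_deactivate() hunk below, the gating
looks like:

	/* For CPU soft-offline, don't need to rebuild sched-domains */
	if (!cpumask_test_cpu(cpu, &cpu_softoffline_mask))
		cpuset_cpu_inactive(cpu);	/* regular offline path */
	/* soft-offline in progress: skip the sched-domain rebuild */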

To push tasks off the CPU, balance_push() is modified to keep pushing
tasks out as long as there are runnable tasks on the runqueue, or as
long as the CPU is in the dying state.
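
The resulting decision in balance_push() (condensed from the hunk
below) is:

	if (cpu_active(rq->cpu) || rq != this_rq())
		return;	/* fully active CPU, or not the outgoing CPU */

	/*
	 * Regular offline (cpu_dying()): stay armed until
	 * balance_push_set(.on = false).
	 * Soft-offline: stay armed only while runnable tasks remain.
	 */
	if (cpu_dying(rq->cpu) || rq->nr_running)
		rq->balance_callback = &balance_push_callback;
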
Signed-off-by: Srikar Dronamraju <srikar@...ux.ibm.com>
---
 include/linux/sched/topology.h |  1 +
 kernel/sched/core.c            | 44 ++++++++++++++++++++++++++++++----
 2 files changed, 40 insertions(+), 5 deletions(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index bbcfdf12aa6e..ed45d7db3e76 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -241,4 +241,5 @@ static inline int task_node(const struct task_struct *p)
 	return cpu_to_node(task_cpu(p));
 }
 
+extern void set_cpu_softoffline(int cpu, bool soft_offline);
 #endif /* _LINUX_SCHED_TOPOLOGY_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 89efff1e1ead..f66fd1e925b0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8177,13 +8177,16 @@ static void balance_push(struct rq *rq)
 	 * Only active while going offline and when invoked on the outgoing
 	 * CPU.
 	 */
-	if (!cpu_dying(rq->cpu) || rq != this_rq())
+	if (cpu_active(rq->cpu) || rq != this_rq())
 		return;
 
 	/*
-	 * Ensure the thing is persistent until balance_push_set(.on = false);
+	 * Unless soft-offline, ensure the thing is persistent until
+	 * balance_push_set(.on = false). In case of soft-offline, it is
+	 * enough to push the current non-pinned tasks out.
 	 */
-	rq->balance_callback = &balance_push_callback;
+	if (cpu_dying(rq->cpu) || rq->nr_running)
+		rq->balance_callback = &balance_push_callback;
 
 	/*
 	 * Both the cpu-hotplug and stop task are in this case and are
@@ -8392,6 +8395,8 @@ static inline void sched_smt_present_dec(int cpu)
 #endif
 }
 
+static struct cpumask cpu_softoffline_mask;
+
 int sched_cpu_activate(unsigned int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
@@ -8411,7 +8416,10 @@ int sched_cpu_activate(unsigned int cpu)
 	if (sched_smp_initialized) {
 		sched_update_numa(cpu, true);
 		sched_domains_numa_masks_set(cpu);
-		cpuset_cpu_active();
+
+		/* For CPU soft-offline, don't need to rebuild sched-domains */
+		if (!cpumask_test_cpu(cpu, &cpu_softoffline_mask))
+			cpuset_cpu_active();
 	}
 
 	scx_rq_activate(rq);
@@ -8485,7 +8493,11 @@ int sched_cpu_deactivate(unsigned int cpu)
 		return 0;
 
 	sched_update_numa(cpu, false);
-	cpuset_cpu_inactive(cpu);
+
+	/* For CPU soft-offline, don't need to rebuild sched-domains */
+	if (!cpumask_test_cpu(cpu, &cpu_softoffline_mask))
+		cpuset_cpu_inactive(cpu);
+
 	sched_domains_numa_masks_clear(cpu);
 	return 0;
 }
@@ -10928,3 +10940,25 @@ void sched_enq_and_set_task(struct sched_enq_and_set_ctx *ctx)
 	set_next_task(rq, ctx->p);
 }
 #endif /* CONFIG_SCHED_CLASS_EXT */
+
+void set_cpu_softoffline(int cpu, bool soft_offline)
+{
+	struct sched_domain *sd;
+
+	if (!cpu_online(cpu))
+		return;
+
+	cpumask_set_cpu(cpu, &cpu_softoffline_mask);
+
+	rcu_read_lock();
+	for_each_domain(cpu, sd)
+		update_group_capacity(sd, cpu);
+	rcu_read_unlock();
+
+	if (soft_offline)
+		sched_cpu_deactivate(cpu);
+	else
+		sched_cpu_activate(cpu);
+
+	cpumask_clear_cpu(cpu, &cpu_softoffline_mask);
+}
--
2.43.7