Message-ID: <20241126064812.809903-2-vishalc@linux.ibm.com>
Date: Tue, 26 Nov 2024 12:18:13 +0530
From: Vishal Chourasia <vishalc@...ux.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, sshegde@...ux.ibm.com,
Vishal Chourasia <vishalc@...ux.ibm.com>
Subject: [PATCH] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug

CPU controller limits are not properly enforced during CPU hotplug
operations, particularly during CPU offline. When a CPU goes offline,
throttled processes are unintentionally unthrottled across all CPUs in
the system, allowing them to exceed their assigned quota limits.

Assign a 6.25% bandwidth limit to a cgroup on an 8-CPU system (a quota
of 50000us per 100000us period, i.e. half of one CPU) and run a
workload of 8 threads at 100% CPU utilization for 20 seconds. The
expected (user+sys) time is 8 CPUs * 20 s * 6.25% = 10 seconds.

# cat /sys/fs/cgroup/test/cpu.max
50000 100000
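
For reference, a minimal sketch of the reproduction setup, assuming
cgroup v2 is mounted at /sys/fs/cgroup (the cgroup name "test" matches
the path above; the exact shell commands are illustrative):

# mkdir /sys/fs/cgroup/test
# echo "50000 100000" > /sys/fs/cgroup/test/cpu.max // 50ms quota per 100ms period
# echo $$ > /sys/fs/cgroup/test/cgroup.procs        // move this shell into the cgroup
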
# ./ebizzy -t 8 -S 20 // non-hotplug case
real 20.00 s
user 10.81 s // intended behaviour
sys 0.00 s

# ./ebizzy -t 8 -S 20 // hotplug case
real 20.00 s
user 14.43 s // Workload is able to run for 14 secs
sys 0.00 s // when it should have only run for 10 secs

During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain()
is called for every active CPU to update the root domain. That ends up
calling rq_offline_fair(), which unthrottles every throttled hierarchy,
regardless of which CPU is actually going offline. Unthrottling should
only occur on the CPU being hotplugged, so that its throttled tasks
become runnable and can be migrated to other CPUs.
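
Roughly, the call chain during a domain rebuild looks like this
(a simplified view; function names as in kernel/sched/, details may
vary by kernel version):

partition_sched_domains()
  cpu_attach_domain()                  // called for every active CPU
    rq_attach_root()
      set_rq_offline()
        rq_offline_fair()              // via the fair_sched_class rq_offline callback
          unthrottle_offline_cfs_rqs() // unthrottles every throttled cfs_rq on the rq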

With this patch applied:

# ./ebizzy -t 8 -S 20 // hotplug case
real 21.00 s
user 10.16 s // intended behaviour
sys 0.00 s

Note: the hotplug operations (online, offline) were performed in a
while(1) loop for the duration of each run, along the lines of the
sketch below.
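
A minimal sketch of the hotplug loop, assuming the standard sysfs CPU
hotplug interface (the CPU number is illustrative):

# while :; do
    echo 0 > /sys/devices/system/cpu/cpu1/online // offline cpu1
    echo 1 > /sys/devices/system/cpu/cpu1/online // online cpu1
  done
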
Signed-off-by: Vishal Chourasia <vishalc@...ux.ibm.com>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fbdca89c677f..c436e2307e6f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6684,7 +6684,8 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 	list_for_each_entry_rcu(tg, &task_groups, list) {
 		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
 
-		if (!cfs_rq->runtime_enabled)
+		/* Don't unthrottle an active cfs_rq unnecessarily */
+		if (!cfs_rq->runtime_enabled || cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
 			continue;
 
 		/*
--
2.47.0