Message-ID: <fb488379-3965-496b-8c6f-259981f3d7e5@huawei.com>
Date: Tue, 10 Dec 2024 14:55:36 +0800
From: Zhang Qiao <zhangqiao22@...wei.com>
To: Vishal Chourasia <vishalc@...ux.ibm.com>, <linux-kernel@...r.kernel.org>
CC: <mingo@...hat.com>, <peterz@...radead.org>, <juri.lelli@...hat.com>,
<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
<vschneid@...hat.com>, <sshegde@...ux.ibm.com>, <srikar@...ux.ibm.com>,
<vineethr@...ux.ibm.com>
Subject: Re: [PATCH v2] sched/fair: Fix CPU bandwidth limit bypass during CPU
hotplug
Hi Vishal,
On 2024/12/7 13:27, Vishal Chourasia wrote:
> CPU controller limits are not properly enforced during CPU hotplug
> operations, particularly during CPU offline. When a CPU goes offline,
> throttled processes are unintentionally being unthrottled across all CPUs
> in the system, allowing them to exceed their assigned quota limits.
>
I encountered a similar issue: a cfs_rq that was not in a throttled state
and still had plenty of runtime_remaining nevertheless had it reset to 1
in unthrottle_offline_cfs_rqs(). As a result the cfs_rq's runtime was
quickly depleted, and the group's actual running time ended up smaller
than its configured quota.
> Consider the example below:
>
> Assign a 6.25% bandwidth limit to a cgroup on an 8-CPU system, and run a
> workload of 8 threads at 100% CPU utilization for 20 seconds; the
> expected (user+sys) time is 10 seconds.
>
> $ cat /sys/fs/cgroup/test/cpu.max
> 50000 100000
>
> $ ./ebizzy -t 8 -S 20 // non-hotplug case
> real 20.00 s
> user 10.81 s // intended behaviour
> sys 0.00 s
>
> $ ./ebizzy -t 8 -S 20 // hotplug case
> real 20.00 s
> user 14.43 s // Workload is able to run for 14 secs
> sys 0.00 s // when it should have only run for 10 secs
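
(For reference: cpu.max is quota and period in microseconds, so 50000/100000
grants 0.5 CPU worth of runtime, i.e. 6.25% of the 8 CPUs; 20 s of wall time
therefore allows at most 0.5 * 20 = 10 CPU-seconds of user+sys time.)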
>
> During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain
> is called for every active CPU to update the root domain. That ends up
> calling rq_offline_fair which un-throttles any throttled hierarchies.
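
(If I follow the path correctly, the chain is cpu_attach_domain() ->
rq_attach_root() -> set_rq_offline() -> rq_offline_fair() ->
unthrottle_offline_cfs_rqs(), and it runs for every rq whose root domain
is rebuilt, not only for the rq of the CPU going offline.)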
>
> Unthrottling should only occur for the CPU being hotplugged to allow its
> throttled processes to become runnable and get migrated to other CPUs.
>
> With this patch applied,
> $ ./ebizzy -t 8 -S 20 // hotplug case
> real 21.00 s
> user 10.16 s // intended behaviour
> sys 0.00 s
>
> Note: the hotplug operations (offline, online) were performed in a while(1) loop
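
(A loop along these lines matches that note; this is only a sketch, and the
CPU index is illustrative:

        # repeatedly offline and online one CPU while the benchmark runs
        while true; do
                echo 0 > /sys/devices/system/cpu/cpu1/online
                echo 1 > /sys/devices/system/cpu/cpu1/online
        done
)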
> Signed-off-by: Vishal Chourasia <vishalc@...ux.ibm.com>
> Tested-by: Madadi Vineeth Reddy <vineethr@...ux.ibm.com>
>
> v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com
>
> ---
> kernel/sched/fair.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fbdca89c677f..e28a8e056ebf 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6684,7 +6684,8 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>  	list_for_each_entry_rcu(tg, &task_groups, list) {
>  		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>  
> -		if (!cfs_rq->runtime_enabled)
> +		/* Only unthrottle the CPU being hotplugged */
> +		if (!cfs_rq->runtime_enabled || cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
>  			continue;
cpu_of(rq) is a fixed value here, so the result of cpumask_test_cpu() is
also fixed for the whole loop. Could we do this check once, before
traversing the task_groups list? On an active CPU that avoids walking the
list entirely, which helps when there are many task groups.
Something like this:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2d16c8545c71..79e9e5323112 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6687,25 +6687,29 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 	rq_clock_start_loop_update(rq);
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	if (!cpumask_test_cpu(cpu_of(rq), cpu_active_mask)) {
+		list_for_each_entry_rcu(tg, &task_groups, list) {
+			struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
 
-		if (!cfs_rq->runtime_enabled)
-			continue;
+			if (!cfs_rq->runtime_enabled)
+				continue;
 
-		/*
-		 * clock_task is not advancing so we just need to make sure
-		 * there's some valid quota amount
-		 */
-		cfs_rq->runtime_remaining = 1;
-		/*
-		 * Offline rq is schedulable till CPU is completely disabled
-		 * in take_cpu_down(), so we prevent new cfs throttling here.
-		 */
-		cfs_rq->runtime_enabled = 0;
+			/*
+			 * Offline rq is schedulable till CPU is completely disabled
+			 * in take_cpu_down(), so we prevent new cfs throttling here.
+			 */
+			cfs_rq->runtime_enabled = 0;
 
-		if (cfs_rq_throttled(cfs_rq))
+			if (!cfs_rq_throttled(cfs_rq))
+				continue;
+
+			/*
+			 * clock_task is not advancing so we just need to make sure
+			 * there's some valid quota amount
+			 */
+			cfs_rq->runtime_remaining = 1;
 			unthrottle_cfs_rq(cfs_rq);
+		}
 	}
 	rcu_read_unlock();
 
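
(Note this restructure also only resets runtime_remaining to 1 for a
cfs_rq that is actually throttled and about to be unthrottled, which
avoids the premature depletion described at the top of this mail.)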
--
Zhang Qiao
>  
>  		/*