Message-ID: <fc570904-a9d0-4c86-b7c8-d47da6bf02dd@linux.ibm.com>
Date: Thu, 28 Nov 2024 00:01:20 +0530
From: Madadi Vineeth Reddy <vineethr@...ux.ibm.com>
To: Vishal Chourasia <vishalc@...ux.ibm.com>, linux-kernel@...r.kernel.org
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, sshegde@...ux.ibm.com,
Madadi Vineeth Reddy <vineethr@...ux.ibm.com>
Subject: Re: [PATCH] sched/fair: Fix CPU bandwidth limit bypass during CPU
hotplug
Hi Vishal,
On 26/11/24 12:18, Vishal Chourasia wrote:
> CPU controller limits are not properly enforced during CPU hotplug operations,
> particularly during CPU offline. When a CPU goes offline, throttled
> processes are unintentionally being unthrottled across all CPUs in the system,
> allowing them to exceed their assigned quota limits.
>
> Assigning a 6.25% bandwidth limit to a cgroup on an 8-CPU system, where the
> workload runs 8 threads for 20 seconds at 100% CPU utilization, the
> expected (user+sys) time is 10 seconds.
>
> # cat /sys/fs/cgroup/test/cpu.max
> 50000 100000
>
> # ./ebizzy -t 8 -S 20 // non-hotplug case
> real 20.00 s
> user 10.81 s // intended behaviour
> sys 0.00 s
>
> # ./ebizzy -t 8 -S 20 // hotplug case
> real 20.00 s
> user 14.43 s // Workload is able to run for 14 secs
> sys 0.00 s // when it should have only run for 10 secs
>
> During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain()
> is called for every active CPU to update the root domain. That ends up
> calling rq_offline_fair(), which unthrottles any throttled hierarchies.
>
> Unthrottling should only occur for the CPU being hotplugged to allow its
> throttled processes to become runnable and get migrated to other CPUs.
>
> With this patch applied,
> # ./ebizzy -t 8 -S 20 // hotplug case
> real 21.00 s
> user 10.16 s // intended behaviour
> sys 0.00 s
>
> Note: hotplug operations (online, offline) were performed in a while(1) loop.
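
For anyone reproducing this, the setup looks roughly like the sketch below.
The cgroup path, the cpu.max values and the ebizzy invocation are taken from
your changelog; the subtree_control step, moving the shell via cgroup.procs
and the choice of CPU 2 for the hotplug loop are assumptions on my part.

# mkdir /sys/fs/cgroup/test
# echo "+cpu" > /sys/fs/cgroup/cgroup.subtree_control
# echo "50000 100000" > /sys/fs/cgroup/test/cpu.max

(in a second shell, left in the root cgroup, keep toggling a CPU)
# while :; do echo 0 > /sys/devices/system/cpu/cpu2/online; echo 1 > /sys/devices/system/cpu/cpu2/online; done

(back in the first shell, move it into the cgroup and run the workload)
# echo $$ > /sys/fs/cgroup/test/cgroup.procs
# ./ebizzy -t 8 -S 20
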
Tested with and without this patch using the ebizzy workload as described above.
Without the patch:
------------------
19376 records/s
real 20.00 s
user 12.49 s
sys 0.00 s
With the patch:
---------------
17708 records/s
real 20.00 s
user 10.07 s
sys 0.00 s
Hence,
Tested-by: Madadi Vineeth Reddy <vineethr@...ux.ibm.com>
Thanks,
Madadi Vineeth Reddy
>
> Signed-off-by: Vishal Chourasia <vishalc@...ux.ibm.com>
> ---
> kernel/sched/fair.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fbdca89c677f..c436e2307e6f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6684,7 +6684,8 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
> list_for_each_entry_rcu(tg, &task_groups, list) {
> struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>
> - if (!cfs_rq->runtime_enabled)
> + /* Don't unthrottle an active cfs_rq unnecessarily */
> + if (!cfs_rq->runtime_enabled || cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
> continue;
>
> /*