Message-ID: <Z1gJrJ6TyotWzoCu@linux.ibm.com>
Date: Tue, 10 Dec 2024 14:58:12 +0530
From: Vishal Chourasia <vishalc@...ux.ibm.com>
To: Zhang Qiao <zhangqiao22@...wei.com>
Cc: linux-kernel@...r.kernel.org, mingo@...hat.com, peterz@...radead.org,
        juri.lelli@...hat.com, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        mgorman@...e.de, vschneid@...hat.com, sshegde@...ux.ibm.com,
        srikar@...ux.ibm.com, vineethr@...ux.ibm.com
Subject: Re: [PATCH v2] sched/fair: Fix CPU bandwidth limit bypass during CPU
 hotplug

On Tue, Dec 10, 2024 at 02:55:36PM +0800, Zhang Qiao wrote:
> Hi Vishal,
> 
Thanks for looking into this!
> 
> 
> 在 2024/12/7 13:27, Vishal Chourasia 写道:
> > CPU controller limits are not properly enforced during CPU hotplug
> > operations, particularly during CPU offline. When a CPU goes offline,
> > throttled processes are unintentionally unthrottled across all CPUs in
> > the system, allowing them to exceed their assigned quota limits.
> > 
> 
> I encountered a similar issue where the cfs_rq was not throttled and its
> runtime_remaining still had plenty left, yet it was reset to 1 here. That
> caused the cfs_rq's runtime_remaining to be depleted quickly, so the
> actual running time was smaller than the configured quota limit.
> 
Correct.
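For a cfs_rq on a still-active CPU the loop nevertheless executes

	cfs_rq->runtime_remaining = 1;
	...
	cfs_rq->runtime_enabled = 0;

(the same lines visible in the diff further down), clobbering valid
runtime state on CPUs that are not actually going offline.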
> > Consider the following example:
> > 
> > Assign a 6.25% bandwidth limit to a cgroup on an 8-CPU system, and run a
> > workload of 8 threads at 100% CPU utilization for 20 seconds; the
> > expected (user+sys) time is 10 seconds.
> > 
> > $ cat /sys/fs/cgroup/test/cpu.max
> > 50000 100000
> > 
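(For context: 50000/100000 allows 0.5 CPU-worth of runtime per period;
0.5 of 8 CPUs is 6.25%, and 0.5 CPU x 20 s gives the expected 10 s of
user+sys time.)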
> > $ ./ebizzy -t 8 -S 20        // non-hotplug case
> > real 20.00 s
> > user 10.81 s                 // intended behaviour
> > sys   0.00 s
> > 
> > $ ./ebizzy -t 8 -S 20        // hotplug case
> > real 20.00 s
> > user 14.43 s                 // Workload is able to run for 14 secs
> > sys   0.00 s                 // when it should have only run for 10 secs
> > 
> > During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain
> > is called for every active CPU to update its root domain. That ends up
> > calling rq_offline_fair, which unthrottles any throttled hierarchies.
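(Roughly: domain rebuild -> cpu_attach_domain() -> rq_attach_root() ->
set_rq_offline() -> rq_offline_fair() -> unthrottle_offline_cfs_rqs();
rq_attach_root() takes the rq offline while switching root domains, and
this happens for every CPU being reattached, not just the one going down.)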
> > 
> > Unthrottling should only occur for the CPU being hotplugged to allow its
> > throttled processes to become runnable and get migrated to other CPUs.
> > 
> > With this patch applied,
> > $ ./ebizzy -t 8 -S 20        // hotplug case
> > real 21.00 s
> > user 10.16 s                 // intended behaviour
> > sys   0.00 s
> > 
> > Note: the hotplug operations (offline, then online) were performed in a while(1) loop
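For completeness, the hotplug stress was essentially the following (a
sketch; the exact script and choice of CPU are assumptions):

	# repeatedly offline/online a CPU while the benchmark runs
	while :; do
		echo 0 > /sys/devices/system/cpu/cpu1/online
		echo 1 > /sys/devices/system/cpu/cpu1/online
	done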
> > Signed-off-by: Vishal Chourasia <vishalc@...ux.ibm.com>
> > Tested-by: Madadi Vineeth Reddy <vineethr@...ux.ibm.com>
> > 
> > v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com
> > 
> > ---
> >  kernel/sched/fair.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index fbdca89c677f..e28a8e056ebf 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6684,7 +6684,8 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
> >  	list_for_each_entry_rcu(tg, &task_groups, list) {
> >  		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> >  
> > -		if (!cfs_rq->runtime_enabled)
> > +		/* Only unthrottle the CPU being hotplugged */
> > +		if (!cfs_rq->runtime_enabled || cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
> >  			continue;
> 
> cpu_of(rq) is a fixed value, so the result of cpumask_test_cpu() is also
> fixed. We could check this once before traversing the task_groups list,
> avoiding the unnecessary traversal, right?
Yes, I will send out another version. Thanks for pointing it out!
> 
> Something like this:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2d16c8545c71..79e9e5323112 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6687,25 +6687,29 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>         rq_clock_start_loop_update(rq);
> 
>         rcu_read_lock();
> -       list_for_each_entry_rcu(tg, &task_groups, list) {
> -               struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> +       if (!cpumask_test_cpu(cpu_of(rq), cpu_active_mask)) {
> +               list_for_each_entry_rcu(tg, &task_groups, list) {
> +                       struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> 
> -               if (!cfs_rq->runtime_enabled)
> -                       continue;
> +                       if (!cfs_rq->runtime_enabled)
> +                               continue;
> 
> -               /*
> -                * clock_task is not advancing so we just need to make sure
> -                * there's some valid quota amount
> -                */
> -               cfs_rq->runtime_remaining = 1;
> -               /*
> -                * Offline rq is schedulable till CPU is completely disabled
> -                * in take_cpu_down(), so we prevent new cfs throttling here.
> -                */
> -               cfs_rq->runtime_enabled = 0;
> +                       /*
> +                        * Offline rq is schedulable till CPU is completely disabled
> +                        * in take_cpu_down(), so we prevent new cfs throttling here.
> +                        */
> +                       cfs_rq->runtime_enabled = 0;
> 
> -               if (cfs_rq_throttled(cfs_rq))
> +                       if (!cfs_rq_throttled(cfs_rq))
> +                               continue;
> +
> +                       /*
> +                        * clock_task is not advancing so we just need to make sure
> +                        * there's some valid quota amount
> +                        */
> +                       cfs_rq->runtime_remaining = 1;
>                         unthrottle_cfs_rq(cfs_rq);
> +               }
>         }
Right: only traverse the task_groups list for an inactive CPU, and if the
cfs_rq is throttled, set its runtime_remaining to 1 and unthrottle it.
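Alternatively, an early return at the top of unthrottle_offline_cfs_rqs()
would keep the indentation flat; something like this (sketch, untested):

	/*
	 * The rq is merely being detached from its old root domain;
	 * the CPU itself stays active, so there is nothing to
	 * unthrottle here.
	 */
	if (cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
		return;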

- vishalc
> 
> -- 
> Zhang Qiao
