Message-ID: <20241210102346.228663-2-vishalc@linux.ibm.com>
Date: Tue, 10 Dec 2024 15:53:47 +0530
From: Vishal Chourasia <vishalc@...ux.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        vschneid@...hat.com, sshegde@...ux.ibm.com, srikar@...ux.ibm.com,
        vineethr@...ux.ibm.com, zhangqiao22@...wei.com,
        Vishal Chourasia <vishalc@...ux.ibm.com>
Subject: [PATCH v3] sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug

CPU controller limits are not properly enforced during CPU hotplug
operations, particularly during CPU offline. When a CPU goes offline,
throttled processes are unintentionally unthrottled across all CPUs in
the system, allowing them to exceed their assigned quota limits.

Consider the following example:

A cgroup is assigned a 6.25% bandwidth limit on an 8-CPU system, i.e.
half of one CPU. The workload runs 8 threads for 20 seconds at 100%
CPU utilization, so the expected (user+sys) time is 0.5 CPU * 20 s =
10 seconds.

$ cat /sys/fs/cgroup/test/cpu.max
50000 100000
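
The limit above is a 50000us quota per 100000us period (cgroup v2
cpu.max). A minimal setup sketch, assuming the "test" cgroup is created
fresh and the workload is launched from a shell placed into it:

$ echo +cpu > /sys/fs/cgroup/cgroup.subtree_control
$ mkdir /sys/fs/cgroup/test
$ echo "50000 100000" > /sys/fs/cgroup/test/cpu.max
$ echo $$ > /sys/fs/cgroup/test/cgroup.procs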

$ ./ebizzy -t 8 -S 20        // non-hotplug case
real 20.00 s
user 10.81 s                 // intended behaviour
sys   0.00 s

$ ./ebizzy -t 8 -S 20        // hotplug case
real 20.00 s
user 14.43 s                 // Workload is able to run for 14 secs
sys   0.00 s                 // when it should have only run for 10 secs
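
Throttling activity can be cross-checked against the cgroup's cpu.stat
counters (nr_throttled, throttled_usec), e.g.:

$ grep throttled /sys/fs/cgroup/test/cpu.stat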

During CPU hotplug, scheduler domains are rebuilt and
cpu_attach_domain() is called for every active CPU to update the root
domain. That ends up calling rq_offline_fair(), which unthrottles any
throttled hierarchies.

Unthrottling should only occur for the CPU being hotplugged to allow its
throttled processes to become runnable and get migrated to other CPUs.

With this patch applied,
$ ./ebizzy -t 8 -S 20        // hotplug case
real 21.00 s
user 10.16 s                 // intended behaviour
sys   0.00 s

Note: the hotplug operations (offline, online) were performed in a
while(1) loop for the duration of each run; a sketch follows.
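
A minimal sketch of that loop via the sysfs hotplug interface (the
choice of cpu1 is arbitrary; the exact CPUs cycled are not stated
here):

  while :; do
          echo 0 > /sys/devices/system/cpu/cpu1/online
          echo 1 > /sys/devices/system/cpu/cpu1/online
  done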

Signed-off-by: Vishal Chourasia <vishalc@...ux.ibm.com>
Tested-by: Madadi Vineeth Reddy <vineethr@...ux.ibm.com>

v2: https://lore.kernel.org/all/20241207052730.1746380-2-vishalc@linux.ibm.com
v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com
---
 kernel/sched/fair.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa0238ee4857..2faf7dff2bc8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6687,25 +6687,30 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 	rq_clock_start_loop_update(rq);
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(tg, &task_groups, list) {
-		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+	/* Traverse the thread group list only for inactive rq */
+	if (!cpumask_test_cpu(cpu_of(rq), cpu_active_mask)) {
+		list_for_each_entry_rcu(tg, &task_groups, list) {
+			struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
 
-		if (!cfs_rq->runtime_enabled)
-			continue;
+			if (!cfs_rq->runtime_enabled)
+				continue;
 
-		/*
-		 * clock_task is not advancing so we just need to make sure
-		 * there's some valid quota amount
-		 */
-		cfs_rq->runtime_remaining = 1;
-		/*
-		 * Offline rq is schedulable till CPU is completely disabled
-		 * in take_cpu_down(), so we prevent new cfs throttling here.
-		 */
-		cfs_rq->runtime_enabled = 0;
+			/*
+			 * Offline rq is schedulable till CPU is completely disabled
+			 * in take_cpu_down(), so we prevent new cfs throttling here.
+			 */
+			cfs_rq->runtime_enabled = 0;
 
-		if (cfs_rq_throttled(cfs_rq))
+			if (!cfs_rq_throttled(cfs_rq))
+				continue;
+
+			/*
+			 * clock_task is not advancing so we just need to make sure
+			 * there's some valid quota amount
+			 */
+			cfs_rq->runtime_remaining = 1;
 			unthrottle_cfs_rq(cfs_rq);
+		}
 	}
 	rcu_read_unlock();
 
-- 
2.47.0

