Message-ID: <1345625663.29604.36.camel@vlad>
Date:	Wed, 22 Aug 2012 11:54:23 +0300
From:	Vlad Zolotarov <vlad@...lemp.com>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	linux-kernel <linux-kernel@...r.kernel.org>,
	"Shai Fultheim (Shai@...leMP.com)" <Shai@...leMP.com>
Subject: [PATCH] sched: optimize the locking in rebalance_domains()

Don't perform the locking in rebalance_domains() when it's not needed.

rebalance_domains() takes the "balancing" spin-lock on every invocation
when the SD_SERIALIZE flag is set (the default configuration for
NUMA-aware systems). It does so regardless of whether enough time has
passed since the last re-balancing, in which case there is no need to
take the lock in the first place.

The above creates a heavy false-sharing problem on the "balancing"
spin-lock on large SMP systems: spin_trylock() is implemented with an
(atomic) xchg instruction, which invalidates the cache line "balancing"
belongs to and therefore generates intensive cross-NUMA-node traffic.
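The cost difference can be illustrated with a userspace sketch (a hypothetical minimal lock built on C11 atomics, not the kernel's actual spinlock implementation): an unconditional atomic exchange dirties the lock's cache line even when the acquisition fails, whereas a plain read before the exchange leaves the line in shared state while the lock is busy.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical minimal lock: 0 = unlocked, 1 = locked. */
typedef struct { atomic_int v; } tiny_lock;

/*
 * Naive trylock: unconditional atomic exchange. Even when the lock is
 * held, the exchange writes the cache line, invalidating every other
 * CPU's copy of it.
 */
static bool naive_trylock(tiny_lock *l)
{
	return atomic_exchange(&l->v, 1) == 0;
}

/*
 * "Check before exchange" variant: a plain load keeps the line in
 * shared state when the lock is busy, so contended callers generate
 * no invalidation traffic.
 */
static bool polite_trylock(tiny_lock *l)
{
	if (atomic_load_explicit(&l->v, memory_order_relaxed) != 0)
		return false;
	return atomic_exchange(&l->v, 1) == 0;
}
```

The patch takes this idea one level higher: the early interval check skips the trylock attempt altogether when re-balancing is not yet due, so contended CPUs touch the lock's cache line only when they actually have work to do.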

The patch below confines the above phenomenon to the time slots where
the lock is really needed, namely when the "interval" period has
actually elapsed since the last re-balancing.

Signed-off-by: Vlad Zolotarov <vlad@...lemp.com>
Acked-by: Shai Fultheim <shai@...lemp.com>
---
 kernel/sched/fair.c |   24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c219bf8..298e201 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4754,6 +4754,13 @@ static void rebalance_domains(int cpu, enum cpu_idle_type idle)
 		interval = msecs_to_jiffies(interval);
 		interval = clamp(interval, 1UL, max_load_balance_interval);
 
+		/*
+		 * Continue to the next domain if the current one does not
+		 * need to be re-balanced yet.
+		 */
+		if (time_before(jiffies, sd->last_balance + interval))
+			goto out;
+
 		need_serialize = sd->flags & SD_SERIALIZE;
 
 		if (need_serialize) {
@@ -4761,16 +4768,15 @@ static void rebalance_domains(int cpu, enum cpu_idle_type idle)
 				goto out;
 		}
 
-		if (time_after_eq(jiffies, sd->last_balance + interval)) {
-			if (load_balance(cpu, rq, sd, idle, &balance)) {
-				/*
-				 * We've pulled tasks over so either we're no
-				 * longer idle.
-				 */
-				idle = CPU_NOT_IDLE;
-			}
-			sd->last_balance = jiffies;
+		if (load_balance(cpu, rq, sd, idle, &balance)) {
+			/*
+			 * We've pulled tasks over, so we're no
+			 * longer idle.
+			 */
+			idle = CPU_NOT_IDLE;
 		}
+		sd->last_balance = jiffies;
+
 		if (need_serialize)
 			spin_unlock(&balancing);
 out:
-- 
1.7.9.5



