Message-ID: <1473092813.4412.6.camel@gmail.com>
Date:   Mon, 05 Sep 2016 18:26:53 +0200
From:   Mike Galbraith <umgwanakikbuti@...il.com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Rik van Riel <riel@...hat.com>
Subject: [v2 patch v3.18+ regression fix] sched: Further improve spurious
 CPU_IDLE active migrations

Coming back to this, how about this instead: only increase the group
imbalance threshold when sd_llc_size == 2.  Newer, L3-equipped
processors then aren't affected.
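(Aside, not part of the patch: sd_llc_size is the number of CPUs spanned
by the highest cache-sharing scheduling domain, so it is 2 on exactly the
L2-buddy-pair, no-L3 topologies this targets.  A from-memory paraphrase of
where it comes from in kernel/sched/core.c of that era, details elided:

/* Sketch only: paraphrased, not a verbatim excerpt. */
DEFINE_PER_CPU(int, sd_llc_size);

static void update_top_cache_domain(int cpu)
{
	struct sched_domain *sd;
	int size = 1;

	/* Highest domain whose CPUs share package resources, i.e. the LLC. */
	sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
	if (sd)
		size = cpumask_weight(sched_domain_span(sd));

	/* Two cores sharing only an L2, no L3 => sd_llc_size == 2. */
	per_cpu(sd_llc_size, cpu) = size;
}

The new check below therefore only widens the threshold on that class of
box.)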

43f4d666 partially cured spurious migrations, but when there are
completely idle groups on a lightly loaded processor, and a buddy
pair occupies the busiest group, we will not attempt to migrate due
to select_idle_sibling() buddy placement, leaving the busiest queue
with one task.  Balancing then fails to move anything, we increment
nr_balance_failed until we kick active balancing, and bounce the
buddy pair endlessly, demolishing throughput.
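(For reference, the "kick active balancing" path above is, in condensed
and paraphrased form, the failure tail of load_balance() in
kernel/sched/fair.c; not part of this patch, most conditions elided:

	if (!ld_moved) {
		/* Nothing could be pulled: count the failure on periodic balance. */
		if (idle != CPU_NEWLY_IDLE)
			sd->nr_balance_failed++;

		/*
		 * After enough failures, forcibly push the running task away
		 * via active_load_balance_cpu_stop() -- the step that rips
		 * the communicating buddy pair apart, over and over.
		 */
		if (sd->nr_balance_failed > sd->cache_nice_tries + 2)
			active_balance = 1;
	}
)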

Increase the group imbalance threshold to two when sd_llc_size == 2,
allowing buddies to share an L2 without affecting larger, L3-equipped
processors.

Regression detected on an X5472 box, which has 4 MC groups of 2 cores.

netperf -l 60 -H 127.0.0.1 -t UDP_STREAM -i5,1 -I 95,5
pre:
!!! WARNING
!!! Desired confidence was not achieved within the specified iterations.
!!! This implies that there was variability in the test environment that
!!! must be investigated before going further.
!!! Confidence intervals: Throughput      : 66.421%
!!!                       Local CPU util  : 0.000%
!!!                       Remote CPU util : 0.000%

Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992   65507   60.00     1779143      0    15539.49
212992           60.00     1773551           15490.65

post:
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992   65507   60.00     3719377      0    32486.01
212992           60.00     3717492           32469.54

Signed-off-by: Mike Galbraith <mgalbraith@...e.de>
Fixes: caeb178c ("sched/fair: Make update_sd_pick_busiest() return 'true' on a busier sd")
Cc: <stable@...r.kernel.org> # v3.18+
---
 kernel/sched/fair.c |   17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7249,12 +7249,19 @@ static struct sched_group *find_busiest_
 		 * This cpu is idle. If the busiest group is not overloaded
 		 * and there is no imbalance between this and busiest group
 		 * wrt idle cpus, it is balanced. The imbalance becomes
-		 * significant if the diff is greater than 1 otherwise we
-		 * might end up to just move the imbalance on another group
+		 * significant if the diff is greater than 1 for most CPUs,
+		 * or 2 for older CPUs having multiple groups of 2 cores
+		 * sharing an L2, otherwise we may end up uselessly moving
+		 * the imbalance to another group, or starting a tug of war
+		 * with idle L2 groups constantly ripping communicating
+		 * tasks apart, and no L3 to mitigate the cache miss pain.
 		 */
-		if ((busiest->group_type != group_overloaded) &&
-				(local->idle_cpus <= (busiest->idle_cpus + 1)))
-			goto out_balanced;
+		if (busiest->group_type != group_overloaded) {
+			int imbalance = __this_cpu_read(sd_llc_size) == 2 ? 2 : 1;
+
+			if (local->idle_cpus <= busiest->idle_cpus + imbalance)
+				goto out_balanced;
+		}
 	} else {
 		/*
 		 * In the CPU_NEWLY_IDLE, CPU_NOT_IDLE cases, use
