Message-ID: <20210321144832.36f31f3e@imladris.surriel.com>
Date:   Sun, 21 Mar 2021 14:48:32 -0400
From:   Rik van Riel <riel@...riel.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     linux-kernel@...r.kernel.org, kernel-team@...com,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Valentin Schneider <valentin.schneider@....com>
Subject: [PATCH] sched/fair: bring back select_idle_smt, but differently

    Mel Gorman did some nice work in 9fe1f127b913
    ("sched/fair: Merge select_idle_core/cpu()"), resulting in the kernel
    being more efficient at finding an idle CPU, and in tasks spending less
    time waiting to be run, both according to the schedstats run_delay
    numbers, and according to measured application latencies. Yay.
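
    For reference, the run_delay number is the second field in
    /proc/<pid>/schedstat: the time (in nanoseconds) a task spent
    runnable but waiting for a CPU. A minimal user-space sketch for
    sampling it, assuming a kernel built with CONFIG_SCHED_INFO (e.g.
    via CONFIG_SCHEDSTATS=y):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long on_cpu_ns, run_delay_ns, timeslices;
		FILE *f = fopen("/proc/self/schedstat", "r");

		if (!f || fscanf(f, "%llu %llu %llu", &on_cpu_ns,
				 &run_delay_ns, &timeslices) != 3)
			return 1;
		printf("run_delay: %llu ns over %llu timeslices\n",
		       run_delay_ns, timeslices);
		fclose(f);
		return 0;
	}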
    
    The flip side of this is that we see more task migrations (about
    30% more), higher cache misses, higher memory bandwidth utilization,
    and higher CPU use, for the same number of requests/second.
    
    This is most pronounced on a memcache type workload, which saw
    a consistent 1-3% increase in total CPU use on the system, due
    to those increased task migrations leading to higher L2 cache
    miss numbers, and higher memory utilization. The exclusive L3
    cache on Skylake does us no favors there.
    
    On our web serving workload, that effect is usually negligible,
    but occasionally as large as a 9% regression in the number of
    requests served, due to some interaction between scheduler latency
    and the web load balancer.
    
    It appears that the increased number of CPU migrations is generally
    a good thing, since it leads to lower cpu_delay numbers, reflecting
    the fact that tasks get to run faster. However, the reduced locality
    and the corresponding increase in L2 cache misses hurts a little.
    
    The patch below appears to fix the regression, while keeping the
    benefit of the lower cpu_delay numbers, by reintroducing select_idle_smt
    with a twist: when a socket has no idle cores, check to see if the
    sibling of "prev" is idle, before searching all the other CPUs.
    
    This fixes both the occasional 9% regression on the web serving
    workload, and the continuous 2% CPU use regression on the memcache
    type workload.
    
    With Mel's patches and this patch together, the p95 and p99 response
    times for the memcache type application improve by about 20% over what
    they were before Mel's patches got merged.
    
    Signed-off-by: Rik van Riel <riel@...riel.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 794c2cb945f8..fcc47675d160 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6098,6 +6098,27 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
 	return -1;
 }
 
+/*
+ * Scan the local SMT mask for idle CPUs.
+ */
+static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
+{
+	int cpu;
+
+	if (!static_branch_likely(&sched_smt_present))
+		return -1;
+
+	for_each_cpu(cpu, cpu_smt_mask(target)) {
+		if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
+		    !cpumask_test_cpu(cpu, sched_domain_span(sd)))
+			continue;
+		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+			return cpu;
+	}
+
+	return -1;
+}
+
 #else /* CONFIG_SCHED_SMT */
 
 static inline void set_idle_cores(int cpu, int val)
@@ -6114,6 +6136,11 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
 	return __select_idle_cpu(core);
 }
 
+static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
+{
+	return -1;
+}
+
 #endif /* CONFIG_SCHED_SMT */
 
 /*
@@ -6121,7 +6148,7 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
  * average idle time for this rq (as found in rq->avg_idle).
  */
-static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
+static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int prev, int target)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	int i, cpu, idle_cpu = -1, nr = INT_MAX;
@@ -6155,6 +6182,13 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 		time = cpu_clock(this);
 	}
 
+	if (cpus_share_cache(prev, target)) {
+		/* No idle core. Check if prev has an idle sibling. */
+		i = select_idle_smt(p, sd, prev);
+		if ((unsigned int)i < nr_cpumask_bits)
+			return i;
+	}
+
 	for_each_cpu_wrap(cpu, cpus, target) {
 		if (smt) {
 			i = select_idle_core(p, cpu, cpus, &idle_cpu);
@@ -6307,7 +6341,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if (!sd)
 		return target;
 
-	i = select_idle_cpu(p, sd, target);
+	i = select_idle_cpu(p, sd, prev, target);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
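
The "(unsigned)i < nr_cpumask_bits" tests above are the scheduler's
idiom for "a valid CPU was found": the search helpers return -1 on
failure, and the cast turns -1 into UINT_MAX, so a single unsigned
comparison rejects both failure and out-of-range CPU numbers. A
standalone illustration, with NR_CPUMASK_BITS as a stand-in constant:

	#include <stdio.h>

	#define NR_CPUMASK_BITS 64U	/* stand-in for nr_cpumask_bits */

	int main(void)
	{
		int not_found = -1, found = 5;

		/* -1 becomes UINT_MAX and fails the range check */
		printf("%d\n", (unsigned int)not_found < NR_CPUMASK_BITS); /* 0 */
		printf("%d\n", (unsigned int)found < NR_CPUMASK_BITS);     /* 1 */
		return 0;
	}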
