Date:   Fri, 1 Jun 2018 08:28:11 -0400
From:   Rik van Riel <riel@...riel.com>
To:     linux-kernel@...r.kernel.org
Cc:     Song Liu <songliubraving@...com>, kernel-team@...com,
        mingo@...hat.com, luto@...nel.org, tglx@...utronix.de,
        x86@...nel.org
Subject: [PATCH] x86,switch_mm: skip atomic operations for init_mm

Song noticed switch_mm_irqs_off taking a lot of CPU time in recent
kernels, using 2.4% of a 48 CPU system during a netperf run to localhost.
Digging into the profile, we noticed that cpumask_clear_cpu and
cpumask_set_cpu together take about half of the CPU time taken by
switch_mm_irqs_off.
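
(For reference, those two helpers are thin wrappers around atomic bit
operations on the mm's CPU bitmap; roughly, with cpumask_check() and
other details elided:

static inline void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
{
	set_bit(cpu, cpumask_bits(dstp));	/* atomic RMW, e.g. lock bts on x86 */
}

static inline void cpumask_clear_cpu(int cpu, struct cpumask *dstp)
{
	clear_bit(cpu, cpumask_bits(dstp));	/* atomic RMW, e.g. lock btr on x86 */
}

so every context switch pays two locked operations on a cache line
shared by every CPU that runs tasks in that mm.)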

However, the CPUs running netperf end up switching back and forth
between netperf and the idle task, which does not require changes
to the mm_cpumask. Furthermore, the init_mm cpumask ends up being
the most heavily contended one in the system.
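
(To illustrate the access pattern: on every switch to or from the idle
task, each CPU does an atomic set/clear pair on init_mm's shared mask.
A minimal userspace sketch of that pattern, purely illustrative, not
kernel code, with made-up thread and iteration counts:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 48			/* one "CPU" per thread */
#define ITERS    1000000

/* stand-in for init_mm's mm_cpumask(): one word shared by everybody */
static _Atomic unsigned long shared_mask;

static void *switch_loop(void *arg)
{
	unsigned long bit = 1UL << ((long)arg % 64);

	for (int i = 0; i < ITERS; i++) {
		atomic_fetch_or(&shared_mask, bit);	/* like cpumask_set_cpu on switch in */
		atomic_fetch_and(&shared_mask, ~bit);	/* like cpumask_clear_cpu on switch out */
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, switch_loop, (void *)i);
	for (long i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	printf("final mask: %#lx\n", atomic_load(&shared_mask));
	return 0;
}

Every thread's fetch_or/fetch_and bounces the same cache line between
cores; letting the init_mm transitions skip the two atomics removes the
hottest instance of that traffic.)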

Skipping cpumask_clear_cpu and cpumask_set_cpu for init_mm
(mostly the idle task) reduced CPU use of switch_mm_irqs_off
from 2.4% to 1.9% of the CPU, with the following netperf
command line:

./super_netperf 300 -P 0 -t TCP_RR -p 8888 -H kerneltest008.09.atn1 -l 30 \
     -- -r 300,300 -o -s 1M,1M -S 1M,1M

perf output w/o this patch:
    1.26%  netserver        [kernel.vmlinux]          [k] switch_mm_irqs_off
    1.17%  swapper          [kernel.vmlinux]          [k] switch_mm_irqs_off

perf output w/ this patch:
    1.01%  swapper          [kernel.vmlinux]          [k] switch_mm_irqs_off
    0.88%  netserver        [kernel.vmlinux]          [k] switch_mm_irqs_off

Netperf throughput is about the same before and after.

Signed-off-by: Rik van Riel <riel@...riel.com>
Reported-and-tested-by: Song Liu <songliubraving@...com>
---
 arch/x86/mm/tlb.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e055d1a06699..c8f9c550f7ec 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -288,12 +288,14 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		/* Stop remote flushes for the previous mm */
 		VM_WARN_ON_ONCE(!cpumask_test_cpu(cpu, mm_cpumask(real_prev)) &&
 				real_prev != &init_mm);
-		cpumask_clear_cpu(cpu, mm_cpumask(real_prev));
+		if (real_prev != &init_mm)
+			cpumask_clear_cpu(cpu, mm_cpumask(real_prev));
 
 		/*
 		 * Start remote flushes and then read tlb_gen.
 		 */
-		cpumask_set_cpu(cpu, mm_cpumask(next));
+		if (next != &init_mm)
+			cpumask_set_cpu(cpu, mm_cpumask(next));
 		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
 
 		choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
