Message-Id: <20180620195652.27251-1-riel@surriel.com>
Date:   Wed, 20 Jun 2018 15:56:45 -0400
From:   Rik van Riel <riel@...riel.com>
To:     linux-kernel@...r.kernel.org
Cc:     86@...r.kernel.org, luto@...nel.org, mingo@...nel.org,
        tglx@...utronix.de, dave.hansen@...ux.intel.com, efault@....de,
        songliubraving@...com, kernel-team@...com
Subject: [PATCH 0/7] x86,tlb,mm: make lazy TLB mode even lazier

Song noticed switch_mm_irqs_off taking a lot of CPU time in recent
kernels, using 1.9% of a 48 CPU system during a netperf run. Digging
into the profile shows that the atomic operations in cpumask_clear_cpu
and cpumask_set_cpu are responsible for about half of that CPU use.
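
For reference, those atomic operations come from the mm_cpumask
handling that switch_mm_irqs_off does on every mm switch. A rough
sketch, heavily trimmed from arch/x86/mm/tlb.c (no PCID, tlb_gen or
prev == next handling; sketch_switch_mm is just an illustrative name):

/*
 * Simplified sketch of the mm_cpumask updates done on every mm
 * switch; trimmed from switch_mm_irqs_off() in arch/x86/mm/tlb.c.
 */
static void sketch_switch_mm(struct mm_struct *prev, struct mm_struct *next)
{
        unsigned int cpu = smp_processor_id();

        /* Stop remote TLB flushes for the old mm: atomic clear_bit. */
        cpumask_clear_cpu(cpu, mm_cpumask(prev));

        /* Start remote TLB flushes for the new mm: atomic set_bit. */
        cpumask_set_cpu(cpu, mm_cpumask(next));

        /* ... load CR3 with next->pgd, update cpu_tlbstate ... */
}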

However, the CPUs running netperf are simply switching back and
forth between netperf and the idle task, which would not require any
changes to the mm_cpumask if lazy TLB mode were used.

Additionally, the init_mm cpumask ends up being the most heavily
contended one in the system, for no reason at all.
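
To spell out what lazy TLB mode means here: when the scheduler hands
a CPU to a kernel thread such as the idle task, the CPU can keep the
previous user mm loaded and simply note that it is lazy, instead of
switching to init_mm and paying for two mm_cpumask updates. The
existing enter_lazy_tlb() in arch/x86/mm/tlb.c already works like
this in some configurations, and otherwise switches to init_mm;
roughly (sketch_enter_lazy_tlb is an illustrative name, details
omitted):

static void sketch_enter_lazy_tlb(struct mm_struct *mm,
                                  struct task_struct *tsk)
{
        if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
                return;

        /*
         * Keep the user mm loaded in CR3 and leave its mm_cpumask
         * bit alone; just remember that this CPU is not really
         * using the mm right now.
         */
        this_cpu_write(cpu_tlbstate.is_lazy, true);
}

Switching from the idle task back to the same process is then a
no-op as far as mm_cpumask is concerned.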

By making really lazy TLB mode work again on modern kernels, sending
a shootdown IPI only when page table pages are being unmapped, we get
back some performance.
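
The patches have the details, but the flush side roughly takes this
shape: a lazy CPU keeps its bit set in mm_cpumask, ordinary TLB
flush IPIs skip it, and only a flush that frees page table pages is
sent to everybody, because a lazy CPU may still walk those page
tables through its stale CR3 until it switches away. The sketch
below is illustrative only, not the actual patch code;
sketch_flush_tlb_others() and the freed_tables argument are made-up
names for that decision:

static void sketch_flush_tlb_others(const struct cpumask *cpumask,
                                    const struct flush_tlb_info *info,
                                    bool freed_tables)
{
        /* Sketch only: real code would not put a cpumask on the stack. */
        struct cpumask targets;
        unsigned int cpu;

        if (freed_tables) {
                /*
                 * Page tables are going away, so every CPU that has
                 * this mm loaded, lazy or not, must stop using them.
                 */
                smp_call_function_many(cpumask, flush_tlb_func_remote,
                                       (void *)info, 1);
                return;
        }

        /*
         * Ordinary flush: lazy CPUs are not running user code for
         * this mm and will catch up (or switch to init_mm) before
         * they use it again, so they can be skipped.
         */
        cpumask_clear(&targets);
        for_each_cpu(cpu, cpumask)
                if (!per_cpu(cpu_tlbstate.is_lazy, cpu))
                        cpumask_set_cpu(cpu, &targets);

        smp_call_function_many(&targets, flush_tlb_func_remote,
                               (void *)info, 1);
}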


Using a memcache style workload on Broadwell systems, these patches
result in about a 0.5% reduction of CPU use on the system. Numbers
on Haswell are inconclusive so far.


Song's netperf performance results:

w/o patchset:

Throughput: 1.74716e+06
perf profile:
+    0.95%  swapper          [kernel.vmlinux]          [k] switch_mm_irqs_off
+    0.82%  netserver        [kernel.vmlinux]          [k] switch_mm_irqs_off

w/ patchset:

Throughput: 1.76911e+06
perf profile:
+    0.81%  swapper          [kernel.vmlinux]          [k] switch_mm_irqs_off

With these patches, netserver no longer calls switch_mm_irqs_off,
and the CPU use of enter_lazy_tlb stayed below the 0.05% threshold
of the statistics gathered by Song's scripts.


I am still working on a patch to also get rid of the continuous
pounding on mm->mm_count during lazy TLB entry and exit, when the
same mm_struct is being used all the time. I do not have that
working yet.
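
For context, that refcount traffic comes from the scheduler side of
lazy TLB: every time a CPU lends the previous mm to a kernel thread
it takes a reference, and drops it again when the borrowed mm is
handed back, even if the CPU just keeps bouncing between the same
process and the idle task. Roughly, simplified from context_switch()
and finish_task_switch() in kernel/sched/core.c (the sketch_* names
are illustrative):

static void sketch_lazy_tlb_enter(struct task_struct *prev,
                                  struct task_struct *next)
{
        /* context_switch(): a kernel thread borrows the previous mm */
        next->active_mm = prev->active_mm;
        mmgrab(prev->active_mm);        /* atomic_inc(&mm->mm_count) */
        enter_lazy_tlb(prev->active_mm, next);
}

static void sketch_lazy_tlb_exit(struct mm_struct *borrowed_mm)
{
        /* finish_task_switch(): the borrowed mm is handed back */
        mmdrop(borrowed_mm);            /* atomic dec-and-test of mm_count */
}

That is two atomic operations on mm->mm_count for every round trip
through the idle task.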

Until then, these patches provide a nice performance boost, as well
as a small memory saving from shrinking the size of mm_struct.

