Message-Id: <20180626173126.12296-1-riel@surriel.com>
Date:   Tue, 26 Jun 2018 13:31:20 -0400
From:   Rik van Riel <riel@...riel.com>
To:     linux-kernel@...r.kernel.org
Cc:     x86@...nel.org, luto@...nel.org, dave.hansen@...ux.intel.com,
        mingo@...nel.org, kernel-team@...com, tglx@...utronix.de,
        efault@....de, songliubraving@...com
Subject: [PATCH v2 0/7] x86,tlb,mm: make lazy TLB mode even lazier

Song noticed switch_mm_irqs_off taking a lot of CPU time in recent
kernels, using 1.9% of a 48 CPU system during a netperf run. Digging
into the profile shows that the atomic operations in cpumask_clear_cpu
and cpumask_set_cpu are responsible for about half of that CPU use.
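
For reference, the operations in question are atomic read-modify-writes
on a shared bitmap. Below is a minimal userspace sketch of what they
cost; GCC atomic builtins stand in for the kernel's locked bitops, and
the sizes and layout are illustrative, not the kernel's:

#include <stdio.h>

#define NR_CPUS       48
#define BITS_PER_LONG (8 * sizeof(unsigned long))

static unsigned long mm_cpumask[(NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG];

static void cpumask_set_cpu(int cpu, unsigned long *mask)
{
        /* One locked or-into-memory per context switch into the mm. */
        __atomic_fetch_or(&mask[cpu / BITS_PER_LONG],
                          1UL << (cpu % BITS_PER_LONG), __ATOMIC_RELAXED);
}

static void cpumask_clear_cpu(int cpu, unsigned long *mask)
{
        /* ...and one locked and-not per context switch away from it. */
        __atomic_fetch_and(&mask[cpu / BITS_PER_LONG],
                           ~(1UL << (cpu % BITS_PER_LONG)), __ATOMIC_RELAXED);
}

int main(void)
{
        cpumask_set_cpu(3, mm_cpumask);    /* CPU 3 starts using the mm */
        printf("after set:   %#lx\n", mm_cpumask[0]);
        cpumask_clear_cpu(3, mm_cpumask);  /* CPU 3 switches away again */
        printf("after clear: %#lx\n", mm_cpumask[0]);
        return 0;
}

With all 48 CPUs switching in and out of the same mm, those two ops
keep bouncing the bitmap's cache lines around the machine.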

However, the CPUs running netperf are simply switching back and
forth between netperf and the idle task, which would not require any
changes to the mm_cpumask if lazy TLB mode were used.
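
To make that concrete, here is a toy single-CPU model (illustrative
only, not the actual switch_mm_irqs_off code) of how lazy TLB mode
sidesteps the mm_cpumask traffic on a task<->idle ping-pong; the
counter stands in for the atomic bitmap writes sketched above:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct mm { int id; };

static struct mm *active_mm;    /* per-CPU in the kernel; one CPU here */
static bool cpu_lazy;
static int cpumask_writes;      /* stands in for the atomic bitmap ops */

/* Toy stand-in for switch_mm_irqs_off() on a single CPU. */
static void switch_mm_sketch(struct mm *next_mm)
{
        if (!next_mm) {
                /* Idle/kernel thread: keep active_mm loaded, go lazy. */
                cpu_lazy = true;
        } else if (next_mm == active_mm) {
                /* Waking back into the same mm: no cpumask write.     */
                cpu_lazy = false;
        } else {
                /* A real mm switch is where the atomic ops happen.    */
                cpumask_writes += active_mm ? 2 : 1;
                active_mm = next_mm;
                cpu_lazy = false;
        }
}

int main(void)
{
        struct mm netperf_mm = { .id = 1 };

        for (int i = 0; i < 1000; i++) {
                switch_mm_sketch(&netperf_mm);  /* run netperf         */
                switch_mm_sketch(NULL);         /* block; go idle      */
        }
        /* Only the very first switch touched the bitmap. */
        printf("cpumask writes: %d\n", cpumask_writes);
        return 0;
}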

Additionally, the init_mm cpumask ends up being the most heavily
contended one in the system, for no reason at all.

Making really lazy TLB mode work again on modern kernels, by sending
a shootdown IPI to lazy TLB CPUs only when page table pages are being
unmapped, gets some of that performance back.
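
A rough userspace model of that rule follows; the names
needs_flush_ipi and exit_lazy_tlb are made up for this sketch, not
the kernel's API, and the real logic lives in the arch TLB flush
code. Ordinary flushes just mark a lazy CPU stale, while flushes that
free page table pages must still IPI it, because the hardware can
walk page tables speculatively at any time:

#include <stdbool.h>
#include <stdio.h>

struct cpu_state {
        bool is_lazy;        /* running the idle task with a borrowed mm */
        bool tlb_stale;      /* must flush before using the mm again     */
};

/* Decide whether a flush request needs to interrupt this CPU. */
static bool needs_flush_ipi(struct cpu_state *cpu, bool freed_page_tables)
{
        if (!cpu->is_lazy)
                return true;            /* actively using the mm: IPI it  */
        if (freed_page_tables)
                return true;            /* page tables going away: IPI it */
        cpu->tlb_stale = true;          /* otherwise just mark it stale   */
        return false;
}

/* On switching from lazy mode back to using the mm for real. */
static void exit_lazy_tlb(struct cpu_state *cpu)
{
        cpu->is_lazy = false;
        if (cpu->tlb_stale) {
                cpu->tlb_stale = false;
                puts("local TLB flush instead of an earlier IPI");
        }
}

int main(void)
{
        struct cpu_state cpu = { .is_lazy = true };

        printf("IPI needed? %d\n", needs_flush_ipi(&cpu, false)); /* 0 */
        exit_lazy_tlb(&cpu);            /* reconcile with one local flush */
        return 0;
}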

v2 of the series implements things the way Andy Lutomirski
suggested, which is a nice simplification over v1. If patch 3 looks
large, that is because some of the existing code changed indentation
so it can be used by both branches of the if/else test.


Song's netperf performance results:

w/o patchset:

0.95%  swapper          [kernel.vmlinux]          [k] switch_mm_irqs_off
0.77%  netserver        [kernel.vmlinux]          [k] switch_mm_irqs_off

w/ patchset:

Throughput: 1.74075e+06
0.87%  swapper          [kernel.vmlinux]          [k] switch_mm_irqs_off

With these patches, netserver no longer calls switch_mm_irqs_off,
and the CPU use of enter_lazy_tlb falls below the 0.05% reporting
threshold of the statistics gathered by Song's scripts.


With a memcache-style workload, performance does not change
measurably, but the amount of CPU time used by switch_mm_irqs_off and
other parts of the context switch code does appear to go down in
profiles.


I am still working on a patch to also get rid of the continuous
pounding on mm->mm_count during lazy TLB entry and exit, when the
same mm_struct is being used all the time. I do not have that
working yet.
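
The pattern in question, modeled in userspace for illustration
(mmgrab and mmdrop do exist as kernel helpers, but this struct and
counter are stand-ins, not the real mm_struct):

#include <stdio.h>

struct mm_struct_sketch {
        long mm_count;  /* stands in for mm_struct.mm_count */
};

static void mmgrab(struct mm_struct_sketch *mm)
{
        __atomic_fetch_add(&mm->mm_count, 1, __ATOMIC_RELAXED);
}

static void mmdrop(struct mm_struct_sketch *mm)
{
        __atomic_fetch_sub(&mm->mm_count, 1, __ATOMIC_RELAXED);
}

int main(void)
{
        struct mm_struct_sketch mm = { .mm_count = 1 };

        /* One idle<->task round trip = two atomic ops on mm_count,
         * even though the mm being used never actually changes. */
        mmgrab(&mm);    /* enter lazy TLB mode; idle keeps mm alive */
        mmdrop(&mm);    /* exit lazy TLB mode                       */
        printf("mm_count = %ld\n", mm.mm_count);
        return 0;
}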

Until then, these patches provide a nice performance boost, as well
as a small memory saving from shrinking the size of mm_struct.
