Message-Id: <20180706215658.18018-1-riel@surriel.com>
Date: Fri, 6 Jul 2018 17:56:51 -0400
From: Rik van Riel <riel@...riel.com>
To: linux-kernel@...r.kernel.org
Cc: x86@...nel.org, luto@...nel.org, dave.hansen@...ux.intel.com,
mingo@...nel.org, kernel-team@...com, tglx@...utronix.de,
efault@....de, songliubraving@...com, hpa@...or.com
Subject: [PATCH v4 0/7] x86,tlb,mm: make lazy TLB mode even lazier

Song noticed switch_mm_irqs_off taking a lot of CPU time in recent
kernels, using 1.9% of a 48 CPU system during a netperf run. Digging
into the profile, the atomic operations in cpumask_clear_cpu and
cpumask_set_cpu are responsible for about half of that CPU use.
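
For context, this is roughly where those atomic operations sit in the
context switch path. The sketch below is heavily simplified (the real
function also deals with PCIDs and TLB generation tracking); it is
only meant to show that every mm switch does two atomic
read-modify-write operations on shared cachelines:

static void switch_mm_irqs_off(struct mm_struct *prev,
                               struct mm_struct *next,
                               struct task_struct *tsk)
{
        unsigned int cpu = smp_processor_id();

        if (prev != next) {
                /*
                 * Two atomic RMW operations on bitmaps shared by
                 * every CPU that is (or recently was) running these
                 * mms; this is the cacheline contention that shows
                 * up in Song's profile.
                 */
                cpumask_clear_cpu(cpu, mm_cpumask(prev));
                cpumask_set_cpu(cpu, mm_cpumask(next));

                /* Switch page tables. */
                load_cr3(next->pgd);
        }
}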

However, the CPUs running netperf are simply switching back and
forth between netperf and the idle task, which would not require
any changes to the mm_cpumask if lazy TLB mode were used.

Additionally, the init_mm cpumask ends up being the most heavily
contended one in the system, for no reason at all.
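
For readers not familiar with it, the idea behind lazy TLB mode is
that a CPU switching to a kernel thread (such as the idle task) keeps
the previous page tables loaded and merely marks itself lazy, rather
than switching to init_mm and updating cpumasks. A minimal sketch
along the lines of the existing x86 code (simplified here, and not
code from this series):

/*
 * Called when a CPU switches to a kernel thread, e.g. the idle task.
 * The previous mm stays loaded in CR3 and no mm_cpumask bits are
 * touched; the CPU just notes that its TLB may go stale.
 */
void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
        if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
                return;

        this_cpu_write(cpu_tlbstate.is_lazy, true);
}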

By making really lazy TLB mode work again on modern kernels, where a
shootdown IPI is sent to lazy CPUs only when page table pages are
being unmapped, we get back some of that performance.
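
The trade-off can be summarized with a small hypothetical helper;
this is illustrative only (the helper name and the
freeing_page_tables argument are made up for this example), not the
code in the patches themselves:

/*
 * Decide whether a CPU in mm_cpumask(mm) needs a TLB shootdown IPI.
 */
static bool cpu_needs_flush_ipi(int cpu, bool freeing_page_tables)
{
        /* CPUs not in lazy TLB mode always get the IPI. */
        if (!per_cpu(cpu_tlbstate.is_lazy, cpu))
                return true;

        /*
         * Lazy CPUs are running a kernel thread on borrowed page
         * tables, so they can defer the flush until they switch back
         * to a real mm -- unless page table pages themselves are
         * being freed, because the CPU's paging structure caches
         * could still reference those pages.
         */
        return freeing_page_tables;
}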

v4 of the series has a few minor cleanups; there are no functional
changes since v3.

On memcache workloads on 2 socket systems, this patch series seems
to reduce total system CPU use by 1-2%. On Song's netbench tests,
the CPU time spent in context switching is roughly cut in half.

These patches also provide a little memory savings by shrinking
the size of mm_struct, especially on distro kernels compiled with
a gigantically large NR_CPUS.
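
The NR_CPUS dependence comes from the embedded mm_cpumask. A rough
sketch of how such a saving can be achieved (illustrative only, with
made-up function names, not necessarily the exact approach in the
series): the CPU bitmap becomes a flexible array at the end of
mm_struct, and the slab cache for mm_struct is sized at boot from
cpumask_size(), which follows the actual number of possible CPUs on
CONFIG_CPUMASK_OFFSTACK kernels:

struct mm_struct {
        /* ... existing fields ... */

        unsigned long cpu_bitmap[];     /* must stay last */
};

static struct kmem_cache *mm_cachep;

static void __init mm_cache_init(void)
{
        unsigned int mm_size;

        /* Size the allocation for the CPUs we can actually have. */
        mm_size = sizeof(struct mm_struct) + cpumask_size();

        mm_cachep = kmem_cache_create("mm_struct", mm_size, 0,
                                      SLAB_HWCACHE_ALIGN|SLAB_ACCOUNT,
                                      NULL);
}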