Message-Id: <20180728215357.3249-1-riel@surriel.com>
Date: Sat, 28 Jul 2018 17:53:47 -0400
From: Rik van Riel <riel@...riel.com>
To: linux-kernel@...r.kernel.org
Cc: kernel-team@...com, peterz@...radead.org, luto@...nel.org,
x86@...nel.org, vkuznets@...hat.com, mingo@...nel.org,
efault@....de, dave.hansen@...el.com, will.deacon@....com,
catalin.marinas@....com, benh@...nel.crashing.org
Subject: [PATCH 0/10] x86,tlb,mm: more lazy TLB cleanups & optimizations

This patch series implements the cleanups suggested by Peter and Andy,
removes lazy TLB mm refcounting on x86, and shows how other architectures
could implement that same optimization.
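
For readers who have not followed the earlier discussion: when a kernel
thread is scheduled in, it keeps the previous task's mm loaded ("lazy TLB"
mode), and the traditional scheme pins that mm with an mm_count reference
(mmgrab()/mmdrop()) so it cannot be freed underneath the kernel thread.
The sketch below is not code from this series; it is a small standalone
userspace model, with made-up names (struct mm_model, lazy_tlb_grab, etc.),
of where those atomic reference-count operations sit on the context switch
path and why skipping them avoids bouncing a hot cache line between CPUs.

/*
 * Standalone sketch (not kernel code): models the lazy-TLB refcount
 * that the series removes on x86.  All names here are illustrative.
 */
#include <stdatomic.h>
#include <stdio.h>

struct mm_model {
	atomic_int mm_count;	/* stands in for mm_struct::mm_count */
};

/* Traditional scheme: every lazy-TLB borrow bumps the shared counter. */
static void lazy_tlb_grab(struct mm_model *mm)
{
	atomic_fetch_add(&mm->mm_count, 1);	/* shared cache line bounces here */
}

static void lazy_tlb_drop(struct mm_model *mm)
{
	if (atomic_fetch_sub(&mm->mm_count, 1) == 1)
		printf("last reference gone, mm could now be freed\n");
}

/*
 * Refcount-free scheme (roughly what the series does on x86): the borrow
 * itself touches no shared counter; instead, the mm teardown path makes
 * sure no CPU is still using the mm lazily, piggybacking on the TLB
 * shootdown IPIs it already has to send.  Modeled here as a no-op borrow.
 */
static void lazy_tlb_grab_refcount_free(struct mm_model *mm)
{
	(void)mm;	/* no shared-counter write on the context switch path */
}

int main(void)
{
	struct mm_model mm = { .mm_count = 1 };	/* owner holds one reference */

	/* A kernel thread "borrows" the mm across a context switch. */
	lazy_tlb_grab(&mm);
	lazy_tlb_drop(&mm);

	/* The same borrow with lazy TLB refcounting removed. */
	lazy_tlb_grab_refcount_free(&mm);

	lazy_tlb_drop(&mm);	/* original owner exits, mm is freed */
	return 0;
}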

The previous patch series already seems to have removed most of the
cache line contention I was seeing at context switch time, so CPU use
by the memcache and memcache-like workloads has not changed measurably
with this patch series.

However, the memory bandwidth used by the memcache system has been
reduced by about 1% while serving the same number of queries per second.
This is on two-socket Haswell and Broadwell systems. Larger systems
(4 or 8 sockets) might also see a measurable drop in the amount of CPU
time used, with workloads where the previous patch series does not
remove all cache line contention on the mm.

This series is against the latest -tip tree, and seems to be stable (on
top of another tree) with workloads that do over a million context
switches a second.