Date:	Mon, 09 Feb 2015 17:47:14 +0800
From:	Daniel Thompson <daniel.thompson@...aro.org>
To:	Will Deacon <will.deacon@....com>
CC:	Thomas Gleixner <tglx@...utronix.de>,
	John Stultz <john.stultz@...aro.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"patches@...aro.org" <patches@...aro.org>,
	"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>,
	Sumit Semwal <sumit.semwal@...aro.org>,
	Stephen Boyd <sboyd@...eaurora.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Russell King <linux@....linux.org.uk>,
	Catalin Marinas <Catalin.Marinas@....com>
Subject: Re: [PATCH v4 2/5] sched_clock: Optimize cache line usage


On 09/02/15 09:28, Will Deacon wrote:
> On Sun, Feb 08, 2015 at 12:02:37PM +0000, Daniel Thompson wrote:
>> Currently sched_clock(), a very hot code path, is not optimized to
>> minimise its cache profile. In particular:
>>
>>   1. cd is not ____cacheline_aligned,
>>
>>   2. struct clock_data does not distinguish between hotpath and
>>      coldpath data, reducing locality of reference in the hotpath,
>>
>>   3. Some hotpath data is missing from struct clock_data and is marked
>>      __read_mostly (which more or less guarantees it will not share a
>>      cache line with cd).
>>
>> This patch corrects these problems by extracting all hotpath data
>> into a separate structure and using ____cacheline_aligned to ensure
>> the hotpath uses a single (64 byte) cache line.
> 
> Have you got any performance figures for this change, or is this just a
> theoretical optimisation? It would be interesting to see what effect this
> has on systems with 32-byte cachelines and also scenarios where there's
> contention on the sequence counter.

Most of my testing has focused on proving that the NMI safety parts of
the patch work as advertised, so it's mostly a theoretical optimisation.

However, there are some numbers from simple tight-loop calls to
sched_clock() (Stephen Boyd's results are more interesting than mine
because I observe pretty wild quantization effects that render my
results hard to trust):
http://thread.gmane.org/gmane.linux.kernel/1871157/focus=1879265

I'm not sure what figures would be useful for a contended sequence
counter. Firstly, the counter is only taken for write at 7/8 of the wrap
time, so even for the fastest timers the interval between updates is
likely to be >3s, and each update is of very short duration.
Additionally, the NMI safety changes make it possible to read the timer
whilst it is being updated, so it is only during the very short
struct-copy/write/struct-copy/write update sequence that a reader will
observe the extra cache line. Benchmarks that show the effect of the
update are therefore non-trivial to construct.

