Message-ID: <20090928184016.GB10693@Krystal>
Date:	Mon, 28 Sep 2009 14:40:16 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Arjan van de Ven <arjan@...radead.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Andi Kleen <andi@...stfloor.org>, Ingo Molnar <mingo@...e.hu>,
	linux-kernel@...r.kernel.org, Jason Baron <jbaron@...hat.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Adrian Bunk <bunk@...sta.de>,
	Christoph Hellwig <hch@...radead.org>
Subject: Re: [patch 02/12] Immediate Values - Architecture Independent Code

* Arjan van de Ven (arjan@...radead.org) wrote:
> On Mon, 28 Sep 2009 10:46:17 -0700
> Andrew Morton <akpm@...ux-foundation.org> wrote:
> > 
> > Kernel gets a lot of cache misses, but that's usually against
> > userspace, pagecache, net headers/data, etc.  I doubt if it gets many
> > misses against a small number of small, read-mostly data items which
> > is what this patch addresses.
> > 
> > And it is a *small* number of things to which this change is
> > applicable.  This is because the write operation for these read-mostly
> > variables becomes very expensive indeed.  This means that we cannot
> > use "immediate values" for any variable which can conceivable be
> > modified at high frequency by any workload.
> 
> btw just to add to this:
> caches are unified code/data after L1 in general... it then does not
> matter much if you encode the "almost constant" in the codestream or
> slightly farther away, in both cases it takes up cache space.

A standard read from memory typically needs the address of the data encoded
as an operand of the load instruction (in i-cache), plus the data itself
in d-cache.

Compared to this, an immediate value encodes the data directly in place of
the pointer operand, so no d-cache line is needed for it at all, and the
overall footprint, even for L2 cache, is lower.
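
Concretely, for a read-mostly flag the two read paths look roughly like
this on x86 (a hand-written sketch using GCC inline asm, with made-up
names, not the code actually generated by this patch):

/* Ordinary read-mostly variable: the load carries the (RIP-relative)
 * address as part of the instruction in i-cache, and pulls the
 * variable's cache line into d-cache.
 */
static int tracing_on;

static inline int read_via_memory(void)
{
	return tracing_on;		/* movl tracing_on(%rip), %eax */
}

/* Immediate value: the current value is baked into the instruction
 * stream as an immediate operand, so only the instruction bytes are
 * touched; no d-cache line is needed for the data.  (The real patch
 * also records where the operand lives so it can be patched later.)
 */
static inline int read_via_immediate(void)
{
	int val;

	asm volatile("movl $0, %0"	/* $0 is the patchable immediate */
		     : "=r" (val));
	return val;
}

Both return the same thing; the difference is only in which cache lines
get touched on the read side.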

> (you can argue "but in the data case it might pull in a whole cacheline
> just for this".. but that's a case for us to pack such read mostly
> things properly)
> 
> And for L1.. well.. the L2 latency is not THAT much bigger. And L1 is 
> tiny. more icache pressure hurts just as much as having more dcache
> pressure there.

Immediate values do not add i-cache pressure; they just remove d-cache
pressure. So they save L1 d-cache space, and L1 i-cache pressure stays
mostly unchanged.
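
The flip side, and the reason this only pays off for read-mostly data as
Andrew points out above, is the write path: updating an immediate value
means patching the immediate operand at every load site in the text,
roughly along these lines (a sketch with hypothetical names; the real
update path goes through the architecture's code-patching machinery):

#include <string.h>

/* One entry per load site: the address of the immediate bytes inside
 * the instruction.
 */
struct imm_site {
	void *operand;
};

/* "Writing the variable" walks the site table and rewrites code.  A bare
 * memcpy() stands in for the real code-patching path (make the page
 * writable, synchronize the other CPUs, flush i-cache); the point is the
 * cost structure: one code write per load site instead of one store.
 */
static void imm_update_int(struct imm_site *sites, int count, int new_val)
{
	int i;

	for (i = 0; i < count; i++)
		memcpy(sites[i].operand, &new_val, sizeof(new_val));
}

That cost is why this is only worth doing for variables that are almost
never written.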

Thanks,

Mathieu


> 
> 
> -- 
> Arjan van de Ven 	Intel Open Source Technology Centre
> For development, discussion and tips for power savings, 
> visit http://www.lesswatts.org

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
