Message-ID: <20090928200317.64a419ff@infradead.org>
Date:	Mon, 28 Sep 2009 20:03:17 +0200
From:	Arjan van de Ven <arjan@...radead.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Andi Kleen <andi@...stfloor.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	Jason Baron <jbaron@...hat.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Adrian Bunk <bunk@...sta.de>,
	Christoph Hellwig <hch@...radead.org>
Subject: Re: [patch 02/12] Immediate Values - Architecture Independent Code

On Mon, 28 Sep 2009 10:46:17 -0700
Andrew Morton <akpm@...ux-foundation.org> wrote:
> 
> Kernel gets a lot of cache misses, but that's usually against
> userspace, pagecache, net headers/data, etc.  I doubt if it gets many
> misses against a small number of small, read-mostly data items which
> is what this patch addresses.
> 
> And it is a *small* number of things to which this change is
> applicable.  This is because the write operation for these read-mostly
> variables becomes very expensive indeed.  This means that we cannot
> use "immediate values" for any variable which can conceivable be
> modified at high frequency by any workload.
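[Editor's note: a minimal userspace sketch of the trade-off described
above, using illustrative names such as tracing_enabled; it is not the
patch itself. A normal read-mostly global costs a data load on every
read, while an "immediate value" is encoded into the instruction bytes,
so reads cause no data traffic at all but every update means patching
the code at each read site.]

	#include <stdio.h>

	/* Normal case: readers load the flag from memory, so the value
	 * occupies a data cache line somewhere. */
	static int tracing_enabled = 1;

	static int read_via_memory(void)
	{
		return tracing_enabled;	/* memory load; dcache traffic */
	}

	/* "Immediate value" idea: the value is an operand inside the
	 * instruction stream itself, so a read needs no data access.
	 * The price is the write side: changing it means rewriting the
	 * code at every such site (with cross-CPU synchronization in
	 * the kernel), which is why it only pays off for variables
	 * that are almost never written. */
	static int read_via_immediate(void)
	{
		return 1;		/* constant baked into the code bytes */
	}

	int main(void)
	{
		printf("%d %d\n", read_via_memory(), read_via_immediate());
		return 0;
	}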

btw, just to add to this:
caches are generally unified (code/data) beyond L1... so it does not
matter much whether you encode the "almost constant" in the code stream
or slightly farther away; either way it takes up cache space.
(you can argue "but in the data case it might pull in a whole cache line
just for this"... but that's a case for us to pack such read-mostly
things properly)

And for L1... well, the L2 latency is not THAT much bigger, and L1 is
tiny; more icache pressure hurts just as much as more dcache pressure
there.
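
[Editor's note: to make the packing point concrete, a userspace sketch
rather than kernel code. In the kernel the __read_mostly annotation
collects such variables into their own section; here an aligned struct
makes the same point with hypothetical fields.]

	#include <stdint.h>
	#include <stdio.h>

	#define CACHELINE 64

	/* Group rarely-written globals so they share a cache line with
	 * each other rather than with frequently-written data; one miss
	 * then pulls in the whole set of read-mostly items at once. */
	struct read_mostly_flags {
		int tracing_enabled;
		int debug_level;
		uint64_t feature_mask;
	} __attribute__((aligned(CACHELINE)));

	static struct read_mostly_flags flags = { 1, 0, 0 };

	int main(void)
	{
		printf("sizeof=%zu, aligned to %d bytes\n",
		       sizeof(flags), CACHELINE);
		return 0;
	}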


-- 
Arjan van de Ven 	Intel Open Source Technology Centre
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org
