Message-ID: <53B44E4E.6020706@sr71.net>
Date: Wed, 02 Jul 2014 11:24:14 -0700
From: Dave Hansen <dave@...1.net>
To: David Nellans <david@...lans.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"hpa@...or.com" <hpa@...or.com>,
"mingo@...hat.com" <mingo@...hat.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"x86@...nel.org" <x86@...nel.org>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"riel@...hat.com" <riel@...hat.com>,
"mgorman@...e.de" <mgorman@...e.de>
Subject: Re: [PATCH 7/7] x86: mm: set TLB flush tunable to sane value (33)
On 07/02/2014 11:16 AM, David Nellans wrote:
> Intuition here is that invalidate-caused refills will almost always
> be serviced from the L2 or better, since we've recently walked the
> page tables to modify the page needing the flush and thus pre-warmed
> the caches for any refill? Or is this an artifact of the flush/refill
> test setup?
There are lots of caches in play, not just the CPU's normal L1/2/3
memory caches. See "4.10.3 Paging-Structure Caches" in the Intel SDM.
I _believe_ TLB misses can be serviced from these caches; their
purpose is to avoid going out to memory (or the memory caches).
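
To make that concrete, here is a minimal userspace sketch of the
9-bits-per-level index split that those paging-structure caches
short-circuit. The address is arbitrary and this is only an
illustration, not kernel code:

#include <stdint.h>
#include <stdio.h>

/* x86-64 4-level paging: 9 bits of index per level, 12-bit page offset. */
#define VA_SHIFT(level)  (12 + 9 * (level))          /* 39, 30, 21, 12 */
#define VA_INDEX(va, l)  (((va) >> VA_SHIFT(l)) & 0x1ff)

int main(void)
{
	uint64_t va = 0x00007f1234567000ULL;

	/*
	 * A cold walk loads one entry per level from memory.  A hit in
	 * the PDE cache (SDM 4.10.3) skips the top three loads and
	 * leaves only the final PTE fetch.
	 */
	printf("PML4 index: %llu\n", (unsigned long long)VA_INDEX(va, 3));
	printf("PDPT index: %llu\n", (unsigned long long)VA_INDEX(va, 2));
	printf("PD   index: %llu\n", (unsigned long long)VA_INDEX(va, 1));
	printf("PT   index: %llu\n", (unsigned long long)VA_INDEX(va, 0));
	return 0;
}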
So I think the effect we're seeing comes from _all_ of these caches,
plus prefetching. If the hardware starts a prefetch for a TLB miss
before the instruction needing the TLB entry actually runs, you pay
less than the full cost of going out to memory (or the memory caches).
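
For reference, here's a simplified standalone model of the choice this
tunable controls. The real logic lives in flush_tlb_mm_range() in
arch/x86/mm/tlb.c; this sketch only mimics the full-flush-vs-INVLPG
decision, and the output strings are purely illustrative:

#include <stdio.h>

/* The value this patch settles on for tlb_single_page_flush_ceiling. */
#define TLB_SINGLE_PAGE_FLUSH_CEILING	33

static void flush_decision(unsigned long npages)
{
	if (npages > TLB_SINGLE_PAGE_FLUSH_CEILING)
		printf("%5lu pages -> full TLB flush\n", npages);
	else
		printf("%5lu pages -> %lu x INVLPG\n", npages, npages);
}

int main(void)
{
	flush_decision(1);
	flush_decision(33);	/* at the ceiling: still per-page */
	flush_decision(34);	/* over the ceiling: full flush */
	return 0;
}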
--