Message-ID: <5319FEA1.50107@sr71.net>
Date: Fri, 07 Mar 2014 09:15:13 -0800
From: Dave Hansen <dave@...1.net>
To: Davidlohr Bueso <davidlohr@...com>
CC: linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
ak@...ux.intel.com, kirill.shutemov@...ux.intel.com,
mgorman@...e.de, alex.shi@...aro.org, x86@...nel.org,
linux-mm@...ck.org, dave.hansen@...ux.intel.com
Subject: Re: [PATCH 6/7] x86: mm: set TLB flush tunable to sane value

On 03/06/2014 05:55 PM, Davidlohr Bueso wrote:
> On Wed, 2014-03-05 at 16:45 -0800, Dave Hansen wrote:
>> From: Dave Hansen <dave.hansen@...ux.intel.com>
>>
>> Now that we have some shiny new tracepoints, we can actually
>> figure out what the heck is going on.
>>
>> During a kernel compile, 60% of the flush_tlb_mm_range() calls
>> are for a single page. It breaks down like this:
>
> It would be interesting to see similar data for opposite workloads with
> more random access patterns. That's normally when things start getting
> fun in the tlb world.

First of all, thanks for testing.  It's much appreciated!

Any suggestions for opposite workloads?

I've seen this tunable have a really heavy effect on ebizzy.  That
workload fits almost entirely within the itlb, and if we are doing full
flushes, it eats the itlb and increases the misses by about 10x.  Even
putting this tunable above 500 pages (which is pretty insane) didn't
help it.
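
To make the trade-off concrete, here's a rough sketch in plain
userspace C of the decision the tunable drives.  The names
(single_page_flush_ceiling, flush_whole_tlb, flush_one_page) and the
numeric values are made up for illustration; this is not the actual
flush_tlb_mm_range() code:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* the tunable under discussion (illustrative name, placeholder value) */
static unsigned long single_page_flush_ceiling = 33;

/* stand-ins for the real flush primitives (full flush vs. INVLPG) */
static void flush_whole_tlb(void)
{
	printf("full TLB flush\n");
}

static void flush_one_page(unsigned long addr)
{
	printf("INVLPG %#lx\n", addr);
}

static void flush_range(unsigned long start, unsigned long end)
{
	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
	unsigned long addr;

	if (nr_pages > single_page_flush_ceiling) {
		/*
		 * Big range: one full flush is cheaper than many INVLPGs,
		 * but it also evicts unrelated entries -- the itlb entries
		 * ebizzy leans on, for example.
		 */
		flush_whole_tlb();
		return;
	}

	/* small range: per-page invalidation keeps the rest of the TLB warm */
	for (addr = start; addr < end; addr += PAGE_SIZE)
		flush_one_page(addr);
}

int main(void)
{
	flush_range(0x400000, 0x401000);	/* 1 page    -> per-page flush */
	flush_range(0x400000, 0x500000);	/* 256 pages -> full flush     */
	return 0;
}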

Things that thrash the TLB don't really care if someone invalidates
their TLB since they're thrashing it anyway.

I've had a really hard time finding workloads that _care_ or are
affected by small changes in this tunable.  That's one of the reasons I
tried to simplify it: it's just not worth the complexity.