Message-ID: <20131220115854.GA11295@suse.de>
Date: Fri, 20 Dec 2013 12:00:11 +0000
From: Mel Gorman <mgorman@...e.de>
To: Ingo Molnar <mingo@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Alex Shi <alex.shi@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Fengguang Wu <fengguang.wu@...el.com>,
H Peter Anvin <hpa@...or.com>, Linux-X86 <x86@...nel.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB
range flush v2
On Fri, Dec 20, 2013 at 12:18:18PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman <mgorman@...e.de> wrote:
>
> > On Thu, Dec 19, 2013 at 05:49:25PM +0100, Ingo Molnar wrote:
> > >
> > > * Mel Gorman <mgorman@...e.de> wrote:
> > >
> > > > [...]
> > > >
> > > > Because we lack data on TLB range flush distributions I think we
> > > > should still go with the conservative choice for the TLB flush
> > > > shift. The worst case is really bad here and it's painfully obvious
> > > > on ebizzy.
> > >
> > > So I'm obviously much in favor of this - I'd in fact suggest
> > > making the conservative choice on _all_ CPU models that have
> > > aggressive TLB range values right now, because frankly the testing
> > > used to pick those values does not look all that convincing to me.
> >
> > I think the choices there are already reasonably conservative. I'd
> > be reluctant to support merging a patch that made a choice on all
> > CPU models without having access to the machines to run tests on. I
> > don't see the Intel people volunteering to do the necessary testing.
>
> So based on this thread I lost confidence in test results on all CPU
> models but the one you tested.
>
> I see two workable options right now:
>
> - We turn the feature off on all other CPU models, until someone
> measures and tunes them reliably.
>
That would mean setting tlb_flushall_shift to -1 on all of those models. I
think it's overkill, but it's not really my call.
HPA?
> or
>
> - We make all tunings that are more aggressive than yours to match
> yours. In the future people can measure and argue for more
> aggressive tunings.
>
I'm missing something obvious, because switching the default to 2 would use
individual page flushes more aggressively, which I do not think was your
intent. The basic check is:
	if (tlb_flushall_shift == -1)
		flush all

	act_entries = tlb_entries >> tlb_flushall_shift;
	nr_base_pages = range to flush
	if (nr_base_pages > act_entries)
		flush all
	else
		flush individual pages
Full mm flush is the "safe" bet:

	tlb_flushall_shift == -1   Always use flush all
	tlb_flushall_shift ==  1   Aggressively use individual flushes
	tlb_flushall_shift ==  6   Conservatively use individual flushes
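
Rendered as standalone C, that check looks like the sketch below. It is
purely illustrative: the function name and the 512-entry TLB size are
mine, not the kernel's.

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * Illustration of the check above: decide whether a full mm flush
	 * is used instead of flushing nr_base_pages individually.
	 * tlb_flushall_need_all() is a made-up name, not a kernel function.
	 */
	static bool tlb_flushall_need_all(unsigned int tlb_entries,
					  int tlb_flushall_shift,
					  unsigned long nr_base_pages)
	{
		unsigned long act_entries;

		/* A shift of -1 disables range flushing: always flush all */
		if (tlb_flushall_shift == -1)
			return true;

		act_entries = tlb_entries >> tlb_flushall_shift;
		return nr_base_pages > act_entries;
	}

	int main(void)
	{
		/* 512 entries and a 100-page range are assumed examples */
		printf("%s\n", tlb_flushall_need_all(512, 2, 100) ?
		       "flush all" : "flush individual pages");
		return 0;
	}

The smaller the shift, the larger act_entries becomes and the more ranges
qualify for individual flushes, which is the sense in which a small shift
is the aggressive choice.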
IvyBridge was too aggressive about using individual flushes, and my patch
makes it less aggressive.
Intel's code for this currently looks like:

	switch ((c->x86 << 8) + c->x86_model) {
	case 0x60f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
	case 0x616: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
	case 0x617: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
	case 0x61d: /* six-core 45 nm xeon "Dunnington" */
		tlb_flushall_shift = -1;
		break;
	case 0x61a: /* 45 nm nehalem, "Bloomfield" */
	case 0x61e: /* 45 nm nehalem, "Lynnfield" */
	case 0x625: /* 32 nm nehalem, "Clarkdale" */
	case 0x62c: /* 32 nm nehalem, "Gulftown" */
	case 0x62e: /* 45 nm nehalem-ex, "Beckton" */
	case 0x62f: /* 32 nm Xeon E7 */
		tlb_flushall_shift = 6;
		break;
	case 0x62a: /* SandyBridge */
	case 0x62d: /* SandyBridge, "Romely-EP" */
		tlb_flushall_shift = 5;
		break;
	case 0x63a: /* Ivybridge */
		tlb_flushall_shift = 2;
		break;
	default:
		tlb_flushall_shift = 6;
	}
That default shift of "6" is already conservative, which is why I don't
think we need to change anything there. AMD is slightly more aggressive in
its choices, but not enough to panic about.
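
To put rough numbers on "conservative" versus "aggressive", here is a
throwaway calculation assuming a 512-entry second-level TLB; the kernel
reads the real entry count from CPUID and it differs between models.

	#include <stdio.h>

	int main(void)
	{
		/* Assumed TLB size purely for illustration */
		const unsigned int tlb_entries = 512;
		/* IvyBridge, SandyBridge and the default, respectively */
		const int shifts[] = { 2, 5, 6 };
		unsigned int i;

		for (i = 0; i < sizeof(shifts) / sizeof(shifts[0]); i++)
			printf("shift %d: individual flushes for ranges up to %u pages\n",
			       shifts[i], tlb_entries >> shifts[i]);
		return 0;
	}

With that assumption, IvyBridge's shift of 2 flushes ranges of up to 128
pages one page at a time, SandyBridge's 5 caps that at 16 pages and the
default of 6 at 8 pages, so the default is already at the cautious end of
the scale and IvyBridge is the outlier.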
--
Mel Gorman
SUSE Labs