Message-ID: <20091123151532.GB19175@wotan.suse.de>
Date: Mon, 23 Nov 2009 16:15:32 +0100
From: Nick Piggin <npiggin@...e.de>
To: Arjan van de Ven <arjan@...radead.org>
Cc: Ingo Molnar <mingo@...e.hu>, Jan Beulich <JBeulich@...ell.com>,
tglx@...utronix.de, linux-kernel@...r.kernel.org, hpa@...or.com,
Ravikiran Thirumalai <kiran@...lex86.org>,
Shai Fultheim <shai@...lemp.com>
Subject: Re: [PATCH] x86: eliminate redundant/contradicting cache line size config options
On Mon, Nov 23, 2009 at 06:52:01AM -0800, Arjan van de Ven wrote:
> On Mon, 23 Nov 2009 09:34:59 +0100
> Ingo Molnar <mingo@...e.hu> wrote:
>
> >
> > * Arjan van de Ven <arjan@...radead.org> wrote:
> >
> > > On Thu, 19 Nov 2009 09:13:07 +0100
> > > Nick Piggin <npiggin@...e.de> wrote:
> > > >
> > > > My other point was just this, but I don't care too much. But it is
> > > > worded pretty negatively. The key here is that increasing the
> > > > value too large tends to only cost a very small amount of size
> > > > (and no increase in cacheline foot print, only RAM).
> > >
> > > 128 has a pretty significant impact on TPC-C benchmarks.....
> > > it was the top issue until mainline fixed it to default to 64
> >
> > Mind sending a patch that sets the default to 64 on NUMA too?
> >
> > P4 based NUMA boxes are ... a bad memory to be forgotten.
>
> this patch adds a regression. Linux has defaulted to 64 since... March or so.
>
> Now we go back to the old setting; Nick should fix that, or at least
> thoroughly document and justify this change....
Oh well, I didn't think the change to 128 should be done without
discussion, and I just raised my opinion. You did make some good
counterpoints, so perhaps mainline is best to continue with 64.
However, it would be nice to get to the bottom of why it cost
so much on your OLTP workload.
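
To make the size-vs-cache-footprint point above concrete, here is a
minimal userspace sketch (not kernel code; the L1_CACHE_SHIFT value and
the struct are stand-ins for illustration). Over-estimating the line
size at 128 bytes inflates the structure's size in RAM via padding, but
the hardware still only pulls in the lines that are actually touched:

	/* Illustrative sketch only -- not the kernel's definitions.
	 * Assume a configured shift of 7 (128 bytes) on hardware whose
	 * real cache line is 64 bytes. */
	#include <stdio.h>

	#define L1_CACHE_SHIFT 7                      /* hypothetical config value */
	#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)  /* 128 */

	/* Padding each structure out to the configured line size costs RAM
	 * (the padding bytes), but adds no cache footprint: only the bytes
	 * actually accessed are brought into the cache. */
	struct counter {
		unsigned long value;
	} __attribute__((aligned(L1_CACHE_BYTES)));

	int main(void)
	{
		printf("sizeof(struct counter) = %zu bytes\n", sizeof(struct counter));
		return 0;
	}

Built with gcc, this prints 128 even though only 8 bytes of each object
are ever touched, which is the "only RAM, not cacheline footprint" cost
being discussed.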