Message-ID: <20091119081307.GA20534@wotan.suse.de>
Date: Thu, 19 Nov 2009 09:13:07 +0100
From: Nick Piggin <npiggin@...e.de>
To: Arjan van de Ven <arjan@...radead.org>
Cc: Ingo Molnar <mingo@...e.hu>, Jan Beulich <JBeulich@...ell.com>,
tglx@...utronix.de, linux-kernel@...r.kernel.org, hpa@...or.com,
Ravikiran Thirumalai <kiran@...lex86.org>,
Shai Fultheim <shai@...lemp.com>
Subject: Re: [PATCH] x86: eliminate redundant/contradicting cache line size config options
On Wed, Nov 18, 2009 at 08:52:40PM -0800, Arjan van de Ven wrote:
> On Thu, 19 Nov 2009 04:56:40 +0100
> Ingo Molnar <mingo@...e.hu> wrote:
>
> >
> > * Nick Piggin <npiggin@...e.de> wrote:
> >
> > > The only other use for the L1 cache size macro is to pack objects to
> > > cachelines better (so they always use the fewest number of lines).
> > > But this case is rarer nowadays; people don't really count
> > > cachelines anymore. Even then, though, I think it makes sense for it
> > > to be the largest line size in the system, because we don't know how
> > > big the L1s are, and if you want optimal L1 packing, you likely also
> > > want optimal Ln packing.
> >
> > We could do that - but then this default of X86_INTERNODE_CACHE_SHIFT:
> >
> > + default "7" if NUMA
> >
> > will bite us and turn the 64-byte L1_CACHE_BYTES into an effective
> > 128-byte value.
> >
> > So ... are you arguing for an increase of the default x86 linesize to
> > 128 bytes?
Basically what I think we should do is consider L1_CACHE_BYTES to be
*the* correct default value to use for 1) avoiding false sharing (which
seems to be the most common use), and 2) optimal and repeatable per-object
packing into cachelines (which is more of a micro-optimization to be
applied carefully to really critical structures).
But in either case, I think that with these semantics it makes most
sense to use the largest line size in the system here (i.e. "L1" isn't
really a good name for the semantic, because generic code still doesn't
know enough about the L1 to do much with it).
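To make those two uses concrete, here is a minimal userspace sketch;
LINE_BYTES and the struct names are invented for illustration and just
stand in for L1_CACHE_BYTES, this is not kernel code:

#include <stdio.h>

#define LINE_BYTES 64	/* stand-in for L1_CACHE_BYTES in this sketch */

/* Use 1: keep an independently-written counter on its own line, so
 * stores from different CPUs don't bounce the same line around. */
struct hot_counter {
	unsigned long count;
} __attribute__((aligned(LINE_BYTES)));

/* Use 2: pack an object so it always spans the fewest whole lines; if
 * LINE_BYTES is smaller than the real line size, two objects can still
 * share a physical line and the packing is no longer repeatable. */
struct packed_obj {
	unsigned long a, b, c;
} __attribute__((aligned(LINE_BYTES)));

int main(void)
{
	printf("hot_counter stride: %zu bytes\n", sizeof(struct hot_counter));
	printf("packed_obj size:    %zu bytes (%zu line(s))\n",
	       sizeof(struct packed_obj),
	       (sizeof(struct packed_obj) + LINE_BYTES - 1) / LINE_BYTES);
	return 0;
}

Both only do what they promise if LINE_BYTES is at least as large as the
biggest line in the system, which is the point above.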
And then, I believe INTERNODE should basically just remain a way for
vSMP to fine-tune the size/speed tradeoff on their platform, which makes
sense because that platform is so dissimilar to everything else.
So with !VSMP, I think INTERNODE should just be the same as L1_CACHE_BYTES.
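Roughly what I mean, in cache.h style; this is only a sketch, and the
fallback value and CONFIG_ plumbing here are illustrative rather than an
actual patch:

#ifndef CONFIG_X86_L1_CACHE_SHIFT
#define CONFIG_X86_L1_CACHE_SHIFT 6	/* 64-byte lines, for the sketch */
#endif

#define L1_CACHE_SHIFT		CONFIG_X86_L1_CACHE_SHIFT
#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

#ifdef CONFIG_X86_VSMP
/* vSMP pads far enough apart for its page-sized coherence unit. */
# define INTERNODE_CACHE_SHIFT	12
#else
/* Everyone else: internode padding is just the ordinary line size. */
# define INTERNODE_CACHE_SHIFT	L1_CACHE_SHIFT
#endif

#define INTERNODE_CACHE_BYTES	(1 << INTERNODE_CACHE_SHIFT)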
So this was my main point.
> 128 is basically always wrong.
> (unless you have a P4... but for default really we should not care
> about those anymore)
My other point was just this, but I don't care too much about it; it is
worded pretty negatively, though. The key here is that making the value
too large tends to cost only a small amount of size (no increase in
cacheline footprint, only RAM), whereas making the value too small could
hurt scalability by orders of magnitude in the worst case.
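As a rough illustration of that asymmetry (the struct and the 64/128
figures are only for the sketch):

#include <stdio.h>

struct stats64  { unsigned long hits, misses; } __attribute__((aligned(64)));
struct stats128 { unsigned long hits, misses; } __attribute__((aligned(128)));

int main(void)
{
	/* Too big (128 on a 64-byte-line CPU): each object costs an
	 * extra 64 bytes of RAM and nothing else. */
	printf("per-object RAM cost: %zu -> %zu bytes\n",
	       sizeof(struct stats64), sizeof(struct stats128));

	/* Too small (64 on a 128-byte-line P4): two "private" objects
	 * land on one physical line, so cross-CPU updates turn into
	 * coherence misses; a runtime cost, not a size cost. */
	printf("objects sharing one 128-byte line when padded to 64: %zu\n",
	       128 / sizeof(struct stats64));
	return 0;
}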
For a generic optimised kernel that claims to support the P4, I think 128
has to be the correct value. If we want a "generic for everything except
P4" option, fine, but I have my doubts that 128 matters that much.
Not such a relevant question for mainline, but are distros going to use
64 soon? I would certainly hate to have a customer "upgrade" a big P4
system to a much slower kernel that can't be fixed because of ABI issues,
even if there are only one or two of them left out there.
I don't know, I don't worry about this second point so much though :)