Message-ID: <Pine.LNX.4.64.0704241054270.8592@schroedinger.engr.sgi.com>
Date: Tue, 24 Apr 2007 10:55:45 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: "Siddha, Suresh B" <suresh.b.siddha@...el.com>
cc: William Lee Irwin III <wli@...omorphy.com>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Con Kolivas <kernel@...ivas.org>,
Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Peter Williams <pwil3058@...pond.net.au>,
Thomas Gleixner <tglx@...utronix.de>, caglar@...dus.org.tr,
Willy Tarreau <w@....eu>, Gene Heskett <gene.heskett@...il.com>
Subject: Re: [patch] CFS scheduler, v3
On Tue, 24 Apr 2007, Siddha, Suresh B wrote:
> On Tue, Apr 24, 2007 at 10:47:45AM -0700, Christoph Lameter wrote:
> > On Tue, 24 Apr 2007, Siddha, Suresh B wrote:
> > > Anyhow, this is a straightforward optimization and needs to be done. Do you
> > > have any specific concerns?
> >
> > Yes, there should not be contention on per-cpu data in principle. The
> > point of per-cpu data is for the CPU to have access to contention-free
> > cachelines.
> >
> > If the data is contended then it should be moved out of the per-cpu data and properly
> > placed to minimize contention. Otherwise we will get into cacheline
> > aliases (__read_mostly in per cpu??) etc. in the per-cpu areas.
>
> yes, we were planning to move this to a different percpu section, where
> all the elements in this new section will be cacheline aligned (both
> at the start, as well as at the end)
I would not call this a per-cpu area; it is used by multiple CPUs, it
seems. But for 0.5%? On what benchmark? Is it really worth it?
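
For illustration, a rough sketch of the two placements being discussed
(kernel-style C; the struct and field names are made up for this example,
not taken from the patch):

#include <linux/percpu.h>
#include <linux/cache.h>
#include <linux/threads.h>

/*
 * 1) Ordinary per-cpu data: touched only by its owning CPU, so it can
 *    be packed densely with other per-cpu items and stays contention-free.
 */
DEFINE_PER_CPU(unsigned long, local_counter);

/*
 * 2) Data that remote CPUs also write: keep it out of the per-cpu area
 *    and align/pad it to a full cacheline so neighbouring objects cannot
 *    false-share with it. (Names below are hypothetical.)
 */
struct shared_sched_data {
	unsigned long remote_wakeups;	/* written by other CPUs on wakeup */
} ____cacheline_aligned_in_smp;

static struct shared_sched_data shared_data[NR_CPUS] ____cacheline_aligned_in_smp;
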
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/