Message-ID: <20070420201101.GC5475@linux-os.sc.intel.com>
Date: Fri, 20 Apr 2007 13:11:01 -0700
From: "Siddha, Suresh B" <suresh.b.siddha@...el.com>
To: William Lee Irwin III <wli@...omorphy.com>
Cc: Christoph Lameter <clameter@....com>, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Con Kolivas <kernel@...ivas.org>,
Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Peter Williams <pwil3058@...pond.net.au>,
Thomas Gleixner <tglx@...utronix.de>, caglar@...dus.org.tr,
Willy Tarreau <w@....eu>, Gene Heskett <gene.heskett@...il.com>
Subject: Re: [patch] CFS scheduler, v3
On Fri, Apr 20, 2007 at 01:03:22PM -0700, William Lee Irwin III wrote:
> On Fri, 20 Apr 2007, William Lee Irwin III wrote:
> >> I'm not really convinced it's all that worthwhile of an optimization,
> >> essentially for the same reasons as you, but presumably there's a
> >> benchmark result somewhere that says it matters. I've just not seen it.
>
> On Fri, Apr 20, 2007 at 12:44:55PM -0700, Christoph Lameter wrote:
> > If it is true that we frequently remotely write the per cpu runqueue
> > data then we may have a NUMA scalability issue.
>
> From the discussion on Suresh's thread, it appears to have sped up a
> database benchmark 0.5%.
>
> Last I checked it was workload-dependent, but there were things that
> hammer it. I mostly know of the remote wakeup issue, but there could
> be other things besides wakeups that do it, too.
Remote wakeup was the main issue, and the 0.5% improvement was seen
on a two-node platform. Aligning the per-cpu runqueue data reduces the
number of remote cachelines that need to be touched as part of this wakeup.
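
For illustration, a minimal user-space C sketch of the idea (this is not
the kernel's struct rq; the field names and the 64-byte line size are
assumptions): grouping the fields a remote waker writes at the start of a
cacheline-aligned structure means a single cacheline transfer covers all
of them, and padding keeps the CPU-local fields off that remotely written
line.

#include <stddef.h>
#include <stdio.h>

#define CACHELINE 64	/* assumed cacheline size */

/* Hypothetical per-cpu runqueue sketch, not the actual struct rq. */
struct rq_sketch {
	/* Fields a remote CPU writes during a wakeup, grouped so one
	 * cacheline transfer covers all of them. */
	unsigned long nr_running;
	unsigned long raw_weighted_load;
	unsigned long wakeup_flags;

	/* Pad to the next cacheline so the CPU-local fields below do
	 * not false-share the remotely written line. */
	char pad[CACHELINE - 3 * sizeof(unsigned long)];

	/* Fields only the owning CPU touches. */
	unsigned long clock;
	unsigned long nr_switches;
} __attribute__((aligned(CACHELINE)));

int main(void)
{
	printf("remote-touched fields fit in the first %d-byte line "
	       "(pad starts at offset %zu)\n",
	       CACHELINE, offsetof(struct rq_sketch, pad));
	printf("cpu-local fields start at offset %zu\n",
	       offsetof(struct rq_sketch, clock));
	return 0;
}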
thanks,
suresh