Message-Id: <1262610927.9734.64.camel@marge.simson.net>
Date: Mon, 04 Jan 2010 14:15:27 +0100
From: Mike Galbraith <efault@....de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Arjan van de Ven <arjan@...radead.org>,
Lin Ming <ming.m.lin@...el.com>,
lkml <linux-kernel@...r.kernel.org>,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Subject: Re: volano ~30% regression with 2.6.33-rc1 & -rc2
On Mon, 2010-01-04 at 14:02 +0100, Peter Zijlstra wrote:
> On Mon, 2010-01-04 at 13:57 +0100, Mike Galbraith wrote:
> > On Mon, 2010-01-04 at 04:40 -0800, Arjan van de Ven wrote:
> > > On Mon, 04 Jan 2010 16:15:58 +0800
> > > Lin Ming <ming.m.lin@...el.com> wrote:
> > >
> > > > Mike & Peter,
> > > >
> > > > Compared with 2.6.32, volano has ~30% regression with 2.6.33-rc1 &
> > > > -rc2. Testing machine: Tigerton Xeon, 16cpus(4P/4Core), 16G memory
> > >
> > > did this show up only on this cpu?
> > > (since this is a multi-core-without-shared-cache cpu, it could be that
> > > we get the topology wrong and think cores share cache where they don't)
> >
> > My fault for using PREFER_SIBLING I guess. However, I do wonder why in
> > the heck we set that at the CPU domain level. Siblings lie northward.
>
> Ah, PREFER_SIBLING means prefer sibling domain, not sibling thread. It's
> set at the CPU (really socket) level to make tasks spread over sockets
> first, so that there is no competition for socket-wide resources.
WRT the regression, would you prefer only the sched_fair.c hunk, maybe
plunking the topology hunk into sched_devel, or both lines in one patch,
given that ramp-up gain remains unrealized half of the time on Nehalem
and its ilk?
> Your change is sane, but we really want a more extensive sched domain
> tree in the near future, reflecting the full machine topology.
Yeah.
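(On Arjan's topology question above: one quick way to see what the
kernel thinks is shared is to dump shared_cpu_map from sysfs. Minimal
sketch below; the 16-cpu and 4-index ranges are hard-coded assumptions
for a box like the Tigerton one, adjust to taste.)

/*
 * Dump the per-cache shared_cpu_map for each cpu.  If two cores that
 * do not actually share a cache report the same map for their last
 * level cache, topology detection is off.
 */
#include <stdio.h>

int main(void)
{
	char path[128], buf[256];
	FILE *f;
	int cpu, idx;

	for (cpu = 0; cpu < 16; cpu++) {
		for (idx = 0; idx < 4; idx++) {
			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu%d/cache/index%d/shared_cpu_map",
				 cpu, idx);
			f = fopen(path, "r");
			if (!f)
				continue;	/* cache level not present */
			if (fgets(buf, sizeof(buf), f))
				printf("cpu%d index%d shared_cpu_map: %s",
				       cpu, idx, buf);
			fclose(f);
		}
	}
	return 0;
}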
-Mike