Message-Id: <1224907501.5161.36.camel@marge.simson.net>
Date: Sat, 25 Oct 2008 06:05:01 +0200
From: Mike Galbraith <efault@....de>
To: David Miller <davem@...emloft.net>
Cc: rjw@...k.pl, mingo@...e.hu, s0mbre@...rvice.net.ru,
a.p.zijlstra@...llo.nl, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.
On Fri, 2008-10-24 at 16:31 -0700, David Miller wrote:
> From: "Rafael J. Wysocki" <rjw@...k.pl>
> Date: Sat, 25 Oct 2008 00:25:34 +0200
>
> > On Friday, 10 of October 2008, Ingo Molnar wrote:
> > >
> > > * Evgeniy Polyakov <s0mbre@...rvice.net.ru> wrote:
> > >
> > > > On Fri, Oct 10, 2008 at 01:42:45PM +0200, Ingo Molnar (mingo@...e.hu) wrote:
> > > > > > vanilla 27: 347.222
> > > > > > no TSO/GSO: 357.331
> > > > > > no hrticks: 382.983
> > > > > > no balance: 389.802
> > > > >
> > > > > okay. The target is 470 MB/sec, right? (Assuming the workload is sane
> > > > > and 'fixing' it does not mean we have to schedule worse.)
> > > >
> > > > Well, that's where I started/stopped, so maybe we will even move
> > > > further? :)
> > >
> > > that's the right attitude ;)
> >
> > Can anyone please tell me if there was any conclusion of this thread?
>
> I made some more analysis in private with Ingo and Peter Z. and found
> that the tbench decreases correlate pretty much directly with the
> ongoing increasing cpu cost of wake_up() and friends in the fair
> scheduler.
>
> The largest increase in computational cost of wakeups came in 2.6.27
> when the hrtimer bits got added; it more than tripled the cost of a wakeup.
> In 2.6.28-rc1 the hrtimer feature has been disabled, but I think that
> should be backported into the 2.6.27-stable branch.
>
> So I think that should be backported, and meanwhile I'm spending some
> time in the background trying to replace the fair scheduler's RB tree
> crud with something faster so maybe at some point we can recover all
> of the regressions in this area caused by the CFS code.
My test data indicates (to me anyway) that there is another source of
localhost throughput loss in .27. In that data, there is no hrtick
overhead since I didn't have highres timers enabled, and computational
costs added in .27 were removed. Dunno where it lives, but it does
appear to exist.
-Mike
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/