Message-Id: <1226323330.4846.3.camel@marge.simson.net>
Date: Mon, 10 Nov 2008 14:22:10 +0100
From: Mike Galbraith <efault@....de>
To: Ingo Molnar <mingo@...e.hu>
Cc: netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Miklos Szeredi <mszeredi@...e.cz>,
Rusty Russell <rusty@...tcorp.com.au>,
David Miller <davem@...emloft.net>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Travis <travis@....com>
Subject: Re: [regression] benchmark throughput loss from a622cf6..f7160c7 pull

On Mon, 2008-11-10 at 13:50 +0100, Ingo Molnar wrote:
> * Mike Galbraith <efault@....de> wrote:
>
> > Greetings,
> >
> > While retesting that recent scheduler fixes/improvements had
> > survived integration into mainline, I found that we've regressed a
> > bit since... yesterday. In testing, it seems that CFS has finally
> > passed what the old O(1) scheduler could deliver in scalability and
> > throughput, but we've already lost a bit.
>
> but CFS backported to a kernel with no other regressions measurably
> surpasses O(1) performance in all the metrics you are following,
> right?

Yes.

> i.e. the current state of things, when comparing these workloads to
> 2.6.22, is that we slowed down in non-scheduler codepaths and the CFS
> speedups help offset some of that slowdown.

That's the way it looks to me, yes.

> But not all of it, and we also have new slowdowns:
>
> > Reverting 984f2f3 cd83e42 2d3854a and 6209344 recovered the loss.
>
> hm, that's two changes in essence:
>
> 2d3854a: cpumask: introduce new API, without changing anything
> 6209344: net: unix: fix inflight counting bug in garbage collector
>
> i'm surprised about the cpumask impact: it's just new APIs in essence,
> with little material change elsewhere.

Dunno, I try not to look while testing, just test/report, look later.

-Mike
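
For reference, 2d3854a is the first piece of Rusty's cpumask rework: rather
than declaring a full cpumask_t on the stack (whose size grows with NR_CPUS),
code allocates a cpumask_var_t and manipulates it through the cpumask_*
helpers. Below is a minimal sketch of the new-style pattern; the function is
hypothetical and not taken from the commit or from this thread, it only
illustrates the API shape (alloc_cpumask_var(), cpumask_set_cpu(),
for_each_cpu(), free_cpumask_var()).

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/kernel.h>

/*
 * Hypothetical illustration only: allocate a mask off-stack, set a
 * couple of bits, walk the set bits, then free the mask.  With
 * CONFIG_CPUMASK_OFFSTACK=n, cpumask_var_t is still a one-element
 * on-stack array and alloc_cpumask_var() always succeeds.
 */
static int cpumask_api_example(void)
{
	cpumask_var_t mask;
	int cpu;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_clear(mask);			/* alloc does not zero */
	cpumask_set_cpu(0, mask);
	if (nr_cpu_ids > 1)
		cpumask_set_cpu(1, mask);

	for_each_cpu(cpu, mask)			/* visits only the set bits */
		printk(KERN_DEBUG "cpu %d is in the mask\n", cpu);

	free_cpumask_var(mask);
	return 0;
}

The old fixed-size cpumask_t interfaces kept working unchanged at this point,
which is what the "without changing anything" in the commit subject refers to.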