Message-ID: <20070925085329.GJ26289@linux.vnet.ibm.com>
Date: Tue, 25 Sep 2007 14:23:29 +0530
From: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To: Mike Galbraith <efault@....de>
Cc: Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
Dmitry Adamushko <dmitry.adamushko@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [git] CFS-devel, latest code
On Tue, Sep 25, 2007 at 10:33:27AM +0200, Mike Galbraith wrote:
> > Darn, have news: latency thing isn't dead. Two busy loops, one at nice
> > 0 pinned to CPU0, and one at nice 19 pinned to CPU1 produced the
> > latencies below for nice -5 Xorg. Didn't kill the box though.
> >
> > se.wait_max : 10.068169
> > se.wait_max : 7.465334
> > se.wait_max : 135.501816
> > se.wait_max : 0.884483
> > se.wait_max : 144.218955
> > se.wait_max : 128.578376
> > se.wait_max : 93.975768
> > se.wait_max : 4.965965
> > se.wait_max : 113.655533
> > se.wait_max : 4.301075
> >
> > sched_debug (attached) is.. strange.
Mike,
Do you have FAIR_USER_SCHED turned on as well? Can you send me
your .config, please?

Also, how do you check se.wait_max?
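I am guessing you are reading it out of /proc/<pid>/sched (which is
available with CONFIG_SCHED_DEBUG), something along the lines of the
rough sketch below? The pid in it is just an example value; if you are
sampling it some other way, please let me know.

/*
 * Rough sketch: print the se.wait_max line(s) from /proc/<pid>/sched.
 * Assumes CONFIG_SCHED_DEBUG; the pid is only an example value.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char path[64], line[256];
	int pid = 1234;		/* example: pid of Xorg */
	FILE *fp;

	snprintf(path, sizeof(path), "/proc/%d/sched", pid);
	fp = fopen(path, "r");
	if (!fp) {
		perror("fopen");
		return 1;
	}

	while (fgets(line, sizeof(line), fp))
		if (strstr(line, "se.wait_max"))
			fputs(line, stdout);

	fclose(fp);
	return 0;
}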
> Disabling CONFIG_FAIR_GROUP_SCHED fixed both. Latencies of up to 336ms
> hit me during the recompile (make -j3), with nothing else running.
> Since reboot, latencies are, so far, very very nice. I'm leaving it
> disabled for now.
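In case it helps with reproducing this, the busy-loop setup you describe
(one loop pinned to CPU0 at nice 0, another pinned to CPU1 at nice 19)
can be approximated with a small program along the lines of the sketch
below. The CPU and nice values are taken from your description; the
program itself is just an illustration, not something from the tree.

/*
 * Sketch of a busy loop pinned to one CPU at a given nice level.
 * Usage (name and values are only examples): ./busyloop <cpu> <nice>
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
	int cpu = argc > 1 ? atoi(argv[1]) : 0;
	int nice_val = argc > 2 ? atoi(argv[2]) : 0;
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);
	if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
		perror("sched_setaffinity");
		return 1;
	}

	if (setpriority(PRIO_PROCESS, 0, nice_val) < 0) {
		perror("setpriority");
		return 1;
	}

	for (;;)
		;	/* burn cpu */

	return 0;
}

Running "./busyloop 0 0 &" plus "./busyloop 1 19 &" should then recreate
the load you mention (the binary name is made up, of course).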
--
Regards,
vatsa