Message-ID: <1360931579.4736.29.camel@marge.simpson.net>
Date: Fri, 15 Feb 2013 13:32:59 +0100
From: Mike Galbraith <efault@....de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>,
LKML <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Paul Turner <pjt@...gle.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Arnaldo Carvalho de Melo <acme@...radead.org>,
Clark Williams <clark@...hat.com>,
Andrew Theurer <habanero@...ibm.com>
Subject: Re: [RFC] sched: The removal of idle_balance()
On Fri, 2013-02-15 at 13:21 +0100, Peter Zijlstra wrote:
> On Fri, 2013-02-15 at 08:26 +0100, Mike Galbraith wrote:
> >
> > (the throttle is supposed to keep idle_balance() from doing severe
> > damage, that may want a peek/tweak)
>
> Right, as it stands idle_balance() can do a lot of work and if the avg
> idle time is less than the time we spend looking for a suitable task we
> lose.
>
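A minimal sketch of the throttle in question; rq->avg_idle and
sysctl_sched_migration_cost are the mainline names of that era, while the
scaffolding around them is simplified for illustration:

typedef unsigned long long u64;	/* stand-in for the kernel type */

struct rq {
	u64 avg_idle;	/* decaying average of recent idle periods, ns */
};

static u64 sysctl_sched_migration_cost = 500000ULL;	/* 0.5 ms default */

/*
 * Skip idle balancing entirely when this CPU is not expected to stay
 * idle long enough to pay back the cost of hunting for a task.
 */
static int idle_balance_throttled(struct rq *this_rq)
{
	return this_rq->avg_idle < sysctl_sched_migration_cost;
}
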
> I've wanted to make this smarter by having the cpufreq/cpuidle avg idle
> time guesstimator in the scheduler core so we actually know how long we
> expect to be idle and couple that with a cache refresh cost per sched
> domain (something we used to have pre 2.6.21 or so) so we can auto-limit
> the domain traversal for idle_balance.
>
> So far that's all fantasy though..
>
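A rough sketch of what that auto-limited domain traversal could look like;
sd->refresh_cost and the loop structure below are made up for illustration,
not in-tree code:

typedef unsigned long long u64;

struct sched_domain_sketch {
	struct sched_domain_sketch *parent;
	u64 refresh_cost;	/* estimated cache refresh cost at this level, ns */
};

struct rq_sketch {
	u64 avg_idle;		/* how long we expect to stay idle, ns */
};

/*
 * Walk the domains bottom up, but stop as soon as the accumulated
 * balancing cost would exceed the time we expect to remain idle.
 */
static void idle_balance_sketch(struct rq_sketch *this_rq,
				struct sched_domain_sketch *sd)
{
	u64 cost = 0;

	for (; sd; sd = sd->parent) {
		if (cost + sd->refresh_cost > this_rq->avg_idle)
			break;	/* pulling from further away can't pay off */

		/* load_balance(this_rq, sd) would go here */
		cost += sd->refresh_cost;
	}
}
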
> Related, I wanted to use the idle time guesstimate to 'optimize' the idle
> loop; currently that stuff is stupidly expensive and pokes at timer
> hardware etc. If we know we won't be idle longer than it takes to poke
> at timer hardware, don't go into nohz mode, etc.
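
The idle-loop side could amount to a guard like the following before
stopping the tick; predicted_idle_ns and TIMER_REPROGRAM_COST_NS are
hypothetical names, and the real nohz entry path is considerably more
involved:

typedef unsigned long long u64;

#define TIMER_REPROGRAM_COST_NS	5000ULL	/* assumed cost of poking timer hw */

/*
 * Only stop the periodic tick (enter nohz) when the predicted idle
 * period is long enough to amortize reprogramming the timer hardware.
 */
static int nohz_entry_worthwhile(u64 predicted_idle_ns)
{
	return predicted_idle_ns > 2 * TIMER_REPROGRAM_COST_NS;
}
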
Yup. My trees have nohz throttled too; it's too expensive for fast
switchers scheduling cross-core.
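
For illustration, a throttle of that sort could be as simple as
rate-limiting nohz entry when the CPU last went idle only moments ago; the
names and threshold below are invented, not the actual patch:

typedef unsigned long long u64;

#define NOHZ_THROTTLE_NS	1000000ULL	/* 1 ms, arbitrary for illustration */

struct nohz_throttle_state {
	u64 last_idle_enter_ns;
};

/*
 * If we went idle only a very short while ago, assume we are in a
 * fast cross-core scheduling pattern and skip the expensive nohz
 * enter/exit machinery this time around.
 */
static int nohz_throttled(struct nohz_throttle_state *s, u64 now_ns)
{
	if (now_ns - s->last_idle_enter_ns < NOHZ_THROTTLE_NS)
		return 1;

	s->last_idle_enter_ns = now_ns;
	return 0;
}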
-Mike