Message-ID: <alpine.DEB.2.10.1407152244310.24854@nanos>
Date: Tue, 15 Jul 2014 22:46:45 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Tim Chen <tim.c.chen@...ux.intel.com>
cc: Peter Zijlstra <peterz@...radead.org>,
Herbert Xu <herbert@...dor.apana.org.au>,
"H. Peter Anvin" <hpa@...or.com>,
"David S.Miller" <davem@...emloft.net>,
Ingo Molnar <mingo@...nel.org>,
Chandramouli Narayanan <mouli@...ux.intel.com>,
Vinodh Gopal <vinodh.gopal@...el.com>,
James Guilford <james.guilford@...el.com>,
Wajdi Feghali <wajdi.k.feghali@...el.com>,
Jussi Kivilinna <jussi.kivilinna@....fi>,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 6/7] sched: add function nr_running_cpu to expose
number of tasks running on cpu
On Tue, 15 Jul 2014, Tim Chen wrote:
> On Tue, 2014-07-15 at 14:59 +0200, Thomas Gleixner wrote:
> > On Tue, 15 Jul 2014, Peter Zijlstra wrote:
> >
> > > On Tue, Jul 15, 2014 at 11:50:45AM +0200, Peter Zijlstra wrote:
> > > > So you already have an idle notifier (which is x86 only, we should fix
> > > > that I suppose), and you then double check there really isn't anything
> > > > else running.
> > >
> > > Note that we've already paid a large part of the expense of going idle
> > > by the time we call that idle notifier -- specifically, we've
> > > reprogrammed the clock to stop the tick.
> > >
> > > It's really wasteful to then generate work again, which means we have
> > > to reprogram the clock all over again.
> >
> > Doing anything which is not related to idle itself in the idle
> > notifier is just plain wrong.
>
> I don't like kicking the multi-buffer job flush via the idle_notifier
> path either. I'll try another version of the patch that does this in
> the multi-buffer job handler path.
>
> >
> > If that stuff wants to utilize idle slots, we really need to come up
> > with a generic and general solution. Otherwise we'll grow those warts
> > all over the architecture space, each with slightly different ways of
> > wrecking the world, and then some.
> >
> > This whole attitude of people thinking that they need their own
> > specialized scheduling around the real scheduler is a PITA. All this
> > stuff is just damaging any sensible approach to power saving, load
> > balancing, etc.
> >
> > What we really want is infrastructure, which allows the scheduler to
> > actively query the async work situation and based on the results
> > actively decide when to process it and where.
>
> I agree with you. It would be great if we had such infrastructure.
You are heartily invited to come up with that. :)