Message-ID: <20140108130407.GE31570@twins.programming.kicks-ass.net>
Date: Wed, 8 Jan 2014 14:04:07 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Morten Rasmussen <morten.rasmussen@....com>
Cc: Alex Shi <alex.shi@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <Dietmar.Eggemann@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mingo@...nel.org" <mingo@...nel.org>,
"pjt@...gle.com" <pjt@...gle.com>,
"cmetcalf@...era.com" <cmetcalf@...era.com>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"preeti@...ux.vnet.ibm.com" <preeti@...ux.vnet.ibm.com>,
"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>,
"paulmck@...ux.vnet.ibm.com" <paulmck@...ux.vnet.ibm.com>,
"corbet@....net" <corbet@....net>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"len.brown@...el.com" <len.brown@...el.com>,
"arjan@...ux.intel.com" <arjan@...ux.intel.com>,
"amit.kucheria@...aro.org" <amit.kucheria@...aro.org>,
"james.hogan@...tec.com" <james.hogan@...tec.com>,
"schwidefsky@...ibm.com" <schwidefsky@...ibm.com>,
"heiko.carstens@...ibm.com" <heiko.carstens@...ibm.com>
Subject: Re: [RFC] sched: CPU topology try
On Wed, Jan 08, 2014 at 12:52:28PM +0000, Morten Rasmussen wrote:
> If I remember correctly, Alex used the rq runnable_avg_sum (in rq->avg)
> for this. It is the most obvious choice, but it takes ages to reach
> 100%.
>
> #define LOAD_AVG_MAX_N 345
>
> Worst case it takes 345 ms from when the system becomes fully utilized
> after a long period of idle until the rq runnable_avg_sum reaches 100%.
>
> An unweighted version of cfs_rq->runnable_load_avg and blocked_load_avg
> wouldn't have that delay.
Right.. not sure we want to involve blocked load in the utilization
metric, but who knows, maybe that does make sense.
But yes, we need unweighted runnable_avg.
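
(For reference, a rough standalone sketch, in plain C rather than kernel
code, of where the 345 comes from: the per-entity load tracking decays
history by y per ~1ms period, with y chosen so that y^32 = 0.5, so a
continuously runnable rq accumulates a geometric sum that effectively
stops growing after LOAD_AVG_MAX_N = 345 periods, i.e. ~345 ms.  The
kernel does this in fixed point, so its LOAD_AVG_MAX constant differs
slightly from the floating-point limit printed below.)

#include <math.h>
#include <stdio.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32);	/* y^32 == 0.5 */
	double sum = 0.0;
	int n;

	for (n = 1; n <= 400; n++) {
		sum = sum * y + 1024;		/* one fully runnable ~1ms period */
		if (n == 345)
			printf("n = %d, sum = %.0f\n", n, sum);
	}
	printf("limit = %.0f\n", 1024 / (1 - y));	/* geometric series limit */
	return 0;
}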
> Also, if we are changing the load balance behavior when all cpus are
> fully utilized
We already have this tipping point. See all the has_capacity bits. But
yes, it'd get more involved I suppose.
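
(Very roughly, the existing tipping point works like the sketch below;
the real code lives in the has_capacity handling around
update_sg_lb_stats()/find_busiest_group() in kernel/sched/fair.c, and
the names here are illustrative rather than the exact kernel structs.)

struct group_stats {
	unsigned int group_capacity;	/* tasks the group can hold */
	unsigned int sum_nr_running;	/* tasks currently running in it */
	int has_capacity;		/* any room left? */
};

static void update_has_capacity(struct group_stats *sgs)
{
	/* Once this goes to 0, every cpu in the group is occupied. */
	sgs->has_capacity = sgs->group_capacity > sgs->sum_nr_running;
}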
> we may need to think about cases where the load is
> hovering around the saturation threshold. But I don't think that is
> important yet.
Yah.. I'm going to wait until we have a fail case that can give us
some guidance before really pondering this though :-)