Message-ID: <20140108125228.GJ2936@e103034-lin>
Date: Wed, 8 Jan 2014 12:52:28 +0000
From: Morten Rasmussen <morten.rasmussen@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Alex Shi <alex.shi@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <Dietmar.Eggemann@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mingo@...nel.org" <mingo@...nel.org>,
"pjt@...gle.com" <pjt@...gle.com>,
"cmetcalf@...era.com" <cmetcalf@...era.com>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"preeti@...ux.vnet.ibm.com" <preeti@...ux.vnet.ibm.com>,
"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>,
"paulmck@...ux.vnet.ibm.com" <paulmck@...ux.vnet.ibm.com>,
"corbet@....net" <corbet@....net>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"len.brown@...el.com" <len.brown@...el.com>,
"arjan@...ux.intel.com" <arjan@...ux.intel.com>,
"amit.kucheria@...aro.org" <amit.kucheria@...aro.org>,
"james.hogan@...tec.com" <james.hogan@...tec.com>,
"schwidefsky@...ibm.com" <schwidefsky@...ibm.com>,
"heiko.carstens@...ibm.com" <heiko.carstens@...ibm.com>
Subject: Re: [RFC] sched: CPU topology try
On Wed, Jan 08, 2014 at 08:37:16AM +0000, Peter Zijlstra wrote:
> On Wed, Jan 08, 2014 at 04:32:18PM +0800, Alex Shi wrote:
> > In my old power aware scheduling patchset, I had tried the 95 to 99. But
> > all those values will lead imbalance when we test while(1) like cases.
> > like in a 24LCPUs groups, 24*5% > 1. So, finally use 100% as overload
> > indicator. And in testing 100% works well to find overload since few
> > system service involved. :)
>
> Ah indeed, so 100% it is ;-)
If I remember correctly, Alex used the rq runnable_avg_sum (in rq->avg)
for this. It is the most obvious choice, but it takes ages to reach
100%.
#define LOAD_AVG_MAX_N 345
Worst case, it takes 345 ms from when the system becomes fully utilized
after a long period of idle until the rq runnable_avg_sum reaches 100%.
An unweighted version of cfs_rq->runnable_load_avg and blocked_load_avg
wouldn't have that delay.
Also, if we change the load balance behavior when all cpus are fully
utilized, we may need to think about cases where the load hovers around
the saturation threshold. But I don't think that is important yet.