Message-ID: <51C24DE8.1010102@intel.com>
Date: Thu, 20 Jun 2013 08:33:44 +0800
From: Alex Shi <alex.shi@...el.com>
To: Paul Turner <pjt@...gle.com>
CC: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Borislav Petkov <bp@...en8.de>,
Namhyung Kim <namhyung@...nel.org>,
Mike Galbraith <efault@....de>,
Morten Rasmussen <morten.rasmussen@....com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
Michael Wang <wangyun@...ux.vnet.ibm.com>,
Jason Low <jason.low2@...com>,
Changlong Xie <changlongx.xie@...el.com>, sgruszka@...hat.com,
Frédéric Weisbecker <fweisbec@...il.com>
Subject: Re: [patch v8 6/9] sched: compute runnable load avg in cpu_load and
cpu_avg_load_per_task
On 06/19/2013 04:15 PM, Alex Shi wrote:
> On 06/18/2013 05:44 PM, Alex Shi wrote:
>>
>>>
>>> Paul, could I summarize your point here:
>>> keep the current weighted_cpu_load, but add blocked_load_avg in
>>> get_rq_runnable_load()?
>>>
>>> I will test this change.
>>
>> Current testing (kbuild, oltp, aim7) doesn't show a clear difference on my
>> NHM EP box between the following and the original patch; the only difference
>> is that get_rq_runnable_load() adds blocked_load_avg in the SMP case. I will
>> test more cases and more boxes.
>
> I tested tip/sched/core, tip/sched/core with the old patchset, and
> tip/sched/core with the blocked_load_avg change on Core2 2S, NHM EP, IVB EP,
> SNB EP 2S and SNB EP 4S boxes, with the benchmarks kbuild, sysbench oltp,
> hackbench, tbench and dbench.
>
> blocked_load_avg vs. the original patchset: oltp shows a suspected 5% drop
> and hackbench a 3% drop on NHM EX; dbench shows a suspected 6% drop on NHM
> EP. The other benchmarks show no clear change on all other machines.
>
> Original patchset vs. sched/core: hackbench rises 20% on NHM EX, 60% on SNB
> EP 4S, and 30% on IVB EP. Others show no clear changes.
>
>> +#ifdef CONFIG_SMP
>> +unsigned long get_rq_runnable_load(struct rq *rq)
>> +{
>> + return rq->cfs.runnable_load_avg + rq->cfs.blocked_load_avg;
>> +}
According to the above testing results, with blocked_load_avg the numbers are
still slightly worse than without it.
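
For reference, the variant without blocked_load_avg that this is compared
against would look roughly like the sketch below (an illustration based on the
discussion above, not a verbatim quote of the origin patch):

#ifdef CONFIG_SMP
/* sketch: rq load fed into the cpu_load[] updates, counting only the
 * runnable part and leaving blocked_load_avg out entirely */
unsigned long get_rq_runnable_load(struct rq *rq)
{
	return rq->cfs.runnable_load_avg;
}
#endif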
When blocked_load_avg is added here, it affects the nohz idle balance and the
periodic balance in update_sg_lb_stats() when the load idx is not 0.
For the nohz idle balance, blocked_load_avg should be too small to have a big
effect.
For update_sg_lb_stats(), since it only matters when the idx is not 0, the
blocked_load_avg has already been decayed again in update_cpu_load(), which
reduces its impact.
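
To illustrate the decay: here is a simplified, self-contained sketch of how
the indexed cpu_load[] entries are folded together on each update (an
approximation of the logic for this discussion, not the exact kernel code):

#define CPU_LOAD_IDX_MAX 5

/*
 * Sketch: for idx i > 0 the new sample only contributes 1/2^i per update,
 * so whatever is folded into the rq load (including blocked_load_avg) is
 * damped further on every tick.
 */
static void sketch_update_cpu_load(unsigned long cpu_load[CPU_LOAD_IDX_MAX],
				   unsigned long this_load)
{
	unsigned long i, scale;

	for (i = 0, scale = 1; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
		unsigned long old_load = cpu_load[i];
		unsigned long new_load = this_load;

		/* round up so a rising load is not under-reported */
		if (new_load > old_load)
			new_load += scale - 1;

		/* cpu_load[i] = old * (2^i - 1)/2^i + new * 1/2^i */
		cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
	}
}

With idx = 2, for example, the blocked part of this_load only contributes 1/4
per update, so its influence on the _idx != 0 loads fades quickly.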
So, could I say that, at least in the above testing, blocked_load_avg should
be kept out of load balancing?
--
Thanks
Alex