Message-ID: <5129DD1A.8070509@intel.com>
Date: Sun, 24 Feb 2013 17:27:54 +0800
From: Alex Shi <alex.shi@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: torvalds@...ux-foundation.org, mingo@...hat.com,
tglx@...utronix.de, akpm@...ux-foundation.org,
arjan@...ux.intel.com, bp@...en8.de, pjt@...gle.com,
namhyung@...nel.org, efault@....de, vincent.guittot@...aro.org,
gregkh@...uxfoundation.org, preeti@...ux.vnet.ibm.com,
viresh.kumar@...aro.org, linux-kernel@...r.kernel.org,
morten.rasmussen@....com
Subject: Re: [patch v5 09/15] sched: add power aware scheduling in fork/exec/wake
On 02/22/2013 04:54 PM, Peter Zijlstra wrote:
> On Thu, 2013-02-21 at 22:40 +0800, Alex Shi wrote:
>>> The name is a secondary issue, first you need to explain why you
>> think
>>> nr_running is a useful metric at all.
>>>
>>> You can have a high nr_running and a low utilization (a burst of
>>> wakeups, each waking a process that'll instantly go to sleep again),
>> or
>>> low nr_running and high utilization (a single process cpu bound
>>> process).
>>
>> It is true in periodic balance. But in fork/exec/waking timing, the
>> incoming processes usually need to do something before sleep again.
>
> You'd be surprised, there's a fair number of workloads that have
> negligible runtime on wakeup.
I'd appreciate it if you could name some such workloads. :)
BTW, do you have any ideas on how to handle them?
Actually, if tasks are that transitory, they are also hard to catch in
load balancing. For example, with 'cyclictest -t 100' on my 4-LCPU
laptop, vmstat can only catch 1 or 2 running tasks per second.
>
>> I use nr_running to measure how the group busy, due to 3 reasons:
>> 1, the current performance policy doesn't use utilization too.
>
> We were planning to fix that now that its available.
I had tried, but it failed on the aim9 benchmark, so I gave up on using
utilization in performance balancing.
Some of the attempts and discussion are in these threads:
https://lkml.org/lkml/2013/1/6/96
https://lkml.org/lkml/2013/1/22/662
>
>> 2, the power policy don't care load weight.
>
> Then its broken, it should very much still care about weight.
Here the power policy just uses nr_running as the criterion to check
whether a group is eligible for power-aware balancing. When actually
doing the balancing, load weight is still the key judgment.
>
>> 3, I tested some benchmarks, kbuild/tbench/hackbench/aim7 etc, some
>> benchmark results looks clear bad when use utilization. if my memory
>> right, the hackbench/aim7 both looks bad. I had tried many ways to
>> engage utilization into this balance, like use utilization only, or
>> use
>> utilization * nr_running etc. but still can not find a way to recover
>> the lose. But with nr_running, the performance seems doesn't lose much
>> with power policy.
>
> You're failing to explain why utilization performs bad and you don't
> explain why nr_running is better. That things work simply isn't good
Um, let me try to explain again. Utilization needs a long time to
accumulate (345ms). With or without load weight, many bursty tasks
contribute only a minimal weight to their CPU during their first few
ms. So it is too easy to make an incorrect task distribution here,
which then needs migration in later periodic balancing.
> enough, you have to have at least a general idea (but much preferable a
> very good idea) _why_ things work.
Here nr_running is just a criterion for checking whether the power
policy is suitable; in the later task-distribution judgment, load
weight and utilization are still used, as in the next patch: 'sched:
packing transitory tasks in wake/exec power balancing'.
I will reconsider the criterion, but would also appreciate any ideas.
--
Thanks
Alex