Message-ID: <51998ADB.7080109@linux.vnet.ibm.com>
Date: Mon, 20 May 2013 08:00:51 +0530
From: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To: Alex Shi <alex.shi@...el.com>
CC: Mike Galbraith <bitbucket@...ine.de>,
Ingo Molnar <mingo@...nel.org>, Len Brown <lenb@...nel.org>,
Borislav Petkov <bp@...en8.de>, mingo@...hat.com,
peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, pjt@...gle.com,
namhyung@...nel.org, morten.rasmussen@....com,
vincent.guittot@...aro.org, gregkh@...uxfoundation.org,
viresh.kumar@...aro.org, linux-kernel@...r.kernel.org,
len.brown@...el.com, rafael.j.wysocki@...el.com, jkosina@...e.cz,
clark.williams@...il.com, tony.luck@...el.com,
keescook@...omium.org, mgorman@...e.de, riel@...hat.com,
Linux PM list <linux-pm@...r.kernel.org>
Subject: Re: [patch v7 0/21] sched: power aware scheduling

Hi Alex,

On 05/20/2013 06:31 AM, Alex Shi wrote:
>
>>>>>> Which are the workloads where 'powersaving' mode hurts workload
>>>>>> performance measurably?
>>
>> I ran ebizzy on a 2 socket, 16 core, SMT 4 Power machine.
>
> Is this a 2 * 16 * 4 LCPUs PowerPC machine?
This is a 2 * 8 * 4 LCPUs PowerPC machine.
>> The power efficiency drops significantly with the powersaving policy of
>> this patch, compared to the power efficiency of the scheduler without
>> this patch.
>>
>> The below parameters are measured relative to the default scheduler
>> behaviour.
>>
>> A: Drop in power efficiency with the patch+powersaving policy
>> B: Drop in performance with the patch+powersaving policy
>> C: Decrease in power consumption with the patch+powersaving policy
>>
>> NumThreads A B C
>> -----------------------------------------
>> 2 33% 36% 4%
>> 4 31% 33% 3%
>> 8 28% 30% 3%
>> 16 31% 33% 4%
>>
>> Each of the above runs is for 30s.
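>>
>> (As an aside, a minimal sketch of how such relative drops fall out of
>> raw numbers, taking power efficiency as throughput per watt; the
>> figures in it are made up purely for illustration and are not the
>> measured values above.)
>>
>> #include <stdio.h>
>>
>> int main(void)
>> {
>>         /* hypothetical raw numbers, NOT the measurements above */
>>         double perf_default = 10000.0, watts_default = 200.0;
>>         double perf_policy  =  6700.0, watts_policy  = 192.0;
>>
>>         /* power efficiency = throughput per watt */
>>         double eff_default = perf_default / watts_default;
>>         double eff_policy  = perf_policy  / watts_policy;
>>
>>         printf("A: drop in power efficiency  = %.0f%%\n",
>>                100.0 * (1.0 - eff_policy / eff_default));
>>         printf("B: drop in performance       = %.0f%%\n",
>>                100.0 * (1.0 - perf_policy / perf_default));
>>         printf("C: decrease in power         = %.0f%%\n",
>>                100.0 * (1.0 - watts_policy / watts_default));
>>         return 0;
>> }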
>>
>> On investigating socket utilization, I found that only 1 socket was
>> being used during all the above threaded runs. As can be guessed, this
>> is due to the group_weight being considered for the threshold metric.
>> This stacks up tasks on a core and further on a socket, thus throttling
>> them, as observed by Mike below.
>>
>> I therefore think we must switch to group_capacity as the threshold
>> metric and use only (rq->utils * nr_running) for the group_utils
>> calculation during non-bursty wakeup scenarios.
>> This way we are comparing the right quantities: the utilization of the
>> runqueue by the fair tasks against the cpu capacity available to them
>> after the rt tasks have consumed their share.
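>>
>> To see why the threshold metric matters, here is a standalone sketch
>> contrasting the two checks. The helper names and the assumed SMT-4
>> capacity of ~2 full cpus are illustrative assumptions, not the
>> patchset's code:
>>
>> #include <stdio.h>
>> #include <stdbool.h>
>>
>> #define SCHED_POWER_SCALE 1024  /* one full cpu's worth of power */
>>
>> /* weight-based check: threshold is the count of logical cpus */
>> static bool full_by_weight(unsigned long util, unsigned int weight)
>> {
>>         return util >= (unsigned long)weight * SCHED_POWER_SCALE;
>> }
>>
>> /* capacity-based check: threshold is cpu_power left for fair tasks */
>> static bool full_by_capacity(unsigned long util, unsigned long capacity)
>> {
>>         return util >= capacity;
>> }
>>
>> int main(void)
>> {
>>         /* hypothetical SMT-4 core: 4 logical cpus, but only about
>>          * twice one thread's throughput, i.e. capacity ~2048 */
>>         unsigned int weight = 4;
>>         unsigned long capacity = 2UL * SCHED_POWER_SCALE;
>>         unsigned long util = 3UL * SCHED_POWER_SCALE; /* 3 busy tasks */
>>
>>         printf("weight-based says full:   %d\n",
>>                full_by_weight(util, weight));     /* 0: keeps packing */
>>         printf("capacity-based says full: %d\n",
>>                full_by_capacity(util, capacity)); /* 1: spreads */
>>         return 0;
>> }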
>>
>> After I made the above modification, all three of the above parameters
>> dropped to nearly zero. However, I am still observing the load
>> balancing of the scheduler with the patch and powersaving policy
>> enabled. It is behaving very close to the default scheduler (spreading
>> tasks across sockets). That also explains why there is no performance
>> drop or gain with the patch+powersaving policy enabled. I will look
>> into this observation and report back.
>
> Thanks a lot for the great testing!
> It seems one task per SMT cpu isn't power efficient.
> I got a similar result last week. I ran the fspin test (it does
> endless calculation; it is in the linux-next tree). When I bound one
> task per SMT cpu, power efficiency really dropped at almost every
> thread count, but when I bound one task per core, power efficiency was
> better at all thread counts.
> Besides moving tasks depending on group_capacity, another choice is to
> balance tasks according to cpu_power. I have made that change in code,
> but it needs to go through an internal open-source process before I
> can publish it.
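>
> (For reference, a minimal sketch of that kind of per-cpu binding via
> sched_setaffinity(2); the choice of cpu number is an assumption, and
> on real hardware the SMT siblings of each core are listed in
> /sys/devices/system/cpu/cpu0/topology/thread_siblings_list.)
>
> #define _GNU_SOURCE
> #include <sched.h>
> #include <stdio.h>
> #include <sys/types.h>
>
> static int bind_to_cpu(pid_t pid, int cpu)
> {
>         cpu_set_t set;
>
>         CPU_ZERO(&set);
>         CPU_SET(cpu, &set);
>         return sched_setaffinity(pid, sizeof(set), &set);
> }
>
> int main(void)
> {
>         /* bind the calling thread to logical cpu 0; a "per core"
>          * run would pick one logical cpu from each core's sibling
>          * set instead of consecutive SMT siblings */
>         if (bind_to_cpu(0, 0))
>                 perror("sched_setaffinity");
>         return 0;
> }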
What do you mean by *another* choice being to balance tasks according to
cpu_power? group_capacity is based on cpu_power.
Also, your balance policy in v6 was doing the same, right? It was rightly
comparing rq->utils * nr_running against cpu_power. Why not simply
switch to that code for power-policy load balancing?
>>>>> Well, it'll lose throughput any time there's parallel execution
>>>>> potential but it's serialized instead... using the average will
>>>>> inevitably stack tasks sometimes, but that's its goal. Hackbench
>>>>> shows it.
>>>>
>>>> (but that consolidation can be a winner too, and I bet a nickel it
>>>> would be for a socket-sized pgbench run)
>>>
>>> (belay that, I was thinking of keeping all tasks on a single node,
>>> but it'll likely stack the whole thing on a CPU or two; if so, it'll
>>> hurt)
>>
>> At this point, I would like to raise one issue.
>> *Is the goal of the power-aware scheduler to improve the power
>> efficiency of the scheduler, or to accept a compromise on power
>> efficiency for a definite decrease in power consumption, given that it
>> is the user who has decided to prioritise lower power consumption over
>> performance*?
>>
>
> It could be one reason for this feature, but I would like to make it
> more efficient, e.g. by packing tasks according to cpu_power, not the
> current group_weight.
Yes, we could try the patch using group_capacity and observe the
power-efficiency results before we decide to compromise on power
efficiency for a decrease in power consumption.
Regards
Preeti U Murthy