Message-ID: <5124D514.70302@intel.com>
Date: Wed, 20 Feb 2013 21:52:20 +0800
From: Alex Shi <alex.shi@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: torvalds@...ux-foundation.org, mingo@...hat.com,
tglx@...utronix.de, akpm@...ux-foundation.org,
arjan@...ux.intel.com, bp@...en8.de, pjt@...gle.com,
namhyung@...nel.org, efault@....de, vincent.guittot@...aro.org,
gregkh@...uxfoundation.org, preeti@...ux.vnet.ibm.com,
viresh.kumar@...aro.org, linux-kernel@...r.kernel.org,
morten.rasmussen@....com
Subject: Re: [patch v5 11/15] sched: add power/performance balance allow flag
On 02/20/2013 09:37 PM, Peter Zijlstra wrote:
> On Wed, 2013-02-20 at 20:04 +0800, Alex Shi wrote:
>
>>>> @@ -5195,6 +5197,8 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>>>> .idle = idle,
>>>> .loop_break = sched_nr_migrate_break,
>>>> .cpus = cpus,
>>>> + .power_lb = 0,
>>>> + .perf_lb = 1,
>>>> };
>>>>
>>>> cpumask_copy(cpus, cpu_active_mask);
>>>
>>> This construct allows for the possibility of power_lb=1,perf_lb=1, does
>>> that make sense? Why not have a single balance_policy enumeration?
>>
>> (power_lb == 1 && perf_lb == 1) is invalid and can never be set.
>>
>> (power_lb == 0 && perf_lb == 0) is possible; it means no balancing is
>> done on this cpu.
>>
>> So a single enumeration is not enough.
>
> Huh.. both 0 doesn't make any sense either. If there's no balancing, we
> shouldn't be here to begin with.
>
Um, both 0 means a balance attempt did happen: we decided a power
balance is appropriate for this domain, but this group may already be
empty, so this cpu is an inappropriate target to pull a task. In that
case we exit this round of balancing and wait for a cpu in a more
appropriate group to do the balance and pull the task.
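
To make the three valid states concrete, here is a minimal sketch of
how they could be folded into one field, along the lines Peter
suggested (the power_lb/perf_lb names come from the patch; the
lb_policy enum and its value names are hypothetical):

/*
 * Hypothetical mapping of the two flags onto one policy field.
 * The three valid (power_lb, perf_lb) combinations become:
 *   (0, 1) -> LB_PERF   performance balance (the default)
 *   (1, 0) -> LB_POWER  power-aware balance
 *   (0, 0) -> LB_NONE   this cpu should not pull; bail out and let
 *                       a cpu in a more suitable group balance instead
 * (1, 1) has no meaning and is never set.
 */
enum lb_policy {
	LB_PERF,
	LB_POWER,
	LB_NONE,
};

With a single field, the both-zero case above becomes a check for
LB_NONE, and load_balance() can bail out early through its existing
out_balanced path.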
--
Thanks
Alex