Message-ID: <CAKfTPtCwgL0KzZWF39-fbqPUyp8veJXm-BbBOcgjj48x=zkrrw@mail.gmail.com>
Date: Wed, 1 Apr 2015 11:06:47 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Xunlei Pang <pang.xunlei@...aro.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
lkml <linux-kernel@...r.kernel.org>,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
Rik van Riel <riel@...hat.com>,
Linaro Kernel Mailman List <linaro-kernel@...ts.linaro.org>,
Mike Galbraith <efault@....de>,
Dietmar Eggemann <dietmar.eggemann@....com>
Subject: Re: [PATCH v10 08/11] sched: replace capacity_factor by usage
On 1 April 2015 at 05:37, Xunlei Pang <pang.xunlei@...aro.org> wrote:
> Hi Vincent,
>
> On 27 March 2015 at 23:59, Vincent Guittot <vincent.guittot@...aro.org> wrote:
>> On 27 March 2015 at 15:52, Xunlei Pang <pang.xunlei@...aro.org> wrote:
>>> Hi Vincent,
>>>
>>> On 27 February 2015 at 23:54, Vincent Guittot
>>> <vincent.guittot@...aro.org> wrote:
>>>> /**
>>>> @@ -6432,18 +6435,19 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>>>>
>>>> /*
>>>> * In case the child domain prefers tasks go to siblings
>>>> - * first, lower the sg capacity factor to one so that we'll try
>>>> + * first, lower the sg capacity so that we'll try
>>>> * and move all the excess tasks away. We lower the capacity
>>>> * of a group only if the local group has the capacity to fit
>>>> - * these excess tasks, i.e. nr_running < group_capacity_factor. The
>>>> - * extra check prevents the case where you always pull from the
>>>> - * heaviest group when it is already under-utilized (possible
>>>> - * with a large weight task outweighs the tasks on the system).
>>>> + * these excess tasks. The extra check prevents the case where
>>>> + * you always pull from the heaviest group when it is already
>>>> + * under-utilized (possible with a large weight task outweighs
>>>> + * the tasks on the system).
>>>> */
>>>> if (prefer_sibling && sds->local &&
>>>> - sds->local_stat.group_has_free_capacity) {
>>>> - sgs->group_capacity_factor = min(sgs->group_capacity_factor, 1U);
>>>> - sgs->group_type = group_classify(sg, sgs);
>>>> + group_has_capacity(env, &sds->local_stat) &&
>>>> + (sgs->sum_nr_running > 1)) {
>>>> + sgs->group_no_capacity = 1;
>>>> + sgs->group_type = group_overloaded;
>>>> }
>>>>
>>>
>>> For SD_PREFER_SIBLING, if the local group has 1 task and
>>> group_has_capacity() returns true (but not overloaded) for it, and
>>> the sgs group has 2 tasks, should we still mark that group
>>> overloaded?
>>
>> yes, the load balance will then choose whether it's worth pulling
>> depending on the load of each group
>
> Maybe I didn't make it clear.
> For example, CPU0~1 are SMT siblings, and CPU2~3 are another pair.
> CPU0 is idle, and each of the others has 1 task. Then, according to
> this patch, CPU2~3 (as one group) will be viewed as overloaded
> (CPU0~1 being the local group, and group_has_capacity() returning
> true here), so the balancer may initiate an active task move. This is
> different from what the current SD_PREFER_SIBLING logic does. Is this
> problematic?
IMHO, it's not problematic. It's worth triggering a load balance if
there is an imbalance between the 2 groups (as an example, CPU0~1 has
one low nice prio task but CPU2~3 has 2 high nice prio tasks), so the
decision will be made when calculating the imbalance.
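
To make that concrete, here is a small user-space model of the two
checks involved, applied to your CPU0~3 example. The field names and
the capacity test follow my reading of the patch, but the numbers
(capacity, usage, imbalance_pct) are made up purely for illustration:

/*
 * Toy model of the prefer_sibling path discussed above: CPU0~1 and
 * CPU2~3 are two SMT groups, CPU0 is idle, every other CPU runs one
 * task. Not kernel code, just the decision logic in isolation.
 */
#include <stdbool.h>
#include <stdio.h>

struct sg_stats {
	unsigned int sum_nr_running;	/* tasks in the group */
	unsigned int group_weight;	/* CPUs in the group */
	unsigned long group_capacity;	/* capacity available for CFS */
	unsigned long group_usage;	/* utilization of the group */
};

/* Spare capacity: fewer tasks than CPUs, or usage well below capacity. */
static bool group_has_capacity(const struct sg_stats *sgs,
			       unsigned int imbalance_pct)
{
	if (sgs->sum_nr_running < sgs->group_weight)
		return true;

	if (sgs->group_capacity * 100 > sgs->group_usage * imbalance_pct)
		return true;

	return false;
}

int main(void)
{
	unsigned int imbalance_pct = 110;	/* typical SMT-level value */

	/* CPU0~1: one task on CPU1, CPU0 idle. */
	struct sg_stats local = { .sum_nr_running = 1, .group_weight = 2,
				  .group_capacity = 1178, .group_usage = 600 };
	/* CPU2~3: one task per CPU. */
	struct sg_stats busiest = { .sum_nr_running = 2, .group_weight = 2,
				    .group_capacity = 1178, .group_usage = 1100 };

	bool prefer_sibling = true;	/* SD_PREFER_SIBLING set on the child */
	bool flag_overloaded = prefer_sibling &&
			       group_has_capacity(&local, imbalance_pct) &&
			       busiest.sum_nr_running > 1;

	printf("local group has capacity: %d\n",
	       group_has_capacity(&local, imbalance_pct));
	printf("CPU2~3 flagged as overloaded: %d\n", flag_overloaded);
	return 0;
}

So the CPU2~3 group does get flagged, but that only makes it a
candidate: find_busiest_group() and calculate_imbalance() then look at
the actual group loads, and if the resulting imbalance is too small no
task is pulled.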
Vincent
>
>>
>>>
>>> -Xunlei