Message-ID: <536CF346.6080009@redhat.com>
Date: Fri, 09 May 2014 11:24:54 -0400
From: Rik van Riel <riel@...hat.com>
To: Mike Galbraith <umgwanakikbuti@...il.com>
CC: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, morten.rasmussen@....com,
mingo@...nel.org, george.mccollister@...il.com,
ktkhai@...allels.com
Subject: Re: [PATCH] sched: wake up task on prev_cpu if not in SD_WAKE_AFFINE domain with cpu

On 05/09/2014 11:24 AM, Mike Galbraith wrote:
> On Fri, 2014-05-09 at 10:22 -0400, Rik van Riel wrote:
>> On 05/09/2014 03:34 AM, Mike Galbraith wrote:
>>> On Fri, 2014-05-09 at 01:27 -0400, Rik van Riel wrote:
>>>> On Thu, 08 May 2014 22:20:25 -0400
>>>> Rik van Riel <riel@...hat.com> wrote:
>>>>
>>>>> Looks like SD_BALANCE_WAKE is not taken from the sd flags at
>>>>> all, but passed into select_task_rq() by try_to_wake_up() as a
>>>>> hard-coded sd_flag argument.
>>>>
>>>>> Should we do that, if SD_BALANCE_WAKE is not set for any sched domain?
>>>>
>>>> I answered my own question. The sd_flag SD_BALANCE_WAKE simply means
>>>> "this is a wakeup of a previously existing task, please place it
>>>> properly".
>>>>
>>>> However, it appears that the current code will fall back to the large
>>>> loop with find_idlest_group() and friends if prev_cpu and cpu are not
>>>> part of the same SD_WAKE_AFFINE sched domain. That is a bug...
>>>
>>> ttwu(): cpu = select_task_rq(p, p->wake_cpu, SD_BALANCE_WAKE, wake_flags);
>>>
>>> We pass SD_BALANCE_WAKE for a normal wakeup, so sd will only be set if
>>> we encounter a domain during traversal where Joe User has told us to do
>>> (expensive) wake balancing before we hit a domain shared by waker/wakee.
>>>
>>> The user can turn SD_WAKE_AFFINE off beyond socket, and we'll not pull
>>> cross node on wakeup.
>>>
>>> Or, you could create an override button to say despite SD_WAKE_AFFINE
>>> perhaps having been set during domain construction (because of some
>>> pseudo-random numbers), don't do that if we have a preferred node, or
>>> just make that automatically part of having numa scheduling enabled, and
>>> don't bother wasting cycles if preferred && this != preferred.
>>
>> That's not the problem.
>>
>> The problem is that if we do not do an affine wakeup, due to
>> SD_WAKE_AFFINE not being set on a top level domain, we will
>> not try to run p on prev_cpu, but we will fall through into
>> the loop with find_idlest_group, etc...
>
> If no domain with SD_BALANCE_WAKE in ->flags is encountered during
> traversal, sd remains NULL and we fall through to return prev_cpu.
We do fall through, but into this loop:
	while (sd) {
		struct sched_group *group;
		int weight;

		if (!(sd->flags & sd_flag)) {
			sd = sd->child;
			continue;
		}

		group = find_idlest_group(sd, p, cpu, sd_flag);
		if (!group) {
			sd = sd->child;
			continue;
		}

		new_cpu = find_idlest_cpu(group, p, cpu);
		if (new_cpu == -1 || new_cpu == cpu) {
			/* Now try balancing at a lower domain level of cpu */
			sd = sd->child;
			continue;
		}

		/* Now try balancing at a lower domain level of new_cpu */
		cpu = new_cpu;
		weight = sd->span_weight;
		sd = NULL;
		for_each_domain(cpu, tmp) {
			if (weight <= tmp->span_weight)
				break;
			if (tmp->flags & sd_flag)
				sd = tmp;
		}
		/* while loop will break here if sd == NULL */
	}
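For what it's worth, the fallback behaviour of that loop can be traced with
a toy model (the struct and helper below are invented for illustration, not
the kernel's): once sd is non-NULL, the search skips levels that lack
sd_flag, otherwise adopts the idlest cpu and retries one level down. Note
that prev_cpu appears nowhere in it.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for one level of the sched-domain hierarchy. */
struct domain {
	int has_flag;         /* does ->flags contain sd_flag? */
	int idlest_cpu;       /* stand-in for find_idlest_group()
				 + find_idlest_cpu() at this level */
	struct domain *child; /* next smaller domain */
};

/*
 * Mimics the shape of the loop above: skip levels without sd_flag,
 * otherwise move to the idlest cpu and descend.  The search is rooted
 * at the waking cpu; prev_cpu plays no part in this path.
 */
static int walk_idlest(struct domain *sd, int cpu)
{
	int new_cpu = cpu;

	while (sd) {
		if (!sd->has_flag) {
			sd = sd->child;
			continue;
		}
		new_cpu = sd->idlest_cpu;
		cpu = new_cpu;
		sd = sd->child;
	}
	return new_cpu;
}
```

The point falls out of the model: whatever prev_cpu was, the result depends
only on the waking cpu and the domain tree, which is the fall-through
behaviour being objected to.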