Message-ID: <50FD08E1.8000302@linux.vnet.ibm.com>
Date: Mon, 21 Jan 2013 17:22:41 +0800
From: Michael Wang <wangyun@...ux.vnet.ibm.com>
To: Mike Galbraith <bitbucket@...ine.de>
CC: linux-kernel@...r.kernel.org, mingo@...hat.com,
peterz@...radead.org, mingo@...nel.org, a.p.zijlstra@...llo.nl
Subject: Re: [RFC PATCH 0/2] sched: simplify the select_task_rq_fair()
On 01/21/2013 05:09 PM, Mike Galbraith wrote:
> On Mon, 2013-01-21 at 15:45 +0800, Michael Wang wrote:
>> On 01/21/2013 03:09 PM, Mike Galbraith wrote:
>>> On Mon, 2013-01-21 at 07:42 +0100, Mike Galbraith wrote:
>>>> On Mon, 2013-01-21 at 13:07 +0800, Michael Wang wrote:
>>>
>>>>> Maybe we could try changing this back to the old way later, after the
>>>>> aim7 test on my server.
>>>>
>>>> Yeah, something funny is going on.
>>>
>>> Never entering the balance path kills the collapse. Asking wake_affine()
>>> about the pull as before, but allowing us to continue should no idle cpu
>>> be found, still collapsed. So the source of the funny behavior is indeed
>>> in balance_path.
>>
>> The patch below, based on the patch set, could help avoid entering the
>> balance path when affine_sd can be found, just like the old logic. Would
>> you like to give it a try and see whether it helps fix the collapse?
>
> No, it does not.
Hmm... what has changed now compared to the old logic?
Maybe I missed something; I think I first need to find a machine that
can reproduce the issue.
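
Just to be sure I am comparing the right things, below is a tiny userspace
model of the two flows as I understand them. This is not the real kernel
code: the topology, the scenarios and helpers like find_idle_near() are all
invented, and the "old" flow is only my recollection of what mainline does
when affine_sd is found.

/* wake-model.c -- toy model, *not* kernel code.
 * Build with: gcc -Wall -o wake-model wake-model.c
 */
#include <stdio.h>
#include <stdbool.h>

struct scenario {
	const char *name;
	int waking_cpu;		/* cpu the waker is running on */
	int prev_cpu;		/* cpu the wakee ran on last time */
	int idle;		/* the only idle cpu in the box, -1 if none */
	bool pull;		/* what wake_affine() would answer */
};

/* pretend topology: cpus 0-3 share a cache, cpus 4-7 share another */
static bool shares_cache(int a, int b)
{
	return a / 4 == b / 4;
}

/* stand-in for select_idle_sibling(): the idle cpu if it shares a cache
 * with target, otherwise nothing found */
static int find_idle_near(const struct scenario *s, int target)
{
	return (s->idle != -1 && shares_cache(s->idle, target)) ? s->idle : -1;
}

/* old flow (my reading of mainline): wake_affine() picks the target
 * first, then we search for an idle cpu around it and simply stay on
 * the target if nothing idle turns up -- never the balance path */
static int old_flow(const struct scenario *s)
{
	int target = s->pull ? s->waking_cpu : s->prev_cpu;
	int idle = find_idle_near(s, target);

	return idle != -1 ? idle : target;
}

/* flow with my patch on top of the patch set: prefer an idle cpu near
 * prev_cpu, then one near the waking cpu, and fall back to prev_cpu if
 * nothing idle is found or wake_affine() refuses the pull (the
 * cpu == prev_cpu / missing affine_map case that still goes to the
 * balance path is left out of the model) */
static int patched_flow(const struct scenario *s)
{
	int idle = find_idle_near(s, s->prev_cpu);

	if (idle != -1)
		return idle;

	idle = find_idle_near(s, s->waking_cpu);
	if (idle == -1 || !s->pull)
		return s->prev_cpu;

	return idle;
}

int main(void)
{
	struct scenario cases[] = {
		{ "idle cpu next to prev_cpu",   0, 4,  5, true  },
		{ "idle cpu next to waking cpu", 0, 4,  1, true  },
		{ "no idle cpu, pull allowed",   0, 4, -1, true  },
		{ "no idle cpu, pull refused",   0, 4, -1, false },
	};
	unsigned int i;

	for (i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("%-28s old -> cpu %d, patched -> cpu %d\n",
		       cases[i].name, old_flow(&cases[i]),
		       patched_flow(&cases[i]));

	return 0;
}

If the model is right, the visible differences are that we now look for an
idle cpu around prev_cpu first even when wake_affine() would allow the pull,
and that when nothing idle is found anywhere we always end up on prev_cpu
instead of following the wake_affine() decision. I can't yet say whether
either of those explains the collapse.
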
Regards,
Michael Wang
>
>>
>> Regards,
>> Michael Wang
>>
>> ---
>> kernel/sched/fair.c | 14 ++++++++------
>> 1 files changed, 8 insertions(+), 6 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index d600708..4e95bb0 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3297,6 +3297,8 @@ next:
>>  			sg = sg->next;
>>  		} while (sg != sd->groups);
>>  	}
>> +
>> +	return -1;
>>  done:
>>  	return target;
>>  }
>> @@ -3349,7 +3351,7 @@ select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
>>  		 * some cases.
>>  		 */
>>  		new_cpu = select_idle_sibling(p, prev_cpu);
>> -		if (idle_cpu(new_cpu))
>> +		if (new_cpu != -1)
>>  			goto unlock;
>>
>>  		/*
>> @@ -3363,15 +3365,15 @@ select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
>>  			goto balance_path;
>>
>>  		new_cpu = select_idle_sibling(p, cpu);
>> -		if (!idle_cpu(new_cpu))
>> -			goto balance_path;
>> -
>>  		/*
>>  		 * Invoke wake_affine() finally since it is no doubt a
>>  		 * performance killer.
>>  		 */
>> -		if (wake_affine(sbm->affine_map[prev_cpu], p, sync))
>> -			goto unlock;
>> +		if (new_cpu == -1 ||
>> +		    !wake_affine(sbm->affine_map[prev_cpu], p, sync))
>> +			new_cpu = prev_cpu;
>> +
>> +		goto unlock;
>>  	}
>>
>>  balance_path:
>
>