Message-ID: <1443538525.27815.47.camel@gmail.com>
Date: Tue, 29 Sep 2015 16:55:25 +0200
From: Mike Galbraith <umgwanakikbuti@...il.com>
To: Kirill Tkhai <ktkhai@...n.com>
Cc: linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH] sched/fair: Skip wake_affine() for core siblings
On Mon, 2015-09-28 at 18:36 +0300, Kirill Tkhai wrote:
> ---
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4df37a4..dfbe06b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4930,8 +4930,13 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>  	int want_affine = 0;
>  	int sync = wake_flags & WF_SYNC;
> 
> -	if (sd_flag & SD_BALANCE_WAKE)
> -		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
> +	if (sd_flag & SD_BALANCE_WAKE) {
> +		want_affine = 1;
> +		if (cpu == prev_cpu || !cpumask_test_cpu(cpu, tsk_cpus_allowed(p)))
> +			goto want_affine;
> +		if (wake_wide(p))
> +			goto want_affine;
> +	}
That blew wake_wide() right out of the water.

It's not only about things like pgbench. Drive multiple tasks in a Xen
guest (single event channel dom0 -> domu, and no select_idle_sibling()
to save the day) via network, and watch workers fail to be all they can
be because they keep being stacked up on the irq source. Load balancing
yanks them apart, next irq stacks them right back up. I met that in
enterprise land, thought wake_wide() should cure it, and indeed it did.
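
(For reference, the heuristic in question looks roughly like this in current
mainline; a sketch from memory rather than a verbatim copy of
kernel/sched/fair.c:)

static int wake_wide(struct task_struct *p)
{
	/* Flip counts: how many distinct partners waker/wakee keep switching between. */
	unsigned int master = current->wakee_flips;
	unsigned int slave = p->wakee_flips;
	int factor = this_cpu_read(sd_llc_size);

	if (master < slave)
		swap(master, slave);
	/*
	 * A waker flipping between far more wakees than the LLC can hold
	 * looks like a 1:N dispatcher; refuse the affine wakeup so the
	 * workers spread out instead of stacking up on the waker's CPU.
	 */
	if (slave < factor || master < slave * factor)
		return 0;
	return 1;
}
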
-Mike