Message-ID: <ae4bc21f-d6ce-25fb-1e51-5d41d318b3ec@arm.com>
Date: Tue, 8 Oct 2019 16:48:35 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>, Phil Auld <pauld@...hat.com>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Quentin Perret <quentin.perret@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <Morten.Rasmussen@....com>,
Hillf Danton <hdanton@...a.com>
Subject: Re: [PATCH v3 04/10] sched/fair: rework load_balance
On 08/10/2019 16:30, Vincent Guittot wrote:
[...]
>
> This is how I plan to get rid of the problem:
> + if (busiest->group_weight == 1 || sds->prefer_sibling) {
> + unsigned int nr_diff = busiest->sum_h_nr_running;
> + /*
> + * When prefer sibling, evenly spread running tasks on
> + * groups.
> + */
> + env->migration_type = migrate_task;
> + lsub_positive(&nr_diff, local->sum_h_nr_running);
> + env->imbalance = nr_diff >> 1;
> + return;
> + }
>
I think this wants a
/* Local could have more tasks than busiest */
atop the lsub, otherwise yeah that ought to work.
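
For reference, the reason that comment matters: sum_h_nr_running is
unsigned, so a plain subtraction would wrap around to a huge value
whenever local has more h-tasks than busiest, yielding a nonsense
imbalance. Here's a minimal standalone sketch of the saturating
subtraction; the macro body is illustrative, modelled on what
lsub_positive() does in kernel/sched/fair.c:

    #include <stdio.h>

    /*
     * *ptr -= val, saturating at 0 instead of wrapping around.
     * Illustrative stand-in for lsub_positive() in kernel/sched/fair.c.
     * Note _ptr is evaluated more than once; fine for a sketch.
     */
    #define lsub_positive(_ptr, _val) do {                          \
            typeof(*(_ptr)) __val = (_val);                         \
            *(_ptr) -= (*(_ptr) < __val) ? *(_ptr) : __val;         \
    } while (0)

    int main(void)
    {
            unsigned int nr_diff = 2;   /* busiest->sum_h_nr_running */
            unsigned int local_nr = 5;  /* local->sum_h_nr_running   */

            /* Local could have more tasks than busiest */
            lsub_positive(&nr_diff, local_nr);

            /* Without the clamp, 2 - 5 would wrap to UINT_MAX - 2;
             * with it, nr_diff is 0 and the imbalance stays sane. */
            printf("imbalance = %u\n", nr_diff >> 1);
            return 0;
    }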