Message-ID: <20160711111344.GO30909@twins.programming.kicks-ass.net>
Date: Mon, 11 Jul 2016 13:13:44 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Morten Rasmussen <morten.rasmussen@....com>
Cc: mingo@...hat.com, dietmar.eggemann@....com, yuyang.du@...el.com,
vincent.guittot@...aro.org, mgalbraith@...e.de,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 07/13] sched/fair: Let asymmetric cpu configurations
balance at wake-up
On Wed, Jun 22, 2016 at 06:03:18PM +0100, Morten Rasmussen wrote:
> Currently, SD_WAKE_AFFINE always takes priority over wakeup balancing if
> SD_BALANCE_WAKE is set on the sched_domains. For asymmetric
> configurations SD_WAKE_AFFINE is only desirable if the waking task's
> compute demand (utilization) is suitable for all the cpu capacities
> available within the SD_WAKE_AFFINE sched_domain. If not, let wakeup
> balancing take over (find_idlest_{group, cpu}()).
I think I tripped over this one the last time around, and I'm not sure
this Changelog is any clearer.
This is about the case where the waking cpu and prev_cpu are both in the
'wrong' cluster, right?
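
For reference, a minimal sketch of how I read the intended gating in
select_task_rq_fair(); the exact wiring in the patch may differ:

	/*
	 * Sketch only: assumes wake_cap() gates want_affine alongside
	 * wake_wide(), as the changelog describes.
	 */
	want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu) &&
		      cpumask_test_cpu(cpu, tsk_cpus_allowed(p));

	/*
	 * When both the waking cpu and prev_cpu sit in a low-capacity
	 * cluster that cannot hold the task, want_affine stays 0 and the
	 * wake-up falls through to find_idlest_group()/find_idlest_cpu().
	 */
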
> This patch makes affine wake-ups conditional on whether both the waker
> cpu and prev_cpu have sufficient capacity for the waking task.
>
> It is assumed that the sched_group(s) containing the waker cpu and
> prev_cpu only contain cpus with the same capacity (homogeneous).
>
> Ideally, we shouldn't set 'want_affine' in the first place, but we don't
> know if SD_BALANCE_WAKE is enabled on the sched_domain(s) until we start
> traversing them.
Is this again more fallout from that weird ASYM_CAP thing?
> +static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
> +{
> +	long min_cap, max_cap;
> +
> +	min_cap = min(capacity_orig_of(prev_cpu), capacity_orig_of(cpu));
> +	max_cap = cpu_rq(cpu)->rd->max_cpu_capacity;
> +
> +	/* Minimum capacity is close to max, no need to abort wake_affine */
> +	if (max_cap - min_cap < max_cap >> 3)
> +		return 0;
> +
> +	return min_cap * 1024 < task_util(p) * capacity_margin;
> +}
I'm most puzzled by these inequalities, how, why?
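
As far as the raw arithmetic goes (assuming capacity_margin is the 1280
used elsewhere in this series, and taking 430/1024 as an illustrative
little/big capacity_orig split), the test amounts to:

	max_cap - min_cap = 1024 - 430 = 594
	max_cap >> 3      = 1024 / 8  = 128

	594 >= 128, so the asymmetry is significant and we don't bail out.

	min_cap * 1024 < task_util(p) * capacity_margin
	  <=> task_util(p) > min_cap * 1024 / 1280 = 430 * 0.8 = 344

	i.e. a task using more than ~80% of the little cpu's capacity makes
	wake_cap() return 1 and the affine wake-up path is rejected.
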
I would figure you'd compare task_util to the current remaining util of
the small group, and if it fits, place it there. This seems to do
something entirely different.
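
Something along these lines is what I would have expected; just a sketch
of the direction I mean, reusing the existing capacity_of()/cpu_util()
helpers and checking per cpu rather than per group, untested:

	static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
	{
		/*
		 * Sketch only: fit the task against the capacity actually
		 * left over on the candidate cpus instead of against
		 * capacity_orig.
		 */
		long spare_cpu  = (long)capacity_of(cpu) - (long)cpu_util(cpu);
		long spare_prev = (long)capacity_of(prev_cpu) -
				  (long)cpu_util(prev_cpu);

		/* Reject the affine wake-up only if the task fits on neither. */
		return (long)task_util(p) > max(spare_cpu, spare_prev);
	}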