Message-ID: <a2f9e7d1-08c6-2545-2088-e0226ffd79e0@arm.com>
Date: Wed, 29 Jan 2020 12:04:55 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Valentin Schneider <valentin.schneider@....com>,
linux-kernel@...r.kernel.org
Cc: mingo@...hat.com, peterz@...radead.org, vincent.guittot@...aro.org,
morten.rasmussen@....com, qperret@...gle.com,
adharmap@...eaurora.org
Subject: Re: [PATCH v3 1/3] sched/fair: Add asymmetric CPU capacity wakeup scan
On 26/01/2020 21:09, Valentin Schneider wrote:
[...]
> +static int select_idle_capacity(struct task_struct *p, int target)
> +{
> + unsigned long best_cap = 0;
> + struct sched_domain *sd;
> + struct cpumask *cpus;
> + int best_cpu = -1;
> + struct rq *rq;
> + int cpu;
> +
> + if (!static_branch_unlikely(&sched_asym_cpucapacity))
> + return -1;
> +
> + sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
> + if (!sd)
> + return -1;
> +
> + sync_entity_load_avg(&p->se);
> +
> + cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> + cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +
> + for_each_cpu_wrap(cpu, cpus, target) {
> + rq = cpu_rq(cpu);
> +
> + if (!available_idle_cpu(cpu))
> + continue;
> + if (task_fits_capacity(p, rq->cpu_capacity))
> + return cpu;
> +
> + /*
> + * It would be silly to keep looping when we've found a CPU
> + * of highest available capacity. Just check that it's not been
> + * too pressured lately.
> + */
> + if (rq->cpu_capacity_orig == READ_ONCE(rq->rd->max_cpu_capacity) &&
> + !check_cpu_capacity(rq, sd))
> + return cpu;

There is a similar check in check_misfit_status(). Could this become a
common helper function?
I wonder how this special treatment of the big CPU behaves on a (LITTLE,
medium, big) system like the Pixel 4 (Snapdragon 855):
flame:/ $ cat /sys/devices/system/cpu/cpu*/cpu_capacity
261
261
261
261
871
871
871
1024
Or on legacy systems, where sd->imbalance_pct is 25% instead of 17%?