Message-ID: <20200128062245.GA27398@codeaurora.org>
Date:   Tue, 28 Jan 2020 11:52:45 +0530
From:   Pavan Kondeti <pkondeti@...eaurora.org>
To:     Valentin Schneider <valentin.schneider@....com>
Cc:     linux-kernel@...r.kernel.org, mingo@...hat.com,
        peterz@...radead.org, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, morten.rasmussen@....com,
        qperret@...gle.com, adharmap@...eaurora.org
Subject: Re: [PATCH v3 1/3] sched/fair: Add asymmetric CPU capacity wakeup
 scan

Hi Valentin,

On Sun, Jan 26, 2020 at 08:09:32PM +0000, Valentin Schneider wrote:
>  
> +static inline int check_cpu_capacity(struct rq *rq, struct sched_domain *sd);
> +
> +/*
> + * Scan the asym_capacity domain for idle CPUs; pick the first idle one on which
> + * the task fits. If no CPU is big enough, but there are idle ones, try to
> + * maximize capacity.
> + */
> +static int select_idle_capacity(struct task_struct *p, int target)
> +{
> +	unsigned long best_cap = 0;
> +	struct sched_domain *sd;
> +	struct cpumask *cpus;
> +	int best_cpu = -1;
> +	struct rq *rq;
> +	int cpu;
> +
> +	if (!static_branch_unlikely(&sched_asym_cpucapacity))
> +		return -1;
> +
> +	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
> +	if (!sd)
> +		return -1;
> +
> +	sync_entity_load_avg(&p->se);
> +
> +	cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> +	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +
> +	for_each_cpu_wrap(cpu, cpus, target) {
> +		rq = cpu_rq(cpu);
> +
> +		if (!available_idle_cpu(cpu))
> +			continue;
> +		if (task_fits_capacity(p, rq->cpu_capacity))
> +			return cpu;

I have a couple of questions.

(1) Any particular reason for not checking sched_idle_cpu() as a backup
for the case where all eligible CPUs are busy? select_idle_cpu() does
that.
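
For instance, an (untested) sketch of what I mean, reusing the helpers
from your patch; the sched_idle_cpu() check is the only addition:

	for_each_cpu_wrap(cpu, cpus, target) {
		rq = cpu_rq(cpu);

		/* also accept CPUs that only run SCHED_IDLE tasks */
		if (!available_idle_cpu(cpu) && !sched_idle_cpu(cpu))
			continue;
		if (task_fits_capacity(p, rq->cpu_capacity))
			return cpu;
		...
	}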

(2) Assuming all eligible CPUs are busy, we return -1 from here and end
up calling select_idle_cpu(). That traversal is wasted work in cases
where sd_llc == sd_asym_cpucapacity, as on SDM845 for example. Should
we worry about this?
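
One (untested) idea, using the existing sd_llc and sd_asym_cpucapacity
per-CPU pointers; the bail-out in select_idle_sibling() and the local
variable names are hypothetical:

	struct sched_domain *asym_sd, *llc_sd;

	i = select_idle_capacity(p, target);
	if ((unsigned)i < nr_cpumask_bits)
		return i;

	asym_sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
	llc_sd  = rcu_dereference(per_cpu(sd_llc, target));

	/*
	 * If the capacity-aware scan already covered the whole LLC,
	 * falling through to select_idle_cpu() just rescans the same CPUs.
	 */
	if (asym_sd && llc_sd &&
	    cpumask_subset(sched_domain_span(llc_sd),
			   sched_domain_span(asym_sd)))
		return target;

Returning target there would match what select_idle_sibling() falls
back to anyway when nothing idle is found.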

> +
> +		/*
> +		 * It would be silly to keep looping when we've found a CPU
> +		 * of highest available capacity. Just check that it's not been
> +		 * too pressured lately.
> +		 */
> +		if (rq->cpu_capacity_orig == READ_ONCE(rq->rd->max_cpu_capacity) &&
> +		    !check_cpu_capacity(rq, sd))
> +			return cpu;
> +
> +		if (rq->cpu_capacity > best_cap) {
> +			best_cap = rq->cpu_capacity;
> +			best_cpu = cpu;
> +		}
> +	}
> +
> +	return best_cpu;
> +}
> +
>  /*
>   * Try and locate an idle core/thread in the LLC cache domain.
>   */
> @@ -5902,6 +5956,11 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>  	struct sched_domain *sd;
>  	int i, recent_used_cpu;
>  
> +	/* For asymmetric capacities, try to be smart about the placement */
> +	i = select_idle_capacity(p, target);
> +	if ((unsigned)i < nr_cpumask_bits)
> +		return i;
> +
>  	if (available_idle_cpu(target) || sched_idle_cpu(target))
>  		return target;
>  
> -- 
> 2.24.0
> 

Thanks,
Pavan

-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
