Message-ID: <20150220112743.GN5029@twins.programming.kicks-ass.net>
Date:	Fri, 20 Feb 2015 12:27:43 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Vincent Guittot <vincent.guittot@...aro.org>
Cc:	mingo@...nel.org, linux-kernel@...r.kernel.org,
	preeti@...ux.vnet.ibm.com, Morten.Rasmussen@....com,
	kamalesh@...ux.vnet.ibm.com, riel@...hat.com, efault@....de,
	nicolas.pitre@...aro.org, dietmar.eggemann@....com,
	linaro-kernel@...ts.linaro.org
Subject: Re: [PATCH RESEND v9 10/10] sched: move cfs task on a CPU with
 higher capacity

On Thu, Jan 15, 2015 at 11:09:30AM +0100, Vincent Guittot wrote:
> As a sidenote, this will note generate more spurious ilb because we already

s/note/not/

> trig an ilb if there is more than 1 busy cpu. If this cpu is the only one that
> has a task, we will trig the ilb once for migrating the task.

> +static inline bool nohz_kick_needed(struct rq *rq)
>  {
>  	unsigned long now = jiffies;
>  	struct sched_domain *sd;
>  	struct sched_group_capacity *sgc;
>  	int nr_busy, cpu = rq->cpu;
> +	bool kick = false;
>  
>  	if (unlikely(rq->idle_balance))
> +		return false;
>  
>  	/*
>  	* We may be recently in ticked or tickless idle mode. At the first
> @@ -7472,38 +7498,44 @@ static inline int nohz_kick_needed(struct rq *rq)
>  	 * balancing.
>  	 */
>  	if (likely(!atomic_read(&nohz.nr_cpus)))
> +		return false;
>  
>  	if (time_before(now, nohz.next_balance))
> +		return false;
>  
>  	if (rq->nr_running >= 2)
> +		return true;

So this,

>  	rcu_read_lock();
>  	sd = rcu_dereference(per_cpu(sd_busy, cpu));
>  	if (sd) {
>  		sgc = sd->groups->sgc;
>  		nr_busy = atomic_read(&sgc->nr_busy_cpus);
>  
> +		if (nr_busy > 1) {
> +			kick = true;
> +			goto unlock;
> +		}
> +
>  	}
>  
> +	sd = rcu_dereference(rq->sd);
> +	if (sd) {
> +		if ((rq->cfs.h_nr_running >= 1) &&
> +				check_cpu_capacity(rq, sd)) {
> +			kick = true;
> +			goto unlock;
> +		}
> +	}

vs this: how would we ever get here?

If h_nr_running > 1, must nr_running not be > 1 as well?
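
To spell out the invariant behind that question (my reading, not
something the patch itself states):

	/*
	 * rq->cfs.h_nr_running counts only the CFS tasks enqueued on
	 * this rq, while rq->nr_running counts tasks of all sched
	 * classes, so:
	 *
	 *	rq->cfs.h_nr_running <= rq->nr_running
	 *
	 * Hence h_nr_running > 1 implies nr_running >= 2, and we would
	 * already have returned true at the nr_running check above.
	 */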

>  
> +	sd = rcu_dereference(per_cpu(sd_asym, cpu));
>  	if (sd && (cpumask_first_and(nohz.idle_cpus_mask,
>  				  sched_domain_span(sd)) < cpu))
> +		kick = true;

For consistency's sake I would've added a goto unlock here as well.
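
Something like this (untested sketch):

	sd = rcu_dereference(per_cpu(sd_asym, cpu));
	if (sd && (cpumask_first_and(nohz.idle_cpus_mask,
				  sched_domain_span(sd)) < cpu)) {
		kick = true;
		goto unlock;
	}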

> +unlock:
>  	rcu_read_unlock();
> +	return kick;
>  }