Message-ID: <20140710113120.GA3935@laptop>
Date:	Thu, 10 Jul 2014 13:31:20 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Vincent Guittot <vincent.guittot@...aro.org>
Cc:	mingo@...nel.org, linux-kernel@...r.kernel.org,
	linux@....linux.org.uk, linux-arm-kernel@...ts.infradead.org,
	preeti@...ux.vnet.ibm.com, Morten.Rasmussen@....com, efault@....de,
	nicolas.pitre@...aro.org, linaro-kernel@...ts.linaro.org,
	daniel.lezcano@...aro.org, dietmar.eggemann@....com
Subject: Re: [PATCH v3 08/12] sched: move cfs task on a CPU with higher capacity

On Mon, Jun 30, 2014 at 06:05:39PM +0200, Vincent Guittot wrote:

You 'forgot' to update the comment that goes with nohz_kick_needed().
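
That comment enumerates the conditions under which we kick the idle load
balancer, so it would need to grow the new capacity check. A rough sketch
(the exact wording is mine, just to illustrate; the conditions are the
ones the code below tests):

/*
 * Current heuristic for kicking the idle load balancer in the presence
 * of an idle CPU in the system:
 *   - this rq has more than one runnable task, or
 *   - this rq has at least one CFS task and the CPU's capacity has been
 *     reduced by more than imbalance_pct below cpu_capacity_orig
 *     (e.g. by RT tasks or IRQ time), or
 *   - at the sd_busy level, this CPU's group has more than one busy CPU, or
 *   - for SD_ASYM_PACKING (sd_asym), a lower-numbered CPU in the domain
 *     span is idle.
 */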

> @@ -7233,9 +7253,10 @@ static inline int nohz_kick_needed(struct rq *rq)
>  	struct sched_domain *sd;
>  	struct sched_group_capacity *sgc;
>  	int nr_busy, cpu = rq->cpu;
> +	bool kick = false;
>  
>  	if (unlikely(rq->idle_balance))
> +		return false;
>  
>         /*
>  	* We may be recently in ticked or tickless idle mode. At the first
> @@ -7249,38 +7270,41 @@ static inline int nohz_kick_needed(struct rq *rq)
>  	 * balancing.
>  	 */
>  	if (likely(!atomic_read(&nohz.nr_cpus)))
> +		return false;
>  
>  	if (time_before(now, nohz.next_balance))
> +		return false;
>  
>  	if (rq->nr_running >= 2)
> +		return true;
>  
>  	rcu_read_lock();
>  	sd = rcu_dereference(per_cpu(sd_busy, cpu));
>  	if (sd) {
>  		sgc = sd->groups->sgc;
>  		nr_busy = atomic_read(&sgc->nr_busy_cpus);
>  
> +		if (nr_busy > 1) {
> +			kick = true;
> +			goto unlock;
> +		}
> +
> +		if ((rq->cfs.h_nr_running >= 1)
> +		 && ((rq->cpu_capacity * sd->imbalance_pct) <
> +					(rq->cpu_capacity_orig * 100))) {
> +			kick = true;
> +			goto unlock;
> +		}

Again, why only for shared caches?
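
(For scale: assuming imbalance_pct is 125 at that level -- the common
default; the actual value depends on the topology -- and cpu_capacity_orig
is SCHED_CAPACITY_SCALE (1024), the new check kicks once

	cpu_capacity * 125 < 1024 * 100,  i.e.  cpu_capacity < ~819,

that is, once more than roughly 20% of the CPU's original capacity is
consumed by RT tasks or IRQ time.)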

>  	}
>  
>  	sd = rcu_dereference(per_cpu(sd_asym, cpu));
>  	if (sd && (cpumask_first_and(nohz.idle_cpus_mask,
>  				  sched_domain_span(sd)) < cpu))
> +		kick = true;
>  
> +unlock:
>  	rcu_read_unlock();
> +	return kick;
>  }
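
For reference, this is how the whole function reads once the hunks above
are applied. The unquoted lines (the jiffies read, set_cpu_sd_state_busy(),
nohz_balance_exit_idle()) are filled in from the surrounding code of that
kernel and are a best-effort reconstruction, not part of the quoted patch:

static inline int nohz_kick_needed(struct rq *rq)
{
	unsigned long now = jiffies;		/* assumed from context */
	struct sched_domain *sd;
	struct sched_group_capacity *sgc;
	int nr_busy, cpu = rq->cpu;
	bool kick = false;

	if (unlikely(rq->idle_balance))
		return false;

	/*
	 * We may be recently in ticked or tickless idle mode. At the first
	 * busy tick after returning from idle, we will update the busy stats.
	 */
	set_cpu_sd_state_busy();		/* assumed from context */
	nohz_balance_exit_idle(cpu);		/* assumed from context */

	/*
	 * None are in tickless mode and hence no need for NOHZ idle load
	 * balancing.
	 */
	if (likely(!atomic_read(&nohz.nr_cpus)))
		return false;

	if (time_before(now, nohz.next_balance))
		return false;

	/* More than one runnable task: kick unconditionally. */
	if (rq->nr_running >= 2)
		return true;

	rcu_read_lock();
	sd = rcu_dereference(per_cpu(sd_busy, cpu));
	if (sd) {
		sgc = sd->groups->sgc;
		nr_busy = atomic_read(&sgc->nr_busy_cpus);

		/* Another busy CPU shares the cache: kick. */
		if (nr_busy > 1) {
			kick = true;
			goto unlock;
		}

		/*
		 * At least one CFS task, and the CPU's capacity is reduced
		 * by more than imbalance_pct below its original capacity:
		 * kick.
		 */
		if ((rq->cfs.h_nr_running >= 1) &&
		    ((rq->cpu_capacity * sd->imbalance_pct) <
		     (rq->cpu_capacity_orig * 100))) {
			kick = true;
			goto unlock;
		}
	}

	/* SD_ASYM_PACKING: a lower-numbered CPU in the span is idle. */
	sd = rcu_dereference(per_cpu(sd_asym, cpu));
	if (sd && (cpumask_first_and(nohz.idle_cpus_mask,
				     sched_domain_span(sd)) < cpu))
		kick = true;

unlock:
	rcu_read_unlock();
	return kick;	/* bool, though the return type is still int here */
}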

