Message-ID: <5010b09f-6954-fda6-a10f-a8aa05806866@arm.com>
Date:   Wed, 6 Feb 2019 17:26:06 +0000
From:   Valentin Schneider <valentin.schneider@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        vincent.guittot@...aro.org, morten.rasmussen@....com,
        Dietmar.Eggemann@....com
Subject: Re: [PATCH 5/5] sched/fair: Skip LLC nohz logic for asymmetric
 systems

Hi,

On 06/02/2019 16:14, Peter Zijlstra wrote:
[...]
>> @@ -9545,6 +9545,17 @@ static void nohz_balancer_kick(struct rq *rq)
>>  	}
>>  
>>  	rcu_read_lock();
>> +
>> +	if (static_branch_unlikely(&sched_asym_cpucapacity))
>> +		/*
>> +		 * For asymmetric systems, we do not want to nicely balance
>> +		 * cache use, instead we want to embrace asymmetry and only
>> +		 * ensure tasks have enough CPU capacity.
>> +		 *
>> +		 * Skip the LLC logic because it's not relevant in that case.
>> +		 */
>> +		goto check_capacity;
>> +
>>  	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
>>  	if (sds) {
>>  		/*
> 
> Since (before this) the actual order of the various tests doesn't
> matter, it's a logical cascade of conditions under which to set
> NOHZ_KICK_MASK.
> 

Ah, I assumed the order did matter somewhat, with the "cheaper" LLC check
first and the more costly loops further down, for when we are still
looking for a reason to kick.
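Something like this is the shape I had in mind (a hand-wavy sketch; the
helper names are made up, not actual kernel functions):

	rcu_read_lock();

	/* cheap: a single atomic_read() of nr_busy_cpus */
	if (llc_has_busy_cpus(cpu)) {
		flags = NOHZ_KICK_MASK;
		goto unlock;
	}

	/* costly: walks sched_domain_span() & nohz.idle_cpus_mask */
	if (asym_packing_pref_cpu_idle(cpu)) {
		flags = NOHZ_KICK_MASK;
		goto unlock;
	}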

> We can easily reorder and short-circuit the cascade like so, no?
> 
> The only concern is if sd_llc_shared < sd_asym_cpucapacity (i.e. the
> LLC domain spans fewer CPUs than the asymmetric-capacity domain); in
> which case we just lost a balance opportunity. Not sure how to best
> retain that though.
> 

I'm afraid I don't follow - we don't lose a balance opportunity with the
below change (compared to the original patch), do we?
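To restate the scenario as I understand it, with a made-up big.LITTLE
style topology:

	sd_llc_shared        spans CPUs 0-3	(one cluster)
	sd_asym_cpucapacity  spans CPUs 0-7	(the whole package)

With your reorder, reaching the sd_asym_cpucapacity block always ends in
"goto unlock", so the nr_busy check on CPUs 0-3 is never made - but the
original patch skipped it on such systems too, hence my question.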

> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9568,25 +9568,6 @@ static void nohz_balancer_kick(struct rq
>  	}
>  
>  	rcu_read_lock();
> -	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
> -	if (sds) {
> -		/*
> -		 * If there is an imbalance between LLC domains (IOW we could
> -		 * increase the overall cache use), we need some less-loaded LLC
> -		 * domain to pull some load. Likewise, we may need to spread
> -		 * load within the current LLC domain (e.g. packed SMT cores but
> -		 * other CPUs are idle). We can't really know from here how busy
> -		 * the others are - so just get a nohz balance going if it looks
> -		 * like this LLC domain has tasks we could move.
> -		 */
> -		nr_busy = atomic_read(&sds->nr_busy_cpus);
> -		if (nr_busy > 1) {
> -			flags = NOHZ_KICK_MASK;
> -			goto unlock;
> -		}
> -
> -	}
> -
>  	sd = rcu_dereference(rq->sd);
>  	if (sd) {
>  		/*
> @@ -9600,6 +9581,20 @@ static void nohz_balancer_kick(struct rq
>  		}
>  	}
>  
> +	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
> +	if (sd) {
> +		/*
> +		 * When ASYM_PACKING; see if there's a more preferred CPU going
> +		 * idle; in which case, kick the ILB to move tasks around.
> +		 */
> +		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
> +			if (sched_asym_prefer(i, cpu)) {
> +				flags = NOHZ_KICK_MASK;
> +				goto unlock;
> +			}
> +		}
> +	}
> +
>  	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
>  	if (sd) {
>  		/*
> @@ -9610,21 +9605,36 @@ static void nohz_balancer_kick(struct rq
>  			flags = NOHZ_KICK_MASK;
>  			goto unlock;
>  		}
> +
> +		/*
> +		 * For asymmetric systems, we do not want to nicely balance
> +		 * cache use, instead we want to embrace asymmetry and only
> +		 * ensure tasks have enough CPU capacity.
> +		 *
> +		 * Skip the LLC logic because it's not relevant in that case.
> +		 */
> +		goto unlock;
>  	}
>  
> -	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
> -	if (sd) {
> +	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
> +	if (sds) {
>  		/*
> -		 * When ASYM_PACKING; see if there's a more preferred CPU going
> -		 * idle; in which case, kick the ILB to move tasks around.
> +		 * If there is an imbalance between LLC domains (IOW we could
> +		 * increase the overall cache use), we need some less-loaded LLC
> +		 * domain to pull some load. Likewise, we may need to spread
> +		 * load within the current LLC domain (e.g. packed SMT cores but
> +		 * other CPUs are idle). We can't really know from here how busy
> +		 * the others are - so just get a nohz balance going if it looks
> +		 * like this LLC domain has tasks we could move.
>  		 */
> -		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
> -			if (sched_asym_prefer(i, cpu)) {
> -				flags = NOHZ_KICK_MASK;
> -				goto unlock;
> -			}
> +		nr_busy = atomic_read(&sds->nr_busy_cpus);
> +		if (nr_busy > 1) {
> +			flags = NOHZ_KICK_MASK;
> +			goto unlock;
>  		}
> +
>  	}
> +
>  unlock:
>  	rcu_read_unlock();
>  out:
> 
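To spell out my reading of the reordered cascade (condensed; the helper
names are made up, since the actual conditions are elided in the hunks
above):

	rcu_read_lock();

	if (rq_looks_overloaded(rq))		/* rq->sd */
		goto kick;

	if (asym_packing_pref_cpu_idle(cpu))	/* sd_asym_packing */
		goto kick;

	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
	if (sd) {
		if (rq_has_misfit_task(rq))	/* made-up name */
			goto kick;
		/* asym system: skip the LLC logic below */
		goto unlock;
	}

	if (llc_nr_busy_cpus(cpu) > 1)		/* sd_llc_shared */
		goto kick;

	goto unlock;
kick:
	flags = NOHZ_KICK_MASK;
unlock:
	rcu_read_unlock();

IOW on an asymmetric system we never reach the LLC check, which is what
the original patch intended.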
