Date:   Fri, 20 Dec 2019 12:40:02 +0000
From:   Valentin Schneider <valentin.schneider@....com>
To:     Mel Gorman <mgorman@...hsingularity.net>,
        Vincent Guittot <vincent.guittot@...aro.org>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>, pauld@...hat.com,
        srikar@...ux.vnet.ibm.com, quentin.perret@....com,
        dietmar.eggemann@....com, Morten.Rasmussen@....com,
        hdanton@...a.com, parth@...ux.ibm.com, riel@...riel.com,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched, fair: Allow a small degree of load imbalance
 between SD_NUMA domains v2

On 20/12/2019 08:42, Mel Gorman wrote:
> In general, the patch simply seeks to avoid unnecessary cross-node
> migrations when a machine is lightly loaded, but it shows benefits for
> other workloads as well. Tests are still running, but so far it seems to
> benefit lightly-utilised smaller workloads on large machines and does not
> appear to do any harm to larger or parallelised workloads.
> 
> [valentin.schneider@....com: Reformat code flow, correct comment, use idle_cpus]

I think only the comment bit is still there in this version and it's not
really worth mentioning (but I do thank you for doing it!).

> @@ -8671,6 +8667,39 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  			return;
>  		}
>  
> +		/* Consider allowing a small imbalance between NUMA groups */
> +		if (env->sd->flags & SD_NUMA) {
> +			unsigned int imbalance_adj, imbalance_max;
> +
> +			/*
> +			 * imbalance_adj is the allowable degree of imbalance
> +			 * to exist between two NUMA domains. It's calculated
> +			 * relative to imbalance_pct with a minimum of two
> +			 * tasks or idle CPUs. The choice of two is due to
> +			 * the most basic case of two communicating tasks
> +			 * that should remain on the same NUMA node after
> +			 * wakeup.
> +			 */
> +			imbalance_adj = max(2U, (busiest->group_weight *
> +				(env->sd->imbalance_pct - 100) / 100) >> 1);
> +
> +			/*
> +			 * Ignore small imbalances unless the busiest sd has
> +			 * almost half as many busy CPUs as there are
> +			 * available CPUs in the busiest group. Note that

This is all on the busiest group, so this should be more like:

			 * Ignore small imbalances unless almost half of the
			 * busiest sg's CPUs are busy.

> +			 * it is not exactly half as imbalance_adj must be
> +			 * accounted for or the two domains do not converge
> +			 * as equally balanced if the number of busy tasks is
> +			 * roughly the size of one NUMA domain.
> +			 */
> +			imbalance_max = (busiest->group_weight >> 1) + imbalance_adj;
> +			if (env->imbalance <= imbalance_adj &&

I'm confused now, have we set env->imbalance to anything at this point? AIUI
Vincent's suggestion was to hinge this purely on the weight vs idle_cpus /
nr_running, IOW not use imbalance:

if (env->sd->flags & SD_NUMA) {
	imbalance_adj = max(2U, (busiest->group_weight *
				 (env->sd->imbalance_pct - 100) / 100) >> 1);
	imbalance_max = (busiest->group_weight >> 1) + imbalance_adj;

	if (busiest->idle_cpus >= imbalance_max) {
		env->imbalance = 0;
		return;
	}
}
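
For concreteness, plugging made-up numbers into that (a 20-CPU node and the
default NUMA imbalance_pct of 125 - both just assumptions for the sake of
the example):

	/* busiest->group_weight == 20, env->sd->imbalance_pct == 125 */
	imbalance_adj = max(2U, (20 * (125 - 100) / 100) >> 1)
	              = max(2U, 5 >> 1)
	              = 2
	imbalance_max = (20 >> 1) + 2 = 12

IOW we'd declare the node balanced for as long as it still has at least 12
idle CPUs, i.e. up to 8 runnable tasks.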
       
Now, I have to say I'm not sold on the idle_cpus thing, I'd much rather use
the number of runnable tasks. We are setting up a threshold for how far we
are willing to ignore imbalances; if we have overloaded CPUs we *really*
should try to solve this. Number of tasks is the safer option IMO: when we
do have one task per CPU, it'll be the same as if we had used idle_cpus, and
when we don't have one task per CPU we'll load-balance more often than we
would have with idle_cpus.
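
Something like the below is what I'd have in mind (untested sketch; the
threshold is flipped around for sum_nr_running on the assumption that no CPU
runs more than one task, i.e. idle_cpus == group_weight - sum_nr_running):

	if (env->sd->flags & SD_NUMA) {
		imbalance_adj = max(2U, (busiest->group_weight *
					 (env->sd->imbalance_pct - 100) / 100) >> 1);
		imbalance_max = (busiest->group_weight >> 1) - imbalance_adj;

		if (busiest->sum_nr_running <= imbalance_max) {
			env->imbalance = 0;
			return;
		}
	}

With the example numbers above that's sum_nr_running <= 8: identical to the
idle_cpus check when nothing is stacked, but we keep balancing when some CPUs
run more than one task.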

> +			    busiest->idle_cpus >= imbalance_max) {
> +				env->imbalance = 0;
> +				return;
> +			}
> +		}
> +
>  		if (busiest->group_weight == 1 || sds->prefer_sibling) {
>  			unsigned int nr_diff = busiest->sum_nr_running;
>  			/*
> 
