Date:   Wed, 6 Jan 2021 16:13:25 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     mingo@...hat.com, juri.lelli@...hat.com, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] sched/fair: reduce cases for active balance

On Wed, Jan 06, 2021 at 02:34:19PM +0100, Vincent Guittot wrote:
> Active balance is triggered for a number of voluntary case like misfit or
							cases
> pinned tasks cases but also after that a number of load balance failed to
								 ^attempts
> migrate a task. Remove the active load balance case for overloaded group
							 ^an ?
> as an overloaded state means that there is at least one waiting tasks. The
								  task
> threshold on the upper limit of the task's load will decrease with the
> number of failed LB until the task has migrated.

And I'm not sure I follow that last part, irrespective of spelling nits,
help?
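My guess is you mean the nr_balance_failed scaling applied when deciding
whether a task's load is too big to pull -- roughly this idea, sketched
from memory rather than quoted from fair.c (hypothetical helper name):

	/*
	 * Sketch of the mechanism as I read it, not the actual fair.c code:
	 * each failed balance attempt halves the load we consider "too big"
	 * to migrate, so a task that keeps being skipped eventually qualifies.
	 */
	static bool load_above_limit(unsigned long load, unsigned long imbalance,
				     unsigned int nr_balance_failed)
	{
		return (load >> nr_balance_failed) > imbalance;
	}

If that's what the changelog means, please spell it out.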

> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> ---
>  kernel/sched/fair.c | 43 +++++++++++++++++++++----------------------
>  1 file changed, 21 insertions(+), 22 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 69a455113b10..ee87fd6f7359 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9499,13 +9499,30 @@ asym_active_balance(struct lb_env *env)
>  }
>  
>  static inline bool
> -voluntary_active_balance(struct lb_env *env)
> +imbalanced_active_balance(struct lb_env *env)
> +{
> +	struct sched_domain *sd = env->sd;
> +
> +	/* The imbalanced case includes the case of pinned tasks preventing a fair
> +	 * distribution of the load on the system but also the even distribution of the
> +	 * threads on a system with spare capacity
> +	 */

comment style fail
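i.e. coding-style wants the opening /* of a multi-line comment on its own
line, something like:

	/*
	 * The imbalanced case includes the case of pinned tasks preventing a fair
	 * distribution of the load on the system but also the even distribution of
	 * the threads on a system with spare capacity.
	 */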

> +	if ((env->migration_type == migrate_task) &&
> +		(sd->nr_balance_failed > sd->cache_nice_tries+2))

indent fail; try: set cino=(0:0
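i.e. align the continuation line with the open paren:

	if ((env->migration_type == migrate_task) &&
	    (sd->nr_balance_failed > sd->cache_nice_tries+2))
		return 1;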

> +		return 1;
> +
> +	return 0;
> +}
> +
> +static int need_active_balance(struct lb_env *env)
>  {
>  	struct sched_domain *sd = env->sd;
>  
>  	if (asym_active_balance(env))
>  		return 1;
>  
> +	if (imbalanced_active_balance(env))
> +		return 1;

+ whitespace

>  	/*
>  	 * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task.
>  	 * It's worth migrating the task if the src_cpu's capacity is reduced
