Date:   Tue, 8 Feb 2022 16:44:32 +0530
From:   K Prateek Nayak <kprateek.nayak@....com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     peterz@...radead.org, aubrey.li@...ux.intel.com, efault@....de,
        gautham.shenoy@....com, linux-kernel@...r.kernel.org,
        mingo@...nel.org, song.bao.hua@...ilicon.com,
        srikar@...ux.vnet.ibm.com, valentin.schneider@....com,
        vincent.guittot@...aro.org
Subject: Re: [PATCH] sched/fair: Consider cpu affinity when allowing NUMA
 imbalance in find_idlest_group

Hello Mel,

Thank you for taking a look at the patch.

On 2/8/2022 4:21 PM, Mel Gorman wrote:
> On Mon, Feb 07, 2022 at 09:29:21PM +0530, K Prateek Nayak wrote:
>> Neither the sched/tip tree nor Mel's v5 patchset [1] provides an optimal
>> new-task wakeup strategy when the tasks are affined to a subset of cpus,
>> which can result in tasks piling onto the same set of CPUs in one NUMA
>> group despite there being other cpus in a different NUMA group where the
>> tasks could have run. Good placement makes a difference especially for
>> short-lived tasks, where the delay before the load balancer kicks in
>> can degrade performance.
>>
> Thanks.
>
> V6 was posted based on previous feedback. While this patch is building
> on top of it, please add Acked-by or Tested-by if the imbalance series
> helps the general problem of handling imbalances when there are multiple
> last level caches.

Yes, the imbalance series does a good job of handling the general imbalance
problem on systems with multiple LLCs. This patch builds on top of your
effort to balance more aggressively in certain scenarios that arise when
tasks are pinned to a subset of CPUs.
I'll run benchmarks against v6 and ack the results on the imbalance patchset.
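
For context, the pinning in question is just a task whose affinity mask
has been restricted to a subset of the online CPUs, e.g. via
sched_setaffinity(). A minimal userspace sketch (the CPU range is
illustrative, not taken from the benchmarks):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	/* Pin the calling task to CPUs 0-7 (an illustrative subset) */
	CPU_ZERO(&set);
	for (int cpu = 0; cpu < 8; cpu++)
		CPU_SET(cpu, &set);

	/* This is what restricts p->cpus_ptr inside the kernel */
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	return 0;
}

Children forked from such a task inherit the restricted mask, which is
how new tasks can pile onto one NUMA group even when another group has
idle CPUs.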

>> <SNIP>
>>
>> Aggressive NUMA balancing is only done when needed. We select the
>> minimum of the number of allowed cpus in the sched group and the
>> calculated sd.imb_numa_nr as our imbalance threshold, and the default
>> behavior of mel-v5 is only modified when the former is smaller than
>> the latter.
>>
> In this context, it should be safe to reuse select_idle_mask like this
> build-tested patch
Thank you for pointing this out. I'll make the changes in the follow-up.
A worked example of the clamped threshold follows the diff below.
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 538756bd8e7f..1e759c21371b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9128,6 +9128,8 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>  
>  	case group_has_spare:
>  		if (sd->flags & SD_NUMA) {
> +			struct cpumask *cpus;
> +			int imb;
>  #ifdef CONFIG_NUMA_BALANCING
>  			int idlest_cpu;
>  			/*
> @@ -9145,10 +9147,15 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>  			 * Otherwise, keep the task close to the wakeup source
>  			 * and improve locality if the number of running tasks
>  			 * would remain below threshold where an imbalance is
> -			 * allowed. If there is a real need of migration,
> -			 * periodic load balance will take care of it.
> +			 * allowed while accounting for the possibility the
> +			 * task is pinned to a subset of CPUs.  If there is a
> +			 * real need of migration, periodic load balance will
> +			 * take care of it.
>  			 */
> -			if (allow_numa_imbalance(local_sgs.sum_nr_running + 1, sd->imb_numa_nr))
> +			cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> +			cpumask_and(cpus, sched_group_span(local), p->cpus_ptr);
> +			imb = min(cpumask_weight(cpus), sd->imb_numa_nr);
> +			if (allow_numa_imbalance(local_sgs.sum_nr_running + 1, imb))
>  				return NULL;
>  		}
>  
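
To make the effect of the clamp concrete, here is that worked example as
a standalone userspace sketch (not kernel code; allow_numa_imbalance()
is reduced to a plain threshold check, a simplification of the one in
kernel/sched/fair.c, and all the numbers are hypothetical):

#include <stdio.h>

/* Simplified stand-in for allow_numa_imbalance(); the real check
 * lives in kernel/sched/fair.c. */
static int allow_numa_imbalance(int running, int imb_numa_nr)
{
	return running <= imb_numa_nr;
}

int main(void)
{
	int imb_numa_nr = 8;	/* hypothetical sd->imb_numa_nr */
	int allowed = 4;	/* weight of sched_group_span(local) & p->cpus_ptr */
	int running = 5;	/* local_sgs.sum_nr_running */

	/* Unclamped threshold: the new task is kept local (prints 1)
	 * even though it may only run on 4 CPUs of this group. */
	printf("unclamped: %d\n",
	       allow_numa_imbalance(running + 1, imb_numa_nr));

	/* Clamped threshold from the diff above:
	 * min(allowed, imb_numa_nr) = 4, so the imbalance is no longer
	 * allowed (prints 0) and find_idlest_group() will look for an
	 * idler group instead of returning NULL. */
	int imb = allowed < imb_numa_nr ? allowed : imb_numa_nr;
	printf("clamped:   %d\n",
	       allow_numa_imbalance(running + 1, imb));

	return 0;
}

With five tasks already running on the four allowed CPUs, the unclamped
check would still keep the sixth task local, while the clamped one lets
it spread to another NUMA group.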

Thank you for the feedback and suggestions.

Thanks and Regards
Prateek
