Date:   Fri, 15 Dec 2017 21:38:06 +0100
From:   Mike Galbraith <efault@....de>
To:     Mel Gorman <mgorman@...hsingularity.net>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
        Matt Fleming <matt@...eblueprint.co.uk>
Subject: Re: [PATCH] sched: Only migrate tasks due to interrupts on an idle
 CPU if prev and target CPUs share cache

On Fri, 2017-12-15 at 16:52 +0000, Mel Gorman wrote:
> 
> It's a small improvement...

From my log on corr.arch.suse.de (2x8 box):

4.15.0.g2db767d-default
Throughput 2665.88 MB/sec  8 clients  8 procs  max_latency=21.472 ms
4.15.0.g2db767d-default NO_WA_IDLE
Throughput 3416.35 MB/sec  8 clients  8 procs  max_latency=9.825 ms

Not such a small improvement here.  WA_IDLE ripped corr up pretty
badly; turning it off restored 4.4 performance.
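
(NO_WA_IDLE above is the WA_IDLE scheduler feature switched off at
runtime via the sched_features debugfs knob, i.e.
echo NO_WA_IDLE > /sys/kernel/debug/sched_features on a SCHED_DEBUG
kernel, same 4.15.0.g2db767d image for both runs.)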

> 
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
> ---
>  kernel/sched/fair.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2fe3aa853e4d..4a1f7d32ecf6 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5701,7 +5701,13 @@ static bool
>  wake_affine_idle(struct sched_domain *sd, struct task_struct *p,
>  		 int this_cpu, int prev_cpu, int sync)
>  {
> -	if (idle_cpu(this_cpu))
> +	/*
> +	 * If this_cpu is idle, it implies the wakeup is from interrupt
> +	 * context. Only allow the move if cache is shared. Otherwise an
> +	 * interrupt intensive workload could force all tasks onto one
> +	 * node depending on the IO topology or IRQ affinity settings.
> +	 */
> +	if (idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
>  		return true;
>  
>  	if (sync && cpu_rq(this_cpu)->nr_running == 1)

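For reference, cpus_share_cache() is just an LLC id compare, quoting
kernel/sched/core.c of this vintage:

	/* true if both CPUs hang off the same last-level cache */
	bool cpus_share_cache(int this_cpu, int that_cpu)
	{
		return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
	}

So the patched check only lets an interrupt-time wakeup pull the task
when prev and target share the LLC, which on a 2x8 box like corr
(assuming one LLC per socket) means staying on the same node.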