Date:	Mon, 07 Feb 2011 11:53:19 -0800
From:	Suresh Siddha <suresh.b.siddha@...el.com>
To:	Venkatesh Pallipadi <venki@...gle.com>
Cc:	Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...e.hu>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Paul Turner <pjt@...gle.com>, Mike Galbraith <efault@....de>,
	Nick Piggin <npiggin@...il.com>
Subject: Re: [PATCH] sched: Resolve sd_idle and first_idle_cpu Catch-22 - v1

On Mon, 2011-02-07 at 10:21 -0800, Venkatesh Pallipadi wrote: 
> On Mon, Feb 7, 2011 at 5:50 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> > Why is SMT treated differently from, say, a shared cache? In both cases we
> > want to spread the load as wide as possible to provide as much of the
> > resources to the few runnable tasks.
> >
> 
> IIRC, the reason for the whole sd_idle part was to do less aggressive
> load balancing when one SMT sibling is busy and the other is idle, in order
> not to take CPU cycles away from the busy sibling.
> Suresh will know the exact reasoning behind this, and which CPUs and
> which workloads this helped.

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=5969fe06

Original code came from Nick in 2005.

        [PATCH] sched: HT optimisation
        
        If an idle sibling of an HT queue encounters a busy sibling, then make
        higher level load balancing of the non-idle variety.
        
        Performance of multiprocessor HT systems with low numbers of tasks
        (generally < number of virtual CPUs) can be significantly worse than the
        exact same workloads when running in non-HT mode.  The reason is largely
        due to poor scheduling behaviour.
        
        This patch improves the situation, making the performance gap far less
        significant on one problematic test case (tbench).
        

Peter, to answer your question of why SMT is treated differently from cores
sharing a cache: the performance improvement contributed by SMT is far
smaller than that from additional cores, and any wrong decision in SMT load
balancing (especially in the presence of idle cores or packages) has a
bigger impact.

I think in the tbench case referred to by Nick, idle HT siblings in a busy
package picked up the load instead of the idle packages. We then probably
had to wait for active load balancing to kick in and redistribute the load,
by which time the damage was already done. The performance impact of this
condition wouldn't be as severe for cores sharing the last-level cache and
other resources.

Also, there have been a lot of changes in this area since 2005. So it would
be nice to revisit the tbench case and see whether the logic of propagating
busy-sibling status to the higher-level load balancing is still needed.

On the contrary, there might be some workloads that would benefit in
performance/latency if we did away with this less aggressive SMT load
balancing entirely.

Venki, as you are looking into fixes in this area, can you run your
workloads (as well as tbench) and compare the logic with your fixes vs.
removing this logic?

thanks,
suresh

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
