Message-ID: <20110210211223.GB10757@sgi.com>
Date:	Thu, 10 Feb 2011 15:12:23 -0600
From:	Jack Steiner <steiner@....com>
To:	David Miller <davem@...emloft.net>
Cc:	mingo@...e.hu, raz@...lemp.com, linux-kernel@...r.kernel.org,
	mingo@...hat.com, a.p.zijlstra@...llo.nl, efault@....de,
	cpw@....com, travis@....com, tglx@...utronix.de, hpa@...or.com
Subject: Re: [BUG] soft lockup while booting machine with more than 700
	cores

On Thu, Feb 10, 2011 at 01:03:25PM -0800, David Miller wrote:
> From: Jack Steiner <steiner@....com>
> Date: Thu, 10 Feb 2011 14:56:48 -0600
> 
> > We also noticed that the rebalance_domains() code references many per-cpu
> > run queue structures. All of the structures sit at identical offsets
> > relative to the size of a cache leaf, so they all index into the same
> > lines in the L3 caches. That causes many evictions. We tried an
> > experimental patch that strides the run queues at 128-byte offsets. That
> > helped in some cases, but the results were mixed. We are still
> > experimenting with the patch.
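
To make the aliasing concrete, here is a minimal user-space sketch. The
cache geometry, per-cpu span and structure offset are invented numbers,
not the real runqueue layout or UV cache parameters; it only shows why
identically-placed per-cpu copies hit the same L3 set until a small
per-cpu stride is added:

#include <stdio.h>

#define CACHE_LINE	64		/* bytes per line (assumed) */
#define L3_SETS		2048		/* sets per L3 slice (assumed) */
#define NR_CPUS		1024
#define PERCPU_SPAN	(1UL << 20)	/* spacing of per-cpu areas (assumed) */
#define RQ_OFFSET	0x3400UL	/* runqueue offset in the area (assumed) */

static unsigned long l3_set(unsigned long addr)
{
	return (addr / CACHE_LINE) % L3_SETS;	/* set this address maps to */
}

int main(void)
{
	for (unsigned long stride = 0; stride <= 128; stride += 128) {
		unsigned long cpu0_set = l3_set(RQ_OFFSET);
		int collisions = 0;

		for (unsigned long cpu = 1; cpu < NR_CPUS; cpu++) {
			/* each cpu's runqueue, optionally staggered by cpu * stride */
			unsigned long addr = cpu * PERCPU_SPAN + RQ_OFFSET +
					     cpu * stride;
			if (l3_set(addr) == cpu0_set)
				collisions++;
		}
		printf("stride %3lu: %4d of %d other runqueues share cpu0's set\n",
		       stride, collisions, NR_CPUS - 1);
	}
	return 0;
}

With a stride of 0 every copy maps to the same set; with a 128-byte stride
they spread across the sets, which is the effect the experiment was after.
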
> 
> I think chasing after cache alignment issues misses the point entirely.
> 
> The core issue is that rebalance_domains() is insanely expensive, by
> design.  Its complexity is N factorial for the idle no-HZ cpu that is
> selected to balance every single domain.
> 
> A statistics data structure that is approximately 128 bytes in size is
> repopulated N! times each time this global rebalance runs.
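
To get a feel for where the cycles go, here is a rough, self-contained
cost model of the walk being described: one balancer cpu acting on behalf
of every idle cpu, at every domain level, over every group.  The cpu
count, number of domain levels and group fan-out are invented for
illustration; this is not the kernel's rebalance_domains():

#include <stdio.h>

/* stand-in for the ~128-byte statistics block (assumes 64-bit longs) */
struct sd_lb_stats {
	unsigned long load, capacity, nr_running;
	char pad[104];
};

int main(void)
{
	const int nr_idle_cpus = 700;			/* large, mostly idle box */
	const int nr_levels = 4;			/* SMT, MC, NODE, ... (assumed) */
	const int groups_per_level[] = { 2, 8, 16, 64 };/* assumed fan-out */
	struct sd_lb_stats sds;
	unsigned long fills = 0;

	for (int cpu = 0; cpu < nr_idle_cpus; cpu++)		/* act for each idle cpu */
		for (int lvl = 0; lvl < nr_levels; lvl++)	/* walk every domain level */
			for (int g = 0; g < groups_per_level[lvl]; g++) {
				sds = (struct sd_lb_stats){ 0 };/* stats block rewritten */
				fills++;
			}

	printf("stats block repopulated %lu times per pass (%lu KB written)\n",
	       fills, fills * (unsigned long)sizeof(sds) / 1024);
	return 0;
}

Even with these modest made-up numbers the block is rewritten tens of
thousands of times per balancing pass, and on an otherwise idle machine
the pass just keeps running.
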
> 
> I've been seeing rebalance_domains() in my perf top output on 128 cpu
> machines for several years now.  Even on an otherwise idle machine,
> the system churns in this code path endlessly.

Completely agree! Idle rebalancing is also a big problem. We've seen
significant improvements in network throughput on large systems by
disabling IDLE load balancing for the higher (2 & 3) scheduling domains.

This is not a real fix, but it points to a problem.
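
For reference, a blunt way to try something similar without patching the
kernel is through the sched_domain sysctls (CONFIG_SCHED_DEBUG).  The
sketch below clears what is assumed to be the SD_LOAD_BALANCE bit (0x1)
in the flags file of domain levels 2 and 3 on every online cpu.  Note
this turns off all balancing at those levels, not just the idle pass,
and it is not necessarily how the numbers above were obtained; treat it
purely as an illustration:

#include <stdio.h>
#include <unistd.h>

#define SD_LOAD_BALANCE	0x1UL	/* assumed flag value for this era's kernels */

int main(void)
{
	long nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);

	for (long cpu = 0; cpu < nr_cpus; cpu++) {
		for (int dom = 2; dom <= 3; dom++) {	/* the "higher" domains */
			char path[128];
			unsigned long flags;
			FILE *f;

			snprintf(path, sizeof(path),
				 "/proc/sys/kernel/sched_domain/cpu%ld/domain%d/flags",
				 cpu, dom);
			f = fopen(path, "r+");
			if (!f)
				continue;	/* level not present on this cpu */
			if (fscanf(f, "%lu", &flags) == 1) {
				rewind(f);
				fprintf(f, "%lu\n", flags & ~SD_LOAD_BALANCE);
			}
			fclose(f);
		}
	}
	return 0;
}
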

--- jack