Message-Id: <20110210.130325.112603217.davem@davemloft.net>
Date: Thu, 10 Feb 2011 13:03:25 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: steiner@....com
Cc: mingo@...e.hu, raz@...lemp.com, linux-kernel@...r.kernel.org,
mingo@...hat.com, a.p.zijlstra@...llo.nl, efault@....de,
cpw@....com, travis@....com, tglx@...utronix.de, hpa@...or.com
Subject: Re: [BUG] soft lockup while booting machine with more than 700
cores
From: Jack Steiner <steiner@....com>
Date: Thu, 10 Feb 2011 14:56:48 -0600
> We also noticed that the rebalance_domains() code references many per-cpu
> run queue structures. All of the structures have identical offsets relative
> to the size of a cache leaf, so they all index into the same lines in the
> L3 caches. That causes many evictions. We tried an experimental patch to
> stride the run queues at 128-byte offsets. That helped in some cases, but
> the results were mixed. We are still experimenting with the patch.
I think chasing after cache alignment issues misses the point entirely.

The core issue is that rebalance_domains() is insanely expensive, by
design. Its complexity is N factorial for the idle no-HZ cpu that is
selected to balance every single domain.

A statistics data structure that is approximately 128 bytes in size is
repopulated N! times each time this global rebalance thing runs.
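
To put rough numbers on it, here is a toy cost model -- illustrative
user-space C, not the kernel code, assuming power-of-two domain spans
(SMT -> core -> socket -> system) -- of how the per-pass stats work
grows with cpu count:

#include <stdio.h>

/* Toy cost model: assume domain levels span 2, 4, ..., nr_cpus cpus.
 * One nohz-balance pass walks every idle cpu, every domain level, and
 * the full span at each level, rebuilding the ~128-byte stats block
 * from scratch each time. */
static unsigned long one_pass_touches(unsigned long nr_cpus)
{
	unsigned long touches = 0, cpu, span;

	for (cpu = 0; cpu < nr_cpus; cpu++)		/* each idle cpu */
		for (span = 2; span <= nr_cpus; span *= 2)
			touches += span;	/* stats rebuilt over span */
	return touches;
}

int main(void)
{
	unsigned long n;

	for (n = 128; n <= 1024; n *= 2)
		printf("%4lu cpus: %9lu per-cpu stat reads per pass\n",
		       n, one_pass_touches(n));
	return 0;
}

Even this deliberately conservative model grows quadratically with the
cpu count, and the real path re-reads remote runqueue fields while it
rebuilds the stats, which is where the evictions described above come
from.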
I've been seeing rebalance_domains() in my perf top output on 128-cpu
machines for several years now. Even on an otherwise idle machine,
the system churns in this code path endlessly.