Message-ID: <Pine.LNX.4.64.0706180948250.4751@schroedinger.engr.sgi.com>
Date: Mon, 18 Jun 2007 09:54:18 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Dinakar Guniguntala <dino@...ibm.com>,
Dmitry Adamushko <dmitry.adamushko@...il.com>,
suresh.b.siddha@...el.com, pwil3058@...pond.net.au,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org
Subject: Re: v2.6.21.4-rt11
On Mon, 18 Jun 2007, Srivatsa Vaddagiri wrote:
> This particular machine, elm3b6, is a 4-cpu, (gasp, yes!) 4-node box, i.e.
> each CPU is a node by itself. If you don't have CONFIG_NUMA enabled,
> then we won't have cross-node (i.e. cross-cpu) load balancing.
> Fortunately in your case you had CONFIG_NUMA enabled, but you were still
> hitting the (gross) load imbalance.
>
> The problem seems to be with idle_balance(). This particular routine,
> invoked by schedule() on an idle cpu, walks up the sched-domain hierarchy
> and tries to balance in each domain that has the SD_BALANCE_NEWIDLE flag
> set. The node-level domain (SD_NODE_INIT), however, doesn't set this flag,
> which means an idle cpu looks for (im)balance within its own node at most and
The node-level domain looks for inter-node imbalances among up to 16
nodes; it is not restricted to a single node. Balancing within a node
is done at the phys_domain level.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/