Message-ID: <20071122074652.GA6502@vmware.com>
Date: Wed, 21 Nov 2007 23:46:52 -0800
From: Micah Dowty <micah@...are.com>
To: Dmitry Adamushko <dmitry.adamushko@...il.com>
Cc: Ingo Molnar <mingo@...e.hu>, Christoph Lameter <clameter@....com>,
Kyle Moffett <mrmacman_g4@....com>,
Cyrus Massoumi <cyrusm@....net>,
LKML Kernel <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...l.org>, Mike Galbraith <efault@....de>,
Paul Menage <menage@...gle.com>,
Peter Williams <pwil3058@...pond.net.au>
Subject: Re: High priority tasks break SMP balancer?
On Tue, Nov 20, 2007 at 10:47:52PM +0100, Dmitry Adamushko wrote:
> btw., what's your system? If I recall right, SD_BALANCE_NEWIDLE is on
> by default for all configs, except for NUMA nodes.
It's a dual AMD64 Opteron.
So, I recompiled my 2.6.23.1 kernel without NUMA support and with your
patch that exposes the scheduling-domain flags in /proc. With NUMA
disabled, my test case no longer shows the CPU imbalance problem. Cool.
With NUMA disabled (and my test running smoothly), the flags show that
SD_BALANCE_NEWIDLE is set:
root@...ah-64:~# cat /proc/sys/kernel/sched_domain/cpu0/domain0/flags
55
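
For reference, here is a quick decoder for all the flags values in
this mail. I'm assuming the SD_* bit values from 2.6.23's
include/linux/sched.h; double-check them against your own tree:

    #include <stdio.h>

    /* SD_* bit values assumed from include/linux/sched.h (2.6.23) */
    static const struct { int bit; const char *name; } sd_flags[] = {
            {    1, "SD_LOAD_BALANCE" },
            {    2, "SD_BALANCE_NEWIDLE" },
            {    4, "SD_BALANCE_EXEC" },
            {    8, "SD_BALANCE_FORK" },
            {   16, "SD_WAKE_IDLE" },
            {   32, "SD_WAKE_AFFINE" },
            {   64, "SD_WAKE_BALANCE" },
            {  128, "SD_SHARE_CPUPOWER" },
            {  256, "SD_POWERSAVINGS_BALANCE" },
            {  512, "SD_SHARE_PKG_RESOURCES" },
            { 1024, "SD_SERIALIZE" },
    };

    static void decode(int flags)
    {
            unsigned int i;

            printf("%4d =", flags);
            for (i = 0; i < sizeof(sd_flags) / sizeof(sd_flags[0]); i++)
                    if (flags & sd_flags[i].bit)
                            printf(" %s", sd_flags[i].name);
            printf("\n");
    }

    int main(void)
    {
            decode(55);     /* non-NUMA domain0, as read above */
            decode(53);     /* 55 with SD_BALANCE_NEWIDLE (2) cleared */
            decode(1101);   /* NUMA domain0, as read below */
            decode(1099);   /* the value that removes the imbalance */
            return 0;
    }

So 55 is SD_LOAD_BALANCE | SD_BALANCE_NEWIDLE | SD_BALANCE_EXEC |
SD_WAKE_IDLE | SD_WAKE_AFFINE.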
Next I cleared SD_BALANCE_NEWIDLE (55 - 2 = 53):
root@...ah-64:~# echo 53 > /proc/sys/kernel/sched_domain/cpu0/domain0/flags
root@...ah-64:~# echo 53 > /proc/sys/kernel/sched_domain/cpu1/domain0/flags
Oddly enough, I still don't observe the CPU imbalance problem.
Next I rebooted into a kernel that has NUMA re-enabled but is
otherwise identical, and verified that the CPU imbalance is back.
root@...ah-64:~# cat /proc/sys/kernel/sched_domain/cpu0/domain0/flags
1101
Now I set cpu[10]/domain0/flags to 1099, and the imbalance immediately
disappears. I can reliably bring the imbalance back by setting them to
1101, and remove it again by setting them to 1099.
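
Decoding those two values with the table above (again assuming the
2.6.23 bit values): 1101 is SD_LOAD_BALANCE | SD_BALANCE_EXEC |
SD_BALANCE_FORK | SD_WAKE_BALANCE | SD_SERIALIZE, which appears to
match the SD_NODE_INIT defaults in include/asm-x86_64/topology.h,
and 1099 is SD_LOAD_BALANCE | SD_BALANCE_NEWIDLE | SD_BALANCE_FORK |
SD_WAKE_BALANCE | SD_SERIALIZE. Note that going from 1101 to 1099
actually toggles two bits: it sets SD_BALANCE_NEWIDLE (2) and clears
SD_BALANCE_EXEC (4).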
Do these results make sense? I'm not sure SD_BALANCE_NEWIDLE can be
the whole story: my /proc/schedstat graphs show that we continuously
try to balance on idle, but those attempts fail because the idle CPU
reports a much higher load than the non-idle one. I don't see how the
problem could depend on *when* we run the balancer, rather than on how
the load average is calculated.
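
(One data point on the load side: with CFS, each runnable task
contributes a nice-dependent weight to its runqueue's load, and the
spread is huge. A toy illustration, assuming the endpoint values of
prio_to_weight[] in 2.6.23's kernel/sched.c:)

    #include <stdio.h>

    /* Weights assumed from prio_to_weight[] in kernel/sched.c
     * (2.6.23): nice 0 = 1024, nice -20 = 88761. */
    int main(void)
    {
            const double w_nice0 = 1024.0;
            const double w_nice_m20 = 88761.0;

            /* One boosted task can outweigh dozens of nice-0 tasks,
             * so the runqueue it sits on can look far "busier" to
             * the balancer even when its CPU is mostly idle. */
            printf("one nice -20 task ~ %.0f nice-0 tasks\n",
                   w_nice_m20 / w_nice0);
            return 0;
    }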
Assuming the CPU imbalance I'm seeing is actually related to
SD_BALANCE_NEWIDLE being unset, I have a couple of questions:
- Is this intended/expected behaviour for a machine without
NEWIDLE set? I'm not familiar with the rationale for disabling
this flag on NUMA systems.
- Is there a good way to detect, without any kernel debug flags
set, whether the current machine has any scheduling domains
that are missing the SD_BALANCE_NEWIDLE bit? (A sketch of the
check itself, for kernels that do expose the flags, follows
below.) I'm looking for a good way to work around the problem
I'm seeing with VMware's code. Right now the best I can do is
disable all thread priority elevation when running on an SMP
machine with Linux 2.6.20 or later.
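
In case it's useful to anyone following along, here is the check I
have in mind for kernels that do expose per-domain flags (i.e. with
your patch applied, under CONFIG_SCHED_DEBUG). It obviously doesn't
answer the no-debug-flags case above; it just shows the test itself,
assuming the 2.6.23 bit value for SD_BALANCE_NEWIDLE:

    #include <glob.h>
    #include <stdio.h>

    #define SD_BALANCE_NEWIDLE 2    /* assumed from 2.6.23 sched.h */

    int main(void)
    {
            glob_t g;
            size_t i;
            int missing = 0;

            if (glob("/proc/sys/kernel/sched_domain/cpu*/domain*/flags",
                     0, NULL, &g) != 0) {
                    fprintf(stderr, "sched_domain flags not available "
                                    "(CONFIG_SCHED_DEBUG off?)\n");
                    return 2;
            }
            for (i = 0; i < g.gl_pathc; i++) {
                    FILE *f = fopen(g.gl_pathv[i], "r");
                    int flags;

                    if (!f)
                            continue;
                    if (fscanf(f, "%d", &flags) == 1 &&
                        !(flags & SD_BALANCE_NEWIDLE)) {
                            printf("%s: NEWIDLE missing (flags=%d)\n",
                                   g.gl_pathv[i], flags);
                            missing = 1;
                    }
                    fclose(f);
            }
            globfree(&g);
            return missing;
    }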
Thank you again for all your help.
--Micah