Message-ID: <87ws5af0km.fsf@basil.nowhere.org>
Date: Wed, 12 Aug 2009 00:27:37 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Bill Fink <billfink@...dspring.com>
Cc: Neil Horman <nhorman@...driver.com>,
Andrew Gallatin <gallatin@...i.com>,
Brice Goglin <Brice.Goglin@...ia.fr>,
Linux Network Developers <netdev@...r.kernel.org>,
Yinghai Lu <yhlu.kernel@...il.com>
Subject: Re: Receive side performance issue with multi-10-GigE and NUMA

Bill Fink <billfink@...dspring.com> writes:
>
> I originally tried to just use alloc_pages_node() instead of alloc_pages(),
> but it didn't help. As mentioned in an earlier e-mail, that seems to
> be because the NUMA node information is wrong; I discovered that doing:
>
> find /sys -name numa_node -exec grep . {} /dev/null \;
>
> revealed that the NUMA node associated with _all_ the PCI devices was
> always 0, when at least some of them should have been associated with
> NUMA node 2, including 6 of the 12 Myricom 10-GigE devices.
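
(As a side note, a minimal sketch of the node-aware allocation being
discussed, assuming the driver has its struct pci_dev handy; the helper
name is made up. alloc_pages_node() can only help here if dev_to_node()
returns the right node, i.e. if the sysfs numa_node value above were
correct:)

#include <linux/gfp.h>
#include <linux/pci.h>

/*
 * Hypothetical helper: allocate an rx buffer page on the NUMA node the
 * NIC's PCI device reports, instead of whatever node the allocating CPU
 * happens to be on.  dev_to_node() returns the same numa_node value that
 * sysfs shows, so a bogus 0 there makes this a no-op change.
 */
static struct page *nic_alloc_rx_page(struct pci_dev *pdev, unsigned int order)
{
        int nid = dev_to_node(&pdev->dev);

        return alloc_pages_node(nid, GFP_ATOMIC, order);
}
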
> I discovered today that the NUMA node cpulist/cpumap is also wrong.
> A cat of /sys/devices/system/node/node0/cpulist returns "0-7" (with a
> cpumask of 00000000,000000ff), while the cpulist for node2 is empty
> (with a cpumask of 00000000,00000000). The distance is correct,
> with "10 20" for node 0 and "20 10" for node2.
When the CPU nodes are not correct, the device nodes are unlikely to be
correct either. In fact your system likely has no node 1 configured,
right?

This information comes from the BIOS. So either your BIOS is broken,
or you simply didn't enable NUMA mode in the BIOS and configured
memory interleaving instead.

If you post your dmesg output somewhere, I can take a look.

-Andi
--
ak@...ux.intel.com -- Speaking for myself only.