Message-ID: <4A7C9A14.7070600@inria.fr>
Date: Fri, 07 Aug 2009 23:18:12 +0200
From: Brice Goglin <Brice.Goglin@...ia.fr>
To: Bill Fink <billfink@...dspring.com>
CC: Linux Network Developers <netdev@...r.kernel.org>,
Yinghai Lu <yhlu.kernel@...il.com>, gallatin@...i.com
Subject: Re: Receive side performance issue with multi-10-GigE and NUMA
Bill Fink wrote:
> This could be because I discovered that if I did:
>
> find /sys -name numa_node -exec grep . {} /dev/null \;
>
> that the numa_node associated with all the PCI devices was always 0,
> and, IIUC, some of the PCI devices should instead have been
> associated with NUMA node 2. Perhaps this is what is causing all
> the memory pages allocated by the myri10ge driver to be on NUMA
> node 0, and thus causing the major performance issue.
>
I've seen some cases in the past where numa_node was always 0 on
quad-Opteron machines with a PCI bus on node 1. IIRC it got fixed in
later kernels thanks to patches from Yinghai Lu (CC'ed).
Is the corresponding local_cpus sysfs file wrong as well?
Maybe your kernel doesn't properly handle the NUMA location of PCI
devices on Nehalem machines yet?
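As a quick way to cross-check both attributes at once, something like the
following should work (a sketch, not tested on your box; the directory is a
parameter so it can be pointed at a fake tree or at the real
/sys/bus/pci/devices):

```shell
# Print numa_node and local_cpus for every PCI device under the given
# sysfs-style directory. On a multi-socket machine with correct NUMA
# enumeration the numa_node values should differ across devices rather
# than all reading 0.
pci_numa_report() {
    for dev in "$1"/*; do
        # Skip entries that don't expose the attribute.
        [ -r "$dev/numa_node" ] || continue
        printf '%s numa_node=%s local_cpus=%s\n' \
            "${dev##*/}" "$(cat "$dev/numa_node")" "$(cat "$dev/local_cpus")"
    done
}

# Typical use on a live system:
# pci_numa_report /sys/bus/pci/devices
```

If local_cpus also reports the node-0 CPU mask for every device, that would
point at the kernel-side PCI/NUMA mapping rather than the driver.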
Brice
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html