Message-Id: <20090807175112.a1f57407.billfink@mindspring.com>
Date: Fri, 7 Aug 2009 17:51:12 -0400
From: Bill Fink <billfink@...dspring.com>
To: Brice Goglin <Brice.Goglin@...ia.fr>
Cc: Linux Network Developers <netdev@...r.kernel.org>,
Yinghai Lu <yhlu.kernel@...il.com>, gallatin@...i.com
Subject: Re: Receive side performance issue with multi-10-GigE and NUMA

On Fri, 07 Aug 2009, Brice Goglin wrote:
> Bill Fink wrote:
> > This could be because I discovered that if I did:
> >
> > find /sys -name numa_node -exec grep . {} /dev/null \;
> >
> > that the numa_node associated with all the PCI devices was always 0,
> > and IIUC some of the PCI devices should instead have been
> > associated with NUMA node 2. Perhaps this is what is causing all
> > the memory pages allocated by the myri10ge driver to be on NUMA
> > node 0, and thus causing the major performance issue.
> >
>
> I've seen some cases in the past where numa_node was always 0 on
> quad-Opteron machines with a PCI bus on node 1. IIRC it got fixed in
> later kernels thanks to patches from Yinghai Lu (CC'ed).

By later kernels do you mean 2.6.30 or 2.6.31?

> Is the corresponding local_cpus sysfs file wrong as well?
All sysfs local_cpus values are the same (00000000,000000ff),
so yes they are also wrong.
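
For reference, here is a quick way to dump numa_node and local_cpus
side by side for every PCI device (a minimal sketch, assuming the
standard /sys/bus/pci/devices sysfs layout):

    # print the reported NUMA node and local CPU mask per PCI device
    for dev in /sys/bus/pci/devices/*; do
        printf '%s: node=%s cpus=%s\n' \
            "${dev##*/}" "$(cat "$dev/numa_node")" "$(cat "$dev/local_cpus")"
    done

On a correctly enumerated multi-node machine, devices on different
nodes should report different masks; here every device shows node 0
and the identical 00000000,000000ff mask.
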
> Maybe your kernel doesn't properly handle the NUMA location of PCI
> devices on Nehalem machines yet?

I assume so, unless there's some secret NUMA system setting I'm
unaware of that affects this and needs changing for my setup.
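
In the meantime, a possible workaround sketch: steer the NIC's
interrupts onto CPUs of the node the board actually lives on, so the
driver's receive-buffer refills (which run on the interrupt CPU)
should allocate node-local pages. The interface name and CPU mask
below are hypothetical for this box:

    # assumes the IRQ lines appear in /proc/interrupts under the
    # interface name (eth2 here), and that CPUs 8-15 sit on the
    # NIC's node -- both assumptions, adjust for the real topology
    for irq in $(grep eth2 /proc/interrupts | cut -d: -f1 | tr -d ' '); do
        echo 0000ff00 > /proc/irq/$irq/smp_affinity
    done

That wouldn't fix the bogus sysfs values, but it would at least show
whether the node-0 allocations are the real culprit.
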
-Thanks
-Bill