Message-ID: <4807377b0907140041y6c9da555lf3e1dba0775cfe7c@mail.gmail.com>
Date: Tue, 14 Jul 2009 00:41:30 -0700
From: Jesse Brandeburg <jesse.brandeburg@...il.com>
To: Jesse Barnes <jbarnes@...tuousgeek.org>
Cc: Yinghai Lu <yinghai@...nel.org>, linux-kernel@...r.kernel.org,
NetDEV list <netdev@...r.kernel.org>, ak@...ux.intel.com,
matthew@....cx
Subject: Re: [PATCH] x86/PCI: initialize PCI bus node numbers early
On Fri, Jul 10, 2009 at 2:06 PM, Jesse Barnes <jbarnes@...tuousgeek.org> wrote:
> From 2b51fba93f7b2dabf453a74923a9a217611ebc1a Mon Sep 17 00:00:00 2001
> From: Jesse Barnes <jbarnes@...tuousgeek.org>
> Date: Fri, 10 Jul 2009 14:04:30 -0700
> Subject: [PATCH] x86/PCI: initialize PCI bus node numbers early
>
> The current mp_bus_to_node array is initialized only by AMD specific
> code, since AMD platforms have registers that can be used for
> determining node numbers. On new Intel platforms it's necessary to
> initialize this array as well though, otherwise all PCI node numbers
> will be 0, when in fact they should be -1 (indicating that I/O isn't
> tied to any particular node).
>
> So move the mp_bus_to_node code into the common PCI code, and
> initialize it early with a default value of -1. This may be overridden
> later by arch code (e.g. the AMD code).
>
> With this change, PCI consistent memory and other node specific
> allocations (e.g. skbuff allocs) should occur on the "current" node.
> If, for performance reasons, applications want to be bound to specific
> nodes, they should open their devices only after being pinned to the
> CPU where they'll run, for maximum locality.
>
> Acked-by: Yinghai Lu <yinghai@...nel.org>
> Tested-by: Jesse Brandeburg <jesse.brandeburg@...il.com>
> Signed-off-by: Jesse Barnes <jbarnes@...tuousgeek.org>
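
For anyone skimming, the change described above boils down to roughly
the sketch below. This is not the patch itself, just an illustration:
the getter/setter names mirror the existing mp_bus_to_node accessors,
while early_init_bus_to_node and BUS_NR are placeholders here.

/*
 * Sketch only -- not the actual patch.  A common bus->node table that
 * defaults every entry to -1 ("not tied to any node"), which
 * arch-specific code (e.g. the AMD northbridge probing) can later
 * override once it knows the real topology.
 */
#define BUS_NR 256

static int mp_bus_to_node[BUS_NR];

void early_init_bus_to_node(void)
{
	int i;

	for (i = 0; i < BUS_NR; i++)
		mp_bus_to_node[i] = -1;		/* default: no node */
}

/* Called by arch code once the node for a bus is actually known. */
void set_mp_bus_to_node(int busnum, int node)
{
	if (busnum >= 0 && busnum < BUS_NR)
		mp_bus_to_node[busnum] = node;
}

int get_mp_bus_to_node(int busnum)
{
	if (busnum < 0 || busnum >= BUS_NR)
		return -1;
	return mp_bus_to_node[busnum];
}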
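
On the "open your devices only after being pinned" advice quoted above,
a minimal userspace sketch would look like the following; the CPU
number and the plain UDP socket are just examples, not anything taken
from the patch:

/*
 * Pin to the CPU we will run on *before* opening the device, so that
 * node-local allocations done at open time (queues, buffers) land on
 * the node we will actually use.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
	cpu_set_t mask;
	int sock;

	CPU_ZERO(&mask);
	CPU_SET(4, &mask);	/* example: a CPU on the desired node */
	if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
		perror("sched_setaffinity");
		return 1;
	}

	/* Only now open the socket/device. */
	sock = socket(AF_INET, SOCK_DGRAM, 0);
	if (sock < 0) {
		perror("socket");
		return 1;
	}

	/* ... run the hot path from this pinned CPU ... */
	return 0;
}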
I can confirm this works, aside from the MSI-X interrupt migration
instability (panics), which I believe is unrelated since the panics
also happen without this patch.
I also see a pretty nice performance boost running with this change
on a 5520 motherboard with an 82599 10GbE adapter forwarding packets,
especially with interrupt affinity set correctly.
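
By "interrupt affinity set correctly" I just mean steering each queue's
IRQ to the CPU doing the work for it. A minimal sketch (the IRQ number
and CPU are examples only; the real values come from /proc/interrupts):

/* Write a hex CPU mask to /proc/irq/<N>/smp_affinity (needs root). */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/irq/73/smp_affinity", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "%x\n", 1 << 4);	/* bit 4 set => CPU 4 */
	if (fclose(f) != 0) {
		perror("fclose");
		return 1;
	}
	return 0;
}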
I'd like to see this applied if at all possible; without it, I/O
traffic performance is really hampered because all network (among
other) memory allocations are limited to one of the two NUMA nodes.