Message-ID: <20080729174839.GG10471@solarflare.com>
Date: Tue, 29 Jul 2008 18:48:41 +0100
From: Ben Hutchings <bhutchings@...arflare.com>
To: "Luck, Tony" <tony.luck@...el.com>
Cc: "jgarzik@...hat.com" <jgarzik@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-ia64@...r.kernel.org" <linux-ia64@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"matthew@....cx" <matthew@....cx>, Robin Holt <holt@....com>,
linux-net-drivers <linux-net-drivers@...arflare.com>
Subject: Re: [Patch] fix ia64 build failure when CONFIG_SFC=m

Luck, Tony wrote:
> > CONFIG_SFC=m uses topology_core_siblings() which, for ia64, expects
> > cpu_core_map to be exported. It is not. This patch exports the needed
> > symbol.
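
(For reference, the patch in question boils down to adding the export
next to the existing definition of cpu_core_map in the ia64 SMP boot
code.  The file and declaration below are from memory, so treat this as
a sketch rather than the literal diff:)

	/* arch/ia64/kernel/smpboot.c -- placement approximate */
	cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
	EXPORT_SYMBOL(cpu_core_map);
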
>
> Ben,
>
> Before I rush to apply this (or one of the other identical patches that
> I've received) ... I'd like to ponder on whether this is the right thing
> to do.
>
> Looking at the code in drivers/net/sfc/efx.c I see that you are using
> this to compute the number of RX queues based on the number of packages
> with the git commit comment saying:
>
> Using multiple cores in the same package to handle received traffic
> does not appear to provide a performance benefit. Therefore use CPU
> topology information to count CPU packages and use that as the default
> number of RX queues and interrupts. We rely on interrupt balancing to
> spread the interrupts across packages.
>
> I have some questions on this:
>
> 1) Did you measure on FSB based systems? ... It's kind of sad that the extra
> cores are not helping and I wonder why. Do AMD and QPI based systems have
> this same limitation?

This heuristic has been in the out-of-tree driver for some time.  It is
based mostly on experience with Intel x86 systems, but it also appears to
give good performance on the Intel and AMD multi-core systems we have
tested here.  We do not have any IA64 multi-core systems.  I think a
single core in each package can generally saturate the memory bus, which
is why spreading the receive load across more cores is not useful.
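
For what it is worth, the heuristic in efx.c amounts to something like
the loop below (paraphrased from memory rather than copied verbatim from
the driver): walk the online CPUs, fold each CPU's core-sibling mask into
a running "seen" mask, and count how many CPUs start a new package.

	#include <linux/cpumask.h>
	#include <linux/topology.h>

	/* Count CPU packages among the online CPUs; used as the default
	 * number of RX queues (subject to the hardware limit). */
	static int efx_wanted_rx_queues(void)
	{
		cpumask_t core_mask = CPU_MASK_NONE;
		int count = 0;
		int cpu;

		for_each_online_cpu(cpu) {
			if (!cpu_isset(cpu, core_mask)) {
				++count;
				cpus_or(core_mask, core_mask,
					topology_core_siblings(cpu));
			}
		}
		return count;
	}
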
> 2) Should Linux have an API to give you a useful number of threads, rather
> than make you dig through the topology data structures? At some point cpumask_t
> data structure is going to be too big for the kernel stack.

Since that count is all we really want to know, such an API would
certainly simplify this driver, and it might be useful to other drivers
that implement RSS.
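
Something along these lines would be enough: a core helper that returns
the package count directly, so the driver never touches a cpumask_t.
The helper name below is purely hypothetical (no such function exists
today), and EFX_MAX_RX_QUEUES stands in for the driver's existing
hardware limit (name assumed here):

	/* Hypothetical core API, e.g. in <linux/topology.h>: number of
	 * physical packages with at least one online CPU. */
	extern unsigned int topology_num_packages(void);

	/* The driver heuristic would then collapse to a single call,
	 * capped at the hardware limit. */
	static unsigned int efx_wanted_rx_queues(void)
	{
		unsigned int n = topology_num_packages();

		return n < EFX_MAX_RX_QUEUES ? n : EFX_MAX_RX_QUEUES;
	}
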
> 3) Does interrupt balancing always do the right thing to spread your interrupts
> across packages?

No; if we want to be sure, we pin the interrupts ourselves afterwards.
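
(One way to do the pinning is from userspace, by writing a CPU mask to
/proc/irq/<N>/smp_affinity for each of the driver's interrupts -- roughly
like the fragment below, which assumes fewer than 32 CPUs so a single hex
word is enough:)

	#include <stdio.h>

	/* Pin interrupt `irq` to CPU `cpu` via procfs.  smp_affinity
	 * takes a hex CPU bitmask; error handling kept minimal. */
	static int pin_irq_to_cpu(unsigned int irq, unsigned int cpu)
	{
		char path[64];
		FILE *f;

		snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fprintf(f, "%x\n", 1u << cpu);
		return fclose(f);
	}
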
> 4) In a hotplug system would you want to adjust the number of threads if cpus
> were added or removed?

That would make sense, but I don't think it can be done without
disrupting network traffic.

Ben.
--
Ben Hutchings, Senior Software Engineer, Solarflare Communications
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.