Message-ID: <1295466499.11126.67.camel@bwh-desktop>
Date: Wed, 19 Jan 2011 19:48:19 +0000
From: Ben Hutchings <bhutchings@...arflare.com>
To: Pasi Kärkkäinen <pasik@....fi>
Cc: Jeremy Fitzhardinge <jeremy@...p.org>,
Ian Campbell <Ian.Campbell@...citrix.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
xen-devel <xen-devel@...ts.xensource.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Subject: Re: [Xen-devel] Re: [PATCH] xen network backend driver
On Wed, 2011-01-19 at 21:28 +0200, Pasi Kärkkäinen wrote:
> On Wed, Jan 19, 2011 at 11:16:59AM -0800, Jeremy Fitzhardinge wrote:
> > On 01/19/2011 10:05 AM, Ben Hutchings wrote:
> > > Not in itself. NAPI polling will run on the same CPU which scheduled it
> > > (so wherever the IRQ was initially handled). If the protocol used
> > > between netfront and netback doesn't support RSS then RPS
> > > <http://lwn.net/Articles/362339/> can be used to spread the RX work
> > > across CPUs.
> >
> > There's only one irq per netback which is bound to one (V)CPU at a
> > time. I guess we could extend it to have multiple irqs per netback and
> > some way of distributing packet flows over them, but that would only
> > really make sense if there's a single interface with much more traffic
> > than the others; otherwise the interrupts should be fairly well
> > distributed (assuming that the different netback irqs are routed to
> > different cpus).
> >
>
> Does "multiqueue" only work for NIC drivers (and frontend drivers),
> or could it be used also for netback?
Netfront and netback would have to agree on how many queues to use in
each direction.
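
Purely as an illustration (this is not in the posted patch), the two ends
could each advertise a maximum over xenstore and take the minimum. The key
name "multi-queue-max-queues" and my_negotiate_queues() are made-up names
for the sketch; only xenbus_scanf() is an existing accessor:

#include <linux/kernel.h>
#include <xen/xenbus.h>

static unsigned int my_negotiate_queues(struct xenbus_device *dev,
					unsigned int local_max)
{
	unsigned int remote_max;

	/* Read the other end's advertised maximum; treat a missing key
	 * as an old frontend/backend that only supports one queue. */
	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "multi-queue-max-queues", "%u", &remote_max) != 1)
		remote_max = 1;

	return min(local_max, remote_max);
}
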
> (afaik Linux multiqueue enables setting up multiple receive queues
> each having a separate irq.)
In the context of Linux networking, 'multiqueue' generally refers to use
of multiple *transmit* queues. The networking core handles scheduling
and locking of each transmit queue, so it had to be extended to support
multiple queues - initially done in 2.6.23, then made scalable in
2.6.27.
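
As a rough sketch of what that buys a driver (my_priv, MY_NUM_TX_QUEUES,
my_ring_full() and the rest are illustrative names, not from any real
driver): the core allocates and locks one queue per TX ring, and the
driver only has to flow-control the individual queues.

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

#define MY_NUM_TX_QUEUES	4	/* illustrative value */

struct my_priv {
	int dummy;			/* driver-private state goes here */
};

static struct net_device *my_create_netdev(void)
{
	/* The core allocates one netdev_queue per TX queue and handles
	 * scheduling and locking of each of them. */
	return alloc_etherdev_mq(sizeof(struct my_priv), MY_NUM_TX_QUEUES);
}

static bool my_ring_full(struct net_device *dev, u16 q)
{
	return false;			/* stand-in for a real ring check */
}

static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	u16 q = skb_get_queue_mapping(skb);	/* queue chosen by the core */

	/* ... post skb onto hardware/backend ring 'q' ... */

	if (my_ring_full(dev, q))
		netif_stop_subqueue(dev, q);	/* stop just this queue */
	return NETDEV_TX_OK;
}
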
It was possible to use multiple receive queues per device long before
this, since the networking core is not involved in locking them. (Though
before 2.6.24 it did require some hacks to create multiple NAPI
contexts.) This is mostly useful in conjunction with separate IRQs per
RX queue, spread across multiple CPUs (sometimes referred to as Receive
Side Scaling or RSS).
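
For completeness, the post-2.6.24 shape of that is roughly the following
(my_rx_queue, my_poll, my_rx_irq and my_process_rx_ring are illustrative
names, not from any particular driver): one napi_struct per RX queue,
each scheduled from its own IRQ, so polling runs on whichever CPU that
IRQ is routed to.

#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct my_rx_queue {
	struct napi_struct napi;
	int irq;
	/* ring pointers etc. */
};

static int my_process_rx_ring(struct my_rx_queue *rxq, int budget)
{
	/* stand-in for pulling packets off the ring and feeding them
	 * to netif_receive_skb() */
	return 0;
}

static int my_poll(struct napi_struct *napi, int budget)
{
	struct my_rx_queue *rxq = container_of(napi, struct my_rx_queue, napi);
	int done = my_process_rx_ring(rxq, budget);

	if (done < budget)
		napi_complete(napi);
	return done;
}

static irqreturn_t my_rx_irq(int irq, void *data)
{
	struct my_rx_queue *rxq = data;

	/* The poll routine will run on the CPU this IRQ was handled on. */
	napi_schedule(&rxq->napi);
	return IRQ_HANDLED;
}

static int my_setup_rx_queue(struct net_device *dev, struct my_rx_queue *rxq)
{
	netif_napi_add(dev, &rxq->napi, my_poll, 64);
	napi_enable(&rxq->napi);
	return request_irq(rxq->irq, my_rx_irq, 0, dev->name, rxq);
}

Point each queue's IRQ at a different CPU and you get the RSS-style
spreading described above.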
Ben.
--
Ben Hutchings, Senior Software Engineer, Solarflare Communications
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.