Message-ID: <20110119195839.GO2754@reaktio.net>
Date:	Wed, 19 Jan 2011 21:58:39 +0200
From:	Pasi Kärkkäinen <pasik@....fi>
To:	Ben Hutchings <bhutchings@...arflare.com>
Cc:	Jeremy Fitzhardinge <jeremy@...p.org>,
	Ian Campbell <Ian.Campbell@...citrix.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	xen-devel <xen-devel@...ts.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Subject: Re: [Xen-devel] Re: [PATCH] xen network backend driver

On Wed, Jan 19, 2011 at 07:48:19PM +0000, Ben Hutchings wrote:
> On Wed, 2011-01-19 at 21:28 +0200, Pasi Kärkkäinen wrote:
> > On Wed, Jan 19, 2011 at 11:16:59AM -0800, Jeremy Fitzhardinge wrote:
> > > On 01/19/2011 10:05 AM, Ben Hutchings wrote:
> > > > Not in itself.  NAPI polling will run on the same CPU which scheduled it
> > > > (so wherever the IRQ was initially handled).  If the protocol used
> > > > between netfront and netback doesn't support RSS then RPS
> > > > <http://lwn.net/Articles/362339/> can be used to spread the RX work
> > > > across CPUs.
> > > 
> > > There's only one irq per netback which is bound to one (V)CPU at a
> > > time.  I guess we could extend it to have multiple irqs per netback and
> > > some way of distributing packet flows over them, but that would only
> > > really make sense if there's a single interface with much more traffic
> > > than the others; otherwise the interrupts should be fairly well
> > > distributed (assuming that the different netback irqs are routed to
> > > different cpus).
> > > 
> > 
> > Does "multiqueue" only work for NIC drivers (and frontend drivers),
> > or could it be used also for netback?
> 
> Netfront and netback would have to agree on how many queues to use in
> each direction.
> 

Yep.
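
Btw, regarding the RPS option Ben mentioned above: as far as I
understand it's configured per RX queue through sysfs, so from
userspace it's just a matter of writing a CPU bitmask. Untested
sketch (eth0 and the 0xf mask are only example values):

#include <stdio.h>

int main(void)
{
        /* Spread eth0's rx-0 processing over CPUs 0-3 (mask 0xf);
         * same as: echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus */
        FILE *f = fopen("/sys/class/net/eth0/queues/rx-0/rps_cpus", "w");

        if (!f) {
                perror("rps_cpus");
                return 1;
        }
        fprintf(f, "f\n");
        fclose(f);
        return 0;
}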

> > (afaik Linux multiqueue enables setting up multiple receive queues
> > each having a separate irq.)
> 
> In the context of Linux networking, 'multiqueue' generally refers to use
> of multiple *transmit* queues.  The networking core handles scheduling
> and locking of each transmit queue, so it had to be extended to support
> multiple queues - initially done in 2.6.23, then made scalable in
> 2.6.27.
> 

Thanks for clearing that up.
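
So if I read the API right, the driver-side pattern is roughly the
following (a generic sketch, not actual netback code; my_priv and
my_ring_full are made-up placeholders):

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

#define MY_TX_QUEUES 4                  /* example queue count */

struct my_priv {
        int dummy;                      /* ... real device state ... */
};

static bool my_ring_full(struct my_priv *priv, u16 q)
{
        return false;   /* placeholder: check queue q's descriptors */
}

static netdev_tx_t my_start_xmit(struct sk_buff *skb,
                                 struct net_device *dev)
{
        u16 q = skb_get_queue_mapping(skb);     /* picked by the core */
        struct netdev_queue *txq = netdev_get_tx_queue(dev, q);

        /* ... post skb on hardware/shared ring q ... */

        if (my_ring_full(netdev_priv(dev), q))
                netif_tx_stop_queue(txq);   /* stops only this queue */
        return NETDEV_TX_OK;
}

static struct net_device *my_alloc(void)
{
        /* The core allocates one struct netdev_queue per TX queue
         * and handles per-queue locking/scheduling from here on. */
        return alloc_etherdev_mq(sizeof(struct my_priv), MY_TX_QUEUES);
}

The nice part being that stopping one full queue doesn't block the
others.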

> It was possible to use multiple receive queues per device long before
> this since the networking core is not involved in locking them.  (Though
> it did require some hacks to create multiple NAPI contexts, before
> 2.6.24.)  This is mostly useful in conjunction with separate IRQs
> per RX queue, spread across multiple CPUs (sometimes referred to as
> Receive Side Scaling or RSS).
> 

OK, I should read changelogs more closely... I thought both the receive
and transmit multiqueue features appeared 'recently', but it seems I
was wrong.

I think Linux 2.6.32 added multiqueue VLAN support...
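
For the RX side you describe, I assume the per-queue pattern looks
roughly like this (again only a sketch; my_rx_queue, my_poll and
my_rx_irq are made-up names):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct my_rx_queue {
        struct napi_struct napi;
        int irq;
        /* ... per-queue ring state ... */
};

static int my_poll(struct napi_struct *napi, int budget)
{
        int done = 0;

        /* ... pull up to 'budget' packets off this queue's ring ... */
        if (done < budget)
                napi_complete(napi);    /* re-enable this queue's IRQ */
        return done;
}

static irqreturn_t my_rx_irq(int irq, void *data)
{
        struct my_rx_queue *rxq = data;

        napi_schedule(&rxq->napi);      /* poll runs on this same CPU */
        return IRQ_HANDLED;
}

static int my_setup_rx_queue(struct net_device *dev,
                             struct my_rx_queue *rxq, int irq)
{
        netif_napi_add(dev, &rxq->napi, my_poll, 64);
        napi_enable(&rxq->napi);
        rxq->irq = irq;
        /* For RSS-style spreading, bind each queue's IRQ to a
         * different CPU, e.g. via /proc/irq/<n>/smp_affinity. */
        return request_irq(irq, my_rx_irq, 0, "my-rx", rxq);
}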

-- Pasi

