Message-ID: <1329357439.3048.115.camel@deadeye>
Date:	Thu, 16 Feb 2012 01:57:19 +0000
From:	Ben Hutchings <bhutchings@...arflare.com>
To:	John Fastabend <john.r.fastabend@...el.com>
CC:	David Miller <davem@...emloft.net>, <netdev@...r.kernel.org>,
	<linux-net-drivers@...arflare.com>,
	Shradha Shah <sshah@...arflare.com>
Subject: Re: [PATCH net-next 19/19] sfc: Add SR-IOV back-end support for
 SFC9000 family

On Wed, 2012-02-15 at 17:18 -0800, John Fastabend wrote:
> On 2/15/2012 4:52 PM, Ben Hutchings wrote:
> > On the SFC9000 family, each port has 1024 Virtual Interfaces (VIs),
> > each with an RX queue, a TX queue, an event queue and a mailbox
> > register.  These may be assigned to up to 127 SR-IOV virtual functions
> > per port, with up to 64 VIs per VF.
> > 
> > We allocate an extra channel (IRQ and event queue only) to receive
> > requests from VF drivers.
> > 
> > There is a per-port limit of 4 concurrent RX queue flushes, and queue
> > flushes may be initiated by the MC in response to a Function Level
> > Reset (FLR) of a VF.  Therefore, when SR-IOV is in use, we submit all
> > flush requests via the MC.
> > 
> > The RSS indirection table is shared with VFs, so the number of RX
> > queues used in the PF is limited to the number of VIs per VF.
> > 
> > This is almost entirely the work of Steve Hodgson, formerly
> > shodgson@...arflare.com.
> > 
> > Signed-off-by: Ben Hutchings <bhutchings@...arflare.com>
> > ---
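
To put rough numbers on the partitioning described above, here is a
back-of-the-envelope sketch (hypothetical names and a contiguous VI
layout assumed for illustration; this is not the actual driver code):

#include <stdio.h>

#define VIS_PER_PORT    1024    /* VIs per port on SFC9000 */
#define MAX_VFS         127     /* SR-IOV VFs per port */
#define MAX_VIS_PER_VF  64      /* VIs assignable to one VF */

/* First VI index owned by a VF, assuming the PF keeps the first
 * pf_vis VIs and VFs are packed contiguously after them. */
static unsigned int vf_vi_base(unsigned int pf_vis,
                               unsigned int vis_per_vf,
                               unsigned int vf)
{
        return pf_vis + vf * vis_per_vf;
}

int main(void)
{
        unsigned int vis_per_vf = 4;      /* example VF size */
        unsigned int pf_vis = vis_per_vf; /* PF RX queues capped at
                                           * VIs-per-VF because the RSS
                                           * table is shared with VFs */
        unsigned int max_vfs = (VIS_PER_PORT - pf_vis) / vis_per_vf;

        if (max_vfs > MAX_VFS)
                max_vfs = MAX_VFS;

        printf("VIs/VF %u, usable VFs %u, VF 5 base VI %u\n",
               vis_per_vf, max_vfs, vf_vi_base(pf_vis, vis_per_vf, 5));
        return 0;
}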
> 
> Hi Ben,
> 
> So how would multiple VIs per VF work? It looks like each VI has a TX/RX
> queue pair, all bundled under a single netdev with some set of TX MAC
> filters.

They can be used to provide a multiqueue net device for use in multi-
processor guests.
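
From the VF driver's point of view that looks roughly like the sketch
below (illustrative only, using the generic alloc_etherdev_mqs() API;
names are made up and this is not the actual sfc VF code):

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

struct my_vf {
        unsigned int n_vis;     /* VIs granted to this VF */
        /* ... per-VI queue state ... */
};

/* Hypothetical probe fragment: one netdev TX/RX queue pair per VI,
 * so a guest with n_vis CPUs can get one queue pair per CPU. */
static int my_vf_create_netdev(unsigned int n_vis)
{
        struct net_device *dev;
        int rc;

        /* alloc_etherdev_mqs() sizes both the TX and RX queue sets,
         * so ask for one of each per VI. */
        dev = alloc_etherdev_mqs(sizeof(struct my_vf), n_vis, n_vis);
        if (!dev)
                return -ENOMEM;

        /* ... bind each queue pair to its VI's TX/RX/event queues,
         * set netdev ops, MAC address from the PF, etc. ... */

        rc = register_netdev(dev);
        if (rc)
                free_netdev(dev);
        return rc;
}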

> Do you expect users to build tc rules and edit the queue_mapping to get
> the skb headed at the correct tx queue? Would it be better to model each
> VI as its own net device?

No, we expect users to assign the VF into the guest.
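
(Each VF is a PCI function of its own once the PF enables SR-IOV, so
the usual device assignment path applies and the guest just sees a
multiqueue NIC.  Kernel-side that amounts to little more than the
standard call below; names are hypothetical, not the actual patch:)

#include <linux/pci.h>

/* Hypothetical PF fragment: expose num_vfs virtual functions.  Each
 * then enumerates as a separate PCI function which the host can hand
 * to a guest via the normal device assignment path. */
static int my_pf_enable_vfs(struct pci_dev *pdev, int num_vfs)
{
        if (num_vfs > 127)      /* per-port limit on SFC9000 */
                return -EINVAL;

        return pci_enable_sriov(pdev, num_vfs);
}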

Ben.

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.
