Message-ID: <1329855866.2689.41.camel@bwh-desktop>
Date:	Tue, 21 Feb 2012 20:24:26 +0000
From:	Ben Hutchings <bhutchings@...arflare.com>
To:	John Fastabend <john.r.fastabend@...el.com>
CC:	David Miller <davem@...emloft.net>, <netdev@...r.kernel.org>,
	<linux-net-drivers@...arflare.com>,
	Shradha Shah <sshah@...arflare.com>
Subject: Re: [PATCH net-next 19/19] sfc: Add SR-IOV back-end support for
 SFC9000 family

On Tue, 2012-02-21 at 11:52 -0800, John Fastabend wrote:
> On 2/15/2012 5:57 PM, Ben Hutchings wrote:
> > On Wed, 2012-02-15 at 17:18 -0800, John Fastabend wrote:
> >> On 2/15/2012 4:52 PM, Ben Hutchings wrote:
> >>> On the SFC9000 family, each port has 1024 Virtual Interfaces (VIs),
> >>> each with an RX queue, a TX queue, an event queue and a mailbox
> >>> register.  These may be assigned to up to 127 SR-IOV virtual functions
> >>> per port, with up to 64 VIs per VF.
> >>>
> >>> We allocate an extra channel (IRQ and event queue only) to receive
> >>> requests from VF drivers.
> >>>
> >>> There is a per-port limit of 4 concurrent RX queue flushes, and queue
> >>> flushes may be initiated by the MC in response to a Function Level
> >>> Reset (FLR) of a VF.  Therefore, when SR-IOV is in use, we submit all
> >>> flush requests via the MC.
> >>>
> >>> The RSS indirection table is shared with VFs, so the number of RX
> >>> queues used in the PF is limited to the number of VIs per VF.
> >>>
> >>> This is almost entirely the work of Steve Hodgson, formerly
> >>> shodgson@...arflare.com.
> >>>
> >>> Signed-off-by: Ben Hutchings <bhutchings@...arflare.com>
> >>> ---
> >>
> >> Hi Ben,
> >>
> >> So how would multiple VIs per VF work? Looks like each VI has a TX/RX
> >> pair, all bundled under a single netdev with some set of TX MAC filters.
> > 
> > They can be used to provide a multiqueue net device for use in multi-
> > processor guests.
> 
> OK, thanks. It's really just a multiqueue VF then. Calling it a
> virtual interface seems a bit confusing here.

The Falcon architecture was designed around the needs of user-level
networking, so that we could give each process a Virtual Interface
consisting of one RX, one TX and one event queue by mapping one page of
MMIO space into the process.  This term is now used to refer to a set of
queues accessible through a single page - but there is no hard-wired
connection between them or with other resources like filters.
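
To make that concrete, here is a rough, userspace-only sketch of what a
VI amounts to.  The struct and names below are purely illustrative (the
1024-VI and 127-VF limits come from the patch description; the 4K page
size and the field names are my own shorthand, not the driver's actual
structures):

/* Illustration only -- not the sfc driver's real data structures. */
#include <stdio.h>

#define VI_PAGE_SIZE      4096   /* assume one 4K MMIO page per VI */
#define VIS_PER_PORT      1024   /* per-port VI count (SFC9000) */
#define MAX_VFS_PER_PORT   127   /* SR-IOV VF limit per port */

/* A VI: one RX queue, one TX queue and one event queue, reached
 * through a single page of doorbell registers. */
struct vi {
	unsigned int  rxq;       /* RX descriptor queue index */
	unsigned int  txq;       /* TX descriptor queue index */
	unsigned int  evq;       /* event queue index */
	unsigned long doorbell;  /* MMIO offset of this VI's page */
};

/* A VF gets a contiguous block of VIs; each VI gives the guest
 * driver one TX/RX pair, so N VIs => an N-queue net device. */
static void show_vf(unsigned int first_vi, unsigned int vi_count)
{
	for (unsigned int i = 0; i < vi_count; i++) {
		struct vi vi = {
			.rxq      = first_vi + i,
			.txq      = first_vi + i,
			.evq      = first_vi + i,
			.doorbell = (unsigned long)(first_vi + i) * VI_PAGE_SIZE,
		};
		printf("VI %u: doorbell page at 0x%lx\n",
		       first_vi + i, vi.doorbell);
	}
}

int main(void)
{
	show_vf(64, 4);   /* e.g. a VF given 4 VIs starting at VI 64 */
	return 0;
}

Note the trade-off the patch description implies: with 1024 VIs per
port, giving every VF the maximum of 64 VIs leaves room for at most 16
VFs (before the PF's own VIs are counted), while reaching the 127-VF
limit means roughly 8 VIs per VF.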

> For example, it doesn't resemble a VSI (Virtual Station Interface) per
> the 802.1Q spec at all.

Right.  Also, Solarflare documentation uses the term 'VNIC' instead of
'VI', though it's not what is usually meant by 'vNIC' these days either.
But Solarflare and its predecessors were using these terms well before
network virtualisation was cool. ;-)

> I'm guessing using this with a TX MAC/VLAN filter looks something like
> Intel's VMDQ solutions.

Possibly; I haven't compared.

Ben.
 
> >> Do you expect users to build tc rules and edit the queue_mapping to get
> >> the skb headed at the correct tx queue? Would it be better to model each
> >> VI as its own net device?
> > 
> > No, we expect users to assign the VF into the guest.
> > 
> 
> Got it.
> 
> .John
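
(A quick postscript on the queue_mapping point above: once the VF is in
the guest, its VIs just show up as ordinary TX/RX queue pairs on one net
device, and the guest's stack spreads flows across them by hashing, as
it would for any multiqueue NIC, so no tc rules are needed.  A toy
userspace illustration follows, with a made-up hash standing in for the
kernel's flow hash and a hypothetical 4-VI VF:)

/* Illustration only: how flows land on the TX queues of a multiqueue
 * device without any manual queue_mapping steering. */
#include <stdio.h>
#include <stdint.h>

#define NUM_TX_QUEUES 4   /* hypothetical VF with 4 VIs */

/* crude stand-in for the kernel's flow hash */
static uint32_t flow_hash(uint32_t saddr, uint32_t daddr,
			  uint16_t sport, uint16_t dport)
{
	uint32_t h = saddr ^ daddr ^ ((uint32_t)sport << 16 | dport);
	h ^= h >> 16;
	h *= 0x45d9f3bu;
	h ^= h >> 16;
	return h;
}

int main(void)
{
	/* a few made-up flows (saddr, daddr, sport, dport) */
	struct { uint32_t s, d; uint16_t sp, dp; } flows[] = {
		{ 0x0a000001, 0x0a000002, 49152,  80 },
		{ 0x0a000001, 0x0a000003, 49153, 443 },
		{ 0x0a000004, 0x0a000002, 49154,  22 },
	};

	for (unsigned int i = 0; i < sizeof(flows) / sizeof(flows[0]); i++) {
		uint32_t h = flow_hash(flows[i].s, flows[i].d,
				       flows[i].sp, flows[i].dp);
		printf("flow %u -> TX queue %u\n", i, h % NUM_TX_QUEUES);
	}
	return 0;
}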

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.

