Message-ID: <608505778d76b1b01cb3e8d19ecda5b8578f0f79.camel@kernel.org>
Date:   Mon, 14 Dec 2020 22:15:34 -0800
From:   Saeed Mahameed <saeed@...nel.org>
To:     Alexander Duyck <alexander.duyck@...il.com>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Jason Gunthorpe <jgg@...dia.com>,
        Leon Romanovsky <leonro@...dia.com>,
        Netdev <netdev@...r.kernel.org>, linux-rdma@...r.kernel.org,
        David Ahern <dsahern@...nel.org>,
        Jacob Keller <jacob.e.keller@...el.com>,
        Sridhar Samudrala <sridhar.samudrala@...el.com>,
        "Ertman, David M" <david.m.ertman@...el.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Kiran Patil <kiran.patil@...el.com>,
        Greg KH <gregkh@...uxfoundation.org>
Subject: Re: [net-next v4 00/15] Add mlx5 subfunction support

On Mon, 2020-12-14 at 17:53 -0800, Alexander Duyck wrote:
> On Mon, Dec 14, 2020 at 1:49 PM Saeed Mahameed <saeed@...nel.org>
> wrote:
> > Hi Dave, Jakub, Jason,
> > 
> > This series from Parav was the theme of this mlx5 release cycle.
> > We've been waiting anxiously for the auxbus infrastructure to make
> > it into the kernel, and now that auxbus is in and all the stars are
> > aligned, I can finally submit this V2 of the devlink and mlx5
> > subfunction support.
> > 
> > Subfunctions came to solve the scaling issue of virtualization
> > and switchdev environments, where SR-IOV failed to deliver and
> > users ran out of VFs very quickly, as SR-IOV demands a huge amount
> > of physical resources in both the server and the NIC.
> > 
> > Subfunctions provide the same functionality as SR-IOV but in a
> > very lightweight manner; please see the thorough and detailed
> > documentation from Parav below, in the commit messages and in the
> > networking documentation patches at the end of this series.
> > 
> 
> Just to clarify a few things for myself. You mention virtualization
> and SR-IOV in your patch description, but you cannot support direct
> assignment with this, correct? The idea here is simply logical
> partitioning of an existing network interface, correct? So this isn't
> so much a solution for virtualization, but may work better for
> containers. I view this as an important distinction to make as the

At the current state, yes, but the SF solution can be extended to
support direct assignment, which is why I think the SF solution can
do better and eventually replace SR-IOV.
Also, many customers are currently using SR-IOV with containers to
get the performance and isolation features, since there were no
other options.

> first thing that came to mind when I read this was mediated devices,
> which are similar but focused only on the virtualization case:
> https://www.kernel.org/doc/html/v5.9/driver-api/vfio-mediated-device.html
> 
> > Parav Pandit Says:
> > =================
> > 
> > This patchset introduces support for mlx5 subfunction (SF).
> > 
> > A subfunction is a lightweight function that has a parent PCI
> > function on which it is deployed. An mlx5 subfunction has its own
> > function capabilities and its own resources. This means a
> > subfunction has its own dedicated queues (txq, rxq, cq, eq). These
> > queues are neither shared with nor stolen from the parent PCI
> > function.
> 
> Rather than calling this a subfunction, would it make more sense to
> call it something such as a queue set? It seems like this is exposing
> some of the same functionality we did in the Intel drivers such as
> ixgbe and i40e via the macvlan offload interface. However, the
> ixgbe/i40e hardware was somewhat limited in that we were only able
> to expose Ethernet interfaces via this sort of VMQ/VMDQ feature, and
> even with that we have seen some limitations to the interface. It
> sounds like you are able to break out RDMA capable devices this way
> as well. So in terms of ways to go I would argue this is likely
> better.

We've discussed this thoroughly on V0; the SF solution is closer to a
VF than a VMDQ, this is not just a set of queues.

https://lore.kernel.org/linux-rdma/421951d99a33d28b91f2b2997409d0c97fa5a98a.camel@kernel.org/
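
To make the distinction concrete: each SF shows up as its own device
on the auxiliary bus and gets a full driver probe, much like a VF gets
a PCI probe, rather than being a queue group carved out of the parent
netdev. A minimal sketch of such an auxiliary driver follows; the
"vendor_core.sf" and demo_* names are illustrative, not the actual
mlx5 ones:

#include <linux/module.h>
#include <linux/auxiliary_bus.h>

/* Illustrative id table: auxiliary device names take the form
 * "<module>.<kind>", with one aux device instance per SF. */
static const struct auxiliary_device_id demo_sf_id_table[] = {
	{ .name = "vendor_core.sf" },
	{},
};
MODULE_DEVICE_TABLE(auxiliary, demo_sf_id_table);

static int demo_sf_probe(struct auxiliary_device *adev,
			 const struct auxiliary_device_id *id)
{
	/* A real driver would set up the SF's dedicated txq/rxq/cq/eq
	 * resources here: a full probe per SF, as with a VF. */
	dev_info(&adev->dev, "bound SF aux device %s\n", id->name);
	return 0;
}

static void demo_sf_remove(struct auxiliary_device *adev)
{
	dev_info(&adev->dev, "unbound SF aux device\n");
}

static struct auxiliary_driver demo_sf_driver = {
	.name = "demo_sf",
	.probe = demo_sf_probe,
	.remove = demo_sf_remove,
	.id_table = demo_sf_id_table,
};
module_auxiliary_driver(demo_sf_driver);

MODULE_LICENSE("GPL");

The point being: the SF driver model is the standard driver model,
only the bus is auxiliary instead of PCI.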

> However, one downside is that we are going to end up seeing each
> subfunction being different from driver to driver and vendor to
> vendor, which I would argue was also one of the problems with
> SR-IOV, as you end up with a bit of vendor lock-in as a result of
> this feature, since each vendor will be providing a different
> interface.
> 

I disagree. SFs are tightly coupled with the switchdev model and
devlink function ports, so they are backed by a well-defined model. I
can say the same about SR-IOV with switchdev mode; this sort of
vendor lock-in issue is eliminated when you migrate to switchdev
mode.

> > When a subfunction is RDMA capable, it has its own QP1, GID table,
> > and RDMA resources, neither shared with nor stolen from the parent
> > PCI function.
> > 
> > A subfunction has a dedicated window in PCI BAR space that is not
> > shared with the other subfunctions or the parent PCI function.
> > This ensures that all class devices of the subfunction access only
> > their assigned PCI BAR space.
> > 
> > A subfunction supports eswitch representation, through which it
> > supports tc offloads. The user must configure the eswitch to
> > send/receive packets from/to the subfunction port.
> > 
> > Subfunctions share PCI-level resources such as PCI MSI-X IRQs with
> > other subfunctions and/or with their parent PCI function.
> 
> This piece of the architecture has me somewhat concerned. If
> all your resources are shared and you are allowing devices to be

Not all, only PCI MSI-X, for now.

> created incrementally, you either have to pre-partition the entire
> function, which usually results in limited resources for your base
> setup, or free resources from existing interfaces and redistribute
> them as things change. I would be curious which approach you are
> taking here. So, for example, if you hit a certain threshold, will
> you need to reset the port and rebalance the IRQs between the
> various functions?
> 

Currently SFs use whatever IRQs the PF has pre-allocated for itself,
so there is no IRQ limit issue at the moment. We are considering a
dynamic IRQ pool with dynamic balancing, or, even better, the IMS
approach, which perfectly fits the SF architecture.
https://patchwork.kernel.org/project/linux-pci/cover/1568338328-22458-1-git-send-email-megha.dey@linux.intel.com/

As for internal resources, they are fully isolated (not shared) and
internally managed by FW, exactly like a VF's internal resources.
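
For reference, here is a rough sketch of the pre-allocation pattern
described above: the PF grabs a pool of MSI-X vectors once at probe
time, and SFs borrow vectors from that pool. All demo_* names are
hypothetical; this is not the mlx5 implementation:

#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/bitmap.h>

/* Hypothetical per-PF IRQ pool: the PF owns the PCI MSI-X vectors;
 * SFs only borrow them, consuming no PCI-level resources. */
struct demo_irq_pool {
	struct pci_dev *pdev;
	int nvec;		/* vectors actually granted */
	unsigned long *used;	/* bitmap of handed-out vectors */
};

static int demo_pf_irq_pool_init(struct demo_irq_pool *pool,
				 struct pci_dev *pdev,
				 int min_vecs, int max_vecs)
{
	int nvec;

	/* The PCI core may grant anything in [min_vecs, max_vecs]. */
	nvec = pci_alloc_irq_vectors(pdev, min_vecs, max_vecs,
				     PCI_IRQ_MSIX);
	if (nvec < 0)
		return nvec;

	pool->pdev = pdev;
	pool->nvec = nvec;
	pool->used = bitmap_zalloc(nvec, GFP_KERNEL);
	if (!pool->used) {
		pci_free_irq_vectors(pdev);
		return -ENOMEM;
	}
	return 0;
}

/* An SF takes one vector and gets back the Linux IRQ number the
 * PCI core mapped for it, ready for request_irq(). */
static int demo_sf_borrow_irq(struct demo_irq_pool *pool)
{
	int i = find_first_zero_bit(pool->used, pool->nvec);

	if (i >= pool->nvec)
		return -ENOSPC;	/* static pool exhausted */
	set_bit(i, pool->used);
	return pci_irq_vector(pool->pdev, i);
}

Whether that pool stays static, gets rebalanced dynamically, or is
replaced by IMS entirely is exactly the open question above.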


