Message-ID: <CAKgT0UdXxLB-TAOB3BbP2NuDeeoza5XwPh-TO+b-KzJe4dEcLw@mail.gmail.com>
Date: Tue, 15 Dec 2020 19:12:50 -0800
From: Alexander Duyck <alexander.duyck@...il.com>
To: Edwin Peer <edwin.peer@...adcom.com>
Cc: Parav Pandit <parav@...dia.com>, Saeed Mahameed <saeed@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Jason Gunthorpe <jgg@...dia.com>,
Leon Romanovsky <leonro@...dia.com>,
Netdev <netdev@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
David Ahern <dsahern@...nel.org>,
Jacob Keller <jacob.e.keller@...el.com>,
Sridhar Samudrala <sridhar.samudrala@...el.com>,
"Ertman, David M" <david.m.ertman@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
Kiran Patil <kiran.patil@...el.com>,
Greg KH <gregkh@...uxfoundation.org>
Subject: Re: [net-next v4 00/15] Add mlx5 subfunction support
On Tue, Dec 15, 2020 at 5:13 PM Edwin Peer <edwin.peer@...adcom.com> wrote:
>
> On Tue, Dec 15, 2020 at 10:49 AM Alexander Duyck
> <alexander.duyck@...il.com> wrote:
>
> > It isn't "SR-IOV done right" it seems more like "VMDq done better".
>
> I don't think I agree with that assertion. The fact that VMDq can talk
> to a common driver still makes VMDq preferable in some respects. Thus,
> subfunctions do appear to be more of a better SR-IOV than a better
> VMDq, but I'm similarly not sold on whether a better SR-IOV is
> sufficient benefit to warrant the additional complexity this
> introduces. If I understand correctly, subfunctions buy two things:
>
> 1) More than 256 SFs are possible: Maybe it's about time PCI-SIG
> addresses this limit for VFs? If that were the only problem with VFs,
> then fixing it once there would be cleaner. The devlink interface for
> configuring a SF is certainly more sexy than legacy SR-IOV, but it
> shouldn't be fundamentally impossible to zhuzh up VFs either. One can
> also imagine possibilities around remapping multiple PFs (and their
> VFs) in a clever way to get around the limited number of PCI resources
> exposed to the host.
The fact is SR-IOV just wasn't designed to scale well. I think we are
probably going to see most vendors move away from it.

What we are really talking about now is the future of all this and how
to implement Scalable I/O Virtualization
(https://software.intel.com/content/www/us/en/develop/download/intel-scalable-io-virtualization-technical-specification.html).
That document is a good primer for many of the features we are
discussing, as it describes how to compose a device.

The problem, as it was with SR-IOV, is that the S-IOV specification is
very PCIe centric and doesn't do a good job of explaining how the
network fits into all of this. To complicate things further, S-IOV
expected these devices to be directly assigned to guests/applications,
whereas here we are talking about using them in the host, which makes
things a bit messier.
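To make the "compose a device and use it in the host" part a little
more concrete, here is a rough sketch of how a parent PF driver could
hand a composed subfunction over to the host via the auxiliary bus
(the mechanism this series builds on). None of the names here
("mydrv", mydrv_add_sf(), the "sf" device name) come from the series;
they are placeholders, and the error-handling details reflect my
reading of the auxiliary bus code, so treat it as illustrative only:

#include <linux/auxiliary_bus.h>
#include <linux/slab.h>

/* Hypothetical per-subfunction wrapper; not from the mlx5 series. */
struct mydrv_sf_dev {
	struct auxiliary_device adev;
	u32 sfnum;
};

static void mydrv_sf_release(struct device *dev)
{
	struct auxiliary_device *adev =
		container_of(dev, struct auxiliary_device, dev);

	kfree(container_of(adev, struct mydrv_sf_dev, adev));
}

/* Called by the parent PF driver once the device has been composed. */
static int mydrv_add_sf(struct device *parent, u32 sfnum)
{
	struct mydrv_sf_dev *sf;
	int err;

	sf = kzalloc(sizeof(*sf), GFP_KERNEL);
	if (!sf)
		return -ENOMEM;

	sf->sfnum = sfnum;
	/* Device ends up named "<modname>.sf.<sfnum>"; drivers match on
	 * "<modname>.sf".
	 */
	sf->adev.name = "sf";
	sf->adev.id = sfnum;
	sf->adev.dev.parent = parent;
	sf->adev.dev.release = mydrv_sf_release;

	err = auxiliary_device_init(&sf->adev);
	if (err) {
		kfree(sf);
		return err;
	}

	err = auxiliary_device_add(&sf->adev);
	if (err)
		auxiliary_device_uninit(&sf->adev);	/* release() frees sf */
	return err;
}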
> 2) More flexible division of resources: It's not clear that device
> firmware can't perform smarter allocation than N/<num VFs>, but
> subfunctions also appear to allow sharing of certain resources by the
> PF driver, if desirable. To the extent that resources are shared, how
> are workloads isolated from each other?
>
> I'm not sure I like the idea of having to support another resource
> allocation model in our driver just to support this, at least not
> without a clearer understanding of what is being gained.
I view this as the future alternative to SR-IOV; it is just a matter
of how we define it. Eventually we would probably drop the SR-IOV
implementation and move over to S-IOV instead. As such, if this is
done right, I don't see it as a case where we need to support both
models; we should be able to drop support for one once we have the
other.
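The host side of that is just an ordinary auxiliary driver binding to
those composed devices, which is what lets one driver model cover this
without a second resource-allocation scheme bolted on. Again only a
sketch, reusing the made-up "mydrv.sf" naming from the fragment above:

#include <linux/auxiliary_bus.h>
#include <linux/module.h>

static int mydrv_sf_probe(struct auxiliary_device *adev,
			  const struct auxiliary_device_id *id)
{
	/* Set up queues, IRQs, the netdev, etc. for this subfunction. */
	return 0;
}

static void mydrv_sf_remove(struct auxiliary_device *adev)
{
	/* Tear down whatever probe set up. */
}

static const struct auxiliary_device_id mydrv_sf_id_table[] = {
	{ .name = "mydrv.sf" },		/* placeholder match string */
	{},
};
MODULE_DEVICE_TABLE(auxiliary, mydrv_sf_id_table);

static struct auxiliary_driver mydrv_sf_driver = {
	.name = "sf",
	.probe = mydrv_sf_probe,
	.remove = mydrv_sf_remove,
	.id_table = mydrv_sf_id_table,
};
module_auxiliary_driver(mydrv_sf_driver);

MODULE_LICENSE("GPL");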