Message-ID: <CAKOOJTybz71h6V5228vk1erusfb-QJQEQPOaRKzmspfotRHYhA@mail.gmail.com>
Date: Tue, 15 Dec 2020 17:12:33 -0800
From: Edwin Peer <edwin.peer@...adcom.com>
To: Alexander Duyck <alexander.duyck@...il.com>
Cc: Parav Pandit <parav@...dia.com>, Saeed Mahameed <saeed@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Jason Gunthorpe <jgg@...dia.com>,
Leon Romanovsky <leonro@...dia.com>,
Netdev <netdev@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
David Ahern <dsahern@...nel.org>,
Jacob Keller <jacob.e.keller@...el.com>,
Sridhar Samudrala <sridhar.samudrala@...el.com>,
"Ertman, David M" <david.m.ertman@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
Kiran Patil <kiran.patil@...el.com>,
Greg KH <gregkh@...uxfoundation.org>
Subject: Re: [net-next v4 00/15] Add mlx5 subfunction support

On Tue, Dec 15, 2020 at 10:49 AM Alexander Duyck
<alexander.duyck@...il.com> wrote:
> It isn't "SR-IOV done right" it seems more like "VMDq done better".

I don't think I agree with that assertion. The fact that VMDq can talk
to a common driver still makes VMDq preferable in some respects, and
subfunctions don't share that property. Subfunctions thus appear to be
more of a better SR-IOV than a better VMDq, but I'm similarly not sold
on whether a better SR-IOV is sufficient benefit to warrant the
additional complexity this introduces. If I understand correctly,
subfunctions buy two things:

1) More than 256 SFs are possible: Maybe it's about time PCI-SIG
addressed this limit for VFs? If that were the only problem with VFs,
fixing it once at the spec level would be cleaner. The devlink
interface for configuring a SF is certainly sexier than legacy SR-IOV
(see the sketch after this list), but it shouldn't be fundamentally
impossible to zhuzh up VFs either. One can also imagine possibilities
around remapping multiple PFs (and their VFs) in a clever way to get
around the limited number of PCI resources exposed to the host.

2) More flexible division of resources: It's not clear that device
firmware can't perform smarter allocation than N/<num VFs> (see the
second sketch below), but subfunctions also appear to allow sharing of
certain resources by the PF driver, if desirable. To the extent that
resources are shared, how are workloads isolated from each other?
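
For reference, the contrast item 1 draws looks roughly like the
following (devlink syntax as proposed in this series; the PCI address
and port index shown are illustrative):

  # legacy SR-IOV: all-or-nothing VF enablement via sysfs
  echo 4 > /sys/bus/pci/devices/0000:06:00.0/sriov_numvfs

  # subfunctions: per-function lifecycle and attributes via devlink
  devlink port add pci/0000:06:00.0 flavour pcisf pfnum 0 sfnum 88
  devlink port function set pci/0000:06:00.0/32768 hw_addr 00:00:00:00:88:88
  devlink port function set pci/0000:06:00.0/32768 state active

On item 2, devlink already has a generic resource interface that could,
in principle, express an explicit division without inventing a new
model (the resource path below is hypothetical):

  devlink resource show pci/0000:06:00.0
  devlink resource set pci/0000:06:00.0 path /queues size 512
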
I'm not sure I like the idea of having to support another resource
allocation model in our driver just to enable this, at least not
without a clearer understanding of what is being gained.

Like you, I would also prefer a more common infrastructure for
exposing something based on VirtIO/VMDq as the container/VM facing
netdevs. Is the lowest common denominator that a VMDq-based interface
would constrain things to really unsuitable for container use cases?
Is the added complexity and confusion around VF vs SF vs VMDq really
warranted? I also don't see how this tackles container/VF portability,
migration of workloads, kernel network stack bypass, or any of the
other legacy limitations regarding SR-IOV VFs when we have
vendor-specific aux bus drivers talking directly to vendor-specific
backend hardware resources. In this regard, don't subfunctions, by
definition, have most of the same limitations as SR-IOV VFs?
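
To make the last point concrete: once a subfunction is activated, what
appears is a vendor-named device on the auxiliary bus that only the
matching vendor driver can bind to (the mlx5_core.sf.<N> naming below
assumes the convention from this series and is illustrative):

  $ ls /sys/bus/auxiliary/devices/
  mlx5_core.sf.2  mlx5_core.sf.3

A generic VirtIO/VMDq-style netdev would not have this vendor coupling,
which is precisely the portability concern above.
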
Regards,
Edwin Peer