Message-ID: <CAKgT0Uf4i9hrq6Z4dx03sv_ubVpZwKm5Tiz+-UwJp38cTyZg+g@mail.gmail.com>
Date: Fri, 20 Nov 2020 09:58:16 -0800
From: Alexander Duyck <alexander.duyck@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: David Ahern <dsahern@...il.com>, Jason Gunthorpe <jgg@...dia.com>,
Parav Pandit <parav@...dia.com>,
Saeed Mahameed <saeed@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
Jiri Pirko <jiri@...dia.com>,
"dledford@...hat.com" <dledford@...hat.com>,
Leon Romanovsky <leonro@...dia.com>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH net-next 00/13] Add mlx5 subfunction support
On Thu, Nov 19, 2020 at 5:29 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Wed, 18 Nov 2020 21:35:29 -0700 David Ahern wrote:
> > On 11/18/20 7:14 PM, Jakub Kicinski wrote:
> > > On Tue, 17 Nov 2020 14:49:54 -0400 Jason Gunthorpe wrote:
> > >> On Tue, Nov 17, 2020 at 09:11:20AM -0800, Jakub Kicinski wrote:
> > >>
> > >>>> Just to refresh all our memory, we discussed and settled on the flow
> > >>>> in [2]; RFC [1] followed this discussion.
> > >>>>
> > >>>> vdpa tool of [3] can add one or more vdpa device(s) on top of already
> > >>>> spawned PF, VF, SF device.
> > >>>
> > >>> Nack for the networking part of that. It'd basically be VMDq.
> > >>
> > >> What are you NAK'ing?
> > >
> > > Spawning multiple netdevs from one device by slicing up its queues.
> >
> > Why do you object to that? Slicing up h/w resources for virtual what
> > ever has been common practice for a long time.
>
> My memory of the VMDq debate is hazy, let me rope in Alex into this.
> I believe the argument was that we should offload software constructs,
> not create HW-specific APIs which depend on HW availability and
> implementation. So the path we took was offloading macvlan.
I think it somewhat depends on the type of interface we are talking
about. What we wanted to avoid was each driver spawning its own unique
VMDq netdevs, with every vendor doing it a different way. The approach
Intel went with was to offload MACVLAN onto the hardware, though I
imagine many would argue that approach is somewhat dated and limiting,
since you cannot do many offloads on a MACVLAN interface.
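For context, a minimal sketch of how that offload is driven from
userspace, assuming a hypothetical lower device eth0 and a driver that
implements the l2-fwd-offload feature (e.g. ixgbe):

    # enable the L2 forwarding offload on the lower device
    ethtool -K eth0 l2-fwd-offload on
    # create a macvlan on top; the driver can then back it with its own
    # hardware queue pair instead of forwarding in software
    ip link add link eth0 name macvlan0 type macvlan mode bridge
    ip link set macvlan0 up

The nice part of that model is that macvlan0 remains a plain software
macvlan if the offload is absent; the hardware only accelerates it.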
In the VDPA case I believe there is a set of predefined virtio
devices being emulated and presented, so it isn't as if they are
creating a totally new interface for this.
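For anyone who hasn't followed the vdpa tool thread, my understanding
of the flow proposed in [3] is roughly the following; the management
device name below is a placeholder:

    # list management devices capable of spawning vdpa devices
    vdpa mgmtdev show
    # add a vdpa device on top of an already spawned PF/VF/SF
    vdpa dev add name vdpa0 mgmtdev pci/0000:03:00.0

So the interface the guest ends up seeing is the one defined by the
virtio spec rather than something vendor-specific.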
What I would be interested in seeing is whether any other vendors
have reviewed this and signed off on the approach. What we don't want
to see is Nvidia/Mellanox do this one way, and then Broadcom or Intel
come along later with yet another way of doing it. We need an
interface and feature set that will work for everyone in terms of how
this will look going forward.