Message-ID: <20201117184954.GV917484@nvidia.com>
Date: Tue, 17 Nov 2020 14:49:54 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: Parav Pandit <parav@...dia.com>, Saeed Mahameed <saeed@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
Jiri Pirko <jiri@...dia.com>,
"dledford@...hat.com" <dledford@...hat.com>,
Leon Romanovsky <leonro@...dia.com>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH net-next 00/13] Add mlx5 subfunction support
On Tue, Nov 17, 2020 at 09:11:20AM -0800, Jakub Kicinski wrote:
> > Just to refresh all our memory, we discussed and settled on the flow
> > in [2]; RFC [1] followed this discussion.
> >
> > The vdpa tool of [3] can add one or more vdpa devices on top of an
> > already spawned PF, VF, or SF device.
>
> Nack for the networking part of that. It'd basically be VMDq.

What are you NAK'ing?

It is consistent with the multi-subsystem device sharing model we've
had for ages now.
The physical ethernet port is shared between multiple accelerator
subsystems. netdev gets its slice of traffic, as do RDMA, iSCSI,
VDPA, and so on.
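For readers without the thread context, the flow referenced as [3] can be sketched with the iproute2 `vdpa` tool; the management-device name and PCI address below are illustrative, not taken from this thread:

```shell
# List vdpa management devices exposed by an already spawned
# PF, VF, or SF (the PCI address shown is illustrative)
vdpa mgmtdev show

# Add a vdpa device on top of one of those management devices
vdpa dev add name vdpa0 mgmtdev pci/0000:03:00.2

# Verify the newly created device
vdpa dev show vdpa0
```

The point being made is that the vdpa device is just another consumer of the shared port, alongside netdev and RDMA.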

Jason