[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20191110091855.GE1435668@kroah.com>
Date: Sun, 10 Nov 2019 10:18:55 +0100
From: "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
To: Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc: Jason Gunthorpe <jgg@...pe.ca>, Parav Pandit <parav@...lanox.com>,
Jiri Pirko <jiri@...nulli.us>,
David M <david.m.ertman@...el.com>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Saeed Mahameed <saeedm@...lanox.com>,
"kwankhede@...dia.com" <kwankhede@...dia.com>,
"leon@...nel.org" <leon@...nel.org>,
"cohuck@...hat.com" <cohuck@...hat.com>,
Jiri Pirko <jiri@...lanox.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
Or Gerlitz <gerlitz.or@...il.com>
Subject: Re: [PATCH net-next 00/19] Mellanox, mlx5 sub function support
On Sat, Nov 09, 2019 at 09:27:47AM -0800, Jakub Kicinski wrote:
> On Fri, 8 Nov 2019 20:44:26 -0400, Jason Gunthorpe wrote:
> > On Fri, Nov 08, 2019 at 01:45:59PM -0800, Jakub Kicinski wrote:
> > > Yes, my suggestion to use mdev was entirely based on the premise that
> > > the purpose of this work is to get vfio working.. otherwise I'm unclear
> > > as to why we'd need a bus in the first place. If this is just for
> > > containers - we have macvlan offload for years now, with no need for a
> > > separate device.
> >
> > This SF thing is a full fledged VF function, it is not at all like
> > macvlan. This is perhaps less important for the netdev part of the
> > world, but the difference is very big for the RDMA side, and should
> > enable VFIO too..
>
> Well, macvlan used VMDq so it was pretty much a "legacy SR-IOV" VF.
> I'd perhaps need to learn more about RDMA to appreciate the difference.
>
> > > On the RDMA/Intel front, would you mind explaining what the main
> > > motivation for the special buses is? I'm a little confurious.
> >
> > Well, the issue is driver binding. For years we have had these
> > multi-function netdev drivers that have a single PCI device which must
> > bind into multiple subsystems, ie mlx5 does netdev and RDMA, the cxgb
> > drivers do netdev, RDMA, SCSI initiator, SCSI target, etc. [And I
> > expect when NVMe over TCP rolls out we will have drivers like cxgb4
> > binding to 6 subsytems in total!]
>
> What I'm missing is why is it so bad to have a driver register to
> multiple subsystems.
Because these PCI devices seem to do "different" things all in one PCI
resource set. Blame the hardware designers :)
> I've seen no end of hacks caused people trying to split their driver
> too deeply by functionality. Separate sub-drivers, buses and modules.
>
> The nfp driver was split up before I upstreamed it, I merged it into
> one monolithic driver/module. Code is still split up cleanly internally,
> the architecture doesn't change in any major way. Sure 5% of developers
> were upset they can't do some partial reloads they were used to, but
> they got used to the new ways, and 100% of users were happy about the
> simplicity.
I agree, you should stick with the "one device/driver" thing where ever
possible, like you did.
> For the nfp I think the _real_ reason to have a bus was that it
> was expected to have some out-of-tree modules bind to it. Something
> I would not encourage :)
That's not ok, and I agree with you.
But there seems to be some more complex PCI devices that do lots of
different things all at once. Kind of like a PCI device that wants to
be both a keyboard and a storage device at the same time (i.e. a button
on a disk drive...)
thanks,
greg k-h
Powered by blists - more mailing lists