Date:   Mon, 27 Apr 2020 21:03:44 -0300
From:   Jason Gunthorpe <jgg@...pe.ca>
To:     "Saleem, Shiraz" <shiraz.saleem@...el.com>
Cc:     Leon Romanovsky <leon@...nel.org>,
        "Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "Ismail, Mustafa" <mustafa.ismail@...el.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "nhorman@...hat.com" <nhorman@...hat.com>,
        "sassmann@...hat.com" <sassmann@...hat.com>,
        "Patil, Kiran" <kiran.patil@...el.com>,
        "Ertman, David M" <david.m.ertman@...el.com>,
        "Sharp, Robert O" <robert.o.sharp@...el.com>,
        "Lacombe, John S" <john.s.lacombe@...el.com>
Subject: Re: [RFC PATCH v5 01/16] RDMA/irdma: Add driver framework definitions

On Mon, Apr 27, 2020 at 11:57:51PM +0000, Saleem, Shiraz wrote:
> > Subject: Re: [RFC PATCH v5 01/16] RDMA/irdma: Add driver framework
> > definitions
> > 
> > On Thu, Apr 23, 2020 at 11:54:18PM +0000, Saleem, Shiraz wrote:
> > > > Subject: Re: [RFC PATCH v5 01/16] RDMA/irdma: Add driver framework
> > > > definitions
> > > >
> > > > On Thu, Apr 23, 2020 at 05:15:22PM +0000, Saleem, Shiraz wrote:
> > > > > > Subject: Re: [RFC PATCH v5 01/16] RDMA/irdma: Add driver
> > > > > > framework definitions
> > > > > >
> > > > > > On Thu, Apr 23, 2020 at 12:32:48AM +0000, Saleem, Shiraz wrote:
> > > > > >
> > > > > > > We have a split initialization design for gen2 and future products.
> > > > > > > Phase 1 is control-path resource initialization in irdma_probe_dev,
> > > > > > > and phase 2 is the rest of the resources, with the ib registration
> > > > > > > at the end of irdma_open. irdma_close must de-register the ib
> > > > > > > device, which will take care of the ibdev free too. So it makes
> > > > > > > sense to keep allocation of the ib device in irdma_open.
> > > > > >
> > > > > > The best driver pattern is to allocate the ib_device at the very
> > > > > > start of probe() and use this to anchor all the device resources
> > > > > > and memories.
> > > > > >
> > > > > > The whole close/open thing is really weird, you should get rid of it.
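
For illustration, a minimal sketch of the probe() pattern described
above. The bus type, struct layout, and helper names here are
assumptions for the sketch rather than actual patch code, and core
signatures such as ib_register_device() vary by kernel version:

#include <rdma/ib_verbs.h>

struct irdma_device {                           /* illustrative layout */
        struct ib_device ibdev;                 /* must be the first member */
        /* ... hw state ... */
};

static int irdma_probe(struct virtbus_device *vdev)  /* assumed bus type */
{
        struct irdma_device *iwdev;
        int err;

        /* The ib_device is the first allocation; everything hangs off it. */
        iwdev = ib_alloc_device(irdma_device, ibdev);
        if (!iwdev)
                return -ENOMEM;

        err = irdma_setup_hw_resources(iwdev);        /* assumed helper */
        if (err)
                goto err_dealloc;

        /* Registration is the very last step, once the device is ready. */
        err = ib_register_device(&iwdev->ibdev, "irdma%d");
        if (err)
                goto err_teardown;

        dev_set_drvdata(&vdev->dev, iwdev);
        return 0;

err_teardown:
        irdma_teardown_hw_resources(iwdev);           /* assumed helper */
err_dealloc:
        ib_dealloc_device(&iwdev->ibdev);
        return err;
}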
> > > > > Maybe I'm missing something. But why is it weird?
> > > >
> > > > Because the RDMA driver should exist as its own entity. It does not
> > > > shut down unless the remove() method on its struct device_driver is
> > > > called. So what exactly are open/close supposed to be doing? I think
> > > > it is a leftover of trying to re-implement the driver model.
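
To make that concrete, the matching teardown is just the bus remove()
callback; a hedged sketch with the same assumed bus type as above:

static void irdma_remove(struct virtbus_device *vdev)
{
        struct irdma_device *iwdev = dev_get_drvdata(&vdev->dev);

        /*
         * De-registering the ib_device takes care of freeing it too
         * (as noted earlier in the thread); any remaining hw teardown
         * belongs in the device's dealloc_driver callback.  There is
         * no separate open()/close() lifecycle.
         */
        ib_unregister_device(&iwdev->ibdev);
}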
> > > >
> > > > > Underlying configuration changes and reset management for the
> > > > > physical function need a lightweight mechanism, which is realized
> > > > > with the close/open from the netdev PCI drv --> rdma drv.
> > > >
> > > > > Without a teardown and re-add of the virtual device off the bus.
> > > >
> > > > Yes, that is exactly right. If you have done something so disruptive
> > > > that the ib_device needs to be destroyed, then you should
> > > > unplug/replug the entire virtual bus device; that is the correct and
> > > > sane thing to do.
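
A hedged sketch of that unplug/replug flow from the PF side; every
name below is assumed for illustration, and the RFC's actual virtual
bus API may differ:

static void ice_handle_disruptive_reset(struct ice_pf *pf)
{
        /*
         * Unplugging the virtual device unbinds the rdma driver,
         * which destroys its ib_device through the normal remove()
         * path.
         */
        virtbus_unregister_device(pf->rdma_vdev);     /* assumed API */

        ice_rebuild_hw(pf);                           /* assumed helper */

        /*
         * Replugging rebinds the rdma driver, which allocates and
         * registers a fresh ib_device in probe().
         */
        pf->rdma_vdev = ice_create_rdma_vdev(pf);     /* assumed helper */
}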
> > >
> > > Well, we have resources created in rdma driver probe which are used by
> > > any VFs regardless of the registration of the ib device on the PF.
> > 
> > Ugh, drivers where the VF driver requires the PF driver have a lot of
> > problems.
> > 
> > But, even so, with your new split design, resources held for a VF belong in the
> > core pci driver, not the rdma virtual bus device.
> > 
> 
> This is not a new design per se; it has been this way from the get-go,
> since our first submission.
> 
> What you're suggesting makes sense if we had a core PCI driver and
> function-specific drivers (i.e. netdev and rdma drivers in our case).
> The resources held for VFs, device IRQs, and other common resource
> initialization would be handled by this core PCI driver. Function-specific
> drivers would bind to their virtual devices and access their slice of
> resources. It sounds architecturally cleaner, but it is a major
> undertaking that needs a re-write of both the netdev and rdma drivers.
> Moreover, we are not sure we are solving any problem here, and the
> current design has proven to work for us.
> 
> As it stands now, the netdev driver is the pci driver, and moving
> rdma-specific admin queues / resources out of the rdma PF driver to be
> managed by the netdev driver does not make a lot of sense in the present
> design. We want rdma VF-specific resources to be managed by the rdma PF
> driver, and netdev-specific VF resources by the netdev PF driver.
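
Purely for illustration, the "slice of resources" hand-off in such a
split could look like the sketch below; none of these names are from
the patches. The core PCI driver would fill one of these per function
and attach it to the virtual device it creates:

struct core_func_slice {                /* illustrative only */
        void __iomem *db_base;          /* doorbell region for this function */
        u16 first_msix;                 /* first MSI-X vector of the slice */
        u16 num_msix;                   /* vectors carved out for this function */
};

Each function driver would then read its slice in probe() instead of
owning the admin queue itself.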

While I won't say you need to undertake such work, it does seem very
hacky, considering the new virtual bus etc., to leave it like this.

Still, you need to be able to cope with the user unbinding your
drivers in any order via sysfs. What happens to the VFs when the PF is
unbound and releases whatever resources? This is where the Broadcom
driver ran into trouble.
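
One way to reason about the ordering: unbinding is user-triggered
(e.g. by writing the device name to /sys/bus/pci/drivers/<drv>/unbind),
so the PF remove() has to unplug the virtual rdma device before it
releases anything the rdma side depends on. A hedged sketch with
assumed helpers:

static void ice_remove(struct pci_dev *pdev)
{
        struct ice_pf *pf = pci_get_drvdata(pdev);

        /*
         * Unbind the rdma driver first so its remove() runs while
         * the PF resources it depends on (IRQs, admin queue) still
         * exist.
         */
        virtbus_unregister_device(pf->rdma_vdev);     /* assumed API */

        /* Only now is it safe to release VF and PF resources. */
        ice_free_vfs(pf);                             /* assumed helper */
        ice_deinit_hw(pf);                            /* assumed helper */
}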

Jason
