Message-ID: <ZGUAWxoEngmqFcLJ@bhelgaas>
Date: Wed, 17 May 2023 11:27:07 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: Jonathan Cameron <Jonathan.Cameron@...wei.com>
Cc: Shuai Xue <xueshuai@...ux.alibaba.com>,
Robin Murphy <robin.murphy@....com>, yangyicong@...wei.com,
will@...nel.org, baolin.wang@...ux.alibaba.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-pci@...r.kernel.org, rdunlap@...radead.org,
mark.rutland@....com, zhuo.song@...ux.alibaba.com,
linux-cxl@...r.kernel.org
Subject: Re: [PATCH v3 2/3] drivers/perf: add DesignWare PCIe PMU driver

On Wed, May 17, 2023 at 10:54:21AM +0100, Jonathan Cameron wrote:
> On Tue, 16 May 2023 14:17:52 -0500
> Bjorn Helgaas <helgaas@...nel.org> wrote:
>
> > On Tue, May 16, 2023 at 04:03:04PM +0100, Jonathan Cameron wrote:
> ...
> > > The approach used here is to separately walk the PCI topology and
> > > register the devices. It can 'maybe' get away with that because it
> > > uses no interrupts, and I assume resets have no nasty impact on it
> > > because the device is fairly simple. In general that's not going
> > > to work. CXL does a similar trick (which I don't much like, but
> > > it's too late now), and we've also run into the problem of how to
> > > get interrupts when you're not the main driver.
> >
> > Yes, this is a real problem. I think the "walk all PCI devices
> > looking for one we like" approach (sketched below) is terrible
> > because it breaks a lot of driver model assumptions (no device ID
> > to autoload the module via udev, hotplug doesn't work, etc.), but
> > we don't have a good alternative right now.
> >
> > I think portdrv is slightly better because at least it claims the
> > device in the usual way and gives a way for service drivers to
> > register with it. But I don't really like that either because it
> > created a new weird /sys/bus/pci_express hierarchy full of these
> > sub-devices that aren't really devices, and it doesn't solve the
> > module load and hotplug issues.
> >
> > I would like to have portdrv be completely built into the PCI core and
> > not claim Root Ports or Switch Ports. Then those devices would be
> > available via the usual driver model for driver loading and binding
> > and for hotplug.
>
> Let me see if I understand this correctly, as I can think of a few
> options that are perhaps in line with what you are thinking.
>
> 1) All the portdrv stuff converted to normal PCI core helper functions
> that a driver bound to the struct pci_dev can use.
> 2) The driver core itself provides a bunch of extra devices alongside
> the struct pci_dev one, to which additional drivers can bind - so
> kind of like portdrv handling, but squashed into the PCI device
> topology?
> 3) Have portdrv operate under the hood, so all the services etc. that
> it provides don't require a driver to be bound at all. Then
> allow the usual VID/DID-based driver binding.
>
> If 1 - we are going to run into class device restrictions, and that
> will just move where we have to handle the potential vendor-specific
> parts. We probably don't want that to be a hydra with all the
> functionality and lookups etc. driven from there, so do we end up
> with sub-devices of that new PCI port driver, with a discovery method
> based on either VSEC + VID or DVSEC, and devices created under the
> main pci_dev? That would have to include nastiness around interrupt
> discovery for those sub-devices. So it ends up roughly like portdrv.
>
> I don't think 2 solves anything.
>
> For 3 - interrupts and ownership of facilities are going to be
> tricky, as initially those need to be owned by the PCI core (no
> device driver bound) and then, I guess, handed off to the driver once
> it shows up? Maybe that driver should call a pci_claim_port() that
> gives it control of everything and a pci_release_port() that hands it
> all back to the core. That seems racy.
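
To make the problem concrete, the "walk everything and match" pattern
we're talking about looks roughly like this (a hand-waving sketch with
made-up vendor/VSEC IDs, not the actual driver code):

  #include <linux/module.h>
  #include <linux/pci.h>

  /* Made-up IDs, for illustration only */
  #define FAKE_VENDOR_ID	0x1ded
  #define FAKE_VSEC_ID	0x02

  static int __init walk_and_match_init(void)
  {
  	struct pci_dev *pdev = NULL;

  	/*
  	 * No probe()/remove(): there is no device ID for udev to
  	 * autoload the module, and ports hot-added later are never
  	 * seen.
  	 */
  	for_each_pci_dev(pdev) {
  		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT)
  			continue;
  		if (!pci_find_vsec_capability(pdev, FAKE_VENDOR_ID,
  					      FAKE_VSEC_ID))
  			continue;
  		pr_info("found a port we like: %s\n", pci_name(pdev));
  		/* ... register a PMU etc. for this port ... */
  	}
  	return 0;
  }
  module_init(walk_and_match_init);

  MODULE_LICENSE("GPL");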

Yes, 3 is the option I want to explore. That's what we already do for
things like ASPM. Agreed, interrupts are a potential issue. I think
the architected parts of config space should be implicitly owned by
the PCI core, with interfaces à la pci_disable_link_state() if drivers
need them.
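
Concretely, once the core owns the architected port services, a vendor
driver could bind to a Root Port through the usual ID table, so module
autoload and hotplug just work. Another hand-wavy sketch (made-up IDs,
not a real proposal for the PMU driver, and assuming portdrv no longer
claims the port):

  #include <linux/module.h>
  #include <linux/pci.h>

  /* Made-up vendor/device IDs, for illustration only */
  static const struct pci_device_id fake_port_ids[] = {
  	{ PCI_DEVICE(0x1ded, 0x8000) },
  	{ }
  };
  MODULE_DEVICE_TABLE(pci, fake_port_ids);

  static int fake_port_probe(struct pci_dev *pdev,
  			   const struct pci_device_id *id)
  {
  	/*
  	 * The core keeps running AER/PME/hotplug/ASPM under the
  	 * hood; this driver only touches its vendor-specific
  	 * registers and, where needed, asks the core to adjust
  	 * architected state:
  	 */
  	pci_disable_link_state(pdev, PCIE_LINK_STATE_L1);
  	return 0;
  }

  static struct pci_driver fake_port_driver = {
  	.name		= "fake-port-driver",
  	.id_table	= fake_port_ids,
  	.probe		= fake_port_probe,
  };
  module_pci_driver(fake_port_driver);

  MODULE_LICENSE("GPL");

The open question is still how interrupts and ownership of the
architected registers get handed between the core and such a driver.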

Bjorn