Message-ID: <10579c9bf79a42dd878c1e201bc9d254@ausx13mps321.AMER.DELL.COM>
Date: Thu, 29 Nov 2018 23:24:58 +0000
From: <Alex_Gagniuc@...lteam.com>
To: <helgaas@...nel.org>, <lukas@...ner.de>
Cc: <mr.nuke.me@...il.com>, <Austin.Bolen@...l.com>,
<keith.busch@...el.com>, <Shyam.Iyer@...l.com>,
<mika.westerberg@...ux.intel.com>, <okaya@...eaurora.org>,
<rafael.j.wysocki@...el.com>, <poza@...eaurora.org>,
<linux-pci@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] PCI: pciehp: Report degraded links via link bandwidth
notification

On 11/29/2018 5:05 PM, Bjorn Helgaas wrote:
> On Thu, Nov 29, 2018 at 08:13:12PM +0100, Lukas Wunner wrote:
>> I guess the interrupt is shared with hotplug and PME? In that case write
>> a separate pcie_port_service_driver and request the interrupt with
>> IRQF_SHARED. Define a new service type in drivers/pci/pcie/portdrv.h.
>> Amend get_port_device_capability() to check for PCI_EXP_LNKCAP_LBNC.
>
> I really don't like the port driver design. I'd rather integrate
> those services more tightly into the PCI core. But realistically
> that's wishful thinking and may never happen, so this might be the
> most expedient approach.

So, how would it get integrated? I don't like the port service driver
either. It's dicey how it creates extra devices that other drivers then
bind to. If we could have a 1:1 mapping between service drivers and PCI
capabilities, then it might make better sense.

So, do I go the new service driver route?
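
For concreteness, here's a rough and untested sketch of what I understand
the service driver route to look like. The PCIE_PORT_SERVICE_BWNOTIF name
and the file layout below are placeholders of my own, not existing code:

/*
 * drivers/pci/pcie/portdrv.h: add a new service type (and bump
 * PCIE_PORT_DEVICE_MAXSERVICES to match).
 */
#define PCIE_PORT_SERVICE_BWNOTIF_SHIFT	4
#define PCIE_PORT_SERVICE_BWNOTIF	(1 << PCIE_PORT_SERVICE_BWNOTIF_SHIFT)

/*
 * drivers/pci/pcie/portdrv_core.c, in get_port_device_capability(): only
 * advertise the service when the port supports bandwidth notification.
 */
	u32 linkcap;

	pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &linkcap);
	if (linkcap & PCI_EXP_LNKCAP_LBNC)
		services |= PCIE_PORT_SERVICE_BWNOTIF;

/* New service driver, e.g. drivers/pci/pcie/bw_notification.c: */
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/pci.h>

#include "portdrv.h"

static irqreturn_t pcie_bw_notification_irq(int irq, void *context)
{
	struct pcie_device *srv = context;
	struct pci_dev *port = srv->port;
	u16 link_status, events;

	pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status);
	events = link_status & (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_LABS);
	if (!events)
		return IRQ_NONE;	/* not ours; vector shared with hotplug/PME */

	/* Both status bits are RW1C: clear what we saw, then report. */
	pcie_capability_write_word(port, PCI_EXP_LNKSTA, events);
	pci_info(port, "link bandwidth changed\n");

	return IRQ_HANDLED;
}

static int pcie_bw_notification_probe(struct pcie_device *srv)
{
	int ret;

	ret = request_irq(srv->irq, pcie_bw_notification_irq, IRQF_SHARED,
			  "pcie_bw_notification", srv);
	if (ret)
		return ret;

	/* Enable Link Bandwidth Management/Autonomous Bandwidth interrupts */
	pcie_capability_set_word(srv->port, PCI_EXP_LNKCTL,
				 PCI_EXP_LNKCTL_LBMIE | PCI_EXP_LNKCTL_LABIE);

	return 0;
}

static void pcie_bw_notification_remove(struct pcie_device *srv)
{
	pcie_capability_clear_word(srv->port, PCI_EXP_LNKCTL,
				   PCI_EXP_LNKCTL_LBMIE | PCI_EXP_LNKCTL_LABIE);
	free_irq(srv->irq, srv);
}

static struct pcie_port_service_driver pcie_bw_notification_driver = {
	.name		= "pcie_bw_notification",
	.port_type	= PCIE_ANY_PORT,
	.service	= PCIE_PORT_SERVICE_BWNOTIF,
	.probe		= pcie_bw_notification_probe,
	.remove		= pcie_bw_notification_remove,
};

static int __init pcie_bw_notification_init(void)
{
	return pcie_port_service_register(&pcie_bw_notification_driver);
}
device_initcall(pcie_bw_notification_init);

Returning IRQ_NONE when neither LBMS nor LABS is set should keep the shared
vector well-behaved alongside hotplug and PME.
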
Alex