Message-ID: <20180729180230.GA11016@wunner.de>
Date: Sun, 29 Jul 2018 20:02:30 +0200
From: Lukas Wunner <lukas@...ner.de>
To: Sinan Kaya <okaya@...nel.org>
Cc: Bjorn Helgaas <helgaas@...nel.org>,
Oza Pawandeep <poza@...eaurora.org>, linux-pci@...r.kernel.org,
open list <linux-kernel@...r.kernel.org>,
Keith Busch <keith.busch@...el.com>,
linux-arm-msm@...r.kernel.org, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH V5 3/3] PCI: Mask and unmask hotplug interrupts during
reset
On Fri, Jul 20, 2018 at 07:58:20PM -0700, Sinan Kaya wrote:
> My patch solves the problem if the AER interrupt happens before the hotplug
> interrupt. We are masking the Data Link Layer State Changed interrupt, so
> AER/DPC can perform their link operations without racing the hotplug driver.
>
> We need to figure out how to gracefully return inside the hotplug driver
> if a link down happened and there is an error pending.
>
> My first question is why the hotplug driver reacts to the link event
> at all if there was no actual device insertion/removal.
>
> Would it help to keep track of presence-changed interrupts since the
> last link event?
>
> If the counter is 0 and the device is present, the hotplug driver bails
> out silently, for example.
Counting PDC events doesn't work reliably if multiple such events
occur in very short succession, as the interrupt handler may not
run quickly enough. See this commit message, which shows unbalanced
Link Up / Link Down events:
https://patchwork.ozlabs.org/patch/867418/
And on Thunderbolt, interrupts can be signaled even though the port
and its parents are in D3hot (sic!). A Thunderbolt daisy chain can
consist of up to 6 devices, each comprising a PCI switch, so there's
a cascade of over a dozen Upstream / Downstream ports between the
Root port and the hotplug port at the end of the daisy chain.
God knows how many events have occurred by the time all the parents
are resumed to D0 and the Slot Status register of the hotplug port
is read/written. That was really the motivation for the event
handling rework.
Thanks,
Lukas