Message-ID: <0a020128-80e8-76a7-6b94-e165d3c6f778@linux.intel.com>
Date:   Wed, 17 Mar 2021 13:02:07 -0700
From:   "Kuppuswamy, Sathyanarayanan" 
        <sathyanarayanan.kuppuswamy@...ux.intel.com>
To:     Lukas Wunner <lukas@...ner.de>,
        Sathyanarayanan Kuppuswamy Natarajan 
        <sathyanarayanan.nkuppuswamy@...il.com>
Cc:     Dan Williams <dan.j.williams@...el.com>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        Linux PCI <linux-pci@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "Raj, Ashok" <ashok.raj@...el.com>,
        Keith Busch <kbusch@...nel.org>, knsathya@...nel.org,
        Sinan Kaya <okaya@...nel.org>
Subject: Re: [PATCH v2 1/1] PCI: pciehp: Skip DLLSC handling if DPC is
 triggered



On 3/17/21 12:01 PM, Lukas Wunner wrote:
> On Wed, Mar 17, 2021 at 10:54:09AM -0700, Sathyanarayanan Kuppuswamy Natarajan wrote:
>> Flush of hotplug event after successful recovery, and a simulated
>> hotplug link down event after link recovery fails should solve the
>> problems raised by Lukas. I assume Lukas' proposal adds this support.
>> I will check his patch shortly.
> 
> Thank you!
> 
> I'd like to get a better understanding of the issues around hotplug/DPC,
> specifically I'm wondering:
> 
> If DPC recovery was successful, what is the desired behavior by pciehp,
> should it ignore the Link Down/Up or bring the slot down and back up
> after DPC recovery?
> 
> If the events are ignored, the driver of the device in the hotplug slot
> is not unbound and rebound.  So the driver must be able to cope with
> loss of TLPs during DPC recovery and it must be able to cope with
> whatever state the endpoint device is in after DPC recovery.
> Is this really safe?  How does the nvme driver deal with it?
During DPC recovery, in the pcie_do_recovery() function, we use
report_frozen_detected() to notify all devices attached to the port
of the fatal error. After this notification, we expect all affected
devices to halt their I/O transactions.

Regarding state restoration: after successful recovery, we use
report_slot_reset() to notify drivers of the slot/link reset. Device
drivers are then expected to restore the device to a working state.
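
For reference, this is the contract a driver opts into via
struct pci_error_handlers. A minimal sketch (a hypothetical "foo"
driver, not the actual nvme implementation):

#include <linux/pci.h>

static pci_ers_result_t foo_error_detected(struct pci_dev *pdev,
					   pci_channel_state_t state)
{
	/* report_frozen_detected() lands here with
	 * state == pci_channel_io_frozen: stop I/O, ask for a reset. */
	if (state == pci_channel_io_frozen)
		return PCI_ERS_RESULT_NEED_RESET;
	return PCI_ERS_RESULT_CAN_RECOVER;
}

static pci_ers_result_t foo_slot_reset(struct pci_dev *pdev)
{
	/* report_slot_reset() lands here after the link reset:
	 * bring the device back to a working state. */
	pci_restore_state(pdev);
	return PCI_ERS_RESULT_RECOVERED;
}

static const struct pci_error_handlers foo_err_handlers = {
	.error_detected	= foo_error_detected,
	.slot_reset	= foo_slot_reset,
};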
> 
> Also, if DPC is handled by firmware, your patch does not ignore the
> Link Down/Up events, 
Only for the pure firmware model. In the EDR case, we still ignore the
Link Down/Up events.
> so pciehp brings down the slot when DPC is
> triggered, then brings it up after successful recovery.  In a code
> comment, you write that this behavior is okay because there's "no
> race between hotplug and DPC recovery". 
My point is that there is no race between the OS handlers (pciehp_ist()
vs. pcie_do_recovery()).
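
To illustrate: the serialization I have in mind is only that pciehp's
IRQ thread should not treat a DLLSC event as a real hot-remove while
DPC owns the port. A rough sketch, not the literal patch
(dpc_is_recovering() is a hypothetical helper):

#include <linux/pci.h>

static bool pciehp_ignore_dllsc(struct pci_dev *bridge, u32 events)
{
	/* While DPC owns the port, DLLSC is a side effect of
	 * containment/reset, not a real hot-remove. */
	return (events & PCI_EXP_SLTSTA_DLLSC) &&
	       dpc_is_recovering(bridge);
}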
> However, Sinan wrote in
> 2018 that one of the issues with hotplug versus DPC is that pciehp
> may turn off slot power and thereby foil DPC recovery.  (Power off =
> cold reset, whereas DPC recovery = warm reset.)  This can occur
> as well if DPC is handled by firmware.
I am not sure how pure firmware DPC recovery works. Is there a platform
that uses this combination? For the firmware DPC model, the spec does
not clarify the following points:

1. Who notifies the affected device drivers to halt I/O transactions?
2. Who is responsible for restoring the state of the device after the link reset?

IMO, pure firmware DPC does not support seamless recovery. I think that
after firmware clears the DPC trigger status, it might expect the hotplug
handler to be responsible for device recovery.

I don't want to add a fix to a code path that I don't understand. That is
why this logic is not extended to the pure firmware DPC case.

> 
> So I guess pciehp should make an attempt to await DPC recovery even
> if it's handled by firmware?  Or am I missing something?  We may be
> able to achieve that by polling the DPC Trigger Status bit and
> DLLLA bit, but it won't work as perfectly as with native DPC support.
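Something like the following is what I would imagine for such a poll
(a rough sketch only; the helper name and timeout are made up for
illustration):

#include <linux/delay.h>
#include <linux/jiffies.h>
#include <linux/pci.h>

/* Poll until firmware clears DPC Trigger Status and the Data Link
 * Layer Link Active bit comes back. Timeout value is arbitrary. */
static bool wait_for_fw_dpc_recovery(struct pci_dev *pdev, u16 dpc_cap)
{
	unsigned long timeout = jiffies + msecs_to_jiffies(4000);
	u16 status, lnksta;

	do {
		pci_read_config_word(pdev, dpc_cap + PCI_EXP_DPC_STATUS,
				     &status);
		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
		if (!(status & PCI_EXP_DPC_STATUS_TRIGGER) &&
		    (lnksta & PCI_EXP_LNKSTA_DLLLA))
			return true;	/* recovery done, link is up */
		msleep(10);
	} while (time_before(jiffies, timeout));

	return false;
}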
> 
> Finally, you write in your commit message that there are "a lot of
> stability issues" if pciehp and DPC are allowed to recover freely
> without proper serialization.  What are these issues exactly?
In most cases, I see failures of the DPC recovery handler (you can see
an example dmesg in the commit log). Other than this, we also noticed
extended delays or failures in link retraining (while waiting for LINK UP
in pcie_wait_for_link(pdev, true)).
In some cases, we noticed slot power-on failures, and the card would not
be detected when running lspci.

The above-mentioned cases are just observations we have made. We have not
dug deeper into why the race between pciehp and DPC leads to such issues.
> (Beyond the slot power issue mentioned above, and that the endpoint
> device's driver should presumably not be unbound if DPC recovery
> was successful.)
> 
> Thanks!
> 
> Lukas
> 

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer
