Message-ID: <11282df5-9126-4b5b-82ae-5f1ef3b8aaf5@linux.alibaba.com>
Date: Mon, 25 Nov 2024 13:43:42 +0800
From: Shuai Xue <xueshuai@...ux.alibaba.com>
To: "Bowman, Terry" <terry.bowman@....com>, linux-pci@...r.kernel.org,
 linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
 bhelgaas@...gle.com, kbusch@...nel.org, Lukas Wunner <lukas@...ner.de>
Cc: mahesh@...ux.ibm.com, oohall@...il.com,
 sathyanarayanan.kuppuswamy@...ux.intel.com
Subject: Re: [PATCH v2 2/2] PCI/AER: Report fatal errors of RCiEP and EP if
 link recovered



On 2024/11/17 21:36, Shuai Xue wrote:
> 
> 
>> On 2024/11/16 20:44, Shuai Xue wrote:
>>
>>
>> On 2024/11/16 04:20, Bowman, Terry wrote:
>>> Hi Shuai,
>>>
>>>
>>> On 11/12/2024 7:54 AM, Shuai Xue wrote:
>>>> The AER driver has historically avoided reading the configuration space of
>>>> an endpoint or RCiEP that reported a fatal error, considering the link to
>>>> that device unreliable. Consequently, when a fatal error occurs, the AER
>>>> and DPC drivers do not report specific error types, resulting in logs like:
>>>>
>>>>    pcieport 0000:30:03.0: EDR: EDR event received
>>>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>>>    nvme nvme0: frozen state error detected, reset controller
>>>>    nvme 0000:34:00.0: ready 0ms after DPC
>>>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>>>
>>>> AER status registers are sticky and Write-1-to-clear. If the link has
>>>> recovered after a hot reset, we can still safely access the AER status of
>>>> the error device. In that case, report fatal errors, which helps to figure
>>>> out the error root cause.
>>>>
>>>> After this patch, the logs look like:
>>>>
>>>>    pcieport 0000:30:03.0: EDR: EDR event received
>>>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>>>    nvme nvme0: frozen state error detected, reset controller
>>>>    pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>>>>    nvme 0000:34:00.0: ready 0ms after DPC
>>>>    nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>>>>    nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>>>>    nvme 0000:34:00.0:    [ 4] DLP                    (First)
>>>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>>>
>>>> Signed-off-by: Shuai Xue <xueshuai@...ux.alibaba.com>
>>>> ---
>>>>   drivers/pci/pci.h      |  3 ++-
>>>>   drivers/pci/pcie/aer.c | 11 +++++++----
>>>>   drivers/pci/pcie/dpc.c |  2 +-
>>>>   drivers/pci/pcie/err.c |  9 +++++++++
>>>>   4 files changed, 19 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
>>>> index 0866f79aec54..6f827c313639 100644
>>>> --- a/drivers/pci/pci.h
>>>> +++ b/drivers/pci/pci.h
>>>> @@ -504,7 +504,8 @@ struct aer_err_info {
>>>>       struct pcie_tlp_log tlp;    /* TLP Header */
>>>>   };
>>>> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
>>>> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
>>>> +                  bool link_healthy);
>>>>   void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
>>>>   #endif    /* CONFIG_PCIEAER */
>>>> diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
>>>> index 13b8586924ea..97ec1c17b6f4 100644
>>>> --- a/drivers/pci/pcie/aer.c
>>>> +++ b/drivers/pci/pcie/aer.c
>>>> @@ -1200,12 +1200,14 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
>>>>    * aer_get_device_error_info - read error status from dev and store it to info
>>>>    * @dev: pointer to the device expected to have a error record
>>>>    * @info: pointer to structure to store the error record
>>>> + * @link_healthy: link is healthy or not
>>>>    *
>>>>    * Return 1 on success, 0 on error.
>>>>    *
>>>>    * Note that @info is reused among all error devices. Clear fields properly.
>>>>    */
>>>> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>>>> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
>>>> +                  bool link_healthy)
>>>>   {
>>>>       int type = pci_pcie_type(dev);
>>>>       int aer = dev->aer_cap;
>>>> @@ -1229,7 +1231,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>>>>       } else if (type == PCI_EXP_TYPE_ROOT_PORT ||
>>>>              type == PCI_EXP_TYPE_RC_EC ||
>>>>              type == PCI_EXP_TYPE_DOWNSTREAM ||
>>>> -           info->severity == AER_NONFATAL) {
>>>> +           info->severity == AER_NONFATAL ||
>>>> +           (info->severity == AER_FATAL && link_healthy)) {
>>>>           /* Link is still healthy for IO reads */
>>>>           pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
>>>> @@ -1258,11 +1261,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
>>>>       /* Report all before handle them, not to lost records by reset etc. */
>>>>       for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
>>>> -        if (aer_get_device_error_info(e_info->dev[i], e_info))
>>>> +        if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>>>>               aer_print_error(e_info->dev[i], e_info);
>>>>       }
>>>
>>> Would it be reasonable to detect whether the link is intact and set the aer_get_device_error_info()
>>> function's 'link_healthy' parameter accordingly? I was thinking the upstream port's Link Status
>>> register could be used to indicate link viability.
>>>
>>> Regards,
>>> Terry
>>
>> Good idea. I think pciehp_check_link_active() is a good reference for checking
>> link_healthy in aer_get_device_error_info():
>>
>>    int pciehp_check_link_active(struct controller *ctrl)
>>    {
>>        struct pci_dev *pdev = ctrl_dev(ctrl);
>>        u16 lnk_status;
>>        int ret;
>>
>>        ret = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
>>        if (ret == PCIBIOS_DEVICE_NOT_FOUND || PCI_POSSIBLE_ERROR(lnk_status))
>>            return -ENODEV;
>>
>>        ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
>>        ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status);
>>
>>        return ret;
>>    }
>>
>> Thank you for the valuable comments.
>>
>> Best Regards
>> Shuai
> 
> Hi, Bowman,
> 
> After diving into the code details, I found that both dpc_reset_link() and
> aer_root_reset() use pci_bridge_wait_for_secondary_bus() to wait for the
> secondary bus to become accessible. IMHO, pci_bridge_wait_for_secondary_bus()
> is more robust than a function like pciehp_check_link_active(). So I think
> reset_subordinates() is a good boundary for delineating whether a link is
> accessible.
> 
> Besides, for the DPC driver, the link of the upstream port, e.g. the Root
> Port, is inactive when DPC is triggered and only becomes active again after
> dpc_reset_link() succeeds. But for the AER driver, the link is active both
> before and after aer_root_reset(). As a result, the AER status will be
> reported twice.
> 
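
To make the above concrete, here is a minimal sketch (for discussion only, not
part of the posted patch) of how the recovery path could derive link_healthy
and hand it to aer_get_device_error_info(). The helper name, the DLLLA-based
check, and the call site below are assumptions; a real version would also need
a fallback for ports without Data Link Layer Link Active Reporting:

    /*
     * Sketch only: check the Data Link Layer Link Active bit on the
     * upstream bridge, in the spirit of pciehp_check_link_active().
     * DLLLA is only meaningful if the port is Data Link Layer Link
     * Active Reporting Capable (PCI_EXP_LNKCAP_DLLLARC).
     */
    static bool pcie_link_is_active(struct pci_dev *bridge)
    {
        u32 lnkcap;
        u16 lnksta;

        if (pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &lnkcap))
            return false;
        if (!(lnkcap & PCI_EXP_LNKCAP_DLLLARC))
            return false;

        if (pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &lnksta))
            return false;
        if (PCI_POSSIBLE_ERROR(lnksta))
            return false;

        return lnksta & PCI_EXP_LNKSTA_DLLLA;
    }

    /* Hypothetical call site, after reset_subordinates() has succeeded */
    static void aer_report_fatal_after_reset(struct pci_dev *dev,
                                             struct aer_err_info *info)
    {
        struct pci_dev *bridge = pci_upstream_bridge(dev);
        bool link_healthy = bridge && pcie_link_is_active(bridge);

        if (aer_get_device_error_info(dev, info, link_healthy))
            aer_print_error(dev, info);
    }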


Hi Bowman and Bjorn,

Do you have any feedback or other concerns?

Thank you.

Best Regards,
Shuai
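
On the "sticky and Write-1-to-clear" point in the patch description above, a
minimal illustration (not part of the patch; the helper name is made up) of how
the latched uncorrectable status can still be read after a hot reset and then
cleared by writing the read value back:

    static void aer_dump_and_clear_uncor_status(struct pci_dev *dev)
    {
        int aer = dev->aer_cap;
        u32 status, mask;

        if (!aer)
            return;

        /* Sticky RW1C bits: still set after a hot reset of the link */
        pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status);
        pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, &mask);
        pci_dbg(dev, "uncor status/mask=%08x/%08x\n", status, mask);

        /* Writing 1s back clears exactly the bits that were latched */
        pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, status);
    }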

