Message-ID: <88127007-4351-4d8d-ab7c-3f5ae9d36139@linux.intel.com>
Date: Thu, 23 Jan 2025 23:03:05 -0800
From: Sathyanarayanan Kuppuswamy <sathyanarayanan.kuppuswamy@...ux.intel.com>
To: Shuai Xue <xueshuai@...ux.alibaba.com>, linux-pci@...r.kernel.org,
 linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
 bhelgaas@...gle.com, kbusch@...nel.org
Cc: mahesh@...ux.ibm.com, oohall@...il.com
Subject: Re: [PATCH v2 2/2] PCI/AER: Report fatal errors of RCiEP and EP if
 link recovered


On 1/23/25 5:45 PM, Shuai Xue wrote:
>
>
> On 2025/1/24 04:10, Sathyanarayanan Kuppuswamy wrote:
>> Hi,
>>
>> On 11/12/24 5:54 AM, Shuai Xue wrote:
>>> The AER driver has historically avoided reading the configuration space
>>> of an endpoint or RCiEP that reported a fatal error, considering the
>>> link to that device unreliable. Consequently, when a fatal error occurs,
>>> the AER and DPC drivers do not report specific error types, resulting in
>>> logs like:
>>>
>>>    pcieport 0000:30:03.0: EDR: EDR event received
>>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>>    nvme nvme0: frozen state error detected, reset controller
>>>    nvme 0000:34:00.0: ready 0ms after DPC
>>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>>
>>> AER status registers are sticky and Write-1-to-clear. If the link
>>> recovered after hot reset, we can still safely access the AER status
>>> of the error device. In such a case, report the fatal errors, which
>>> helps to figure out the error root cause.
>>>
>>> After this patch, the logs look like:
>>>
>>>    pcieport 0000:30:03.0: EDR: EDR event received
>>>    pcieport 0000:30:03.0: DPC: containment event, status:0x0005 source:0x3400
>>>    pcieport 0000:30:03.0: DPC: ERR_FATAL detected
>>>    pcieport 0000:30:03.0: AER: broadcast error_detected message
>>>    nvme nvme0: frozen state error detected, reset controller
>>>    pcieport 0000:30:03.0: waiting 100 ms for downstream link, after activation
>>>    nvme 0000:34:00.0: ready 0ms after DPC
>>>    nvme 0000:34:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Data Link Layer, (Receiver ID)
>>>    nvme 0000:34:00.0:   device [144d:a804] error status/mask=00000010/00504000
>>>    nvme 0000:34:00.0:    [ 4] DLP                    (First)
>>>    pcieport 0000:30:03.0: AER: broadcast slot_reset message
>>>
>>> Signed-off-by: Shuai Xue <xueshuai@...ux.alibaba.com>
>>> ---
>>>   drivers/pci/pci.h      |  3 ++-
>>>   drivers/pci/pcie/aer.c | 11 +++++++----
>>>   drivers/pci/pcie/dpc.c |  2 +-
>>>   drivers/pci/pcie/err.c |  9 +++++++++
>>>   4 files changed, 19 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
>>> index 0866f79aec54..6f827c313639 100644
>>> --- a/drivers/pci/pci.h
>>> +++ b/drivers/pci/pci.h
>>> @@ -504,7 +504,8 @@ struct aer_err_info {
>>>       struct pcie_tlp_log tlp;    /* TLP Header */
>>>   };
>>> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
>>> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
>>> +                  bool link_healthy);
>>>   void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
>>>   #endif    /* CONFIG_PCIEAER */
>>> diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
>>> index 13b8586924ea..97ec1c17b6f4 100644
>>> --- a/drivers/pci/pcie/aer.c
>>> +++ b/drivers/pci/pcie/aer.c
>>> @@ -1200,12 +1200,14 @@ EXPORT_SYMBOL_GPL(aer_recover_queue);
>>>    * aer_get_device_error_info - read error status from dev and store it to info
>>>    * @dev: pointer to the device expected to have a error record
>>>    * @info: pointer to structure to store the error record
>>> + * @link_healthy: link is healthy or not
>>>    *
>>>    * Return 1 on success, 0 on error.
>>>    *
>>>    * Note that @info is reused among all error devices. Clear fields properly.
>>>    */
>>> -int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>>> +int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info,
>>> +                  bool link_healthy)
>>>   {
>>>       int type = pci_pcie_type(dev);
>>>       int aer = dev->aer_cap;
>>> @@ -1229,7 +1231,8 @@ int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
>>>       } else if (type == PCI_EXP_TYPE_ROOT_PORT ||
>>>              type == PCI_EXP_TYPE_RC_EC ||
>>>              type == PCI_EXP_TYPE_DOWNSTREAM ||
>>> -           info->severity == AER_NONFATAL) {
>>> +           info->severity == AER_NONFATAL ||
>>> +           (info->severity == AER_FATAL && link_healthy)) {
>>>           /* Link is still healthy for IO reads */
>>>           pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
>>> @@ -1258,11 +1261,11 @@ static inline void aer_process_err_devices(struct aer_err_info *e_info)
>>>       /* Report all before handle them, not to lost records by reset etc. */
>>>       for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
>>> -        if (aer_get_device_error_info(e_info->dev[i], e_info))
>>> +        if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>>>               aer_print_error(e_info->dev[i], e_info);
>>>       }
>>>       for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
>>> -        if (aer_get_device_error_info(e_info->dev[i], e_info))
>>> +        if (aer_get_device_error_info(e_info->dev[i], e_info, false))
>>>               handle_error_source(e_info->dev[i], e_info);
>>>       }
>>>   }
>>> diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
>>> index 62a68cde4364..b3f157a00405 100644
>>> --- a/drivers/pci/pcie/dpc.c
>>> +++ b/drivers/pci/pcie/dpc.c
>>> @@ -304,7 +304,7 @@ struct pci_dev *dpc_process_error(struct pci_dev *pdev)
>>>           dpc_process_rp_pio_error(pdev);
>>>       else if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR &&
>>>            dpc_get_aer_uncorrect_severity(pdev, &info) &&
>>> -         aer_get_device_error_info(pdev, &info)) {
>>> +         aer_get_device_error_info(pdev, &info, false)) {
>>>           aer_print_error(pdev, &info);
>>>           pci_aer_clear_nonfatal_status(pdev);
>>>           pci_aer_clear_fatal_status(pdev);
>>> diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
>>> index 31090770fffc..462577b8d75a 100644
>>> --- a/drivers/pci/pcie/err.c
>>> +++ b/drivers/pci/pcie/err.c
>>> @@ -196,6 +196,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>>>       struct pci_dev *bridge;
>>>       pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
>>>       struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
>>> +    struct aer_err_info info;
>>>       /*
>>>        * If the error was detected by a Root Port, Downstream Port, RCEC,
>>> @@ -223,6 +224,13 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>>>               pci_warn(bridge, "subordinate device reset failed\n");
>>>               goto failed;
>>>           }
>>> +
>>> +        info.severity = AER_FATAL;
>>> +        /* Link recovered, report fatal errors of RCiEP or EP */
>>> +        if ((type == PCI_EXP_TYPE_ENDPOINT ||
>>> +             type == PCI_EXP_TYPE_RC_END) &&
>>> +            aer_get_device_error_info(dev, &info, true))
>>> +            aer_print_error(dev, &info);
>>
>> IMO, error device information is more like debug info. Can we change
>> the print level of this info to debug?
>
> Yes, but error device information is quite important for users to figure
> out the device status and should not be ignored. We need it in production
> to analyze server health.


IMO, such information is needed for debugging repeated DPC event
occurrences. So when encountering repeated failures, an interested party
can increase the log level and gather this data. I personally think this
is too much detail for a kernel info message. Let's see what others and
Bjorn think.
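
As a rough, untested sketch of the level I have in mind (it would lose the
decoded error strings that aer_print_error() prints), something along these
lines, reusing dev/type/info from the hunk above:

        /* hypothetical: report the recovered device's AER bits at debug level */
        if ((type == PCI_EXP_TYPE_ENDPOINT ||
             type == PCI_EXP_TYPE_RC_END) &&
            aer_get_device_error_info(dev, &info, true))
                pci_dbg(dev, "fatal error status/mask=%08x/%08x\n",
                        info.status, info.mask);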


>
>>
>>>       } else {
>>>           pci_walk_bridge(bridge, report_normal_detected, &status);
>>>       }
>>> @@ -259,6 +267,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
>>>       if (host->native_aer || pcie_ports_native) {
>>>           pcie_clear_device_status(dev);
>>>           pci_aer_clear_nonfatal_status(dev);
>>> +        pci_aer_clear_fatal_status(dev);
>>
>> I think we clear the fatal status in the DPC driver; why do it again?
>
> The DPC driver only clears the fatal status of the err_port, not the
> err_dev. err_dev and err_port are indeed easy to confuse, so I have
> differentiated them again in patch 1.
>

Got it.

>>
>>>       }
>>>       pci_walk_bridge(bridge, pci_pm_runtime_put, NULL);
>>
-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

