Message-ID: <fe946c32-8fa8-45bf-908c-17aa7374b9ac@linux.alibaba.com>
Date: Mon, 28 Jul 2025 17:17:47 +0800
From: Shuai Xue <xueshuai@...ux.alibaba.com>
To: Bjorn Helgaas <helgaas@...nel.org>
Cc: rostedt@...dmis.org, lukas@...ner.de, linux-pci@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-edac@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, ilpo.jarvinen@...ux.intel.com,
mattc@...estorage.com, Jonathan.Cameron@...wei.com, bhelgaas@...gle.com,
tony.luck@...el.com, bp@...en8.de, mhiramat@...nel.org,
mathieu.desnoyers@...icios.com, oleg@...hat.com, naveen@...nel.org,
davem@...emloft.net, anil.s.keshavamurthy@...el.com, mark.rutland@....com,
peterz@...radead.org, tianruidong@...ux.alibaba.com
Subject: Re: [PATCH v9 2/2] PCI: trace: Add a RAS tracepoint to monitor link
speed changes
On 2025/7/26 05:09, Bjorn Helgaas wrote:
> On Wed, Jul 23, 2025 at 11:31:08AM +0800, Shuai Xue wrote:
>> PCIe link speed degradation directly impacts system performance and
>> often indicates hardware issues such as faulty devices, physical layer
>> problems, or configuration errors.
>>
>> To this end, add a RAS tracepoint to monitor link speed changes,
>> enabling proactive health checks and diagnostic analysis.
>>
>> The output is like below:
>>
>> $ echo 1 > /sys/kernel/debug/tracing/events/pci/pci_link_event/enable
>> $ cat /sys/kernel/debug/tracing/trace_pipe
>> <...>-119 [002] ..... 125.776171: pci_hp_event: 0000:00:03.0 slot:30, event:CARD_PRESENT
>>
>> <...>-119 [002] ..... 125.776197: pci_hp_event: 0000:00:03.0 slot:30, event:LINK_UP
>>
>> irq/57-pciehp-119 [002] ..... 125.904335: pcie_link_event: 0000:00:03.0 type:4, reason:4, cur_bus_speed:2.5 GT/s PCIe, max_bus_speed:16.0 GT/s PCIe, width:1, flit_mode:0, status:DLLLA
>>
>> irq/57-pciehp-119 [002] ..... 125.907051: pcie_link_event: 0000:00:03.0 type:4, reason:0, cur_bus_speed:2.5 GT/s PCIe, max_bus_speed:16.0 GT/s PCIe, width:1, flit_mode:0, status:DLLLA
>
> I guess this example would actually require both of these enables, right?
>
> echo 1 > /sys/kernel/debug/tracing/events/pci/pci_hp_event/enable
> echo 1 > /sys/kernel/debug/tracing/events/pci/pci_link_event/enable
Yes, you're absolutely right. I'll correct the commit log to show both
commands.
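
The corrected example will then read (both enable commands are the
ones you quoted above, followed by the trace_pipe read):

  $ echo 1 > /sys/kernel/debug/tracing/events/pci/pci_hp_event/enable
  $ echo 1 > /sys/kernel/debug/tracing/events/pci/pci_link_event/enable
  $ cat /sys/kernel/debug/tracing/trace_pipe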
(As a side note, echo 1 > /sys/kernel/debug/tracing/events/pci/enable
would enable both events at once.)
>
>> Suggested-by: Bjorn Helgaas <helgaas@...nel.org>
>
> I don't think I've suggested anything that really warrants this ;)
>
Fair enough! I'll drop the Suggested-by tag.
>> ...
>> @@ -292,7 +292,7 @@ int pciehp_check_link_status(struct controller *ctrl)
>> {
>> struct pci_dev *pdev = ctrl_dev(ctrl);
>> bool found;
>> - u16 lnk_status, linksta2;
>> + u16 lnk_status;
>>
>> if (!pcie_wait_for_link(pdev, true)) {
>> ctrl_info(ctrl, "Slot(%s): No link\n", slot_name(ctrl));
>> @@ -319,8 +319,7 @@ int pciehp_check_link_status(struct controller *ctrl)
>> return -1;
>> }
>>
>> - pcie_capability_read_word(pdev, PCI_EXP_LNKSTA2, &linksta2);
>> - __pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status, linksta2);
>> + pcie_update_link_speed(ctrl->pcie->port->subordinate, PCIE_HOTPLUG);
>
> It kind of bugs me that the hot-add flow reads LNKSTA three times and
> generates both pci_hp_event LINK_UP and link_event tracepoints:
>
> pciehp_handle_presence_or_link_change
> link_active = pciehp_check_link_active()
> pcie_capability_read_word(PCI_EXP_LNKSTA)
> if (link_active)
> ctrl_info(ctrl, "Slot(%s): Link Up\n")
> trace_pci_hp_event(PCI_HOTPLUG_LINK_UP)
> pciehp_enable_slot
> __pciehp_enable_slot
> board_added
> pciehp_check_link_status
> pcie_capability_read_word(PCI_EXP_LNKSTA)
> pcie_update_link_speed
> pcie_capability_read_word(PCI_EXP_LNKSTA)
> pcie_capability_read_word(PCI_EXP_LNKSTA2)
> trace_pcie_link_event(<REASON>)
>
> Maybe there are good reasons for reading LNKSTA three times, but it
> does make me raise my eyebrows. Not that this is a performance path,
> but it just offends my sense of propriety.
>
> And maybe we need both a bare LINK_UP event and a link_event with all
> the details, but again it seems a little weird to me that there are
> two tracepoints when there's really only one event and we know all the
> link_event information from the very first LNKSTA read.
>
> Bjorn
I understand your concern about the multiple LNKSTA reads and the
apparent duplication between the two tracepoints. Please see the
comments from Lukas elsewhere in this thread.
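
For illustration, a rough sketch of what reusing the first read might
look like. This is hypothetical only: the "reason" argument to
__pcie_update_link_speed() does not exist in its current signature
(it takes just the bus, lnk_status, and linksta2), and I'm borrowing
the PCIE_HOTPLUG reason code this patch passes to
pcie_update_link_speed():

	int pciehp_check_link_status(struct controller *ctrl)
	{
		struct pci_dev *pdev = ctrl_dev(ctrl);
		u16 lnk_status, linksta2;
		...
		/* Read LNKSTA/LNKSTA2 once ... */
		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
		pcie_capability_read_word(pdev, PCI_EXP_LNKSTA2, &linksta2);
		...
		/* ... and reuse the values for both the link-training
		 * check and the tracepoint, rather than re-reading them
		 * inside pcie_update_link_speed(). (Hypothetical fourth
		 * argument carrying the trace reason.) */
		__pcie_update_link_speed(ctrl->pcie->port->subordinate,
					 lnk_status, linksta2, PCIE_HOTPLUG);
		...
	}

Whether that is worth the churn, and whether the bare LINK_UP event
and the detailed link event should be merged, is exactly what Lukas's
comments address, so I'd defer to that discussion.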
Best Regards,
Shuai