Message-ID: <309dd6e6-53ec-4f82-94ca-242941bd7136@linux.alibaba.com>
Date: Wed, 8 Jan 2025 17:04:25 +0800
From: Shuai Xue <xueshuai@...ux.alibaba.com>
To: Bjorn Helgaas <helgaas@...nel.org>
Cc: rostedt@...dmis.org, lukas@...ner.de, linux-pci@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-edac@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, bhelgaas@...gle.com,
tony.luck@...el.com, bp@...en8.de, mhiramat@...nel.org,
mathieu.desnoyers@...icios.com, oleg@...hat.com, naveen@...nel.org,
davem@...emloft.net, anil.s.keshavamurthy@...el.com, mark.rutland@....com,
peterz@...radead.org
Subject: Re: [PATCH v4] PCI: hotplug: Add a generic RAS tracepoint for hotplug
event
On 2025/1/8 07:19, Bjorn Helgaas wrote:
> On Sat, Nov 23, 2024 at 07:31:08PM +0800, Shuai Xue wrote:
>> Hotplug events are critical indicators for analyzing hardware health,
>> particularly in AI supercomputers where surprise link downs can
>> significantly impact system performance and reliability. Failure
>> characterization analysis illustrates the significance of failures
>> caused by Infiniband link errors. Meta observes that 2% of Infiniband
>> failures in a machine learning cluster and 6% in a vision application
>> cluster co-occur with GPU failures, such as falling off the bus, which
>> may indicate a correlation with PCIe.[1]
>>
>> To this end, define a new TRACING_SYSTEM named pci, add a generic RAS
>> tracepoint for hotplug events to help with health checks, and generate
>> tracepoints for pcie hotplug events. To monitor these tracepoints in
>> userspace, e.g. with rasdaemon, put `enum pci_hotplug_event` in a uapi
>> header.
>>
>> The output looks like this:
>> $ echo 1 > /sys/kernel/debug/tracing/events/pci/pci_hp_event/enable
>> $ cat /sys/kernel/debug/tracing/trace_pipe
>> <...>-206 [001] ..... 40.373870: pci_hp_event: 0000:00:02.0 slot:10, event:Link Down
>>
>> <...>-206 [001] ..... 40.374871: pci_hp_event: 0000:00:02.0 slot:10, event:Card not present
>>
>> [1] https://arxiv.org/abs/2410.21680
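
For context, the core of the patch is roughly the following. This is a
trimmed sketch with illustrative names and only the two events shown in the
output above; please refer to the patch itself for the exact definition:

  /* include/uapi/linux/pci.h (sketch; the patch may define more events) */
  enum pci_hotplug_event {
          PCI_HOTPLUG_LINK_DOWN,
          PCI_HOTPLUG_CARD_NOT_PRESENT,
  };

  /* include/trace/events/pci.h (sketch) */
  #undef TRACE_SYSTEM
  #define TRACE_SYSTEM pci

  #if !defined(_TRACE_PCI_H) || defined(TRACE_HEADER_MULTI_READ)
  #define _TRACE_PCI_H

  #include <linux/tracepoint.h>

  #define PCI_HP_EVENT_LIST                                      \
          { PCI_HOTPLUG_LINK_DOWN,        "Link Down" },         \
          { PCI_HOTPLUG_CARD_NOT_PRESENT, "Card not present" }

  TRACE_EVENT(pci_hp_event,
          TP_PROTO(const char *port_name, const char *slot, int event),
          TP_ARGS(port_name, slot, event),

          TP_STRUCT__entry(
                  __string(port_name, port_name)
                  __string(slot, slot)
                  __field(int, event)
          ),

          TP_fast_assign(
                  __assign_str(port_name); /* one-argument form on v6.10+ */
                  __assign_str(slot);
                  __entry->event = event;
          ),

          /* e.g. "0000:00:02.0 slot:10, event:Link Down" */
          TP_printk("%s slot:%s, event:%s",
                    __get_str(port_name), __get_str(slot),
                    __print_symbolic(__entry->event, PCI_HP_EVENT_LIST))
  );

  #endif /* _TRACE_PCI_H */

  /* This part must be outside the include guard */
  #include <trace/define_trace.h>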
>
> Doesn't apply on pci/main (v6.13-rc1); can you rebase it?
Sure. Do you mean the Git repository at
git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git,
branch main?
>
> s/pcie/PCIe/ in English text.
Will fix it.
>
> Probably more detail than necessary about AI supercomputers,
> Infiniband, vision applications, etc. This is a very generic issue.
Agreed, it is generic. Are you asking me to delete the first background
paragraph?
>
> "Falling off the bus" doesn't really mean anything to me. I suppose
> it's another way to describe a "link down" event that leads to UR
> errors when trying to access the device?
Sorry for the confusion. "Falling off the bus" is a common error observed
for NVIDIA GPUs in production. The GPU driver logs such a message when the
GPU is not accessible. We also see many hotplug events like the ones below:
[12945750.691652] pcieport 0000:42:02.0: pciehp: Slot(65): Link Down
[12945750.691655] pcieport 0000:42:02.0: pciehp: Slot(65): Card not present
> https://docs.nvidia.com/deploy/xid-errors/index.html#xid-79-gpu-has-fallen-off-the-bus
>
> I'm guessing that monitoring these via rasdaemon requires more than
> just adding "enum pci_hotplug_event"? Or does rasdaemon read
> include/uapi/linux/pci.h and automagically incorporate new events?
> Maybe there's at least a rebuild involved?
Yes, a rebuild is needed. Rasdaemon has basic infrastructure for manually
registering tracepoint event handlers. For example, for this new event, we
can register a handler for pci_hp_event:
rc = add_event_handler(ras, pevent, page_size, "pci", "pci_hp_event",
                       ras_pci_hp_event_handler, NULL, PCI_HOTPLUG_EVENT);
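
The handler itself would be along these lines. This is only a rough sketch
modeled on the existing ras_aer_event_handler(); the field names assume the
tracepoint definition above, and a real handler would also record the event
in rasdaemon's SQLite database:

  #include <traceevent/event-parse.h> /* libtraceevent */

  /* Hypothetical handler sketch for the pci_hp_event tracepoint. */
  int ras_pci_hp_event_handler(struct trace_seq *s,
                               struct tep_record *record,
                               struct tep_event *event, void *context)
  {
          unsigned long long ev;
          const char *port_name, *slot;
          int len;

          /* __string() fields come back as dynamic strings */
          port_name = tep_get_field_raw(s, event, "port_name", record, &len, 1);
          if (!port_name)
                  return -1;

          slot = tep_get_field_raw(s, event, "slot", record, &len, 1);
          if (!slot)
                  return -1;

          /* numeric enum value; userspace maps it back to a name */
          if (tep_get_field_val(s, event, "event", record, &ev, 1) < 0)
                  return -1;

          trace_seq_printf(s, "%s slot:%s event:%llu", port_name, slot, ev);
          return 0;
  }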
>
> Anything in the arxiv link that is specifically relevant to this patch
> needs to be in the commit log itself. But I think there's already
> enough information here to motivate this change, and whatever is in
> the arxiv link may be of general interest, but is probably not
> required to justify, understand, or debug this useful functionality.
I see, will remove the arxiv link.
>
> Bjorn
Thank you for the quick reply.
Best Regards,
Shuai