Message-ID: <25f36fa7-d1d6-4b81-a42f-64c445d6f065@amd.com>
Date: Tue, 14 Oct 2025 12:50:44 +0200
From: Christian König <christian.koenig@....com>
To: Mario Limonciello <superm1@...nel.org>, Bert Karwatzki
<spasswolf@....de>, linux-kernel@...r.kernel.org
Cc: linux-next@...r.kernel.org, regressions@...ts.linux.dev,
linux-pci@...r.kernel.org, linux-acpi@...r.kernel.org,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>
Subject: Re: [REGRESSION 00/04] Crash during resume of pcie bridge
On 13.10.25 20:51, Mario Limonciello wrote:
> On 10/13/25 11:29 AM, Bert Karwatzki wrote:
>> Am Dienstag, dem 07.10.2025 um 16:33 -0500 schrieb Mario Limonciello:
>>>
>>> Can you still reproduce with amd_iommu=off?
>>
>> Reproducing this at all is very difficult, so I'll first try to find the exact spot
>> where things break (i.e. when the PCI bus breaks and no more messages are transmitted
>> via netconsole). The current state of this search is that the crash occurs in
>> pci_pm_runtime_resume(), before pci_fixup_device() is called:
>>
>
> One other (unfortunate) possibility is that the timing of this crash occurring is not deterministic.
Yeah, completely agree.
I think the exact spot where things break is actually pretty uninteresting, since it is most likely not the spot which caused the issue.
Instead, what happens is that something in the HW times out and you see a spontaneous reboot because of that.
I would rather try to narrow down which operation, or combination of things, is causing the issue.
Maybe also double check whether runtime PM is actually working on the good kernel, or whether somebody fixed runtime PM and you are now seeing issues because you happen to have problematic HW which we need to add to the blacklist.
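For checking that, the runtime PM state of the bridge is visible in sysfs; a minimal sketch, assuming the 0000:00:01.1 address from your test (adjust to whichever device you want to look at):

```shell
#!/bin/sh
# Inspect the runtime PM state of a PCI device via sysfs.
# 0000:00:01.1 is the bridge address from the thread; it may not
# exist on other machines, so guard for that.
dev=/sys/bus/pci/devices/0000:00:01.1
if [ -d "$dev" ]; then
	# "active" / "suspended" shows whether runtime transitions happen at all
	echo "runtime_status: $(cat "$dev/power/runtime_status")"
	# "auto" means runtime PM is enabled, "on" keeps the device awake
	echo "control: $(cat "$dev/power/control")"
else
	echo "no such device on this machine"
fi
```

If runtime_status never leaves "active" on the good kernel, runtime PM was probably not working there in the first place.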
Regards,
Christian.
>
> As an idea for debugging this issue, do you think maybe using kdumpst [1] might be helpful to get more information on the state during the crash?
>
> Since NVMe is missing, you might need to boot off of USB or SD, though, so that kdumpst is able to save the vmcore out of RAM.
>
> Link: https://blogs.igalia.com/gpiccoli/2024/07/presenting-kdumpst-or-how-to-collect-kernel-crash-logs-on-arch-linux/ [1]
>> static int pci_pm_runtime_resume(struct device *dev)
>> {
>> 	struct pci_dev *pci_dev = to_pci_dev(dev);
>> 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
>> 	pci_power_t prev_state = pci_dev->current_state;
>> 	int error = 0;
>>
>> 	// dev_info(dev, "%s = %px\n", __func__, (void *) pci_pm_runtime_resume); // removed so we don't get too much delay
>> 	// This was still printed in the case of a crash,
>> 	// so the crash must happen below.
>>
>> 	/*
>> 	 * Restoring config space is necessary even if the device is not bound
>> 	 * to a driver because although we left it in D0, it may have gone to
>> 	 * D3cold when the bridge above it runtime suspended.
>> 	 */
>> 	pci_pm_default_resume_early(pci_dev);
>> 	if (!strcmp(dev_name(dev), "0000:00:01.1")) // This is the current test.
>> 		dev_info(dev, "%s %d\n", __func__, __LINE__);
>> 	pci_resume_ptm(pci_dev);
>>
>> 	if (!pci_dev->driver)
>> 		return 0;
>>
>> 	//if (!strcmp(dev_name(dev), "0000:00:01.1")) // This was not printed when 6.17.0-rc6-next-20250917-gpudebug-00036-g4f7b4067c9ce
>> 	//	dev_info(dev, "%s %d\n", __func__, __LINE__); // crashed, so the crash must happen above
>> 	pci_fixup_device(pci_fixup_resume_early, pci_dev);
>> 	pci_pm_default_resume(pci_dev);
>>
>> 	if (prev_state == PCI_D3cold)
>> 		pci_pm_bridge_power_up_actions(pci_dev);
>>
>> 	if (pm && pm->runtime_resume)
>> 		error = pm->runtime_resume(dev);
>>
>> 	return error;
>> }
>>
>>
>> Bert Karwatzki
>