Message-ID: <e807ca71fdef97c931fd9f92eda0f7551aa3ef7b.camel@web.de>
Date: Fri, 28 Nov 2025 21:47:35 +0100
From: Bert Karwatzki <spasswolf@....de>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: Christian König <christian.koenig@....com>, "Mario
Limonciello (AMD) (kernel.org)" <superm1@...nel.org>,
linux-kernel@...r.kernel.org, linux-next@...r.kernel.org,
regressions@...ts.linux.dev, linux-pci@...r.kernel.org,
linux-acpi@...r.kernel.org, "Rafael J . Wysocki"
<rafael.j.wysocki@...el.com>, acpica-devel@...ts.linux.dev, Robert Moore
<robert.moore@...el.com>, Saket Dumbre <saket.dumbre@...el.com>,
spasswolf@....de
Subject: Re: Crash during resume of pcie bridge due to infinite loop in
ACPICA
This is not an ACPICA problem after all.
I did some more monitoring:
https://gitlab.freedesktop.org/spasswolf/linux-stable/-/commits/amdgpu_suspend_resume?ref_type=heads
and I still get a crash, but perhaps thanks to the delays caused by the printk()s I now get a helpful error message in netconsole:
T5971;ACPI BIOS Error (bug): Could not resolve symbol [\M013.VARR], AE_NOT_FOUND (20240827/psargs-332)
T5971;acpi_ps_complete_op returned 0x5
T5971;acpi_ps_parse_aml_debug: parse loop returned = 0x5
T5971;ACPI Error: Aborting method \M013 due to previous error (AE_NOT_FOUND) (20240827/psparse-935)
T5971;ACPI Error: Aborting method \M017 due to previous error (AE_NOT_FOUND) (20240827/psparse-935)
T5971;ACPI Error: Aborting method \M019 due to previous error (AE_NOT_FOUND) (20240827/psparse-935)
T5971;ACPI Error: Aborting method \_SB.PCI0.GPP0.M439 due to previous error (AE_NOT_FOUND) (20240827/psparse-935)
T5971;ACPI Error: Aborting method \_SB.PCI0.GPP0.M241 due to previous error (AE_NOT_FOUND) (20240827/psparse-935)
T5971;ACPI Error: Aborting method \_SB.PCI0.GPP0.M237._ON due to previous error (AE_NOT_FOUND) (20240827/psparse-935)
T5971;acpi_ps_parse_aml_debug: after walk loop
T5971;acpi_ps_execute_method_debug 331
T5971;acpi_ns_evaluate_debug 475 METHOD
T5971;acpi_evaluate_object_debug 255
T5971;__acpi_power_on_debug 369
T5971;acpi_power_on_unlocked_debug 442
T5971;acpi_power_on_unlocked_debug 446
T5971;acpi_power_on_debug 471
T5971;acpi_power_on_list_debug 649: result = -19
T5971;pcieport 0000:00:01.1: pci_pm_default_resume_early 568
T5971;pcieport 0000:00:01.1: broken device, retraining non-functional downstream link at 2.5GT/s
T5971;pcieport 0000:00:01.1: retraining failed
T5971;pcieport 0000:00:01.1: Data Link Layer Link Active not set in 1000 msec
T5971;pcieport 0000:01:00.0: Unable to change power state from D3cold to D0, device inaccessible
This shows that ACPICA itself is not the problem: acpi_power_on_list(_debug)() returns -ENODEV,
and the crash occurs later.
This leaves two questions:
1. Is this crash avoidable by different error handling in the PCI code?
2. If the crash is not avoidable, can we at least modify the error handling in such a way that
we get an error message through netconsole by default? (Perhaps a little delay would suffice.)
Bert Karwatzki