Message-ID: <be8321e5-d048-4434-9b2a-8159e9bdba43@cixtech.com>
Date: Fri, 2 May 2025 23:49:07 +0800
From: Hans Zhang <hans.zhang@...tech.com>
To: Bjorn Helgaas <helgaas@...nel.org>
Cc: kbusch@...nel.org, axboe@...nel.dk, hch@....de, sagi@...mberg.me,
manivannan.sadhasivam@...aro.org, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org
Subject: Re: [PATCH] nvme-pci: Fix system hang when ASPM L1 is enabled during
suspend
On 2025/5/2 23:00, Bjorn Helgaas wrote:
>
> On Fri, May 02, 2025 at 11:20:51AM +0800, hans.zhang@...tech.com wrote:
>> From: Hans Zhang <hans.zhang@...tech.com>
>>
>> When PCIe ASPM L1 is enabled (CONFIG_PCIEASPM_POWERSAVE=y), certain
>
> CONFIG_PCIEASPM_POWERSAVE=y only sets the default. L1 can be enabled
> dynamically regardless of the config.
>
Dear Bjorn,
Thank you very much for your reply.
Yes. To reduce the power consumption of the SoC, we enable ASPM L1 by
default.
>> NVMe controllers fail to release LPI MSI-X interrupts during system
>> suspend, leading to a system hang. This occurs because the driver's
>> existing power management path does not fully disable the device
>> when ASPM is active.
>
> I have no idea what this has to do with ASPM L1. I do see that
> nvme_suspend() tests pcie_aspm_enabled(pdev) (which seems kind of
> janky and racy). But this doesn't explain anything about what would
> cause a system hang.
[ 92.411265] [pid:322,cpu11,kworker/u24:6]nvme 0000:91:00.0: PM:
calling pci_pm_suspend_noirq+0x0/0x2c0 @ 322, parent: 0000:90:00.0
[ 92.423028] [pid:322,cpu11,kworker/u24:6]nvme 0000:91:00.0: PM:
pci_pm_suspend_noirq+0x0/0x2c0 returned 0 after 1 usecs
[ 92.433894] [pid:324,cpu10,kworker/u24:7]pcieport 0000:90:00.0: PM:
calling pci_pm_suspend_noirq+0x0/0x2c0 @ 324, parent: pci0000:90
[ 92.445880] [pid:324,cpu10,kworker/u24:7]pcieport 0000:90:00.0: PM:
pci_pm_suspend_noirq+0x0/0x2c0 returned 0 after 39 usecs
[ 92.457227] [pid:916,cpu7,bash]sky1-pcie a070000.pcie: PM: calling
sky1_pcie_suspend_noirq+0x0/0x174 @ 916, parent: soc@0
[ 92.479315] [pid:916,cpu7,bash]cix-pcie-phy a080000.pcie_phy:
pcie_phy_common_exit end
[ 92.487389] [pid:916,cpu7,bash]sky1-pcie a070000.pcie:
sky1_pcie_suspend_noirq
[ 92.494604] [pid:916,cpu7,bash]sky1-pcie a070000.pcie: PM:
sky1_pcie_suspend_noirq+0x0/0x174 returned 0 after 26379 usecs
[ 92.505619] [pid:916,cpu7,bash]sky1-audss-clk
7110000.system-controller:clock-controller: PM: calling
genpd_suspend_noirq+0x0/0x80 @ 916, parent: 7110000.system-controller
[ 92.520919] [pid:916,cpu7,bash]sky1-audss-clk
7110000.system-controller:clock-controller: PM:
genpd_suspend_noirq+0x0/0x80 returned 0 after 1 usecs
[ 92.534214] [pid:916,cpu7,bash]Disabling non-boot CPUs ...
Hans: Before I added the debug printk, the system hung here; the log
output below comes from that printk.
sky1_pcie_suspend_noirq is the suspend function of the Sky1 SoC Root
Port driver.
Our hardware does STR (suspend-to-RAM), during which the PCIe controller
and PHY lose power.
So sky1_pcie_suspend_noirq turns off the AXI clock, APB clock, etc. of
the PCIe controller, and sky1_pcie_resume_noirq reinitializes the
controller and PHY. If suspend did not disable the AXI and APB clocks,
re-enabling them during resume would make the kernel clk API's
reference counts accumulate with every suspend/resume cycle.
Additionally, since the controller and PHY had already lost power, and
pci_msix_mask() runs after sky1_pcie_suspend_noirq, the hang happens at
the write of PCI_MSIX_ENTRY_CTRL_MASKBIT:
static inline void pci_msix_mask(struct msi_desc *desc)
{
	struct irq_desc *irq_desc = NULL;

	printk(KERN_EMERG "[HANS] fun = %s, line = %d irq = %d\n",
	       __func__, __LINE__, desc->irq);
	irq_desc = irq_to_desc(desc->irq);
	printk(KERN_EMERG "[HANS] fun = %s, line = %d irq_desc->depth = %d\n",
	       __func__, __LINE__, irq_desc->depth);
	dump_stack();
	printk(KERN_EMERG "[HANS] fun = %s, line = %d ...........\n",
	       __func__, __LINE__);
	desc->pci.msix_ctrl |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
	pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl); /* SoC hangs here */
	/* Flush write to device */
	readl(desc->pci.mask_base);
}
[ 92.542027] [pid:19,cpu1,migration/1][HANS] fun = __cpu_disable, line
= 317 ...........
[ 92.550148] [pid:19,cpu1,migration/1][HANS] fun = migrate_one_irq,
line = 120 ...........
[ 92.558410] [pid:19,cpu1,migration/1][HANS] fun =
irq_shutdown_and_deactivate, line = 325 ...........
[ 92.567711] [pid:19,cpu1,migration/1][HANS] fun = irq_shutdown, line
= 308 ...........
[ 92.575712] [pid:19,cpu1,migration/1][HANS] fun = pci_msix_mask, line
= 67 irq = 94
[ 92.583449] [pid:19,cpu1,migration/1][HANS] fun = pci_msix_mask, line
= 69 irq_desc->depth = 1
[ 92.592142] [pid:19,cpu1,migration/1]CPU: 1 PID: 19 Comm: migration/1
Tainted: G S O 6.1.44-cix-build-generic #242
[ 92.603702] [pid:19,cpu1,migration/1]Hardware name: Cix Technology
Group Co., Ltd. CIX Merak Board/CIX Merak Board, BIOS 1.0 Apr 14 2025
[ 92.615953] [pid:19,cpu1,migration/1]Stopper:
multi_cpu_stop+0x0/0x190 <- stop_machine_cpuslocked+0x138/0x184
[ 92.625876] [pid:19,cpu1,migration/1]Call trace:
[ 92.630490] [pid:19,cpu1,migration/1] dump_backtrace+0xdc/0x130
[ 92.636409] [pid:19,cpu1,migration/1] show_stack+0x18/0x30
[ 92.641891] [pid:19,cpu1,migration/1] dump_stack_lvl+0x64/0x80
[ 92.647725] [pid:19,cpu1,migration/1] dump_stack+0x18/0x34
[ 92.653209] [pid:19,cpu1,migration/1] pci_msix_mask+0x5c/0xcc
[ 92.658956] [pid:19,cpu1,migration/1] pci_msi_mask_irq+0x48/0x4c
[ 92.664963] [pid:19,cpu1,migration/1] its_mask_msi_irq+0x18/0x30
[ 92.670970] [pid:19,cpu1,migration/1] irq_shutdown+0xc4/0xf4
[ 92.676626] [pid:19,cpu1,migration/1]
irq_shutdown_and_deactivate+0x38/0x50
[ 92.683583] [pid:19,cpu1,migration/1]
irq_migrate_all_off_this_cpu+0x2ec/0x300
[ 92.690807] [pid:19,cpu1,migration/1] __cpu_disable+0xe0/0xf0
[ 92.696554] [pid:19,cpu1,migration/1] take_cpu_down+0x3c/0xa4
[ 92.702302] [pid:19,cpu1,migration/1] multi_cpu_stop+0x9c/0x190
[ 92.708218] [pid:19,cpu1,migration/1] cpu_stopper_thread+0x84/0x11c
[ 92.714482] [pid:19,cpu1,migration/1] smpboot_thread_fn+0x228/0x250
[ 92.720749] [pid:19,cpu1,migration/1] kthread+0x108/0x10c
[ 92.726147] [pid:19,cpu1,migration/1] ret_from_fork+0x10/0x20
[ 92.731894] [pid:19,cpu1,migration/1][HANS] fun = pci_msix_mask, line
= 71 ...........
>
>> The fix adds an explicit device disable and reset preparation step
>> in the suspend path after successfully setting the power state.
>> This ensures proper cleanup of interrupt resources even when ASPM
>> L1 is enabled, preventing the system from hanging during suspend.
>
> Maybe there's a clue in the 600 lines of debug output that I trimmed,
> but without some interpretation, I have no idea how to find it.
>
You can also view this page, where I have posted the full log from the
suspend process:
https://patchwork.kernel.org/project/linux-pci/patch/20250502032051.920990-1-hans.zhang@cixtech.com/
> Unless you see similar problems on other systems, I would suspect an
> issue with the SoC or the SoC driver where you do see problems.
I will test it on an RK3588 after my vacation, i.e. on May 6th.
However, each company's SoC design may differ. Our SoC's PCIe
controller and PHY lose power after STR; this is determined by the RTL
design.
With the patch below applied, the NVMe SSD suspends and resumes (STR)
properly with ASPM L1 enabled. Mani's patch (which was not accepted)
also makes it work.
My patch:
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b178d52eac1b..2243fabd54e4 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -3508,6 +3508,8 @@ static int nvme_suspend(struct device *dev)
*/
ret = nvme_disable_prepare_reset(ndev, true);
ctrl->npss = 0;
+ } else {
+ ret = nvme_disable_prepare_reset(ndev, true);
}
unfreeze:
nvme_unfreeze(ctrl);
If my reply is not sufficient, please let me know your questions and I
will capture the log again or explain further.
Best regards,
Hans