Message-ID: <20250708055307.4555-1-weilinghan@xiaomi.com>
Date: Tue, 8 Jul 2025 13:53:07 +0800
From: weilinghan <weilinghan@...omi.com>
To: <helgaas@...nel.org>
CC: <bhelgaas@...gle.com>, <hulingchen@...omi.com>,
<linux-kernel@...r.kernel.org>, <vidyas@...dia.com>, <weilinghan@...omi.com>,
<weipengliang@...omi.com>
Subject: Re: [PATCH] PCI: remove call pci_save_aspm_l1ss_state() from pci_save_pcie_state()
On Mon, 7 Jul 2025 14:49:03 -0500, Bjorn Helgaas wrote:
>On Mon, Jul 07, 2025 at 07:52:36PM +0800, weilinghan wrote:
>> During the suspend-resume process, PCIe resumes by enabling L1.2 in
>> the pci_restore_state function due to patch 4ff116d0d5fd.
>> However, in the following scenario, the resume process becomes very
>> time-consuming:
>>
>> 1.The platform has multiple PCI buses.
>> 2.The link transition time from L1.2 to L0 exceeds 100 microseconds by
>> accessing the configuration space of the EP.
>> 3.The PCI framework has async_suspend enabled (by calling
>> device_enable_async_suspend(&dev->dev)
>> in pci_pm_init(struct pci_dev *dev)).
>> 4.On ARM platforms, CONFIG_PCI_LOCKLESS_CONFIG is not enabled, which
>> means the pci_bus_read_config_##size interfaces contain locks (spinlock).
>>
>> Practical measurements show that enabling L1.2 during the resume
>> process introduces an additional delay of approximately 150ms in the
>> pci_pm_resume_noirq() function for platforms with two PCI buses,
>> compared to when L1.2 is disabled.
>We really need an argument for why this change would be correct, not just the fact that it makes resume faster. Vidya made the change in 4ff116d0d5fd to fix a problem, and it looks like this patch would reintroduce the problem.
OK. The underlying issue I'm seeing is lock contention when multiple PCI devices call pci_restore_state() during the resume_noirq phase.
This problem arises from commit a1e4d72cd ("PM: Allow PCI devices to suspend/resume asynchronously"), which made the noirq phase of PCI devices run asynchronously. As a result, multiple PCI devices may attempt to restore configuration space concurrently, contending for the PCI configuration lock.
Additionally, commit 4ff116d0d5fd ("PCI/ASPM: Save L1 PM Substates Capability for suspend/resume") from Vidya added L1.2 state handling to the restore path, which lengthens the time spent in the critical section, further exacerbating the lock contention and increasing resume latency.
Currently, I'm considering a few possible approaches to address this:
1. In the driver, call device_disable_async_suspend() to prevent asynchronous suspend/resume for specific devices that are known to have contention issues.
2. Enable CONFIG_PCI_LOCKLESS_CONFIG on the ARM platform.
3. Make dev_pm_skip_resume() return true for certain devices, skipping pci_restore_state() in the PCI core during resume.
4. Revert commit a1e4d72cd ("PM: Allow PCI devices to suspend/resume asynchronously").
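For option 1, the change would be small and local to each affected driver. A sketch (my_ep_probe and the use of pcim_enable_device() are placeholders for whatever the real driver does; device_disable_async_suspend() is the existing helper from include/linux/pm.h):

```c
/* Hypothetical endpoint driver probe: opt this one device out of
 * async PM so its noirq restore runs serially and no longer races
 * other devices for the config-space lock. */
static int my_ep_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	device_disable_async_suspend(&pdev->dev);

	return 0;
}
```

The downside is that it has to be repeated in every driver that hits the problem, rather than fixing the contention once in the core.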
I'd appreciate any insights or recommendations from the community on the best way to proceed. Are there any preferred approaches for handling this?
Thanks,
weilinghan