Message-ID: <807730f9-0220-d297-dffd-929dde30d513@quicinc.com>
Date: Thu, 22 Sep 2022 21:09:28 +0530
From: Krishna Chaitanya Chundru <quic_krichai@...cinc.com>
To: Bjorn Helgaas <helgaas@...nel.org>
CC: <linux-pci@...r.kernel.org>, <linux-arm-msm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <mka@...omium.org>,
<quic_vbadigan@...cinc.com>, <quic_hemantk@...cinc.com>,
<quic_nitegupt@...cinc.com>, <quic_skananth@...cinc.com>,
<quic_ramkri@...cinc.com>, <manivannan.sadhasivam@...aro.org>,
<swboyd@...omium.org>, <dmitry.baryshkov@...aro.org>,
<svarbanov@...sol.com>, <agross@...nel.org>,
<andersson@...nel.org>, <konrad.dybcio@...ainline.org>,
<lpieralisi@...nel.org>, <robh@...nel.org>, <kw@...ux.com>,
<bhelgaas@...gle.com>, <linux-phy@...ts.infradead.org>,
<vkoul@...nel.org>, <kishon@...com>, <mturquette@...libre.com>,
<linux-clk@...r.kernel.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
"Rafael J. Wysocki" <rafael@...nel.org>, <linux-pm@...r.kernel.org>
Subject: Re: [PATCH v7 1/5] PCI: qcom: Add system suspend and resume support
On 9/21/2022 10:26 PM, Bjorn Helgaas wrote:
> [+cc Rafael, linux-pm since this is real power management magic,
> beginning of thread:
> https://lore.kernel.org/all/1663669347-29308-1-git-send-email-quic_krichai@quicinc.com/
> full patch since I trimmed too much of it:
> https://lore.kernel.org/all/1663669347-29308-2-git-send-email-quic_krichai@quicinc.com/]
>
> On Wed, Sep 21, 2022 at 03:23:35PM +0530, Krishna Chaitanya Chundru wrote:
>> On 9/20/2022 11:46 PM, Bjorn Helgaas wrote:
>>> On Tue, Sep 20, 2022 at 03:52:23PM +0530, Krishna chaitanya chundru wrote:
>>>> Add suspend and resume syscore ops.
>>>>
>>>> A few PCIe endpoints, like NVMe and WLAN devices, always expect the
>>>> device to be in D0 state and the link to be active (or in L1ss) all
>>>> the time (including in S3 state).
>>> What does this have to do with the patch? I don't see any NVMe or
>>> WLAN patches here.
>> The existing NVMe driver expects the NVMe device to remain in D0
>> during S3 as well. If we turn off the link in suspend, the NVMe resume
>> path breaks because the state machine in the NVMe device gets reset.
>> Due to this, the host driver state machine and the device state
>> machine go out of sync, and all NVMe commands issued after resume time
>> out.
>>
>> IIRC, Tegra is also facing this issue with NVMe.
>>
>> This issue has been discussed in the threads below:
>>
>> https://lore.kernel.org/all/Yl+6V3pWuyRYuVV8@infradead.org/T/
>>
>> https://lore.kernel.org/linux-nvme/20220201165006.3074615-1-kbusch@kernel.org/
> The problem is that this commit log doesn't explain the problem and
> doesn't give us anything to connect the NVMe and WLAN assumptions with
> this special driver behavior. There needs to be some explicit
> property of NVMe and WLAN that the PM core or drivers like qcom can
> use to tell whether the clocks can be turned off.
Not only that: NVMe expects the device to always stay in D0. So PCIe
drivers should not turn off the link in suspend and retrain it in
resume, because the NVMe device treats that as a power cycle, which
eventually increases the wear of the NVMe flash.
With this patch series we are trying to keep the device in D0 while
also reducing power consumption when the system is in S3 by turning
off the clocks and PHY.
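
To make that concrete, here is a minimal sketch of the suspend-side
idea (the struct layout and names are illustrative, not the actual
patch): the endpoint is left in D0 and only the controller-side clocks
and PHY are powered down.

#include <linux/clk.h>
#include <linux/phy/phy.h>

/* Illustrative driver state; the real struct qcom_pcie differs. */
struct qcom_pcie {
	struct clk_bulk_data *clks;
	int num_clks;
	struct phy *phy;
	bool suspended;
};

/*
 * Sketch: cut controller-side power without any D-state change or
 * link retrain, so the endpoint never sees a power cycle.
 */
static int qcom_pcie_resources_off(struct qcom_pcie *pcie)
{
	clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
	phy_power_off(pcie->phy);
	pcie->suspended = true;
	return 0;
}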
>
>>>> On the qcom platform, PCIe resources (clocks, PHY, etc.) can be
>>>> released when the link is in L1ss to reduce power consumption. So if
>>>> the link is in L1ss, release the PCIe resources. And when the system
>>>> resumes, re-enable the PCIe resources if they were released in the
>>>> suspend path.
>>> What's the connection with L1.x? Links enter L1.x based on activity
>>> and timing. That doesn't seem like a reliable indicator to turn PHYs
>>> off and disable clocks.
>> This is a Qcom PHY-specific feature (retaining the link state in L1.x
>> with the clocks turned off). It is possible only when the link is in
>> L1.x: the PHY can't retain the link state in L0 with the clocks turned
>> off, and we need to retrain the link if it's in L2 or L3. So we can
>> support this feature only in L1.x. That is the reason we take L1.x as
>> the trigger to turn off clocks (in the suspend path only).
> This doesn't address my question. L1.x is an ASPM feature, which
> means hardware may enter or leave L1.x autonomously at any time
> without software intervention. Therefore, I don't think reading the
> current state is a reliable way to decide anything.
After the link enters L1.x, it will come out only if there is some
activity on the link. Since the system is suspended and the NVMe
driver is also suspended (queues are frozen in suspend), nothing else
can initiate any traffic. As long as the link stays in L1ss we can
turn off the clocks and PHY.
When the system resumes we turn the clocks and PHY back on before
resuming NVMe; this makes sure the clocks and PHY are up before there
is any activity that brings the link back to L0 from L1.x.
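
As a sketch of that ordering (reusing the illustrative struct from the
earlier sketch; qcom_pcie_link_in_l1ss() is a hypothetical helper, not
a real API):

/*
 * Hypothetical helper: would poll a controller/PHY status register
 * to check that the link has settled in L1.x.
 */
static bool qcom_pcie_link_in_l1ss(struct qcom_pcie *pcie)
{
	return true;	/* placeholder for the real register read */
}

static int qcom_pcie_suspend_late(struct qcom_pcie *pcie)
{
	/* Only gate resources once the link has settled in L1.x. */
	if (!qcom_pcie_link_in_l1ss(pcie))
		return 0;

	return qcom_pcie_resources_off(pcie);
}

static int qcom_pcie_resume_early(struct qcom_pcie *pcie)
{
	int ret;

	if (!pcie->suspended)
		return 0;

	/*
	 * Bring the PHY and clocks back before any child driver
	 * (e.g. NVMe) resumes, so the first access wakes the link
	 * from L1.x to L0 instead of hitting an unclocked bus.
	 */
	ret = phy_power_on(pcie->phy);
	if (ret)
		return ret;

	ret = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
	if (ret) {
		phy_power_off(pcie->phy);
		return ret;
	}

	pcie->suspended = false;
	return 0;
}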
>
>> ...
>>>> It's observed that access to the EP's PCIe space to mask MSI/MSI-X
>>>> happens at a very late stage of the suspend path (the access is
>>>> triggered by IRQ affinity changes while taking CPUs offline during
>>>> suspend, which happens after devices are suspended, i.e. after all
>>>> phases of the suspend ops). If we turn off the clocks in any PM
>>>> callback, we later run into crashes from unclocked access caused by
>>>> the above-mentioned MSI/MSI-X access.
>>>> So we make use of the syscore framework to turn off the PCIe clocks,
>>>> since its suspend callback runs after the CPUs are taken offline.
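
For reference, the syscore hookup itself is small; a sketch, assuming
the illustrative helpers from the earlier sketches (the real patch may
organize this differently):

#include <linux/syscore_ops.h>

/*
 * syscore callbacks take no arguments, so the driver instance must
 * be reachable from file scope.
 */
static struct qcom_pcie *qcom_pcie_syscore_data;

static int qcom_pcie_syscore_suspend(void)
{
	/*
	 * Runs after all device PM phases and after nonboot CPUs are
	 * offline, so the late MSI/MSI-X masking writes have already
	 * happened on a still-clocked bus.
	 */
	return qcom_pcie_suspend_late(qcom_pcie_syscore_data);
}

static void qcom_pcie_syscore_resume(void)
{
	/* Runs before any device resume callback. */
	qcom_pcie_resume_early(qcom_pcie_syscore_data);
}

static struct syscore_ops qcom_pcie_syscore_ops = {
	.suspend = qcom_pcie_syscore_suspend,
	.resume = qcom_pcie_syscore_resume,
};

/* Registered once from probe: register_syscore_ops(&qcom_pcie_syscore_ops); */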