Date:   Wed, 29 Mar 2023 15:04:10 +0200
From:   Konrad Dybcio <konrad.dybcio@...aro.org>
To:     Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>,
        Frank Li <frank.li@....com>
Cc:     "lpieralisi@...nel.org" <lpieralisi@...nel.org>,
        "kw@...ux.com" <kw@...ux.com>, "robh@...nel.org" <robh@...nel.org>,
        "andersson@...nel.org" <andersson@...nel.org>,
        "bhelgaas@...gle.com" <bhelgaas@...gle.com>,
        "linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
        "linux-arm-msm@...r.kernel.org" <linux-arm-msm@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "quic_krichai@...cinc.com" <quic_krichai@...cinc.com>,
        "johan+linaro@...nel.org" <johan+linaro@...nel.org>,
        "steev@...i.org" <steev@...i.org>,
        "mka@...omium.org" <mka@...omium.org>, Dhruva Gole <d-gole@...com>
Subject: Re: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend
 and resume



On 29.03.2023 15:02, Manivannan Sadhasivam wrote:
> On Mon, Mar 27, 2023 at 03:29:54PM +0000, Frank Li wrote:
>>
>>
>>> -----Original Message-----
>>> From: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
>>> Sent: Monday, March 27, 2023 8:38 AM
>>> To: lpieralisi@...nel.org; kw@...ux.com; robh@...nel.org
>>> Cc: andersson@...nel.org; konrad.dybcio@...aro.org;
>>> bhelgaas@...gle.com; linux-pci@...r.kernel.org; linux-arm-
>>> msm@...r.kernel.org; linux-kernel@...r.kernel.org;
>>> quic_krichai@...cinc.com; johan+linaro@...nel.org; steev@...i.org;
>>> mka@...omium.org; Manivannan Sadhasivam
>>> <manivannan.sadhasivam@...aro.org>; Dhruva Gole <d-gole@...com>
>>> Subject: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend
>>> and resume
>>>
>>> Caution: EXT Email
>>>
>>> During system suspend, vote for minimal interconnect bandwidth and also
>>> turn OFF resources such as the clock and PHY if there are no active
>>> devices connected to the controller. For controllers with active devices,
>>> the resources are kept ON, since removing them would trigger an access
>>> violation late in the suspend cycle, when the kernel tries to access the
>>> config space of the PCIe devices to mask the MSIs.
>>
>> I remember hitting a similar problem before. It is related to the ASPM
>> settings of the NVMe device: NVMe tries to use L1.2 at suspend to save
>> resume time.
>>
>> It should be the user's decision whether the PCIe link enters L1.2 (for
>> better resume time) or L2 (for better power saving). If ASPM is disabled
>> for the NVMe device, the NVMe driver frees the MSI IRQ before entering
>> suspend, so the MSI IRQ disable path does not access config space.
>>
> 
> The NVMe driver will only shut down the device if ASPM is completely disabled in
> the kernel. It also takes the powerdown path on some Intel platforms, though. For
> the others, it keeps the device powered on and expects the power saving to come
> from ASPM.
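For reference, the decision being described above looks roughly like the sketch
below, loosely based on the suspend path in drivers/nvme/host/pci.c. Note that
pcie_aspm_enabled() and pm_suspend_via_firmware() are real kernel helpers, while
nvme_full_shutdown() and nvme_enter_host_managed_ps() are hypothetical
placeholders standing in for the driver's internal paths, so treat this as an
illustration rather than the actual driver code:

#include <linux/pci.h>
#include <linux/suspend.h>

static int nvme_suspend_sketch(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);

	/*
	 * ASPM disabled (or a firmware-managed suspend): shut the controller
	 * down completely.  Queues and MSI/MSI-X vectors are freed here, so
	 * nothing touches config space later in the noirq phase and the link
	 * may go down to L2/L3.
	 */
	if (!pcie_aspm_enabled(pdev) || pm_suspend_via_firmware())
		return nvme_full_shutdown(pdev);		/* hypothetical */

	/*
	 * ASPM enabled: keep the controller powered, use a host-managed NVMe
	 * power state and rely on ASPM L1.2 for link power saving.  The MSIs
	 * stay allocated, so the host controller must keep the link
	 * reachable, which is what this patch preserves.
	 */
	return nvme_enter_host_managed_ps(pdev);		/* hypothetical */
}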
> 
>> This is just a general comment; it is not specific to this patch. Many platforms
>> will face a similar problem. We may need a better solution for handling L2/L3
>> for better power saving in the future.
>>
> 
> The only argument I hear from them is that powering down the NVMe device during
> suspend may deteriorate its lifetime, since the number of suspend cycles is
> going to be high.
I think I asked that question before, but do we know what Windows/macOS do?

Konrad
> 
> - Mani
> 
>> Frank Li
>>  
>>>
>>> Also, it is not desirable to put the link into the L2/L3 state, as that
>>> implies the VDD supply will be removed and the devices may go into
>>> powerdown state. This would affect the lifetime of storage devices like
>>> NVMe.
>>>
>>> Finally, during resume, turn ON the resources if the controller was
>>> truly suspended (resources OFF) and update the interconnect bandwidth
>>> based on the PCIe Gen speed.
>>>
>>> Suggested-by: Krishna chaitanya chundru <quic_krichai@...cinc.com>
>>> Acked-by: Dhruva Gole <d-gole@...com>
>>> Signed-off-by: Manivannan Sadhasivam
>>> <manivannan.sadhasivam@...aro.org>
>>> ---
>>>  drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
>>>  1 file changed, 62 insertions(+)
>>>
>>> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
>>> index a232b04af048..f33df536d9be 100644
>>> --- a/drivers/pci/controller/dwc/pcie-qcom.c
>>> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
>>> @@ -227,6 +227,7 @@ struct qcom_pcie {
>>>         struct gpio_desc *reset;
>>>         struct icc_path *icc_mem;
>>>         const struct qcom_pcie_cfg *cfg;
>>> +       bool suspended;
>>>  };
>>>
>>>  #define to_qcom_pcie(x)                dev_get_drvdata((x)->dev)
>>> @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev)
>>>         return ret;
>>>  }
>>>
>>> +static int qcom_pcie_suspend_noirq(struct device *dev)
>>> +{
>>> +       struct qcom_pcie *pcie = dev_get_drvdata(dev);
>>> +       int ret;
>>> +
>>> +       /*
>>> +        * Set minimum bandwidth required to keep data path functional during
>>> +        * suspend.
>>> +        */
>>> +       ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
>>> +       if (ret) {
>>> +               dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
>>> +               return ret;
>>> +       }
>>> +
>>> +       /*
>>> +        * Turn OFF the resources only for controllers without active PCIe
>>> +        * devices. For controllers with active devices, the resources are kept
>>> +        * ON and the link is expected to be in L0/L1 (sub)states.
>>> +        *
>>> +        * Turning OFF the resources for controllers with active PCIe devices
>>> +        * will trigger access violation during the end of the suspend cycle,
>>> +        * as kernel tries to access the PCIe devices config space for masking
>>> +        * MSIs.
>>> +        *
>>> +        * Also, it is not desirable to put the link into L2/L3 state as that
>>> +        * implies VDD supply will be removed and the devices may go into
>>> +        * powerdown state. This will affect the lifetime of the storage devices
>>> +        * like NVMe.
>>> +        */
>>> +       if (!dw_pcie_link_up(pcie->pci)) {
>>> +               qcom_pcie_host_deinit(&pcie->pci->pp);
>>> +               pcie->suspended = true;
>>> +       }
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static int qcom_pcie_resume_noirq(struct device *dev)
>>> +{
>>> +       struct qcom_pcie *pcie = dev_get_drvdata(dev);
>>> +       int ret;
>>> +
>>> +       if (pcie->suspended) {
>>> +               ret = qcom_pcie_host_init(&pcie->pci->pp);
>>> +               if (ret)
>>> +                       return ret;
>>> +
>>> +               pcie->suspended = false;
>>> +       }
>>> +
>>> +       qcom_pcie_icc_update(pcie);
>>> +
>>> +       return 0;
>>> +}
>>> +
>>>  static const struct of_device_id qcom_pcie_match[] = {
>>>         { .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 },
>>>         { .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 },
>>> @@ -1856,12 +1913,17 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, qcom_fixup_class);
>>>  DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, qcom_fixup_class);
>>>  DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, qcom_fixup_class);
>>>
>>> +static const struct dev_pm_ops qcom_pcie_pm_ops = {
>>> +       NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq, qcom_pcie_resume_noirq)
>>> +};
>>> +
>>>  static struct platform_driver qcom_pcie_driver = {
>>>         .probe = qcom_pcie_probe,
>>>         .driver = {
>>>                 .name = "qcom-pcie",
>>>                 .suppress_bind_attrs = true,
>>>                 .of_match_table = qcom_pcie_match,
>>> +               .pm = &qcom_pcie_pm_ops,
>>>         },
>>>  };
>>>  builtin_platform_driver(qcom_pcie_driver);
>>> --
>>> 2.25.1
>>
> 
