Message-ID: <Y7hlr3x9IrT/Kg82@google.com>
Date: Fri, 6 Jan 2023 18:17:19 +0000
From: Matthias Kaehlcke <mka@...omium.org>
To: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
Cc: Dhruva Gole <d-gole@...com>, lpieralisi@...nel.org,
robh@...nel.org, andersson@...nel.org, konrad.dybcio@...aro.org,
kw@...ux.com, bhelgaas@...gle.com, linux-pci@...r.kernel.org,
linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org,
quic_krichai@...cinc.com, johan+linaro@...nel.org, steev@...i.org
Subject: Re: [PATCH 1/1] PCI: qcom: Add support for system suspend and resume
On Thu, Jan 05, 2023 at 07:06:39PM +0530, Manivannan Sadhasivam wrote:
> On Tue, Jan 03, 2023 at 04:46:11PM +0530, Dhruva Gole wrote:
> >
> >
> > On 03/01/23 13:19, Manivannan Sadhasivam wrote:
> > > During system suspend, vote for minimal interconnect bandwidth and
> > > also turn OFF resources like the clock and PHY if there are no active
> > > devices connected to the controller. For controllers with active
> > > devices, the resources are kept ON, as removing them would trigger an
> > > access violation late in the suspend cycle when the kernel tries to
> > > access the config space of PCIe devices to mask the MSIs.
> > >
> > > Also, it is not desirable to put the link into L2/L3 state, as that
> > > implies the VDD supply will be removed and the devices may go into
> > > powerdown state. This would affect the lifetime of storage devices
> > > like NVMe.
> > >
> > > And finally, during resume, turn ON the resources if the controller was
> > > truly suspended (resources OFF) and update the interconnect bandwidth
> > > based on PCIe Gen speed.
> > >
> > > Suggested-by: Krishna chaitanya chundru <quic_krichai@...cinc.com>
> > > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
> > > ---
> >
> > Nice to have another driver added to the list of system suspend
> > support!
> >
> > Acked-by: Dhruva Gole <d-gole@...com>
> >
> > > drivers/pci/controller/dwc/pcie-qcom.c | 52 ++++++++++++++++++++++++++
> > > 1 file changed, 52 insertions(+)
> > >
> > > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> > > index 5696e327795b..48810f1f2dba 100644
> > > --- a/drivers/pci/controller/dwc/pcie-qcom.c
> > > +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> > > @@ -227,6 +227,7 @@ struct qcom_pcie {
> > > struct gpio_desc *reset;
> > > struct icc_path *icc_mem;
> > > const struct qcom_pcie_cfg *cfg;
> > > + bool suspended;
> > > };
> > > #define to_qcom_pcie(x) dev_get_drvdata((x)->dev)
> > > @@ -1835,6 +1836,52 @@ static int qcom_pcie_remove(struct platform_device *pdev)
> > > return 0;
> > > }
> > > +static int qcom_pcie_suspend_noirq(struct device *dev)
> > > +{
> > > + struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > > + int ret;
> > > +
> > > + ret = icc_set_bw(pcie->icc_mem, 0, 0);
> > > + if (ret) {
> > > + dev_err(pcie->pci->dev, "Failed to set interconnect bandwidth: %d\n", ret);
> > > + return ret;
> > > + }
> > > +
> > > + /*
> > > + * Turn OFF the resources only for controllers without active PCIe devices. For controllers
> > > + * with active devices, the resources are kept ON and the link is expected to be in L0/L1
> > > + * (sub)states.
> > > + *
> > > + * Turning OFF the resources for controllers with active PCIe devices will trigger an access
> > > + * violation towards the end of the suspend cycle, as the kernel tries to access the config
> > > + * space of the PCIe devices to mask the MSIs.
> > > + *
> > > + * Also, it is not desirable to put the link into L2/L3 state, as that implies the VDD supply
> > > + * will be removed and the devices may go into powerdown state. This would affect the
> > > + * lifetime of storage devices like NVMe.
> > > + */
> > > + if (!dw_pcie_link_up(pcie->pci)) {
> > > + qcom_pcie_host_deinit(&pcie->pci->pp);
> > > + pcie->suspended = true;
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int qcom_pcie_resume_noirq(struct device *dev)
> > > +{
> > > + struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > > +
> > > + if (pcie->suspended) {
> > > + qcom_pcie_host_init(&pcie->pci->pp);
> > > + pcie->suspended = false;
> > > + }
> > > +
> > > + qcom_pcie_icc_update(pcie);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > static const struct of_device_id qcom_pcie_match[] = {
> > > { .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 },
> > > { .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 },
> > > @@ -1870,12 +1917,17 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, qcom_fixup_class);
> > > DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, qcom_fixup_class);
> > > DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, qcom_fixup_class);
> > > +static const struct dev_pm_ops qcom_pcie_pm_ops = {
> > > + NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq, qcom_pcie_resume_noirq)
> > > +};
> > > +
> > > static struct platform_driver qcom_pcie_driver = {
> > > .probe = qcom_pcie_probe,
> > > .remove = qcom_pcie_remove,
> > > .driver = {
> > > .name = "qcom-pcie",
> > > .of_match_table = qcom_pcie_match,
> > > + .pm = &qcom_pcie_pm_ops,
> > > },
> > > };
> > > module_platform_driver(qcom_pcie_driver);
> >
> > Out of curiosity, were you able to measure how much power you saved
> > after adding suspend support for PCIe? I don't know if clock gating
> > really saves much power, but it's true that we can't really cut off
> > the power domain entirely in this case.
> >
>
> I did not measure the power consumption, and I agree that we won't save
> much power by setting the icc bandwidth to 0, but it is better to have
> something than nothing. In the coming days I plan to look into other
> power-saving measures as well.
On a sc7280 system I see a reduction of ~30mW with this patch when no PCI
card is plugged in. The reduction seems to come from powering down the PHY.
Interestingly, on that system the power consumption during suspend (without
this patch) is ~30mW higher *without* a PCI card than with one. Maybe the
PHY doesn't enter a low power mode when no card is plugged in?