Message-ID: <20230329125232.GB5575@thinkpad>
Date: Wed, 29 Mar 2023 18:22:32 +0530
From: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
To: Johan Hovold <johan@...nel.org>
Cc: lpieralisi@...nel.org, kw@...ux.com, robh@...nel.org,
andersson@...nel.org, konrad.dybcio@...aro.org,
bhelgaas@...gle.com, linux-pci@...r.kernel.org,
linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org,
quic_krichai@...cinc.com, johan+linaro@...nel.org, steev@...i.org,
mka@...omium.org, Dhruva Gole <d-gole@...com>
Subject: Re: [PATCH v3 1/1] PCI: qcom: Add support for system suspend and
resume
On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote:
> On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote:
> > During the system suspend, vote for minimal interconnect bandwidth and
> > also turn OFF the resources like clock and PHY if there are no active
> > devices connected to the controller. For the controllers with active
> > devices, the resources are kept ON as removing the resources will
> > trigger access violation during the late end of suspend cycle as kernel
> > tries to access the config space of PCIe devices to mask the MSIs.
> >
> > Also, it is not desirable to put the link into L2/L3 state as that
> > implies VDD supply will be removed and the devices may go into powerdown
> > state. This will affect the lifetime of storage devices like NVMe.
> >
> > And finally, during resume, turn ON the resources if the controller was
> > truly suspended (resources OFF) and update the interconnect bandwidth
> > based on PCIe Gen speed.
> >
> > Suggested-by: Krishna chaitanya chundru <quic_krichai@...cinc.com>
> > Acked-by: Dhruva Gole <d-gole@...com>
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
> > ---
> > drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
> > 1 file changed, 62 insertions(+)
> >
> > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> > index a232b04af048..f33df536d9be 100644
> > --- a/drivers/pci/controller/dwc/pcie-qcom.c
> > +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> > @@ -227,6 +227,7 @@ struct qcom_pcie {
> > struct gpio_desc *reset;
> > struct icc_path *icc_mem;
> > const struct qcom_pcie_cfg *cfg;
> > + bool suspended;
> > };
> >
> > #define to_qcom_pcie(x) dev_get_drvdata((x)->dev)
> > @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev)
> > return ret;
> > }
> >
> > +static int qcom_pcie_suspend_noirq(struct device *dev)
> > +{
> > + struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > + int ret;
> > +
> > + /*
> > + * Set minimum bandwidth required to keep data path functional during
> > + * suspend.
> > + */
> > + ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
>
> This isn't really the minimum bandwidth you're setting here.
>
> I think you said off list that you didn't see real impact reducing the
> bandwidth, but have you tried requesting the real minimum which would be
> kBps_to_icc(1)?
>
> Doing so works fine here with both the CRD and X13s and may result in
> some further power savings.
>
No, we shouldn't be setting a random value as the bandwidth. The reason is that
these values are computed by the bus team based on the requirements of the
interconnect paths (clock, voltage, etc.) at the actual PCIe Gen speeds. I don't
know the potential implications even if a lower value happens to work.

- Mani
> > + if (ret) {
> > + dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
> > + return ret;
> > + }
>
> Johan
--
மணிவண்ணன் சதாசிவம்