Message-ID: <20230329140150.GE5575@thinkpad>
Date: Wed, 29 Mar 2023 19:31:50 +0530
From: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
To: Johan Hovold <johan@...nel.org>
Cc: lpieralisi@...nel.org, kw@...ux.com, robh@...nel.org,
andersson@...nel.org, konrad.dybcio@...aro.org,
bhelgaas@...gle.com, linux-pci@...r.kernel.org,
linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org,
quic_krichai@...cinc.com, johan+linaro@...nel.org, steev@...i.org,
mka@...omium.org, Dhruva Gole <d-gole@...com>
Subject: Re: [PATCH v3 1/1] PCI: qcom: Add support for system suspend and
resume
On Wed, Mar 29, 2023 at 03:19:51PM +0200, Johan Hovold wrote:
> On Wed, Mar 29, 2023 at 06:22:32PM +0530, Manivannan Sadhasivam wrote:
> > On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote:
> > > On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote:
>
> > > > +static int qcom_pcie_suspend_noirq(struct device *dev)
> > > > +{
> > > > + struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > > > + int ret;
> > > > +
> > > > + /*
> > > > + * Set minimum bandwidth required to keep data path functional during
> > > > + * suspend.
> > > > + */
> > > > + ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
> > >
> > > This isn't really the minimum bandwidth you're setting here.
> > >
> > > I think you said off list that you didn't see real impact reducing the
> > > bandwidth, but have you tried requesting the real minimum which would be
> > > kBps_to_icc(1)?
> > >
> > > Doing so works fine here with both the CRD and X13s and may result in
> > > some further power savings.
> > >
> >
> > No, we shouldn't be setting an arbitrary value as the bandwidth. The reason
> > is that these values are computed by the bus team based on the requirements
> > of the interconnect paths (clock, voltage, etc.) at actual PCIe Gen speeds.
> > I don't know the potential implications even if an arbitrary value happens
> > to work.
>
> Why would you need PCIe gen1 speed during suspend?
>
That's the suggestion I got from the Qcom PCIe team. But I didn't compare the
value you added in the icc support patch with downstream. More below...
> These numbers are already somewhat random as, for example, the vendor
> driver is requesting 500 kBps (800 peak) during runtime, while we are
> now requesting five times that during suspend (the vendor driver gets a
> away with 0).
>
Hmm, then I should've asked you this question when you added icc support.
I thought you inherited those values from downstream, but apparently not.
Even downstream uses different bandwidth votes for different platforms.
I will touch base with the PCIe and ICC teams to find out the actual value
that needs to be used.
Regarding the 0 icc vote: downstream puts all the devices in the D3cold
(power off) state during suspend, so a 0 icc vote works for them. It won't
work for us, since we need to keep the device and the link intact.
- Mani
> Sure, this indicates that the interconnect driver is broken and we
> should indeed be using values that at least makes some sense (and
> eventually fix the interconnect driver).
>
> Just not sure that you need to request that much bandwidth during
> suspend (e.g. for just a couple of register accesses).
>
> Johan
--
மணிவண்ணன் சதாசிவம்