Message-ID: <Zmgb+OjdBNw71sC1@hu-varada-blr.qualcomm.com>
Date: Tue, 11 Jun 2024 15:12:16 +0530
From: Varadarajan Narayanan <quic_varada@...cinc.com>
To: Konrad Dybcio <konrad.dybcio@...aro.org>
CC: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
Georgi Djakov
<djakov@...nel.org>, <andersson@...nel.org>,
<mturquette@...libre.com>, <sboyd@...nel.org>, <robh@...nel.org>,
<krzk+dt@...nel.org>, <conor+dt@...nel.org>, <quic_anusha@...cinc.com>,
<linux-arm-msm@...r.kernel.org>, <linux-clk@...r.kernel.org>,
<devicetree@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-pm@...r.kernel.org>
Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider
ability to gcc
On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
> On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
> > On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
> > <quic_varada@...cinc.com> wrote:
> >>
> >> On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
> >>> Hi Varada,
> >>>
> >>> Thank you for your work on this!
> >>>
> >>> On 2.05.24 12:30, Varadarajan Narayanan wrote:
> >>>> On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> >>>>> On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> >>>>>> On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> On 4/18/24 11:23, Varadarajan Narayanan wrote:
> >>>>>>>> IPQ SoCs don't involve RPM in managing NoC related clocks and
> >>>>>>>> there is no NoC scaling. Linux itself handles these clocks.
> >>>>>>>> However, these should not be exposed as just clocks; they
> >>>>>>>> should align with other Qualcomm SoCs that handle these clocks
> >>>>>>>> from an interconnect provider.
> >>>>>>>>
> >>>>>>>> Hence, add icc provider capability to the gcc node so that
> >>>>>>>> peripherals can use the interconnect facility to enable these
> >>>>>>>> clocks.
> >>>>>>>>
> >>>>>>>> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
> >>>>>>>> Signed-off-by: Varadarajan Narayanan <quic_varada@...cinc.com>
> >>>>>>>> ---
> >>>>>>>
> >>>>>>> If this is all you do to enable interconnect (which is not the case,
> >>>>>>> as this patch only satisfies the bindings checker, the meaningful
> >>>>>>> change happens in the previous patch) and nothing explodes, this is
> >>>>>>> an apparent sign of your driver doing nothing.
> >>>>>>
> >>>>>> It appears to do nothing because we are just enabling the clock
> >>>>>> provider to also act as an interconnect provider. Only when the
> >>>>>> consumers start requesting interconnect paths will those paths
> >>>>>> be created and the relevant NoC clocks turned on.
> >>>>>
> >>>>> No, with sync_state it actually does "something" (sets the interconnect
> >>>>> path bandwidths to zero). And *this* patch does nothing functionally,
> >>>>> it only makes the dt checker happy.
> >>>>
> >>>> I understand.
> >>>>
> >>>>>> This interconnect will be used by the PCIe and NSS blocks. When
> >>>>>> those patches were posted earlier, they were put on hold until
> >>>>>> an interconnect driver became available.
> >>>>>>
> >>>>>> Once this patch gets in, PCIe for example will make use of icc.
> >>>>>> Please refer to https://lore.kernel.org/linux-arm-msm/20230519090219.15925-5-quic_devipriy@quicinc.com/.
> >>>>>>
> >>>>>> The 'pcieX' nodes will include the following entries.
> >>>>>>
> >>>>>> interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> >>>>>> <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
> >>>>>> interconnect-names = "pcie-mem", "cpu-pcie";
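For reference, the consumer side would look roughly like the sketch
below. This is only an illustration of the usual icc consumer pattern
(devm_of_icc_get() plus icc_set_bw()); the function name and the
bandwidth values are made up and are not the actual pcie-qcom code.

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/interconnect.h>

	/* Illustrative sketch: grab the two paths named in the DT
	 * snippet above and vote a non-zero bandwidth so the NoC
	 * clocks stay enabled.
	 */
	static int example_pcie_icc_init(struct device *dev)
	{
		struct icc_path *mem_path, *cpu_path;
		int ret;

		mem_path = devm_of_icc_get(dev, "pcie-mem");
		if (IS_ERR(mem_path))
			return PTR_ERR(mem_path);

		cpu_path = devm_of_icc_get(dev, "cpu-pcie");
		if (IS_ERR(cpu_path))
			return PTR_ERR(cpu_path);

		/* avg_bw = 0, peak_bw in kBps; placeholder values */
		ret = icc_set_bw(mem_path, 0, MBps_to_icc(250));
		if (ret)
			return ret;

		return icc_set_bw(cpu_path, 0, MBps_to_icc(250));
	}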
> >>>>>
> >>>>> Okay. What about USB that's already enabled? And BIMC/MEMNOC?
> >>>>
> >>>> For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
> >>>> clock. Hence, interconnect is not specified there.
> >>>>
> >>>> MEMNOC to System NOC interfaces seem to be enabled automatically.
> >>>> Software doesn't have to turn on or program specific clocks.
> >>>>
> >>>>>>> The expected reaction to "enabling interconnect" without defining the
> >>>>>>> required paths for your hardware would be a crash-on-sync_state, as all
> >>>>>>> unused (from Linux's POV) resources ought to be shut down.
> >>>>>>>
> >>>>>>> Because you lack sync_state, the interconnects silently retain the state
> >>>>>>> that they were left in (which is not deterministic), and that's precisely
> >>>>>>> what we want to avoid.
> >>>>>>
> >>>>>> I tried setting the driver's 'sync_state' callback to
> >>>>>> icc_sync_state so that it gets invoked, and didn't see any crash.
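For completeness, opting in on the provider side is just a matter of
pointing the clock driver's .sync_state at the generic icc_sync_state()
helper, roughly as below. The probe/match-table symbols are assumed to
be the ones in the existing gcc-ipq9574 driver and may not match it
exactly.

	#include <linux/interconnect-provider.h>
	#include <linux/platform_device.h>

	static struct platform_driver gcc_ipq9574_driver = {
		.probe = gcc_ipq9574_probe,
		.driver = {
			.name = "gcc-ipq9574",
			.of_match_table = gcc_ipq9574_match_table,
			/* let the icc core drop unused requests at sync_state */
			.sync_state = icc_sync_state,
		},
	};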
> >>>>>
> >>>>> Have you confirmed that the registers are actually written to, and with
> >>>>> correct values?
> >>>>
> >>>> I tried the following combinations:-
> >>>>
> >>>> 1. Top of tree linux-next + This patch set
> >>>>
> >>>> * icc_sync_state called
> >>>> * No crash or hang observed
> >>>> * From /sys/kernel/debug/clk/clk_summary we can see that the
> >>>> relevant clocks are set to the expected rates (compared
> >>>> with the downstream kernel)
> >>>>
> >>>> 2. Top of tree linux-next + This patch set + PCIe enablement
> >>>>
> >>>> * icc_sync_state NOT called
> >>>
> >>> If sync_state() is not being called, that usually means that there
> >>> are interconnect consumers (PCIe?) or their dependencies that
> >>> haven't probed successfully. That can be checked in
> >>> /sys/class/devlink/.../status, but I am not sure how this works for
> >>> PCI devices.
> >>>
> >>> You can also manually force a call to sync_state by writing "1" to
> >>> the interconnect provider's /sys/devices/.../state_synced
> >>>
> >>> Anyway, the question is if PCIe and NSS work without this driver?
> >>
> >> No.
> >>
> >>> If they work, is this because the clocks are turned on by default
> >>> or by the boot loader?
> >>
> >> Initially, the PCIe/NSS drivers enabled these clocks directly by
> >> listing them in their own DT nodes. Based on community feedback this
> >> was removed, and after that PCIe/NSS did not work.
> >>
> >>> Then if an interconnect path (clock) gets disabled, either when we
> >>> reach sync_state (with no bandwidth requests) or when we explicitly
> >>> call icc_set_bw() with 0 bandwidth values, I would expect that
> >>> these PCIe and NSS devices would not function anymore (it might
> >>> save some power etc), and if this is unexpected we should see a
> >>> crash or hang...
> >>>
> >>> Can you confirm this?
> >>
> >> With ICC enabled, icc_set_bw (with non-zero values) is called by
> >> the PCIe and NSS drivers. Haven't checked icc_set_bw with zero
> >> values yet.
> >>
> >> PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
> >> NSS: ppe_icc_init -> icc_set_bw
> >>
> >> I believe sync_state is not getting called since there is a
> >> non-zero bandwidth request outstanding, which seems to be aligned
> >> with your explanation.
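A quick way to check Georgi's zero-bandwidth question above would be
something like the sketch below (the path pointers are assumed to be
the ones obtained at probe time; the function name is made up):

	#include <linux/interconnect.h>

	/* Drop both votes to zero and see whether the endpoint
	 * keeps working afterwards.
	 */
	static int example_pcie_icc_drop(struct icc_path *mem_path,
					 struct icc_path *cpu_path)
	{
		int ret;

		ret = icc_set_bw(mem_path, 0, 0);
		if (ret)
			return ret;

		return icc_set_bw(cpu_path, 0, 0);
	}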
> >
> > This doesn't look correct. sync_state is being called once all
> > consumers are probed. It doesn't matter whether those consumers have
> > non-zero bandwidth requests or not.
>
> /sys/kernel/debug/devices_deferred may have some useful info, too
/sys/kernel/debug/devices_deferred seems to be empty
# mount | grep -w debugfs
none on /sys/kernel/debug type debugfs (rw,relatime)
# cat /sys/kernel/debug/devices_deferred | wc -l
0
Added the following print to icc_sync_state:

@@ -1096,6 +1096,7 @@ void icc_sync_state(struct device *dev)
 	struct icc_node *n;
 	static int count;
+	printk("--> %s: %d %d\n", __func__, providers_count, count);
 	count++;
 	if (count < providers_count)
 		return;
icc_sync_state seems to be called only once:
# dmesg | grep icc_sync_state
[ 12.260544] --> icc_sync_state: 2 0
Since 'providers_count' is greater than 'count', icc_sync_state
seems to return before doing anything.
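For context, the counting logic in icc_sync_state() is roughly the
following (paraphrased from drivers/interconnect/core.c, not copied
verbatim): it gets called once per provider whose driver sets
.sync_state, and only the final call actually walks the nodes and
drops the unused requests.

	void icc_sync_state(struct device *dev)
	{
		static int count;

		count++;

		/* bail out until every registered provider has reached
		 * sync_state
		 */
		if (count < providers_count)
			return;

		/* only the last caller gets here and zeroes the unused
		 * bandwidth votes
		 */
	}

So seeing a single call with providers_count == 2 would suggest the
other registered provider either has no .sync_state callback or still
has consumers that haven't probed.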
Thanks
Varada