Message-ID: <Zi88Hdx6UgBo/gti@hu-varada-blr.qualcomm.com>
Date: Mon, 29 Apr 2024 11:50:13 +0530
From: Varadarajan Narayanan <quic_varada@...cinc.com>
To: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
CC: Alex G. <mr.nuke.me@...il.com>, Bjorn Andersson <andersson@...nel.org>,
        Konrad Dybcio <konrad.dybcio@...aro.org>,
        Lorenzo Pieralisi <lpieralisi@...nel.org>,
        Krzysztof Wilczyński <kw@...ux.com>,
        Rob Herring <robh@...nel.org>, Bjorn Helgaas <bhelgaas@...gle.com>,
        Krzysztof Kozlowski <krzk+dt@...nel.org>,
        Conor Dooley <conor+dt@...nel.org>, Vinod Koul <vkoul@...nel.org>,
        Kishon Vijay Abraham I <kishon@...nel.org>,
        Michael Turquette <mturquette@...libre.com>,
        Stephen Boyd <sboyd@...nel.org>,
        Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>,
        <linux-arm-msm@...r.kernel.org>, <linux-pci@...r.kernel.org>,
        <devicetree@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <linux-phy@...ts.infradead.org>, <linux-clk@...r.kernel.org>
Subject: Re: [PATCH v3 6/7] phy: qcom-qmp-pcie: add support for ipq9574 gen3x2 PHY

On Wed, Apr 17, 2024 at 12:50:49AM +0300, Dmitry Baryshkov wrote:
> On Wed, 17 Apr 2024 at 00:25, Alex G. <mr.nuke.me@...il.com> wrote:
> >
> > Hi Dmitry,
> >
> > On 4/15/24 16:25, mr.nuke.me@...il.com wrote:
> > >
> > >
> > > On 4/15/24 15:10, Dmitry Baryshkov wrote:
> > >> On Mon, 15 Apr 2024 at 21:23, Alexandru Gagniuc <mr.nuke.me@...il.com>
> > >> wrote:
> > >>>
> > >>> Add support for the gen3x2 PCIe PHY on IPQ9574, ported from the downstream
> > >>> 5.4 kernel. Only the serdes and pcs_misc tables are new, the others
> > >>> being reused from IPQ8074 and IPQ6018 PHYs.
> > >>>
> > >>> Signed-off-by: Alexandru Gagniuc <mr.nuke.me@...il.com>
> > >>> ---
> > >>>   drivers/phy/qualcomm/phy-qcom-qmp-pcie.c      | 136 +++++++++++++++++-
> > >>>   .../phy/qualcomm/phy-qcom-qmp-pcs-pcie-v5.h   |  14 ++
> > >>>   2 files changed, 149 insertions(+), 1 deletion(-)
> > >>>
> > >>
> > >> [skipped]
> > >>
> > >>> @@ -2448,7 +2542,7 @@ static inline void qphy_clrbits(void __iomem *base, u32 offset, u32 val)
> > >>>
> > >>>   /* list of clocks required by phy */
> > >>>   static const char * const qmp_pciephy_clk_l[] = {
> > >>> -       "aux", "cfg_ahb", "ref", "refgen", "rchng", "phy_aux",
> > >>> +       "aux", "cfg_ahb", "ref", "refgen", "rchng", "phy_aux",
> > >>> "anoc", "snoc"
> > >>
> > >> Are the NoC clocks really necessary to drive the PHY? I think they are
> > >> usually connected to the controllers, not the PHYs.
> > >
> > > The system will hang if these clocks are not enabled. They are also
> > > attached to the PHY in the QCA 5.4 downstream kernel.
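> > >
> > > For what it's worth, nothing special is needed on the driver side
> > > for these two: every name in qmp_pciephy_clk_l goes through the
> > > same bulk-clock path. A rough sketch of that path (field names
> > > assumed, not the exact upstream code):
> > >
> > >     static int qmp_pcie_clk_init(struct device *dev, struct qmp_pcie *qmp)
> > >     {
> > >             int num = ARRAY_SIZE(qmp_pciephy_clk_l);
> > >             int i;
> > >
> > >             qmp->clks = devm_kcalloc(dev, num, sizeof(*qmp->clks),
> > >                                      GFP_KERNEL);
> > >             if (!qmp->clks)
> > >                     return -ENOMEM;
> > >
> > >             for (i = 0; i < num; i++)
> > >                     qmp->clks[i].id = qmp_pciephy_clk_l[i];
> > >
> > >             /* optional, so SoCs without "anoc"/"snoc" still probe */
> > >             return devm_clk_bulk_get_optional(dev, num, qmp->clks);
> > >     }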
>
> Interesting.
> I see that Varadarajan is converting these clocks into interconnects.
> Maybe it's better to wait for those patches to land and use
> interconnects instead. I think it would better suit the
> infrastructure.
>
> Varadarajan, could you please comment, are these interconnects
> connected to the PHY too or just to the PCIe controller?

Sorry for the late response. Missed this e-mail.

These two clocks are related to the AXI port clocks on the Aggnoc/SNOC.
They are not directly connected to the PCIe wrapper, but they must be
enabled to generate PCIe traffic.
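
Once the interconnect series lands, the consumer side would vote for
these paths instead of taking bare clocks. Roughly (illustrative
only; the "pcie-mem" path name is a placeholder, not from the final
binding):

	struct icc_path *path;
	int ret;

	path = devm_of_icc_get(dev, "pcie-mem");	/* name assumed */
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* any non-zero bandwidth vote keeps the NoC path enabled */
	ret = icc_set_bw(path, 0, MBps_to_icc(1000));
	if (ret)
		return ret;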

Thanks
Varada

> > They are named "anoc_lane" and "snoc_lane" in the downstream kernel.
> > Would you like me to use these names instead?
>
> I'm fine either way.
>
> > >>>   };
> > >>>
> > >>>   /* list of regulators */
> > >>> @@ -2499,6 +2593,16 @@ static const struct qmp_pcie_offsets qmp_pcie_offsets_v4x1 = {
> > >>>          .rx             = 0x0400,
> > >>>   };
> > >>>
> > >>> +static const struct qmp_pcie_offsets qmp_pcie_offsets_ipq9574 = {
> > >>> +       .serdes         = 0,
> > >>> +       .pcs            = 0x1000,
> > >>> +       .pcs_misc       = 0x1400,
> > >>> +       .tx             = 0x0200,
> > >>> +       .rx             = 0x0400,
> > >>> +       .tx2            = 0x0600,
> > >>> +       .rx2            = 0x0800,
> > >>> +};
> > >>> +
> > >>>   static const struct qmp_pcie_offsets qmp_pcie_offsets_v4x2 = {
> > >>>          .serdes         = 0,
> > >>>          .pcs            = 0x0a00,
> > >>> @@ -2728,6 +2832,33 @@ static const struct qmp_phy_cfg sm8250_qmp_gen3x1_pciephy_cfg = {
> > >>>          .phy_status             = PHYSTATUS,
> > >>>   };
> > >>>
> > >>> +static const struct qmp_phy_cfg ipq9574_pciephy_gen3x2_cfg = {
> > >>> +       .lanes                  = 2,
> > >>> +
> > >>> +       .offsets                = &qmp_pcie_offsets_ipq9574,
> > >>> +
> > >>> +       .tbls = {
> > >>> +               .serdes         = ipq9574_gen3x2_pcie_serdes_tbl,
> > >>> +               .serdes_num     = ARRAY_SIZE(ipq9574_gen3x2_pcie_serdes_tbl),
> > >>> +               .tx             = ipq8074_pcie_gen3_tx_tbl,
> > >>> +               .tx_num         = ARRAY_SIZE(ipq8074_pcie_gen3_tx_tbl),
> > >>> +               .rx             = ipq6018_pcie_rx_tbl,
> > >>> +               .rx_num         = ARRAY_SIZE(ipq6018_pcie_rx_tbl),
> > >>> +               .pcs            = ipq6018_pcie_pcs_tbl,
> > >>> +               .pcs_num        = ARRAY_SIZE(ipq6018_pcie_pcs_tbl),
> > >>> +               .pcs_misc       = ipq9574_gen3x2_pcie_pcs_misc_tbl,
> > >>> +               .pcs_misc_num   = ARRAY_SIZE(ipq9574_gen3x2_pcie_pcs_misc_tbl),
> > >>> +       },
> > >>> +       .reset_list             = ipq8074_pciephy_reset_l,
> > >>> +       .num_resets             = ARRAY_SIZE(ipq8074_pciephy_reset_l),
> > >>> +       .vreg_list              = NULL,
> > >>> +       .num_vregs              = 0,
> > >>> +       .regs                   = pciephy_v4_regs_layout,
> > >>
> > >> So, is it v4 or v5?
> > >
> > > Please give me a day or so to go over my notes and give you a more
> > > coherent explanation of why this versioning was chosen. I am only
> > > working from the QCA 5.4 downstream sources. I don't have any
> > > documentation for the silicon.
> >
> > The downstream QCA kernel uses the same table for ipq6018, ipq8074-gen3,
> > and ipq9574. It is named "ipq_pciephy_gen3_regs_layout". Thus, it made
> > sense to use the same upstream table for ipq9574, "pciephy_v4_regs_layout".
> >
> > As far as the register tables go, the pcs/pcs_misc are squashed into the
> > same table in the downstream 5.4 kernel. I was able to separate the two
> > tables because the pcs_misc registers were defined with an offset of
> > 0x400. For example:
> >
> > /* QMP V2 PHY for PCIE gen3 2 Lane ports - PCS Misc registers */
> > #define PCS_PCIE_X2_POWER_STATE_CONFIG2                    0x40c
> > #define PCS_PCIE_X2_POWER_STATE_CONFIG4                    0x414
> > #define PCS_PCIE_X2_ENDPOINT_REFCLK_DRIVE                  0x420
> > #define PCS_PCIE_X2_L1P1_WAKEUP_DLY_TIME_AUXCLK_L          0x444
> > #define PCS_PCIE_X2_L1P1_WAKEUP_DLY_TIME_AUXCLK_H          0x448
> > #define PCS_PCIE_X2_L1P2_WAKEUP_DLY_TIME_AUXCLK_L          0x44c
> > #define PCS_PCIE_X2_L1P2_WAKEUP_DLY_TIME_AUXCLK_H          0x450
> > ...
> >
> > Here, QPHY_V4_PCS_PCIE_POWER_STATE_CONFIG2 = 0xc would be correct,
> > assuming a pcs_misc offset of 0x400. However, starting with
> > ENDPOINT_REFCLK_DRIVE, the register would be
> > QPHY_V4_PCS_PCIE_ENDPOINT_REFCLK_DRIVE = 0x1c. The upstream V4 offsets are off by 0x4.
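> >
> > To make that concrete, with a pcs_misc base of 0x400 (illustrative
> > arithmetic from the defines quoted above):
> >
> >     0x40c - 0x400 = 0x0c == QPHY_V4_PCS_PCIE_POWER_STATE_CONFIG2    (matches)
> >     0x420 - 0x400 = 0x20 != QPHY_V4_PCS_PCIE_ENDPOINT_REFCLK_DRIVE  (0x1c)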
> >
> > The existing V5 offsets, on the other hand, were all correct. For this
> > reason, I considered that V5 is the most likely place to add the missing
> > PCS misc definitions.
>
> Ok, sounds sane. Please use _v5 for the regs layout.
>
> >
> > Is this explanation sufficiently convincing? Where does the v4/v5 scheme
> > in the upstream kernel originate?
>
> Sometimes it's vendor kernels, sometimes it's feedback from devs
> that have access to actual specs.
>
>
> --
> With best wishes
> Dmitry
