Message-ID: <aeaf0b2a-7fdf-4e23-97bc-7bfc3fd05f41@quicinc.com>
Date: Fri, 15 Dec 2023 11:15:16 +0530
From: Praveenkumar I <quic_ipkumar@...cinc.com>
To: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
CC: <agross@...nel.org>, <andersson@...nel.org>, <konrad.dybcio@...aro.org>,
<mturquette@...libre.com>, <sboyd@...nel.org>, <robh+dt@...nel.org>,
<krzysztof.kozlowski+dt@...aro.org>, <conor+dt@...nel.org>,
<bhelgaas@...gle.com>, <lpieralisi@...nel.org>, <kw@...ux.com>,
<vkoul@...nel.org>, <kishon@...nel.org>, <mani@...nel.org>,
<quic_nsekar@...cinc.com>, <quic_srichara@...cinc.com>,
<linux-arm-msm@...r.kernel.org>, <linux-clk@...r.kernel.org>,
<devicetree@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-pci@...r.kernel.org>, <linux-phy@...ts.infradead.org>,
<quic_varada@...cinc.com>, <quic_devipriy@...cinc.com>,
<quic_kathirav@...cinc.com>, <quic_anusha@...cinc.com>
Subject: Re: [PATCH 06/10] phy: qcom: ipq5332: Add support for g3x1 and g3x2 PCIe PHYs
On 12/14/2023 12:42 PM, Dmitry Baryshkov wrote:
> On Thu, 14 Dec 2023 at 08:30, Praveenkumar I <quic_ipkumar@...cinc.com> wrote:
>> Add support for the single-lane and dual-lane PCIe UNIPHY found on the
>> Qualcomm IPQ5332 platform. This UNIPHY is similar to the one present in
>> the Qualcomm IPQ5018.
>>
>> Signed-off-by: Praveenkumar I <quic_ipkumar@...cinc.com>
>> ---
>> This patch depends on the below series, which adds PCIe support for the
>> Qualcomm IPQ5018:
>> https://lore.kernel.org/all/20231003120846.28626-1-quic_nsekar@quicinc.com/
>>
>> .../phy/qualcomm/phy-qcom-uniphy-pcie-28lp.c | 44 +++++++++++++++++++
>> 1 file changed, 44 insertions(+)
>>
>> diff --git a/drivers/phy/qualcomm/phy-qcom-uniphy-pcie-28lp.c b/drivers/phy/qualcomm/phy-qcom-uniphy-pcie-28lp.c
>> index 9f9a03faf6fa..aa71b85eb50e 100644
>> --- a/drivers/phy/qualcomm/phy-qcom-uniphy-pcie-28lp.c
>> +++ b/drivers/phy/qualcomm/phy-qcom-uniphy-pcie-28lp.c
>> @@ -34,6 +34,10 @@
>> #define SSCG_CTRL_REG_6 0xb0
>> #define PCS_INTERNAL_CONTROL_2 0x2d8
>>
>> +#define PHY_CFG_PLLCFG 0x220
>> +#define PHY_CFG_EIOS_DTCT_REG 0x3e4
>> +#define PHY_CFG_GEN3_ALIGN_HOLDOFF_TIME 0x3e8
>> +
>> #define PHY_MODE_FIXED 0x1
>>
>> enum qcom_uniphy_pcie_type {
>> @@ -112,6 +116,21 @@ static const struct uniphy_regs ipq5018_regs[] = {
>> },
>> };
>>
>> +static const struct uniphy_regs ipq5332_regs[] = {
>> + {
>> + .offset = PHY_CFG_PLLCFG,
>> + .val = 0x30,
>> + },
>> + {
>> + .offset = PHY_CFG_EIOS_DTCT_REG,
>> + .val = 0x53ef,
>> + },
>> + {
>> + .offset = PHY_CFG_GEN3_ALIGN_HOLDOFF_TIME,
>> + .val = 0xcf,
>> + },
>> +};
>> +
>> static const struct uniphy_pcie_data ipq5018_2x2_data = {
>> .lanes = 2,
>> .lane_offset = 0x800,
>> @@ -121,6 +140,23 @@ static const struct uniphy_pcie_data ipq5018_2x2_data = {
>> .pipe_clk_rate = 125000000,
>> };
>>
>> +static const struct uniphy_pcie_data ipq5332_x2_data = {
>> + .lanes = 2,
>> + .lane_offset = 0x800,
>> + .phy_type = PHY_TYPE_PCIE_GEN3,
>> + .init_seq = ipq5332_regs,
>> + .init_seq_num = ARRAY_SIZE(ipq5332_regs),
>> + .pipe_clk_rate = 250000000,
>> +};
>> +
>> +static const struct uniphy_pcie_data ipq5332_x1_data = {
>> + .lanes = 1,
>> + .phy_type = PHY_TYPE_PCIE_GEN3,
>> + .init_seq = ipq5332_regs,
>> + .init_seq_num = ARRAY_SIZE(ipq5332_regs),
>> + .pipe_clk_rate = 250000000,
>> +};
> Please keep structs sorted out.
Sure, will address this in the next patch set.
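Something like the below is what I have in mind for the next revision,
assuming you mean keeping the data structs sorted by name (the existing
ipq5018 entry first, then ipq5332 x1 before x2); the field values are
unchanged from this patch:

static const struct uniphy_pcie_data ipq5332_x1_data = {
	/* IPQ5332 single-lane Gen3 PHY */
	.lanes = 1,
	.phy_type = PHY_TYPE_PCIE_GEN3,
	.init_seq = ipq5332_regs,
	.init_seq_num = ARRAY_SIZE(ipq5332_regs),
	.pipe_clk_rate = 250000000,
};

static const struct uniphy_pcie_data ipq5332_x2_data = {
	/* IPQ5332 dual-lane Gen3 PHY */
	.lanes = 2,
	.lane_offset = 0x800,
	.phy_type = PHY_TYPE_PCIE_GEN3,
	.init_seq = ipq5332_regs,
	.init_seq_num = ARRAY_SIZE(ipq5332_regs),
	.pipe_clk_rate = 250000000,
};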
>
>> +
>> static void qcom_uniphy_pcie_init(struct qcom_uniphy_pcie *phy)
>> {
>> const struct uniphy_pcie_data *data = phy->data;
>> @@ -270,6 +306,14 @@ static const struct of_device_id qcom_uniphy_pcie_id_table[] = {
>> .compatible = "qcom,ipq5018-uniphy-pcie-gen2x2",
>> .data = &ipq5018_2x2_data,
>> },
>> + {
>> + .compatible = "qcom,ipq5332-uniphy-pcie-gen3x2",
>> + .data = &ipq5332_x2_data,
>> + },
>> + {
>> + .compatible = "qcom,ipq5332-uniphy-pcie-gen3x1",
>> + .data = &ipq5332_x1_data,
> The entries here should be sorted out.
Will take care of this in the next patch set.
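For reference, I plan to reorder the entries roughly as below in the next
revision, assuming plain alphabetical order by compatible string is what
you are looking for:

	{
		/* Entries sorted by compatible string */
		.compatible = "qcom,ipq5018-uniphy-pcie-gen2x2",
		.data = &ipq5018_2x2_data,
	},
	{
		.compatible = "qcom,ipq5332-uniphy-pcie-gen3x1",
		.data = &ipq5332_x1_data,
	},
	{
		.compatible = "qcom,ipq5332-uniphy-pcie-gen3x2",
		.data = &ipq5332_x2_data,
	},
	{ /* Sentinel */ },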
>
>> + },
>> { /* Sentinel */ },
>> };
>> MODULE_DEVICE_TABLE(of, qcom_uniphy_pcie_id_table);
>> --
>> 2.34.1
>>
>>
>
--
Thanks,
Praveenkumar