Message-ID: <50346565-20d0-4ef9-80a5-e08070fdefb6@oss.qualcomm.com>
Date: Thu, 18 Sep 2025 14:40:12 +0800
From: Xiangxu Yin <xiangxu.yin@....qualcomm.com>
To: Dmitry Baryshkov <dmitry.baryshkov@....qualcomm.com>
Cc: Vinod Koul <vkoul@...nel.org>, Kishon Vijay Abraham I <kishon@...nel.org>,
	Rob Herring <robh@...nel.org>, Krzysztof Kozlowski <krzk+dt@...nel.org>,
	Conor Dooley <conor+dt@...nel.org>, Philipp Zabel <p.zabel@...gutronix.de>,
	Rob Clark <robin.clark@....qualcomm.com>, Dmitry Baryshkov <lumag@...nel.org>,
	Abhinav Kumar <abhinav.kumar@...ux.dev>,
	Jessica Zhang <jessica.zhang@....qualcomm.com>,
	Sean Paul <sean@...rly.run>, Marijn Suijten <marijn.suijten@...ainline.org>,
	David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
	linux-arm-msm@...r.kernel.org, linux-phy@...ts.infradead.org,
	devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
	dri-devel@...ts.freedesktop.org, freedreno@...ts.freedesktop.org,
	fange.zhang@....qualcomm.com, yongxing.mou@....qualcomm.com,
	li.liu@....qualcomm.com, tingwei.zhang@....qualcomm.com,
	Bjorn Andersson <andersson@...nel.org>,
	Konrad Dybcio <konradybcio@...nel.org>
Subject: Re: [PATCH v4 13/13] drm/msm/dp: Add support for lane mapping
configuration
On 9/12/2025 6:42 PM, Dmitry Baryshkov wrote:
> On Thu, Sep 11, 2025 at 10:55:10PM +0800, Xiangxu Yin wrote:
>> QCS615 platform requires non-default logical-to-physical lane mapping due
>> to its unique hardware routing. Unlike the standard mapping sequence
>> <0 1 2 3>, QCS615 uses <3 2 0 1>, which necessitates explicit
>> configuration via the data-lanes property in the device tree. This ensures
>> correct signal routing between the DP controller and PHY.
>>
>> Signed-off-by: Xiangxu Yin <xiangxu.yin@....qualcomm.com>
>> ---
>> drivers/gpu/drm/msm/dp/dp_ctrl.c | 10 +++++-----
>> drivers/gpu/drm/msm/dp/dp_link.c | 12 ++++++++++--
>> drivers/gpu/drm/msm/dp/dp_link.h | 1 +
>> 3 files changed, 16 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
>> index c42fd2c17a328f6deae211c9cd57cc7416a9365a..cbcc7c2f0ffc4696749b6c43818d20853ddec069 100644
>> --- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
>> +++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
>> @@ -423,13 +423,13 @@ static void msm_dp_ctrl_config_ctrl(struct msm_dp_ctrl_private *ctrl)
>>
>> static void msm_dp_ctrl_lane_mapping(struct msm_dp_ctrl_private *ctrl)
>> {
>> - u32 ln_0 = 0, ln_1 = 1, ln_2 = 2, ln_3 = 3; /* One-to-One mapping */
>> + u32 *lane_map = ctrl->link->lane_map;
>> u32 ln_mapping;
>>
>> - ln_mapping = ln_0 << LANE0_MAPPING_SHIFT;
>> - ln_mapping |= ln_1 << LANE1_MAPPING_SHIFT;
>> - ln_mapping |= ln_2 << LANE2_MAPPING_SHIFT;
>> - ln_mapping |= ln_3 << LANE3_MAPPING_SHIFT;
>> + ln_mapping = lane_map[0] << LANE0_MAPPING_SHIFT;
>> + ln_mapping |= lane_map[1] << LANE1_MAPPING_SHIFT;
>> + ln_mapping |= lane_map[2] << LANE2_MAPPING_SHIFT;
>> + ln_mapping |= lane_map[3] << LANE3_MAPPING_SHIFT;
>>
>> msm_dp_write_link(ctrl, REG_DP_LOGICAL2PHYSICAL_LANE_MAPPING,
>> ln_mapping);
>> diff --git a/drivers/gpu/drm/msm/dp/dp_link.c b/drivers/gpu/drm/msm/dp/dp_link.c
>> index caca947122c60abb2a01e295f3e254cf02e34502..7c7a4aa584eb42a0ca7c6ec45de585cde8639cb4 100644
>> --- a/drivers/gpu/drm/msm/dp/dp_link.c
>> +++ b/drivers/gpu/drm/msm/dp/dp_link.c
>> @@ -1242,6 +1242,7 @@ static int msm_dp_link_parse_dt(struct msm_dp_link *msm_dp_link)
>> struct msm_dp_link_private *link;
>> struct device_node *of_node;
>> int cnt;
>> + u32 lane_map[DP_MAX_NUM_DP_LANES] = {0};
>>
>> link = container_of(msm_dp_link, struct msm_dp_link_private, msm_dp_link);
>> of_node = link->dev->of_node;
>> @@ -1255,10 +1256,17 @@ static int msm_dp_link_parse_dt(struct msm_dp_link *msm_dp_link)
>> cnt = drm_of_get_data_lanes_count(of_node, 1, DP_MAX_NUM_DP_LANES);
>> }
>>
>> - if (cnt > 0)
>> + if (cnt > 0) {
>> + struct device_node *endpoint;
>> +
>> msm_dp_link->max_dp_lanes = cnt;
>> - else
>> + endpoint = of_graph_get_endpoint_by_regs(of_node, 1, -1);
>> + of_property_read_u32_array(endpoint, "data-lanes", lane_map, cnt);
>> + } else {
>> msm_dp_link->max_dp_lanes = DP_MAX_NUM_DP_LANES; /* 4 lanes */
>> + }
>> +
>> + memcpy(msm_dp_link->lane_map, lane_map, msm_dp_link->max_dp_lanes * sizeof(u32));
> This will break all the cases when data-lanes is not present in DT: you
> are storing the empty lane map instead of the 1:1 lane mapping that was
> in place beforehand.
You are right. It would overwrite the mapping with zeros when data-lanes is missing.
In the next version I will:
1. Initialize a default 1:1 mapping (<0 1 2 3>).
2. Apply data-lanes only after checking that the property is present and its entries are valid.
3. Always produce a full 4-lane mapping, filling the unused logical lanes with the remaining physical lanes (rough sketch below).
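
Something along these lines is what I have in mind. This is an untested
sketch; the helper name msm_dp_link_parse_lane_map() and where exactly it
gets called from msm_dp_link_parse_dt() are illustrative, not final:

static void msm_dp_link_parse_lane_map(struct msm_dp_link *msm_dp_link,
				       struct device_node *of_node, int cnt)
{
	/* default 1:1 mapping, assuming DP_MAX_NUM_DP_LANES == 4 */
	u32 lane_map[DP_MAX_NUM_DP_LANES] = { 0, 1, 2, 3 };
	bool used[DP_MAX_NUM_DP_LANES] = { };
	int i, phys = 0;

	if (cnt > 0) {
		struct device_node *ep;

		ep = of_graph_get_endpoint_by_regs(of_node, 1, -1);
		if (!ep || of_property_read_u32_array(ep, "data-lanes",
						      lane_map, cnt))
			cnt = 0;	/* property missing/short: keep default */
		of_node_put(ep);
	}

	/* validate the DT entries and mark which physical lanes they claim */
	for (i = 0; i < cnt; i++) {
		if (lane_map[i] >= DP_MAX_NUM_DP_LANES || used[lane_map[i]]) {
			cnt = 0;	/* out of range or duplicate: fall back */
			break;
		}
		used[lane_map[i]] = true;
	}

	if (!cnt) {
		/* restore the 1:1 default if anything above failed */
		for (i = 0; i < DP_MAX_NUM_DP_LANES; i++) {
			lane_map[i] = i;
			used[i] = true;
		}
		cnt = DP_MAX_NUM_DP_LANES;
	}

	/* fill the remaining logical lanes with the unused physical lanes */
	for (i = cnt; i < DP_MAX_NUM_DP_LANES; i++) {
		while (used[phys])
			phys++;
		lane_map[i] = phys;
		used[phys] = true;
	}

	memcpy(msm_dp_link->lane_map, lane_map, sizeof(lane_map));
}

That way a missing or malformed data-lanes property keeps today's 1:1
behaviour, and a valid one (e.g. <3 2 0 1> on QCS615) overrides only the
lanes it names.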
>>
>> msm_dp_link->max_dp_link_rate = msm_dp_link_link_frequencies(of_node);
>> if (!msm_dp_link->max_dp_link_rate)