Message-ID: <98cc6d55-f9c7-a369-6004-42b242d01339@quicinc.com>
Date: Thu, 15 Dec 2022 09:08:04 -0800
From: Kuogee Hsieh <quic_khsieh@...cinc.com>
To: Stephen Boyd <swboyd@...omium.org>, <agross@...nel.org>,
<airlied@...il.com>, <andersson@...nel.org>, <daniel@...ll.ch>,
<devicetree@...r.kernel.org>, <dianders@...omium.org>,
<dmitry.baryshkov@...aro.org>, <dri-devel@...ts.freedesktop.org>,
<konrad.dybcio@...ainline.org>,
<krzysztof.kozlowski+dt@...aro.org>, <robdclark@...il.com>,
<robh+dt@...nel.org>, <sean@...rly.run>, <vkoul@...nel.org>
CC: <quic_abhinavk@...cinc.com>, <quic_sbillaka@...cinc.com>,
<freedreno@...ts.freedesktop.org>, <linux-arm-msm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and
link-frequencies property
On 12/14/2022 4:38 PM, Stephen Boyd wrote:
> Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
>>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
>>>> Add both data-lanes and link-frequencies property into endpoint
>>> Why do we care? Please tell us why it's important.
> Any response?
Yes, I did that in my local patch already.
>
>>>> @@ -193,6 +217,8 @@ examples:
>>>> reg = <1>;
>>>> endpoint {
>>>> remote-endpoint = <&typec>;
>>>> + data-lanes = <0 1>;
>>>> + link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
>>>> };
>>> So far we haven't used the output port on the DP controller in DT.
>>>
>>> I'm still not clear on what we should do in general for DP because
>>> there's a PHY that actually controls a lane count and lane mapping. In
>>> my mental model of the SoC, this DP controller's output port is
>>> connected to the DP PHY, which then sends the DP lanes out of the SoC to
>>> the next downstream device (i.e. a DP connector or type-c muxer). Having
>>> a remote-endpoint property with a phandle to typec doesn't fit my mental
>>> model. I'd expect it to be the typec PHY.
>> ack
>>> That brings up the question: when we have 2 lanes vs. 4 lanes will we
>>> duplicate the data-lanes property in the PHY binding? I suspect we'll
>>> have to. Hopefully that sort of duplication is OK?
>> Currently we have a limitation because 2 data lanes are reserved for USB2; I am
>> not sure duplicating to 4 lanes will work automatically.
>>> Similarly, we may have a redriver that limits the link-frequencies
>>> property further (e.g. only support <= 2.7GHz). Having multiple
>>> link-frequencies along the graph is OK, right? And isn't the
>>> link-frequencies property known here by fact that the DP controller
>>> tells us which SoC this controller is for, and thus we already know the
>>> supported link frequencies?
>>>
>>> Finally, I wonder if we should put any of this in the DP controller's
>>> output endpoint, or if we can put these sorts of properties in the DP
>>> PHY binding directly? Can't we do that and then when the DP controller
>>> tries to set 4 lanes, the PHY immediately fails the call and the link
>>> training algorithm does its thing and tries fewer lanes? And similarly,
>>> if link-frequencies were in the PHY's binding, the PHY could fail to set
>>> those frequencies during link training, returning an error to the DP
>>> controller, letting the training move on to a lower frequency. If we did
>>> that this patch series would largely be about modifying the PHY binding,
>>> updating the PHY driver to enforce constraints, and handling errors
>>> during link training in the DP controller (which may already be done? I
>>> didn't check).
>>
>> The PHY/PLL has a different configuration based on the link lane count and rate.
>>
>> It has to be set up before link training can start.
>>
>> Once link training starts, there are no interactions between the
>> controller and the PHY during the link training session.
> What do you mean? The DP controller calls phy_configure() and changes
> the link rate. The return value from phy_configure() should be checked
> and link training should skip link rates that aren't supported and/or
> number of lanes that aren't supported.
>
>> Link training only happens between the DP controller and the sink, since the
>> link status is reported by the sink (read back directly from the sink's DPCD
>> registers).
>>
>> To achieve link symbol lock, link training starts by reducing the link
>> rate down to the lowest rate; if that still fails, it reduces the lane count,
>> goes back to the highest rate, and starts training again.
>>
>> It repeats the same process down to the lowest lane count (one lane); if that
>> still fails, it gives up and declares link training failed.
> Yes, that describes the link training algorithm. I don't see why
> phy_configure() return value can't be checked and either number of lanes
> or link frequencies be checked. If only two lanes are supported, then
> phy_configure() will fail for the 4 link rates and the algorithm will
> reduce the number of lanes and go back to the highest rate. Then when
> the highest rate isn't supported it will drop link rate until the link
> rate is supported.
>
>> Therefore I think adding the data-lanes and link-frequencies properties to the
>> DP PHY binding directly will not help.
>>
> I didn't follow your logic. Sorry.
Sorry, I probably did not understand your proposal clearly.

1) Move both the data-lanes and link-frequencies properties from the DP
controller endpoint to the PHY.

2) phy_configure() returns success if both the requested lane count and link
rate are supported; otherwise it returns an error.

Are the above two summary items correct?
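
If I got that right, I imagine the PHY side of item 2 would look roughly like
the sketch below. The struct and field names (dp_phy_priv, max_lanes,
max_link_rate) are only illustrative and assume the PHY driver has already
parsed its own data-lanes/link-frequencies properties at probe time; this is
not the existing QMP PHY code:

#include <linux/errno.h>
#include <linux/phy/phy.h>
#include <linux/phy/phy-dp.h>

/* Hypothetical driver state, filled in at probe time from the PHY's DT node. */
struct dp_phy_priv {
	unsigned int max_lanes;         /* from "data-lanes" */
	unsigned int max_link_rate;     /* highest "link-frequencies" entry, in Mb/s */
};

static int dp_phy_configure(struct phy *phy, union phy_configure_opts *opts)
{
	struct dp_phy_priv *priv = phy_get_drvdata(phy);
	struct phy_configure_opts_dp *dp = &opts->dp;

	/* Reject lane counts not routed on this design (e.g. 2 lanes shared with USB). */
	if (dp->set_lanes && dp->lanes > priv->max_lanes)
		return -EINVAL;

	/* Reject link rates (1620/2700/5400/8100 Mb/s) above what the board supports. */
	if (dp->set_rate && dp->link_rate > priv->max_link_rate)
		return -EINVAL;

	/* ... program the PLL and lane configuration as today ... */
	return 0;
}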
Currently phy_configure() is part of the link training process and is called
when the link lane count or rate changes.

However, since the current phy_configure() implementation always returns 0,
the return value is not checked.
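
If the return value were checked, I think the controller side would look
roughly like the sketch below; the function names and loop structure are only
illustrative, not the existing dp_ctrl code:

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/phy/phy.h>
#include <linux/phy/phy-dp.h>

/* Try one lane/rate combination; a PHY rejection just moves the loop along. */
static int dp_try_link_config(struct phy *phy, unsigned int lanes,
			      unsigned int rate_mbps)
{
	union phy_configure_opts opts = {
		.dp = {
			.lanes = lanes,
			.link_rate = rate_mbps,
			.set_lanes = 1,
			.set_rate = 1,
		},
	};
	int ret;

	ret = phy_configure(phy, &opts);
	if (ret)
		return ret;     /* PHY rejected this lane/rate combination */

	/* ... run clock recovery / channel equalization with the sink ... */
	return 0;
}

static int dp_train_link(struct phy *phy)
{
	static const unsigned int rates[] = { 8100, 5400, 2700, 1620 };
	unsigned int lanes, i;

	/* Reduce the rate first at each lane count, then drop the lane count. */
	for (lanes = 4; lanes >= 1; lanes /= 2)
		for (i = 0; i < ARRAY_SIZE(rates); i++)
			if (!dp_try_link_config(phy, lanes, rates[i]))
				return 0;

	return -EIO;    /* no supported combination trained */
}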
This proposal is new; can we discuss it in more detail at the meeting and
decide whether or not to implement it?

Meanwhile, can we merge the current implementation (both data-lanes and
link-frequencies at the DP controller endpoint) first?