Message-ID: <011c1ce3-605a-1ad5-d7df-b91e8c6808bc@linaro.org>
Date: Thu, 15 Dec 2022 23:12:49 +0200
From: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
To: Stephen Boyd <swboyd@...omium.org>,
Kuogee Hsieh <quic_khsieh@...cinc.com>, agross@...nel.org,
airlied@...il.com, andersson@...nel.org, daniel@...ll.ch,
devicetree@...r.kernel.org, dianders@...omium.org,
dri-devel@...ts.freedesktop.org, konrad.dybcio@...ainline.org,
krzysztof.kozlowski+dt@...aro.org, robdclark@...il.com,
robh+dt@...nel.org, sean@...rly.run, vkoul@...nel.org
Cc: quic_abhinavk@...cinc.com, quic_sbillaka@...cinc.com,
freedreno@...ts.freedesktop.org, linux-arm-msm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v12 2/5] dt-bindings: msm/dp: add data-lanes and
link-frequencies property
On 15/12/2022 02:38, Stephen Boyd wrote:
> Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>>
>> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
>>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
>>>> Add both data-lanes and link-frequencies property into endpoint
>>> Why do we care? Please tell us why it's important.
>
> Any response?
>
>>>> @@ -193,6 +217,8 @@ examples:
>>>> reg = <1>;
>>>> endpoint {
>>>> remote-endpoint = <&typec>;
>>>> + data-lanes = <0 1>;
>>>> + link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
>>>> };
>>> So far we haven't used the output port on the DP controller in DT.
>>>
>>> I'm still not clear on what we should do in general for DP because
>>> there's a PHY that actually controls a lane count and lane mapping. In
>>> my mental model of the SoC, this DP controller's output port is
>>> connected to the DP PHY, which then sends the DP lanes out of the SoC to
>>> the next downstream device (i.e. a DP connector or type-c muxer). Having
>>> a remote-endpoint property with a phandle to typec doesn't fit my mental
>>> model. I'd expect it to be the typec PHY.
>> ack
>>>
>>> That brings up the question: when we have 2 lanes vs. 4 lanes will we
>>> duplicate the data-lanes property in the PHY binding? I suspect we'll
>>> have to. Hopefully that sort of duplication is OK?
>> Currently we have a limitation of reserving 2 data lanes for usb2, so I am
>> not sure the duplication to 4 lanes will work automatically.
>>>
>>> Similarly, we may have a redriver that limits the link-frequencies
>>> property further (e.g. only support <= 2.7GHz). Having multiple
>>> link-frequencies along the graph is OK, right? And isn't the
>>> link-frequencies property known here by the fact that the DP controller
>>> tells us which SoC this controller is for, and thus we already know the
>>> supported link frequencies?
>>>
>>> Finally, I wonder if we should put any of this in the DP controller's
>>> output endpoint, or if we can put these sorts of properties in the DP
>>> PHY binding directly? Can't we do that and then when the DP controller
>>> tries to set 4 lanes, the PHY immediately fails the call and the link
>>> training algorithm does its thing and tries fewer lanes? And similarly,
>>> if link-frequencies were in the PHY's binding, the PHY could fail to set
>>> those frequencies during link training, returning an error to the DP
>>> controller, letting the training move on to a lower frequency. If we did
>>> that this patch series would largely be about modifying the PHY binding,
>>> updating the PHY driver to enforce constraints, and handling errors
>>> during link training in the DP controller (which may already be done? I
>>> didn't check).
>>
>>
>> The PHY/PLL have different configurations based on the link lanes and rate.
>>
>> They have to be set up before link training can start.
>>
>> Once link training starts, there are no interactions between the
>> controller and the PHY during the link training session.
>
> What do you mean? The DP controller calls phy_configure() and changes
> the link rate. The return value from phy_configure() should be checked
> and link training should skip link rates that aren't supported and/or
> number of lanes that aren't supported.
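To make that concrete, here is a minimal sketch (not the actual msm DP
driver) of what such a fallback could look like. phy_configure() and
struct phy_configure_opts_dp are the existing generic PHY APIs; the rate
table and the helper structure are made up purely for illustration.

/*
 * Illustrative sketch only -- not the msm DP driver.  It shows how a DP
 * controller could let the PHY veto unsupported configurations:
 * phy_configure() and struct phy_configure_opts_dp are the existing
 * generic PHY APIs, everything else is made up for this example.
 */
#include <linux/kernel.h>
#include <linux/phy/phy.h>

/* DP main-link rates in Mb/s per lane, highest first */
static const unsigned int dp_rates_mbps[] = { 8100, 5400, 2700, 1620 };

static int dp_try_link_config(struct phy *phy, unsigned int rate_mbps,
			      unsigned int lanes)
{
	union phy_configure_opts opts = {
		.dp = {
			.link_rate = rate_mbps,
			.lanes = lanes,
			.set_rate = 1,
			.set_lanes = 1,
		},
	};

	/* A PHY constrained by the SoC or the board returns an error here. */
	return phy_configure(phy, &opts);
}

static int dp_train_link(struct phy *phy)
{
	unsigned int lanes, i;

	for (lanes = 4; lanes >= 1; lanes /= 2) {	/* 4 -> 2 -> 1 */
		for (i = 0; i < ARRAY_SIZE(dp_rates_mbps); i++) {
			if (dp_try_link_config(phy, dp_rates_mbps[i], lanes))
				continue;	/* rejected, try the next rate */

			/*
			 * The PHY accepted the configuration; a real driver
			 * would now run the DPCD clock recovery / channel EQ
			 * sequence and drop to a lower rate if that fails.
			 */
			return 0;
		}
	}

	return -EINVAL;
}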
I'd like to toss another coin into the argument. We have previously
discussed using the link-frequencies property in the context of limiting
link speeds for DSI. There we have both hardware (SoC) limitations and
board limitations, since in some cases the DSI lanes cannot sustain the
higher rates. I still hope for those patches to materialize at some point.
For DP this is more or less the same story. We have the hardware (SoC,
PHY, etc.) limitations, but we also have board/device limitations. For
example, some boards might not be able to support HBR3, e.g. because of
the PCB design. And while it might be logical to also add 'max bit rate'
support to the eDP & combo PHYs, it definitely makes sense to be able to
limit the rate on the DP <-> `something' link.
Now, for all practical purposes, this `something' for the DP is the DP
connector, the eDP panel or the USB-C mux (with possible redrivers in
the middle).
Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
outbound endpoint. This is the link which will be driven by the data
stream from the Linux point of view. The PHY is linked through the
'phys' property, but it doesn't participate in the USB-C (or in the
connector/panel) graph.
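As a sketch of what the consumer side could look like (again, not actual
driver code: the port/endpoint numbers follow the example in the binding
and the function name is made up; of_graph_get_endpoint_by_regs() and the
of_property_* helpers are existing OF APIs), the DP driver would simply
take the largest frequency listed in its output endpoint as the
board-imposed cap:

#include <linux/of.h>
#include <linux/of_graph.h>

/*
 * Illustrative sketch, assuming the binding proposed in this series:
 * pick the largest entry of the 64-bit link-frequencies list found in
 * the DP controller's output endpoint and treat it as the board limit.
 */
static u64 dp_max_board_link_frequency(struct device_node *dp_node)
{
	struct device_node *ep;
	u64 freq, max = 0;
	int i, num;

	/* output port@1, endpoint 0, as in the example binding */
	ep = of_graph_get_endpoint_by_regs(dp_node, 1, 0);
	if (!ep)
		return 0;

	num = of_property_count_elems_of_size(ep, "link-frequencies",
					      sizeof(u64));
	for (i = 0; i < num; i++) {
		if (of_property_read_u64_index(ep, "link-frequencies", i, &freq))
			break;
		if (freq > max)
			max = freq;
	}

	of_node_put(ep);
	return max;	/* 0 means no limit described in DT */
}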
Now let's discuss the data lanes. Currently we have them in the DP node
itself. Please correct me if I'm wrong, but I think we can drop that
property for all practical purposes. Judging by the DP compatible string,
the driver can determine whether it uses 2 lanes (eDP) or 4 lanes
(full-featured DP). In the USB-C case, where the altmode dictates whether
to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
mode and pin configuration and then inform the DP controller about the
selected number of lanes. The DP controller then informs the PHY about
the selection (note, the PHY has no control at all in this scenario).
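Roughly, and purely as an illustration (the notification hook itself is
hypothetical; the DP_PIN_ASSIGN_* values from <linux/usb/typec_dp.h> and
phy_configure() are existing kernel APIs), that path could look like:

#include <linux/types.h>
#include <linux/phy/phy.h>
#include <linux/usb/typec_dp.h>

/*
 * Illustrative sketch of the Type-C case: the TCPM negotiates the DP
 * altmode pin assignment, the DP controller derives the usable lane
 * count from it and simply tells the PHY.
 */
static int dp_typec_pin_assignment_changed(struct phy *dp_phy, u8 pin_assign)
{
	union phy_configure_opts opts = { .dp = { .set_lanes = 1 } };

	switch (pin_assign) {
	case DP_PIN_ASSIGN_C:
	case DP_PIN_ASSIGN_E:
		opts.dp.lanes = 4;	/* all four lanes carry DP */
		break;
	case DP_PIN_ASSIGN_D:
		opts.dp.lanes = 2;	/* the other two lanes carry USB SS */
		break;
	default:
		return -EINVAL;
	}

	/* The PHY just follows; it does not decide the lane count here. */
	return phy_configure(dp_phy, &opts);
}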
The only problematic case is the mixed-mode ports, which, if I
understand correctly, can be configured to either eDP or DP mode. I'm
not sure who specifies and limits the number of lanes available to the
DP controller there.
--
With best wishes
Dmitry