Date:   Sat, 18 Jun 2022 11:52:48 -0400
From:   Sean Anderson <sean.anderson@...o.com>
To:     Ioana Ciornei <ioana.ciornei@....com>,
        "David S . Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Madalin Bucur <madalin.bucur@....com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Cc:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Russell King <linux@...linux.org.uk>,
        Eric Dumazet <edumazet@...gle.com>,
        Jonathan Corbet <corbet@....net>,
        Kishon Vijay Abraham I <kishon@...com>,
        Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
        Rob Herring <robh+dt@...nel.org>,
        Vinod Koul <vkoul@...nel.org>,
        "devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
        "linux-phy@...ts.infradead.org" <linux-phy@...ts.infradead.org>
Subject: Re: [PATCH net-next 03/28] phy: fsl: Add QorIQ SerDes driver

Hi Ioana,

On 6/18/22 8:39 AM, Ioana Ciornei wrote:
>>> Subject: [PATCH net-next 03/28] phy: fsl: Add QorIQ SerDes driver
>>>
> 
> Sorry for the previous HTML formatted email...
> 
>>
>> Hi Sean,
>>
>> I am very much interested in giving this driver a go on other SoCs as well
>> but at the moment I am on vacation until the middle of next week.

Please let me know your results. I have documented how to add support for
additional SoCs, so hopefully it should be fairly straightforward.

>>> This adds support for the "SerDes" devices found on various NXP QorIQ SoCs.
>>> There may be up to four SerDes devices on each SoC, each supporting up to
>>> eight lanes. Protocol support for each SerDes is highly heterogeneous, with
>>> each SoC typically having a totally different selection of supported
>>> protocols for each lane. Additionally, the SerDes devices within a single
>>> SoC also differ from one another in protocol support. One SerDes will
>>> typically support Ethernet on most lanes, while the other will typically
>>> support PCIe on most lanes.
>>>
>>> There is wide hardware support for this SerDes. I have not done extensive
>>> digging, but it seems to be used on almost every QorIQ device, including
>>> the AMP and Layerscape series. Because each SoC typically has specific
>>> instructions and exceptions for its SerDes, I have limited the initial
>>> scope of this module to just the LS1046A. Additionally, I have only added
>>> support for Ethernet protocols. There is not a great need for dynamic
>>> reconfiguration for other protocols (SATA and PCIe handle rate changes in
>>> hardware), so support for them may never be added.
>>>
>>> Nevertheless, I have tried to provide an obvious path for adding support
>>> for other SoCs as well as other protocols. SATA just needs support for
>>> configuring LNmSSCR0. PCIe may need to configure the equalization
>>> registers. It also uses multiple lanes. I have tried to write the driver
>>> with multi-lane support in mind, so there should not need to be any large
>>> changes. Although there are 6 protocols supported, I have only tested SGMII
>>> and XFI. The rest have been implemented as described in the datasheet.
>>>
>>> The PLLs are modeled as clocks proper. This lets us take advantage of the
>>> existing clock infrastructure. I have not given the same treatment to the
>>> lane "clocks" (dividers) because they need to be programmed in-concert with
>>> the rest of the lane settings. One tricky thing is that the VCO (PLL) rate
>>> exceeds 2^32 (maxing out at around 5 GHz). This will be a problem on 32-bit
>>> platforms, since clock rates are stored as unsigned longs. To work around
>>> this, the PLL clock rate is generally treated in units of kHz.
>>>
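
To make the kHz convention concrete, here is roughly what the PLL's
recalc_rate ends up looking like (struct lynx_pll and its ratio field are
made-up names for illustration, not the actual code from the patch):

#include <linux/clk-provider.h>

struct lynx_pll {
        struct clk_hw hw;
        unsigned int ratio;     /* integer output/input clock ratio */
};

#define to_lynx_pll(_hw) container_of(_hw, struct lynx_pll, hw)

/* By this driver's convention, all rates here are in kHz */
static unsigned long lynx_pll_recalc_rate(struct clk_hw *hw,
                                          unsigned long parent_rate)
{
        struct lynx_pll *pll = to_lynx_pll(hw);

        /* e.g. a 156250 kHz refclk * 32 = 5000000 kHz (5 GHz), < 2^32 */
        return parent_rate * pll->ratio;
}
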
>>> The PLLs are configured rather interestingly. Instead of the usual direct
>>> programming of the appropriate divisors, the input and output clock rates
>>> are selected directly. Generally, the only restriction is that the input
>>> and output must be integer multiples of each other. This suggests some kind
>>> of internal look-up table. The datasheets generally list out the supported
>>> combinations explicitly, and not all input/output combinations are
>>> documented. I'm not sure if this is due to lack of support, or due to an
>>> oversight. If this becomes an issue, then some combinations can be
>>> blacklisted (or whitelisted). This may also be necessary for other SoCs
>>> which have more stringent clock requirements.
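
And if some input/output combinations do need to be rejected, the
integer-multiple rule at least is easy to enforce in round_rate; a sketch
(hypothetical name, rates in kHz as above):

#include <linux/clk-provider.h>
#include <linux/errno.h>

static long lynx_pll_round_rate(struct clk_hw *hw, unsigned long rate,
                                unsigned long *parent_rate)
{
        /* Only integer multiples of the reference clock are supported */
        if (rate % *parent_rate)
                return -EINVAL;

        return rate;
}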
>>
>>
>> I didn't get a chance to go through the driver as much as I would like, but are you
>> changing the PLL's rate at runtime?

Yes.

>> Do you take into consideration that a PLL might still be used by a PCIe or SATA
>> lane (which is not described in the DTS) and deny its rate reconfiguration
>> if this happens?

Yes.

When the device is probed, we go through the PCCRs and reserve any lane which is in
use for a protocol we don't support (PCIe, SATA). We also get both PLLs' rates
exclusively and mark them as enabled.
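
Roughly, the probe-time logic is (a sketch; the PCCR helpers and struct
names are illustrative, not the exact code from the patch):

#include <linux/clk.h>
#include <linux/types.h>

static int lynx_reserve_unsupported(struct lynx_serdes *serdes)
{
        unsigned int i;
        int ret;

        for (i = 0; i < serdes->num_pccrs; i++) {
                u32 pccr = lynx_read_pccr(serdes, i);

                /* Lanes running PCIe, SATA, etc. are reserved and never
                 * touched by this driver afterwards.
                 */
                if (lynx_pccr_proto_unsupported(pccr))
                        lynx_reserve_lanes_for_pccr(serdes, i, pccr);
        }

        for (i = 0; i < 2; i++) {
                /* Pin each PLL's rate so nothing can change it under us */
                ret = clk_rate_exclusive_get(serdes->pll[i]);
                if (ret)
                        return ret;

                ret = clk_prepare_enable(serdes->pll[i]);
                if (ret)
                        return ret;
        }

        return 0;
}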

>> I am asking this because when I added support for the Lynx 28G SerDes block what
>> I did in order to support rate changes depending on the plugged SFP module was
>> just to change the PLL used by the lane, not the PLL rate itself.
>> This is because I was afraid of causing more harm than needed for all the
>> non-Ethernet lanes.

Yes. Since there is not much need for dynamic reconfiguration for other
protocols, I suspect that non-Ethernet support will not be added soon (or
perhaps ever).

>>>
>>> The general API call list for this PHY is documented under the driver-api
>>> docs. I think this is rather standard, except that most drivers configure
>>> the mode (protocol) at xlate-time. Unlike some other phys where e.g. PCIe
>>> x4 will use 4 separate phys all configured for PCIe, this driver uses one
>>> phy configured to use 4 lanes. This is because while the individual lanes
>>> may be configured individually, the protocol selection acts on all lanes at
>>> once. Additionally, the order in which lanes should be configured is
>>> specified by the datasheet. To coordinate this, lanes are reserved in
>>> phy_init, and released in phy_exit.
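
For reference, the consumer-side call sequence from a MAC driver looks
something like this (sketch only; the "serdes" phy name and the SGMII
submode are just examples):

#include <linux/phy.h>
#include <linux/phy/phy.h>

static int mac_attach_serdes(struct device *dev)
{
        struct phy *serdes;
        int ret;

        serdes = devm_phy_get(dev, "serdes");
        if (IS_ERR(serdes))
                return PTR_ERR(serdes);

        /* Reserves the lane(s); refcounted, so QSGMII siblings may share */
        ret = phy_init(serdes);
        if (ret)
                return ret;

        /* Protocol selection acts on all lanes of this phy at once */
        ret = phy_set_mode_ext(serdes, PHY_MODE_ETHERNET,
                               PHY_INTERFACE_MODE_SGMII);
        if (ret)
                goto err_exit;

        ret = phy_power_on(serdes);
        if (ret)
                goto err_exit;

        return 0;

err_exit:
        phy_exit(serdes);
        return ret;
}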
>>>
>>> When getting a phy, if a phy already exists for those lanes, it is reused.
>>> This is to make things like QSGMII work. Four MACs will all want to ensure
>>> that the lane is configured properly, and we need to ensure they can all
>>> call phy_init, etc. There is refcounting for phy_init and phy_power_on, so
>>> the phy will only be powered on once. However, there is no refcounting for
>>> phy_set_mode. A "rogue" MAC could set the mode to something non-QSGMII and
>>> break the other MACs. Perhaps there is an opportunity for future
>>> enhancement here.
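
The reuse itself is just a check in of_xlate, something like the following
(hypothetical structures and lane-numbering scheme, not the patch's exact
code):

#include <linux/device.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/phy/phy.h>

static struct phy *lynx_xlate(struct device *dev,
                              struct of_phandle_args *args)
{
        struct lynx_serdes *serdes = dev_get_drvdata(dev);
        unsigned int first = args->args[0], count = args->args[1];
        struct lynx_group *grp;

        /* If a phy already covers these lanes, hand it back, so that
         * e.g. all four QSGMII MACs end up sharing one phy.
         */
        list_for_each_entry(grp, &serdes->groups, node)
                if (grp->first_lane == first && grp->num_lanes == count)
                        return grp->phy;

        return lynx_create_phy(serdes, first, count);
}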
>>>
>>> This driver was written with reference to the LS1046A reference manual.
>>> However, it was informed by reference manuals for all processors with
>>> MEMACs, especially the T4240 (which appears to have a "maxed-out"
>>> configuration).
>>>
>>> Signed-off-by: Sean Anderson <sean.anderson@...o.com>
>>> ---
>>> This appears to be the same underlying hardware as the Lynx 28G phy
>>> added in 8f73b37cf3fb ("phy: add support for the Layerscape SerDes
>>> 28G").
>>
>> The SerDes block used on LS1046A (and a lot of other SoCs) is not the same
>> one as the Lynx 28G that I submitted. The Lynx 28G block is only included
>> on the LX2160A SoC and its variants.

OK. I looked over it quickly and it seemed to share many of the same registers.

>> The SerDes block that you are adding a driver for is the Lynx 10G SerDes,
>> which is why I would suggest renaming it to phy-fsl-lynx-10g.c.

Ah, thanks. Is this documented anywhere?

--Sean
