Message-ID: <20250211233430.GA55431@bhelgaas>
Date: Tue, 11 Feb 2025 17:34:30 -0600
From: Bjorn Helgaas <helgaas@...nel.org>
To: Chen Wang <unicorn_wang@...look.com>
Cc: Chen Wang <unicornxw@...il.com>, kw@...ux.com,
	u.kleine-koenig@...libre.com, aou@...s.berkeley.edu, arnd@...db.de,
	bhelgaas@...gle.com, conor+dt@...nel.org, guoren@...nel.org,
	inochiama@...look.com, krzk+dt@...nel.org, lee@...nel.org,
	lpieralisi@...nel.org, manivannan.sadhasivam@...aro.org,
	palmer@...belt.com, paul.walmsley@...ive.com, pbrobinson@...il.com,
	robh@...nel.org, devicetree@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org,
	linux-riscv@...ts.infradead.org, chao.wei@...hgo.com,
	xiaoguang.xing@...hgo.com, fengchun.li@...hgo.com
Subject: Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host

On Sun, Jan 26, 2025 at 10:27:27AM +0800, Chen Wang wrote:
> On 2025/1/23 6:21, Bjorn Helgaas wrote:
> > On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
> > > From: Chen Wang <unicorn_wang@...look.com>
> > > 
> > > Add binding for Sophgo SG2042 PCIe host controller.
> > > +  sophgo,link-id:
> > > +    $ref: /schemas/types.yaml#/definitions/uint32
> > > +    description: |
> > > +      SG2042 uses Cadence IP, every IP is composed of 2 cores (called link0
> > > +      & link1 as Cadence's term). Each core corresponds to a host bridge,
> > > +      and each host bridge has only one root port. Their configuration
> > > +      registers are completely independent. SG2042 integrates two Cadence IPs,
> > > +      so there can actually be up to four host bridges. "sophgo,link-id" is
> > > +      used to identify which core/link the PCIe host bridge node corresponds to.
> > IIUC, the registers of Cadence IP 1 and IP 2 are completely
> > independent, and if you describe both of them, you would have separate
> > "pcie@...00000" stanzas with separate 'reg' and 'ranges' properties.
> 
> To be precise, each of the two cores in a Cadence IP has its own set of
> configuration registers, so the configuration of each core is completely
> independent. That is what I meant in the binding by "Each core corresponds
> to a host bridge, and each host bridge has only one root port. Their
> configuration registers are completely independent." Maybe "Their" is a
> bit unclear there; I intended it to refer to the cores. I can improve this
> description in the next version.
> 
> >  From the driver, it does not look like the registers for Link0 and
> > Link1 are independent, since the driver claims the
> > "sophgo,sg2042-pcie-host", which includes two Cores, and it tests
> > pcie->link_id to select the correct register address and bit mask.
> In the driver code, one "sophgo,sg2042-pcie-host" corresponds to one core,
> not two. So, as you can see in patch 4 of this patchset [1], three PCIe
> host bridge nodes are defined, pcie_rc0 ~ pcie_rc2, each corresponding to
> one core.
> 
> [1]: https://lore.kernel.org/linux-riscv/4a1f23e5426bfb56cad9c07f90d4efaad5eab976.1736923025.git.unicorn_wang@outlook.com/
> 
> I also need to explain that link0 and link1 are actually completely
> independent as far as PCIe processing is concerned. However, when Sophgo
> implemented the internal MSI controller for PCIe, the design was not
> ideal: the registers for MSI handling were not kept separate for link0
> and link1 but are mixed together, in what I referred to as
> cdns_pcie0_ctrl / cdns_pcie1_ctrl. In these two register files added by
> Sophgo (which only involve MSI handling), taking the second Cadence IP as
> an example, some registers control the MSI controller of pcie_rc1
> (corresponding to link0) and some control the MSI controller of pcie_rc2
> (corresponding to link1). In the more complicated cases, some bits of a
> single register control pcie_rc1 while other bits control pcie_rc2. This
> is why I have to add the "sophgo,link-id" property: so that, when
> handling the MSI controller related to a given PCIe host bridge, we can
> find the corresponding register, or even the right bits within a
> register, in cdns_pcieX_ctrl.
> 
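(For illustration, here is a minimal sketch of what selecting the per-link
bits could look like on the driver side. The register name, offset, bit
split, and the regmap-based access are assumptions invented for this
example, not taken from the actual patches.)

  #include <linux/bits.h>
  #include <linux/regmap.h>
  #include <linux/types.h>

  /*
   * Hypothetical layout: one MSI-clear register in the shared
   * cdns_pcieX_ctrl block, with link0 owning bits [15:0] and link1
   * owning bits [31:16].  All names and offsets are made up.
   */
  #define SG2042_MSI_CLR_REG		0x0
  #define SG2042_MSI_CLR_LINK0_MASK	GENMASK(15, 0)
  #define SG2042_MSI_CLR_LINK1_MASK	GENMASK(31, 16)

  struct sg2042_pcie {
  	struct regmap	*syscon;	/* shared cdns_pcieX_ctrl registers */
  	u32		link_id;	/* 0 or 1, from "sophgo,link-id" */
  };

  static void sg2042_pcie_msi_clear(struct sg2042_pcie *pcie, u32 bits)
  {
  	u32 mask, val;

  	/* Touch only the bits that belong to this core/link. */
  	if (pcie->link_id) {
  		mask = SG2042_MSI_CLR_LINK1_MASK;
  		val = bits << 16;
  	} else {
  		mask = SG2042_MSI_CLR_LINK0_MASK;
  		val = bits;
  	}

  	regmap_update_bits(pcie->syscon, SG2042_MSI_CLR_REG, mask, val);
  }
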
> > "sophgo,link-id" corresponds to Cadence documentation, but I think it
> > is somewhat misleading in the binding because a PCIe "Link" refers to
> > the downstream side of a Root Port.  If we use "link-id" to identify
> > either Core0 or Core1 of a Cadence IP, it sort of bakes in the
> > idea that there can never be more than one Root Port per Core.
>
> The fact is that for the cadence IP used by sg2042, only one root port is
> supported per core.

1) That's true today but may not be true forever.

2) Even if there's only one root port forever, "link" already means
something specific in PCIe, and this usage means something different,
so it's a little confusing.  Maybe a comment to say that this refers
to a "Core", not a PCIe link, is the best we can do.

> ...
> Based on the above analysis, I think introducing a three-layer structure
> (pcie-core-port) looks a bit too complicated for the Cadence IP. In fact,
> the discussion at the start of this issue was about whether some
> properties should live under the host bridge or under the root port. I
> suggest that adding just the root port layer on top of the existing patch
> may be enough. What do you think?
> 
> e.g.,
> 
> pcie_rc0: pcie@...0000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bridge level properties
>     sophgo,link-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <4>;
>     };
> };
> 
> pcie_rc1: pcie@...2000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bridge level properties
>     sophgo,link-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     };
> };
> 
> pcie_rc2: pcie@...2800000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bridge level properties
>     sophgo,link-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     };
> };

Where does linux,pci-domain go?

Can you show how link-id 0 and link-id 1 would both be used?  I assume
they need to be connected somehow, since IIUC there's some register
shared between them?

Bjorn
