Message-ID: <vwipw5opvx42e6l7ofphg47rlsoebbts7dwzq5knaf4iz4hmhw@gk6lo4b6plxh>
Date: Wed, 25 Jun 2025 15:46:54 -0600
From: Manivannan Sadhasivam <mani@...nel.org>
To: Mike Looijmans <mike.looijmans@...ic.nl>
Cc: Bjorn Helgaas <helgaas@...nel.org>, linux-pci@...r.kernel.org,
Bjorn Helgaas <bhelgaas@...gle.com>, Krzysztof Wilczyński <kwilczynski@...nel.org>,
Lorenzo Pieralisi <lpieralisi@...nel.org>, Michal Simek <michal.simek@....com>,
Rob Herring <robh@...nel.org>, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/2] PCI: xilinx: Wait for link-up status during
initialization
On Wed, Jun 11, 2025 at 08:48:51AM +0200, Mike Looijmans wrote:
> On 10-06-2025 20:57, Bjorn Helgaas wrote:
> > On Tue, Jun 10, 2025 at 04:39:03PM +0200, Mike Looijmans wrote:
> > > When the driver loads, the transceiver and endpoint may still be setting
> > > up a link. Wait for that to complete before continuing. This fixes the
> > > PCIe core not working when the PL bitstream is loaded from userspace.
> > > Existing reference designs worked because the endpoint and PL were
> > > initialized by a bootloader. If the endpoint power and/or reset is
> > > supplied by the kernel, or if the PL is programmed from within the
> > > kernel, the link won't be up yet and the driver has to wait for link
> > > training to finish.
> > >
> > > As the PCIe spec allows up to 100 ms to establish a link, we allow up
> > > to 200 ms before giving up.
> > > +static int xilinx_pci_wait_link_up(struct xilinx_pcie *pcie)
> > > +{
> > > +	u32 val;
> > > +
> > > +	/*
> > > +	 * PCIe r6.0, sec 6.6.1 provides a 100 ms timeout. Since this is
> > > +	 * FPGA fabric, we're more lenient and allow 200 ms for link
> > > +	 * training.
> > > +	 */
> > > +	return readl_poll_timeout(pcie->reg_base + XILINX_PCIE_REG_PSCR, val,
> > > +				  (val & XILINX_PCIE_REG_PSCR_LNKUP),
> > > +				  2 * USEC_PER_MSEC,
> > > +				  2 * PCIE_T_RRS_READY_MS * USEC_PER_MSEC);
> > > +}
> > I don't think this is what PCIE_T_RRS_READY_MS is for. Sec 6.6.1
> > requires 100ms *after* the link is up before sending config requests:
> >
> > For cases where system software cannot determine that DRS is
> > supported by the attached device, or by the Downstream Port above
> > the attached device:
> >
> > ◦ With a Downstream Port that does not support Link speeds greater
> > than 5.0 GT/s, software must wait a minimum of 100 ms following
> > exit from a Conventional Reset before sending a Configuration
> > Request to the device immediately below that Port.
> >
> > ◦ With a Downstream Port that supports Link speeds greater than
> > 5.0 GT/s, software must wait a minimum of 100 ms after Link
> > training completes before sending a Configuration Request to the
> > device immediately below that Port. Software can determine when
> > Link training completes by polling the Data Link Layer Link
> > Active bit or by setting up an associated interrupt (see §
> > Section 6.7.3.3). It is strongly recommended for software to
> > use this mechanism whenever the Downstream Port supports it.
> >
> Yeah, true, I misread the comment in pci.h. I cannot find any #define that
> matches "how long to wait for link training"; each driver appears to use
> its own timeout. So should I just create my own?
>
We recently merged a series that moved the timing definitions to
drivers/pci/pci.h:

https://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git/commit/?h=controller/linkup-fix&id=470f10f18b482b3d46429c9e6723ff0f7854d049

So if you base your series on top of this branch, you can reuse those
definitions.
- Mani
--
மணிவண்ணன் சதாசிவம்