Message-ID:
<ZQ0PR01MB0981BC562E837B232B419AC28229A@ZQ0PR01MB0981.CHNPR01.prod.partner.outlook.cn>
Date: Thu, 14 Mar 2024 02:18:38 +0000
From: Kevin Xie <kevin.xie@...rfivetech.com>
To: Lorenzo Pieralisi <lpieralisi@...nel.org>, Palmer Dabbelt
<palmer@...belt.com>
CC: Minda Chen <minda.chen@...rfivetech.com>, Conor Dooley <conor@...nel.org>,
"kw@...ux.com" <kw@...ux.com>, "robh+dt@...nel.org" <robh+dt@...nel.org>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>, "tglx@...utronix.de"
<tglx@...utronix.de>, "daire.mcnamara@...rochip.com"
<daire.mcnamara@...rochip.com>, "emil.renner.berthing@...onical.com"
<emil.renner.berthing@...onical.com>, "krzysztof.kozlowski+dt@...aro.org"
<krzysztof.kozlowski+dt@...aro.org>, "devicetree@...r.kernel.org"
<devicetree@...r.kernel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "linux-riscv@...ts.infradead.org"
<linux-riscv@...ts.infradead.org>, "linux-pci@...r.kernel.org"
<linux-pci@...r.kernel.org>, Paul Walmsley <paul.walmsley@...ive.com>,
"aou@...s.berkeley.edu" <aou@...s.berkeley.edu>, "p.zabel@...gutronix.de"
<p.zabel@...gutronix.de>, Mason Huo <mason.huo@...rfivetech.com>, Leyfoon Tan
<leyfoon.tan@...rfivetech.com>
Subject: Re: [PATCH v15,RESEND 22/23] PCI: starfive: Offload the NVMe timeout
workaround to host drivers.
> On Mon, Mar 04, 2024 at 10:08:06AM -0800, Palmer Dabbelt wrote:
> > On Thu, 29 Feb 2024 07:08:43 PST (-0800), lpieralisi@...nel.org wrote:
> > > On Tue, Feb 27, 2024 at 06:35:21PM +0800, Minda Chen wrote:
> > > > From: Kevin Xie <kevin.xie@...rfivetech.com>
> > > >
> > > > As the StarFive JH7110 hardware can't always keep two inbound posted
> > > > writes in order, such as MSI messages and NVMe completions, the NVMe
> > > > IRQ handler may miss a completion if the completion update lands
> > > > later than the MSI.
> > >
> > > Please explain what the problem is and what "NVMe completions" means
> > > given that you are talking about posted writes.
Sorry, that was a careless conclusion on our part.
It is not the case that any two inbound posted writes can be reordered on the
JH7110 SoC; the only case we have found is NVMe completions being reordered
against their MSI interrupts.
To be more precise, the race is between the pending status update in struct
nvme_completion and the nvme_irq() handler in drivers/nvme/host/pci.c.
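For readers less familiar with that driver, the check that gets defeated is the
completion-queue phase-bit test. Below is a minimal sketch, modeled on
nvme_cqe_pending() in drivers/nvme/host/pci.c; the struct is abridged here for
illustration:

/* Abridged for illustration; modeled on drivers/nvme/host/pci.c. */
struct nvme_queue_sketch {
	struct nvme_completion *cqes;	/* completion queue in host memory */
	u16 cq_head;			/* next entry to consume */
	u8 cq_phase;			/* phase value expected this pass */
};

static bool cqe_pending(struct nvme_queue_sketch *nvmeq)
{
	struct nvme_completion *cqe = &nvmeq->cqes[nvmeq->cq_head];

	/*
	 * A CQE becomes valid only once the controller's posted write to
	 * host memory flips the phase bit in cqe->status.  If the MSI write
	 * overtakes the CQE write, this test still sees the old phase, the
	 * IRQ handler returns with nothing to do, and the completion is
	 * only picked up by a later interrupt or by the timeout path.
	 */
	return (le16_to_cpu(READ_ONCE(cqe->status)) & 1) == nvmeq->cq_phase;
}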
We have posted the original workaround patch before:
https://lore.kernel.org/lkml/CAJM55Z9HtBSyCq7rDEDFdw644pOWCKJfPqhmi3SD1x6p3g2SLQ@mail.gmail.com/
We have carried it in our GitHub branch and it has worked fine for a long time.
We would welcome better advice from someone familiar with the NVMe driver.
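For context, the general shape of such a host-side workaround is to re-poll the
completion queue instead of trusting the first empty read. The sketch below is
hypothetical, not the linked patch verbatim, and cq_poll() is a stand-in name:

/* Hypothetical sketch, not the linked patch verbatim; cq_poll() stands in
 * for the driver's completion-queue processing and returns true if any
 * completion was handled.
 */
static bool cq_poll(struct nvme_queue_sketch *nvmeq);

static irqreturn_t nvme_irq_reorder_wa(int irq, void *data)
{
	struct nvme_queue_sketch *nvmeq = data;

	if (cq_poll(nvmeq))
		return IRQ_HANDLED;

	/* The MSI may have overtaken the CQE write; give it time to land. */
	udelay(1);

	return cq_poll(nvmeq) ? IRQ_HANDLED : IRQ_NONE;
}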
> > >
> > > If you have a link to an erratum write-up it would certainly help.
> >
This is not a confirmed hardware issue with an existing erratum write-up, and
we are doing further investigation on it at the moment.
In the next version we will drop this workaround patch, and then submit a
separate patch (with an erratum write-up) for the fix after more debugging of
the issue.
> > I think we really need to see that errata document. Our formal memory
> > model doesn't account for device interactions so it's possible there's
> > just some arch fence we can make stronger in order to get things
> > ordered again -- we've had similar problems with some other RISC-V
> > chips, and while it ends up being slow at least it's correct.
> >
> > > This looks completely broken to me, if the controller can't
> > > guarantee PCIe transactions ordering it is toast, there is not even
> > > a point considering mainline merging.
> >
> > I wouldn't be at all surprised if that's the case. Without some
> > concrete ISA mechanisms here we're sort of just stuck hoping the SOC
> > vendors do the right thing, which is always terrifying.
> >
> > I'm not really a PCIe person so this is all a bit vague, but IIRC we
> > had a bunch of possible PCIe ordering violations in the SiFive memory
> > system back when I was there and we never really got a scheme for
> > making sure things were correct.
> >
> > So I think we really do need to see that errata document to know
> > what's possible here. Folks have been able to come up with clever
> > solutions to these problems before, maybe we'll get lucky again.
> >
> > > > As a workaround, we wait a while before calling the generic
> > > > handler here.
> > > >
> > > > Verified with an NVMe SSD, a USB SSD, and an R8169 NIC.
> > > > The performance is stable and even higher after this patch.
> > >
> > > I assume this is a joke even though it does not make me laugh.
> >
> > So you're new to RISC-V, then? It gets way worse than this ;)
>
> To me this is just a PCI controller driver, arch does not matter.
>
> What annoyed me is that we really can't state that this patch improves
> performance, sorry, the patch itself is not acceptable, let's try not to rub it in :)
>
I'm sorry, the description here is confusing as well.
The performance numbers are compared against the version without the fix.
It is expected that we get stable performance once the NVMe SSD works without
any timeouts, but that does not mean the patch improves performance on any
other platform.
Thank you for your comments, Lorenzo & Palmer.
Kevin.
> Please post an erratum write-up and we shall see what can be done.
>
> Thanks,
> Lorenzo
>
> > > Thanks,
> > > Lorenzo
> > >
> > > >
> > > > Signed-off-by: Kevin Xie <kevin.xie@...rfivetech.com>
> > > > Signed-off-by: Minda Chen <minda.chen@...rfivetech.com>
> > > > ---
> > > > drivers/pci/controller/plda/pcie-plda-host.c | 12 ++++++++++++
> > > > drivers/pci/controller/plda/pcie-plda.h | 1 +
> > > > drivers/pci/controller/plda/pcie-starfive.c | 1 +
> > > > 3 files changed, 14 insertions(+)
> > > >
> > > > diff --git a/drivers/pci/controller/plda/pcie-plda-host.c
> > > > b/drivers/pci/controller/plda/pcie-plda-host.c
> > > > index a18923d7cea6..9e077ddf45c0 100644
> > > > --- a/drivers/pci/controller/plda/pcie-plda-host.c
> > > > +++ b/drivers/pci/controller/plda/pcie-plda-host.c
> > > > @@ -13,6 +13,7 @@
> > > > #include <linux/msi.h>
> > > > #include <linux/pci_regs.h>
> > > > #include <linux/pci-ecam.h>
> > > > +#include <linux/delay.h>
> > > >
> > > > #include "pcie-plda.h"
> > > >
> > > > @@ -44,6 +45,17 @@ static void plda_handle_msi(struct irq_desc *desc)
> > > > bridge_base_addr + ISTATUS_LOCAL);
> > > > status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
> > > > for_each_set_bit(bit, &status, msi->num_vectors) {
> > > > + /*
> > > > + * The StarFive JH7110 hardware can't always keep two
> > > > + * inbound posted writes in order, such as MSI
> > > > + * messages and NVMe completions. If the completion
> > > > + * update lands later than the MSI, the NVMe IRQ
> > > > + * handler may miss a completion.
> > > > + * As a workaround, wait a while before calling the
> > > > + * generic handler here.
> > > > + */
> > > > + if (port->msi_quirk_delay_us)
> > > > + udelay(port->msi_quirk_delay_us);
> > > > ret = generic_handle_domain_irq(msi->dev_domain, bit);
> > > > if (ret)
> > > > dev_err_ratelimited(dev, "bad MSI IRQ %d\n", bit);
> > > > diff --git a/drivers/pci/controller/plda/pcie-plda.h
> > > > b/drivers/pci/controller/plda/pcie-plda.h
> > > > index 04e385758a2f..feccf285dfe8 100644
> > > > --- a/drivers/pci/controller/plda/pcie-plda.h
> > > > +++ b/drivers/pci/controller/plda/pcie-plda.h
> > > > @@ -186,6 +186,7 @@ struct plda_pcie_rp {
> > > > int msi_irq;
> > > > int intx_irq;
> > > > int num_events;
> > > > + u16 msi_quirk_delay_us;
> > > > };
> > > >
> > > > struct plda_event {
> > > > diff --git a/drivers/pci/controller/plda/pcie-starfive.c
> > > > b/drivers/pci/controller/plda/pcie-starfive.c
> > > > index 9bb9f0e29565..5cfc30572b7f 100644
> > > > --- a/drivers/pci/controller/plda/pcie-starfive.c
> > > > +++ b/drivers/pci/controller/plda/pcie-starfive.c
> > > > @@ -391,6 +391,7 @@ static int starfive_pcie_probe(struct
> > > > platform_device *pdev)
> > > >
> > > > plda->host_ops = &sf_host_ops;
> > > > plda->num_events = PLDA_MAX_EVENT_NUM;
> > > > + plda->msi_quirk_delay_us = 1;
> > > > /* mask doorbell event */
> > > > plda->events_bitmap = GENMASK(PLDA_INT_EVENT_NUM - 1, 0)
> > > > & ~BIT(PLDA_AXI_DOORBELL)
> > > > --
> > > > 2.17.1
> > > >
> > > >
> > > > _______________________________________________
> > > > linux-riscv mailing list
> > > > linux-riscv@...ts.infradead.org
> > > > http://lists.infradead.org/mailman/listinfo/linux-riscv