Message-ID: <16a1e6c6-2e2f-08e5-8da0-1462cec57e1f@gmail.com>
Date: Wed, 20 Mar 2024 00:12:22 -0700
From: Bo Gan <ganboing@...il.com>
To: Keith Busch <kbusch@...nel.org>, Kevin Xie <kevin.xie@...rfivetech.com>
Cc: Lorenzo Pieralisi <lpieralisi@...nel.org>,
 Palmer Dabbelt <palmer@...belt.com>, Minda Chen
 <minda.chen@...rfivetech.com>, Conor Dooley <conor@...nel.org>,
 "kw@...ux.com" <kw@...ux.com>, "robh+dt@...nel.org" <robh+dt@...nel.org>,
 "bhelgaas@...gle.com" <bhelgaas@...gle.com>,
 "tglx@...utronix.de" <tglx@...utronix.de>,
 "daire.mcnamara@...rochip.com" <daire.mcnamara@...rochip.com>,
 "emil.renner.berthing@...onical.com" <emil.renner.berthing@...onical.com>,
 "krzysztof.kozlowski+dt@...aro.org" <krzysztof.kozlowski+dt@...aro.org>,
 "devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
 "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
 "linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
 "linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
 Paul Walmsley <paul.walmsley@...ive.com>,
 "aou@...s.berkeley.edu" <aou@...s.berkeley.edu>,
 "p.zabel@...gutronix.de" <p.zabel@...gutronix.de>,
 Mason Huo <mason.huo@...rfivetech.com>,
 Leyfoon Tan <leyfoon.tan@...rfivetech.com>
Subject: Re: [PATCH v15,RESEND 22/23] PCI: starfive: Offload the NVMe timeout
 workaround to host drivers.

On 3/13/24 7:51 PM, Keith Busch wrote:
> On Thu, Mar 14, 2024 at 02:18:38AM +0000, Kevin Xie wrote:
>>> Re: [PATCH v15,RESEND 22/23] PCI: starfive: Offload the NVMe timeout
>>> workaround to host drivers.
>>>
>>> On Mon, Mar 04, 2024 at 10:08:06AM -0800, Palmer Dabbelt wrote:
>>>> On Thu, 29 Feb 2024 07:08:43 PST (-0800), lpieralisi@...nel.org wrote:
>>>>> On Tue, Feb 27, 2024 at 06:35:21PM +0800, Minda Chen wrote:
>>>>>> From: Kevin Xie <kevin.xie@...rfivetech.com>
>>>>>>
>>>>>> The StarFive JH7110 hardware cannot always keep two inbound posted
>>>>>> writes, such as an MSI message and an NVMe completion, in order.
>>>>>> If the NVMe completion is written to memory later than the MSI, the
>>>>>> NVMe IRQ handler will miss it.
>>>>>
>>>>> Please explain what the problem is and what "NVMe completions" means
>>>>> given that you are talking about posted writes.

Echoing Keith here. Why are you treating NVMe completions + MSI as a special case?
What's special about this combination other than being two posted writes? I own
JH7110 VisionFive 2 boards myself, and if I'm not mistaken, there are two identical
PCIe controllers in the JH7110. The first one connects the onboard USB controller
of the VF2, which also uses MSI interrupts. How come this exact problem does not
affect the USB controller? The commit message from Minda strongly suggests it does,
and that it affects the R8169 NIC as well. So why would you suggest the problem is
confined to NVMe?

Bo

>>
>> Sorry, we drew a hasty conclusion there.
>> It is not that any two inbound posted requests can be reordered by the JH7110 SoC;
>> the only case we have found is NVMe completions with MSI interrupts.
>> To be more precise, it involves the pending status in the nvme_completion struct
>> and the nvme_irq handler in nvme/host/pci.c.
>>
>> We have posted the original workaround patch before:
>> https://lore.kernel.org/lkml/CAJM55Z9HtBSyCq7rDEDFdw644pOWCKJfPqhmi3SD1x6p3g2SLQ@mail.gmail.com/
>> We keep it in our GitHub branch and it has worked fine for a long time.
>> We are looking forward to better advice from someone familiar with the NVMe driver.
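
(For context, the "pending status" Kevin refers to is the phase bit in the
completion queue entry. A rough sketch of the check, simplified from
drivers/nvme/host/pci.c and not the literal kernel code:)

static inline bool nvme_cqe_pending(struct nvme_queue *nvmeq)
{
	struct nvme_completion *cqe = &nvmeq->cqes[nvmeq->cq_head];

	/* The controller flips the low bit of ->status on each pass through
	 * the queue; "pending" means it matches the phase the driver
	 * expects to see next. */
	return (le16_to_cpu(READ_ONCE(cqe->status)) & 1) == nvmeq->cq_phase;
}

If the MSI write overtakes the CQE write, the phase bit has not flipped yet
when the interrupt handler polls the queue, so nothing looks pending and
nvme_irq() returns IRQ_NONE even though a completion is in flight.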
> 
> So this platform treats strictly ordered writes the same as if relaxed
> ordering were enabled? I am not sure we could reasonably work around
> such behavior. An arbitrary delay is likely too long for most cases, and
> too short for the worst case.
> 
> I suppose we could quirk a non-posted transaction in the interrupt
> handler to force a flush of pending memory updates, but that will
> noticeably harm your NVMe performance. Maybe if you constrain such
> behavior to the spurious IRQ_NONE condition, it might be okay? I don't
> know.
> 
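
To make that concrete, I would imagine something like the sketch below. The
NVME_QUIRK_FLUSH_POSTED_WRITES flag is made up for illustration; the point
is that a PCIe read completion cannot pass the device's earlier posted
writes, so the readl() forces the CQE into host memory before we give up:

static irqreturn_t nvme_irq(int irq, void *data)
{
	struct nvme_queue *nvmeq = data;

	if (nvme_poll_cq(nvmeq, NULL))
		return IRQ_HANDLED;

	/* Hypothetical quirk: pay for the non-posted read only on the
	 * otherwise-spurious path. The MMIO read cannot complete until
	 * the device's earlier posted writes (the CQE) have reached
	 * memory, so re-polling afterwards sees any in-flight CQE. */
	if (nvmeq->dev->ctrl.quirks & NVME_QUIRK_FLUSH_POSTED_WRITES) {
		readl(nvmeq->dev->bar + NVME_REG_CSTS);
		if (nvme_poll_cq(nvmeq, NULL))
			return IRQ_HANDLED;
	}

	return IRQ_NONE;
}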

I have also copied Keith's latest reply below; I share the same doubt.

> Hm, that may not be good enough: if NVMe completions can be reordered
> with their MSIs, then I assume data may be reordered with its completion.
> Your application will inevitably see stale or corrupted data, so it
> sounds like you need some kind of barrier per completion. Ouch!
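
If that is the case, even a re-check in the IRQ handler is not enough: every
completion would need its own ordering point before the data it describes can
be trusted. A sketch of what that would mean, loosely modeled on the CQ
processing loop in nvme/host/pci.c (the function name and the per-CQE
readl() are hypothetical):

static bool nvme_poll_cq_flushed(struct nvme_queue *nvmeq,
				 struct io_comp_batch *iob)
{
	bool found = false;

	while (nvme_cqe_pending(nvmeq)) {
		found = true;
		/* Non-posted read per CQE: do not trust the buffers this
		 * completion points at until the device's earlier posted
		 * writes have been flushed to memory. That is one MMIO
		 * read per I/O, which would hurt badly. */
		readl(nvmeq->dev->bar + NVME_REG_CSTS);
		dma_rmb();
		nvme_handle_cqe(nvmeq, iob, nvmeq->cq_head);
		nvme_update_cq_head(nvmeq);
	}
	return found;
}

That per-completion cost is why I share Keith's doubt.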
