Date: Wed, 13 Mar 2024 21:39:34 -0600
From: Keith Busch <kbusch@...nel.org>
To: Kevin Xie <kevin.xie@...rfivetech.com>
Cc: Lorenzo Pieralisi <lpieralisi@...nel.org>,
	Palmer Dabbelt <palmer@...belt.com>,
	Minda Chen <minda.chen@...rfivetech.com>,
	Conor Dooley <conor@...nel.org>, "kw@...ux.com" <kw@...ux.com>,
	"robh+dt@...nel.org" <robh+dt@...nel.org>,
	"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"daire.mcnamara@...rochip.com" <daire.mcnamara@...rochip.com>,
	"emil.renner.berthing@...onical.com" <emil.renner.berthing@...onical.com>,
	"krzysztof.kozlowski+dt@...aro.org" <krzysztof.kozlowski+dt@...aro.org>,
	"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
	"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	Paul Walmsley <paul.walmsley@...ive.com>,
	"aou@...s.berkeley.edu" <aou@...s.berkeley.edu>,
	"p.zabel@...gutronix.de" <p.zabel@...gutronix.de>,
	Mason Huo <mason.huo@...rfivetech.com>,
	Leyfoon Tan <leyfoon.tan@...rfivetech.com>
Subject: Re: [PATCH v15,RESEND 22/23] PCI: starfive: Offload the NVMe timeout
 workaround to host drivers.

On Wed, Mar 13, 2024 at 08:51:29PM -0600, Keith Busch wrote:
> I suppose we could quirk a non-posted transaction in the interrupt
> handler to force flush pending memory updates, but that will noticeably
> harm your nvme performance. Maybe if you constrain such behavior to the
> spurious IRQ_NONE condition, then it might be okay? I don't know.
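
For reference, the quirk I had in mind would look roughly like the
below. This is an untested sketch, not actual driver code:
nvme_poll_cq() and the nvme_queue fields here are stand-ins for the
real driver internals.

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/nvme.h>		/* NVME_REG_CSTS */

static irqreturn_t nvme_irq_flush_quirk(int irq, void *data)
{
	struct nvme_queue *nvmeq = data;	/* stand-in type */

	if (nvme_poll_cq(nvmeq))		/* fast path untouched */
		return IRQ_HANDLED;

	/*
	 * Spurious so far: the MSI may have raced ahead of the posted
	 * CQE write. A non-posted MMIO read cannot complete until the
	 * device's prior posted writes have landed (PCIe ordering
	 * rules), so re-poll the CQ after the read returns.
	 */
	readl(nvmeq->dev->bar + NVME_REG_CSTS);

	if (nvme_poll_cq(nvmeq))
		return IRQ_HANDLED;

	return IRQ_NONE;
}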

Hm, that may not be good enough: if NVMe completions can be reordered
with their MSIs, then I have to assume the DMA'd data can be reordered
with its completion, too. Your application will inevitably see stale or
corrupted data, so it sounds like you need some kind of barrier per
completion. Ouch!
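
To be concrete, "a barrier per completion" would mean something like
this hypothetical helper (again a sketch, not driver code) invoked
before consuming each CQE's data:

/*
 * One non-posted MMIO read per completion: the read cannot return
 * until the device's earlier posted writes (the DMA'd data and the
 * CQE itself) are visible, so the data is safe to consume afterward.
 */
static inline void nvme_flush_posted_writes(struct nvme_queue *nvmeq)
{
	readl(nvmeq->dev->bar + NVME_REG_CSTS);
}

That is a full bus round trip added to every single completion, which
is why I wouldn't expect the performance to be acceptable.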
