Message-ID: <193ab67e768.1047ccb051074383.2860231262134590879@collabora.com>
Date: Mon, 09 Dec 2024 12:32:13 +0000
From: Robert Beckett <bob.beckett@...labora.com>
To: "Keith Busch" <kbusch@...nel.org>
Cc: "Pawel Anikiel" <panikiel@...gle.com>, "axboe" <axboe@...nel.dk>,
	"hch" <hch@....de>, "kernel" <kernel@...labora.com>,
	"linux-kernel" <linux-kernel@...r.kernel.org>,
	"linux-nvme" <linux-nvme@...ts.infradead.org>,
	"sagi" <sagi@...mberg.me>
Subject: Re: [PATCH] nvme-pci: 512 byte aligned dma pool segment quirk

 ---- On Fri, 22 Nov 2024 19:36:51 +0000  Keith Busch  wrote --- 
 > On Thu, Nov 14, 2024 at 04:28:48PM +0000, Robert Beckett wrote:
 > >  ---- On Thu, 14 Nov 2024 14:13:52 +0000  Paweł Anikiel  wrote --- 
 > >  > On Thu, Nov 14, 2024 at 2:24 PM Robert Beckett
 > >  > <bob.beckett@...labora.com> wrote:
 > >  > > This is interesting.
 > >  > > I had the same idea previously. I initially just changed the hard-coded 256 / 8 to use 31 instead, which should have ensured the last entry of each segment never gets used.
 > >  > > When I tested that, it no longer failed, which was a good sign. So then I modified it to only do that on the last 256 byte segment of a page, but then it started failing again.
 > >  > 
 > >  > Could you elaborate on the "only do that on the last 256 byte segment of
 > >  > a page" part? How did you check which chunk of the page would be
 > >  > allocated before choosing the dma pool?
 > >  > 
 > >  > > I never saw any bus error during my testing, just wrong data
 > >  > > read, which then fails image verification. I was expecting iommu
 > >  > > error logs if it was trying to access a chain in to nowhere if it
 > >  > > always interpreted last entry in page as a link. I never saw any
 > >  > > iommu errors.
 > >  > 
 > >  > Maybe I misspoke, the "bus error" part was just my speculation, I
 > >  > didn't look at the IOMMU logs or anything like that.
 > >  > 
 > >  > > I'd be glad to if you could share your testing method.
 > >  > 
 > >  > I dumped all the nvme transfers before the crash happened (using
 > >  > tracefs), and I saw a read of size 264 = 8 + 256, which led me to the
 > >  > chaining theory. To test this claim, I wrote a simple pci device
 > >  > driver which creates one IO queue and submits a read command where the
 > >  > PRP list is set up in a way that tests if the controller treats it as
 > >  > a chained list or not. I ran it, and it indeed treated the last PRP
 > >  > entry as a chained pointer.
 > > hmm, I guess a simple debugfs trigger file could be used to construct
 > > specially formulated requests. Would work as a debug tool.
 > >
 > > Though at this point, the simple dmapool alignment param usage fixes
 > > both of these scenarios, so it will be kind of academic to continue
 > > putting effort in to understand this. I am trying to get answers out
 > > of the vendor to confirm any of these theories, which I hope will be
 > > more conclusive than our combined inference from testing.
 > 
 > Any updates on this? I'm satisfied with the quirk patch, so we can move
 > this forward if you're okay with the current understanding.
 > 
Apologies for the late reply; I think this got missed during a holiday. Thanks for prompting on the previous thread.

I have no further updates on this. I have received no further info from the vendor.
I think we can go ahead and use the alignment patch as is. The only outstanding question was whether it is an
implicit last-entry-per-page chain or a simple alignment requirement. Either way, using the dmapool
alignment fixes both of these potential causes, so we should just take it as is.
If we ever get any better info and can do a more specific patch in future, we can rework it then.
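
To put the two theories in concrete numbers (my reading of the discussion above): the small PRP pool hands out
256 byte segments, i.e. 256 / 8 = 32 PRP entries each. If the controller implicitly treats the last PRP entry of
a page as a chain pointer, a 256 byte segment allocated in the last 256 bytes of a page would have its final data
entry misread as a pointer. If it is instead a plain 512 byte alignment requirement on the list, 256 byte
alignment is simply not enough. Bumping the pool alignment to 512 bytes means a 256 byte segment can never occupy
the last 256 bytes of a 4k page and is always 512 byte aligned, so it covers both cases.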

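For reference, a rough sketch of what the quirk boils down to in nvme_setup_prp_pools() in
drivers/nvme/host/pci.c (the quirk flag name below is illustrative; see the actual patch for the exact form):

static int nvme_setup_prp_pools(struct nvme_dev *dev)
{
	/* default alignment of the small PRP list pool */
	size_t small_align = 256;

	/* full page PRP list pool, unchanged */
	dev->prp_page_pool = dma_pool_create("prp list page", dev->dev,
					     NVME_CTRL_PAGE_SIZE,
					     NVME_CTRL_PAGE_SIZE, 0);
	if (!dev->prp_page_pool)
		return -ENOMEM;

	/*
	 * Affected controllers get 512 byte aligned 256 byte segments, so a
	 * segment never sits in the last 256 bytes of a page.
	 * (NVME_QUIRK_DMAPOOL_ALIGN_512 is a placeholder name here.)
	 */
	if (dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512)
		small_align = 512;

	/* optimisation for I/Os between 4k and 128k */
	dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev,
					      256, small_align, 0);
	if (!dev->prp_small_pool) {
		dma_pool_destroy(dev->prp_page_pool);
		return -ENOMEM;
	}

	return 0;
}

A device that needs it would then just have the quirk bit set in its nvme_id_table entry.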