Message-ID: <CAM5zL5pvxrpWEdskp=8xNuUM+1npJkVLCUTZh3hCYTeHrCR5ZA@mail.gmail.com>
Date: Mon, 9 Dec 2024 16:33:01 +0100
From: Paweł Anikiel <panikiel@...gle.com>
To: Robert Beckett <bob.beckett@...labora.com>
Cc: Keith Busch <kbusch@...nel.org>, axboe <axboe@...nel.dk>, hch <hch@....de>,
kernel <kernel@...labora.com>, linux-kernel <linux-kernel@...r.kernel.org>,
linux-nvme <linux-nvme@...ts.infradead.org>, sagi <sagi@...mberg.me>
Subject: Re: [PATCH] nvme-pci: 512 byte aligned dma pool segment quirk
On Mon, Dec 9, 2024 at 1:33 PM Robert Beckett <bob.beckett@...labora.com> wrote:
> [...]
> I have no further updates on this. I have received no further info from the vendor.
> I think we can go ahead and use the alignment patch as is. The only outstanding question was whether it is an
> implicit last-entry-per-page chain issue vs. a simple alignment requirement. Either way, using the dmapool
> alignment fixes both of these potential causes, so we should just take it as is.
> If we ever get any better info and can do a more specific patch in future, we can rework it then.
I think the 512 byte alignment fix is good. I tried coming up with
something more specific, but everything I could think of was either
too complicated or artificial.
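For reference, here is roughly what I understand the fix to look like
(the quirk name and the nvme_setup_prp_pools() context are my guesses,
not copied from the patch): keep the page-sized PRP pool as is and bump
the small pool's alignment from 256 to 512 bytes when the quirk is set.

/* Sketch against drivers/nvme/host/pci.c; NVME_QUIRK_DMAPOOL_ALIGN_512
 * is an assumed flag name, not quoted from the actual patch.
 */
static int nvme_setup_prp_pools(struct nvme_dev *dev)
{
	size_t small_align = 256;

	dev->prp_page_pool = dma_pool_create("prp list page", dev->dev,
					     NVME_CTRL_PAGE_SIZE,
					     NVME_CTRL_PAGE_SIZE, 0);
	if (!dev->prp_page_pool)
		return -ENOMEM;

	/* Affected devices need PRP-list segments that never start in
	 * the middle of a 512-byte region.
	 */
	if (dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512)
		small_align = 512;

	/* Optimisation for transfers needing only a short PRP list */
	dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev,
					      256, small_align, 0);
	if (!dev->prp_small_pool) {
		dma_pool_destroy(dev->prp_page_pool);
		return -ENOMEM;
	}
	return 0;
}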
Regarding the question of whether this is an alignment requirement or
the last PRP entry issue, I strongly believe it's the latter. I have a
piece of code that clearly demonstrates the hardware bug when run on a
device with the nvme bridge. I would really appreciate it if this could
be verified and my explanation included in the patch.
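To make the geometry concrete, here is a small stand-alone sketch of my
own (not the demonstration code I mentioned above), just showing how a
256-byte-aligned small-pool segment can land halfway into a 512-byte
region, which is the situation the quirk avoids:

/*
 * Illustration only: PRP entries are 8 bytes and the small dmapool
 * segment is 256 bytes, so with only 256-byte alignment a segment can
 * start exactly halfway into a 512-byte region.
 */
#include <stdint.h>
#include <stdio.h>

#define PRP_ENTRY_SIZE	8u
#define SMALL_SEG_SIZE	256u

int main(void)
{
	/* Hypothetical DMA address of a small-pool segment that is
	 * 256-byte but not 512-byte aligned.
	 */
	uint64_t seg = 0x12340100;

	printf("offset within 512-byte region: %u\n",
	       (unsigned)(seg & 511));
	printf("entries per small segment: %u\n",
	       SMALL_SEG_SIZE / PRP_ENTRY_SIZE);

	/*
	 * If the device makes any 512-byte-granular assumption about
	 * where a PRP list starts or where its last entry sits, such a
	 * segment is parsed differently from one allocated out of a
	 * 512-byte-aligned pool, which is consistent with the fix
	 * covering both explanations.
	 */
	return 0;
}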