Message-ID: <19327249129.1131b0ddd320896.1106904208694557670@collabora.com>
Date: Wed, 13 Nov 2024 20:08:48 +0000
From: Robert Beckett <bob.beckett@...labora.com>
To: "Keith Busch" <kbusch@...nel.org>
Cc: "Christoph Hellwig" <hch@....de>, "Jens Axboe" <axboe@...nel.dk>,
"Sagi Grimberg" <sagi@...mberg.me>, "kernel" <kernel@...labora.com>,
"linux-nvme" <linux-nvme@...ts.infradead.org>,
"linux-kernel" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] nvme-pci: 512 byte aligned dma pool segment quirk
---- On Wed, 13 Nov 2024 18:05:53 +0000 Keith Busch wrote ---
> On Wed, Nov 13, 2024 at 05:31:51AM +0100, Christoph Hellwig wrote:
> > On Tue, Nov 12, 2024 at 07:50:00PM +0000, Bob Beckett wrote:
> > > From: Robert Beckett <bob.beckett@...labora.com>
> > >
> > > We initially put in a quick fix of limiting the queue depth to 1,
> > > as experimentation showed that it fixed data corruption on 64GB
> > > Steam Decks.
> > >
> > > After further experimentation, it appears that the corruption
> > > is fixed by aligning the small DMA pool segments to 512 bytes.
> > > Testing via desync image verification shows that it now passes
> > > thousands of verification loops, where previously
> > > it never managed above 7.
> >
> > As suggested before, instead of changing the pool size please just
> > always use the large pool for this device.
>
> Well, he's doing what I suggested. I thought this was better because it
> puts the decision making in the initialization path instead of the IO
> path.
>
Yep, this avoids any extra conditional in the fast path.
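
For reference, a minimal sketch of what the init-path approach could look
like. The quirk name NVME_QUIRK_DMAPOOL_ALIGN_512 and the exact placement
in nvme_setup_prp_pools() are illustrative assumptions here, not
necessarily the literal patch:

static int nvme_setup_prp_pools(struct nvme_dev *dev)
{
	/* Keep the historical 256-byte alignment for the small pool,
	 * bumping it to 512 bytes only for devices carrying the quirk
	 * (quirk flag name is assumed for illustration). */
	size_t small_align = 256;

	if (dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512)
		small_align = 512;

	dev->prp_page_pool = dma_pool_create("prp list page", dev->dev,
					     NVME_CTRL_PAGE_SIZE,
					     NVME_CTRL_PAGE_SIZE, 0);
	if (!dev->prp_page_pool)
		return -ENOMEM;

	/* Optimisation for I/Os between 4k and 128k */
	dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev,
					      256, small_align, 0);
	if (!dev->prp_small_pool) {
		dma_pool_destroy(dev->prp_page_pool);
		return -ENOMEM;
	}
	return 0;
}

The alignment is fixed once at pool creation, so the submission path can
keep using the small pool unconditionally and gains no per-IO branch.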