Message-ID: <CAL0q8a62bcjkJdKjTUbEOXMRfaCr1eB4YWeNugdRO1GjLLQe0g@mail.gmail.com>
Date: Thu, 3 Jul 2025 16:13:33 +0100
From: Ben Copeland <ben.copeland@...aro.org>
To: Keith Busch <kbusch@...nel.org>
Cc: Christoph Hellwig <hch@....de>, linux-kernel@...r.kernel.org, lkft-triage@...ts.linaro.org,
regressions@...ts.linux.dev, linux-nvme@...ts.infradead.org,
Dan Carpenter <dan.carpenter@...aro.org>, axboe@...nel.dk, sagi@...mberg.me,
iommu@...ts.linux.dev, Leon Romanovsky <leonro@...dia.com>
Subject: Re: next-20250627: IOMMU DMA warning during NVMe I/O completion after 06cae0e3f61c
On Thu, 3 Jul 2025 at 15:29, Keith Busch <kbusch@...nel.org> wrote:
>
> On Thu, Jul 03, 2025 at 11:30:42AM +0200, Christoph Hellwig wrote:
> > I think the idea to reconstruct the dma addresses from PRPs should
> > be considered a failure by now. It works fine for SGLs, but for
> > PRPs we're better off just stashing them away. Bob, can you try
>
> s/Bob/Ben
>
> > something like the patch below? To be fully safe it needs a mempool,
> > and it could use some cleanups, but it does pass testing on my setup
> here, so I'd love to see if it fixes your issue.
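[Editorial note: the stash-instead-of-reconstruct idea described above can be sketched roughly as follows. This is an illustrative userspace sketch only, with hypothetical names throughout; it is not the actual kernel patch, and the real NVMe driver code differs. The point it demonstrates: record DMA addresses in a side array at map time, then replay them at unmap time, so the completion path never has to re-derive addresses from the PRP list.]

```c
#include <stdlib.h>

typedef unsigned long long dma_addr_t;

/* Hypothetical stash: one slot per mapped segment. */
struct prp_dma_stash {
	unsigned int nr;      /* number of stashed DMA segments */
	dma_addr_t  *addrs;   /* stashed addresses, in mapping order */
};

/*
 * Map path: record each DMA address as it is mapped. The vector is
 * walked once here, and never again for address recovery.
 * (A real implementation would allocate from a mempool, as noted in
 * the thread, rather than calling realloc per segment.)
 */
static int stash_dma_addr(struct prp_dma_stash *s, dma_addr_t addr)
{
	dma_addr_t *tmp = realloc(s->addrs, (s->nr + 1) * sizeof(*tmp));

	if (!tmp)
		return -1;
	tmp[s->nr++] = addr;
	s->addrs = tmp;
	return 0;
}

/*
 * Unmap path: replay the stashed addresses. No PRP reconstruction,
 * so mismatches between the PRP layout and the original mapping
 * cannot trigger IOMMU warnings at completion time.
 */
static void unmap_stashed(struct prp_dma_stash *s,
			  void (*unmap_one)(dma_addr_t))
{
	for (unsigned int i = 0; i < s->nr; i++)
		unmap_one(s->addrs[i]);
	free(s->addrs);
	s->addrs = NULL;
	s->nr = 0;
}
```

[End editorial note. The sketch trades a small per-request allocation for not having to reverse-engineer addresses from PRP entries, which is the failure mode being fixed.]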
I have tested it on my system and can no longer see the regression.
Happy to retest when the patch goes through.
Tested-by: Ben Copeland <ben.copeland@...aro.org>
Thank you!
Ben
>
> Thanks for confirming.
>
> While this is starting to look a bit messy, I believe it's still an
> overall win: you've cut down the vector walking in the setup path from 3
> to 1, which reduces a non-trivial amount of overhead for even moderately
> sized IO.