Message-ID: <aYIzZNdjWOlJ_Oao@kbusch-mbp>
Date: Tue, 3 Feb 2026 10:41:56 -0700
From: Keith Busch <kbusch@...nel.org>
To: Robin Murphy <robin.murphy@....com>
Cc: Leon Romanovsky <leon@...nel.org>, Christoph Hellwig <hch@....de>,
Pradeep P V K <pradeep.pragallapati@....qualcomm.com>,
axboe@...nel.dk, sagi@...mberg.me, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org, nitin.rawat@....qualcomm.com,
Marek Szyprowski <m.szyprowski@...sung.com>, iommu@...ts.linux.dev
Subject: Re: [PATCH V1] nvme-pci: Fix NULL pointer dereference in
nvme_pci_prp_iter_next
On Tue, Feb 03, 2026 at 01:50:12PM +0000, Robin Murphy wrote:
> > Can dev->dma_skip_sync be modified in parallel with this check?
> > If so, dma_need_unmap() may return different results depending on the
> > time at which it is invoked.
>
> It can if another thread is making mappings in parallel, however as things
> currently stand that would only lead to the current thread thinking it must
> save the unmap state for the mappings it's already made even if it
> technically didn't need to.
>
> In principle it could also change back the other way if another thread reset
> the device's DMA mask, but doing that with active mappings would
> fundamentally break things in regard to the dma_skip_sync mechanism anyway.
We can handle a change in dma_need_unmap() from false -> true. The worst
case is that we save off a descriptor when we didn't really need to, but
that's just a small amount of memory used for the lifetime of the IO.
We don't correctly handle a transition from true -> false, though. We'd
currently leak memory if that happened. It sounds like that transition
is broken for other reasons too, so I won't bother trying to handle it.