Message-ID: <563080fc-e5b3-4ff3-9c27-74a167246544@oss.qualcomm.com>
Date: Wed, 4 Feb 2026 19:34:42 +0530
From: Pradeep Pragallapati <pradeep.pragallapati@....qualcomm.com>
To: Christoph Hellwig <hch@....de>, Keith Busch <kbusch@...nel.org>
Cc: Robin Murphy <robin.murphy@....com>, axboe@...nel.dk, sagi@...mberg.me,
linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
nitin.rawat@....qualcomm.com, Leon Romanovsky <leon@...nel.org>,
Marek Szyprowski <m.szyprowski@...sung.com>, iommu@...ts.linux.dev
Subject: Re: [PATCH V1] nvme-pci: Fix NULL pointer dereference in
nvme_pci_prp_iter_next
On 2/3/2026 7:35 PM, Pradeep Pragallapati wrote:
>
>
> On 2/3/2026 10:57 AM, Christoph Hellwig wrote:
>> On Mon, Feb 02, 2026 at 11:59:04AM -0700, Keith Busch wrote:
>>> In the case where this iteration caused dma_need_unmap() to toggle to
>>> true, this is the iteration that allocates the dma_vecs, and it
>>> initializes the first entry to this iter. But the next lines proceed to
>>> save this iter in the next index, so it's doubly accounted for and
>>> will get unmapped twice in the completion.
>>
>> Yeah.
>>
>>> Also, if the allocation fails, we should set iter->status to
>>> BLK_STS_RESOURCE so the callers know why the iteration can't continue.
>>> Otherwise, the caller will think the request is badly formed if you
>>> return false from here without setting iter->status.
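
For readers who don't have the full patch in front of them, here is a
minimal sketch of the fixed helper as I read the description above. Only
nvme_pci_prp_save_mapping(), iter->status, and the iod dma_vecs /
nr_dma_vecs fields come from this thread; the dma_vec layout and the
kmalloc_array() allocation are placeholders, not the actual driver code:

static bool nvme_pci_prp_save_mapping(struct blk_dma_iter *iter,
				      struct request *req)
{
	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);

	/* First save: allocate the array the completion path will walk. */
	if (!iod->nr_dma_vecs) {
		iod->dma_vecs = kmalloc_array(blk_rq_nr_phys_segments(req),
					      sizeof(*iod->dma_vecs),
					      GFP_ATOMIC);
		if (!iod->dma_vecs) {
			/* Tell callers this is ENOMEM, not a bad request. */
			iter->status = BLK_STS_RESOURCE;
			return false;
		}
	}

	/*
	 * Record this iter exactly once; saving it both in the allocation
	 * path and again afterwards was the double-unmap bug.
	 */
	iod->dma_vecs[iod->nr_dma_vecs].addr = iter->addr;
	iod->dma_vecs[iod->nr_dma_vecs].len = iter->len;
	iod->nr_dma_vecs++;
	return true;
}
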
>>>
>>> Here's my quick take. Boot tested with swiotlb enabled, but haven't
>>> tried to test the changing dma_need_unmap() scenario.
>>
>> Looks much better. Cosmetic nits below.
>>
>> Pradeep, can you test this with your setup?
> Sure, testing has started, and I will share the findings soon.
> Also, I did not include the initialization of dma_vecs during testing.
I ran the tests for over 20 hours and did not observe the issue on my
setup, so the fix appears to be working.
>
>>
>>> +	if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev))
>>> +		return nvme_pci_prp_save_mapping(iter, req);
>>
>>> +	if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(nvmeq->dev->dev))
>>> +		if (!nvme_pci_prp_save_mapping(iter, req))
>>> +			return iter->status;
>>
>> I'd move the dma_use_iova / dma_need_unmap checks into
>> nvme_pci_prp_save_mapping to simplify this a bit more.
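
That would reduce both call sites to a single line each. A rough sketch
of the folded-in checks (the nvme_queue lookup through
req->mq_hctx->driver_data is my assumption, mirroring the nvmeq->dev->dev
use in the quoted hunk; this is not the actual patch):

static bool nvme_pci_prp_save_mapping(struct blk_dma_iter *iter,
				      struct request *req)
{
	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
	struct nvme_queue *nvmeq = req->mq_hctx->driver_data;

	/*
	 * Nothing to save when the IOVA fast path is in use or the
	 * mapping never needs an explicit unmap.
	 */
	if (dma_use_iova(&iod->dma_state) || !dma_need_unmap(nvmeq->dev->dev))
		return true;

	/* ... allocate dma_vecs and record iter->addr/len as above ... */
	return true;
}

and a caller reduces to:

	if (!nvme_pci_prp_save_mapping(iter, req))
		return iter->status;
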
>>
>>> /*
>>> * PRP1 always points to the start of the DMA transfers.
>>> @@ -1218,6 +1231,8 @@ static blk_status_t nvme_prep_rq(struct request *req)
>>>  	iod->nr_descriptors = 0;
>>>  	iod->total_len = 0;
>>>  	iod->meta_total_len = 0;
>>> +	iod->nr_dma_vecs = 0;
>>> +	iod->dma_vecs = NULL;
>>
>> I don't think we need the dma_vecs initialization here, as everything
>> is keyed off nr_dma_vecs.
>
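
For what it's worth, the reason the NULL assignment is redundant is that
the completion side never dereferences dma_vecs unless nr_dma_vecs is
non-zero. A rough sketch of such an unmap loop (the function name
nvme_free_prps() and the kfree() teardown are illustrative guesses, not
the actual driver code):

static void nvme_free_prps(struct request *req)
{
	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
	struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
	unsigned int i;

	/*
	 * dma_vecs is only touched when entries were recorded, so zeroing
	 * nr_dma_vecs alone makes a stale dma_vecs pointer harmless.
	 */
	for (i = 0; i < iod->nr_dma_vecs; i++)
		dma_unmap_page(nvmeq->dev->dev, iod->dma_vecs[i].addr,
			       iod->dma_vecs[i].len, rq_dma_dir(req));
	if (iod->nr_dma_vecs)
		kfree(iod->dma_vecs);
}
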