Message-ID: <b73250f3-2dd6-36da-4d69-3149959f2e67@amazon.com>
Date: Mon, 10 Oct 2022 10:55:42 -0700
From: "Bhatnagar, Rishabh" <risbhat@...zon.com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>
CC: "hch@....de" <hch@....de>, "sagi@...mberg.me" <sagi@...mberg.me>,
"axboe@...com" <axboe@...com>,
"kbusch@...nel.org" <kbusch@...nel.org>,
"Bacco, Mike" <mbacco@...zon.com>,
"Herrenschmidt, Benjamin" <benh@...zon.com>,
"Park, SeongJae" <sjpark@...zon.com>
Subject: Re: [PATCH v2] nvme-pci: Set min align mask before calculating
 max_hw_sectors
On 10/4/22 9:27 AM, Bhatnagar, Rishabh wrote:
> On 9/29/22, 11:23 AM, "Rishabh Bhatnagar" <risbhat@...zon.com> wrote:
>
> In cases where swiotlb is enabled, dma_max_mapping_size takes the
> device's min align mask into account. Right now the mask is set after
> max_hw_sectors is calculated, which can result in a request size that
> overflows the swiotlb buffer. Set the min align mask for the nvme
> driver before calling dma_max_mapping_size when calculating
> max_hw_sectors.
>
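To make the ordering dependency concrete, here is a minimal standalone
sketch. This is illustrative only: the simplified max_mapping_size()
below mimics how swiotlb's reported limit shrinks once a min align
mask is set, and the IO_TLB_* values are assumed defaults rather than
quotes from the kernel source.

/*
 * Illustrative sketch only -- not kernel code. It models why
 * dma_set_min_align_mask() must run before dma_max_mapping_size() is
 * consulted: once a min align mask is set, the maximum size swiotlb
 * can bounce in one mapping shrinks, since an aligned mapping may
 * waste up to one aligned chunk of its contiguous slot range.
 */
#include <stdio.h>
#include <stddef.h>

#define IO_TLB_SIZE	2048	/* assumed swiotlb slot size (2 KiB) */
#define IO_TLB_SEGSIZE	128	/* assumed max contiguous slots */

static unsigned int min_align_mask;	/* stand-in for the device field */

static void set_min_align_mask(unsigned int mask)
{
	min_align_mask = mask;
}

static size_t max_mapping_size(void)
{
	size_t max = (size_t)IO_TLB_SIZE * IO_TLB_SEGSIZE;	/* 256 KiB */

	/* Alignment can burn up to one extra aligned chunk. */
	if (min_align_mask)
		max -= min_align_mask + 1;
	return max;
}

int main(void)
{
	/* Old order: limit read before the mask is set -> 262144. */
	size_t stale = max_mapping_size();

	set_min_align_mask(4096 - 1);	/* NVME_CTRL_PAGE_SIZE - 1 */

	/* New order: limit read after the mask is set -> 258048. */
	size_t real = max_mapping_size();

	/*
	 * A max_hw_sectors derived from the stale limit admits requests
	 * 4 KiB larger than swiotlb can actually bounce.
	 */
	printf("stale limit %zu, real limit %zu\n", stale, real);
	return 0;
}
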
> Fixes: 7637de311bd2 ("nvme-pci: limit max_hw_sectors based on the DMA max mapping size")
> Cc: stable@...r.kernel.org
> Signed-off-by: Rishabh Bhatnagar <risbhat@...zon.com>
> ---
> Changes in V2:
> - Add Cc: <stable@...r.kernel.org> tag
> - Improve the commit text
> - Add patch version
>
> Changes in V1:
> - Add fixes tag
>
> drivers/nvme/host/pci.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 98864b853eef..30e71e41a0a2 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2834,6 +2834,8 @@ static void nvme_reset_work(struct work_struct *work)
>  		nvme_start_admin_queue(&dev->ctrl);
>  	}
> 
> +	dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
> +
>  	/*
>  	 * Limit the max command size to prevent iod->sg allocations going
>  	 * over a single page.
> @@ -2846,7 +2848,6 @@
>  	 * Don't limit the IOMMU merged segment size.
>  	 */
>  	dma_set_max_seg_size(dev->dev, 0xffffffff);
> -	dma_set_min_align_mask(dev->dev, NVME_CTRL_PAGE_SIZE - 1);
> 
>  	mutex_unlock(&dev->shutdown_lock);
> 
> --
> 2.37.1
>
>
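For reviewers skimming the diff: the consumer this reorders against is
the max_hw_sectors computation in the first hunk's trailing context,
roughly the following (paraphrased from nvme_reset_work of that era,
not a verbatim quote):

	dev->ctrl.max_hw_sectors = min_t(u32, NVME_MAX_KB_SZ << 1,
			dma_max_mapping_size(dev->dev) >> 9);

With the mask already in place, dma_max_mapping_size() reflects the
alignment overhead up front, so the byte-to-sector conversion above
can no longer produce a limit larger than swiotlb can map.
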
Hi. Any review of this patch would be much appreciated!
Thanks,
Rishabh