Message-ID: <6b14b2a0-cf1c-fbfb-5028-d7a6974ef39f@oracle.com>
Date: Tue, 18 Apr 2023 11:19:50 +0100
From: John Garry <john.g.garry@...cle.com>
To: Vasant Hegde <vasant.hegde@....com>,
Robin Murphy <robin.murphy@....com>, joro@...tes.org
Cc: will@...nel.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH v4] iommu: Optimise PCI SAC address trick
On 18/04/2023 10:23, Vasant Hegde wrote:
> [ 172.017120] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.022955] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.028720] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.031815] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.031816] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.038727] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.038726] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.038917] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.038968] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.038970] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.039007] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.039091] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.039102] nvme 0000:41:00.0: Using 64-bit DMA addresses
>
> Otherwise patch worked fine for us.
Hi Vasant,

JFYI, since you are using NVMe, you could alternatively try something
like what I did for some SCSI storage controller drivers to limit the
request_queue max_sectors soft limit:
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c2730b116dc6..0a99c9a629c9 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1814,6 +1814,8 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
 		max_segments = min_not_zero(max_segments, ctrl->max_segments);
 		blk_queue_max_hw_sectors(q, ctrl->max_hw_sectors);
+		q->limits.max_sectors = min(q->limits.max_hw_sectors,
+			(unsigned int)dma_opt_mapping_size(ctrl->dev));
 		blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX));
 	}
 	blk_queue_virt_boundary(q, NVME_CTRL_PAGE_SIZE - 1);
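For comparison, the change for the SCSI controller drivers I mentioned
boils down to roughly the following in each driver's host setup path
(just a sketch; "shost" and "pdev" stand in for the driver's Scsi_Host
and PCI device here, and the exact spot differs per driver):

	/*
	 * Cap the request_queue max_sectors soft limit at the IOVA
	 * caching size, converted from bytes to 512-byte sectors.
	 */
	shost->max_sectors = min_t(size_t, shost->max_sectors,
				   dma_opt_mapping_size(&pdev->dev) >> SECTOR_SHIFT);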
dma_opt_mapping_size() will return the max IOVA caching size for the
iommu dma ops, so this would mean that IOVAs stay within the cached size
and we avoid alloc'ing and free'ing them at such a high rate (which I
assume was your problem).
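To expand on that a little: for the iommu dma ops the value comes from
the IOVA rcache limit, roughly along these lines (paraphrased from
drivers/iommu/dma-iommu.c and drivers/iommu/iova.c, so the exact details
may vary by kernel version):

	static size_t iommu_dma_opt_mapping_size(void)
	{
		/* largest IOVA size that the rcaches will hold on to */
		return iova_rcache_range();
	}

	unsigned long iova_rcache_range(void)
	{
		/* IOVA_RANGE_CACHE_MAX_SIZE is 6, i.e. 128K with 4K pages */
		return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
	}

so requests at or below that size can recycle cached IOVAs instead of
taking the full alloc/free path every time.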
Thanks,
John