Message-ID: <3e2324dc-2ab1-6a35-46ab-72d970cc466c@acm.org>
Date: Tue, 7 Jun 2022 15:43:42 -0700
From: Bart Van Assche <bvanassche@....org>
To: John Garry <john.garry@...wei.com>,
damien.lemoal@...nsource.wdc.com, joro@...tes.org, will@...nel.org,
jejb@...ux.ibm.com, martin.petersen@...cle.com, hch@....de,
m.szyprowski@...sung.com, robin.murphy@....com
Cc: linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-ide@...r.kernel.org, iommu@...ts.linux-foundation.org,
linux-scsi@...r.kernel.org, liyihang6@...ilicon.com,
chenxiang66@...ilicon.com, thunder.leizhen@...wei.com
Subject: Re: [PATCH v3 0/4] DMA mapping changes for SCSI core
On 6/6/22 02:30, John Garry wrote:
> As reported in [0], DMA mappings whose size exceeds the IOMMU IOVA caching
> limit may see a big performance hit.
>
> This series introduces a new DMA mapping API, dma_opt_mapping_size(), so
> that drivers may know this limit when performance is a factor in the
> mapping.
>
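For reference, a driver (or midlayer) consuming the new helper could
clamp its per-command size roughly like this. This is a hypothetical
sketch, not the actual SCSI core hunk from the series;
foo_cap_max_sectors() is a made-up name, and the SIZE_MAX check assumes
the helper reports "no limit" the same way dma_max_mapping_size() does:

  #include <linux/blkdev.h>
  #include <linux/dma-mapping.h>

  static unsigned int foo_cap_max_sectors(struct device *dev,
                                          unsigned int max_sectors)
  {
          size_t opt = dma_opt_mapping_size(dev);        /* bytes */

          /* SIZE_MAX means "no useful limit"; keep the existing cap. */
          if (opt == SIZE_MAX)
                  return max_sectors;

          /* Convert bytes to 512-byte sectors and take the smaller cap. */
          return min_t(unsigned int, max_sectors, opt >> SECTOR_SHIFT);
  }
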
> Robin didn't like using dma_max_mapping_size() for this [1].
>
> The SCSI core code is modified to use this limit.
>
> I also added a patch for libata-scsi as it does not currently honour the
> shost max_sectors limit.
>
> Note: Christoph has previously kindly offered to take this series via the
> dma-mapping tree, so I think that we just need an ack from the
> IOMMU guys now.
>
> [0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/
> [1] https://lore.kernel.org/linux-iommu/f5b78c9c-312e-70ab-ecbb-f14623a4b6e3@arm.com/

Regarding [0], that patch reverts commit 4e89dce72521 ("iommu/iova:
Retry from last rb tree node if iova search fails"). Reading the
description of that commit, it seems to me that the IOVA allocator
itself can be improved. Shouldn't we improve the allocator instead, so
that this patch series is no longer needed? There are algorithms that
handle fragmentation much better than the current IOVA allocator, e.g.
the buddy memory allocation algorithm
(https://en.wikipedia.org/wiki/Buddy_memory_allocation).
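As a toy illustration of the idea (user-space C, not based on the
actual iova code): requests are rounded up to a power-of-two "order",
and a block's buddy is found by flipping a single offset bit, so freed
neighbours can be merged back into larger blocks instead of leaving the
space fragmented:

  #include <stdio.h>

  /* Smallest order whose block size (in units) covers the request. */
  static unsigned int request_order(unsigned long units)
  {
          unsigned int order = 0;

          while ((1UL << order) < units)
                  order++;
          return order;
  }

  /* Offset of the buddy of the block at 'offset' with the given order. */
  static unsigned long buddy_of(unsigned long offset, unsigned int order)
  {
          return offset ^ (1UL << order);
  }

  int main(void)
  {
          /* A 13-unit request is served from a 16-unit (order-4) block... */
          unsigned int order = request_order(13);

          /* ...and freeing the block at offset 16 finds its buddy at
           * offset 0, so the two can merge into one order-5 block. */
          printf("order=%u, buddy(16,%u)=%lu\n",
                 order, order, buddy_of(16, order));
          return 0;
  }
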
Thanks,
Bart.