Message-ID: <ffc15010-3283-7761-c534-7b226f46d79a@huawei.com>
Date: Wed, 8 Jun 2022 18:39:10 +0100
From: John Garry <john.garry@...wei.com>
To: Bart Van Assche <bvanassche@....org>,
<damien.lemoal@...nsource.wdc.com>, <joro@...tes.org>,
<will@...nel.org>, <jejb@...ux.ibm.com>,
<martin.petersen@...cle.com>, <hch@....de>,
<m.szyprowski@...sung.com>, <robin.murphy@....com>
CC: <linux-scsi@...r.kernel.org>, <linux-doc@...r.kernel.org>,
<liyihang6@...ilicon.com>, <linux-kernel@...r.kernel.org>,
<linux-ide@...r.kernel.org>, <iommu@...ts.linux-foundation.org>
Subject: Re: [PATCH v3 2/4] dma-iommu: Add iommu_dma_opt_mapping_size()
On 08/06/2022 18:26, Bart Van Assche wrote:
> On 6/6/22 02:30, John Garry via iommu wrote:
>> +unsigned long iova_rcache_range(void)
>> +{
>> + return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
>> +}
>
> My understanding is that iova cache entries may be smaller than
> IOVA_RANGE_CACHE_MAX_SIZE and hence, even if code that uses the DMA
> mapping API respects this limit, a cache miss can still happen.
Sure, a cache miss may still happen - however once we have stressed the
system for a while the rcaches fill up and misses don't happen often, or
at least not often enough to be noticeable compared to not having cached
IOVAs at all.
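
For illustration, here is a quick standalone sketch of the limit that
iova_rcache_range() above computes, assuming 4K pages and
IOVA_RANGE_CACHE_MAX_SIZE == 6 as in drivers/iommu/iova.c (both values
are hard-coded here purely for demonstration):

#include <stdio.h>

/* Assumed values, for illustration only; see drivers/iommu/iova.c */
#define PAGE_SIZE			4096UL
#define IOVA_RANGE_CACHE_MAX_SIZE	6	/* log of max cached IOVA size, in pages */

int main(void)
{
	/* Largest mapping the rcaches can serve: 4K << 5 = 128K */
	unsigned long rcache_range = PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);

	printf("rcache range limit: %lu bytes\n", rcache_range);
	printf("a 64K mapping %s be served from the rcaches\n",
	       65536UL <= rcache_range ? "can" : "cannot");
	printf("a 256K mapping %s be served from the rcaches\n",
	       262144UL <= rcache_range ? "can" : "cannot");

	return 0;
}

So with those assumed values iommu_dma_opt_mapping_size() would report
128K, and mappings at or below that size can be satisfied from the
rcaches once they are warm.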
Thanks,
john