Message-ID: <99c1e8ab-a064-c770-072f-23ef9e9abb82@arm.com>
Date: Thu, 15 Jun 2023 12:41:56 +0100
From: Robin Murphy <robin.murphy@....com>
To: John Garry <john.g.garry@...cle.com>,
Jakub Kicinski <kuba@...nel.org>,
Joerg Roedel <joro@...tes.org>
Cc: will@...nel.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v4] iommu: Optimise PCI SAC address trick

On 2023-06-15 11:11, John Garry wrote:
> On 15/06/2023 10:04, Robin Murphy wrote:
>>> Since we're at rc6 time and a cautious approach was wanted for
>>> merging this change, I doubt that it will be merged this cycle.
>>> That's quite unfortunate.
>>>
>>> Please note what I mentioned earlier about using
>>> dma_opt_mapping_size(). This API is used by some block storage
>>> drivers to avoid the same problem you're seeing, by clamping
>>> max_sectors_kb at this size - see the sysfs-block Doc for more info.
>>> Maybe it can be used similarly for network drivers.
>>
>> It's not the same problem - in this case the mappings are already
>> small enough to use the rcaches, and it seems more to do with the
>> total number of unusable cached IOVAs being enough to keep the 32-bit
>> space almost-but-not-quite full most of the time, defeating the
>> max32_alloc_size optimisation whenever the caches run out of the right
>> size entries.
>
> Sure, not the same problem.
>
> However, when we switched storage drivers to use
> dma_opt_mapping_size(), performance was similar to iommu.forcedac=1 -
> that's what I found, anyway.
>
> This tells me that even though IOVA allocator performance is poor
> when the 32b space fills, it was those large IOVAs which don't fit in
> the rcache that were the major contributor to hogging the CPU in the
> allocator.

The root cause is that every time the last usable 32-bit IOVA is
allocated, the *next* PCI caller to hit the rbtree for a SAC allocation
is burdened with walking the whole 32-bit subtree to determine that it's
full again and re-set max32_alloc_size. That's the overhead that
forcedac avoids.
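
To illustrate, here's a deliberately simplified userspace model of that
pattern - not the real drivers/iommu/iova.c code: the linear scan
stands in for the rbtree walk, and names like alloc_pfns/SPACE_PFNS are
made up - but it shows how every free-then-fail cycle forces a full
walk of the space:

#include <stdbool.h>
#include <stdio.h>

#define SPACE_PFNS 1024UL	/* toy "32-bit" space, in pages */

static bool used[SPACE_PFNS];
static unsigned long max32_alloc_size = SPACE_PFNS + 1;	/* no known limit */
static unsigned long walk_steps;	/* cost counter: pfns inspected */

/* Linear stand-in for the rbtree walk; returns base pfn or -1 */
static long alloc_pfns(unsigned long size)
{
	unsigned long run = 0;

	/* Fast path: a previous failed walk proved this size can't fit */
	if (size >= max32_alloc_size)
		return -1;

	for (unsigned long pfn = 0; pfn < SPACE_PFNS; pfn++) {
		walk_steps++;
		run = used[pfn] ? 0 : run + 1;
		if (run == size) {
			for (unsigned long i = pfn + 1 - size; i <= pfn; i++)
				used[i] = true;
			return (long)(pfn + 1 - size);
		}
	}
	/* Full walk failed: remember the size so later callers bail early */
	max32_alloc_size = size;
	return -1;
}

static void free_pfns(long base, unsigned long size)
{
	for (unsigned long i = 0; i < size; i++)
		used[base + i] = false;
	/* Any free below "4G" invalidates the hint, as in the real allocator */
	max32_alloc_size = SPACE_PFNS + 1;
}

int main(void)
{
	/* Fill the space completely */
	while (alloc_pfns(1) >= 0)
		;
	walk_steps = 0;

	/* Each free re-arms the hint; each subsequent failure re-walks it all */
	for (int i = 0; i < 100; i++) {
		free_pfns(0, 1);
		alloc_pfns(1);	/* takes the last free page back, cheaply */
		alloc_pfns(1);	/* fails only after walking all of SPACE_PFNS */
	}
	printf("pfns inspected over 100 cycles: %lu\n", walk_steps);
	return 0;
}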

In the storage case with larger buffers, dma_opt_mapping_size() also
means you spend less time in the rbtree, but only because you're
inherently hitting it less often in the first place, since most
allocations can now hopefully be fulfilled by the caches. That's
obviously moot when the mappings are already small enough to be cached
and the only reason for hitting the rbtree is overflow/underflow in the
depot because the working set is sufficiently large and the allocation
pattern sufficiently "bursty".
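
And for reference, the storage-driver usage is along these lines - a
sketch of the pattern only, loosely based on what the SCSI HBA drivers
do; the exact limit field and hook point vary per driver:

#include <linux/blkdev.h>
#include <linux/dma-mapping.h>
#include <scsi/scsi_host.h>

/*
 * Sketch only: at host setup time, clamp the largest transfer the
 * driver will build so that its DMA mappings stay within the size
 * dma_opt_mapping_size() says can be served efficiently, i.e. from
 * the IOVA rcaches rather than the rbtree.
 */
static void clamp_max_transfer(struct Scsi_Host *shost, struct device *dev)
{
	shost->max_sectors = min_t(size_t, shost->max_sectors,
				   dma_opt_mapping_size(dev) >> SECTOR_SHIFT);
}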

Thanks,
Robin.