Message-ID: <30c4dc3d-dc49-c533-3af0-3d804aaf1407@amd.com>
Date: Tue, 18 Apr 2023 18:35:54 +0530
From: Vasant Hegde <vasant.hegde@....com>
To: Robin Murphy <robin.murphy@....com>, joro@...tes.org
Cc: will@...nel.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jakub Kicinski <kuba@...nel.org>,
John Garry <john.g.garry@...cle.com>
Subject: Re: [PATCH v4] iommu: Optimise PCI SAC address trick
Robin,
On 4/18/2023 4:27 PM, Robin Murphy wrote:
> On 2023-04-18 10:23, Vasant Hegde wrote:
>> Robin,
>>
>>
>> On 4/13/2023 7:10 PM, Robin Murphy wrote:
>>> Per the reasoning in commit 4bf7fda4dce2 ("iommu/dma: Add config for
>>> PCI SAC address trick") and its subsequent revert, this mechanism no
>>> longer serves its original purpose, but now only works around broken
>>> hardware/drivers in a way that is unfortunately too impactful to remove.
>>>
>>> This does not, however, prevent us from solving the performance impact
>>> which that workaround has on large-scale systems that don't need it.
>>> Once the 32-bit IOVA space fills up and a workload starts allocating and
>>> freeing on both sides of the boundary, the opportunistic SAC allocation
>>> can then end up spending significant time hunting down scattered
>>> fragments of free 32-bit space, or just reestablishing max32_alloc_size.
>>> This can easily be exacerbated by a change in allocation pattern, such
>>> as by changing the network MTU, which can increase pressure on the
>>> 32-bit space by leaving a large quantity of cached IOVAs which are now
>>> the wrong size to be recycled, but also won't be freed since the
>>> non-opportunistic allocations can still be satisfied from the whole
>>> 64-bit space without triggering the reclaim path.
>>>
>>> However, in the context of a workaround where smaller DMA addresses
>>> aren't simply a preference but a necessity, if we get to that point at
>>> all then in fact it's already the endgame. The nature of the allocator
>>> is currently such that the first IOVA we give to a device after the
>>> 32-bit space runs out will be the highest possible address for that
>>> device, ever. If that works, then great, we know we can optimise for
>>> speed by always allocating from the full range. And if it doesn't, then
>>> the worst has already happened and any brokenness is now showing, so
>>> there's little point in continuing to try to hide it.
>>>
>>> To that end, implement a flag to refine the SAC business into a
>>> per-device policy that can automatically get itself out of the way if
>>> and when it stops being useful.
>>>
>>> CC: Linus Torvalds <torvalds@...ux-foundation.org>
>>> CC: Jakub Kicinski <kuba@...nel.org>
>>> Reviewed-by: John Garry <john.g.garry@...cle.com>
>>> Signed-off-by: Robin Murphy <robin.murphy@....com>
>>
>> We hit a kernel soft lockup while running stress-ng on a system with 384 CPUs
>> and an NVMe disk. This patch helped to resolve one soft lockup in the allocation path.
>>
>>> ---
>>>
>>> v4: Rebase to use the new bitfield in dev_iommu, expand commit message.
>>>
>>> drivers/iommu/dma-iommu.c | 26 ++++++++++++++++++++------
>>> drivers/iommu/dma-iommu.h | 8 ++++++++
>>> drivers/iommu/iommu.c | 3 +++
>>> include/linux/iommu.h | 2 ++
>>> 4 files changed, 33 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>> index 99b2646cb5c7..9193ad5bc72f 100644
>>> --- a/drivers/iommu/dma-iommu.c
>>> +++ b/drivers/iommu/dma-iommu.c
>>> @@ -630,7 +630,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>>> {
>>> struct iommu_dma_cookie *cookie = domain->iova_cookie;
>>> struct iova_domain *iovad = &cookie->iovad;
>>> - unsigned long shift, iova_len, iova = 0;
>>> + unsigned long shift, iova_len, iova;
>>> if (cookie->type == IOMMU_DMA_MSI_COOKIE) {
>>> cookie->msi_iova += size;
>>> @@ -645,15 +645,29 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>>> if (domain->geometry.force_aperture)
>>> dma_limit = min(dma_limit, (u64)domain->geometry.aperture_end);
>>> - /* Try to get PCI devices a SAC address */
>>> - if (dma_limit > DMA_BIT_MASK(32) && !iommu_dma_forcedac && dev_is_pci(dev))
>>> + /*
>>> + * Try to use all the 32-bit PCI addresses first. The original SAC vs.
>>> + * DAC reasoning loses relevance with PCIe, but enough hardware and
>>> + * firmware bugs are still lurking out there that it's safest not to
>>> + * venture into the 64-bit space until necessary.
>>> + *
>>> + * If your device goes wrong after seeing the notice then likely either
>>> + * its driver is not setting DMA masks accurately, the hardware has
>>> + * some inherent bug in handling >32-bit addresses, or not all the
>>> + * expected address bits are wired up between the device and the IOMMU.
>>> + */
>>> + if (dma_limit > DMA_BIT_MASK(32) && dev->iommu->pci_32bit_workaround) {
>>> iova = alloc_iova_fast(iovad, iova_len,
>>> DMA_BIT_MASK(32) >> shift, false);
>>> + if (iova)
>>> + goto done;
>>> - if (!iova)
>>> - iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift,
>>> - true);
>>> + dev->iommu->pci_32bit_workaround = false;
>>> + dev_notice(dev, "Using %d-bit DMA addresses\n", bits_per(dma_limit));
>>
>> Maybe dev_notice_once()? Otherwise we may see this message multiple times for
>> the same device, like below:
>
> Oh, that's a bit irritating. Of course multiple threads can reach this
> in parallel, silly me :(
>
> I would really prefer the notice to be once per device rather than once
> globally, since there's clearly no guarantee that the first device to
Agreed, that makes sense.
> hit this case is going to be the one which is liable to go wrong. Does
> the (untested) diff below do any better?
Thanks for the patch. I have tested it and it's working fine.
-Vasant
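
For readers following the thread: the follow-up diff Robin refers to is not quoted
above, but the pattern under discussion (emit the notice once per device even when
several threads take the 64-bit fallback path at the same time) can be sketched in
plain C11 with an atomic compare-and-exchange. This is only an illustration of the
idea, not the kernel code or Robin's actual diff; the struct, function and device
names below are made up for the example.

    #include <stdatomic.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the per-device IOMMU data. */
    struct fake_dev {
            const char *name;
            atomic_int pci_32bit_workaround;        /* 1 = workaround active */
    };

    /* Called when a 32-bit allocation fails and we fall back to the full range. */
    static void fall_back_to_full_range(struct fake_dev *dev, int bits)
    {
            int expected = 1;

            /*
             * Only the thread that actually flips the flag from 1 to 0 prints
             * the notice, so concurrent callers cannot repeat the message.
             */
            if (atomic_compare_exchange_strong(&dev->pci_32bit_workaround,
                                               &expected, 0))
                    printf("%s: Using %d-bit DMA addresses\n", dev->name, bits);
    }

    int main(void)
    {
            struct fake_dev dev = { "0000:41:00.0", 1 };

            /* Two back-to-back (or racing) callers: only the first one reports. */
            fall_back_to_full_range(&dev, 48);
            fall_back_to_full_range(&dev, 48);
            return 0;
    }

The same effect in the kernel would need the flag paired with an atomic
test-and-clear style update rather than a plain bitfield write followed by
dev_notice(), which is the race Robin notes above.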