Message-ID: <568df53c-41a7-94d7-6662-f8f7c72e5178@oracle.com>
Date:   Thu, 15 Jun 2023 11:11:08 +0100
From:   John Garry <john.g.garry@...cle.com>
To:     Robin Murphy <robin.murphy@....com>,
        Jakub Kicinski <kuba@...nel.org>,
        Joerg Roedel <joro@...tes.org>
Cc:     will@...nel.org, iommu@...ts.linux.dev,
        linux-kernel@...r.kernel.org,
        Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v4] iommu: Optimise PCI SAC address trick

On 15/06/2023 10:04, Robin Murphy wrote:
>> Since we're at rc6 time and a cautious approach was wanted to merge 
>> this change, I doubt that this will be merged for this cycle. That's 
>> quite unfortunate.
>>
>> Please note what I mentioned earlier about using 
>> dma_opt_mapping_size(). This API is used by some block storage drivers 
>> to avoid your same problem, by clamping max_sectors_kb at this size - 
>> see sysfs-block Doc for info there. Maybe it can be used similarly for 
>> network drivers.
> 
> It's not the same problem - in this case the mappings are already small 
> enough to use the rcaches, and it seems more to do with the total number 
> of unusable cached IOVAs being enough to keep the 32-bit space 
> almost-but-not-quite full most of the time, defeating the 
> max32_alloc_size optimisation whenever the caches run out of the right 
> size entries.

Sure, not the same problem.

However, when we switched storage drivers to use dma_opt_mapping_size(), 
performance was similar to iommu.forcedac=1 - that's what I found, 
anyway.

This tells me that even though IOVA allocator performance is poor 
when the 32-bit space fills, it was those large IOVAs which don't fit in 
the rcache that were the major contributor to hogging the CPU in the 
allocator.
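To spell out what I mean by "don't fit in the rcache": if I'm reading 
drivers/iommu/iova.c right, only allocations below IOVA_RANGE_CACHE_MAX_SIZE 
orders (32 pages, so 128K with 4K pages) are eligible for the per-CPU 
caches, and anything larger always takes the rbtree path. Paraphrased as a 
hypothetical helper (not a verbatim quote of the kernel code):

	/*
	 * Hypothetical helper paraphrasing the rcache eligibility check:
	 * sizes of IOVA_RANGE_CACHE_MAX_SIZE orders or more are never
	 * cached and always go to the rbtree allocator.
	 */
	static bool iova_size_is_cacheable(unsigned long size)
	{
		return order_base_2(size) < IOVA_RANGE_CACHE_MAX_SIZE;
	}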

> 
> The manual workaround for now would be to boot with "iommu.forcedac=1" 
> and hope that no other devices break because of it.

Thanks,
John
